22:02:09 <sdague> #startmeeting qa
22:02:10 <openstack> Meeting started Thu May 8 22:02:09 2014 UTC and is due to finish in 60 minutes. The chair is sdague. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:02:11 <oomichi> hi
22:02:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:02:14 <openstack> The meeting name has been set to 'qa'
22:02:34 <sdague> there's dkranz
22:02:48 <dkranz> Who is here today?
22:02:54 <adam_g> o/
22:02:55 <masayukig> o/
22:02:57 <dpaterson> David Paterson
22:03:00 <andreaf_> o/
22:03:02 <sdague> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_May_8_2014_.282200_UTC.29
22:03:22 <sdague> dkranz: I just kicked it, but feel free to run it now
22:03:33 <mlavalle> dkranz: I am supporting a deployment to one of the data centers at work. Can I give the Neutron update at the start of the meeting, so I can concentrate on work after that?
22:03:40 <dkranz> sdague: There was something wrong with my connection
22:03:51 <dkranz> mlavalle: Yes, go ahead
22:03:58 <sdague> #topic Neutron Testing
22:04:03 <sdague> go mlavalle
22:04:22 <mlavalle> Of the 28 api tests that we are tracking, we only have 5 more to merge
22:04:36 <mlavalle> the rest have merged, so very good progress on that front
22:05:01 <sdague> nice
22:05:10 <mlavalle> I would like the core team to help us review the following 3, to see if we can merge them soon:
22:05:36 <mlavalle> https://review.openstack.org/#/c/83627/
22:05:52 <mlavalle> https://review.openstack.org/#/c/67312
22:06:11 <mlavalle> https://review.openstack.org/#/c/63723
22:06:35 <sdague> sounds great
22:06:57 <mlavalle> we also have 20 minutes of tempest on the Neutron agenda next Thursday at 9 am in the design summit
22:06:59 <sdague> #action core review eyes needed on 3 neutron reviews
22:07:09 <mlavalle> this is the etherpad that I put together
22:07:15 <mlavalle> https://etherpad.openstack.org/p/TempestAndNeutronJuno
22:07:43 <mlavalle> In principle, during Juno we will be pursuing four lines of action
22:07:53 <mlavalle> 1) increase the number of scenario tests
22:08:07 <mlavalle> 2) fill any gaps that might have been left in api tests
22:08:18 <mlavalle> 3) support the nova parity subproject
22:08:26 <mlavalle> 4) support other subprojects
22:08:38 <mlavalle> please feel free to review the etherpad and add to it
22:08:52 <sdague> sounds good
22:08:56 <sdague> mlavalle: anything else?
22:09:00 <mlavalle> that's all I have
22:09:02 <mlavalle> thanks
22:09:19 <sdague> great
22:09:21 <mlavalle> I'll be watching the rest of the meeting, but I might have to drop off
22:09:29 <sdague> ok, back to the agenda as it's ordered
22:09:34 <sdague> #topic Reminder about summit etherpads
22:09:49 <sdague> The list of etherpads is here - https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
22:09:59 <sdague> not all have been created yet
22:10:36 <sdague> chmouel's UX one is missing
22:10:43 <sdague> boris-42's rally one is missing
22:10:44 <masayukig> maybe https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#QA ?
22:10:46 <andreaf_> #link https://etherpad.openstack.org/p/Juno-QA-design-summit-topics
22:10:53 <boris-42> sdague ?
22:11:27 <sdague> and maru_afk's functional test one is missing
22:11:42 <dkranz_> sdague: Please run the meeting. I keep flaking in and out.
22:11:48 <sdague> boris-42: the etherpad for the rally / tempest summit session hasn't been stubbed yet
22:11:55 <sdague> masayukig: yes
22:11:58 <boris-42> sdague when is the deadline?
22:12:06 <sdague> sooner the better
22:12:20 <boris-42> sdague ok will do (just finished slides)
22:12:23 <sdague> so people can look and provide feedback pre summit
22:12:41 <sdague> boris-42: there shouldn't be slides for a design summit session
22:12:53 <boris-42> sdague just a small intro
22:13:11 <boris-42> sdague i think it's simpler to start from an intro no?
22:13:16 <sdague> if you want to link some slides in the etherpad for people to read in advance, that's cool, but there shouldn't be slides in the session
22:13:22 <sdague> that's not what the session is there for
22:13:33 <boris-42> sdague okay I'll just leave the slides
22:13:40 <boris-42> sdague cause I think an intro is required
22:14:24 <sdague> it's probably worth putting those out on the list in advance as well, just to further highlight them
22:14:43 <dkranz_> boris-42: Yes, please send the intro to the list
22:15:03 <sdague> I'm going to spend lots of time tomorrow filling out my etherpad
22:15:14 <boris-42> hehe me too
22:15:43 <boris-42> sdague i know it's not super related to the rally & tempest integration
22:15:55 <sdague> ok. Reminder to everyone else to handle etherpads for the summit
22:15:55 <boris-42> sdague but I would like to speak about osprofiler & tempest integration
22:16:06 <boris-42> sdague is it ok?
22:16:29 <sdague> boris-42: I'd say try to keep this narrow to begin with, and if there is more time get there
22:16:33 <sdague> but 40 minutes goes fast
22:16:42 <boris-42> sdague okay it will be the last topic
22:16:46 <boris-42> latest*
22:16:57 <sdague> and I think we've got some issues on time accounting that we need to make sure we sort out
22:17:16 <sdague> ok, next topic
22:17:18 <dkranz_> sdague: We should move on
22:17:21 <sdague> #topic Proposal to move success response checking to clients (dkranz)
22:17:33 <sdague> dkranz_: you have the floor
22:17:42 <dkranz_> sdague: I just wanted to see if anyone objected to this proposal that was discussed on the ml
22:17:56 <dkranz_> or if there were any other comments
22:18:23 <sdague> I think it was generally agreed. I think it's worth writing up as a qa-spec, and we can approve it through that mechanism
22:18:23 <dkranz_> sdague: I don't really see any downside
22:18:32 <dkranz_> sdague: ok, I will do that.
22:18:34 <sdague> seems big enough to be a spec/blueprint
22:18:39 <sdague> vs. just a bug
22:18:53 <dkranz_> #action dkranz to create spec for moving response checking to clients
22:19:09 <sdague> I think the only details are around multiple allowed success codes
22:19:12 <sdague> so make sure to call that out
22:19:17 <sdague> just so we get that right
22:19:26 <dkranz_> sdague: Right. I wonder how many there actually are.
22:19:47 <dkranz_> sdague: Not counting those that say any 2xx is ok
22:19:53 <dkranz_> That's it
22:20:13 <sdague> cool
22:20:15 <sdague> next topic
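
As a rough illustration of the proposal discussed above (moving success response checking from the tests into the rest clients, with explicit support for multiple allowed success codes), a client-side check could look something like the sketch below. The function and exception names are illustrative only, not the merged Tempest API.

    # Sketch only: one way a Tempest-style rest client could own the success
    # check, including the multiple-allowed-codes case called out above.
    # Names here are illustrative, not the real Tempest implementation.

    class InvalidHttpSuccessCode(Exception):
        """Raised when a response status is not in the allowed success set."""


    def expected_success(expected_codes, actual_code):
        # Accept a single code (200) or a list of allowed codes ([200, 204]).
        if isinstance(expected_codes, int):
            expected_codes = [expected_codes]
        if actual_code not in expected_codes:
            raise InvalidHttpSuccessCode(
                "Unexpected status %s, expected one of %s"
                % (actual_code, expected_codes))


    # A client method would call expected_success([200, 202], resp.status)
    # right after each request, so individual tests no longer repeat the check.
    if __name__ == "__main__":
        expected_success([200, 202], 202)   # passes silently
        expected_success(204, 202)          # raises InvalidHttpSuccessCode
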
22:20:29 <sdague> #topic Can we turn on voting of ironic jobs (recent creds change broke it)? (adam_g)
22:20:42 <dkranz_> adam_g: That's you
22:21:01 <adam_g> context: some refactoring merged recently that broke some of the non-voting jobs
22:21:10 <adam_g> ironic, and i believe solum
22:21:28 <adam_g> i don't think we can really make these voting until the projects have graduated
22:21:55 <andreaf_> adam_g: there's no solum job on tempest I think
22:22:31 <adam_g> andreaf_, oh, maybe not a job in the gate but some of the solum tests (at least as reported by devkulkarni earlier)
22:22:58 <sdague> solum is running a ton of out-of-tree stuff though, so I consider that a different issue
22:23:06 <adam_g> we'll be making these ironic jobs voting in the ironic gate soon, not sure how to prevent this from happening other than urging people to pay attention to non-voting jobs
22:23:35 <adam_g> and /me being more proactive about catching failures during review :)
22:23:43 <sdague> adam_g: I think with the # of jobs on a tempest run now, seeing the non-voting votes is going to get harder over time
22:24:14 <dkranz_> adam_g: I suggest sending a ml message saying that ironic is not gating only because of incubation but is solid.
22:24:17 <sdague> any idea if there is a gerrit query that would return those changes?
22:24:21 <andreaf_> adam_g, sdague: yes it's getting harder, and the gate is not very stable (voting and not voting) so it's even harder
22:24:39 <andreaf_> dkranz_: +1 sounds good
22:24:53 <adam_g> i just threw up http://no-carrier.net/~adam/openstack/ironic_gate_status.html to help me monitor failures
22:25:02 <adam_g> s/threw up/put up :)
22:25:05 <sdague> heh
22:25:09 <sdague> it's funnier the first way
22:25:20 <dkranz_> sdague: Because we have non-voting jobs that have been downgraded due to failures and others that are solid but waiting to get in for other reasons
22:25:29 <dkranz_> Reviewers just need to know which is which
22:26:24 <andreaf_> dkranz_, sdague, adam_g: sorting the jobs and splitting them into sections would help already
22:26:55 <andreaf_> dkranz_, sdague, adam_g: but even nicer would be to get some stability stat next to the failing job
22:27:14 <adam_g> it'd be cool if there were a way to categorize both using notes in jenkins comments, based on pass/failure ratio over the last N days/weeks
22:27:19 <sdague> andreaf_: sure, that's in my elastic recheck set of futures to give us that
22:27:23 <andreaf_> or some check that identified new test failures in an unstable job
22:27:48 <sdague> ok, this is turning more into a brainstorm though, so I think we should table it to beers somewhere at summit
22:28:03 <sdague> is there something actionable beyond sending a heads up to the list?
22:28:04 <adam_g> +1, tho i won't be there so someone will need to drink mine
22:28:09 <dkranz_> Really we just need a third tag which says "non-voting but look at a failure before approving"
22:28:16 <sdague> adam_g: bummer
22:28:35 <sdague> dkranz_: so realistically that feels to me like a "jenkins 2nd vote"
22:28:51 <sdague> which we'd need a bunch of infra buy-in and refactoring on
22:28:57 <sdague> but is kind of interesting
22:29:05 <andreaf_> sdague: is there a place where we can track such "additional topics to chat about at summit"?
22:29:16 <dkranz_> sdague: I don't know how much real work should be done here vs just living with it
22:29:31 <sdague> andreaf_: not atm, you have a suggestion?
22:29:51 <sdague> I just assume it will come up over coffee / food / beer all week, and my brain will end up full at the end of it
22:29:57 <andreaf_> sdague: perhaps another etherpad
22:30:00 <sdague> dkranz_: agreed, let's move on
22:30:01 <dkranz_> sdague: At past summits we have had "qa meetings"
22:30:16 <sdague> dkranz_: well, usually a lunch somewhere
22:30:34 <dkranz_> sdague: that too
22:30:45 <sdague> #topic Specs Review
22:31:01 <sdague> ok, time for specs that people want to talk about, and get eyes on
22:31:13 <andreaf_> ok
22:31:13 <sdague> dpaterson: I believe that includes you, right?
22:31:25 <andreaf_> dpaterson: go first
22:31:45 <andreaf_> or I'll start
22:31:48 <andreaf_> https://review.openstack.org/81294
22:32:03 <andreaf_> the multiauth bp, I think it's ready, all comments addressed
22:32:22 <andreaf_> and https://review.openstack.org/#/c/81307/ keystone v3 jobs, also all comments addressed
22:32:55 <sdague> andreaf_: this looks pretty good
22:32:56 <andreaf_> and I filed a new one today about the client manager refactor https://review.openstack.org/92804 for which I'd love some feedback
22:33:22 <sdague> I'm good on 81294
22:33:31 <sdague> I'll look at 81307 in the morning
22:33:57 <sdague> andreaf_: any specific items you want to bring up about them?
22:34:19 <sdague> or just getting people to look?
22:34:46 <andreaf_> the latter
22:34:49 <sdague> #action qa-specs that are probably ready for final approval https://review.openstack.org/81294, https://review.openstack.org/#/c/81307/
22:35:01 <dkranz_> andreaf_: I already gave my +2 to 81294 and mtreinish just -1'd for a syntax issue
22:35:21 <sdague> #action qa-spec on client manager refactor needs review https://review.openstack.org/92804
22:35:31 <dkranz_> dpaterson: You there?
22:35:34 <sdague> mtreinish should be working tomorrow
22:35:34 <andreaf_> dkranz_: yes I fixed the issue
22:35:34 <dpaterson> yup
22:35:50 <dpaterson> Sorry, stepped away for a sec
22:35:52 <dkranz_> dpaterson: You can discuss your spec
22:35:53 <sdague> so dkranz_ +2 if you think it still holds and we can nudge him tomorrow for landing
22:36:00 <dpaterson> Sure
22:36:02 <dpaterson> https://blueprints.launchpad.net/tempest/+spec/post-run-cleanup
22:36:02 <dpaterson> https://review.openstack.org/#/c/91777/
22:36:32 <sdague> dpaterson: so there is a clerical issue here where the patches need to be merged
22:36:36 <dkranz_> dpaterson: You should abandon the old patch and resubmit the new one with no dependency
22:36:39 <dpaterson> Basically I have been working with some QA folks on testing an HA configuration and running into issues with Tempest cleanup
22:37:15 <dpaterson> dkranz: will look into the problem
22:37:53 <sdague> dpaterson: one of the concerns I have about this approach is it papers over state corruption issues in the services by just having Tempest clean things up
22:38:10 <sdague> I'd much rather get the base services fixed to not be in an inconsistent state
22:38:22 <dpaterson> I agree
22:38:31 <sdague> also, we really *can't* access the db directly from tempest
22:38:39 <sdague> for both design reasons
22:38:43 <sdague> and practical reasons
22:38:46 <dpaterson> but before that happens I would like to have a tool to unblock my guys
22:39:12 <dkranz_> sdague: I think he is proposing a script, not having tempest do it automatically
22:39:29 <sdague> oh, ok, yeah I see that now
22:39:35 <dkranz_> sdague: There is already a script in the stress dir but it does not do what is needed.
22:39:38 <dpaterson> My proposal is just python, not dependent on tempest.
22:39:47 <sdague> dpaterson: ok sure
22:39:51 <dkranz_> dpaterson: My concern is with the database cleanup part
22:40:00 <dpaterson> Yes
22:40:16 <sdague> on the db cleanup part, the issue we had before when we had whitebox testing was that the schemas change a lot in a release
22:40:23 <sdague> so it's basically always breaking
22:40:27 <dkranz_> dpaterson: Any such database cleanup will be very fragile
22:40:28 <sdague> like *always*
22:40:40 <dpaterson> I am open to alternatives but currently they are getting into a state where API calls cannot remove objects.
22:40:56 <dpaterson> So some kind of surgery is going to be required
22:41:14 <dkranz_> dpaterson: I don't doubt that but I'm not sure tempest is the right place for that
22:41:17 <sdague> dpaterson: sure, so let's get the blueprint cleaned up so it's passing the docs job, just to look at it.
22:41:28 <dkranz_> dpaterson: Stuff in tempest has to be kept working
22:41:49 <dkranz_> dpaterson: And that is hard with code that only runs when things get seriously messed up
22:42:07 <sdague> It'd be worth trying it, if we also required that any cleanup function needed an upstream bug before it could come in, so that we'd actually work towards addressing root issues
22:42:24 <dkranz_> sdague: works for me
22:42:25 <dpaterson> It would only execute if cleanup is flagged to do so
22:42:31 <sdague> but we should also be very aware this is probably going to be very fragile
22:42:43 <dpaterson> agreed
22:43:07 <sdague> dpaterson: I also don't think it should be triggered by tempest. It would just be a helper tool that we keep around that people could run manually if they like
22:43:14 <dpaterson> it is
22:43:18 <dkranz_> +1
22:43:41 <sdague> dpaterson: ok, so if you can respin the spec, we'll take a look
22:43:47 <sdague> are you going to be in atlanta?
22:43:54 <dkranz_> dpaterson: It is useful because not everyone has to figure out the weird db calls
22:44:05 <dpaterson> Sorry, not this time.
22:44:06 <sdague> I expect the review queues are going to go really quiet next week regardless
22:44:18 <sdague> so it might not be till the week after that people get real eyes on this
22:44:34 <oomichi> dpaterson: interesting. In the HA testing, will tempest stop openstack to check the switching happens correctly?
22:45:01 <dkranz_> oomichi: Not sure what you mean
22:45:12 <dpaterson> I don't quite understand, oomichi
22:46:12 <oomichi> dpaterson: HA switches the active / standby controller-node as I understand.
22:46:50 <dpaterson> Tempest doesn't do any switches. The idea is tempest is run and we get a report
22:46:54 <dkranz_> oomichi: I don't think tempest would know anything about ha, right?
22:46:56 <dpaterson> Then clean the system up
22:47:14 <dpaterson> Take down a controller or introduce some other failure
22:47:17 <dpaterson> and rerun tempest
22:47:26 <oomichi> dpaterson: oh, I see. thanks
22:47:27 <dpaterson> Should get the same test report
22:47:41 <sdague> ok, great, dpaterson you need anything else on this?
22:47:50 <dpaterson> nope, thanks
22:48:00 <sdague> ok, great
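
For context on the cleanup discussion above: the tool being proposed is a standalone Python script that walks the service APIs and removes leftover resources, run manually rather than triggered by Tempest, with any direct database surgery treated as a clearly fragile last resort. A minimal sketch of the API-only part might look like the following; the endpoints, credentials, and the single resource type covered are placeholder assumptions for a devstack-style setup, not dpaterson's actual proposal.

    # Sketch only: an API-driven cleanup pass for one resource type (servers).
    # Endpoints, credentials, and coverage are placeholder assumptions; a real
    # tool would walk every service and resource type Tempest can leave behind.

    import json

    import requests


    def get_token(auth_url, username, password, tenant):
        # Keystone v2.0 password auth; returns a scoped token and tenant id.
        body = {"auth": {"passwordCredentials": {"username": username,
                                                 "password": password},
                         "tenantName": tenant}}
        resp = requests.post("%s/tokens" % auth_url, data=json.dumps(body),
                             headers={"Content-Type": "application/json"})
        resp.raise_for_status()
        access = resp.json()["access"]
        return access["token"]["id"], access["token"]["tenant"]["id"]


    def cleanup_servers(compute_url, token, dry_run=True):
        # List every server left in the tenant; delete them unless dry_run.
        headers = {"X-Auth-Token": token}
        servers = requests.get("%s/servers" % compute_url,
                               headers=headers).json().get("servers", [])
        for server in servers:
            print("leftover server: %s (%s)" % (server["name"], server["id"]))
            if not dry_run:
                requests.delete("%s/servers/%s" % (compute_url, server["id"]),
                                headers=headers)


    if __name__ == "__main__":
        # Placeholder devstack-style endpoints and demo credentials.
        token, tenant_id = get_token("http://127.0.0.1:5000/v2.0",
                                     "demo", "secret", "demo")
        cleanup_servers("http://127.0.0.1:8774/v2/%s" % tenant_id, token)
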
22:48:05 <sdague> #topic Blueprints
22:48:12 <dpaterson> tx
22:48:17 <sdague> any in-process blueprints people want to bring up?
22:49:06 <sdague> I'm assuming most people are prepping for summit
22:49:13 <andreaf_> sdague: Just mentioning that we made some good progress on the multi-auth bp with lots of reviews
22:49:27 <sdague> given that we only have 10 minutes left, let's jump to critical reviews
22:49:31 <dkranz_> sdague: Just that https://review.openstack.org/#/c/91899/ is waiting for final approval
22:49:32 <sdague> #topic Critical Reviews
22:49:37 <sdague> https://review.openstack.org/#/c/92573/ - connection verification as part of RemoteClient init
22:49:46 <sdague> that was put in the agenda
22:50:07 <sdague> dkranz_: +A
22:50:12 <dkranz_> sdague: I think yfried is trying to debug some ssh issues we have seen
22:50:15 <dkranz_> sdague: thanks
22:50:58 <sdague> I'll plug the test and worker summary patch - https://review.openstack.org/#/c/92362/
22:51:09 <sdague> as we lost the test summary with the new subunit trace
22:51:23 <sdague> that will also let us see worker balance in jobs
22:51:28 <sdague> which is kind of useful
22:51:52 <sdague> andreaf_: what's the next patch needed for review in the multi-auth stack?
22:52:07 <sdague> any other critical reviews people need eyes on?
22:52:14 <andreaf_> https://review.openstack.org/#/c/80246/
22:52:16 <sdague> #topic Open Discussion
22:52:25 <andreaf_> https://etherpad.openstack.org/p/juno-summit-open-topics
22:52:25 <sdague> ok, let's do open discussion
22:52:50 <sdague> andreaf_: I like that 80246 is negative LOC :)
22:52:52 <andreaf_> sdague: just an etherpad for people to put additional ideas we can discuss at lunch or over beer
22:53:08 <andreaf_> sdague: :D
22:53:10 <sdague> andreaf_: great, want to also send that to the mailing list?
22:53:17 <andreaf_> sdague: ok will do
22:53:45 <sdague> anything else from folks?
22:54:00 <sdague> when are folks getting to atlanta?
22:54:10 <andreaf_> 7pm on Sunday
22:54:15 <dkranz_> sdague: Sorry, I will miss the dinner. Don't arrive until 8pm
22:54:22 <oomichi> 15:30 Sunday
22:54:31 <sdague> I will also miss the dinner, as I've got the TC / board thing that night
22:54:45 <sdague> but my evening schedule is always nuts
22:54:59 <dkranz_> sdague: Maybe we should have a lunch table on Monday
22:55:27 <dkranz_> Anyway, should be a busy week :)
22:55:36 <sdague> yes, it should be.
22:55:58 <sdague> I'm getting in Sat because of board things on Sun, so if anyone happens to be around Sat let me know.
22:56:28 <sdague> ok, I think that's a wrap folks. See you in ATL
22:56:41 <sdague> and there won't be a meeting next week, because of the summit, so see folks back here in 2
22:56:45 <sdague> #endmeeting