22:00:54 #startmeeting qa
22:00:55 Meeting started Thu Jul 10 22:00:54 2014 UTC and is due to finish in 60 minutes. The chair is mtreinish. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:58 The meeting name has been set to 'qa'
22:01:08 hi, who do we have here today
22:01:26 mtreinish: here
22:01:28 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_July_10_2014_.282200_UTC.29
22:01:34 ^^^ today's agenda
22:01:43 dkranz: heh, I guess it'll be a quick meeting
22:02:25 hi
22:02:47 mtreinish: I'm sitting across from mlavalle but he has to go. He said he gave you an update about neutron.
22:02:53 well let's get started, hopefully more people will filter in
22:03:00 hi
22:03:01 dkranz: yeah he said that it's in the agenda
22:03:03 o/
22:03:15 #topic Spec review day outcomes (mtreinish)
22:03:32 so we had our specs review day yesterday (or 2 days ago depending on your tz)
22:03:47 I thought that I'd share the numbers
22:04:08 when we started we had 27 open reviews, with 15 waiting on reviewers, and 12 waiting on the submitter
22:04:17 when I ran the numbers again this morning we were down to:
22:04:23 19 open reviews
22:04:29 4 waiting on reviewer
22:04:37 and 15 waiting on submitter
22:05:01 and we merged about 10 patches since before the specs review day
22:05:26 so I think it was a fairly productive day as far as working through the backlog
22:05:39 great :)
22:05:53 hopefully this will also mean that we keep on top of specs reviews in the future too
22:06:19 so I just wanted to share the numbers, does anyone have anything else to add
22:06:24 otherwise let's move on
22:07:26 mtreinish: There were noticeably no comments on https://review.openstack.org/#/c/97589/
22:07:46 dkranz: we were actually discussing that one on irc
22:07:56 and he's made some revisions based on that
22:08:09 mtreinish: ok, cool
22:08:24 #topic Mid-cycle Meet-up (mtreinish)
22:08:48 so I just wanted to remind everyone that the midcycle meetup is next week
22:09:06 so obviously the people who are attending will have a reduced online presence
22:09:25 and as part of that I'm going to schedule the meeting for next week
22:09:45 I was curious on preferences for how we wanted to handle scheduling
22:10:07 should we just make the next meeting in 2 weeks at 2200 UTC? so we don't have to update the ical feed
22:10:17 mtreinish: yeh, that's probably easiest
22:10:17 or make it a 1700 UTC meeting?
22:10:26 sdague: yeah that's what I was thinking
22:10:28 I'd just say skip
22:10:36 mtreinish: I think it would be a good idea to have it for those of us who can't be there.
22:10:36 mtreinish: +1
22:11:05 mtreinish: How about having one by hangout?
22:11:31 mtreinish: whatever
22:11:31 dkranz: well, if you want to run the meeting you can
22:11:47 dkranz: I don't think a hangout would work, we've got 30 people in a room next week
22:12:15 trying to communicate with some people online is always very difficult in that situation
22:12:18 mtreinish: ok skip it then
22:12:34 dkranz: ok
22:12:39 mtreinish: There is no point in an irc meeting
22:12:59 #info the next meeting will be Jul 24th at 2200 UTC
22:13:04 hopefully I added correctly
22:13:10 ok let's move on
22:13:24 #topic Specs Review
22:13:35 didn't we sort of just do that :)
22:13:54 heh, yeah but it's still on the agenda :)
22:14:09 so does anyone have a spec review they'd like to bring up
22:14:52 I'll take that as a no, especially given the specs review day
22:14:54 so let's move on
22:15:10 #topic Blueprints
22:15:25 #link https://blueprints.launchpad.net/tempest/+spec/branchless-tempest-extensions
22:15:38 I wanted to bring this BP up because there isn't anyone assigned to it yet
22:15:49 it shouldn't be the most difficult thing to implement
22:16:05 and it'll give good exposure to how infra, devstack, and tempest interact in the gate
22:16:30 so if someone wants to step up to tackle it they can just assign themselves the BP in LP
22:16:43 I think it's an important feature to add to devstack gate
22:17:09 yeh, I might get back to it at end of cycle, but it's definitely not going to be soon
22:17:15 and would be great to have a volunteer there
22:17:58 sdague: well, we can beg some more at the midcycle
22:18:02 hopefully someone will step up
22:18:02 :)
22:18:17 ok, does anyone have a status update on any in-progress BPs
22:19:08 mtreinish: I'm re-thinking how to proceed with the multi auth version bp in light of sdague's proposal to use tempest clients for scenario tests
22:19:18 ^_^
22:19:23 :)
22:19:32 andreaf: that will massively simplify things for you?
22:19:35 andreaf: heh, yeah well that proposal is the next topic
22:19:47 yeah it'll mean we can basically move forward with that right?
22:19:49 sdague: yes it will
22:19:50 Hi, I'm waiting for the spec review to be approved before continuing to work on the bp: https://review.openstack.org/#/c/97589/
22:20:30 asselin: heh, you missed the specs review topic, but I'll put it on my list for tonight or tomorrow
22:20:49 i missed both somehow...
22:20:57 asselin: it's in my review list for tomorrow
22:21:08 #link https://review.openstack.org/#/c/97589/
22:21:25 ok if there aren't any more BPs to discuss, let's move on
22:21:56 #topic Changing scenario tests to Tempest Client (sdague)
22:22:02 #link http://lists.openstack.org/pipermail/openstack-dev/2014-July/039879.html
22:22:13 sdague: the floor is yours
22:22:21 yeh, mostly I wanted to get this conversation rolling
22:22:41 sdague: you got a few +1s from us
22:22:54 as when I nearly rage quit earlier this week trying to debug scenario test fails, I felt like we should just get rid of this debt
22:22:58 and make tempest simpler
22:23:05 which I think will help in general
22:23:12 Does anyone think this is a bad idea?
22:23:30 well, no one has expressed that yet
22:23:34 I tend to agree, but I did like the notion that we used the clients because the scenario tests were use case simulators
22:23:42 but I'm fine with sacrificing that
22:23:48 however people seem to show up late to threads not realizing it
22:23:49 sdague: I meant those of us in this meeting particularly
22:24:02 masayukig: ^^^ you're the one who polished the scenario tests to their current state, any thoughts here?
22:24:07 oh, here, good question. Who else might be dissenters
22:24:26 Yeah, actually, I agree with using Tempest Client in scenario tests :)
22:24:53 yay!
22:25:02 sdague: If we go towards what I suggest in the next topic, and what andreaf said, it might be possible to use either
22:25:11 sdague: heh, ok then I guess the next step would be to write up a spec for this
22:25:19 dkranz: I'd rather not do that
22:25:19 Does it mean we are planning to add non-read-only client tests to regain some library coverage?
22:25:21 dkranz: honestly, I'd rather not design around pluggability here
22:25:22 sdague: but use the tempest client in the main gate jobs
22:25:29 but we can segue into the next topic
22:25:34 afazekas: no
22:25:36 because that's how we got into this mess
22:25:56 mtreinish: sure, I can do the spec on the plane tomorrow
22:26:06 or in the airport
22:26:15 afazekas: in fact I really think we should consider removing the CLI tests at some point in the future
22:26:28 sdague: I agree the first step would be to just change the client
22:26:45 afazekas: because I don't see a reason to keep them in tempest anymore, they were only added so we didn't have to set up devstack jobs for each client
22:26:48 dkranz: right, and then delete all the abstractions around the client in the code
22:26:49 but that's simple now
22:27:13 so we don't have that whole convoluted tenant isolation, for instance
22:27:22 sdague: yes, they are at the wrong level in any event
22:27:37 well I think we're already into the next topic
22:27:40 so let's move to that :)
22:27:49 sounds good
22:27:54 #topic Abstraction of test/client interactions (dkranz)
22:27:59 sdague: in the tenant isolation code we should switch to a single client
22:28:03 #link http://lists.openstack.org/pipermail/openstack-dev/2014-July/039927.html
22:28:12 So I put out my thinking in that email
22:28:37 It is really a follow-on to the thinking about moving response checking to the client
22:28:55 so... my concern here is it seems easier said than done
22:28:58 dkranz: yeah I haven't had a chance to respond to the thread
22:29:00 Does anyone have any comments about that email proposal?
22:29:15 yeah there are too many fundamental differences between the clients
22:29:20 it makes doing this a real mess
22:29:22 sdague: There will be corner cases around some sucky apis
22:29:36 dkranz: right, the corner cases end up bringing a lot more debt back in
22:29:38 heck, there isn't even consistency between the different python-*clients
22:30:03 dkranz: so what's the rationale for a retargetable client?
22:30:08 dkranz: just look at http://git.openstack.org/cgit/openstack/tempest/tree/tempest/common/isolated_creds.py
22:30:11 like why is it good?
22:30:29 that basically is a simple abstraction layer just for getting creds
22:30:32 there is a lot of debt there
22:30:44 here's the point where I'm going to blow marun's mind and say I've come around to his point of view :)
22:30:49 I am focusing on the client methods and check/serialize/deserialize
22:30:54 dkranz: As I remember, for swift you also need to read/test the response headers
22:31:07 sdague: Maru is a joint author of that email
22:31:12 right
22:31:19 He was looking over my shoulder when I sent it
22:31:37 dkranz: right, but isn't the motivation here about migration of tests from neutron -> tempest?
22:31:44 Obviously that idea has to be fleshed out
22:32:04 sdague: In part yes, but that proposal involved a retargetable client
22:32:04 because my position has changed, and I think neutron should just keep those tests in their functional job, and that's cool
22:32:18 and let's not over-design this
22:32:23 dkranz: it looks to me like all you really want is a stable client api from the email
22:32:38 sdague: So you want to just rip out api tests from tempest, end of story?
22:32:44 dkranz: no
22:32:54 sdague: Then how do you avoid duplication?
22:36:59 that's the thing you keep saying, which makes me think it's wrong :)
22:37:00 sdague: You mean put up a sample patch of doing this for a few apis?
22:37:00 dkranz: yeh
22:37:00 sdague: sure thing
22:37:00 I think about stuff better with real code
22:37:00 sdague: will do tomorrow
22:37:01 dkranz: cool
22:37:39 ok then if no one has anything else to add
22:37:41 let's move on
22:37:54 mtreinish: yes
22:38:01 #topic Grenade
22:38:12 mtreinish: perhaps this is a good topic for next week as well
22:38:28 andreaf: maybe, although dkranz won't be there...
22:38:35 sdague: so anything new on the grenade front?
22:38:46 oh, ok
22:39:04 not this week. Mostly I was trying to clean up some of the os-loganalyze things and help find some bugs
22:39:22 ok then does anyone else have something to discuss about grenade?
22:39:25 otherwise let's move on
22:39:56 Does anyone have an idea why we see a lot of service startup failures in the grenade jobs?
22:40:04 screen
22:40:35 afazekas: yep, it's screen
22:40:50 I had some retry logic which apparently made it worse
22:40:57 so we're back to long timeouts on screen
22:41:02 a lot of sleep 3 in the code
22:41:10 which makes it happen less often
22:41:21 the *real* fix is to make grenade not use screen
22:41:41 thx
22:41:48 #topic Neutron Testing
22:41:55 it *should* be able to do that today, but it can't
22:41:59 * andreaf gotta go now, see some of you next week
22:42:26 ok, mlavalle left an update in the agenda on the neutron scenario tests stuff
22:42:33 so you can read that there if you're interested
22:42:49 and I saw an update from salv-orlando about the parallel jobs
22:43:03 we're one bug down, one to go on making that transition
22:43:11 at least for the neutron jobs
22:43:25 I confirm that.
22:43:27 which is what we decided to do at the project meeting on Tues.
22:44:23 so does anyone else have something to add about neutron testing?
22:44:51 I think everyone owes salv-orlando a beer
22:45:00 for making awesome progress
22:45:02 probably more than one :)
22:45:49 I have a bad feeling that we still have some other hidden full-job-related issue
22:45:54 beer++
22:46:19 afazekas: maybe, but we'll see when we start running it in volume
22:46:25 at some point you just have to jump
22:46:30 anyway let's move on
22:46:35 #topic Bugs
22:46:49 So I personally haven't been keeping on top of bug triage too well
22:46:59 and I'm not really sure what the state of the tracker is
22:47:16 it may be worthwhile to have another bug day in the near future
22:47:33 but I'll take a call for volunteers on organizing that after the midcycle
22:47:58 aside from that, does anyone have any bugs that they would like to raise attention on?
22:48:35 ok let's move on then
22:48:37 not for me
22:48:46 #link https://bugs.launchpad.net/tempest/+bug/1251448
22:48:48 Launchpad bug 1251448 in tempest "BadRequest: Multiple possible networks found, use a Network ID to be more specific." [Undecided,Won't fix]
22:49:15 afazekas: that's targeted against havana
22:49:38 what specifically about it?
22:49:42 mtreinish: can we enable the tenant isolation on the Havana jobs?
22:49:54 afazekas: I doubt it
22:50:05 we're only barely getting them working now
22:50:10 afazekas: yeah there is no way
22:50:33 salv-orlando and I spent basically a week in Montreal getting that to work on the tempest side
22:50:41 and there were tons of fixes on the neutron side too
22:51:04 mtreinish: AFAIK it is working on the tempest side in stable/havana
22:51:22 afazekas: well sort of, it actually overprovisions stuff with tenant isolation
22:51:25 which causes issues
22:51:43 that's why you see the network resources attr in the class definitions on some tests
22:52:19 anyway if there aren't any other bugs let's move on
22:52:37 #topic Critical Reviews
22:52:49 so I see dkranz put one on the agenda
22:52:52 The fixes were also back-ported to neutron, and it would be better to see more real issues than issues caused by not enabling that config option
22:53:03 #link https://review.openstack.org/#/c/104290/7
22:53:12 mtreinish: I just wanted to get that through as it has suffered many rechecks and rebases
22:53:25 afazekas: the fixes were major rewrites of internals, they're not backportable
22:53:36 afazekas: we can talk about it in -qa after the meeting
22:53:53 dkranz_: ok I'll take a look
22:54:01 mtreinish: thanks
22:54:04 mtreinish: heh, remember the first spec I pushed that you hated? :)
22:54:21 mtreinish: BTW, the last rebase involved a keystone api change
22:54:38 mtreinish: that clearly violates the api stability guidelines
22:54:47 sdague: vaguely, I remember it disappeared
22:54:51 sdague: why?
22:55:02 that was the "make the client have a single return" spec
22:55:14 mtreinish: which was approved without comment, including by you.
22:55:26 which was the class resp(dict): that had status as an attr
22:55:35 it would make this code just cleaner
22:55:47 mtreinish: Is it our policy that dev core +2 is all that is needed for such a violation?
22:56:01 sdague: yeah that was different, it wasn't talking about moving the checks out of tests
22:56:07 mtreinish: nope
22:56:11 it was just a refactor to stop using a tuple
22:56:18 I'm just thinking about it again with all the _, body
22:56:45 dkranz_: yeah it's the same procedure as before
22:57:00 sdague: Part of the proposal I made is removing the resp argument completely
22:57:11 dkranz: the keystone change was unavoidable
22:57:20 because it actually acts differently under apache
22:57:35 sdague: perhaps, but it was not even marked with doc impact
22:57:39 dkranz_: oh, we just landed a skip for that
22:57:54 if it's the head/get apache thing
22:58:06 the test change shouldn't have landed yet
22:58:36 mtreinish: No, and hopefully it won't until my patch does
22:58:49 mtreinish: can we run over the time slot a little bit?
22:59:04 mtreinish: because the change now has to be made in the client, not in the test.
22:59:29 nikhil___: probably, I don't think there's anything after, but it'd be better to discuss it in -qa
22:59:37 ok sure
23:00:10 dkranz: sure, that's why I don't like large refactors like this normally
23:00:19 mtreinish: Nor do I
23:00:25 although apparently I can't even remember 3 months back
23:00:42 anyway we're at time
23:00:46 thanks everyone
23:00:57 #endmeeting
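
[Editor's note] For context on the "make the client have a single return" refactor discussed under Critical Reviews, here is a minimal hypothetical Python sketch (not the actual Tempest code) of the idea: instead of every client method returning a (resp, body) tuple that tests unpack as "_, body = ...", the body is a dict subclass that also carries the response status, so tests that do not care about the raw response can ignore it entirely.

    class ResponseBody(dict):
        """Body dict that also exposes the HTTP response it came from."""

        def __init__(self, resp, body=None):
            # behave exactly like the parsed body dict
            super(ResponseBody, self).__init__(body or {})
            self.resp = resp                            # raw response (headers, etc.)
            self.status = getattr(resp, 'status', None)  # convenience attr for checks

    # hypothetical usage inside a client method:
    #     resp, body = self.get('servers/%s' % server_id)
    #     return ResponseBody(resp, json.loads(body)['server'])
    #
    # and in a test, no "_, body =" unpacking is needed:
    #     server = self.client.show_server(server_id)
    #     self.assertEqual('ACTIVE', server['status'])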