22:02:35 #startmeeting QA
22:02:36 Meeting started Thu Dec 12 22:02:35 2013 UTC and is due to finish in 60 minutes. The chair is mtreinish. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:02:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:02:40 The meeting name has been set to 'qa'
22:02:59 o/
22:02:59 hi, who's here today at our new time?
22:03:04 o/
22:03:05 \o
22:03:11 hi
22:03:11 I am
22:03:14 hi
22:03:22 o/ first time
22:03:29 hi!
22:03:38 cyeoh!! :)
22:03:40 first time for me too :-)
22:03:41 oh right, -5
22:03:48 today's agenda: https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
22:03:52 I really thought this was another hour :)
22:03:53 guitarzan: :-)
22:03:59 sdague: yeah, I was thrown too, I assumed the same thing :)
22:04:08 it's kind of nice having it straight after the Nova meeting
22:04:22 yeah
22:04:27 let's get started then
22:04:32 #topic Blueprints
22:04:49 so giulivo has done an awesome job on cleanup here
22:04:57 giulivo: since you're taking charge of blueprint cleanup, is there anything to report?
22:05:14 sure, so firstly, please note I sent you an email with comments about a few bps
22:05:46 unless you disagree, those will be closed/approved according to the comments in the email
22:05:56 if you disagree, please reply
22:05:57 giulivo: sounds great
22:06:16 giulivo: there are also a ton of "new" blueprints at the bottom
22:06:30 you going to purge those as well?
22:06:31 giulivo: is there a deadline to reply to the email?
22:06:47 well, this was a topic I'd wanted to discuss
22:06:55 I think anything in new & unknown state should just be marked invalid
22:07:14 some are basically asking for more tests to cover one functionality or another, what shall we do with these? ideas?
22:07:38 I'd close these too
22:07:49 giulivo: so basically how do we track new tests?
22:07:57 or new test development
22:08:04 giulivo: so if people are actually targeting them to milestones, I think we can leave them
22:08:05 or new anything
22:08:06 some instead are asking for new tests to cover components we don't have the services for, and I'd actually approve these
22:08:34 mtreinish, indeed that was my point of concern, I think the easiest way for that is just a wiki page, maybe organized by component
22:08:57 cyeoh: well you said a google doc spreadsheet works well for this right?
22:09:01 I can't think of anything else, but I'm obviously open to input
22:09:08 yea a spreadsheet works really well for api tests
22:09:29 I guess I'd take a step back
22:09:37 if we really want to we can have a blueprint point to the appropriate spreadsheet. But a bp is in general a really cumbersome way of tracking progress for that sort of thing
22:09:43 but wouldn't that force people to have a google account? what would be the benefits compared to the wiki?
22:09:47 basically: what is a blueprint useful for
22:10:07 sdague: Avoiding duplication
22:10:11 it's useful if a person is committing to deliver a thing by a milestone so we can ensure it gets review eyeballs
22:10:33 dkranz: sure, but that again needs both an owner and a milestone
22:10:44 sdague: Yes.
22:10:48 and regularly updated status on it
22:11:01 If we simply insisted that new blueprints had an owner and milestone it would go a long way
22:11:39 and the reason we go to the effort of that is to ensure that we prioritize reviews for it
22:12:20 and if you have a blueprint, I also expect you to come to this meeting, or provide regular updates
22:12:23 in some other way
22:12:53 so I think doing the giant purge that giulivo is doing is great
22:12:56 sdague, I totally agree with that, but I still wouldn't apply this to "we need tests for cinder backup" type of blueprints
22:13:05 giulivo: agreed
22:13:25 especially as there are already at least 3 bugs I duped together on that today :)
22:14:08 ok, giulivo I think you can probably triage out 75% of the blueprints with the feedback you gathered so far, right?
22:14:16 yes
22:14:17 then maybe next week we cycle again on what's left
22:14:30 to handle any sticky issues around them
22:14:45 but one thing, probably not on topic, remains open, and that is how we track the actual tests people want to add
22:14:55 maybe we should also write down in a wiki what sort of things we want blueprints for, and what we don't (and how people should handle it instead)?
22:14:58 giulivo: can we circle back to that at the end?
22:15:10 sdague, indeed not on topic, agree on that
22:15:16 cyeoh: so, I'm actually becoming more of a fan that we should do that in the tempest docs tree
22:15:16 sdague: it is the last topic on the agenda :)
22:15:21 mtreinish: great :)
22:15:28 ok
22:15:33 sdague: yep, that'd be fine with me
22:15:39 #action giulivo to do the great blueprint purge
22:15:52 and there was much rejoicing!
22:16:20 ok is there anything else that needs to be discussed about blueprints?
22:16:29 otherwise let's move on to the next topic
22:16:37 mlavalle, one thing
22:16:55 I noticed a blueprint where we suggest having tests for different network topologies in neutron
22:16:57 giulivo: listening...
22:17:03 I wonder if that is at all feasible with our infra?
22:17:31 https://blueprints.launchpad.net/tempest/+spec/quantum-basic-api
22:17:39 I think it is... the question is how to prioritize that development with the rest of the things I am doing for Neutron
22:17:40 giulivo: I think we should gather more details on that, it's really more of an openstack-ci item
22:18:13 we can talk after the meeting
22:18:17 mlavalle: yeah I think right now there are other priorities for neutron testing
22:18:26 ok for me
22:18:29 which is a good segue into the next topic
22:18:38 #topic Neutron testing
22:18:48 mlavalle: you're up
22:19:05 so on the api testing front I did 3 things this week
22:19:39 number 1, I created a wiki page with a How To for API test development for Neutron: https://wiki.openstack.org/wiki/Neutron/TempestAPITests
22:20:15 number 2, I sent a message to the ML recruiting developers and pointing to the wiki page
22:20:53 number 3, I kept adding to the gap analysis in https://etherpad.openstack.org/p/icehouse-summit-qa-neutron
22:21:19 at this point I have completed L2, L3, extensions management and provider networks
22:21:43 I will keep going through the API spec and hope to be finished by next week
22:21:59 at that point I will just start developing tests
22:22:09 from the list
22:22:38 mlavalle: ok, as we were talking about a minute ago it might be useful to put that list into a spreadsheet somewhere
22:22:54 that seems to work really well for splitting up api tests
22:23:07 sure... is that a Google spreadsheet?
22:23:13 especially because it's getting kind of lengthy
22:23:52 let's also acknowledge that EmilienM's patch merged and neutron grenade tests now run, though they don't yet test anything
22:23:55 mlavalle: sure I guess, I don't think we have an openstack infra cloud spreadsheet
22:24:04 mlavalle: yea, we've used a google spreadsheet in the past, but anything really like that would be fine
22:24:23 this is an example: https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc#gid=0
22:24:31 cool... I'll move it to a spreadsheet
22:24:39 it allows for really fine grained self allocation of work
22:25:24 any other questions / observations in this regard?
22:25:44 anteaya: yep, thankful for EmilienM's work there
22:26:05 what about the SSH bug? Anyone have any news on that
22:26:16 we need some devs coming forward to write some of those tests on the list
22:26:24 so far, no volunteers
22:26:29 so if any happen by
22:26:40 point them to -neutron and to mlavalle and myself
22:26:48 they don't need to commit much
22:26:53 any help appreciated
22:27:02 salv-orlando just came online
22:27:07 and has 2 hours to dig into it
22:27:24 I will give him his time and get a report from him before he goes offline
22:27:36 hopefully to hand the baton to someone else
22:27:59 my problem is that there seem to be many error messages captured by that logstash fingerprint
22:28:09 so in my work I am having a hard time
22:28:16 but my logstash foo is weak
22:28:24 that is all I have on the ssh bug
22:28:42 anteaya: thanks
22:28:58 ok is there anything else to discuss on the neutron testing front?
22:29:45 ok then let's move on
22:29:49 #topic Bug status
22:30:01 so adalbas said he couldn't make it today
22:30:25 so, we started the day at 276?
22:30:30 but he wanted to thank everyone who contributed to the bug triage day today (or yesterday for some people :)
22:30:50 sdague: he put some notes up here: https://etherpad.openstack.org/p/tempest-bug-triage
22:30:53 #link https://etherpad.openstack.org/p/tempest-bug-triage
22:31:09 http://status.openstack.org/bugday/tempest.html - cron was broken earlier in the day, so it doesn't have full progress
22:31:20 but I just got us to 97
22:31:23 last check
22:31:30 * sdague still triaging
22:31:35 wow that's a big drop
22:31:38 so thanks much to everyone
22:31:57 huge amount of work there, and it helps in getting patterns out of it
22:32:12 like the fact that there are a ton of nova state transition bugs that get filed piecemeal
22:32:44 which makes me think we actually need a new tempest test(s) that just does a large_ops-style run of the state engine and tries to break nova
22:33:11 sdague: something like the stress tests with a fake virt driver?
22:33:20 sdague: we were talking about something like that in HK
22:33:56 mtreinish: yeh, that would probably be a starting point
22:34:12 honestly I don't know if these are nova-compute bugs, or virt layer bugs with libvirt
22:34:20 I would actually +1 the idea of using fake drivers for the api/gate tests and keep the real drivers for the periodic jobs
22:34:34 giulivo: I think we need real drivers in the gate
22:34:38 yeah if they're libvirt bugs we won't catch them with the fake driver
22:34:48 but at gate we're not testing libvirt
22:34:51 nor lvm
22:34:55 And we need to ssh
22:35:07 giulivo: ?? we are testing those things today
22:35:21 We need the gate jobs to be "real"
22:35:21 dkranz: so this class of bugs doesn't need ssh
22:35:32 right, agreed, I think that's a distraction
22:35:38 I'm not trying to remove anything
22:35:39 sdague: I was referring to the comment about gate jobs
22:35:43 yep
22:36:10 sdague: well I can push out a new jjb job for running stress with a fake virt driver for like 20min
22:36:14 heh okay I see I'm a minority here, will try to bring it up again differently not during the meeting :)
22:36:15 staying on topic-ish, I think nova state bugs are a huge class of issues today, and we should try to figure out how to make them more frequent
22:36:16 that should be straightforward
22:36:32 we can do it nonvoting to see what it turns up
22:36:38 mtreinish: actually, I think we need to think through this further
22:36:48 one question, can we get the libvirt log on the gate?
22:36:50 but the thing is I don't think at gating we should actually be testing if libvirt behaves correctly, but if the api behaves correctly
22:36:58 because I actually expect this might be very surgical
22:37:12 ken1ohmichi_: you know where it's logging to?
22:37:18 libvirt is a subset of the nova drivers and, maybe just because it is the most common, we pick it for the "real" periodic jobs
22:37:40 giulivo: I disagree :)
22:37:40 sdague: log files under /var/log/libvirt/
22:37:44 giulivo: how about we save that discussion for after the meeting
22:37:54 ken1ohmichi_: we could definitely add it
22:38:17 sdague: thanks, will check the way.
22:38:28 ken1ohmichi_: get with me in -qa after the meeting, and I'll give you the pointers as to where to do that
22:38:53 sdague: great, will catch you :-)
22:38:58 one more thing related to the triage, I saw sdague and dkranz talking about it all day, so should we have guidelines for bug reports? cause those tracebacks don't really help us, people are just trying to use bugs in tempest for rechecks
22:39:13 maurosr: yes, definitely
22:39:26 maurosr: yeah we need better triage guidelines
22:39:38 I think that is something adalbas was planning to work on moving forward
22:39:51 do we want to do that now? or discuss in a review? I was going to propose something into the docs tree about triage and good bugs
22:40:06 mtreinish: My patch to send non-whitelisted errors to the log on failure will help
22:40:17 sdague: yeah I think doing it in a review would be fine
22:40:37 mtreinish: It will then be easy to get the error from the log and not just the backtrace, if there is one
22:40:37 dkranz: the d-g one?
22:40:53 mtreinish: Yes, but I don't know how to write shell script so it is not working
22:41:05 dkranz: ok I'll take a look after the meeting
22:41:05 will you post your "good bugs guidelines" to the ml? I need to read them
22:41:12 or a link to the ml
22:41:24 mtreinish: Help appreciated since I didn't understand Clark's comment
22:41:32 dkranz: where's the review?
22:41:41 dkranz: I am happy to clarify :)
22:41:47 sdague: https://review.openstack.org/#/c/61850/
22:41:56 anteaya: I think we'll handle it in a review in the tempest doc tree
22:42:03 clarkb: ok, I'll ping you later, thanks
22:42:06 sdague: I will look there
22:42:46 We've still got 4 topics on the agenda, so is there anything else to discuss on bugs?
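A rough illustration of the large_ops-style "run the state engine and try to break nova" idea floated under Bug status above, not an agreed design: boot a batch of servers and drive random lifecycle transitions, flagging anything that lands in ERROR. It is written against python-novaclient rather than tempest's stress framework; the credentials, image/flavor IDs, counts and sleeps are placeholders.

    import random
    import time

    from novaclient.v1_1 import client as nova_client


    def drive_state_machine(nova, image, flavor, count=20, rounds=100):
        """Boot a batch of servers, then randomly exercise lifecycle transitions."""
        servers = [nova.servers.create('stress-%d' % i, image, flavor)
                   for i in range(count)]
        for _ in range(rounds):
            # Re-fetch one server so we act on its current status.
            server = nova.servers.get(random.choice(servers).id)
            if server.status == 'ACTIVE':
                random.choice([server.stop, server.reboot, server.pause])()
            elif server.status == 'SHUTOFF':
                server.start()
            elif server.status == 'PAUSED':
                server.unpause()
            elif server.status == 'ERROR':
                raise AssertionError('server %s fell into ERROR' % server.id)
            # Other states (BUILD, REBOOT, ...) are transitional; leave them alone.
            time.sleep(1)
        for server in servers:
            server.delete()


    if __name__ == '__main__':
        # Auth details and IDs below are placeholders for whatever the job's devstack exposes.
        nova = nova_client.Client('demo', 'secret', 'demo',
                                  'http://127.0.0.1:5000/v2.0/')
        drive_state_machine(nova, image='<image-id>', flavor='1')

In a gate job this would presumably run against a devstack brought up with the fake virt driver (devstack's VIRT_DRIVER setting), so failures point at nova's own state handling rather than libvirt.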
22:42:54 sdague: https://review.openstack.org/#/c/61850/1/devstack-vm-gate.sh,unified
22:43:21 dkranz: a good segue :)
22:43:28 #topic Critical Reviews
22:43:45 so does anyone have any reviews that they would like to bring up
22:43:51 that they think need attention
22:44:17 mtreinish: I proposed we give some priority to heat and ceilo reviews
22:44:24 actually, the review queue is pretty short right now
22:44:27 it's pretty nice
22:44:28 dkranz: that's the next topic
22:44:34 mtreinish: ok :)
22:44:40 I actually have 2
22:44:41 I have a question about this review: https://review.openstack.org/#/c/59759/
22:44:52 #link https://review.openstack.org/#/c/60866/
22:44:58 which I guess could be added to the conf file cleanup bp
22:44:58 and
22:45:01 #link https://review.openstack.org/#/c/60578/
22:45:14 rahmu: fire away
22:45:22 have we settled on a way to skip a test if a middleware (in the case of swift) is not installed?
22:45:32 there was some talk on the ml: http://lists.openstack.org/pipermail/openstack-dev/2013-December/thread.html#21121
22:45:50 and sdague said that a decorator on setUpClass would be okay
22:45:59 rahmu: so I've been working on an approach with the extensions
22:46:14 rahmu: actually using decorators for something that depends on a config variable isn't going to work
22:46:30 you need to do it inside the function
22:46:59 something I figured out a couple of days ago, I can give you a hand with it after the meeting
22:47:03 rahmu: so I'd say follow mtreinish's lead on how he's tackling the compute extensions
22:47:18 okay thanks. I'll ping you later mtreinish
22:47:40 rahmu: ok cool
22:47:48 mtreinish: +2 to both of your reviews
22:47:54 sdague: sweet thanks
22:48:10 https://review.openstack.org/#/c/61873/ - easy, and probably closes a bug :)
22:48:11 ok, are there any other reviews anyone would like to bring up?
22:48:28 #link https://review.openstack.org/#/c/61873/
22:49:03 I'll +2 it, I just want to dig a little bit more into why it's there
22:49:08 because it's a little strange looking
22:49:26 and I don't understand why it would be failing anyway
22:49:26 ok, let's move on
22:49:28 mtreinish: possibly someone was thinking of checking server status
22:49:51 mtreinish: I believe that line was recently added to *avoid* a race condition. I'll track it down.
22:50:02 cyeoh: yeah, I had a couple of ideas
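A minimal sketch of the "check the config inside setUpClass, not in a decorator" pattern rahmu asked about above. A skip decorator evaluates its condition when the module is imported, generally before the config file has been parsed, so the check has to run at class setup time instead. The option name and the plain-unittest base class here are illustrative stand-ins, not tempest's actual config or base class.

    import unittest


    class FakeConf(object):
        """Stand-in for tempest's CONF; the real option would live in tempest.config."""
        container_quotas_middleware = False

    CONF = FakeConf()


    class ContainerQuotasTest(unittest.TestCase):

        @classmethod
        def setUpClass(cls):
            super(ContainerQuotasTest, cls).setUpClass()
            # By the time setUpClass runs, the config has been loaded, so the
            # check reflects the actual deployment rather than import-time state.
            if not CONF.container_quotas_middleware:
                raise cls.skipException("container_quotas middleware not enabled")

        def test_quota_is_enforced(self):
            self.assertTrue(True)  # placeholder test body

Presumably the compute-extension work mtreinish mentions follows the same shape: read the option at class setup time and raise the skip exception there.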
22:50:12 #topic We should consider putting a fast-track on heat and ceilometer tests since they are integrated but lacking. (dkranz)
22:50:15 cyeoh: That method now checks server status too I believe
22:50:23 cyeoh: As of a recent change
22:50:35 mtreinish: Any dissenters?
22:50:36 dkranz, ah ok
22:50:36 dkranz: this lengthy topic is yours...
22:50:46 dkranz: no I'm fine
22:50:59 my only concern is heat tests don't work and we have no way to verify them
22:51:14 mtreinish: Some of them do work.
22:51:23 yeh, we're basically wedged on getting infrastructure up for the slow tests
22:51:23 mtreinish: I don't know what to do about the others
22:51:31 I've been trying to review the other ones
22:51:38 sdague: right
22:51:51 but I think that is fine as priorities go
22:52:05 sdague: At this point I think it would do more good than harm to review things even if they don't run. Just for this special case.
22:52:16 dkranz: it would be good to get folks working on those into the -qa channel as well on a regular basis
22:52:25 is there a rep from heat and from ceilometer in this meeting?
22:52:26 Yes
22:52:29 so we can give more specific feedback
22:52:33 stevebaker: ^^^
22:52:51 I think we can move on
22:52:52 stevebaker has been great
22:52:58 but we need a ceilo person
22:53:08 dkranz: can you take a todo to recruit one
22:53:09 ?
22:53:20 sdague: Sure
22:53:43 #action dkranz to recruit a ceilo core to be focal point on ceilo tempest tests
22:53:48 ok let's move on
22:53:50 thanks mtreinish
22:53:58 #topic Should we have more specialization on the Core review team? e.g. I am comfortable with the nova v3 patches, but nowhere on neutron. (dkranz)
22:54:05 dkranz: another lengthy topic
22:54:32 the topics passed pep8 apparently
22:54:52 mtreinish: This is really a question of whether we should all review everything scattershot or each take an area to hit first when reviewing
22:55:07 Doesn't mean we are limited to one area
22:55:22 But I found reviewing the nova v3 changes got easier after I had done a bunch
22:55:46 so I was looking at some neutron tests recently
22:56:04 and I asked in the review for a link to the API docs for that section
22:56:14 which was provided in a comment by lifeless
22:56:19 and it was incredibly useful
22:56:31 well I'd agree there is definitely a big learning curve to be able to review changes for a new api/project
22:56:43 sdague: so you're saying you want links to api specs in commit messages for new tests now?
22:56:45 so I kind of wonder if we should ask that of at least API tests
22:56:55 mtreinish: or in a comment
22:57:07 sdague, I would put these links in the aforementioned wiki page where we track api tests
22:57:11 sdague: I think that's a good idea, though admittedly for the v3 api we don't really have a spec document yet
22:57:14 giulivo: or there
22:57:28 cyeoh: yeh, so it's hard to validate API tests for an API with no docs :)
22:57:46 so it seems like we've got a cart / horse problem there
22:57:55 sdague: I asked for that for nova v3 and was pointed to a diff with v2, which was what was needed.
22:57:56 I think it is a reasonable request
22:58:00 sdague: yea we only have a v2/v3 diff document which is still buggy
22:58:13 cyeoh: well as dkranz said, it was useful
22:58:47 but, yeh, some easier way of pointing reviewers to the spec would just expedite reviews
22:58:53 exactly
22:59:04 I think the issue is that when we started, there were a few projects/apis and we all knew all of them
22:59:08 agreed.
22:59:18 But I have not been able to keep up with all the new apis and projects
22:59:29 I was suggesting we divide and conquer for that
23:00:01 dkranz, maybe we could put a couple of nicks next to each component telling people to add those as reviewers *if* in need?
23:00:05 But not in a rigid way. Let me think a little more and propose something
23:00:13 dkranz: well, we're out of time
23:00:13 Not right now
23:00:15 dkranz: cool
23:00:16 (I mean that in the tempest docs)
23:00:25 yeh, I think we need to give up the room
23:00:29 giulivo: sorry we couldn't get to your topic
23:00:34 we'll save it for next week
23:00:37 #endmeeting