20:59:53 #startmeeting nova
20:59:54 Meeting started Thu Dec 4 20:59:53 2014 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:59:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:59:57 The meeting name has been set to 'nova'
21:00:03 Who do we have around for the nova meeting?
21:00:06 o/
21:00:07 o/
21:00:09 mikal: o/
21:00:14 <_gryf> o/
21:00:18 o/
21:00:24 Excellent
21:00:33 #topic Kilo specs / blueprints
21:00:40 o/
21:00:43 All blueprints and specs for juno need to be approved by kilo-1 (December 18th)
21:00:47 hi
21:00:52 ^-- that's what we said at the summit at least
21:01:05 o/
21:01:05 So... If you're holding off on updating a spec, now would be a good time to do that
21:01:19 It would be nice to avoid a mad rush at the end if at all possible
21:01:40 Also, as a gentle reminder, we also said Non-priority Feature Freeze is kilo-2 (February 5th)
21:01:48 mikal, did you mean specs for kilo?
21:01:51 I have 2 BPs for Hyper-V
21:01:59 So, not getting a spec approved at kilo-1 would be a problem for landing the code by kilo-2
21:02:09 n0ano: sigh, yeah, that
21:02:14 o/
21:02:20 do we still need 3 cores to sponsor an exception in k-2?
21:02:22 Cut and paste from the agenda in my defense
21:02:23 i can't remember
21:02:26 <_gryf> mikal, I've 1 BP for host health awaiting comments
21:02:29 mriedem: we haven't discussed it yet
21:02:42 mriedem: so I think we're still unsure there
21:02:54 _gryf: there are a few in that state, I am sure reviewers will get to it when they can
21:02:59 i guess we wait until k-1 is passed
21:03:08 mriedem: yeah, it's not urgent yet
21:03:10 <_gryf> mikal, gr8
21:03:28 So, we also seem to have one blueprint which is asking for the "Trivial" fast track
21:03:35 https://blueprints.launchpad.net/nova/+spec/hw-iscsi-device-name-support
21:04:30 that feels familiar, like we put a fix in late in juno for it
21:04:33 Does anyone have an objection to that bp being approved as a fast track?
21:05:04 * mikal tries not to go read code in the meeting to check
21:05:16 seems ok, hard to know w/o POC patch
21:05:17 no objection, seems like just a bug fix though
21:05:20 yeah
21:05:22 yeah, was just about to say
21:05:26 just make it a bug and move on
21:05:37 Well, let's just approve this then
21:05:42 Instead of making them turn it into a bug
21:05:45 yeah, seems fine to me
21:05:53 Ok, I will approve
21:05:58 mikal: Speaking with johnthetubaguy, he asked me to bring here two “trivial” Hyper-V BPs (not requiring specs)
21:06:14 mikal: should I post the links now or would you prefer to discuss them later?
21:06:22 Let's do it now
21:06:31 mikal: https://blueprints.launchpad.net/nova/+spec/hyper-v-ovs-vif
21:06:53 and https://blueprints.launchpad.net/nova/+spec/hyperv-serial-ports
21:07:07 Ok, first one first...
21:07:21 mikal: cool, tx!
21:07:27 The VIF one is driver feature parity
21:07:32 both are
21:07:33 yep
21:07:38 both seem fine, assuming they're small, which they seem to be
21:07:40 So, I'm ok with approving the first one
21:07:47 (Haven't read the second yet)
21:07:52 not requiring major changes to anything core, just implementing things
21:08:05 No one objects to the first one?
21:08:23 nope
21:08:29 Ok, I will approve
21:08:30 dansmith: correct, implementation is quite small
21:08:31 Now the second one
21:09:06 Looks fine to me
21:09:19 yep
21:09:23 No virt driver api needs to change?
21:09:29 no
21:09:34 i think the virt api already has it
21:09:37 right
21:09:39 hyper-v just needs to implement right
21:09:39 k
21:09:40 alaski: it’s a feature parity one as well
21:09:41 this is just implementing it
21:09:44 okay. seems fine to me then
21:09:51 I'm gonna approve
21:09:56 alexpilotti: Can you try to Cc me on the implementation of the 2nd one? I was thinking about changing the way the serial console stuff works (probably in L)
21:10:35 Any other fast track approval requests from anyone?
21:10:37 tonyb: sure, we’ll post the code for review this week
21:11:06 Moving on?
21:11:09 sure
21:11:09 alexpilotti: cool thanks
21:11:17 #topic Kilo priorities
21:11:20 thanks guys!
21:11:28 So, this is mostly a reminder that we have the priority review etherpad
21:11:35 #link https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
21:11:42 I've been trying to keep on top of review of those things
21:11:47 But there isn't a lot listed there at the moment
21:11:59 So perhaps people have forgotten to update with their new reviews for priority things?
21:12:06 not me,
21:12:15 just at the bottom of a cycle of pushing new things
21:12:31 That's cool. This is just a reminder to authors and reviewers really.
21:12:48 Moving on...
21:12:57 #topic Gate status
21:13:03 better
21:13:04 mriedem / jogo: talk to me
21:13:11 if you didn't know from earlier in the week,
21:13:14 the gate was fubar'ed
21:13:20 but looks better now
21:13:21 Yeah, I saw some of that
21:13:24 Excellent
21:13:34 oslo decided to release everything they had on the same day
21:13:37 kind of broke some things
21:13:50 mriedem: nova didn't break much though right?
21:13:54 so that's going to be the norm for oslo, a lot more releases
21:13:57 as in nova wasn't to blame
21:14:07 jogo: nova was doing some naughty things that oslo updates exposed
21:14:10 nova was to blame on the bad mocks for locks
21:14:16 yup
21:14:18 oh man
21:14:26 rhymy goodness
21:14:37 Heh
21:14:40 otherwise we look ok http://status.openstack.org/elastic-recheck/gate.html
21:14:53 "mocks for locks" being a rebel band of salmon-loving freelance coders
21:14:57 sdague: ohhhh I blanked out on the nova unit tests wedged for a day
21:15:05 yeh
21:15:07 ought we to consider pinning our oslo dependencies more strictly and upping the version in a review?
21:15:23 gilliard: that's not allowed
21:15:36 and it causes a whole other set of problems
21:16:11 Is there anything else on the gate or should we move on again?
21:16:25 one question, the docker job in check?
21:16:36 who's staying on top of that?
21:17:05 sdague: as in it's failing all the time?
21:17:06 i'm not seeing the usual containers people in here
21:17:29 yeah http://logs.openstack.org/46/97946/27/check/check-tempest-dsvm-docker/41c0379/testr_results.html.gz
21:17:29 yeh, as in failing all the time, and all the output format for that job is weird
21:17:39 can we sic anteaya on them :)
21:17:42 that job was just added, right?
21:17:46 yup
21:17:47 So, there is a RH guy whose name I've forgotten who was working on that
21:17:52 and it's already not looked-after?
21:17:53 dims: ^?
21:18:09 dims: you know the RH guy's name, right? working on nova-docker.
21:18:19 Ian Main (Slower)
21:18:35 Yeah, that's the one
21:18:41 * dims reading
21:18:44 So, we should ping him I suppose and cc anteaya
21:18:47 i'll look for an e-r query on the failure
21:18:56 If it keeps failing we can drop it back out again
21:19:20 the intent is to gate nova on an out of tree driver?
21:19:20 Ian is certainly the author of https://review.openstack.org/#/c/128753/
21:19:22 i have a review in progress - https://review.openstack.org/#/c/138714/
21:19:29 So it's in his best interests to be paying attention to this
21:19:50 sdague: not gate, no
21:19:53 yeah i'll get the bug opened if there isn't one, and an e-r query up, i see the failure in the compute logs
21:20:02 sdague: but the docker people wanted to do the "move back into nova" two step
21:20:07 Which is what that linked review above is
21:20:12 ah, ok
21:20:14 yeah, so they should be squeaky clean :)
21:20:25 mriedem: agreed, we need them to take this seriously
21:20:29 sdague: thanks for raising it
21:20:34 sorry, hadn't seen any ML on it so I was wondering
21:20:40 Anything else on gating?
21:20:44 nope
21:20:50 Oh, I guess one thing
21:20:53 Who is chasing Ian?
21:20:57 sdague or mriedem or me?
21:21:06 * sdague not it
21:21:08 2 folks primarily Eric (erw) and Ian
21:21:26 I just want to be clear on who is going to reach out so it actually gets done
21:21:31 mriedem: want to do it or shall I?
21:21:40 mikal: i'll get the bug reported,
21:21:45 and ML thread on it
21:21:50 mriedem: thanks
21:21:54 looks like we don't index the docker logs,
21:21:58 so can't get logstash on it
21:22:16 mriedem: something is bonkers in the dir format as well
21:22:25 is that specific test case failing all the time?
21:22:33 dims: w/o e-r i can't tell
21:22:49 dims: no idea; also, why I asked who is the shepherd of that job
21:22:55 because someone should be on top of it
21:23:02 i'll start the process...
21:23:12 it's 3rd party CI so not indexed in our logstash
21:23:18 mriedem: is that true?
21:23:22 mriedem: it's run in infra land
21:23:22 it's not 3rd party
21:23:23 IIRC
21:23:26 it's in the check queue
21:23:28 i've been updating stuff there when i broke stuff with oslo updates to nova
21:23:30 fyi
21:23:31 clarkb: said it wasn't infra
21:23:43 but am not a core there
21:23:44 really... ?
21:23:50 ok, let's offline that
21:23:55 k
21:23:59 Ok, let's take this one to the mailing list and a bug or two
21:24:09 #topic Bugs
21:24:17 I sneakily added this one to the agenda just now
21:24:22 Because we forgot it, which is lame
21:24:27 We have one critical bug with no assignee
21:24:33 A Xen SSL thing
21:24:41 #link https://bugs.launchpad.net/nova/+bug/1374001
21:24:44 Launchpad bug 1374001 in nova "Xenserver glance plugin uses unsafe SSL connection" [Critical,Confirmed]
21:24:48 oh, that's still open
21:25:01 melanie was doing something w/ that if i remember right
21:25:16 it's xen only, so it's not critical anyway, right?
21:25:25 dansmith: that's true
21:25:38 dims: yes. it's constrained to python 2.4 so it's not straightforward
21:25:46 ah. right
21:26:03 melwitt: are you still working on that or should we unassign you?
21:26:04 well, it is a security bug that we released a sec advisory on 18 months ago
21:26:29 honestly, I think it's actually "can't fix, xen is full of sec holes"
21:26:40 because the dom0 is so old
21:26:44 Erm, vote not me to do that press release
21:26:46 xenfail
21:27:07 mikal: I can update the patch again with what jerdfelt suggested: do nothing for 2.4 and warn, do something if > 2.6. other than that we don't know a better way
21:27:19 >= 2.6 I mean
21:27:21 hi
21:27:29 melwitt: sounds reasonable to me
21:27:40 mikal: okay, will do that then
21:27:42 it's vaguely actually python is crazy pants. We only get SSL cert verification in the *next* python 2.7 release
21:28:03 Are there any other bugs that people want to shout about?
21:28:12 yes
21:28:14 https://bugs.launchpad.net/nova/+bug/832507
21:28:16 Launchpad bug 832507 in nova "console.log grows indefinitely" [High,Fix released]
21:28:27 alaski: I'm working on a spec for that
21:28:36 The bug that keeps on giving
21:28:37 I know it's a bug but the fix is big and intrusive
21:28:54 alaski: do you have a specific concern? i.e. is it hurting you at the moment?
21:28:56 yeah, there have been multiple proposed fixes
21:29:20 mikal: no. just is it being actively pursued or do we mark it as wont fix for now
21:29:32 but it sounds like there are plans
21:29:34 alaski: I tricked tonyb into having a look
21:29:34 alaski: the only one I think will work is a helper to manage the log file, which can then be logrotated on the operator's schedule
21:29:37 sounds like the answer is: tonyb to the rescue
21:29:45 tonyb: unless we fixed qemu...
21:29:51 tonyb: you do seem to love the qemu codings
21:29:51 yeah, that doesn't work
21:30:08 mikal: Umm fool me twice .... shame on me ;P
21:30:14 mikal: awesome
21:30:18 Cool
21:30:22 Moving on?
21:30:25 tonyb: do you mind assigning it to you?
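[Editor's note: the direction suggested above, a helper that keeps console.log manageable so operators can rotate it on their own schedule, would pair with a logrotate stanza roughly like the following. The path and values are illustrative only, not a shipped Nova config.]

```
/var/lib/nova/instances/*/console.log {
    size 10M          # rotate once the console log passes 10 MB
    rotate 4          # keep four old logs
    copytruncate      # truncate in place instead of renaming
    missingok
    compress
}
```

copytruncate is the important directive here: qemu keeps the log file descriptor open, so a plain rename-based rotation would leave qemu writing to the rotated file forever.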
21:30:31 Apart from "FIX LOTS OF BUGS PLEASE"
21:30:32 alaski: sure
21:30:46 mikal: can we assign a release for this blueprint https://blueprints.launchpad.net/nova/+spec/hw-iscsi-device-name-support
21:31:02 tonyb: awesome, thanks
21:31:07 nikesh_vedams_: oh, I forgot milestones
21:31:10 alaski: done
21:31:12 nikesh_vedams_: I will fix that after the meeting
21:31:26 #topic Stuck reviews
21:31:33 There are a few here
21:31:35 mikal: thanks
21:31:41 And I specifically don't want people to get personal here please
21:31:49 #link https://review.openstack.org/#/c/123957/
21:32:05 For this one, tonyb fixed a qemu bug
21:32:14 But the debate is how we work around distros without the fix
21:32:19 If in fact we work around them at all
21:32:26 I got a little more info from pbrady on this
21:32:30 It's been in review since September
21:32:48 I wanted him to come offer details here, but he had a conflict
21:32:59 dansmith: phooey
21:33:11 so, we have bz's listed with workarounds in the libvirt driver
21:33:15 tonyb: apparently the office holiday party is more important
21:33:20 one at least for live migrate on older versions of libvirt
21:33:32 is there a bz for this and rhel?
21:33:36 there is,
21:33:46 tonyb has done a bug tracking page for distros
21:33:54 and I also got an ETA of "within a week" for this fix being available to RHEL/CentOS users
21:34:01 #link https://wiki.openstack.org/wiki/Bug1368815
21:34:04 how far back in RHEL?
21:34:13 dansmith: we got that same ETA pre-Paris
21:34:13 it's already in proposed-fixes repos for fedora and ubuntu
21:34:21 dansmith: but they are closer
21:34:35 dansmith: for fedora 21, not for 19 or 20
21:34:44 tonyb: that's fair I guess, I wasn't in on the discussion at that point
21:34:47 tonyb: yeah, but it's fedora
21:34:47 nothing for sles or gentoo
21:34:47 ok, so only rhel 7, not 6.5
21:35:06 anyway, the impact of doing this is pretty significant,
21:35:12 if it's not getting fixed in 6, that seems problematic
21:35:20 mriedem: correct, no 6.x fixes
21:35:24 hmm
21:35:26 both system performance impacts, and what it means for us long-term and not knowing when we can remove it
21:35:35 mriedem: I'd have to check if that's affected
21:35:38 so i know rdo only supported juno on rhel 7,
21:35:39 tonyb: did you ask someone about 6?
21:35:47 but we have compute node support for rhel 6.5 and juno
21:35:53 mriedem: but i thought rhel7 was the one true rhel now from our pov
21:36:04 tonyb: not mine
21:36:05 6 is still pretty important for us so I'd be surprised if we're not fixing it
21:36:09 dansmith: I grabbed the centos source as a quick check
21:36:12 also because this affects other products
21:36:41 so this only occurs on qcow from glance on initial explode to raw?
21:36:43 So, one suggested option was a flag
21:36:46 Which no one seems to love
21:36:48 I think it's also worth noting that this fix does not address the many other places we create images,
21:36:51 But might be the only option?
21:36:53 so I'll check RHEL/CentOS 6.x and open a bug if required, but that doesn't help the review ...
21:36:58 and also pbrady is not sure that this doesn't just narrow the window for the race
21:37:01 and he would... know :)
21:37:27 someone get the lampshade off his head and cocktail out of his hand
21:37:32 mikal: so, I don't love the flag, but I love it more than just blindly putting it in
21:37:43 mikal: also, the other option was just push this into stables that are currently supported
21:37:57 dansmith: I did ask tonyb to check other uses of qemu-img in the code BTW
21:38:02 dansmith: I think he did an audit IIRC
21:38:07 mikal: and the other is that we push this into master and just remove it when some number of distros support it, instead of trying to keep it until we're sure it's globally fixed
21:38:08 and I've done that.
21:38:27 Ok, so there's a proposal there
21:38:33 What do people think of putting this in stable only
21:38:46 And telling people in the release notes for Kilo that they must use a qemu with this bug fix?
21:38:49 "must"
21:38:50 the migration and snapshot code /could/ be problematic but they use slightly different code paths in qemu
21:39:23 the glance case is the only place (that I found) where we use qemu-img convert on something that isn't managed by qemu
21:39:24 tonyb: pbrady seemed to think they *were* problematic, but I really don't know the details
21:39:27 hmmm... any idea if this is related to the live snapshot fails we had going to trusty?
21:39:59 sdague: point me at the specific bug and I'll let you know.
21:40:16 let's talk about options
21:40:20 sdague: yep, good question, is there a ticket for those fails?
21:40:24 because if we can do something less nuclear, we should just do that
21:40:26 https://bugs.launchpad.net/nova/+bug/1334398
21:40:28 Launchpad bug 1334398 in nova "libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate" [High,Confirmed]
21:40:45 sdague: thnx
21:40:48 so I think flag is probably the least worst option
21:40:59 sdague: less worse than stable-only fixes?
21:40:59 sdague: Thanks, I'll digest that and get back to you
21:41:12 stable-only is my preference, but on the flags thing:
21:41:24 default on until we feel it's relatively fixed in the field, then default off and deprecated
21:41:26 we could have a section for workarounds where we put more of this sort of thing
21:41:40 it also makes it easier for us to track workarounds we have in our code
21:41:51 dansmith: as in a flag group?
21:41:53 because they're tied to CONF.workarounds.qemu-img-foo
21:41:55 dansmith: I think that's a good idea
21:41:58 mikal: yeah
21:42:04 yeh, workarounds group sounds good
21:42:23 It also marks those things as flags you shouldn't twiddle without thought
21:42:23 so let's just do that and distros can override that if they know they're fixed
21:42:35 sure
21:42:36 I actually really like that idea
21:42:38 mikal: yes, and you can set them all to off if you want purity or whatever
21:42:42 We could also immediately deprecate the workaround flag
21:42:46 i.e. start removal in L
21:42:56 depends on the thing, but sure
21:43:01 Yeah, in this case I mean
21:43:05 yes
21:43:06 mikal: I'd wait to deprecate this one until there are actually fixes in the field
21:43:13 because those are still waiting on the future
21:43:22 they're available by repo to most people
21:43:25 but whatever :)
21:43:31 sdague: sure, we could leave the specifics of that to a code review
21:43:38 dansmith: I thought you said another week for rhel?
21:43:43 Also, OMG, we just made a plan
21:43:51 Ok, let's do that
21:43:54 sdague: there is a proposed repo/channel for them too I think
21:44:00 sdague: centos has packages
21:44:02 I also want to publicly thank tonyb for chasing this and fixing qemu for us
21:44:04 just not in default repos I think
21:44:07 Although I am massively biased here
21:44:19 ok, can we also confirm the fixes close the issue?
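[Editor's note: the CONF.workarounds idea discussed above amounts to a dedicated config section whose flags default to the safe workaround and can be switched off by a distro that knows its qemu is fixed. A minimal stdlib sketch of that shape follows; Nova itself would register such options via oslo.config, and the option name below is invented for illustration.]

```python
# Sketch of a [workarounds] config group: default-on workaround flags
# that an operator or distro can turn off. "qemu_img_sparse_copy" is a
# hypothetical option name, not an actual Nova flag.
import configparser

DEFAULTS = {"qemu_img_sparse_copy": "true"}  # workarounds default on


def load_workarounds(text):
    """Parse operator config, falling back to the default-on workarounds."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    section = parser["workarounds"] if parser.has_section("workarounds") else {}
    return {
        name: str(section.get(name, default)).lower() == "true"
        for name, default in DEFAULTS.items()
    }


# A distro that knows its qemu is fixed can switch the workaround off:
fixed_distro = load_workarounds("[workarounds]\nqemu_img_sparse_copy = false\n")
stock = load_workarounds("[DEFAULT]\n")
```

The default-on, opt-out shape matches the plan above: ship safe behaviour everywhere, then deprecate the flag once fixes are actually in the field.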
21:44:19 I'm not and I'm massively grateful to him :)
21:44:30 for the most part they're in testing-only repos in fedora/ubuntu, don't know about centos
21:44:35 sdague: the distro bugs have repro reports
21:44:41 ok
21:44:44 but really,
21:44:48 I don't care when we deprecate it
21:45:00 as long as we have a road
21:45:04 so I'll send the next version today
21:45:15 We really need to move onto the next stuck review in the list...
21:45:18 Cause time
21:45:18 I'll add a comment in gerrit referring to this log
21:45:20 tonyb: hit me up when you do
21:45:23 and you'll all approve it right?
21:45:32 dansmith: Thanks.
21:45:34 tonyb: I believe dansmith reads the code first
21:45:40 SSL options - do we make them per-project or global, or both? Neutron and Cinder have config-group specific SSL options, Glance is using oslo sslutils global options since Juno, which was contentious for a time in a separate review in Icehouse (https://review.openstack.org/#/c/84522/). Now https://review.openstack.org/#/c/131066/ wants to break that out for Glance, but we also have a patch for Keystone to use the global oslo SSL options, https://review.
21:45:47 ^-- mriedem in the agenda
21:45:51 mikal: of course ;P
21:45:55 yeah, so i put something on the ML about that too
21:46:07 basically to get attention to the inconsistency,
21:46:22 we don't have to decide here but we should figure it out since some reviews are kind of stalled on this i think
21:46:37 so... in what universe do we actually want half a dozen ca's defined in nova.conf?
21:46:42 in icehouse i liked markmc's idea of DictOpt for default global ssl options and then you can override for service-specific options if you wanted
21:46:53 sdague: more common than you might think I bet
21:46:59 I think I liked the idea of oslo global with subsections for each service
21:47:17 i haven't looked at the DictOpt stuff in code
21:47:23 which is what mriedem just said I think
21:47:26 yeah
21:47:45 Who would work on that?
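[Editor's note: the "global SSL defaults with per-service overrides" idea above can be sketched as a simple overlay. The keys and file paths here are illustrative, not actual oslo or Nova option names.]

```python
# Sketch: global SSL defaults that each service section may override.
# Unset service options fall through to the global values.
GLOBAL_SSL = {
    "ca_file": "/etc/ssl/certs/ca.pem",  # illustrative path
    "cert_file": None,
    "insecure": False,
}


def effective_ssl(service_opts=None):
    """Overlay a service's explicit SSL settings on the global defaults."""
    merged = dict(GLOBAL_SSL)
    for key, value in (service_opts or {}).items():
        if value is not None:  # None means "not set, use the global value"
            merged[key] = value
    return merged


glance_ssl = effective_ssl({"ca_file": "/etc/glance/ca.pem"})
keystone_ssl = effective_ssl()  # nothing set, so pure global defaults
```

Per-service overrides also speak to the credentials point raised later in the meeting: services need not share one credential set, so compromising one does not open all of them.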
21:47:46 so maybe we just need a POC on that and see how ugly and/or usable it is?
21:47:51 and is a bp needed?
21:48:01 dansmith: have you seen that in the field?
21:48:14 If it's just moving to doing something oslo implements, then I would think a bp but not a spec?
21:48:18 sdague: not on openstack things, but on other ssl configs, sure :)
21:48:43 zhaoqin made a case in his glance change for it, i'd have to look it up again
21:49:00 So who have we tricked into doing that POC?
21:49:00 anyway, this is more of an FYI and it's in the ML for discussion
21:49:07 Oh, ok
21:49:13 yeah, it's just you don't want to share creds across services, in that if one is compromised you get access to all
21:49:13 Finalize a plan there then...
21:49:34 i'd sign up, but i feel like i don't have the time to do much right now (moving next week)
21:49:53 Ok
21:49:54 anyway, i'll reply to the ML looking for volunteers
21:49:57 Let's finish that one on the ML
21:50:02 * mikal looks at the clock
21:50:14 Next one is one from alexpilotti
21:50:16 #link https://review.openstack.org/#/c/129235/
21:50:17 +1 to what melwitt said about creds
21:50:50 I think alexpilotti was just after review attention here. It's been slow, but isn't horribly off the rails.
21:50:53 Is that a fair summary?
21:51:19 I did reviews of those things today
21:51:23 and then read the email
21:51:28 mikal: it’s one of the many patches waiting for review
21:51:42 mikal: you asked me to choose a couple, so here we are :-)
21:51:56 wow, that's a lot of LOC
21:52:06 Ok, so it sounds like this is a call for a couple of cores to take a look
21:52:15 Noting that dansmith has already done some reviews here
21:52:24 \o/
21:52:29 do I get an early mark for that?
21:52:31 #link https://review.openstack.org/#/c/136484/
21:52:37 #link https://review.openstack.org/#/c/131734/
21:52:38 dansmith: tx, was just replying now!
21:52:42 ^-- ditto those other two as well
21:52:48 dansmith: yes, you may leave
21:53:05 The final one got added really recently so I haven't got a lot of state on it...
21:53:11 zookeeper servicegroup driver issues: nova-conductor processes share a zookeeper handle, which causes lock and race problems: https://review.openstack.org/#/c/133479/ and https://review.openstack.org/#/c/133500/
21:53:18 that's mine
21:53:18 #link https://review.openstack.org/#/c/133479/
21:53:24 #link https://review.openstack.org/#/c/133500/
21:53:42 we are using zk in our dev environments and nova-cond just doesn't work
21:54:18 the bugs are described there and the fixes are ready to be reviewed
21:54:19 does devstack support zk?
21:54:25 yes
21:54:26 ppalucki: so this is just a request for cores to take a look at those reviews after the meeting, yes?
21:54:34 it is
21:54:43 ok, was just wondering how easy it is to reproduce with devstack to verify the fix
21:54:44 Cool, well I promise to take a look at least
21:54:52 seems okay on the face of it, fwiw
21:55:03 installing zookeeper and setting the config is enough
21:55:16 So, noting the time we really have to move on
21:55:21 that's good for me
21:55:25 wait wait
21:55:28 I have one https://review.openstack.org/#/c/130721/
21:55:37 but I did not put it on the agenda, is that okay?
21:55:48 cannot find a second core maintainer who knows iscsi
21:55:57 Ahhh, ok
21:55:57 at least the ones I talked to didn't
21:55:59 spec in new state? - https://blueprints.launchpad.net/nova/+spec/add-open-iscsi-transport-support
21:56:01 This one is a spec with one +2
21:56:10 yeah
21:56:28 mikal: the spec is not approved
21:56:34 I'm no iscsi expert, but I will take a look at that one after the meeting
21:56:39 sdague: yeah, he's asking for a spec review
21:56:45 oh, sorry
21:56:48 sdague: :P
21:56:49 mikal: thanks!
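[Editor's note: the zookeeper issue above is an instance of a general rule: a network connection handle created in a parent process must not be reused by forked children. A common fix, sketched below with a stand-in class (FakeClient represents a real kazoo/ZooKeeper handle; all names are illustrative), is to cache the handle keyed on the current pid so each process lazily builds its own.]

```python
# Per-process client handle: rebuild lazily after a fork instead of
# letting workers inherit the parent's connection.
import os


class FakeClient:
    """Stand-in for a connection whose session is only valid in
    the process that created it."""

    def __init__(self):
        self.owner_pid = os.getpid()


_cache = {}


def get_client():
    """Return this process's client, creating a fresh one after a fork."""
    pid = os.getpid()
    if _cache.get("pid") != pid:
        # Either first use, or we are in a forked child: make a new handle.
        _cache["pid"] = pid
        _cache["client"] = FakeClient()
    return _cache["client"]


client = get_client()
```

Within one process the same handle is reused; a forked nova-conductor worker calling get_client() would see a pid mismatch and build its own connection rather than racing its siblings on the inherited one.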
21:56:54 NP
21:56:59 #topic Open Discussion
21:57:07 Please enjoy your three minutes of open discussion
21:57:12 Or have an early mark
21:57:15 can anyone look at my spec?
21:57:17 https://review.openstack.org/#/c/132656/
21:57:24 What is the process now for adding a bug to the list to be reviewed/discussed in the meeting?
21:57:44 alop: maybe try to talk to jaypipes about anything related to db purge
21:57:51 thanks
21:57:57 alop: since i think jaypipes cringes every time it's mentioned
21:58:06 hehe, this is true
21:58:17 cburgess: just add it to the agenda for next week?
21:58:20 I seem to recall there being a discussion in Paris about how to bring non-critical bugs that have pending fixes to the attention of cores, but I don't recall the process that was discussed.
21:58:22 cburgess: i.e. wiki page edit
21:58:23 we have a couple of folks that have done a lot of work around purging here too, alop
21:58:25 I'll point them at it
21:58:27 as well
21:58:34 mikal: OK cool, I just couldn't remember.
21:58:35 cburgess: we talked about bug wednesdays for that
21:58:42 cburgess: np
21:58:53 guilt-free bug fix reviews on wednesday (office hours right?)
21:59:03 erm, am I supposed to set the series goal in the BP myself, or is that something the maintainers do?
21:59:03 not sure if it's every wednesday though
21:59:08 So are we doing bug wednesdays then or is adding it to the agenda the way to go?
21:59:10 jogo: ^?
21:59:16 Yeah, there is a difference between "stuck" and just "not reviewed"
21:59:30 Like the qemu one above, we really needed to talk through it to come up with a plan
21:59:31 stuck to me means conflicts in opinion
21:59:41 not lack of attention
21:59:44 cburgess: yup
21:59:45 I thought as much, so did not add it there
21:59:56 mriedem: I do think a bug fix which was important is legit to bring up in open discussion here though
21:59:57 is there a separate process for not reviews ?
22:00:00 *reviewed
22:00:08 jogo: OK I'll hit you up for details on how to do that then.
22:00:24 <_gryf> does anyone care about the difference between host state and service state?
22:00:25 Yeah, I'm talking about the non-OMG-critical bugs.
22:00:29 anish: I'd recommend starting by mentioning it in #openstack-nova?
22:00:41 anish: if it's weeks old, then bring it to open discussion here
22:00:46 Specifically things like older minor bugs that have fairly simple fixes that could be merged to help reduce the open bug count.
22:00:59 cburgess: oh, I love easy code reviews
22:01:08 Anyways, if I had a gong I'd ring it
22:01:12 #endmeeting