20:59:53 <mikal> #startmeeting nova
20:59:54 <openstack> Meeting started Thu Dec  4 20:59:53 2014 UTC and is due to finish in 60 minutes.  The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:59:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:59:57 <openstack> The meeting name has been set to 'nova'
21:00:03 <mikal> Who do we have around for the nova meeting?
21:00:06 <dansmith> o/
21:00:07 <alaski> o/
21:00:09 <alexpilotti> mikal: o/
21:00:14 <_gryf> o/
21:00:18 <melwitt> o/
21:00:24 <mikal> Excellent
21:00:33 <mikal> #topic Kilo specs / blueprints
21:00:40 <n0ano> o/
21:00:43 <mikal> All blueprints and specs for juno need to be approved by kilo-1 (December 18th)
21:00:47 <mriedem> hi
21:00:52 <mikal> ^-- that's what we said at the summit at least
21:01:05 <gilliard> o/
21:01:05 <mikal> So... If you're holding off on updating a spec, now would be a good time to do that
21:01:19 <mikal> It would be nice to avoid a mad rush at the end if at all possible
21:01:40 <mikal> Also, as a gentle reminder, we also said Non-priority Feature Freeze is kilo-2 (February 5th)
21:01:48 <n0ano> mikal, did you mean specs for kilo?
21:01:51 <alexpilotti> I have 2 BPs for Hyper-V
21:01:59 <mikal> So, not getting a spec approved at kilo-1 would be a problem for landing the code by kilo-2
21:02:09 <mikal> n0ano: sigh, yeah, that
21:02:14 <jogo> o/
21:02:20 <mriedem> do we still need 3 cores to sponsor an exception in k-2?
21:02:22 <mikal> Cut and paste from the agenda in my defense
21:02:23 <mriedem> i can't remember
21:02:26 <_gryf> mikal, I've 1 BP for host health awaiting comments
21:02:29 <mikal> mriedem: we haven't discussed it yet
21:02:42 <mikal> mriedem: so I think we're still unsure there
21:02:54 <mikal> _gryf: there are a few in that state, I am sure reviewers will get to it when they can
21:02:59 <mriedem> i guess we wait until k-1 has passed
21:03:08 <mikal> mriedem: yeah, it's not urgent yet
21:03:10 <_gryf> mikal, gr8
21:03:28 <mikal> So, we also seem to have one blueprint which is asking for the "Trivial" fast track
21:03:35 <mikal> https://blueprints.launchpad.net/nova/+spec/hw-iscsi-device-name-support
21:04:30 <sdague> that feels familiar, like we put a fix in late in juno for it
21:04:33 <mikal> Does anyone have an objection to that bp being approved as a fast track?
21:05:04 * mikal tries not to go read code in the meeting to check
21:05:16 <mriedem> seems ok, hard to know w/o POC patch
21:05:17 <sdague> no objection, seems like just a bug fix though
21:05:20 <mriedem> yeah
21:05:22 <dansmith> yeah, was just about to say
21:05:26 <dansmith> just make it a bug and move on
21:05:37 <mikal> Well, let's just approve this then
21:05:42 <mikal> Instead of making them turn it into a bug
21:05:45 <alaski> yeah, seems fine to me
21:05:53 <mikal> Ok, I will approve
21:05:58 <alexpilotti> mikal: Speaking with johnthetubaguy, he asked me to bring here two “trivial” Hyper-V BPs (not requiring specs)
21:06:14 <alexpilotti> mikal: should I post the links now or would you prefer to discuss them later?
21:06:22 <mikal> Let's do it now
21:06:31 <alexpilotti> mikal: https://blueprints.launchpad.net/nova/+spec/hyper-v-ovs-vif
21:06:53 <alexpilotti> and https://blueprints.launchpad.net/nova/+spec/hyperv-serial-ports
21:07:07 <mikal> Ok, first one first...
21:07:21 <alexpilotti> mikal: cool, tx!
21:07:27 <mikal> The VIF one is driver feature parity
21:07:32 <dansmith> both are
21:07:33 <alexpilotti> yep
21:07:38 <dansmith> both seem fine, assuming they're small, which they seem to be
21:07:40 <mikal> So, I'm ok with approving the first one
21:07:47 <mikal> (Haven't read the second yet)
21:07:52 <dansmith> not requiring major changes to anything core, just implementing things
21:08:05 <mikal> No one objects to the first one?
21:08:23 <mriedem> nope
21:08:29 <mikal> Ok, I will approve
21:08:30 <alexpilotti> dansmith: correct, implementation is quite small
21:08:31 <mikal> Now the second one
21:09:06 <mikal> Looks fine to me
21:09:19 <dansmith> yep
21:09:23 <alaski> No virt driver api needs to change?
21:09:29 <alexpilotti> no
21:09:34 <mriedem> i think the virt api already has it
21:09:37 <dansmith> right
21:09:39 <mriedem> hyper-v just needs to implement it, right
21:09:39 <mriedem> k
21:09:40 <alexpilotti> alaski: it’s a feature parity one as well
21:09:41 <dansmith> this is just implementing it
21:09:44 <alaski> okay.  seems fine to me then
21:09:51 <mikal> I'm gonna approve
21:09:56 <tonyb> alexpilotti: Can you try to Cc me on the implementation of the 2nd one? I was thinking about changing the way the serial console stuff works (probably in L)
21:10:35 <mikal> Any other fast track approval requests from anyone?
21:10:37 <alexpilotti> tonyb: sure, we’ll post the code for review this week
21:11:06 <mikal> Moving on?
21:11:09 <mriedem> sure
21:11:09 <tonyb> alexpilotti: cool thanks
21:11:17 <mikal> #topic Kilo priorities
21:11:20 <alexpilotti> thanks guys!
21:11:28 <mikal> So, this is mostly a reminder that we have the priority review etherpad
21:11:35 <mikal> #link https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
21:11:42 <mikal> I've been trying to keep on top of review of those things
21:11:47 <mikal> But there isn't a lot listed there at the moment
21:11:59 <mikal> So perhaps people have forgotten to update with their new reviews for priority things?
21:12:06 <dansmith> not me,
21:12:15 <dansmith> just at the bottom of a cycle of pushing new things
21:12:31 <mikal> That's cool. This is just a reminder to authors and reviewers really.
21:12:48 <mikal> Moving on...
21:12:57 <mikal> #topic Gate status
21:13:03 <mriedem> better
21:13:04 <mikal> mriedem / jogo: talk to me
21:13:11 <mriedem> if you didn't know from earlier in the week,
21:13:14 <mriedem> the gate was fubar'ed
21:13:20 <mriedem> but looks better now
21:13:21 <mikal> Yeah, I saw some of that
21:13:24 <mikal> Excellent
21:13:34 <mriedem> oslo decided to release everything they had on the same day
21:13:37 <mriedem> kind of broke some things
21:13:50 <jogo> mriedem: nova didn't break much though right?
21:13:54 <sdague> so that's going to be the norm for oslo, a lot more releases
21:13:57 <jogo> as in nova wasn't to blame
21:14:07 <mriedem> jogo: nova was doing some naughty things that oslo updates exposed
21:14:10 <sdague> nova was to blame on the bad mocks for locks
21:14:16 <mriedem> yup
21:14:18 <dansmith> oh man
21:14:26 <dansmith> rhymy goodness
21:14:37 <mikal> Heh
21:14:40 <mriedem> otherwise we look ok http://status.openstack.org/elastic-recheck/gate.html
21:14:53 <dansmith> "mocks for locks" being a rebel band of salmon-loving freelance coders
21:14:57 <jogo> sdague: ohhhh I blanked out on the nova unit tests being wedged for a day
21:15:05 <sdague> yeh
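As a hedged illustration of the "bad mocks for locks" failure mode above -- not the actual offending nova tests, whose names don't appear in this log -- unit tests that patch a library's internal symbols break as soon as a new release rearranges that library, while patching the name where your own module imports it survives releases:

```python
# Illustrative sketch only; the module paths below are assumptions.
from unittest import mock


class TestLockedThing(object):
    # Fragile: reaches into oslo.concurrency's internal layout, which a
    # new release is free to rearrange.
    @mock.patch('oslo_concurrency.lockutils.lock')
    def test_fragile(self, mock_lock):
        ...

    # Sturdier: patch the name at the point our own code imports it.
    @mock.patch('nova.utils.synchronized')
    def test_sturdier(self, mock_sync):
        ...
```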
21:15:07 <gilliard> should we consider pinning our oslo dependencies more strictly and upping the version in a review?
21:15:23 <sdague> gilliard: that's not allowed
21:15:36 <sdague> and it causes a whole other set of problems
21:16:11 <mikal> Is there anything else on the gate or should we move on again?
21:16:25 <sdague> one question, the docker job in check?
21:16:36 <sdague> who's staying on top of that?
21:17:05 <mikal> sdague: as in it's failing all the time?
21:17:06 <mriedem> i'm not seeing the usual containers people in here
21:17:29 <mriedem> yeah http://logs.openstack.org/46/97946/27/check/check-tempest-dsvm-docker/41c0379/testr_results.html.gz
21:17:29 <sdague> yeh, as in failing all the time, and the output format for that job is weird
21:17:39 <mriedem> can we sic anteaya on them :)
21:17:42 <dansmith> that job was just added, right?
21:17:46 <mriedem> yup
21:17:47 <mikal> So, there is a RH guy whose name I've forgotten who was working on that
21:17:52 <dansmith> and it's already not looked-after?
21:17:53 <mriedem> dims: ^?
21:18:09 <mriedem> dims: you know the RH guy's name right? working on nova-docker.
21:18:19 <dims> Ian Main (Slower)
21:18:35 <mikal> Yeah, that's the one
21:18:41 * dims reading
21:18:44 <mikal> So, we should ping him I suppose and cc anteaya
21:18:47 <mriedem> i'll look for an e-r query on the failure
21:18:56 <mikal> If it keeps failing we can drop it back out again
21:19:20 <sdague> the intent is to gate nova on an out of tree driver?
21:19:20 <mikal> Ian is certainly the author of https://review.openstack.org/#/c/128753/
21:19:22 <dims> i have a review in progress - https://review.openstack.org/#/c/138714/
21:19:29 <mikal> So it's in his best interests to be paying attention to this
21:19:50 <mikal> sdague: not gate no
21:19:53 <mriedem> yeah i'll get the bug opened if there isn't one, and an e-r query up, i see the failure in the compute logs
21:20:02 <mikal> sdague: but the docker people wanted to do the "move back into nova" two step
21:20:07 <mikal> Which is what that linked review above is
21:20:12 <sdague> ah, ok
21:20:14 <mriedem> yeah, so they should be squeaky clean :)
21:20:25 <mikal> mriedem: agreed, we need them to take this seriously
21:20:29 <mikal> sdague: thanks for raising it
21:20:34 <sdague> sorry, hadn't seen any ML on it so I was wondering
21:20:40 <mikal> Anything else on gating?
21:20:44 <mriedem> nope
21:20:50 <mikal> Oh, I guess one thing
21:20:53 <mikal> Who is chasing Ian?
21:20:57 <mikal> sdague or mriedem or me?
21:21:06 * sdague not it
21:21:08 <dims> 2 folks primarily Eric (erw) and Ian
21:21:26 <mikal> I just want to be clear on who is going to reach out so it actually gets done
21:21:31 <mikal> mriedem: want to do it or shall I?
21:21:40 <mriedem> mikal: i'll get the bug reported,
21:21:45 <mriedem> and ML thread on it
21:21:50 <mikal> mriedem: thanks
21:21:54 <mriedem> looks like we don't index the docker logs,
21:21:58 <mriedem> so can't get logstash on it
21:22:16 <sdague> mriedem: something is bonkers on the dir format as well
21:22:25 <dims> is that specific test case failing all the time?
21:22:33 <mriedem> dims: w/o e-r i can't tell
21:22:49 <sdague> dims: no idea, also why I asked who is the shepherd of that job
21:22:55 <sdague> because someone should be on top of it
21:23:02 <mriedem> i'll start the process...
21:23:12 <mriedem> it's 3rd party CI so not indexed in our logstash
21:23:18 <mikal> mriedem: is that true?
21:23:22 <mikal> mriedem: it's run in infra land
21:23:22 <sdague> it's not 3rd party
21:23:23 <mikal> IIRC
21:23:26 <sdague> it's in the check queue
21:23:28 <dims> i've been updating stuff there when i broke stuff with oslo updates to nova
21:23:30 <dims> fyi
21:23:31 <mriedem> clarkb said it wasn't infra
21:23:43 <dims> but am not a core there
21:23:44 <sdague> really... ?
21:23:50 <sdague> ok, lets offline that
21:23:55 <dims> k
21:23:59 <mikal> Ok, let's take this one to the mailing list and a bug or two
21:24:09 <mikal> #topic Bugs
21:24:17 <mikal> I sneakily added this one to the agenda just now
21:24:22 <mikal> Because we forgot it, which is lame
21:24:27 <mikal> We have one critical bug with no assignee
21:24:33 <mikal> A Xen SSL thing
21:24:41 <mikal> #link https://bugs.launchpad.net/nova/+bug/1374001
21:24:44 <uvirtbot> Launchpad bug 1374001 in nova "Xenserver glance plugin uses unsafe SSL connection" [Critical,Confirmed]
21:24:48 <sdague> oh, that's still open
21:25:01 <dims> melanie was doing something w/ that if i remember right
21:25:16 <dansmith> it's xen only, so it's not critical anyway, right?
21:25:25 <mikal> dansmith: that's true
21:25:38 <melwitt> dims: yes. it's constrained to python 2.4 so it's not straightforward
21:25:46 <dims> ah. right
21:26:03 <mikal> melwitt: are you still working on that or should we unassign you?
21:26:04 <sdague> well it is a security bug that we released a sec advisory on 18 months ago
21:26:29 <sdague> honestly, I think it's actually "can't fix, xen is full of sec holes"
21:26:40 <sdague> because the dom0 is so old
21:26:44 <mikal> Erm, vote not me to do that press release
21:26:46 <jogo> xenfail
21:27:07 <melwitt> mikal: I can update the patch again with what jerdfelt suggested, do nothing for 2.4 and warn, do something if > 2.6. other than that we don't know a better way
21:27:19 <melwitt> >= 2.6 I mean
21:27:21 <nikesh_vedams_> hi
21:27:29 <mikal> melwitt: sounds reasonable to me
21:27:40 <melwitt> mikal: okay, will do that then
21:27:42 <sdague> it's really that python is crazy pants. We only get SSL cert verification in the *next* python 2.7 release
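A rough Python 2-era sketch of the approach melwitt describes -- verify certificates where the stdlib can (>= 2.6), warn loudly and fall back where it can't (2.4) -- with sdague's caveat that even 2.7 lacks full verification until its next release. The helper below is an assumption for illustration, not the actual XenServer glance plugin code:

```python
import sys
import warnings


def wrap_socket(sock, ca_certs):
    # Hypothetical helper; the real xenapi plugin differs in detail.
    if sys.version_info >= (2, 6):
        import ssl
        # Verifies the certificate chain (hostname checking still needs
        # a newer Python, per the comment above).
        return ssl.wrap_socket(sock, cert_reqs=ssl.CERT_REQUIRED,
                               ca_certs=ca_certs)
    # Python 2.4/2.5 in the XenServer dom0: the stdlib cannot verify
    # certificates, so warn and return an unverified connection.
    warnings.warn('SSL certificate verification unavailable on this '
                  'Python; the glance connection is NOT verified')
    import socket
    return socket.ssl(sock)
```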
21:28:03 <mikal> Are there any other bugs that people want to shout about?
21:28:12 <alaski> yes
21:28:14 <alaski> https://bugs.launchpad.net/nova/+bug/832507
21:28:16 <uvirtbot> Launchpad bug 832507 in nova "console.log grows indefinitely" [High,Fix released]
21:28:27 <tonyb> alaski: I'm working on a spec for that
21:28:36 <mikal> The bug that keeps on giving
21:28:37 <tonyb> I know it's a bug but the fix is big and intrusive
21:28:54 <mikal> alaski: do you have a specific concern? i.e. is it hurting you at the moment?
21:28:56 <alaski> yeah, there have been multiple proposed fixes
21:29:20 <alaski> mikal: no. just is it being actively pursued or do we mark it as won't fix for now
21:29:32 <alaski> but it sounds like there are plans
21:29:34 <mikal> alaski: I tricked tonyb into having a look
21:29:34 <tonyb> alaski: the only one I think will work is a helper to manage the log file, which can then be logrotated on the operator's schedule
21:29:37 <dansmith> sounds like the answer is: tonyb to the rescue
21:29:45 <mikal> tonyb: unless we fixed qemu...
21:29:51 <mikal> tonyb: you do seem to love the qemu codings
21:29:51 <dansmith> yeah, that doesn't work
21:30:08 <tonyb> mikal: Umm fool me twice .... shame on me ;P
21:30:14 <alaski> mikal: awesome
21:30:18 <mikal> Cool
21:30:22 <mikal> Moving on?
21:30:25 <alaski> tonyb: do you mind assigning it to you?
21:30:31 <mikal> Apart from "FIX LOTS OF BUGS PLEASE"
21:30:32 <tonyb> alaski: sure
21:30:46 <nikesh_vedams_> mikal: can we assign a release for this blueprint "https://blueprints.launchpad.net/nova/+spec/hw-iscsi-device-name-support"
21:31:02 <alaski> tonyb: awesome, thanks
21:31:07 <mikal> nikesh_vedams_: oh, I forgot milestones
21:31:10 <tonyb> alaski: done
21:31:12 <mikal> nikesh_vedams_: I will fix that after the meeting
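A minimal sketch of the "helper" shape tonyb mentions above: a small process that drains the console stream into a file and reopens it on SIGHUP, so logrotate can rotate it on the operator's schedule. The path and transport are illustrative assumptions, not the eventual design:

```python
import signal
import sys

LOG_PATH = '/var/log/nova/console-example.log'  # assumed path

log = open(LOG_PATH, 'ab')


def reopen(signum, frame):
    # logrotate renames the live file and then signals us; reopening by
    # name starts a fresh file so the rotated copy can be compressed.
    global log
    log.close()
    log = open(LOG_PATH, 'ab')


signal.signal(signal.SIGHUP, reopen)

# Drain the guest console (stdin here, purely for illustration).
for chunk in iter(lambda: sys.stdin.buffer.read(4096), b''):
    log.write(chunk)
    log.flush()
```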
21:31:26 <mikal> #topic Stuck reviews
21:31:33 <mikal> There are a few here
21:31:35 <nikesh_vedams_> mikal: thanks
21:31:41 <mikal> And I specifically don't want people to get personal here please
21:31:49 <mikal> #link https://review.openstack.org/#/c/123957/
21:32:05 <mikal> For this one, tonyb fixed a qemu bug
21:32:14 <mikal> But the debate is how we work around distros without the fix
21:32:19 <mikal> If in fact we work around them at all
21:32:26 <dansmith> I got a little more info from pbrady on this
21:32:30 <mikal> Its been in review since September
21:32:48 <dansmith> I wanted him to come offer details here, but he had a conflict
21:32:59 <tonyb> dansmith: phooey
21:33:11 <mriedem> so, we have bz's listed with workarounds in the libvirt driver
21:33:15 <dansmith> tonyb: apparently the office holiday party is more important
21:33:20 <mriedem> one at least for live migrate on older versions of libvirt
21:33:32 <mriedem> is there a bz for this and rhel?
21:33:36 <dansmith> there is,
21:33:46 <mikal> tonyb has put together a bug tracking page for distros
21:33:54 <dansmith> and I also got an ETA of "within a week" for this fix being available to RHEL/CentOS users
21:34:01 <mikal> #link https://wiki.openstack.org/wiki/Bug1368815
21:34:04 <mriedem> how far back in RHEL?
21:34:13 <tonyb> dansmith: we got that same ETA pre-Paris
21:34:13 <dansmith> it's already in proposed fixes repos for fedora and ubuntu
21:34:21 <tonyb> dansmith: but they are closer
21:34:35 <tonyb> dansmith: for fedora21 not for 19 or 20
21:34:44 <dansmith> tonyb: that's fair I guess, I wasn't in on the discussion at that point
21:34:47 <dansmith> tonyb: yeah, but it's fedora
21:34:47 <tonyb> nothing for sles or gentoo
21:34:47 <mriedem> ok, so only rhel 7, not 6.5
21:35:06 <dansmith> anyway, the impact of doing this is pretty significant,
21:35:12 <sdague> if it's not getting fixed in 6, that seems problematic
21:35:20 <tonyb> mriedem: correct, no 6.x fixes
21:35:24 <mriedem> hmm
21:35:26 <dansmith> both system performance impacts, and what it means for us long-term and not knowing when we can remove it
21:35:35 <tonyb> mriedem: I'd have to check if that's affected
21:35:38 <mriedem> so i know rdo only supported juno on rhel 7,
21:35:39 <dansmith> tonyb: did you ask someone about 6?
21:35:47 <mriedem> but we have compute node support for rhel 6.5 and juno
21:35:53 <tonyb> mriedem: but i thought rhel7 was the one true rhel now from our pov
21:36:04 <mriedem> tonyb: not mine
21:36:05 <dansmith> 6 is still pretty important for us so I'd be surprised if we're not fixing it
21:36:09 <tonyb> dansmith: I grabbed the centos source as a quick check
21:36:12 <dansmith> also because this affects other products
21:36:41 <sdague> so this only occurs on qcow from glance on initial explode to raw?
21:36:43 <mikal> So, one suggested option was a flag
21:36:46 <mikal> Which no one seems to love
21:36:48 <dansmith> I think it's also worth noting that this fix does not address the many other places we create images,
21:36:51 <mikal> But might be the only option?
21:36:53 <tonyb> so I'll check RHEL/CentOS 6.x and open a bug if required but that doesn't help the review ...
21:36:58 <dansmith> and also pbrady is not sure that this doesn't just narrow the window for the race
21:37:01 <dansmith> and he would... know :)
21:37:27 <mriedem> someone get the lampshade off his head and cocktail out of his hand
21:37:32 <dansmith> mikal: so, I don't love the flag, but I love it more than just blindly putting it in
21:37:43 <dansmith> mikal: also, the other option was just push this into stables that are currently supported
21:37:57 <mikal> dansmith: I did ask tonyb to check other uses of qemu-img in the code BTW
21:38:02 <mikal> dansmith: I think he did an audit IIRC
21:38:07 <dansmith> mikal: and the other is that we push this into master and just remove it when some number of distros support it, instead of trying to keep it until we're sure it's globally fixed
21:38:08 <tonyb> and I've done that.
21:38:27 <mikal> Ok, so there's a proposal there
21:38:33 <mikal> What do people think of putting this in stable only
21:38:46 <mikal> And telling people in the release notes for Kilo that they must use a qemu with this bug fix?
21:38:49 <mikal> "must"
21:38:50 <tonyb> the migration and snapshot code /could/ be problematic but they use slightly different code paths in qemu
21:39:23 <tonyb> the glance case is the only place (that I found) where we use qemu-img convert on something that isn't managed by qemu
21:39:24 <dansmith> tonyb: pbrady seemed to think they *were* problematic, but I really don't know the details
21:39:27 <sdague> hmmm... any idea if this is related to the live snapshot fails we had going to trusty?
21:39:59 <tonyb> sdague: point me at the specific bug and I'll let you know.
21:40:16 <dansmith> lets talk about options
21:40:20 <kaisers_> sdague: yep, good question, is there a ticket for those fails?
21:40:24 <dansmith> because if we can do something less nuclear, we should just do that
21:40:26 <sdague> https://bugs.launchpad.net/nova/+bug/1334398
21:40:28 <uvirtbot> Launchpad bug 1334398 in nova "libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate" [High,Confirmed]
21:40:45 <kaisers_> sdague: thnx
21:40:48 <sdague> so I think flag is probably least worse option
21:40:59 <mikal> sdague: less worse than stable only fixes?
21:40:59 <tonyb> sdague: Thanks, I'll digest that and get back to you
21:41:12 <dansmith> stable-only is my preference, but on the flags thing:
21:41:24 <sdague> default on until we feel it's relatively fixed in the field, then default off and deprecated
21:41:26 <dansmith> we could have a section for workarounds where we put more of this sort of thing
21:41:40 <dansmith> it also makes it easier for us to track workarounds we have in our code
21:41:51 <mikal> dansmith: as in a flag group?
21:41:53 <dansmith> because they're tied to CONF.workarounds.qemu-img-foo
21:41:55 <mikal> dansmith: I think that's a good idea
21:41:58 <dansmith> mikal: yeah
21:42:04 <sdague> yeh, workarounds group sounds good
21:42:23 <mikal> It also marks those things as flags you shouldn't twiddle without thought
21:42:23 <dansmith> so let's just do that and distros can override that if they know they're fixed
21:42:35 <sdague> sure
21:42:36 <mikal> I actually really like that idea
21:42:38 <dansmith> mikal: yes and you can set them all to off if you want purity or whatever
21:42:42 <mikal> We could also immediately deprecate the workaround flag
21:42:46 <mikal> i.e. start removal in L
21:42:56 <dansmith> depends on the thing, but sure
21:43:01 <mikal> Yeah, in this case I mean
21:43:05 <dansmith> yes
21:43:06 <sdague> mikal: I'd wait to deprecate this one until there are actually fixes in the field
21:43:13 <sdague> because those are still waiting on the future
21:43:22 <dansmith> they're available by repo to most people
21:43:25 <dansmith> but whatever :)
21:43:31 <mikal> sdague: sure, we could leave the specifics of that to a code review
21:43:38 <sdague> dansmith: I thought you said another week for rhel?
21:43:43 <mikal> Also, OMG, we just made a plan
21:43:51 <mikal> Ok, let's do that
21:43:54 <dansmith> sdague: there is a proposed repo/channel for them too I think
21:44:00 <dansmith> sdague: centos has packages
21:44:02 <mikal> I also want to publicly thank tonyb for chasing this and fixing qemu for us
21:44:04 <dansmith> just not in default repos I think
21:44:07 <mikal> Although I am massively biased here
21:44:19 <sdague> ok, can we also confirm the fixes close the issue?
21:44:19 <dansmith> I'm not and I'm massively grateful to him :)
21:44:30 <tonyb> for the most part they're in testing-only repos in fedora/ubuntu, don't know about centos
21:44:35 <dansmith> sdague: the distro bugs have repro reports
21:44:41 <sdague> ok
21:44:44 <dansmith> but really,
21:44:48 <dansmith> I don't care when we deprecate it
21:45:00 <dansmith> as long as we have a road
21:45:04 <tonyb> so I'll send the next version today
21:45:15 <mikal> We really need to move onto the next stuck review in the list...
21:45:18 <mikal> Cause time
21:45:18 <tonyb> I'll add a comment in gerrit referring to this log
21:45:20 <dansmith> tonyb: hit me up when you do
21:45:23 <tonyb> and you'll all approve it right?
21:45:32 <tonyb> dansmith: Thanks.
21:45:34 <mikal> tonyb: I believe dansmith reads the code first
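To make the agreed plan concrete, a minimal sketch of what a "workarounds" option group could look like with oslo.config. The option name follows dansmith's CONF.workarounds.qemu-img-foo placeholder and is an assumption, not the merged implementation:

```python
from oslo_config import cfg

CONF = cfg.CONF

workarounds_group = cfg.OptGroup(
    name='workarounds',
    title='Workarounds for bugs in external components')

workarounds_opts = [
    # Hypothetical option, echoing the "qemu-img-foo" placeholder above.
    cfg.BoolOpt('qemu_img_copy_fallback',
                default=True,
                help='Avoid the qemu-img convert path affected by the '
                     'upstream race. Distros shipping a fixed qemu can '
                     'set this to False.'),
]

CONF.register_group(workarounds_group)
CONF.register_opts(workarounds_opts, group=workarounds_group)

# Call sites then guard the affected path:
#   if CONF.workarounds.qemu_img_copy_fallback:
#       ...take the slower-but-safe route...
```

Grouping workarounds this way also gives operators one obvious place to audit, and each flag can be deprecated independently once the underlying external fix is widely deployed.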
21:45:40 <mikal> SSL options - do we make them per-project or global, or both? Neutron and Cinder have config-group specific SSL options, Glance is using oslo sslutils global options since Juno which was contentious for a time in a separate review in Icehouse (https://review.openstack.org/#/c/84522/). Now https://review.openstack.org/#/c/131066/ wants to break that out for Glance, but we also have a patch for Keystone to use the global oslo SSL options, https://review.
21:45:47 <mikal> ^-- mriedem in the agenda
21:45:51 <tonyb> mikal: of course ;P
21:45:55 <mriedem> yeah, so i put something on the ML about that too
21:46:07 <mriedem> basically to get attention to the inconsistency,
21:46:22 <mriedem> we don't have to decide here but we should figure it out since some reviews are kind of stalled on this i think
21:46:37 <sdague> so... in what universe do we actually want half a dozen ca's defined in nova.conf?
21:46:42 <mriedem> in icehouse i liked markmc's idea of DictOpt for default global ssl options and then you can override for service-specific options if you wanted
21:46:53 <dansmith> sdague: more common than you might think I bet
21:46:59 <melwitt> I think I liked the idea of oslo global with subsections for each service
21:47:17 <mriedem> i haven't looked at the DictOpt stuff in code
21:47:23 <melwitt> which is what mriedem just said I think
21:47:26 <mriedem> yeah
21:47:45 <mikal> Who would work on that?
21:47:46 <mriedem> so maybe we just need a POC on that and see how ugly and/or usable it is?
21:47:51 <mriedem> and is a bp needed?
21:48:01 <sdague> dansmith: have you seen that in the field?
21:48:14 <mikal> If it's just moving to doing something oslo implements, then I would think a bp but not a spec?
21:48:18 <dansmith> sdague: not on openstack things, but on other ssl configs, sure :)
21:48:43 <mriedem> zhaoqin made a case in his glance change for it, i'd have to look it up again
21:49:00 <mikal> So who have we tricked into doing that POC?
21:49:00 <mriedem> anyway, this is more of an FYI and it's in the ML for discussion
21:49:07 <mikal> Oh, ok
21:49:13 <melwitt> yeah, it's just you don't want to share creds across services, since if one is compromised you get access to all
21:49:13 <mikal> Finalize a plan there then...
21:49:34 <mriedem> i'd sign up, but i feel like i don't have the time to do much right now (moving next week)
21:49:53 <mikal> Ok
21:49:54 <mriedem> anyway, i'll reply to the ML looking for volunteers
21:49:57 <mikal> Let's finish that one on the ML
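A rough sketch of the "global defaults with per-service override" shape under discussion, using plain oslo.config groups rather than the DictOpt variant mriedem mentions; the group and option names are assumptions:

```python
from oslo_config import cfg

CONF = cfg.CONF

ssl_opts = [
    cfg.StrOpt('ca_file', help='CA certificate file'),
    cfg.StrOpt('cert_file', help='Client certificate file'),
    cfg.StrOpt('key_file', help='Client private key file'),
]

# Global defaults under [ssl] (the section oslo's sslutils uses), plus
# a per-service section, e.g. [glance], that can override them.
CONF.register_opts(ssl_opts, group='ssl')
CONF.register_opts(ssl_opts, group='glance')


def ssl_setting(service, name):
    """Per-service value if set, otherwise the global [ssl] default."""
    value = getattr(getattr(CONF, service), name)
    return value if value is not None else getattr(CONF.ssl, name)

# e.g. ssl_setting('glance', 'ca_file')
```

This keeps melwitt's point intact: each service section can carry its own creds, so compromising one does not expose the others.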
21:50:02 * mikal looks at the clock
21:50:14 <mikal> Next one is one from alexpilotti
21:50:16 <mikal> #link https://review.openstack.org/#/c/129235/
21:50:17 <VW> +1 to what melwitt said about creds
21:50:50 <mikal> I think alexpilotti was just after review attention here. It's been slow, but isn't horribly off the rails.
21:50:53 <mikal> Is that a fair summary?
21:51:19 <dansmith> I did reviews of those things today
21:51:23 <dansmith> and then read the email
21:51:28 <alexpilotti> mikal: it’s one of the many patches waiting for review
21:51:42 <alexpilotti> mikal: you asked me to choose a couple, so here we are :-)
21:51:56 <mriedem> wow, that's a lot of LOC
21:52:06 <mikal> Ok, so it sounds like this is a call for a couple of cores to take a look
21:52:15 <mikal> Noting that dansmith has already done some reviews here
21:52:24 <dansmith> \o/
21:52:29 <dansmith> do I get an early mark for that?
21:52:31 <mikal> #link https://review.openstack.org/#/c/136484/
21:52:37 <mikal> #link https://review.openstack.org/#/c/131734/
21:52:38 <alexpilotti> dansmith: tx, was just replying now!
21:52:42 <mikal> ^-- ditto those other two as well
21:52:48 <mikal> dansmith: yes, you may leave
21:53:05 <mikal> The final one got added really recently so I haven't got a lot of state on it...
21:53:11 <mikal> zookeeper servicegroup driver issues: nova-conductor processes share a zookeeper handle, which causes lock contention and races: https://review.openstack.org/#/c/133479/ and https://review.openstack.org/#/c/133500/
21:53:18 <ppalucki> that's mine
21:53:18 <mikal> #link https://review.openstack.org/#/c/133479/
21:53:24 <mikal> #link https://review.openstack.org/#/c/133500/
21:53:42 <ppalucki> we are using zk in our dev environments and nova-cond just doesn't work
21:54:18 <ppalucki> the bugs are described there and the fixes are ready to be reviewed
21:54:19 <mriedem> does devstack support zk?
21:54:25 <ppalucki> yes
21:54:26 <mikal> ppalucki: so this is just a request for cores to take a look at those reviews after the meeting, yes?
21:54:34 <ppalucki> yes, that's exactly it
21:54:43 <mriedem> ok, was just wondering how easy it is to reproduce with devstack to verify the fix
21:54:44 <mikal> Cool, well I promise to take a look at least
21:54:52 <dansmith> seems okay on the face of it, fwiw
21:55:03 <ppalucki> installing zookeeper and setting the config is enough
21:55:16 <mikal> So, noting the time we really have to move on
21:55:21 <ppalucki> that's good for me
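For background on the failure mode ppalucki describes, a hedged sketch using the kazoo client library purely for illustration: a ZooKeeper handle created before the service forks leaves parent and children sharing one session and socket, so each worker should open its own connection after the fork. The proposed fixes live in the two reviews linked above:

```python
import os

from kazoo.client import KazooClient  # illustration only


def make_zk_client():
    # One connection per process: a client started before os.fork()
    # leaves both processes racing on a single ZK session/socket.
    zk = KazooClient(hosts='127.0.0.1:2181')  # assumed endpoint
    zk.start()
    return zk


# Open handles *after* forking the worker, never before.
if os.fork() == 0:
    zk = make_zk_client()  # child worker
else:
    zk = make_zk_client()  # parent
```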
21:55:25 <anish> wait wait
21:55:28 <anish> I have one https://review.openstack.org/#/c/130721/
21:55:37 <anish> but I did not put it on the agenda, is that okay ?
21:55:48 <anish> cannot find a second core maintainer who knows iscsi
21:55:57 <mikal> Ahhh, ok
21:55:57 <anish> at least the ones I talked to didn't
21:55:59 <sdague> spec in new state? - https://blueprints.launchpad.net/nova/+spec/add-open-iscsi-transport-support
21:56:01 <mikal> This one is a spec with one +2
21:56:10 <anish> yeah
21:56:28 <sdague> mikal: the spec is not approved
21:56:34 <mikal> I'm no iscsi expert, but I will take a look at that one after the meeting
21:56:39 <mikal> sdague: yeah, he's asking for a spec review
21:56:45 <sdague> oh, sorry
21:56:48 <mikal> sdague: :P
21:56:49 <anish> mikal: thanks !
21:56:54 <mikal> NP
21:56:59 <mikal> #topic Open Discussion
21:57:07 <mikal> Please enjoy your three minutes of open discussion
21:57:12 <mikal> Or have an early mark
21:57:15 <alop> can anyone look at my spec?
21:57:17 <alop> https://review.openstack.org/#/c/132656/
21:57:24 <cburgess> What is the process now for adding a bug to the list to be reviewed/discussed in the meeting?
21:57:44 <mriedem> alop: maybe try to talk to jaypipes about anything related to db purge
21:57:51 <alop> thanks
21:57:57 <mriedem> alop: since i think jaypipes cringes every time it's mentioned
21:58:06 <alop> hehe, this is true
21:58:17 <mikal> cburgess: just add it to the agenda for next week?
21:58:20 <cburgess> I seem to recall there being a discussion in Paris about how to bring non-critical bugs that have pending fixes to the attention of cores but I don't recall the process that was discussed.
21:58:22 <mikal> cburgess: i.e. wiki page edit
21:58:23 <VW> we have a couple of folks that have done a lot of work around purging here too, alop
21:58:25 <VW> I'll point them at it
21:58:27 <VW> as well
21:58:34 <cburgess> mikal: OK cool, I just couldn't remember.
21:58:35 <mriedem> cburgess: we talked about bug wednesdays for that
21:58:42 <mikal> cburgess: np
21:58:53 <mriedem> guilt-free bug fix reviews on wednesday (office hours right?)
21:59:03 <anish> erm, am I supposed to set the series goal in the BP myself, or is that something the maintainers do ?
21:59:03 <mriedem> not sure if it's every wednesday though
21:59:08 <cburgess> So are we doing bug wednesdays then or is adding it to the agenda the way to go?
21:59:10 <mriedem> jogo: ^?
21:59:16 <mikal> Yeah, there is a difference between "stuck" and just "not reviewed"
21:59:30 <mikal> Like the qemu one above, we really needed to talk through it to come up with a plan
21:59:31 <mriedem> stuck to me means conflicts in opinion
21:59:41 <mriedem> not lack of attention
21:59:44 <jogo> cburgess: yup
21:59:45 <anish> I thought as much, so did not add it there
21:59:56 <mikal> mriedem: I do think a bug fix which is important is legit to bring up in open discussion here though
21:59:57 <anish> is there a separate process for not reviews ?
22:00:00 <anish> *reviewed
22:00:08 <cburgess> jogo: OK I'll hit you up for details on how to do that then.
22:00:24 <_gryf> does anyone care about the difference between host state and service state?
22:00:25 <cburgess> Yeah I'm talking about the non-OMG-critical bugs.
22:00:29 <mikal> anish: I'd recommend starting by mentioning it in #openstack-nova?
22:00:41 <mikal> anish: if its weeks old, then bring it to open discussion here
22:00:46 <cburgess> Specifically things like older minor bugs that have fairly simple fixes that could be merged to help reduce the open bug count.
22:00:59 <mikal> cburgess: oh, I love easy code reviews
22:01:08 <mikal> Anyways, if I had a gong I'd ring it
22:01:12 <mikal> #endmeeting