14:01:24 <gibi> #startmeeting nova
14:01:25 <openstack> Meeting started Thu Jan 11 14:01:24 2018 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:29 <openstack> The meeting name has been set to 'nova'
14:01:32 <efried> \o
14:01:34 <takashin> o/
14:01:39 <cdent> o/
14:01:40 <bauzas> _o
14:01:48 <bauzas> (I'm lazy)
14:01:50 <gmann> o/
14:02:00 <gibi> I'm not prepared so I'm sorry in advance :)
14:02:01 * efried hands bauzas a crutch
14:02:09 * gmann while eating dinner
14:02:33 * gibi opening the wiki for the agenda
14:02:40 <edleafe> \o
14:02:57 <gibi> let's start
14:03:00 <gibi> #topic Release News
14:03:12 <gibi> #link Queens release schedule: https://wiki.openstack.org/wiki/Nova/Queens_Release_Schedule
14:03:18 <gibi> #link Release schedule recap email: http://lists.openstack.org/pipermail/openstack-dev/2018-January/125953.html
14:03:32 <gibi> we have some upcoming deadlines
14:03:37 <gibi> #info Jan 18: non-client library (oslo, os-vif, os-brick, os-win, os-traits, etc) release freeze
14:03:42 <gibi> #info Jan 25: q-3 milestone, Feature Freeze, final python-novaclient release, requirements freeze, Soft String Freeze
14:04:38 <gibi> I don't think the number of completed blueprints on the agenda is up to date so I will not copy that here
14:04:55 <gibi> anything else about the release?
14:05:33 <gibi> then, moving on
14:05:43 <gibi> #topic Bugs
14:05:52 <gibi> We don't have critical bugs
14:06:17 <gibi> #info 43 new untriaged bugs  [data could be outdated]
14:06:44 <gibi> Gate seems to be really slow
14:07:12 <gmann> ya it was ~300 patches in check pipeline today
14:07:24 <gibi> I personally followed a patch in the gate queue that took more than 24h to bounce back
14:07:36 <efried> I've been hammering on a couple of mine for three days now.
14:07:44 <ildikov> gmann: and almost 12 hours waiting time :(
14:08:05 <gmann> yea
14:08:10 <gibi> does somebody have some info from infra about the possible solution?
14:08:20 <bauzas> I have zero idea on the root cause
14:08:24 <bauzas> of the slowness
14:08:29 <bauzas> that's in my pipe for later
14:08:50 <efried> They've been doing... stuff.  Like rebooting servers and patching kernels.  Nothing has really seemed to help so far.
14:09:09 <gibi> efried: thanks
14:09:15 <gibi> let's move on
14:09:38 <gibi> after a quick glance at http://ci-watch.tintri.com/project?project=nova&time=7+days the third party CI state looks OK
14:10:14 <gibi> anything else about bugs or the gate?
14:10:41 <gibi> #topic Reminders
14:10:48 <gibi> #link queens blueprint status https://etherpad.openstack.org/p/nova-queens-blueprint-status
14:11:07 <gibi> and we have an etherpad for the rocky ptg as well #link https://etherpad.openstack.org/p/nova-ptg-rocky
14:11:45 <gibi> #topic Stable branch status
14:11:53 <gibi> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:11:56 <gibi> #link Etherpad for bugs tracked for Pike backports: https://etherpad.openstack.org/p/nova-pike-bug-fix-backports
14:12:11 <bauzas> efried: you imply the slowness could be due to any Meltdown patch ? scary.
14:12:41 <gibi> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:12:44 <gibi> #info Ocata 15.1.0 released
14:12:49 <efried> bauzas The knee-jerk "fix" for meltdown would be to disable the offending CPU optimization, so it's possible.  But I sorta doubt it.
14:13:13 <gibi> #link stable/newton: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton,n,z
14:13:16 <gibi> #link Newton 14.1.0 release request: https://review.openstack.org/#/c/529102/
14:13:19 <gibi> Do we still have newton?
14:13:37 <gibi> at least the review list seems empty
14:13:39 <ildikov> bauzas: efried: I think it's more of an outage somewhere, after a quick glance at the chat in the infra channel
14:14:03 <bauzas> gibi: Newton is EOL
14:14:09 <gibi> \o/
14:14:17 <bauzas> well
14:14:21 <bauzas> I think it is
14:14:36 <bauzas> I haven't personally seen the EOL announcement
14:14:58 <gibi> me neither
14:15:04 <gibi> anyhow moving on
14:15:08 <gibi> #topic Subteam Highlights
14:15:18 <gibi> Cells v2
14:15:34 <gibi> do we have somebody to talk about cells
14:15:35 <gibi> ?
14:16:08 <efried> ENODAN
14:16:27 <gibi> I see that there was a meeting #link http://eavesdrop.openstack.org/meetings/nova_cells/2018/nova_cells.2018-01-10-20.59.log.html
14:16:55 <gibi> let's move on to scheduler then
14:17:03 <efried> \o
14:17:05 <efried> Alternate hosts had one open patch to address a bug; the patch is now merged (https://review.openstack.org/#/c/531022/).  There's now one patch on the series: https://review.openstack.org/#/c/526436/
14:17:05 <efried> Consuming limits in /allocation_candidates: dansmith has a WIP: https://review.openstack.org/#/c/531517/
14:17:06 <efried> Nested + allocation_candidates series is making good progress: https://review.openstack.org/#/c/531443/
14:17:06 <efried> ComputeDriver.update_provider_tree() series is stalled in the gate for days, but otherwise making good progress.
14:17:08 <efried> Granular resource requests is waiting on nested + allocation_candidates (though considering starting to write tests now).
14:17:18 <efried> As of Monday, we're still targeting NRP for Queens, including Xen as the first real consumer; but we do not expect to get to e.g. NUMA modeling or generic PCI stuff yet.
14:17:18 <efried> Placement bug queue is getting a bit deep; a scrub would be helpful.
14:17:26 <efried> Design discussion around conflict detection/resolution revealed that we want report client to raise up conflict exceptions rather than driving any retries itself.  The caller (resource tracker) should drive the retries.  This will eventually prompt a refactor of how inventory update is handled.
14:17:26 <efried> It was also revealed that we're not tracking generations against RP aggregate associations.  This needs to be fixed; cdent volunteered.
14:17:26 <efried> As a matter of priorities, since this is (mostly) a shared RP thing, we could do it in R.  (But note that yesterday sean-k-mooney came up with a legit use case for aggregates that's not about shared RPs.  http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-10.log.html#t2018-01-10T14:22:48)
14:17:37 <efried> .
14:17:42 <efried> Questions, comments, concerns?
14:18:08 * edleafe admires efried's status dump
14:18:17 <efried> I learned it from watching you, okay??
14:18:23 <edleafe> heh
14:18:31 <gibi> efried: thanks, awesome
14:19:04 <gibi> if nothing else on scheduler then moving on to API
14:19:26 <efried> For those of you who didn't get that reference: https://www.youtube.com/watch?v=KUXb7do9C-w
14:19:48 <gibi> it seems there was no API meeting this week
14:19:55 <gmann> yea
14:20:05 <gibi> gmann: anything you want to mention?
14:20:07 <gmann> not much on the API side other than ongoing reviews.
14:20:40 <gibi> OK, then Notification
14:20:57 * efried looks around for the notification guy
14:21:02 <gibi> :)
14:21:10 <gibi> we had a short meeting
14:21:22 <gibi> we have a couple of ongoing bug fixes
14:21:44 <gibi> and we already have notification enhancement bps proposed for Rocky
14:21:57 <gibi> that is all
14:22:05 <gibi> OK moving on to Cinder
14:22:14 <ildikov> o/
14:22:29 <ildikov> mriedem wrote up a nice summary mail
14:22:32 <ildikov> #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126127.html
14:22:46 <ildikov> we have the chain in Nova ready for reviews
14:22:57 <ildikov> #link https://review.openstack.org/#/c/271047/
14:23:19 <ildikov> #link https://review.openstack.org/#/q/topic:bp/multi-attach-volume+(status:open+OR+status:merged)
14:23:43 * gibi already has the multi attach patch open in his browser
14:23:52 <ildikov> and the above link is to see a broader range of patches with Tempest tests and a multi-attach CI job that's experimental
14:24:16 <ildikov> and of course the gate is not in good shape right now, but so far local tests, etc look good
14:24:36 <ildikov> there are some policy changes in flight in Cinder as well to turn the feature on/off
14:24:45 <ildikov> gibi: sounds awesome, thanks!
14:25:04 <gibi> ildikov: anything else you want to mention?
14:25:08 <ildikov> so I'm basically begging for reviews so we can have the first version of the feature supported in Queens
14:25:38 <ildikov> improvement ideas are welcomed in the Rocky PTG etherpad, we already have a section opened there
14:25:47 <ildikov> gibi: and that would be it, thanks :)
14:25:57 <gibi> ildikov: thank you
14:26:03 <gibi> #topic Stuck Reviews
14:26:45 <gibi> we have one on the agenda
14:26:46 <gibi> (tetsuro) Do we need microversion for https://review.openstack.org/#/c/531328/ ?
14:26:49 <tetsuro_> o/
14:27:15 <tetsuro_> Today's nova /os-hypervisors/details API returns hypervisor_type 'QEMU' for both qemu and kvm backend. This patch changes that value to 'KVM' if kvm is the backend.
14:27:18 <cdent> last I checked the consensus had come round to "yes"
14:27:34 <tetsuro_> Ah, okay.
14:27:55 <gmann> ll check it tomorrow for microversion
14:28:10 <tetsuro_> thanks.
14:28:35 <gibi> cool
14:28:50 <gibi> any other stuck review we need to discuss here?
14:29:45 <tetsuro_> not anymore from me.
14:29:58 <gibi> #topic Open discussion
14:30:18 <gibi> what's on the agenda was already discussed last week, so I guess the agenda is outdated here
14:30:27 <gibi> is there anything to discuss?
14:32:03 <efried> Call it.  Thanks for chairing, gibi
14:32:10 <gibi> then, let's close it
14:32:11 <gibi> sorry again for not being an up-to-date chair for this meeting
14:32:17 <gibi> #endmeeting