14:01:24 #startmeeting nova
14:01:25 Meeting started Thu Jan 11 14:01:24 2018 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:29 The meeting name has been set to 'nova'
14:01:32 \o
14:01:34 o/
14:01:39 o/
14:01:40 _o
14:01:48 (I'm lazy)
14:01:50 o/
14:02:00 I'm not prepared so I'm sorry in advance :)
14:02:01 * efried hands bauzas a crutch
14:02:09 * gmann while eating dinner
14:02:33 * gibi opening the wiki for the agenda
14:02:40 \o
14:02:57 let's start
14:03:00 #topic Release News
14:03:12 #link Queens release schedule: https://wiki.openstack.org/wiki/Nova/Queens_Release_Schedule
14:03:18 #link Release schedule recap email: http://lists.openstack.org/pipermail/openstack-dev/2018-January/125953.html
14:03:32 we have some upcoming deadlines
14:03:37 #info Jan 18: non-client library (oslo, os-vif, os-brick, os-win, os-traits, etc) release freeze
14:03:42 #info Jan 25: q-3 milestone, Feature Freeze, final python-novaclient release, requirements freeze, Soft String Freeze
14:04:38 I don't think the number of completed blueprints on the agenda is up to date, so I will not copy that here
14:04:55 anything else about the release?
14:05:33 then, moving on
14:05:43 #topic Bugs
14:05:52 We don't have critical bugs
14:06:17 #info 43 new untriaged bugs [data could be outdated]
14:06:44 Gate seems to be really slow
14:07:12 ya it was ~300 patches in the check pipeline today
14:07:24 I personally followed a patch in the gate queue that took more than 24h to bounce back
14:07:36 I've been hammering on a couple of mine for three days now.
14:07:44 gmann: and almost 12 hours waiting time :(
14:08:05 yea
14:08:10 does somebody have some info from infra about the possible solution?
14:08:20 I have zero idea on the root cause
14:08:24 of the slowness
14:08:29 that's in my pipe for later
14:08:50 They've been doing... stuff.
Like rebooting servers and patching kernels. Nothing has really seemed to help so far.
14:09:09 efried: thanks
14:09:15 let's move on
14:09:38 after a quick glance at http://ci-watch.tintri.com/project?project=nova&time=7+days the third party CI state looks OK
14:10:14 anything else about bugs or the gate?
14:10:41 #topic Reminders
14:10:48 #link queens blueprint status https://etherpad.openstack.org/p/nova-queens-blueprint-status
14:11:07 and we have an etherpad for the rocky ptg as well #link https://etherpad.openstack.org/p/nova-ptg-rocky
14:11:45 #topic Stable branch status
14:11:53 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:11:56 #link Etherpad for bugs tracked for Pike backports: https://etherpad.openstack.org/p/nova-pike-bug-fix-backports
14:12:11 efried: you imply the slowness could be due to a Meltdown patch? scary.
14:12:41 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:12:44 #info Ocata 15.1.0 released
14:12:49 bauzas The knee-jerk "fix" for meltdown would be to disable the offending CPU optimization, so it's possible. But I sorta doubt it.
14:13:13 #link stable/newton: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/newton,n,z
14:13:16 #link Newton 14.1.0 release request: https://review.openstack.org/#/c/529102/
14:13:19 Do we still have newton?
14:13:37 at least the review list seems empty
14:13:39 bauzas: efried: I think it's more of an outage somewhere, after a quick glance at the chats in the infra channel
14:14:03 gibi: Newton is EOL
14:14:09 \o/
14:14:17 well
14:14:21 I think it is
14:14:36 I haven't personally seen the EOLness
14:14:58 me neither
14:15:04 anyhow, moving on
14:15:08 #topic Subteam Highlights
14:15:18 Cells v2
14:15:34 do we have somebody to talk about cells
14:15:35 ?
14:16:08 ENODAN
14:16:27 I see that there was a meeting #link http://eavesdrop.openstack.org/meetings/nova_cells/2018/nova_cells.2018-01-10-20.59.log.html
14:16:55 let's move on to scheduler then
14:17:03 \o
14:17:05 Alternate hosts had one open patch to address a bug; the patch is now merged (https://review.openstack.org/#/c/531022/). There's now one patch on the series: https://review.openstack.org/#/c/526436/
14:17:05 Consuming limits in /allocation_candidates: dansmith has a WIP: https://review.openstack.org/#/c/531517/
14:17:06 Nested + allocation_candidates series is making good progress: https://review.openstack.org/#/c/531443/
14:17:06 ComputeDriver.update_provider_tree() series is stalled in the gate for days, but otherwise making good progress.
14:17:08 Granular resource requests is waiting on nested + allocation_candidates (though considering starting to write tests now).
14:17:18 As of Monday, we're still targeting NRP for Queens, including Xen as the first real consumer; but we do not expect to get to e.g. NUMA modeling or generic PCI stuff yet.
14:17:18 Placement bug queue is getting a bit deep; a scrub would be helpful.
14:17:26 Design discussion around conflict detection/resolution revealed that we want report client to raise up conflict exceptions rather than driving any retries itself. The caller (resource tracker) should drive the retries. This will eventually prompt a refactor of how inventory update is handled.
14:17:26 It was also revealed that we're not tracking generations against RP aggregate associations. This needs to be fixed; cdent volunteered.
14:17:26 As a matter of priorities, since this is (mostly) a shared RP thing, we could do it in R. (But note that yesterday sean-k-mooney came up with a legit use case for aggregates that's not about shared RPs. http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-10.log.html#t2018-01-10T14:22:48)
14:17:37 .
14:17:42 Questions, comments, concerns?
14:18:08 * edleafe admires efried's status dump
14:18:17 I learned it from watching you, okay??
14:18:23 heh
14:18:31 efried: thanks, awesome
14:19:04 if nothing else on scheduler, then moving on to API
14:19:26 For those of you who didn't get that reference: https://www.youtube.com/watch?v=KUXb7do9C-w
14:19:48 it seems there was no API meeting this week
14:19:55 yea
14:20:05 gmann: anything you want to mention?
14:20:07 not much on the API side other than ongoing reviews.
14:20:40 OK, then Notification
14:20:57 * efried looks around for the notification guy
14:21:02 :)
14:21:10 we had a short meeting
14:21:22 we have a couple of ongoing bug fixes
14:21:44 and we already have notification enhancement bps proposed for Rocky
14:21:57 that is all
14:22:05 OK, moving on to Cinder
14:22:14 o/
14:22:29 mriedem wrote up a nice summary mail
14:22:32 #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126127.html
14:22:46 we have the chain in Nova ready for reviews
14:22:57 #link https://review.openstack.org/#/c/271047/
14:23:19 #link https://review.openstack.org/#/q/topic:bp/multi-attach-volume+(status:open+OR+status:merged)
14:23:43 * gibi already has the multi-attach patch open in his browser
14:23:52 and the above link is to see a broader range of patches, with Tempest tests and a multi-attach CI job that's experimental
14:24:16 and of course the gate is not in good shape right now, but so far local tests, etc. look good
14:24:36 there are some policy changes in flight in Cinder as well to turn the feature on/off
14:24:45 gibi: sounds awesome, thanks!
14:25:04 ildikov: anything else you want to mention?
14:25:08 so I'm basically begging for reviews so we can have the first version of the feature supported in Queens
14:25:38 improvement ideas are welcome in the Rocky PTG etherpad; we already have a section opened there
14:25:47 gibi: and that would be it, thanks :)
14:25:57 ildikov: thank you
14:26:03 #topic Stuck Reviews
14:26:45 we have one on the agenda
14:26:46 (tetsuro) Do we need a microversion for https://review.openstack.org/#/c/531328/ ?
14:26:49 o/
14:27:15 Today's nova /os-hypervisors/details API returns hypervisor_type 'QEMU' for both the qemu and kvm backends. This patch changes that value to 'KVM' if kvm is the backend.
14:27:18 last I checked, the consensus had come round to "yes"
14:27:34 Ah, okay.
14:27:55 I'll check it tomorrow for a microversion
14:28:10 thanks.
14:28:35 cool
14:28:50 any other stuck reviews we need to discuss here?
14:29:45 not anymore from me.
14:29:58 #topic Open discussion
14:30:18 what is on the agenda was already discussed last week, so I guess the agenda here is outdated
14:30:27 is there anything to discuss?
14:32:03 Call it. Thanks for chairing, gibi
14:32:10 then, let's close it
14:32:11 sorry again for not being an up-to-date chair for this meeting
14:32:17 #endmeeting
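
The Stuck Reviews item above (reporting hypervisor_type 'KVM' when kvm is the backend, instead of the 'QEMU' string libvirt returns for both) can be sketched roughly as below. This is an illustrative sketch only, not the actual nova patch: the function name `reported_hypervisor_type` and its parameters are hypothetical, standing in for wherever nova maps the configured virt_type onto the value exposed by /os-hypervisors/details.

```python
def reported_hypervisor_type(virt_type, raw_type="QEMU"):
    """Sketch of the proposed mapping (hypothetical helper, not nova code).

    virt_type: the configured libvirt backend, e.g. 'kvm' or 'qemu'.
    raw_type:  what libvirt itself reports, which is 'QEMU' for both.
    """
    # The proposed change: surface 'KVM' when kvm is the actual backend,
    # instead of passing through libvirt's generic 'QEMU' string.
    if virt_type == "kvm":
        return "KVM"
    return raw_type
```

Since this changes a value returned by an existing API response, the consensus in the meeting was that it needs a new microversion rather than an unversioned behavior change.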