14:00:27 #startmeeting nova
14:00:28 Meeting started Thu Mar 22 14:00:27 2018 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:32 The meeting name has been set to 'nova'
14:00:34 o/
14:00:35 o/
14:00:37 \o
14:00:49 o/
14:00:52 @/
14:00:54 hi! I will be your host today
14:01:06 o/.
14:01:27 #topic Release News
14:01:38 #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
14:01:54 Apr 19: r-1 milestone, nova spec freeze
14:02:24 Apr 27: spec review focus day
14:02:39 march 27?
14:02:50 mriedem: correct :)
14:02:51 \o
14:02:51 sorry
14:03:03 March 27: spec review focus day
14:03:36 phew
14:03:36 #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128630.html
14:03:49 * johnthetubaguy hides in the back of the room
14:04:04 #link os-vif 1.10.0 was released on 2018-03-21: https://review.openstack.org/#/c/554948
14:04:05 o/
14:04:39 anything else about the release?
14:04:58 #topic Bugs (stuck/critical)
14:05:20 30 new untriaged bugs right now #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:05:43 went through a lot of garbage bugs yesterday
14:06:06 that was 47 on the agenda so thanks for the triaging
14:06:41 we don't have critical bugs
14:07:10 there are 4 untriaged and untagged bugs in our queue https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:07:38 tagging helps to route bugs to experts for triage
14:07:43 #help tag untagged untriaged bugs with appropriate tags to categorize them
14:08:24 yeah I need to help :(
14:08:54 Gate status
14:08:56 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:09:13 I don't see any major nova-related failure in the list
14:09:13 it seems to be oddly slow, but overall no known big failures
14:09:36 there were some hanging functional tests the other day
14:09:44 I don't know what the cause was
14:10:11 meltdown fixes?
14:10:20 i would be interested to know what % of nodes are not running openstack stuff now
14:10:30 with the new CI/CD zuulv3 split thingy with the foundation
14:10:35 but that's another topic for another channel
14:10:41 * johnthetubaguy nods
14:10:52 do we share a nodepool with other projects?
14:10:53 3rd party CI #link http://ci-watch.tintri.com/project?project=nova&time=7+days
14:11:07 dansmith: I guess we run tests triggered from github
14:11:20 I didn't think _our_ zuul did
14:11:33 don't know, probably a question for that thread in the ML
14:12:01 we'd see them in our status dashboard I would think
14:12:05 * alex_xu_ waves late
14:12:06 unless it's just nodepool that is shared
14:12:09 anyway, sorry to derail
14:12:19 I thought it was a separate nodepool
14:12:22 going back to 3rd party CI
14:13:02 I don't know what I have to look for in http://ci-watch.tintri.com/project?project=nova&time=7+days
14:13:13 is the recent redness of IBM PowerKVM CI relevant?
14:13:28 we can ask mmedvede later
14:14:06 anything else about bugs or CI?
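For readers following along, the untagged-bug query above can also be scripted with launchpadlib. A minimal sketch, assuming anonymous read-only access is enough and that filtering untagged bugs client-side stands in for the web UI's field.tag=-* (the API has no direct equivalent):

    # List NEW nova bugs that have no tags yet, so they can be routed
    # to the right subteam for triage.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('nova-triage', 'production', version='devel')
    nova = lp.projects['nova']

    # status='New' mirrors field.status=New in the web UI query above.
    for task in nova.searchTasks(status='New'):
        bug = task.bug
        if not bug.tags:  # client-side stand-in for field.tag=-*
            print(bug.id, bug.title)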
14:14:31 #topic Reminders
14:14:36 #link Rocky Review Priorities https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
14:15:16 #info Rocky spec review day: next week Tuesday March 27, let's focus a day on reviewing specs
14:15:23 once again ^^ :)
14:16:07 * stephenfin walks in late
14:16:10 Runways proposal #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128574.html
14:16:54 anything else for reminders?
14:17:30 #topic Stable branch status
14:17:36 #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:17:59 gonna do another queens release shortly
14:18:04 once a couple of other fixes land,
14:18:13 we had a revert that impacts hard reboots for libvirt that we need to get released
14:18:18 same for pike
14:18:23 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:18:39 thanks mat
14:18:43 thanks mriedem
14:18:52 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:19:14 anything else for stable branches?
14:19:27 the stable eol thing is likely getting approval tomorrow
14:19:28 by the TC
14:19:31 mriedem: we have some problems in CI we are addressing, something internal
14:19:46 #link https://review.openstack.org/#/c/548916/
14:20:06 mmedvede: thanks for the info
14:20:20 turned off reporting
14:20:22 * bauzas needs to leave early (as today is buggy for most people in my country)
14:20:50 #topic Subteam Highlights
14:20:54 Cells v2 (dansmith)
14:20:58 no meeting this week
14:21:42 Scheduler (edleafe)
14:21:50 Discussed whether mirroring Nova host aggregates to Placement was a proxy API, or whether it was a necessary duplication, due to them being used for different purposes. No clear consensus was reached.
14:21:54 efried admitted to being a PITA
14:21:56 jaypipes expressed his opinion that we have already done enough in terms of extracting placement for Rocky, and to hold off on any further work. cdent disagreed, and will continue to push forward.
14:22:00 We agreed that we need to focus on the work for nesting providers in allocation_candidates.
14:22:03 cdent wondered if he is a bug.
14:22:06 That's it.
14:22:21 thanks
14:22:22 Notification (gibi)
14:22:44 every bp I tracked got discussion and most of them are approved
14:22:52 list and status is in #link Notification (gibi)
14:23:02 I mean #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128562.html
14:23:31 but some have been approved for a while with no code yet right?
14:23:36 mriedem: yes
14:23:42 which is... :/
14:23:47 >:(
14:23:48 oops
14:23:54 mriedem: I share your pain
14:24:03 ~\o/~
14:24:11 Stuck Reviews
14:24:15 #topic Stuck Reviews
14:24:27 nothing on the agenda
14:24:43 do we have something to bring up here?
14:24:58 I have one thing
14:25:04 stephenfin: go ahead
14:25:12 sahid's patch on TX/RX queue sizes
14:25:14 * bauzas leaves now
14:25:16 * stephenfin get's link
14:25:21 ha as bauzas leaves
14:25:44 *gets, even
14:25:46 https://review.openstack.org/539605
14:26:16 As discussed on the spec, we were pretty sure we were going with the global nova.conf option but there's been some disagreement on that
14:26:39 doesn't this need guest support, I should re-read that thing
14:26:45 The spec has been rewritten to use extra spec keys. I'm OK with that but want to make sure this is OK for everyone else
14:27:03 the original patch from nic used extra specs right?
14:27:05 johnthetubaguy: it's virtio only
14:27:05 johnthetubaguy: More hypervisor support than guest support, if I understand it correctly
14:27:26 mriedem: Correct
14:27:27 mriedem, stephenfin: If it is OK to add yet another flavor extra_spec key that is libvirt specific then I'm OK with the rest
14:27:41 the point of the extra spec is it doesn't have to be libvirt-specific,
14:27:42 I thought these things needed specific kernel versions in the guest for virtio to negotiate, but I could be misremembering, it's probably like 2.20 or something
14:27:44 but a config option would be
14:27:52 johnthetubaguy: correct,
14:27:57 it's super use case specific
14:27:57 and we (mostly I) pushed him away from that given previous opposition to yet more extra spec knobs
14:28:18 Yeah, it's easy to move it to another group if we need to
14:28:19 a config option isn't a huge problem no
14:28:42 But I feel bad about pushing him one way when he's now being dragged back, heh
14:28:47 extra spec keys that are in libvirt-specific units (i.e. queues) are less palatable to me than a config option in conf.libvirt
14:28:58 why would you not want to set it, extra memory usage?
14:29:04 is there no way that other virt drivers would ever be able to do this same thing?
14:29:18 xen has something quite similar brewing upstream
14:29:24 if it's in extra_specs I would like it to be something like shares or percentage of the total, where the total (or max) is configured by compute in conf or something like that
14:29:25 johnthetubaguy: gibi raised that, yeah. Realistically though, this is going to be set everywhere and not touched again
14:29:51 Until the hardware is upgraded to 40 or 100 Gig, anyway
14:30:01 so if you want it everywhere, feels like it should be a config option?
14:30:12 right, feels mostly like a property of the host capabilities
14:30:13 config option feels like the easy way to go here for now, IMHO
14:30:15 That was my feeling, yes
14:30:29 ++ config option here, as much as the churn on the spec sucks
14:30:57 i'm happy to bow out of the spec review if dansmith and johnthetubaguy want to take over
14:30:58 No one has asked for this to be per guest, so the benefits of that feature are slight at best
14:31:09 adding a config option now, and if the use case comes up to set it per VM then adding an extra spec later is doable
14:31:14 Yeah, I'm also going to bow out
14:31:20 mriedem: sure
14:31:25 On account of the "go this way, no this way" thing
14:31:28 stephenfin: wait, aren't you the one that asked for it to be extra_specs?
14:31:33 stephenfin: no fair bowing out now :)
14:31:34 i did
14:31:37 oh
14:31:42 Nope, that was mriedem
14:31:43 i didn't realize this couldn't be used by other virt drivers
14:31:48 okay, then all the more reason for stephenfin to stick around for his victory lap
14:31:59 :D
14:32:07 sahid is going to be pi******ed :D
14:32:15 anyway, johnthetubaguy you'll help review?
14:32:17 all he has to do is revert to an older patchset
14:32:18 but eh, good to get a move on this
14:32:23 I'll comment that we had a big discussion
14:32:25 Do I sense properly that we have an agreement to use a config option?
14:32:26 dansmith: yes
14:32:29 he hates me anyway :)
14:32:38 * dansmith -> lightning rod
14:32:38 gibi: +1 from me
14:33:12 dansmith: thanks for taking the bad guy role :)
14:33:18 let's move on
14:33:31 (y)
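For readers following along, a minimal sketch of the agreed direction, not the code under review: per-host options in the [libvirt] section of nova.conf, rendered into the <driver> element of a guest's virtio interface. The option names, choices, and rendering helper here are assumptions based on the discussion and libvirt's rx_queue_size/tx_queue_size attributes:

    from oslo_config import cfg
    from xml.sax.saxutils import quoteattr

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('rx_queue_size', choices=[256, 512, 1024],
                   help='virtio-net RX queue size for guests on this host'),
        cfg.IntOpt('tx_queue_size', choices=[256, 512, 1024],
                   help='virtio-net TX queue size for guests on this host'),
    ], group='libvirt')

    def interface_driver_xml():
        # Render something like <driver name="vhost" rx_queue_size="512"/>;
        # unset options are omitted so libvirt picks its own defaults.
        attrs = {'name': 'vhost'}
        if CONF.libvirt.rx_queue_size:
            attrs['rx_queue_size'] = CONF.libvirt.rx_queue_size
        if CONF.libvirt.tx_queue_size:
            attrs['tx_queue_size'] = CONF.libvirt.tx_queue_size
        return '<driver %s/>' % ' '.join(
            '%s=%s' % (k, quoteattr(str(v))) for k, v in sorted(attrs.items()))

    CONF([])  # no CLI args; both options default to unset (None)
    print(interface_driver_xml())  # -> <driver name="vhost"/>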
14:33:32 #topic Open discussion
14:33:41 * mriedem prepares for the deluge of powervm blueprints
14:33:42 there is a long list of specless bps on the agenda
14:33:50 (melwitt): seeking approval for specless blueprint for improving the xenapi image handler: https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement
14:34:00 It's to change a XenAPI config option to specify the image handler, so that xenapi can support non-FS based storage repositories (LVM, ISCSI).
14:34:07 approve
14:34:08 We also talked about it at the PTG.
14:34:11 it was discussed at the ptg
14:34:24 same thing as the libvirt imagebackend, right?
14:34:29 image_type or whatever that option is
14:34:44 mriedem, yes. similarly.
14:34:57 but not the image type but the image handler.
14:35:24 any objection against approving it?
14:36:14 then it is approved
14:36:20 next
14:36:21 * johnthetubaguy remembers something about me reviewing that
14:36:21 Thanks.
14:36:24 (melwitt): seeking approval for specless blueprint for removing execs of system commands: https://blueprints.launchpad.net/nova/+spec/execs-ive-had-a-few
14:37:00 no reason not to do this, IMHO
14:37:07 "Also, it makes us look silly at social occasions."
14:37:07 now that we have privsep, it's possible, but wasn't before
14:37:37 +1 from me. I've been reviewing those already thinking it was just another one of mikal's daft topics :)
14:37:55 i'm fine with it
14:37:55 See, we are in agreement, so it is approved
14:38:00 ++
14:38:14 next
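For readers following along, a minimal sketch of the pattern this blueprint targets now that privsep exists: replacing a shell-out to a rootwrapped system command with an in-process privileged Python call. The context name, capability set, and helper below are illustrative, not the blueprint's actual code:

    import os

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # A privsep context; decorated functions run in a separate
    # privileged daemon that holds only CAP_CHOWN.
    sys_admin_pctxt = priv_context.PrivContext(
        'example',
        cfg_section='example_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_CHOWN],
    )

    @sys_admin_pctxt.entrypoint
    def chown(path, uid=-1, gid=-1):
        # Replaces utils.execute('chown', ...) with a direct syscall;
        # the privsep daemon is spawned on the first entrypoint call.
        os.chown(path, uid, gid)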
14:38:20 (melwitt): Based on replies to the spec review day ML thread: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128603.html
14:38:25 Can we have an informal vote on when people would like to start using runways? (Start immediately vs start after spec freeze, and let's go with the consensus)
14:38:53 * gibi votes for starting after the spec freeze
14:39:00 as someone that also reviews specs, i'm a big meh on this one
14:39:21 +1 "Obviously this will be an experiment and we won't get it right the first time."
14:39:31 nobody replied to me, which tells me I'm the only one (and efried)
14:39:36 so we should just do it late I guess
14:39:49 and we'll just approve a bunch of stuff that won't land, like usual :)
14:40:28 how about after the spec review day, we do everything else on a runway?
14:40:28 anyone else going to speak up in this meeting?
14:40:34 Can anyone tell me why we shouldn't a) allow things like specs in runways, and b) start TODAY?
14:41:05 specs in runways is an idea i haven't had
14:41:05 * stephenfin still has to read that email and will stay quiet for this bit
14:41:16 but,
14:41:19 I pretty much agreed with dansmith, but have little enough of an opinion that I didn't bother to respond on the email. I also agree with what efried just said
14:41:20 stephenfin: That's the gist of it. Let's start now.
14:41:22 the idea is to flush out the approved stuff
14:41:31 my reason to vote for after the spec freeze is the amount of time I need to spend on the bandwidth spec these days
14:41:52 specs in runways is a little hard because we'd need to have a way to say "okay, we're not going to do this this cycle, so we eject it from the runway without merging", IMHO
14:41:58 which is a little awkward, but I guess we could
14:42:00 okay, but it's clear that not everybody will be able to focus all their time all the time.
14:42:11 right, I don't really understand why we can't do specs in the background,
14:42:20 So gibi is focusing on that spec right now, but joebob will be focusing on a feature after spec freeze etc.
14:42:31 and I don't understand why we can't ramp up more spec review/approval when we're getting low on approved things to go in the queue
14:42:35 efried: yeah agreed
14:42:49 dansmith: yeah, I think that is what I was wanting to see, at least eventually
14:42:53 i think what i said on the etherpad and in dubling,
14:42:56 *dublin,
14:43:03 was if we're going to do both at the same time,
14:43:14 is we should extend spec freeze to basically feature freeze
14:43:25 or milestone 2, something like that
14:43:29 yeah we'd kinda have to,
14:43:39 as we'd be approving things later to keep the pipeline full as necessary
14:43:40 I'm not opposed to that in principle, though I think that decision can be made later.
14:43:45 so if we're cool with extending the spec freeze to at least milestone 2, i'm ok with doing runways now
14:43:47 well, it's like all specs need an exception, you get granted when there is a slot for you?
14:43:49 so we'd be shooting for smaller things to be approved later
14:44:21 so a bit I missed, is how were we going to track the runways?
14:44:28 i'm also ok with doing runways now w/o extending the spec freeze, i should say, that's why i said 'meh'
14:44:37 johnthetubaguy: etherpad
14:44:57 the only issue I have with extending the deadline is increased difficulty getting less obvious specs landed
14:44:58 mriedem: I was wondering if trello might work, but not sure if that excludes folks
14:45:01 without the pressure of said deadline
14:45:15 johnthetubaguy: it excludes people with a soul
14:45:25 johnthetubaguy: https://etherpad.openstack.org/p/nova-runways-rocky
14:45:37 stephenfin: umm,
14:45:43 dansmith: shit, I'm out then.
14:45:53 well that's kind of the thing - you can have us all focus on runways and a few do spec reviews,
14:46:06 or we extend the spec review deadline so there is time for both for the few that actually review specs
14:46:19 OK, I missed that it's just a list of three, ignore me, etherpad is perfect
14:46:33 so,
14:46:41 can we just have people reply with this stuff on that thread as the votes?
14:46:45 mriedem: Good point. I guess it'll just shuffle things around a bit
14:46:52 I am tempted to say we just grant exceptions when needed
14:47:10 * johnthetubaguy goes to email thread
14:47:10 dansmith: let's do that
14:47:10 kind of sucks to have this discussion without the PTL here,
14:47:12 so yeah ML it is
14:47:20 mriedem: right that's my point
14:47:23 timezones suck
14:47:24 #action vote on the ML
14:47:34 moving on
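For readers following along, a toy model, not project tooling, of the runway mechanics being put to a vote: a FIFO queue of ready blueprints feeding a fixed number of review slots, with stalled items ejected so they can requeue later. The slot count and queue contents are assumptions for illustration:

    from collections import deque

    SLOTS = 3  # assumed; the real number lives on the etherpad

    queue = deque(['cert-validation', 'powervm-network-hotplug',
                   'powervm-vscsi', 'powervm-snapshot'])
    runways = []

    def fill_runways():
        # FIFO with few exceptions: whatever queued first gets a slot first.
        while len(runways) < SLOTS and queue:
            runways.append(queue.popleft())

    def finish(item, merged=True):
        # Merged items leave for good; stalled items are ejected and requeued.
        runways.remove(item)
        if not merged:
            queue.append(item)
        fill_runways()

    fill_runways()
    print(runways)  # ['cert-validation', 'powervm-network-hotplug', 'powervm-vscsi']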
14:47:44 (esberglu): PowerVM specless blueprints
14:48:05 Network Hotplug: https://blueprints.launchpad.net/nova/+spec/powervm-network-hotplug
14:48:11 we originally had a spec for this, but mriedem and melwitt agreed yesterday these could be specless
14:48:11 approve
14:48:13 Perfect example of how we can use runways for bp+feature as we go.
14:48:20 yep
14:48:33 that one is already up and ready for review once the bp is approved
14:48:39 the impl I mean
14:48:49 Queue up the first couple. Get 'em in a runway. Get 'em approved. Queue up the next couple. Repeat until we run out of... runway.
14:49:24 vscsi and snapshot are also essentially implemented... a little more subteam review needed
14:49:25 efried: do you mean not take the whole list of 6 bps now, just the first 3?
14:49:31 how we handle priority for a runway slot is up for debate probably
14:49:45 gibi: I think queueing them up individually would be appropriate.
14:49:54 i.e. i think the certs api stuff from johns hopkins all trumps the powervm bp's
14:50:06 I'd like to get at least the first 3 approved today, since they're implemented...
14:50:18 mriedem: One of the main purposes of runways was to allow focus on lower-priority work. We already have high focus on high-priority work.
14:50:36 the certs api thing is low priority work,
14:50:39 it's been so gd low priority,
14:50:43 it's deferred since ocata
14:50:47 the priority thing is hard, and more important later in the cycle, which is one reason I want to start now
14:50:48 Point was that, with some (but few) exceptions, the queue is FIFO
14:50:50 so at this point, it's kind of high priority right?
14:50:55 so things that are low priority but ready can go ahead and benefit
14:51:07 If the cert thing is ready, queue it tf up and it'll bubble to the top in order.
14:51:22 efried: it's already in the queue actually :)
14:51:38 so i can approve https://blueprints.launchpad.net/nova/+spec/powervm-network-hotplug right?
14:51:54 any objection?
14:51:56 +1 from me :)
14:52:23 seems there is no objection so it is approved
14:52:26 and vscsi and snapshot?
14:52:29 next
14:52:30 vSCSI cinder volumes: https://blueprints.launchpad.net/nova/+spec/powervm-vscsi
14:52:51 this was actually ready to go in Queens but didn't get reviews
14:53:09 * bauzas is silently (well, not really) back
14:53:14 we've made a few small improvements while we've been waiting
14:53:22 any objection?
14:53:36 approved
14:53:39 next
14:53:41 Instance Snapshot: https://blueprints.launchpad.net/nova/+spec/powervm-snapshot
14:54:00 * gibi waiting for objections
14:54:09 impl for this is up and tested. I think efried and I both had comments that were getting addressed and then we'd open it up for wider review
14:54:20 probably a few days
14:54:27 approved
14:54:40 you guys are also enabling CI testing for all of these right?
14:54:45 b/c tempest tests attach interface and snapshot
14:54:53 Not vSCSI
14:54:53 i will -1 the living hell out of these patches if i don't see it
14:54:57 i know not that one
14:55:05 Snapshot and attach yes
14:55:12 ok, fair warning
14:55:27 next?
14:55:28 do we want to go further down the list of powervm bps or is it enough for now and we will take the rest when this 3 are merged?
14:55:40 *these 3
14:55:40 i'll just hit the rest in the agenda
14:55:42 after the meeting
14:55:53 or melwitt can
14:55:59 mriedem: does hit mean approve?
14:56:08 yeah.
14:56:14 these were all in a spec that was basically approved,
14:56:18 but we didn't need a spec,
14:56:25 mriedem: ohh I see
14:56:26 so they were split out as feature parity blueprints
14:56:36 then I agree
14:56:39 because a spec with a laundry list is annoying
14:56:46 so the last item on the agenda
14:56:47 (jianghuaw): seeking approval for specless blueprint for "vGPU work in rocky": https://blueprints.launchpad.net/nova/+spec/vgpu-rocky
14:56:55 This BP is for tracking the work in Rocky for the vGPU functions which were not finished in Queens.
14:56:56 approve
14:57:02 we talked about this in dublin
14:57:05 it's just closing gaps
14:57:07 cool mriedem:-)
14:57:13 everyone agree?
14:57:22 ++
14:57:31 * gibi doesn't see any objections
14:57:47 anything else to discuss in the next 2 minutes?
14:58:02 nein
14:58:02 Thanks all:-)
14:58:47 OK, let's close this
14:58:50 thanks for the meeting
14:58:58 #endmeeting