14:00:13 <melwitt> #startmeeting nova
14:00:14 <openstack> Meeting started Thu Apr 5 14:00:13 2018 UTC and is due to finish in 60 minutes. The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 <openstack> The meeting name has been set to 'nova'
14:00:22 <melwitt> howdy everyone
14:00:22 <dansmith> o/
14:00:24 <cdent> o/
14:00:26 <takashin> o/
14:00:46 <tssurya> o/
14:00:51 <tetsuro> o/
14:00:56 <edleafe> \o
14:01:06 <hshiina> o/
14:01:16 <melwitt> let's get started
14:01:19 <edmondsw> o/
14:01:20 <efried> ö/
14:01:22 <melwitt> #topic Release News
14:01:30 <melwitt> #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
14:01:43 <melwitt> next milestone is r-1 on Apr 19, two weeks from now
14:02:09 <melwitt> spec freeze this cycle is r-2, Jun 7
14:02:18 <melwitt> #link Rocky review runways: https://etherpad.openstack.org/p/nova-runways-rocky
14:02:33 <melwitt> current runway status:
14:02:39 <melwitt> #link runway #1: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/update-provider-tree
14:02:44 <melwitt> #link runway #2: privsep series starting at https://review.openstack.org/#/c/551921
14:02:51 <melwitt> #link runway #3: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nova-validate-certificates
14:03:10 <mriedem> o/
14:03:22 <melwitt> thanks to everyone for making runway reviews a priority. been seeing lots of activity on runway reviews
14:04:02 <melwitt> we were able to remove https://blueprints.launchpad.net/nova/+spec/placement-req-filter from a runway early as all of the non-WIP patches merged well before the runway end date
14:04:28 <melwitt> so we've moved it out to the Log area on the etherpad (we'll add notes when we remove a blueprint to capture how it went)
14:04:52 <melwitt> and https://blueprints.launchpad.net/nova/+spec/nova-validate-certificates was moved into the vacated runway
14:05:25 <melwitt> does anyone have anything else on release news?
14:05:27 <efried> #1 is down to one patch which already has Jay's +2, and it's pretty small. Should be an easy one.
14:05:35 <melwitt> nice
14:05:44 <efried> I lied.
14:05:50 <melwitt> not nice
14:06:08 <efried> Down to one patch in the main series. Three patches overall. But they're all easy :)
14:06:16 <mriedem> merge conflict on one
14:06:27 <efried> sigh, working it now...
14:06:57 <melwitt> cool. anything else on release news or runways?
14:07:24 <melwitt> #topic Bugs (stuck/critical)
14:07:38 <melwitt> no critical bugs in the link
14:07:44 <melwitt> #link 37 new untriaged bugs (up 1 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:08:20 <melwitt> not making a dent on bug triage lately so we need to try and get some more bugs triaged
14:08:24 <mriedem> https://bugs.launchpad.net/nova/+bug/1761405 is a duplicate i think, will dig
14:08:25 <openstack> Launchpad bug 1761405 in OpenStack Compute (nova) "impossible to disable notifications" [Undecided,New]
14:08:34 <mriedem> i at least remember talking to gibi about that
14:08:43 <melwitt> a-ha, cool
14:09:01 <melwitt> here are the instructions for bug triage, makes it a lot easier IMHO:
14:09:02 <melwitt> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:09:12 <melwitt> #link 9 untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:09:20 <melwitt> #help tag untagged untriaged bugs with appropriate tags to categorize them
14:09:47 <melwitt> next I'm going to link some categories that have some untriaged bugs in them:
14:09:54 <melwitt> #link 3 untriaged cells bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=cells+&field.status%3Alist=NEW
14:10:01 <melwitt> #link 2 untriaged ceph bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=ceph+&field.status%3Alist=NEW
14:10:07 <melwitt> #link 2 untriaged console bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=console+&field.status%3Alist=NEW
14:10:13 <melwitt> #link 6 untriaged libvirt bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=libvirt+&field.status%3Alist=NEW
14:10:18 <melwitt> #link 5 untriaged live-migration bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=live-migration+&field.status%3Alist=NEW
14:10:28 <melwitt> #help If you have familiarity with any of these categories, please help triage (set validity and importance)
14:11:12 <melwitt> I'll be looking at some of these too
14:11:40 <melwitt> Gate status:
14:11:41 <mriedem> i can do one of these now :)
14:11:54 <melwitt> yesss
14:11:58 <melwitt> thank you mriedem
14:12:04 <mriedem> i get a prize?
14:12:34 <melwitt> hm, maybe. we'll have to see what we have in the prize bin
14:12:51 <melwitt> I wanted to shout out to takashin, thanks for all of the bug triage in novaclient land
14:13:02 <melwitt> okay, now Gate status
14:13:11 <melwitt> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:13:28 <melwitt> anecdotally I've seen a lot of TIMED_OUT in the gate lately
14:13:45 <mriedem> same
14:13:48 <mriedem> within the last 24 hours
14:13:51 <melwitt> yeah
14:14:08 <melwitt> AFAIK there's been no announcement from infra about it
14:14:29 <melwitt> might have to ask them if it's still going on today
14:14:44 <melwitt> last night I also noticed this bug seems back in the gate https://bugs.launchpad.net/nova/+bug/1707284
14:14:45 <openstack> Launchpad bug 1707296 in os-brick "duplicate for #1707284 Bad logic in ISCSIConnector._get_potential_volume_paths" [Undecided,Fix released] - Assigned to Mathieu Gagné (mgagne)
14:14:58 <melwitt> I added a comment at the bottom
14:15:00 <dansmith> they fixed it
14:15:14 <dansmith> it was one provider and they were taxing something with too many threads
14:15:24 <melwitt> I see it in logstash starting on 2018-03-26
14:15:24 <dansmith> they merged a change that seemed to fix it, as of late last night
14:15:27 <melwitt> oh, the new one?
14:15:31 <melwitt> oh good
14:15:56 <melwitt> alright well that's ideal :) already fixed
14:16:09 <melwitt> 3rd party CI:
14:16:15 <melwitt> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:16:58 <melwitt> I haven't noticed anything unusual with 3rd party CI lately
14:17:05 <mriedem> vmware seems to be much more active now
14:17:08 <cdent> I can report that although it hasn't quite bedded in yet, the number of compute resources now dedicated to vmware ci has been increased, and the previous limitations on what is tested have been relaxed. so that should improve a bit
14:17:09 <cdent> jinx
14:17:19 <mriedem> thanks for pushing that
14:17:28 <melwitt> +1 thanks cdent
14:17:29 <cdent> i shook my fist a lot
14:17:38 <melwitt> heh
14:17:47 <cdent> dag nabbit, etc
14:18:11 <melwitt> anything else on bugs or gate status?
14:18:45 <melwitt> #topic Reminders
14:18:51 <melwitt> #link Rocky Review Priorities https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
14:19:04 <dansmith> I haven't been looking at this since runways
14:19:11 <dansmith> I assume that page is dead/old news right?
14:19:27 <mriedem> it has bugs and other things
14:19:35 <melwitt> I'm still using it to find subteam reviews and trivial bugs if people have them there
14:19:46 <dansmith> but also has priorities that aren't
14:19:51 <dansmith> maybe it just needs some restructuring?
14:20:01 <edleafe> We are going to use that etherpad to help prioritize the outstanding specs/reviews for Placement
14:20:08 <edleafe> Just need to find the time :)
14:20:10 <melwitt> yeah, maybe I should clean that up. I wasn't sure if it was confusing anyone
14:20:36 <melwitt> I still want virt driver teams, subteams, bugs to be able to be surfaced somewhere for finding
14:20:51 <melwitt> *subteams reviews
14:21:17 <melwitt> #action melwitt to restructure priorities etherpad to not sound conflicting with runways
14:21:35 <melwitt> does anyone else have any reminders?
14:21:37 <dansmith> "non-priority approved blueprints" for example should go away I think
14:21:44 <melwitt> ah, true
14:22:20 <melwitt> #topic Stable branch status
14:22:36 <melwitt> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:22:47 <melwitt> only one review
14:22:53 <melwitt> #link Queens 17.0.2 released on 2018-04-02: https://review.openstack.org/#/c/557704/
14:23:13 <melwitt> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:23:20 <melwitt> er
14:23:25 <melwitt> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:23:35 <melwitt> 4 reviews
14:23:40 <melwitt> #link Pike 16.1.1 released on 2018-04-02: https://review.openstack.org/#/c/558130/
14:23:58 <melwitt> thanks to all the stable branch reviewers for helping get those releases out
14:24:09 <melwitt> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:24:22 <melwitt> lots of stuff for ocata
14:25:05 <melwitt> and I still have a TODO to investigate some things there, like func test failures on a backport, along with ceph CI
14:25:47 <melwitt> branchless tempest broke ceph CI on ocata and I'm still figuring out what needs to be done to fix it. needs an update to the volume tests to sync with the tempest change that happened
14:26:10 <melwitt> anything else on stable branch status?
14:27:04 <melwitt> #topic Subteam Highlights
14:27:06 <melwitt> dansmith: cells v2?
14:27:11 <dansmith> we had a meeting,
14:27:22 <dansmith> we discussed a couple new bugs with v1 that the cern people are going to work on
14:27:37 <dansmith> and acknowledged that we have some open reviews to get to
14:27:44 <dansmith> and that's pretty much it.
14:27:52 <melwitt> nice, thanks
14:27:58 <melwitt> edleafe: scheduler?
14:28:03 <edleafe> A light crowd; many were off for $spring_holiday
14:28:03 <edleafe> We started by recognizing that we are overloaded. There are 18 specs still under review, and 30 patch series.
14:28:06 <edleafe> We need to prioritize, and it was suggested that we use the Nova priorities etherpad as a starting point.
14:28:09 <edleafe> We discussed 3 patches that change behavior:
14:28:11 <edleafe> * Change provider summaries to include all resource info (currently just the requested resources)
14:28:14 <edleafe> * "Anchor" providers, such as a compute node that doesn't provide its own resources, but is associated with a sharing aggregate that does.
14:28:17 <edleafe> * Add the anchor provider to allocation requests, albeit with no resources to claim.
14:28:20 <edleafe> The net result of these discussions is that, given the workload we have, these should be pushed off until Stein.
14:28:23 <edleafe> efried confused alex_xu with his use of the word "broughten"
14:28:26 <edleafe> That's it.
14:28:51 <efried> except I changed my mind since tetsuro responded to that line of thought.
14:29:17 <edleafe> oh?
14:29:26 <efried> I now feel we do indeed need that stuff for nested to work sanely; and should therefore roll it into the nrp-in-alloc-cands bp/spec
14:29:54 <efried> Left some notes on that spec accordingly; and jaypipes still owes a comeback on the email thread.
14:30:33 <edleafe> gotcha
14:30:34 <efried> Since the code is basically done, folding it into the existing bp/spec/microversion for nrp will have the bonus side effect of reducing the paperwork load.
14:30:59 <cdent> magic
14:31:01 <melwitt> so the thing we're going to go ahead and do this cycle and not push off to stein is the 3 behavior change patches?
14:31:23 <efried> I wouldn't say it's definitely decided yet, but that's the proposal, yes.
14:31:29 <melwitt> k
14:32:00 <efried> maybe tricky to finagle so all the behavior changes trigger at the same microversion bump even though they're spread across half a dozen patches, but...
14:32:06 <efried> such is life.
14:32:41 <melwitt> for the notification subteam, gibi is still on vacay and there's not a meeting till he's back the week of Apr 16
14:32:55 <mriedem> i've got a thing there
14:33:03 <melwitt> k
14:33:14 <mriedem> https://bugs.launchpad.net/nova/+bug/1761405 has come up before, i think there is a workaround by setting the notification driver to 'noop',
14:33:14 <openstack> Launchpad bug 1761405 in OpenStack Compute (nova) "impossible to disable notifications" [Low,Confirmed]
14:33:25 <mriedem> which means we'll still generate a payload, just to send it to /dev/null
14:33:34 <mriedem> but at least it doesn't put load on the rpc queue
14:33:52 <mriedem> otherwise i emailed gibi about that one so we can talk about it when he's back
14:34:18 <mriedem> <end/>
14:34:43 <melwitt> cool, thanks. sounds good
14:35:06 <melwitt> anything else on subteams?
14:35:39 <melwitt> #topic Stuck Reviews
14:35:49 <melwitt> no items in the agenda. does anyone have any stuck reviews?
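[editor's note: the 'noop' workaround mriedem mentions for bug 1761405 above is oslo.messaging's notification driver option. A minimal nova.conf sketch, assuming the stock oslo_messaging_notifications option group; not a complete config:

    [oslo_messaging_notifications]
    # 'noop' discards notifications instead of publishing them, so nothing
    # lands on the RPC queue. As noted in the meeting, nova still builds
    # each payload before dropping it, so some CPU cost remains.
    driver = noop

This is a config-only workaround; the bug stays open for a way to avoid generating the payloads at all.]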
14:36:09 <kashyap> Probably no one had any stuck reviews, then :-)
14:36:11 <mriedem> i might have one
14:36:16 <mriedem> well it's jay's
14:36:25 <mriedem> https://review.openstack.org/#/c/545057/
14:36:39 <mriedem> i don't know if that's stuck, but it's the one to mirror nova host aggregates to placement
14:36:52 <dansmith> do you disagree with the fundamental idea of mirroring?
14:37:00 <mriedem> i'm a bit uneasy with it
14:37:14 <mriedem> it feels very much like proxy stuff to neutron that we stopped doing back in newton
14:37:19 <dansmith> to me,
14:37:31 <dansmith> it feels like nova using an external service as means to an end
14:37:33 <dansmith> like,
14:37:44 <dansmith> we report data into placement so the scheduler can make effective queries there
14:37:56 <dansmith> we could still do our scheduling in-house, but we don't want to do it all,
14:38:05 <dansmith> so our computes report data that our scheduler can use to make queries
14:38:10 <dansmith> this seems to me to be the same thing
14:38:43 <mriedem> i can maybe buy that
14:38:44 <dansmith> computes report resource data I mean
14:39:35 <mriedem> anyway, i guess i'll think about that some more :)
14:39:39 <dansmith> hah, okay
14:39:40 <mriedem> since no one else is jumping in here
14:40:09 <melwitt> I'll check out the spec and comment there. I haven't read through it yet
14:40:16 <melwitt> cool, thanks for bringing that up
14:40:25 <melwitt> any other stuck reviews?
14:40:26 <mriedem> there was the other one, about createBackup
14:40:36 <mriedem> but i guess we're waiting to hear about actual user usage
14:40:54 <melwitt> oh, yeah. I added openstack-operators@ to that thread
14:41:01 <melwitt> on the ML
14:41:12 <mriedem> i asked internally,
14:41:36 <mriedem> and our public cloud doesn't use the API (disables? doesn't expose in the UI?),
14:41:54 <mriedem> and our private cloud product apparently has it but doesn't hear that it's used
14:42:12 <dansmith> is the suggestion that we quietly disallow zero?
14:42:16 <mriedem> no,
14:42:26 <mriedem> suggestion is i think to just deprecate it
14:42:36 <mriedem> if it's not used
14:42:38 <dansmith> okay so disallow in a microversion
14:42:46 <melwitt> the original proposal was to nix the unnecessary create backup when '0' is passed
14:43:04 <melwitt> *spec proposal
14:43:20 <dansmith> okay I remember getting some of it in my head at ptg, but it's all out at this point
14:43:23 <melwitt> which implied someone is using '0' to purge backups
14:43:31 <mriedem> disallow 0 and add a deleteBackups API
14:43:33 <mriedem> in a new mv
14:43:38 <dansmith> mriedem: yeah
14:43:45 <melwitt> but it hasn't been clear to me if people are really using it or if someone found weirdness in the API and wanted to fix it just cause, or what
14:43:50 <dansmith> mriedem: I mean that seems valid regardless
14:43:52 <mriedem> alex asked in the spec review if we should just deprecate the api since it's a proxy to glance
14:44:09 <dansmith> so tbh,
14:44:12 <dansmith> in my own usage of nova,
14:44:21 <mriedem> i.e. i think you can do all of this yourself by creating a snapshot with metadata and then controlling the rotation in glance yourself
14:44:34 <dansmith> whenever I need to make a backup, I just create a snapshot and name it myself because I don't want to go read the docs on backup :)
14:44:36 <kashyap> Also, maybe this is tangential (& should bring up on -nova): Do clouds care about backups? I was asked by the QEMU / libvirt folks who're fixing some backup infrastructure.
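[editor's note: for context on the createBackup discussion above, a sketch of the API surface being debated, via python-openstackclient. The commands and options are real; the server/image names are illustrative, and the image property in the DIY variant is hypothetical:

    # The createBackup server action: nova keeps at most --rotate backups
    # of the given type and deletes the oldest beyond that. Per the
    # discussion, passing --rotate 0 creates a backup and then purges them
    # all -- the wart the spec proposes to replace with an explicit
    # deleteBackups API in a new microversion.
    openstack server backup create --name nightly-db01 --type daily --rotate 1 db01

    # The DIY alternative mriedem describes: a plain snapshot plus manual
    # rotation in glance (the 'backup-type' property name is made up here):
    openstack server image create --name db01-2018-04-05 db01
    openstack image set --property backup-type=daily db01-2018-04-05
    openstack image delete <old-image-id>   # prune old backups yourself
]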
14:44:48 <dansmith> kashyap: yes they definitely do in this sense of the word
14:45:04 <dansmith> kashyap: it's about snapshotting instances before you make a change
14:45:07 <kashyap> Right; I was also asked whether people ever ask for "incremental backups"
14:45:46 <mriedem> not that i know of,
14:45:47 <kashyap> (Anyway, that's a discussion for -nova.)
14:46:01 <mriedem> although i think with the rbd imagebackend you can enable fast snapshots using incremental backups in ceph
14:46:14 <mriedem> but that's not something the end user knows about from the API
14:46:15 <kashyap> mriedem: I see; the most appealing part of "incremental backups" is it saves bandwidth --
14:46:23 <kashyap> As long as you have a chain of deltas, copying the incremental from the last delta is probably smaller than copying the entire disk.
14:46:26 <dansmith> any dedup'ing storage will give you effective incrementals
14:46:32 <dansmith> let's not get mired in that though
14:46:35 <kashyap> Yep
14:46:37 <dansmith> because that's not what is stuck
14:46:49 <dansmith> mriedem: I think it's legit to just deprecate zero and add deleteBackup going forward,
14:46:53 <mriedem> so i guess we hold on this spec until we hear more
14:46:57 <dansmith> even if it's lightly used, because it's a clearer API
14:47:15 <melwitt> yeah, I've been a soft +1 on it since it's so easy to do
14:47:16 <dansmith> if we want to remove it entirely because it's "just a proxy" then that's a stickier discussion to me
14:47:27 <dansmith> I know you can do it externally, and that would be a reason not to add it,
14:47:30 <dansmith> but meh, it's there
14:48:06 <melwitt> yeah, fair
14:48:07 <mriedem> it would be nice if the NTT people actually confirmed they have users complaining about this
14:48:41 <dansmith> yeah, I don't want to do this just because it's a theoretical thing to clean up
14:48:59 <melwitt> same. that was my concern. I noticed the spec is light on use case
14:49:20 <melwitt> #link ML thread http://lists.openstack.org/pipermail/openstack-operators/2018-April/015092.html
14:49:37 <dansmith> I think we should move on
14:49:58 <melwitt> k. I'll ask the author again about users
14:50:15 <melwitt> anything other topics for stuck reviews?
14:50:19 <melwitt> *any
14:50:36 <melwitt> #topic Open discussion
14:50:51 <melwitt> two items on the agenda:
14:50:52 <melwitt> (hshiina) specless blueprint request: ironic rescue mode https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode
14:51:16 <melwitt> also says ironic work was done in queens
14:51:26 <mriedem> this was previously approved in pike and queens,
14:51:34 <mriedem> but always slid waiting for the ironic stuff to merge,
14:51:42 <dansmith> I notice that future mriedem commented on the blueprint
14:51:43 <mriedem> which i think the ironicclient changes landed right around FF for queens
14:51:59 <mriedem> past mriedem?
14:52:07 <dansmith> the code patch seems to have some ironicclient versioning stuff up in the air
14:52:14 <dansmith> mriedem 20191011
14:52:26 <melwitt> heh
14:52:26 <mriedem> ha
14:52:50 <mriedem> fixed
14:52:53 <mriedem> anyway, +1 on re-approval
14:52:55 <mriedem> for the bp
14:53:19 <melwitt> I'm +1 too
14:53:42 <melwitt> any other opinions?
14:54:27 <melwitt> okay, sounds like no one is against it and we have two +1s, so I'll re-approve it
14:54:37 <hshiina> thank you
14:54:56 <melwitt> thanks for reminding us about it
14:55:04 <melwitt> last on the agenda:
14:55:09 <melwitt> (kashyap) RFC: Next minimum libvirt / QEMU versions for "Stein" release
14:55:15 <melwitt> #link http://lists.openstack.org/pipermail/openstack-dev/2018-April/129048.html
14:55:29 <kashyap> Yeah, so this is the top-level link: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128937.html
14:55:43 <kashyap> mriedem: You might be interested in this, as you were asking about it too
14:55:45 <dansmith> this seemed to be uncontentious on the ML, did I miss something?
14:55:46 <mriedem> i haven't heard any disagreements in the ML thread
14:55:56 <kashyap> Yeah, mriedem. There is one point though: about Debian
14:55:58 <mriedem> kashyap: did you get a reply from zigo for debian?
14:56:12 <kashyap> Not yet; I talked to Debian folks on #debian-backports
14:56:29 <kashyap> mriedem: I posted a quick summary of it here: https://review.openstack.org/#/c/558171/
14:56:34 <kashyap> In exchange w/ johnthetubaguy
14:57:06 <kashyap> mriedem: If zigo or anyone doesn't do the backports as noted above
14:57:34 <zigo> mriedem: About what?
14:57:34 <kashyap> (We might be stuck with one version lower - see the discussion on the list / review.)
14:57:37 <zigo> o/
14:57:50 <kashyap> zigo: Hi there, see my last comment here: https://review.openstack.org/#/c/558171/
14:58:03 <mriedem> and http://lists.openstack.org/pipermail/openstack-dev/2018-March/128937.html
14:58:59 <kashyap> zigo: In short: Is it possible for Debian 'stretch' users to get libvirt 3.2.0 and QEMU 2.9.0 or higher via any other way?
14:59:01 <mriedem> we need to move this to -nova
14:59:06 <kashyap> Yeah; can move there
14:59:10 <zigo> Debian Sid has 4.1.0-2, stable has 3.0.0
14:59:22 <melwitt> we have less than a minute left
14:59:24 <kashyap> zigo: Yep, that I'm aware.
14:59:31 <kashyap> zigo: Let's move this to -nova
14:59:36 <melwitt> okay, we'll continue this discussion in -nova
14:59:42 <melwitt> thanks all
14:59:45 <melwitt> #endmeeting
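[editor's note: the minimum-version mechanics behind kashyap's RFC live in nova's libvirt driver as module-level constants: the driver refuses to start below the MIN_* versions and warns when running below the NEXT_MIN_* ones, which advertise the planned minimums for the next release. An illustrative Python sketch -- the constant names match nova/virt/libvirt/driver.py, but the values shown are only the ones floated in the ML thread and were not settled as of this meeting, pending the Debian stretch question:

    # nova/virt/libvirt/driver.py (illustrative sketch, not the merged patch)
    NEXT_MIN_LIBVIRT_VERSION = (3, 2, 0)  # floated for Stein; Debian stretch has
    NEXT_MIN_QEMU_VERSION = (2, 9, 0)     # libvirt 3.0.0 (per zigo), hence the debate
]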