21:00:04 #startmeeting nova
21:00:05 Meeting started Thu Jun 7 21:00:04 2018 UTC and is due to finish in 60 minutes. The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:09 hello everybody
21:00:09 The meeting name has been set to 'nova'
21:00:12 o/
21:00:15 o/
21:00:22 o/
21:00:25 \o
21:00:39 #topic Release News
21:00:46 #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
21:00:49 ō/
21:01:04 today is the r-2 milestone, so we're proposing a release by EOD
21:01:24 is there a particular patch or patches to wait for, for the release tag?
21:02:12 I had been trying to get this one squared away https://review.openstack.org/540258 but am stuck on the functional test, found an issue in the cells fixture, don't know the root cause, etc. it's going to take more work
21:02:49 so my plan is to just take the current HEAD of the tree at EOD my time to propose the release with
21:03:12 so if anyone has an important bug patch, let me know otherwise
21:03:13 i've also been meaning to write a functional test related to something in that patch we talked about
21:03:22 but...time
21:03:28 * melwitt nods
21:03:39 "TODO: We need a new bug and test for the multi-cell affinity scenario where two instances are scheduled at the same time in the same affinity group. We need 2 cells with 1 host each, with exactly enough capacity to fit just one instance so that placement will fail the first request and throw it into the other host in the other cell. The late affinity check in the compute won't fail because it can't see the other member in the other cell, so it will think it's fine."
21:03:54 ftr
21:04:29 k. yeah, really similar to what I'm doing except without the parallel request
21:04:36 I mean I'm not doing parallel
21:04:50 #link Rocky review runways: https://etherpad.openstack.org/p/nova-runways-rocky
21:05:01 #link runway #1: Certificate Validation - https://blueprints.launchpad.net/nova/+spec/nova-validate-certificates (bpoulos) [END DATE: 2018-06-15] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nova-validate-certificates
21:05:08 #link runway #2: Neutron new port binding API for live migration: https://blueprints.launchpad.net/nova/+spec/neutron-new-port-binding-api (mriedem) [END DATE: 2018-06-20] Starts here: https://review.openstack.org/#/c/558001/
21:05:14 #link runway #3: XenAPI: improve the image handler configure: https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement (naichuans) [END DATE: 2018-06-20] starts here: https://review.openstack.org/#/c/486475/
21:05:28 please lend your eyeballs to runways blueprint patch reviews
21:06:36 thanks to all who have been helping out there
21:06:52 anyone have anything else for release news or runways?
21:07:01 people need to put stuff in the runways queue
21:07:04 don't wait for subteams
21:07:21 yeah, that's a good reminder
21:07:51 folks needn't feel pressured to have too much pre-review.
if the implementation is done, no longer in WIP state, it's a good idea to join the runways queue
21:09:07 #topic Bugs (stuck/critical)
21:09:20 no critical bugs in the link
21:09:26 #link 44 new untriaged bugs (up 2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
21:09:32 #link 13 untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
21:09:39 #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
21:10:01 I know things have been really busy lately, but hopefully soon we can get some more triage done. how-to guide above ^
21:10:10 Gate status
21:10:19 #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
21:10:36 gate has seemed okay
21:10:42 #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
21:10:45 http://status.openstack.org/elastic-recheck/index.html#1775491 was big and new
21:10:47 but there is a fix in the gate
21:10:55 https://review.openstack.org/#/c/573107
21:11:12 a-ha, cool. I saw that one a few times but didn't realize it was that big
21:11:44 anyone have anything else for bugs, gate status or third party CI?
21:12:17 #topic Reminders
21:12:24 #link Rocky Subteam Patches n Bugs https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
21:12:31 #info Spec Freeze Day today Thursday June 7
21:12:57 that said, I think we're looking at a couple of exceptions for major issues that are still being spec reviewed,
21:13:14 one is the placement resource providers => nested resource providers migration
21:13:25 #link https://review.openstack.org/#/c/572583/
21:13:26 that will affect anyone upgrading to rocky
21:13:55 the other is the handling of a down cell, related to resiliency in a multiple cells deployment
21:14:00 #link https://review.openstack.org/#/c/557369/
21:14:15 i've asked for user and ops feedback on ^
21:14:22 so far it's just me and gibi on the spec review
21:14:46 but it's pretty huge in what's being proposed
21:15:05 our friends at CERN have been working with us on this one, they're running queens with multiple cells and have run into issues with resiliency for down or low performing cells/databases
21:15:33 so we really need to do something to deal with those issues
21:15:47 yes, there's been a post to the ML by mriedem on that asking for input
21:16:28 #link http://lists.openstack.org/pipermail/openstack-dev/2018-June/131280.html
21:16:50 okay, I think that's all I have. anyone else have anything for reminders?
21:17:42 #topic Stable branch status
21:17:49 #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
21:17:54 #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
21:17:58 #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
21:18:34 we released queens and pike pretty recently and I can't remember if I linked those in a previous meeting
21:19:10 #link queens 17.0.5 released on 2018-06-04 https://review.openstack.org/571494
21:19:34 #link pike 16.1.4 released on 2018-06-04 https://review.openstack.org/571521
21:19:55 #link ocata 15.1.3 soon to be released https://review.openstack.org/571522
21:20:13 does anyone have anything else for stable branch status?
21:20:55 #topic Subteam Highlights
21:21:04 we had a cells v2 meeting this week
21:21:14 main topics were the handling of a down cell spec mentioned earlier
21:21:38 and the nova-network removal go-ahead for removing the REST API bits but keeping the core functionality intact until Stein
21:22:20 CERN is in the middle of a nova-network => neutron migration and keeping nova-network functioning underneath is a really helpful safety net for them. so we're deferring removal of the core functionality until Stein
21:22:51 but we are in the clear to remove the REST API bits and one change has merged for that and others are proposed
21:23:18 scheduler subteam, jaypipes or efried?
21:23:31 cdent chaired
21:23:39 rats, lemme look up the logs quick...
21:23:57 I was there too but forgot everything
21:25:15 right, so the summary is that nrp-in-alloc-cands is priority (we've merged the bottom four or five since then; progress is being made) but the upgrade business is a close second.
21:25:46 yeah, upgrade/migration issue is very high priority
21:25:57 ...and blocks blueprints that are changing their tree structures by *moving* existing resource classes, but *not* anyone who's just *adding* new inventories to child providers.
21:25:57 okay, cool
21:26:16 upgrade depends on nrp-in-alloc-cands and consumer generations.
21:26:20 upgrade spec was linked earlier.
21:26:39 k thanks
21:26:50 gibi left some notes for notifications,
21:26:59 "We had a meeting with Matt and talked about the possible need of a major bump of the ServerGroupPayload due to the renaming and retyping of the policies field of the InstanceGroup ovo."
21:27:05 "We agreed to aim for keeping both the deprecated policies and adding the new policy field with a minor version bump if possible."
21:27:10 #link https://review.openstack.org/#/c/563401/3/doc/notification_samples/common_payloads/ServerGroupPayload.json@10
21:27:33 anything else for subteams?
21:27:45 that reminds me,
21:27:52 i need to talk to dansmith about yikun's changes there at some point
21:27:54 but low priority atm
21:28:06 the InstanceGroup.policies field is being renamed
21:28:10 which is weird with objects
21:28:44 didn't know about that. curious why the need to rename but I'll go look it up later
21:28:45 https://review.openstack.org/#/c/563375/11/nova/objects/instance_group.py@165
21:28:50 it's a different format
21:28:55 details are in the spec
21:29:02 k, will check that out
21:29:19 #topic Stuck Reviews
21:29:34 nothing in the agenda. anyone in the room have any stuck reviews they need to bring up?
21:30:07 #topic Open discussion
21:30:22 i've got 2 things
21:30:31 k
21:30:44 from the tc meeting today
21:30:51 1. mnaser is our guidance counselor now https://wiki.openstack.org/wiki/Technical_Committee_Tracker#Project_Teams
21:31:03 so if you are mad at your parents or gf/bf, you can talk to him
21:31:04 \o/
21:31:10 awe-some
21:31:17 hi
21:31:35 mnaser: have I told you lately how thick and luscious your beard is?
21:31:38 2. as part of the goals selection stuff and the "finally do stuff in OSC" goal, i've started an etherpad to list the gaps for compute API microversions in OSC https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
21:31:50 i haven't gotten far,
21:31:58 and i'll post a more format call for help in the ML later
21:32:00 but fyi
21:32:01 #link https://etherpad.openstack.org/p/compute-api-microversion-gap-in-osc
21:32:13 good to know
21:32:13 *formal
21:32:14 efried: hah :p thanks, I guess?
21:32:21 * mnaser lets meeting go back to its order
21:32:29 efried: have you seen gmann?
21:32:32 no contest
21:32:35 true story
21:32:37 lol
21:32:42 But gmann is not our guidance counselor.
21:32:47 true
21:32:58 that's it
21:33:07 🙄
21:33:17 I guess since I'm thinking about it, there's a post to the ML about increasing the number of volumes to attach to a single instance > 26 and some approaches. lend your thoughts on the thread if you're interested
21:33:25 #link http://lists.openstack.org/pipermail/openstack-dev/2018-June/131289.html
21:33:47 that's all I have. anyone have anything else for open discussion?
21:33:53 But anyways, all jokes aside, we're trying to be more proactive as TC to check in on project health so feel free to reach out
21:34:16 super
21:34:51 okay, if no one has anything else, we can call this a wrap
21:34:59 thanks everyone
21:35:02 #endmeeting
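
For readers following the multi-cell affinity TODO quoted at 21:03:39, the following is a minimal toy sketch of the scenario being described. It is not from the meeting and is not nova code; the data structures and function names are hypothetical placeholders (nova's real functional-test fixtures and scheduler filters are deliberately not used). It only illustrates the race: capacity forces the second group member into the other cell, and the "late" compute-side affinity check cannot see the member in the first cell, so it passes.

```python
# Toy model (NOT nova code) of the multi-cell affinity scenario from 21:03:39.
# Two cells, one host each, each host with room for exactly one instance.
# The scheduler-side affinity filter is omitted here; in the real race both
# requests pass it concurrently before either instance is recorded, so it
# offers no protection either.

cells = {
    'cell1': {'host1': []},   # cell name -> {host name -> list of instance ids}
    'cell2': {'host2': []},
}

affinity_group = []  # instance ids that are supposed to share a host


def place(instance_id):
    """Placement-style selection: first host with free capacity, any cell."""
    for cell, hosts in cells.items():
        for host, instances in hosts.items():
            if len(instances) < 1:
                return cell, host
    raise RuntimeError('no capacity left for %s' % instance_id)


def late_affinity_check(cell, host):
    """Compute-side re-check: it can only see group members in its OWN cell,
    so a member already running in the other cell is invisible to it."""
    visible_members = [
        i for h in cells[cell].values() for i in h if i in affinity_group
    ]
    return all(i in cells[cell][host] for i in visible_members)


def boot(instance_id):
    cell, host = place(instance_id)
    assert late_affinity_check(cell, host), 'late check caught the violation'
    cells[cell][host].append(instance_id)
    affinity_group.append(instance_id)
    return cell, host


if __name__ == '__main__':
    # Instance A fills host1; instance B cannot fit there, so placement puts
    # it on host2 in cell2, and the late check passes in both cells even
    # though the affinity policy is actually broken.
    print(boot('instance-a'))   # ('cell1', 'host1')
    print(boot('instance-b'))   # ('cell2', 'host2') -> affinity not enforced
```

A real functional test for this, as discussed in the meeting, would have to drive two cells with one compute host each and exercise the actual scheduler and compute paths; the sketch above only shows why the late check alone is insufficient once group members span cells.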