14:00:28 <mriedem> #startmeeting nova
14:00:28 <openstack> Meeting started Thu Nov 3 14:00:28 2016 UTC and is due to finish in 60 minutes. The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:31 <openstack> The meeting name has been set to 'nova'
14:00:34 <dansmith> o/
14:00:35 <takashin> o/
14:00:41 <raj_singh> o/
14:00:42 <diana_clarke> o/
14:00:45 <lyarwood> o/
14:00:58 <cdent> o/
14:01:09 <johnthetubaguy> o/
14:01:33 <jroll> \o
14:02:19 <mriedem> alright let's get started
14:02:29 <mriedem> #link meeting agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:02:38 <mriedem> #topic release news
14:02:45 <mriedem> #link Ocata release schedule: https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule
14:02:53 <mriedem> ^ has the official schedule for nova now
14:03:09 <mriedem> the only nova-specific date is the spec approval freeze
14:03:17 <mriedem> #info Next upcoming date: Nov 17: o-1 milestone, spec approval freeze
14:03:34 <mriedem> so we have 2 weeks to approve specs for ocata
14:03:41 <mriedem> #link Open specs: https://review.openstack.org/#/q/project:openstack/nova-specs+status:open
14:03:46 <mriedem> and there are lots of them open
14:04:09 <mriedem> #info we've already approved 43 blueprints for ocata https://blueprints.launchpad.net/nova/ocata
14:04:23 <mriedem> so we've already got a lot of committed work to get done
14:04:59 <mriedem> so having said that, if you have a spec that needs a 2nd +2 or should be trivial, speak up in -nova to get eyes on it
14:05:12 <mriedem> questions?
14:05:24 <mriedem> moving on
14:05:26 <mriedem> #topic bugs
14:05:37 <mriedem> there are no critical bugs right now
14:05:58 <mriedem> however, the number of new untriaged bugs is on the rise
14:06:10 <mriedem> we used to keep that hanging <40 but it's at 72 right now
14:06:44 <mriedem> so please help out and triage a bug or two each day to keep those numbers down, or at least spot anything that's really critical from the rest of the noise
14:07:21 <mriedem> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:07:26 <mriedem> pretty quiet
14:07:27 <dansmith> mriedem: was there any outcome of looking at why that n-net patch was hanging so bad?
14:07:54 <mriedem> dansmith: nope. i abandoned just to kill it until we know we don't have any non-cellsv1 n-net jobs in ocata
14:08:09 <dansmith> okay I'm still curious how it was evading the timer
14:08:12 <mriedem> me too
14:08:23 <dansmith> because I have some bitcoin to mine, and that would be excellent
14:08:41 <mriedem> there has been improvement on removing n-net jobs from ocata gate https://review.openstack.org/#/q/topic:neutron-default
14:08:42 * johnthetubaguy giggles
14:08:51 <mriedem> i plan on working on clarkb's d-g change today
14:08:56 <mriedem> which will make neutron the default in ocata jobs
14:09:13 <mriedem> i think i have to cap the grenade n-net jobs at newton first, but that's trivial
14:09:54 <mriedem> i don't have any news about 3rd party ci really
14:10:08 <mriedem> powervm is going to work on getting runs of their CI on nova changes, non-voting
14:10:18 <mriedem> anthonyper is going to work on making the xenproject ci run with neutron
14:10:27 <mriedem> powerkvm and vmware ci already already working on using neutron
14:10:33 <mriedem> *are already
14:10:45 <mriedem> questions about bugs and/or CI?
14:11:01 <mriedem> alright
14:11:03 <mriedem> #topic reminders
14:11:11 <mriedem> #link Ocata review priorities https://etherpad.openstack.org/p/ocata-nova-priorities-tracking
14:11:17 <mriedem> my guess is ^ is stale
14:11:45 * dansmith is updating now
14:11:49 <mriedem> #help if you're a subteam lead or have a section in the https://etherpad.openstack.org/p/ocata-nova-priorities-tracking etherpad please make sure it's up to date with the latest priority reviews
14:12:00 <johnthetubaguy> the api one has been updated by alex_xu recently
14:12:29 <mriedem> #topic Stable branch status: https://etherpad.openstack.org/p/stable-tracker
14:12:53 <mriedem> i have one stable/newton change to mention, https://review.openstack.org/#/c/391086/
14:13:06 <mriedem> that's the backport of the db schema migration made last week
14:13:16 <mriedem> to fix a regression with creating servers in newton
14:13:28 <mriedem> i'd like to get that merged and the reno that sits on top of it and then put out a release
14:13:48 <mriedem> it's just a table alter to make a column bigger
14:14:06 <mriedem> so johnthetubaguy dansmith i'm probably looking to you guys there
14:14:09 <dansmith> yeah
14:14:38 <mriedem> i don't really have anything to mention about stable otherwise
14:14:44 <mriedem> i believe liberty-eol is in a couple of weeks
14:14:56 <mriedem> moving on
14:15:03 <mriedem> #topic subteam highlights
14:15:14 <mriedem> dansmith: i know the cellsv2 meeting was cancelled, but any review priority you want to point out here?
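The stable/newton backport mriedem describes above is "just a table alter to make a column bigger". As a rough illustration only (the table and column names below are invented, and this is not the actual nova patch), widening a column is a single in-place ALTER on MySQL, while SQLite has no ALTER COLUMN and needs the copy-and-swap pattern:

```python
import sqlite3

# Hypothetical table/column names for illustration; the real backport
# alters a different nova table on MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (id INTEGER PRIMARY KEY, payload VARCHAR(255))")
conn.execute("INSERT INTO demo (payload) VALUES (?)", ("x" * 10,))

# On MySQL, widening the column would be one in-place statement:
#   ALTER TABLE demo MODIFY payload MEDIUMTEXT;
# SQLite cannot alter a column's type, so the generic fallback is to
# create a wider table, copy the rows, and swap the names:
conn.execute("CREATE TABLE demo_wide (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO demo_wide SELECT id, payload FROM demo")
conn.execute("DROP TABLE demo")
conn.execute("ALTER TABLE demo_wide RENAME TO demo")
conn.commit()
```

The in-place MySQL form is why this kind of backport is low-risk: no data is rewritten into a new table on that backend, only the column metadata changes.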
14:15:27 <dansmith> yeah, so, I've been working on this for the last few days:
14:15:28 <dansmith> https://review.openstack.org/#/q/topic:bp/cells-scheduling-interaction+project:openstack/nova+status:open
14:15:40 <dansmith> which is the move-instance-creation-to-conductor,
14:15:46 <dansmith> which is the first step to getting multicell working
14:15:57 <dansmith> I've made a lot of progress, but it definitely needs review from cellsv2ish people,
14:16:19 <dansmith> specifically bauzas at the moment because it includes some legacy scheduling stuff baked into the api that we might not want to import
14:16:30 <dansmith> there is also a multicell database fixture in there that probably needs some review from test-y people
14:16:33 <dansmith> as it creates temporary files
14:16:45 <bauzas> oh man, did I missed the nova meeting ?
14:16:47 <dansmith> other than that, not much has changed since last week and melwitt, et al have been out
14:16:54 * bauzas hates daylight shifts
14:16:59 <mriedem> doctor test is still on sabbatical
14:17:02 * bauzas waves super late \o
14:17:04 <mriedem> but i've starred the bottom change
14:17:27 <dansmith> mriedem: yeah, I added him to the review but might be good to get some other people to look so we don't delay because of a fixture
14:17:27 <bauzas> dansmith: yeah, I need to review that stack, that's in my pipe
14:17:32 <dansmith> bauzas: thanks
14:17:44 <mriedem> i'm glad we both know who doctor test is
14:17:52 <dansmith> I'm going to start working on the top WIP today
14:18:02 <dansmith> mriedem: of course :)
14:18:10 <dansmith> for one, he's the only one on sabbatical :)
14:18:31 <mriedem> alright anything else?
14:18:34 <dansmith> nay
14:18:37 <mriedem> thanks
14:18:42 <mriedem> edleafe: scheduler meeting highlights?
14:19:05 <cdent> I think edleafe I not yet returned. No particular highlights.
14:19:20 <mriedem> review priorities?
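dansmith mentions a multicell database fixture that creates temporary files and wants review from "test-y people". A loose sketch of that idea, with entirely hypothetical names (this is not nova's actual fixture): give each cell its own throwaway SQLite database file and remove them on teardown.

```python
import os
import sqlite3
import tempfile


class MultiCellDatabaseFixture:
    """Hypothetical sketch: one temporary SQLite file per cell."""

    def __init__(self, cells):
        self.cells = cells
        self.paths = {}

    def setUp(self):
        # Each cell gets an isolated database backed by a temp file.
        for cell in self.cells:
            fd, path = tempfile.mkstemp(suffix="-%s.db" % cell)
            os.close(fd)
            self.paths[cell] = path
            with sqlite3.connect(path) as conn:
                conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY)")

    def connect(self, cell):
        return sqlite3.connect(self.paths[cell])

    def cleanUp(self):
        # The "creates temporary files" concern: they must always be removed.
        for path in self.paths.values():
            os.unlink(path)
        self.paths.clear()
```

The review concern in the meeting is exactly the cleanUp path: a fixture that writes temp files has to guarantee they are deleted even when a test fails.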
14:19:34 <mriedem> i've been watching the series here
14:19:35 <mriedem> https://review.openstack.org/#/c/390062/
14:19:37 <cdent> anything from jaypipes, pretty much
14:19:38 <mriedem> for resource classes
14:20:06 <cdent> work has begun on the newton-leftovers
14:20:27 <cdent> and we still need to resolve the api on requesting a list of filtered resource providers
14:20:28 <mriedem> ok
14:20:42 <bauzas> I'm just rewriting https://review.openstack.org/#/c/386242/3/nova/objects/resource_provider.py based on cdent's comments on
14:21:04 <bauzas> and I did cut that patch into two changes, one for the object layer and one for the REST API exposure
14:21:35 <bauzas> that's it for me
14:21:37 <cdent> this will need to be fixed soon: https://bugs.launchpad.net/nova/+bug/1638681
14:21:37 <openstack> Launchpad bug 1638681 in OpenStack Compute (nova) "resource tracker sets wrong max_unit in placement Inventory" [Undecided,Triaged] - Assigned to Prateek Arora (parora)
14:21:50 <cdent> people are on it though, so no crisis
14:21:50 <mriedem> ok moving on
14:21:57 <mriedem> tdurakov: live migration meeting highlights?
14:23:06 <mriedem> ok i guess tdurakov isn't here, and didn't dump any highlights
14:23:23 <mriedem> i didn't attend the full meeting, but tdurakov has patches up to enable ceph + ephemeral in the live migration job
14:23:36 <mriedem> that's been his focus from what i can tell, i just need to re-review that series
14:24:05 <tdurakov> mriedem: hi, right, also, several patches that worth review
14:24:35 <mriedem> tdurakov: ok make sure those are in https://etherpad.openstack.org/p/ocata-nova-priorities-tracking please
14:24:43 <tdurakov> mriedem: acked
14:24:48 <mriedem> let's move on
14:24:53 <mriedem> alex_xu: api meeting highlights?
14:25:36 <mriedem> johnthetubaguy: ^ did you attend the api meeting this week?
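In the scheduler segment above, cdent flags bug 1638681: the resource tracker reporting a wrong max_unit in a placement Inventory. Why that matters: max_unit caps what one allocation may take, separately from the overall capacity of (total - reserved) * allocation_ratio. A sketch with hypothetical numbers (not nova's actual fix) of those semantics:

```python
# Hypothetical inventory record illustrating placement inventory fields.
inventory = {
    "resource_class": "MEMORY_MB",
    "total": 16384,
    "reserved": 512,
    "allocation_ratio": 1.5,
    "min_unit": 1,
    "max_unit": 16384,  # the bug: the tracker reported a wrong value here
    "step_size": 1,
}


def fits(inv, amount, used=0):
    """Can a single allocation of `amount` land on this inventory?"""
    # A single allocation is bounded by min_unit/max_unit/step_size...
    if not inv["min_unit"] <= amount <= inv["max_unit"]:
        return False
    if amount % inv["step_size"]:
        return False
    # ...and total consumption is bounded by the overcommit-adjusted capacity.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    return used + amount <= capacity
```

With a too-small max_unit reported by the tracker, the first check rejects large instances even when plenty of capacity remains, which is why the bug needs fixing soon.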
14:25:51 <johnthetubaguy> sorry, I was just looking up the reviews
14:26:07 <johnthetubaguy> capabilities API WG spec: https://review.openstack.org/#/c/386555/1
14:26:35 <johnthetubaguy> the parameter validation framework spec, to help cells v2 stuff: https://review.openstack.org/#/c/388518
14:26:41 <johnthetubaguy> also the POC for that spec is up
14:26:54 <johnthetubaguy> waiting on the spec for the /servers filter changes themselves
14:27:21 <johnthetubaguy> POC is: https://review.openstack.org/#/c/389003/
14:27:25 <johnthetubaguy> I think that the main bits
14:27:30 <mriedem> ok
14:27:40 <mriedem> i also wanted to point out 2 specs with +2s for the api
14:27:43 <jaypipes> hey guys, sorry for being late
14:27:52 <mriedem> https://review.openstack.org/#/c/386771/ and https://review.openstack.org/#/c/357884/
14:28:02 <mriedem> for simple tenant usage paging and the diagnostics info
14:28:10 <johnthetubaguy> yeah, they are on my TODO list now
14:28:41 <mriedem> o/
14:28:53 <mriedem> lbeliveau: was there an sriov/pci meeting this week?
14:29:07 <lbeliveau> mridem: yes
14:29:18 <mriedem> anything you want to share?
14:29:35 <lbeliveau> discussed mostly blueprints and patches to push (which is done already)
14:29:51 <lbeliveau> that's it
14:29:54 <mriedem> ok
14:30:02 <mriedem> gibi_: notifications meeting?
14:30:30 <mriedem> looks like several were just approved https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-ocata
14:30:35 <mriedem> making some decent progress there
14:30:46 <mriedem> and the searchlight team reported some bugs
14:31:04 <mriedem> for potential gaps, e.g. putting volume attachments (bdms) in the server notifications
14:31:16 <mriedem> i think searchlight's goal is to sync up the REST API and the notifications
14:31:35 <mriedem> the server notifications expose interface attachments, so it seems to make sense to also put bdms in that notification
14:31:39 <mriedem> anyone disagree?
14:32:01 <dansmith> no
14:32:08 <mriedem> note: i still worry about performance hits to building notification payloads that don't get sent
14:32:18 <mriedem> b/c to get the bdms we'll have to go back to the db
14:32:30 <mriedem> there was a ml thread on this but didn't go very far
14:33:01 <bauzas> I think it's acceptable to lookup the BDMs for sending them over the wire if we need them
14:33:06 <johnthetubaguy> I thought most of the time we already have the objects we are sending though
14:33:13 <cdent> mriedem: I got the impression that you were agreed with and it would be figured out when we get there?
14:33:14 <mriedem> sometimes we do
14:33:31 <mriedem> anyway, we can move on
14:33:40 <johnthetubaguy> frankly BDMs should be part of the instance, like all the other bits are, its so wasteful right now because its not, but thats a total distraction
14:33:59 <dansmith> johnthetubaguy: yeah that was the dumbest idea ever :/
14:34:09 <mriedem> cdent: i'm failing to parse that
14:34:24 <mriedem> what is it and there?
14:34:36 <johnthetubaguy> dansmith: you mean BDMs being separate?
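mriedem's worry above is paying for an extra database round-trip to fetch BDMs for a notification payload that may never be sent. One common guard for that (sketched here with invented names, not oslo.messaging's actual API) is to pass the expensive build step as a callable and only invoke it once we know the notification will be emitted:

```python
# Hypothetical sketch: defer the expensive payload build -- e.g. a BDM
# lookup that hits the database -- until emission is known to happen.
class Notifier:
    def __init__(self, enabled):
        self.enabled = enabled
        self.sent = []

    def emit(self, event_type, build_payload):
        """build_payload is a zero-arg callable doing the expensive work."""
        if not self.enabled:
            return  # skip the DB round-trip entirely
        self.sent.append((event_type, build_payload()))
```

This matches the direction mriedem suggests at 14:36:09: it only works if the messaging layer can tell nova whether notifications are enabled for a given host/queue in the first place.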
14:34:53 <cdent> mriedem: on the mailing list people agreed with your assessment of the problem and that it ought to be guarded against and we'll figure it out, somehow
14:34:53 <dansmith> johnthetubaguy: no BDMs including instance instead of instances including their BDMs
14:35:08 <johnthetubaguy> dansmith: right
14:35:12 <cdent> mriedem: where "it" is building notifications
14:35:22 <mriedem> cdent: yeah, there is agreement that it's an issue
14:35:27 <mriedem> which makes me feel warm and fuzzy
14:35:29 <johnthetubaguy> I guess its already done, so needs fixing like you suggested
14:36:09 <mriedem> probably just falls into the oslo.messaging realm to see if there is a way they can tell us if notifications are enabled for a certain host/queue
14:36:33 <mriedem> moving on
14:36:36 <mriedem> #stuck reviews
14:36:40 <mriedem> oops
14:36:44 <mriedem> #topic stuck reviews
14:36:51 <mriedem> (dane-fichter): Tempest changes to support security testing gate are held up. Some reviews from people in the Nova community would be helpful: https://review.openstack.org/#/c/392329/ and https://review.openstack.org/#/c/390085/
14:37:13 <dane-fichter> yes ok I have a question for nova folks
14:37:31 <dane-fichter> would it be acceptable to have the security scenarios in the barbican tempest plugin
14:37:40 <dane-fichter> or should they be in tempest itself
14:37:46 <mriedem> the barbican tempest plugin
14:38:09 <mriedem> which we'll enable when we pull barbican/devstack plugin into a security-specific CI job
14:38:10 <dane-fichter> alright I guess that solves that debate
14:38:13 <mriedem> :)
14:38:34 <dane-fichter> thanks. that's it for me
14:38:48 <mriedem> sure, thanks for bringing it up
14:38:59 <dane-fichter> no problem
14:39:03 <mriedem> #topic open discussion
14:39:14 <mriedem> (jroll): Specless blueprint for "Update Ironic plug/unplug_vifs to use new Ironic interface attach/detach APIs": https://blueprints.launchpad.net/nova/+spec/ironic-plug-unplug-vifs-update
14:39:21 <jroll> hey
14:39:36 <jroll> so the ironic spec is attached to this, curious if we want a nova spec as well
14:39:53 <jroll> the nova side is basically "rework plug_vifs to use a less crappy api"
14:40:16 <johnthetubaguy> does this want to be converted to an os-vif driver?
14:40:20 <mriedem> what's the status on the ironic api?
14:40:26 <jroll> if you want to wait for the ironic side to complete before approving the bp, that's cool
14:40:32 <mriedem> i do
14:40:38 <jroll> mriedem: spec in review, code up and working
14:40:55 <jroll> anyway, wanted to bring it up and see if we should start hacking on the spec
14:41:00 <jroll> or if it can just be done
14:41:04 <mriedem> will the special ironic + multitenant networking CI job in nova's experimental queue test this?
14:41:08 <jroll> johnthetubaguy: I'm not sure why we would or would not want to do that
14:41:16 <jroll> mriedem: all ironic jobs would exercise this
14:41:48 <jroll> it's refactoring, essentially, not a new feature
14:42:05 <mriedem> so with this change, it looks like 2 things could be eventually pulled out of the neutron api code we have, (1) the mac address set and (2) the binding host ID stuff?
14:42:07 <mriedem> is that correct?
14:42:17 <johnthetubaguy> jroll: in a cloud with libvirt and ironic, say, when you create a port for ironic you may activate a different neutron backend, and os-vif will drive the negotiation there, but there are more questions in that info than questions
14:42:37 <dansmith> johnthetubaguy: more questions than questions?
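jroll summarizes the nova side as "rework plug_vifs to use a less crappy api": instead of nova manipulating ironic port fields itself, the virt driver hands each VIF to ironic's new attach/detach interface and lets ironic choose the port. The shape of that refactor, sketched with entirely invented client and method names (see the linked spec and PoC for the real code):

```python
# Hypothetical stand-in for an ironic client exposing the new
# interface attach/detach APIs jroll describes.
class FakeIronicClient:
    def __init__(self):
        self.attached = {}  # node_uuid -> set of attached vif ids

    def vif_attach(self, node_uuid, vif_id):
        self.attached.setdefault(node_uuid, set()).add(vif_id)

    def vif_detach(self, node_uuid, vif_id):
        self.attached.get(node_uuid, set()).discard(vif_id)


class IronicDriverSketch:
    """Hypothetical driver: delegate VIF wiring to the ironic API."""

    def __init__(self, client):
        self.client = client

    def plug_vifs(self, node_uuid, network_info):
        for vif in network_info:
            self.client.vif_attach(node_uuid, vif["id"])

    def unplug_vifs(self, node_uuid, network_info):
        for vif in network_info:
            self.client.vif_detach(node_uuid, vif["id"])
```

This is consistent with johnthetubaguy's later observation that the PoC "only affect[s] the driver": the change stays inside plug_vifs/unplug_vifs rather than touching nova's neutron API layer.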
14:42:40 <dansmith> that's a lot of questions
14:42:57 <johnthetubaguy> hmm, not sure about the binding host id stuff moving, I need to get my head around how this really works
14:43:22 <jroll> mriedem: (1) yes, (2) don't think so
14:43:22 <jroll> actually, I think the nova code is up
14:43:23 <mriedem> jroll: does the ironic spec mention the impacts to nova?
14:43:51 <mriedem> jroll: if the ironic spec has the high-level nova impacts i'm fine with not duplicating that in a nova spec
14:43:57 <mriedem> i think that's what ironic has been doing of late
14:44:03 <jroll> mriedem: nova poc https://review.openstack.org/#/c/364413/
14:44:04 <johnthetubaguy> dansmith: creeping towards levels of infinity there I guess
14:44:16 <jroll> mriedem: and yeah, it's there https://review.openstack.org/#/c/317636/9/specs/approved/interface-attach-detach-api.rst
14:44:19 <jroll> line 185
14:44:54 <mriedem> ok, cool.
14:45:07 <johnthetubaguy> that POC kinds points towards it only affecting the driver, which is handy
14:45:08 <mriedem> so i think we'll just let the nova bp sit until the ironic spec is good to go
14:45:22 <johnthetubaguy> yeah, good to wait for the ironic BP to merge
14:45:25 <johnthetubaguy> spec
14:45:27 <jroll> sounds good, I'll bug you when that lands, hoping for next week
14:45:58 <mmedvede> hey nova team, I'd like to request permission for IBM PowerKVM CI to start voting on nova patches. We have been reporting on changes for quite some time now
14:46:00 <mriedem> i look forward to it
14:46:21 <mriedem> mmedvede: is that the nova-net one or the neutron one?
14:46:21 <jroll> awesome, thanks
14:46:32 <mmedvede> mriedem: the nova-net for now
14:46:41 <mriedem> mmedvede: i think i'd prefer to wait for the neutron backed job
14:46:46 <mriedem> and see how that shakes out
14:47:06 <dansmith> yeah, a week ago I wouldn't have cared
14:47:13 <mmedvede> mriedem: ok, sounds reasonable
14:47:18 <dansmith> but.. were right in the middle of trying to make it always fail :)
14:47:29 <dansmith> *we're
14:47:30 <mriedem> mmedvede: fwiw, if you're using devstack-gate, you might just start running neutron by default real soon
14:47:55 <mriedem> https://review.openstack.org/#/c/392934/
14:48:02 <mriedem> when ^ lands sparks might fly
14:48:15 <cdent> sparks++
14:48:17 <mmedvede> hehe
14:48:27 <mriedem> ok anything else?
14:48:45 <mriedem> nope. ok thanks everyone.
14:48:49 <mriedem> #endmeeting