14:00:06 #startmeeting neutron_drivers
14:00:07 Meeting started Fri Jan 11 14:00:06 2019 UTC and is due to finish in 60 minutes. The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 The meeting name has been set to 'neutron_drivers'
14:02:23 o/
14:02:30 hi
14:02:43 hi
14:02:45 hi
14:02:59 hi
14:03:23 hi, how are you?
14:03:33 very good, thx :)
14:03:58 good :)
14:04:10 let's wait a minute to see if yamamoto or haleyb join in
14:05:02 cheng1, kailun: these are the RFEs we have in the agenda for today: https://bugs.launchpad.net/neutron/+bugs?field.tag=rfe-triaged
14:05:17 do you want to discuss something else?
14:05:38 can we talk about this bug https://bugs.launchpad.net/neutron/+bug/1808731
14:05:38 Launchpad bug 1808731 in neutron "[RFE] Needs to restart metadata proxy with the start/restart of l3/dhcp agent" [Undecided,Triaged] - Assigned to cheng li (chengli3)
14:05:46 if we have time today
14:05:52 I have no topic, just interested in some of the topics in the agenda :)
14:05:58 thanks :)
14:07:29 cheng1: most likely we won't have time. Here's how the process works: the RFEs move forward through a pipeline in the following order: rfe, rfe-confirmed, rfe-triaged. We discuss them here in the rfe-triaged stage
14:07:47 before that, we conduct the conversation with you in launchpad
14:09:18 what I can offer you is that I will provide feedback in launchpad today and will ask the other members of the team to do the same
14:09:31 we'll try to discuss it next week. Does that work?
14:09:43 mlavalle: sure, thanks
14:09:46 * haleyb_ wanders in late
14:10:07 haleyb_: right when we are about to start the discussion
14:10:59 Again, these are the RFEs that we have up for discussion today: https://bugs.launchpad.net/neutron/+bugs?field.tag=rfe-triaged
14:11:11 #topic RFEs discussion
14:11:36 Let's start from the top:
14:11:40 https://bugs.launchpad.net/neutron/+bug/1806052
14:11:40 Launchpad bug 1806052 in neutron "[RFE] Changing segmentation_id of existing network should be allowed" [Wishlist,New]
14:11:50 submitted by our own haleyb_
14:12:07 is there anything you want to add to it, haleyb_?
14:13:14 sorry, irc got messed up, segment_id rfe?
14:13:21 haleyb: yes
14:13:48 do you want to add anything to it?
14:14:58 i hadn't answered the questions, but yes, it would require re-binding all the ports and assumes downtime
14:15:32 so the downtime is acceptable in this use case, correct?
14:15:35 but that is ok for our use case since deleting and re-creating is not preferred
14:16:00 yes, downtime while re-binding is ok in this case
14:16:08 the RFE makes sense to me
14:16:12 I think that if a user wanted to change it now, they would need to remove and add new networks, so the downtime would be even longer; this can be considered an improvement IMO
14:16:32 it makes sense to me
14:16:34 and I also think that it makes sense
14:16:46 makes sense to me
14:16:47 even with downtime, users do not need to recreate their VMs for example
14:17:01 should we go ahead and start writing a spec?
14:17:44 yes, we could do that
14:17:54 is this something that you plan to work on, haleyb?
14:18:18 agree. it would clarify the scope of the work.
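As a side note on what this RFE would enable: the provider:segmentation_id of an existing network cannot currently be changed, which is why the delete-and-recreate workaround comes up above. Below is a minimal sketch with openstacksdk of the in-place update the RFE proposes; the cloud name, network name, and VNI are made up, and the call would only succeed once the attribute becomes mutable as the RFE describes.

```python
import openstack

# Hypothetical cloud and network names; assumes admin credentials in clouds.yaml.
conn = openstack.connect(cloud='mycloud')
net = conn.network.find_network('prod-net')

# Per the RFE, this update would be accepted and trigger a re-binding of all
# ports on the network, with the downtime discussed above.
conn.network.update_network(net, provider_segmentation_id=2007)
```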
14:18:34 mlavalle: either myself or someone on our team, yes
14:19:30 also I think we need to consider how to do this in such a way that we don't overwhelm the server with massive re-bindings
14:20:35 i can add that to the bug
14:20:48 yeah
14:21:03 I already marked it approved
14:21:12 so please go ahead working on the spec
14:21:54 ack
14:22:03 Next up is https://bugs.launchpad.net/neutron/+bug/1808062
14:22:03 Launchpad bug 1808062 in neutron "[RFE] Limit VXLAN to within Neutron availability zones" [Wishlist,New]
14:22:45 kailun: btw, forgot to mention you are always welcome to attend
14:22:55 I wonder if L2pop isn't a "solution" for the use case described in this BZ
14:23:20 slaweq: what is "BZ"?
14:23:31 amotoki: sorry, launchpad
14:23:37 BZ is bugzilla
14:23:43 too much D/S work ;)
14:23:44 aha
14:24:45 but even tweaking l2pop, the simplified mesh will cross AZ boundaries, which is what the submitter proposes to prevent
14:25:08 l2pop aims to simplify the mesh, right?
14:25:17 mlavalle: no problem, thx :)
14:25:46 mlavalle: but if the AZ limits VMs to be spawned only in one AZ, then L2pop should ensure that tunnels are created only between nodes from the same AZ, right?
14:26:26 and if the AZ doesn't guarantee that, how can we create tunnels only in one AZ? What about VMs which are in the same network but on hosts from another AZ?
14:27:10 he is showing in his description that the tunnels were formed across AZs
14:27:27 but yeah, maybe he didn't have l2pop enabled
14:27:33 that's a good point
14:28:02 i should have asked dan yesterday in person :-/
14:28:29 this rfe reminds me of the "segment" feature. if we split a network into VXLAN zones, ports cannot communicate across VXLAN zones.
14:29:41 i haven't fully understood the motivation of this (and NSX transport zone)
14:30:05 amotoki: he proposes this as a config option and presumably he is willing to pay that price
14:30:44 can it somehow be restricted from the nova side that VMs connected to one network will always be only in one AZ?
14:30:58 or should we ensure somehow that ports connected to such a network are only in one AZ?
14:31:14 without that I think this might be error-prone for users
14:31:59 that's the reason I mentioned the "segment" feature. do we need to map subnets on a network into segments when we support this?
14:32:33 at least it seems we cannot use the same subnet for different VXLAN zones (in this case, AZs)
14:33:13 right, if a user deploys this, all the networking is contained in an AZ
14:34:21 we would be segregating networking by AZ
14:34:23 so perhaps this is related to slaweq's l2pop question. this means a neutron network is split into multiple segments and l2pop will work inside a segment
14:35:26 I think it would mean a network would not span more than one AZ
14:35:47 mlavalle: same as my understanding
14:36:38 ah, i see.
14:36:58 there are two interpretations: one is that a neutron network is limited to inside one AZ
14:37:19 yes, that's one
14:37:33 the other is that a neutron network is spread across AZs and a VXLAN zone(?) can be mapped into a segment.
14:37:43 I first thought the latter option.
14:38:23 so my comment "same as my understanding" looks not correct
14:38:29 No, I don't think that is what he is proposing: he mentions an example where firewalls prevent traffic across AZs
14:39:08 and therefore it doesn't make sense to have VXLAN across the AZs
14:39:42 so it really boils down to enabling AZs at the network level, I think
14:39:57 mlavalle: but isn't it the same as the situation with segments?
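For context on the l2pop question above: the l2population mechanism driver prunes the tunnel mesh so that an agent only builds VXLAN tunnels toward hosts that actually have ports on the same network, but it has no notion of availability zones. A rough sketch of how it is typically enabled with ML2/OVS follows; file locations and values are deployment-specific.

```ini
# ml2_conf.ini on the neutron server: add the l2population mechanism driver
[ml2]
mechanism_drivers = openvswitch,l2population

# openvswitch_agent.ini on each compute/network node
[agent]
tunnel_types = vxlan
l2_population = true
```

Whether this alone keeps tunnels inside one AZ depends on where VMs attached to the network are scheduled, which is what the discussion turns to next.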
14:40:58 no, because if you are in a segment of a network, you expect to be able to communicate with any other segment in your same network
14:41:05 it also would require some guidance for operators, for example possibly needing to plan on having a network node per AZ
14:41:48 mlavalle: yes, operators are responsible for ensuring traffic is reachable across segments (in the case of the segment feature).
14:42:38 correct. segments are more admin constructs. tenants, for the most part, should worry about segmentation. They just see their network
14:42:57 tenants shouldn't worry about segments ^^^
14:43:27 njohnston: yes, this has deployment guidance implications
14:44:43 so I propose to post some clarifying questions in the RFE as the next step:
14:45:01 1) as slaweq said, would l2pop be an implicit solution for this?
14:45:34 2) Is the submitter proposing to constrain networks to span only 1 AZ?
14:45:56 3) What would be the impact for the deployer in terms of networking nodes, etc?
14:46:00 makes sense?
14:46:07 +1
14:46:10 +1
14:46:28 +1
14:46:39 +1
14:46:59 ok, for the sake of time, I'll post them at the end of the meeting
14:47:28 let's go to the third RFE of the day: https://bugs.launchpad.net/neutron/+bug/1809878
14:47:28 Launchpad bug 1809878 in neutron "[RFE] Move sanity-checks to neutron-status CLI tool" [Wishlist,New] - Assigned to Slawek Kaplonski (slaweq)
14:48:25 I like this proposal.
14:48:30 yes, I was thinking that now, when we have the "neutron-status upgrade check" tool
14:48:42 I think it makes a lot of sense, and I like how it enables stadium projects
14:48:44 we can also do "neutron-status sanity check" as a subcommand of it
14:49:21 yes, it also made sense to me last night
14:49:23 I can take care of it and move the existing sanity checks to the new CLI tool and format but
14:50:52 slaweq: do other projects have sanity check tools as well? just wondering if it also has uses outside neutron
14:51:03 it also proposes to allow stadium/3rd party projects to add sanity checks. that is nice too.
14:51:04 haleyb: I don't know in fact
14:51:43 I don't know if there are other sanity checks in other projects either :p
14:52:22 slaweq: you ended your sentence above with "but". What's the "but"?
14:52:40 maybe matt does? was just a thought, didn't want to necessarily make this a stadium project
14:52:45 mlavalle: I don't know :)
14:52:52 nevermind
14:52:53 it shouldn't be there :)
14:52:54 sorry
14:53:01 \b\b\b\
14:53:30 and i meant openstack not neutron stadium btw
14:53:42 slaweq: would it make sense to get mriedem's opinion?
14:53:50 mlavalle: yes, sure
14:53:52 haleyb: ^^^^
14:53:59 I can ask him for his opinion about this
14:54:15 mlavalle: ack
14:54:22 he is off this week (suffering in Cancun)
14:54:38 he'll be back on Monday
14:54:53 ok, I will ask him next week to take a look at this RFE then
14:54:58 thx mlavalle for the info
14:55:21 ok, let's finish the discussion on this one once we get Matt's opinion
14:55:28 +1
14:55:36 that's the Matt you meant, haleyb, right?
14:56:05 mlavalle: yes, the suffering Matt :)
14:56:42 I also encourage the team to provide feedback on https://bugs.launchpad.net/neutron/+bug/1808731
14:56:42 Launchpad bug 1808731 in neutron "[RFE] Needs to restart metadata proxy with the start/restart of l3/dhcp agent" [Undecided,Triaged] - Assigned to cheng li (chengli3)
14:57:10 so we can help cheng1 next week
14:57:25 ok
14:57:35 Thanks for attending
14:57:43 Have a great weekend!
14:57:50 #endmeeting
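A closing note on the sanity-check RFE: "neutron-status upgrade check" is built on oslo.upgradecheck, so moving the sanity checks under the same tool would presumably reuse that pattern. Below is a minimal sketch of what a check could look like; the class, check name, and wiring are illustrative only, not existing Neutron code.

```python
from oslo_upgradecheck import upgradecheck


class SanityChecks(upgradecheck.UpgradeCommands):
    """Hypothetical container for checks run by a future
    "neutron-status sanity check" subcommand."""

    def _check_ovs_installed(self):
        # A real check would probe the host (e.g. minimum Open vSwitch or
        # iproute2 versions) and return WARNING or FAILURE with details.
        return upgradecheck.Result(upgradecheck.Code.SUCCESS,
                                   'Open vSwitch looks usable')

    # (display name, check function) pairs evaluated in order.
    _upgrade_checks = (
        ('Open vSwitch installed', _check_ovs_installed),
    )
```

Stadium and third-party projects could then plug in their own check classes, presumably via a setuptools entry point similar to the one the existing upgrade checks use, which is the part of the proposal about enabling stadium projects mentioned above.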