14:00:06 <mlavalle> #startmeeting neutron_drivers
14:00:07 <openstack> Meeting started Fri Jan 11 14:00:06 2019 UTC and is due to finish in 60 minutes.  The chair is mlavalle. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 <openstack> The meeting name has been set to 'neutron_drivers'
14:02:23 <njohnston> o/
14:02:30 <slaweq> hi
14:02:43 <cheng1> hi
14:02:45 <amotoki> hi
14:02:59 <kailun> hi
14:03:23 <mlavalle> hi, how are you?
14:03:33 <slaweq> very good, thx :)
14:03:58 <amotoki> good :)
14:04:10 <mlavalle> let's wait a minute to see if yamamoto or haleyb join in
14:05:02 <mlavalle> cheng1, kailun: these are the RFEs we have in the agenda for today: https://bugs.launchpad.net/neutron/+bugs?field.tag=rfe-triaged
14:05:17 <mlavalle> do you want to discuss something else?
14:05:38 <cheng1> can we talk about this bug https://bugs.launchpad.net/neutron/+bug/1808731
14:05:38 <openstack> Launchpad bug 1808731 in neutron "[RFE] Needs to restart metadata proxy with the start/restart of l3/dhcp agent" [Undecided,Triaged] - Assigned to cheng li (chengli3)
14:05:46 <cheng1> if we have time today
14:05:52 <kailun> I have no topic, just interested in some of the topics in the agenda :)
14:05:58 <kailun> thanks :)
14:07:29 <mlavalle> cheng1: most likely we won't have time. Here's how the process works: RFEs move forward through a pipeline in the following order: rfe, rfe-confirmed, rfe-triaged. We discuss them here in the rfe-triaged stage
14:07:47 <mlavalle> before that, we conduct the conversation with you in Launchpad
14:09:18 <mlavalle> what I can offer you is that I will provide feedback in Launchpad today and will ask the other members of the team to do the same
14:09:31 <mlavalle> we'll try to discuss it next week. Does that work?
14:09:43 <cheng1> mlavalle: sure, thanks
14:09:46 * haleyb_ wanders in late
14:10:07 <mlavalle> haleyb_: right when we are about to start the discussion
14:10:59 <mlavalle> Again, these are the RFEs that we have up for discussion today: https://bugs.launchpad.net/neutron/+bugs?field.tag=rfe-triaged
14:11:11 <mlavalle> #topic RFEs discussion
14:11:36 <mlavalle> Let's start from the top:
14:11:40 <mlavalle> https://bugs.launchpad.net/neutron/+bug/1806052
14:11:40 <openstack> Launchpad bug 1806052 in neutron "[RFE] Changing segmentation_id of existing network should be allowed" [Wishlist,New]
14:11:50 <mlavalle> submitted by our own haleyb_
14:12:07 <mlavalle> is there anything you want to add to it, haleyb_?
14:13:14 <haleyb> sorry, irc got messed-up, segment_id rfe?
14:13:21 <mlavalle> haleyb: yes
14:13:48 <mlavalle> do you want to add anything to it?
14:14:58 <haleyb> i hadn't answered the questions, but yes, it would require re-binding all the ports and assumes downtime
14:15:32 <mlavalle> so the downtime is acceptable in this use case, correct?
14:15:35 <haleyb> but that is ok for our use case since deleting and re-creating is not preferred
14:16:00 <haleyb> yes, downtime while re-binding is ok in this case
14:16:08 <mlavalle> the RFE makes sense to me
14:16:12 <slaweq> I think that if a user wanted to change it now, they would need to remove and re-create the networks, so the downtime would be even longer; this can be considered an improvement IMO
14:16:32 <amotoki> it makes sense to me
14:16:34 <slaweq> and I also think that it makes sense
14:16:46 <njohnston> makes sense to me
14:16:47 <amotoki> even with downtime, users do not need to recreate their VMs for example
14:17:01 <mlavalle> should we go ahead and start writing a spec?
14:17:44 <haleyb> yes, we could do that
14:17:54 <mlavalle> is this something that you plan to work on haleyb?
14:18:18 <amotoki> agree. it would clarify the scope of the work.
14:18:34 <haleyb> mlavalle: either myself or someone on our team, yes
14:19:30 <mlavalle> also I think we need to consider how to do this in such a way that we don't overwhelm the server with massive re-bindings
14:20:35 <haleyb> i can add that to the bug
14:20:48 <mlavalle> yeah
14:21:03 <mlavalle> I already marked it approved
14:21:12 <mlavalle> so please go ahead working on the spec
14:21:54 <haleyb> ack
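For context, the approved RFE proposes lifting Neutron's current restriction on updating the segmentation ID of an existing network. A rough sketch of what such an update could look like against the Neutron v2.0 API, assuming the spec keeps the existing provider:segmentation_id attribute and simply allows updating it (the endpoint, token, and network ID below are placeholders):

    # Hypothetical sketch: update the segmentation ID of an existing network.
    # Today Neutron rejects this update (the premise of the RFE); the proposal
    # would allow it, at the cost of re-binding every port on the network.
    import requests

    NEUTRON_URL = "http://controller:9696"   # placeholder endpoint
    TOKEN = "ADMIN_TOKEN"                     # placeholder Keystone admin token
    NETWORK_ID = "NETWORK_UUID"               # placeholder network UUID

    resp = requests.put(
        f"{NEUTRON_URL}/v2.0/networks/{NETWORK_ID}",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json={"network": {"provider:segmentation_id": 2016}},
    )
    resp.raise_for_status()
    # All ports on the network would then have to be re-bound, which is the
    # downtime (and server load) discussed above.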
14:22:03 <mlavalle> Next up is https://bugs.launchpad.net/neutron/+bug/1808062
14:22:03 <openstack> Launchpad bug 1808062 in neutron "[RFE] Limit VXLAN to within Neutron availability zones" [Wishlist,New]
14:22:45 <mlavalle> kailun: btw, forgot to mention you are always welcome to attend
14:22:55 <slaweq> I wonder if L2pop isn't already a "solution" for the use case described in this BZ
14:23:20 <amotoki> slaweq: what is "BZ"?
14:23:31 <slaweq> amotoki: sorry, launchpad
14:23:37 <slaweq> BZ is bugzilla
14:23:43 <slaweq> too much D/S work ;)
14:23:44 <amotoki> aha
14:24:45 <mlavalle> but even tweaking l2pop, the simplified mesh will cross AZ boundaries, which is what the submitter proposes to prevent
14:25:08 <mlavalle> l2pop aims to simplify the mesh, right?
14:25:17 <kailun> mlavalle: no problem, thx :)
14:25:46 <slaweq> mlavalle: but if the AZ limits VMs to being spawned only in one AZ, then L2pop should ensure that tunnels are created only between nodes in the same AZ, right?
14:26:26 <slaweq> and if the AZ doesn't guarantee that, how can we create tunnels only in one AZ? What about VMs which are in the same network but on hosts in another AZ?
14:27:10 <mlavalle> he is showing in his description that the tunnels were formed across AZs
14:27:27 <mlavalle> but yeah, maybe he didn't have l2pop enabled
14:27:33 <mlavalle> that's a good point
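For reference on the l2pop point: l2pop is not enabled by default; it has to be turned on both as an ML2 mechanism driver and in the L2 agent, so it is plausible the submitter's deployment ran without it. A typical configuration looks roughly like this:

    # ml2_conf.ini
    [ml2]
    mechanism_drivers = openvswitch,l2population

    # openvswitch_agent.ini
    [agent]
    tunnel_types = vxlan
    l2_population = true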
14:28:02 <haleyb> i should have asked dan yesterday in person :-/
14:28:29 <amotoki> this rfe reminds me of the "segment" feature. if we split a network into VXLAN zones, ports cannot communicate across VXLAN zones.
14:29:41 <amotoki> i haven't fully understood the motivation of this (and NSX transport zone)
14:30:05 <mlavalle> amotoki: he proposes this as a config option and presumably he is willing to pay that price
14:30:44 <slaweq> can it somehow be restricted from the nova side so that VMs connected to one network will always be in only one AZ?
14:30:58 <slaweq> or should we somehow ensure that ports connected to such a network are only in one AZ?
14:31:14 <slaweq> without that I think that this might be error-prone for users
14:31:59 <amotoki> that's the reason I mentioned the "segment" feature. do we need to map subnets on a network into segments when we support this?
14:32:33 <amotoki> at least it seems we cannot use the same subnet for different VXLAN zones (in this case, AZs)
14:33:13 <mlavalle> right, if a user deploys this, all the networking is contained in an AZ
14:34:21 <mlavalle> we would be segregating networking by AZ
14:34:23 <amotoki> so perhaps this is related to slaweq's l2pop question. this means a neutron network is split into multiple segments and l2pop will work inside a segment
14:35:26 <mlavalle> I think it would mean a network would not span more than one AZ
14:35:47 <amotoki> mlavalle: same as my understanding
14:36:38 <amotoki> ah, i see.
14:36:58 <amotoki> there are two interpretations: one is that a neutron network is limited to a single AZ
14:37:19 <mlavalle> yes, that's one
14:37:33 <amotoki> the other is that a neutron network is spread across AZs and a VXLAN zone(?) can be mapped to a segment.
14:37:43 <amotoki> I first thought the latter option.
14:38:23 <amotoki> so my comment "same as my understanding" doesn't look correct
14:38:29 <mlavalle> No, I don't think that is what he is proposing: he mentions an example where firewalls prevent traffic across AZs
14:39:08 <mlavalle> and therefore it doesn't make sense to have VXLAN across the AZs
14:39:42 <mlavalle> so it really boils down to enabling AZs at the network level, I think
14:39:57 <amotoki> mlavalle: but isn't it the same as the situation with segments?
14:40:58 <mlavalle> no, because if you are in a segment of a network, you expect to be able to communicate with any other segment in your same network
14:41:05 <njohnston> it also would require some guidance for operators, for example possibly needing to plan on having a network node per AZ
14:41:48 <amotoki> mlavalle: yes, operators are responsible to ensure traffic is reachable across segments (in case of the segment feature).
14:42:38 <mlavalle> correct. segments are more of an admin construct. tenants, for the most part, shouldn't worry about segments. They just see their network
14:43:27 <mlavalle> njohnston: yes, this has deployment guidance implications
14:44:43 <mlavalle> so I propose to post some clarifying questions in the RFE as a next step:
14:45:01 <mlavalle> 1) as slaweq said, would l2pop be an implicit solution for this?
14:45:34 <mlavalle> 2) Is the submitter proposing to constrain networks to span only 1 AZ?
14:45:56 <mlavalle> 3) What would be the impact for the deployer in terms of networking nodes, etc?
14:46:00 <mlavalle> makes sense?
14:46:07 <njohnston> +1
14:46:10 <amotoki> +1
14:46:28 <haleyb> +1
14:46:39 <slaweq> +1
14:46:59 <mlavalle> ok, for the sake of time, I'll post them at the end of the meeting
14:47:28 <mlavalle> let's go to the third RFE of the day: https://bugs.launchpad.net/neutron/+bug/1809878
14:47:28 <openstack> Launchpad bug 1809878 in neutron "[RFE] Move sanity-checks to neutron-status CLI tool" [Wishlist,New] - Assigned to Slawek Kaplonski (slaweq)
14:48:25 <amotoki> I like this proposal.
14:48:30 <slaweq> yes, I was thinking that now, when we have the "neutron-status upgrade check" tool
14:48:42 <njohnston> I think it makes a lot of sense, and I like how it enables stadium projects
14:48:44 <slaweq> we can also do "neutron-status sanity check" as a subcommand of it
14:49:21 <mlavalle> yes, it also made sense to me last night
14:49:23 <slaweq> I can take care of it and move the existing sanity checks to the new CLI tool and format, but
14:50:52 <haleyb> slaweq: do other projects have sanity check tools as well?  just wondering if it also has uses outside neutron
14:51:03 <amotoki> it also proposes to allow stadium/3rd party projects to add sanity checks. it is nice too.
14:51:04 <slaweq> haleyb: I don't know in fact
14:51:43 <amotoki> I don't know whether there are other sanity checks in other projects either :p
14:52:22 <mlavalle> slaweq: you ended your sentence above with "but". What's the "but"?
14:52:40 <haleyb> maybe matt does?  was just a thought, didn't want to necessarily make this a stadium project
14:52:45 <slaweq> mlavalle: I don't know :)
14:52:52 <mlavalle> nevermind
14:52:53 <slaweq> it shouldn't be there :)
14:52:54 <slaweq> sorry
14:53:01 <haleyb> \b\b\b\
14:53:30 <haleyb> and i meant openstack not neutron stadium btw
14:53:42 <mlavalle> slaweq: would it make sense to get mriedem's opinion?
14:53:50 <slaweq> mlavalle: yes, sure
14:53:52 <mlavalle> haleyb: ^^^^
14:53:59 <slaweq> I can ask him for his opinion about this
14:54:15 <haleyb> mlavalle: ack
14:54:22 <mlavalle> he is off this week (suffering in Cancun)
14:54:38 <mlavalle> he'll be back on Monday
14:54:53 <slaweq> ok, I will ask him next week to take a look at this RFE then
14:54:58 <slaweq> thx mlavalle for info
14:55:21 <mlavalle> ok, let's finish the discussion on this one once we get Matt's opinion
14:55:28 <slaweq> +1
14:55:36 <mlavalle> that's the Matt you meant, haleyb, right?
14:56:05 <haleyb> mlavalle: yes, the suffering Matt :)
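For reference, from an operator's point of view the existing tool and the proposed extension discussed above would look roughly like this (the sanity subcommand name is illustrative, taken from the RFE discussion, and not final):

    # Existing upgrade check tool (already shipped with Neutron):
    $ neutron-status upgrade check

    # What the RFE proposes, roughly:
    $ neutron-status sanity check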
14:56:42 <mlavalle> I also encourage the team to provide feedback to https://bugs.launchpad.net/neutron/+bug/1808731
14:56:42 <openstack> Launchpad bug 1808731 in neutron "[RFE] Needs to restart metadata proxy with the start/restart of l3/dhcp agent" [Undecided,Triaged] - Assigned to cheng li (chengli3)
14:57:10 <mlavalle> so we can help cheng1 next week
14:57:25 <slaweq> ok
14:57:35 <mlavalle> Thanks for attending
14:57:43 <mlavalle> Have a great weekend!
14:57:50 <mlavalle> #endmeeting