14:00:36 #startmeeting neutron_drivers
14:00:37 Meeting started Fri Jan 10 14:00:36 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 hi
14:00:40 The meeting name has been set to 'neutron_drivers'
14:02:43 hi
14:02:59 hi amotoki
14:03:17 it looks like a slower start than usual :)
14:03:19 for now there are only 2 of us so let's wait for others
14:03:24 yeah
14:03:31 hi, sorry for the delay
14:03:38 mlavalle: welcome
14:03:55 let's wait 2 more minutes for haleyb and yamamoto, maybe they will join too
14:04:02 ok
14:04:07 no worries. it seems we are recovering from the holiday weeks
14:04:12 hi, was sitting here doing reviews
14:04:23 hi haleyb :)
14:05:35 ok, let's start as we have quorum already
14:05:47 welcome back to drivers meetings in 2020 :)
14:05:49 #topic RFEs
14:05:59 we have a couple of RFEs to discuss today
14:06:08 let's start with an easy one (I hope)
14:06:16 https://bugs.launchpad.net/neutron/+bug/1843165
14:06:16 Launchpad bug 1843165 in neutron "RFE: Adding support for direct ports with qos in ovs" [Wishlist,In progress] - Assigned to waleed mousa (waleedm)
14:06:29 this one seems pretty simple, a patch is already proposed as well
14:06:37 and amotoki even +2'ed this patch already
14:06:55 (hi, sorry for being late)
14:07:05 I forgot to check the RFE status :-(
14:07:21 amotoki: no problem :)
14:07:27 hi ralonsoh
14:07:42 so I'm ok with this proposal and this patch also
14:08:04 patch is here: https://review.opendev.org/#/c/611605/
14:08:56 o/
14:09:28 I am also ok with it. Just +2ed the patch
14:09:35 is this QoS API for offloaded ports the same as for regular OVS ports?
14:09:52 maybe we should add some new sanity checks to check the ovs and kernel versions
14:10:36 ralonsoh: from https://github.com/openvswitch/ovs/commit/e7f6ba220e10c0b560da097185514b6e33e2dc71 it seems that it's the same
14:10:47 yes, I was reading it
14:10:50 perfect for me!
14:10:51 You need to set ingress_policing_rate
14:11:00 as for regular OVS ports
14:11:11 * slaweq will be back in 1 minute
14:11:16 (actually it is not qos, but policing)
14:11:25 but it is the same as with normal OVS ports now
14:12:03 (I'll try to ask in my company for some HW with offload capabilities, to check this)
14:12:18 It looks like OVS provides the same abstraction for hw-offload ports
14:12:28 so neutron support looks simple.
14:12:36 sure!
14:13:22 ok, I'm back
14:13:51 amotoki: yes, from the neutron pov it looks very simple
14:14:21 so I think we can approve this rfe, or do haleyb or amotoki have any concerns?
14:14:30 yes, +1 from me
14:14:41 I'm fine with this proposal. the ovs version needs to be clarified.
14:15:00 ok, so I will approve it
14:15:27 amotoki: where would You like to have the ovs version clarified? in the docs somewhere?
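[A minimal sketch of the OVS-side mechanism discussed above, assuming a host where ovs-vsctl is available. The ingress_policing_* columns are existing OVS Interface columns; the helper itself is hypothetical, and neutron would go through its ovs_lib wrapper rather than shelling out like this.]

    # Hypothetical helper: "QoS" for an offloaded (or regular) OVS port is
    # plain ingress policing on the Interface record, expressed in kbps.
    import subprocess

    def set_ingress_policing(port_name, rate_kbps, burst_kbps):
        subprocess.check_call(
            ["ovs-vsctl", "set", "interface", port_name,
             "ingress_policing_rate=%d" % rate_kbps,
             "ingress_policing_burst=%d" % burst_kbps])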
14:15:50 a sanity check or the documentation would work
14:16:03 ahh, ok
14:16:21 I was thinking that You had something else in mind
14:16:38 I will add a note about that in the rfe and patch review after the meeting
14:16:44 cool
14:17:23 thx, so one is done
14:17:25 :)
14:17:27 next one
14:17:36 https://bugs.launchpad.net/neutron/+bug/1855854
14:17:36 Launchpad bug 1855854 in neutron "[RFE] Dynamic DHCP allocation pool" [Wishlist,New]
14:17:53 this one is a bit long so I will give You some time to read it :)
14:22:33 Interesting
14:24:20 one data point is that the dnsmasq maintainer might be warming to the patch, at least he responded this week with a "If I've understood correctly, this looks like it might be a viable solution."
14:24:38 the dnsmasq patch, that is
14:24:54 i don't see hjensas here
14:26:16 one question i have is: will this help support privacy addresses?
14:26:34 if this proposal is implemented, we cannot know the IP address(es) assigned to a port if the port uses the dynamic allocation pool, right?
14:26:45 although it is just for dhcpv6, not slaac
14:27:35 amotoki: right, neutron will not have the IP address associated with the port in its db
14:27:49 amotoki: looks like it, could possibly do it with a callback script
14:28:17 a provider might see that as a security issue if they can't look up a port by IP
14:28:33 thanks for the clarification
14:28:42 that also affects other things, like security groups
14:29:33 but reading this part: "In the Ironic example, a dynamic address will be used during instance provisioning. Once Ironic is done provisioning, a port with a static address, i.e a "standard" neutron port with a neutron ipam address allocation etc is bound to the final baremetal instance." it seems that the address would be unknown only for some time, right?
14:29:58 it seems this feature needs to be used for a specific subnet.
14:30:45 we seem to need some assumption on this feature as it affects SG, as njohnston_ commented.
14:32:04 slaweq: yes, it is only during the boot process
14:32:26 I think it is specific to the provisioning network(s) in ironic.
14:32:42 it sounds reasonable for that case.
14:34:17 amotoki: for ironic probably yes (if dnsmasq's patch will not be merged soon)
14:34:42 maybe this could also be solved by running an additional boot-specific dhcp server?
14:34:46 I agree, with the asterisk that until it is transitioned to a regular neutron port, unpredictable results might occur.
14:37:05 njohnston: I agree with that, if we will have it, it should be very well documented to make operators aware what the goal of this is and that this shouldn't be used always
14:37:13 frickler: that can work. we can run a DHCP server for a specific purpose in a subnet by specifying enable_dhcp=False in the neutron subnet.
14:38:20 amotoki: sorry but I'm not sure if I understood correctly
14:38:41 You want to run something different than dnsmasq for subnets with disabled dhcp?
14:39:20 amotoki: even without setting enable_dhcp=False, if there is a range of addresses not covered by the allocation-pool, it should be possible to add some other dhcp server just handing that range out to the pxeboot clients maybe
14:39:42 folks, why don't we ask hjensas to propose a solution for this? maybe he has something in mind
14:39:56 I haven't tested it yet, but I think the neutron DHCP server is not launched if enable_dhcp is False, so an extra DHCP server can serve DHCP requests from instances.
14:39:59 he is even proposing to write a spec
14:40:14 spawning a dhcp server in a dhcp=false subnet... is not something desirable
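[To illustrate the proposal being discussed: a hypothetical, simplified sketch of the difference between today's behaviour and a dynamic pool, using dnsmasq's own option syntax; the argument lists below are illustrative only and are not the neutron DHCP agent's actual command line.]

    # Illustrative only; 192.0.2.0/24 is a documentation network.
    # Today: dnsmasq leases addresses only from static, per-port reservations.
    static_reservations = [
        "dnsmasq",
        "--dhcp-range=set:subnet-1,192.0.2.0,static",
        "--dhcp-hostsfile=/tmp/per-port-reservations",
    ]
    # With a dynamic allocation pool: dnsmasq hands out a range on its own,
    # e.g. to unknown PXE clients on an Ironic provisioning network, and
    # neutron IPAM does not record which client got which lease.
    dynamic_pool = [
        "dnsmasq",
        "--dhcp-range=192.0.2.100,192.0.2.150,12h",
    ]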
14:41:46 amotoki: the dnsmasq server is spawned per network if there is at least one subnet with enable_dhcp=True
14:42:01 and then the various subnets are added to its config file
14:42:34 and I agree with ralonsoh that it wouldn't be super clear to run a dhcp server (some other server) for subnets with disabled dhcp
14:42:52 I think we should request a spec and, while it is written and reviewed, see if any of the 3 alternatives listed in note #3 bear fruit. Those can be analyzed in the alternatives section of the spec
14:43:00 +1
14:43:03 perfect
14:43:13 slaweq: you're right. I should say 'network' rather than 'subnet'.
14:43:17 mlavalle: I agree with that
14:43:33 +1 for a spec
14:43:38 let's ask for a spec now
14:44:10 ok, so let's move on
14:44:29 I will summarize this discussion in a comment on the rfe after the meeting
14:44:36 in this case, as a team we should place special focus on evaluating the alternatives as part of the review process, IMO
14:45:10 next one
14:45:13 https://bugs.launchpad.net/neutron/+bug/1759790
14:45:13 Launchpad bug 1759790 in neutron "[RFE] metric for the route" [Wishlist,In progress] - Assigned to Bin Lu (369283883-o)
14:45:26 this was also discussed some time ago
14:46:55 based on the last comments it seems that this rfe is only about an API change, so that different backends will be able to implement this (or not) on their own
14:47:30 previous discussion: http://eavesdrop.openstack.org/meetings/neutron_drivers/2019/neutron_drivers.2019-10-11-14.00.log.html#l-13
14:47:51 thx njohnston
14:48:58 slaweq, can you ping xiaodan again in the bug? just to clarify this
14:49:06 but it seems that you are right
14:49:27 there is a reply from the RFE author, it said "just add metric for user configuration and record info in neutron db". I am not sure how the metric information should be applied to a neutron router.
14:49:41 the parameter will be there in the route, and the backend will use it
14:49:52 amotoki, perfect
14:51:33 yes, so this rfe is not about implementing it in the in-tree L3 agent
14:52:03 only about adding a new API extension and a parameter to the router's extra route entries
14:52:21 which is fine for me in general
14:52:24 I think so, yes
14:53:08 the only concern which I have is that it may be confusing for users if someone sets various metrics in the api and they are not applied on the backend
14:53:25 but, don't we need to clarify how the added parameter is interpreted in the reference implementation?
14:54:29 slaweq says the same thing
14:56:04 amotoki: my understanding was that, if we were to implement that in the reference implementation, it would be something like "ip route add ... metric X" for each extra route entry in the router's namespace
14:56:59 it is what we expect :)
14:57:57 so maybe we should ask there that the implementation for the reference backend should also be part of this rfe?
14:58:17 agree
14:58:27 and whether the author of this proposal is going to implement that
14:58:39 or the opposite, if it is not implemented in the reference implementation, do it completely out of tree
14:58:56 mlavalle: +1
14:59:05 I will add such a comment to summarize this discussion there
14:59:18 haleyb: any last minute thoughts on that?
14:59:44 if the functionality is going to be out of tree, what do they need us for?
15:00:12 slaweq: no extra thoughts, i'm fine with it
15:00:15 mlavalle: probably nothing, but I will ask about that in the comment too
15:00:20 haleyb: ok, thx
15:00:24 mlavalle: perhaps we need to ask it together
15:00:25 we are out of time already
15:00:28 bye!
15:00:33 thx a lot for attending
15:00:35 o/
15:00:36 thanks!
15:00:37 have a great weekend
15:00:40 #endmeeting
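[For reference, a purely hypothetical sketch of the reference-implementation piece discussed for the "metric for the route" RFE above, mirroring the "ip route add ... metric X" idea; the real L3 agent would use its ip_lib wrappers rather than shelling out like this, and nothing here is part of the proposed patch.]

    # Hypothetical sketch: apply one extra route, with the proposed per-route
    # metric, inside a router's network namespace.
    import subprocess

    def add_extra_route(router_namespace, destination, nexthop, metric):
        subprocess.check_call(
            ["ip", "netns", "exec", router_namespace,
             "ip", "route", "replace", destination,
             "via", nexthop, "metric", str(metric)])

    # e.g. add_extra_route("qrouter-<router-id>", "203.0.113.0/24",
    #                      "198.51.100.1", 100)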