14:00:08 #startmeeting neutron_drivers
14:00:09 Meeting started Fri Feb 21 14:00:08 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:12 The meeting name has been set to 'neutron_drivers'
14:00:17 o/
14:00:17 welcome to the drivers meeting :)
14:00:22 hi
14:00:25 hi
14:00:26 hi
14:00:28 hi
14:00:31 o/
14:00:55 ok, let's start as we have a full agenda for today
14:00:59 #topic RFEs
14:01:15 hi
14:01:15 o/
14:01:20 first, 2 RFEs which in some way come together:
14:01:21 https://bugs.launchpad.net/neutron/+bug/1862032
14:01:22 Launchpad bug 1862032 in neutron "[rfe] Add RBAC for subnet pools" [Undecided,In progress] - Assigned to Igor Malinovskiy (imalinovskiy)
14:01:23 and
14:01:26 https://bugs.launchpad.net/neutron/+bug/1862968
14:01:28 Launchpad bug 1862968 in neutron "[rfe] Add RBAC support for address scopes" [Wishlist,In progress] - Assigned to Igor Malinovskiy (imalinovskiy)
14:02:43 both RFEs make sense
14:03:36 I have a question related to the logic of adding RBAC policies to subnet pools (SNP)
14:04:34 Does it make sense to auto-share the address scope with the target project when an admin shares a subnet pool? or is it better to just check permissions: if the address scope is not shared with the target project, show a validation error and ask the admin to share the address scope with the target project first
14:04:37 ralonsoh: I agree - for me both make sense too
14:04:48 I agree
14:05:31 generally both make sense to me. what I haven't fully checked is what the requirements are on sharing subnet pools and address scopes.
14:06:12 if project A is a subnet pool owner and would like to share it with project B, should the associated address scope be shared with project B?
14:06:54 amotoki: yes, that's my question
14:07:21 Assume the address scope is owned by project C. Does project C need to share the address scope with project B too?
14:07:41 imalinovskiy: yeah, that's a question which hit me.
14:10:14 I can start with the obvious implementation - forbid sharing an SNP if it has an assigned AS and the target project doesn't have permission for this AS
14:10:53 imalinovskiy: I think that this will be the easier way to go
14:11:08 agree.
14:11:24 we can modify it later if there is a use case/need for that
14:11:36 yamamoto, mlavalle any thoughts?
14:11:59 I am good with both RFEs. Seem like no-brainers
14:12:15 it makes sense to me
14:12:26 ok, thx
14:12:36 so I will mark both of them as approved
14:12:44 (that was fast) :)
14:12:52 let's go with the next one then
14:12:56 as I said, no-brainers
14:13:10 do we need a spec on this or skip it?
14:13:39 amotoki: IMHO we don't need a spec for that but I'm open to it if you want :)
14:14:16 the two RFEs are tightly coupled, so it seems better to clarify the relationship somewhere
14:14:34 at the moment, both RFE bugs have a simple description.
14:14:55 amotoki: ok, so let's have a spec for it describing the details of the relationship between both of them
14:14:56 imalinovskiy: can we add a bit more detail in the RFE descriptions
14:15:14 it would be a simple spec
14:15:29 amotoki: sure, will do
14:15:41 imalinovskiy: thx a lot
14:15:50 ok, so let's go to the next one now
14:16:02 https://bugs.launchpad.net/neutron/+bug/1863113
14:16:03 Launchpad bug 1863113 in neutron "[RFE] Introduce new testing framework for Neutron - OVN integration - a.k.a George" [Undecided,New] - Assigned to Jakub Libosvar (libosvar)
14:16:09 o/
14:16:17 It's jlibosva's new friend: George :)
14:17:58 I don't know if I'm supposed to talk about it now
14:18:12 IMO this seems reasonable and hopefully in the future it can replace the fullstack framework as it should provide better isolation between fake "hosts"
14:18:21 so I wanted to discuss the plan, if it gets accepted, which jobs it should use
14:18:25 so we wouldn't need dirty monkey-patching of some agents
14:18:50 is my understanding correct jlibosva?
14:18:53 slaweq: yes
14:19:20 we can discuss the roadmap on the LP, so far the code is a PoC with OVN
14:19:34 it reminded me of zephyr, which was somehow rejected in favor of fullstack https://github.com/midonet/zephyr
14:19:36 but it should be quite easy to deploy with ml2/ovs, I think ralonsoh maybe even tried that
14:19:51 successfully
14:20:02 yamamoto: thanks for the link, I hadn't heard about it
14:20:06 IMO, the idea could be exported, in the future, to other backends
14:20:18 is the fullstack testing for the OVS mech driver and is the proposed one for OVN integration?
14:20:47 amotoki: essentially yes
14:20:49 yamamoto: I also hadn't heard about it before
14:21:44 amotoki: fullstack isn't only for ml2/ovs
14:21:53 it can also test the linuxbridge backend
14:22:03 zephyr creates separate containers for the neutron server and the midonet agent, to simulate a multi-node deployment
14:22:03 and also there are some tests for the L3/DHCP agents
14:22:10 slaweq: yeah, I know it covers more than OVS and L3 stuff.
14:23:05 perhaps my wording was not clear. OVN covers both L2 and L3, so the proposed one will cover them from the PoV of OVN integration.
14:23:32 that's what I understand so far.
14:25:08 amotoki: that's correct, it works with standalone Neutron using the OVN backend and it tests data plane L2 and L3 on a single node
14:25:46 jlibosva: maybe you can investigate this zephyr solution and potentially reuse at least some pieces of it?
14:26:11 slaweq: yeah, I'm reading the docs.
14:27:03 and as I know the current limitations of the fullstack framework, I'm ok to approve this new RFE and give a chance to a new solution
14:27:03 why George? I mean, the name
14:27:22 mlavalle: yeah, that's a good question :)
14:27:34 mlavalle: heh, I needed to use some name and it was the first name that came into my head ... so I used it as a "placeholder" but then it became the real name
14:27:49 I may need to come up with some explanation of the abbreviation
14:27:51 how about Jakub?
14:27:57 hehehehe
14:27:58 that's already my name
14:28:00 :)
14:28:07 :)
14:28:21 I'm not sure if we need more Jakubs in Neutron :P
14:28:34 one should be enough
14:29:28 if we have a contributor whose name is George, it would be super confusing :p
14:29:46 * in the future
14:30:00 "George tested it" would be ambiguous
14:30:05 LOL
14:31:08 but now seriously, can we vote to approve it or do you have any other questions? :)
14:31:42 I tested this 2 weeks ago, +1 from me
14:31:52 the proposal itself makes sense from the PoV that we need some framework for OVN integration.
14:32:12 I have tested it as well, it worked well for me
14:32:21 +1 from me to approve this and continue work on this
14:32:36 as for the implementation, jlibosva can check zephyr too.
14:33:04 amotoki: I think it would make sense to replace the current fullstack in the future. It's much faster to build the env with George
14:33:26 +1. we should have added this kind of stuff a few years ago.
14:33:28 and also for the record - the code already exists here as a PoC :) https://review.opendev.org/#/c/696926/
14:33:30 yeah, if the new framework turns out more generic in the future and can cover more, we can go beyond the initial scope (OVN integration).
14:33:34 +1
14:34:04 +1
14:34:14 ok, so we approved it
14:34:16 I'm fine to approve it
14:34:28 I will comment on the RFE after the meeting
14:34:32 thx
14:34:47 thanks :)
14:34:56 thx jlibosva
14:35:00 ok, next one then
14:35:09 as we have stephen-ma here, let's discuss https://bugs.launchpad.net/neutron/+bug/1861529
14:35:10 Launchpad bug 1861529 in neutron "[RFE] A port's network should be changable" [Wishlist,New]
14:36:44 my personal opinion about this is similar to what yamamoto and frickler said in the comments to the RFE
14:37:19 I'm really not sure if we should change such basic logic of Neutron
14:38:48 Can such a change be limited to only ML2?
14:39:00 but how?
14:39:10 if you change the network
14:39:23 that means you also change the subnets and the pools
14:39:27 I have a similar feeling. this RFE is talking about unaddressed ports. Even if we focus on L1 (virtual port)/L2 stuff we still have quite a number of topics around plug/unplug.
14:39:36 it's not possible to isolate ML2 from L3-4
14:39:47 in theory it can be limited from the API PoV as other plugins don't need to support the API extension which provides that
14:40:39 what is PoV?
14:40:45 point of view
14:40:53 ralonsoh: thx :)
14:41:33 it doesn't make things much easier as ml2 is _the_ core plugin nowadays
14:43:13 Part of the problem here is the intricate dance of port assignment neutron does with nova
14:43:28 what if the new network is one that would not be scheduled to the hypervisor that the VM is on?
14:43:58 just as one possible wrinkle
14:44:57 stephen-ma: what guest OS are we talking about? do you have PoC code? In other words, have you explored how difficult it is?
14:45:20 what about e.g. the linuxbridge backend - I think that the name of the bridge to which the port is plugged is based on the network name and this name is also in libvirt's xml file, no?
14:45:54 Yes https://review.opendev.org/#/c/704203/ , and https://review.opendev.org/#/c/704884/
14:46:25 I can't imagine this happening without an unplug and replug, almost like a shelve and unshelve
14:46:28 I only have a 1-node devstack environment to test them.
14:47:19 mhhh, so is this something you are proposing just for you?
14:47:40 stephen-ma: in the PoC you have only some implementation for the ovs-agent - but what about other drivers?
14:48:07 or how do you want to indicate on the API level when this is possible (backend supports it) and when not
14:49:02 that is a good question.
14:50:30 Here's the key question for me: why wouldn't you just delete the old port and then create a new port on the new network, i.e. openstack port delete && openstack port create? What value is retained in keeping the same port?
14:50:45 The request came from a customer. They don't want to do an online removal of a VM port followed by an online addition.
14:51:03 Did they explain their reasoning?
14:51:41 in other words, let's try to understand the real use case
14:51:43 The desire is to have an OS-agnostic way of changing the network.
14:52:45 There is nothing OS-specific about "openstack port delete && openstack port create"
14:53:19 Back when I did VMware stuff, that was how we would do it there, and it worked for Windows, Linux, etc. VMs
14:53:27 njohnston: I think that stephen-ma's point here is that in some cases the guest OS may not notice the hot plug of a new port
14:53:41 You mean "openstack server remove port" and "openstack server add port"
14:53:53 what guest OS are we talking about?
14:54:38 Correct, a VM may be running an OS that was developed before PCI.
14:54:50 like MS-DOS.
14:55:23 stephen-ma: no, I mean "openstack port delete --oldportid; openstack port create --host instancename --network newnetwork"
14:56:17 do we really want to introduce a lot of code just to support some very old, legacy operating systems?
14:57:43 ok, we are almost at the top of the hour and I have 2 proposals for that RFE:
14:58:04 1. we will continue the discussion about that in the review of a spec where more details will be shared with us,
14:58:36 2. we don't accept this RFE
14:58:48 or maybe you have some 3rd option?
14:59:41 personally I'm voting for option 2
14:59:53 is that "we don't accept this RFE for the time being, until we learn more details in the spec and make a final decision"?
15:00:06 I would vote for 2 unless we have a very clear user story that shows this will have a real benefit for today's cloud customers
15:00:26 mlavalle: no, my option 2 was more like we simply don't accept this RFE
15:00:39 mlavalle: option 1 is more like what you described
15:00:52 ok, we are out of time already
15:01:04 I lean towards 1
15:01:07 I will bring this back at the next meeting to make some decision
15:01:07 FWIW
15:01:15 let's think about it for a few more days
15:01:24 thx for attending the meeting
15:01:26 o/
15:01:28 #endmeeting
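
Editor's note: the validation approach imalinovskiy proposed at 14:10:14 (reject sharing a subnet pool when its assigned address scope is not already visible to the target project, instead of auto-sharing the scope) can be sketched as follows. This is a minimal illustrative sketch, not actual Neutron code; the function, exception, and the `(object_id, target_project, action)` RBAC-entry shape are all hypothetical.

```python
# Hypothetical sketch of the "forbid sharing" check discussed in the
# meeting: an admin may share a subnet pool (SNP) with a target project
# only if the SNP's associated address scope (AS) is already shared with
# that project. All names here are illustrative, not real Neutron code.


class AddressScopeNotShared(Exception):
    """Raised when the SNP's address scope is not visible to the target."""


def validate_subnetpool_share(subnetpool, target_project_id, rbac_entries):
    """Check whether `subnetpool` may be shared with `target_project_id`.

    `subnetpool` is a dict with an 'address_scope_id' key (or None).
    `rbac_entries` is an iterable of (object_id, target_project, action)
    tuples, where a target_project of '*' means "shared with everyone".
    Raises AddressScopeNotShared if the scope is not visible to the target.
    """
    scope_id = subnetpool.get('address_scope_id')
    if scope_id is None:
        return  # no address scope assigned, nothing more to check

    for object_id, target, action in rbac_entries:
        if (object_id == scope_id and action == 'access_as_shared'
                and target in (target_project_id, '*')):
            return  # address scope already visible to the target project

    raise AddressScopeNotShared(
        'Share address scope %s with project %s first'
        % (scope_id, target_project_id))
```

Per the discussion, the error message asks the admin to share the address scope first rather than silently auto-sharing it; auto-sharing could be added later if a use case appears.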