14:00:10 #startmeeting neutron_drivers
14:00:11 Meeting started Fri Feb 14 14:00:10 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:14 The meeting name has been set to 'neutron_drivers'
14:00:15 o/
14:00:15 hi
14:00:17 hi
14:00:18 hi
14:00:22 o/
14:00:34 hi
14:01:09 let's wait a few more minutes for amotoki, haleyb and mlavalle
14:01:16 hi
14:01:44 hi haleyb
14:01:56 so we already have quorum and I think we can start
14:02:13 as TheJulia is here, let's not start as usual :)
14:02:19 #topic On Demand Agenda
14:02:32 TheJulia has added a topic to https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
14:03:44 o/
14:03:47 hello TheJulia
14:04:01 hi! sorry for being late
14:04:08 hi mlavalle and amotoki
14:04:33 So the bottom line is we're wondering if the mac address update can be made non-admin or covered by a specific policy, because ironic is making the service more multi-tenant and usable for non-admins, but we pass credentials through for port actions and are trying to avoid pulling a second admin session as the ironic service user just to update the mac address
14:04:35 just FYI, I started today with the On Demand Agenda as TheJulia added a topic to it and I didn't want to hold her on the meeting for the whole hour :)
14:04:56 slaweq: much appreciated... for I have hours of meetings ahead of me :)
14:05:06 I wonder if this could be achieved with a policy.json modification defining a role tied to a specific service credential for Ironic
14:05:14 njohnston: I think it can: https://github.com/openstack/neutron/blob/master/neutron/conf/policies/port.py#L192
14:05:27 it's defined there, which IIUC is what Ironic needs
14:06:19 and it seems to me that it can be done by an admin or advsvc user
14:06:50 slaweq: yes, that is exactly what I was thinking about
14:07:16 are we discussing mac address update by all non-admin users or by users with specific roles?
14:08:00 non-admin users of baremetal, which has me thinking we're going to have to do the thing we don't want to do, which is pull a separate client/session to directly update the port mac as a separate action
14:08:53 the thing is, Neutron has no way of distinguishing between the Ironic use case and the other use cases where non-admin access to this would be a bad idea
14:09:36 we prepared the advsvc role for such a purpose. If it works it would be great.
14:10:51 Yeah, I suspect we could just have ironic learn how to do it separately, which would prevent potential security issues. I guess we'll need to look at that. Anyway thanks everyone!
14:11:32 one thing to note is that updating the mac address should be limited to a private network.
14:11:41 I mean "self-service network".
14:11:57 Are there some baremetal scenarios where it would not be a good idea to allow mac address updating?
14:12:18 it should not be allowed in a shared network.
14:12:38 in other words, the operation should be limited to the network owner IMHO.
14:12:57 njohnston: none, we must be able to update the mac address for pxe booting and addressing of physical ports.
14:14:55 in that case the ironic service account needs to perform the port update action for just the mac address
14:15:28 since we know it and manage it
14:16:12 I am wondering if we could permit port update for all in policy.json and then later in the logic require specific privileges unless it's a baremetal port.
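
For reference on the policy discussion above: the rule linked at 14:05:14 is a policy-in-code entry built with oslo.policy. The sketch below shows roughly that shape; the helper constant and default check string are assumptions from memory rather than text copied from the Neutron tree, and an operator could override the check string in policy.json/policy.yaml to grant e.g. a dedicated Ironic service role.

    # Illustrative sketch only: approximate shape of the 'update_port:mac_address'
    # entry in neutron/conf/policies/port.py. The default check string below is
    # an assumption, not a verbatim copy.
    from oslo_policy import policy

    # Assumed default: admin or the advsvc role, as discussed in the meeting.
    RULE_ADMIN_OR_ADVSVC = 'rule:context_is_advsvc or rule:admin_only'

    rules = [
        policy.DocumentedRuleDefault(
            name='update_port:mac_address',
            check_str=RULE_ADMIN_OR_ADVSVC,
            description='Update the mac_address attribute of a port',
            operations=[{'method': 'PUT', 'path': '/ports/{id}'}],
        ),
    ]

    def list_rules():
        # Hook shape commonly used by policy-in-code modules; name is illustrative.
        return rules
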
14:17:12 But I don't know if there are baremetal-but-not-Ironic scenarios that could bite us with that
14:17:24 njohnston: I'm not sure if we should have such hard-coded rules for some specific kinds of resources
14:17:26 that is only informed via the "binding profile" (?), which I think comes later on; also, users can request vifs on user-created networks and ironic will request it be attached to that network
14:18:03 Having a separate admin session has the virtue of simplicity; other methods for doing it get complex quickly, it seems to me
14:18:19 agreed
14:18:39 Thanks, I'll let the contributor working on the multitenancy feature set know!
14:19:31 ok, so I think we are good with your topic, TheJulia, right? You will try to use the advsvc role for this action.
14:20:39 yup, thanks
14:21:02 thx TheJulia :)
14:21:15 so now we can move to our regular topic
14:21:17 #topic RFEs
14:21:31 we have 2 RFEs for today
14:21:34 first one:
14:21:36 https://bugs.launchpad.net/neutron/+bug/1860521
14:21:38 Launchpad bug 1860521 in neutron "L2 pop notifications are not reliable" [Undecided,New]
14:24:21 I remember from when I was working at OVH that we had similar problems with the L2pop mechanism and we added something like a periodic sync of the tunnel config on each host
14:25:49 I am not sure what the impact on the message bus would be to change from fanout/cast to RPC calls
14:27:24 njohnston: not much for the message bus, but for the neutron-rpc workers which will send such messages and wait for replies, the impact will be at least "noticeable" IMO
14:27:59 slaweq: Yeah, I was worried more about the RPC workers
14:28:21 njohnston: ahh, ok :)
14:28:25 it's a matter of whether the mesh of tunnels works vs the cost
14:28:36 Does OVN use l2pop? IIRC it doesn't, but I haven't looked in that part of the code in a while.
14:29:06 njohnston: nope
14:29:07 I am not sure right now which is better: to switch it to RPC calls or to sync the info periodically.
14:29:25 if the number of nodes to be informed is small, it makes sense to switch it to RPC calls.
14:30:22 but I am sure the problem is more acute in large deployments
14:31:09 There are costs and benefits each way - if the RPC call idea adds overhead then it could make the situation in large deployments worse. With the periodic sync you have a period of time where things might not work correctly, before the next sync.
14:31:27 yeap
14:31:33 it's always a trade-off
14:31:59 I personally favor the periodic sync as being in keeping with our "eventually consistent" way of doing things, but I have a bias towards large-deployment thinking.
14:32:22 actually, looking at the code, it will not be possible to always switch from cast() to call()
14:32:23 but if the mesh of tunnels gets to a point where it doesn't work, then it's time to consider the trade-offs
14:32:28 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc.py
14:32:45 in some cases it uses fanout=True and then call() can't be used
14:33:15 slaweq, why?
14:33:46 actually, one problem we have with the MQ is that some calls are blocking
14:34:59 ralonsoh: https://docs.openstack.org/oslo.messaging/latest/reference/rpcclient.html#oslo_messaging.RPCClient.call
14:35:16 call() waits for a return value so you can't send it to many hosts and wait for many replies
14:35:28 that's at least how I understand it
14:35:33 I know, but why do we need to use call instead of cast?
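
A minimal oslo.messaging sketch of the cast()/call() distinction being debated here; the topic, server name, method and payload are invented placeholders, not the actual l2pop RPC interface from neutron/plugins/ml2/drivers/l2pop/rpc.py.

    # Minimal sketch, assuming a configured transport; names are illustrative.
    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_rpc_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='example-l2pop', version='1.0')
    client = oslo_messaging.RPCClient(transport, target)

    fdb_entries = {'example-net-id': {'ports': {}}}  # placeholder payload

    # cast(): fire-and-forget; with fanout=True the message goes to every
    # listening agent, but the server never learns whether any agent actually
    # processed the update (the problem raised in the RFE).
    client.prepare(fanout=True).cast({}, 'add_fdb_entries',
                                     fdb_entries=fdb_entries)

    # call(): blocks until the targeted server replies (or times out), so the
    # result can be checked -- but it cannot be combined with fanout, and a
    # per-agent loop of call()s keeps RPC workers busy waiting for replies.
    client.prepare(server='compute-1').call({}, 'add_fdb_entries',
                                            fdb_entries=fdb_entries)
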
14:35:44 if the MQ is down, the server will stop working
14:35:56 ralonsoh: this was "Option 2" in the RFE
14:36:01 (maybe this is off topic, sorry)
14:36:06 yeah, switching cast() into call() is not straightforward. In the case of fanout=True, we need to convert it into multiple call()s and also need to check the status of each individual call().
14:36:18 using call() allows us to check the result
14:36:39 but perhaps it will bring another scaling issue in this case.
14:36:51 slaweq: when can an RFE be brought up for discussion?
14:37:02 amotoki: yes, but IMO that's not a good idea, as L2pop was designed to address some scale problems and such a change would make it totally not scalable
14:37:20 slaweq: exactly
14:37:31 I just tried to explain what would happen.
14:37:39 stephen-ma: are you asking about https://bugs.launchpad.net/neutron/+bug/1861529 ? If yes, I'm keeping it for the end of the meeting :)
14:37:40 Launchpad bug 1861529 in neutron "[RFE] A port's network should be changable" [Wishlist,New]
14:38:03 yes, that's the RFE
14:38:21 amotoki: ralonsoh: so IMO, to address the issue described by Oleg, we should only consider "option 1 - periodic sync mechanism"
14:38:45 slaweq: agree. I personally prefer the periodic sync.
14:39:19 (I don't, that's why we have the MQ)
14:39:28 but the problem is why the MQ is not reliable
14:39:56 anyway, if making this update periodic can solve this problem, I'm ok
14:40:02 ralonsoh: the problem here is that with the cast() method neutron-server doesn't know if the agent configured everything fine
14:41:17 I will comment in the bug
14:41:34 but that should not be a server problem
14:41:49 if the agent is down, the server should keep working
14:41:52 MQ is durable in some cases, but from my operator experience it is not easy to ensure MQ msgs are reliable, so it is nice if neutron (as an MQ user) provides some mechanism for reliability.
14:42:08 if the agent received the config and everything went fine, ok
14:42:20 if not, the agent should communicate with the server, informing it about the error
14:42:44 The other alternative, just to play devil's advocate, is to build the reliability higher up in the stack
14:42:45 which is a synch of sorts, right?
14:43:10 njohnston, the reliability is on the services: agent, server, etc
14:43:11 ralonsoh: If the MQ never sent the message to the agent then the agent has no idea it has something to complain about
14:43:26 the agent should be smart enough to send a warning message to the server
14:43:35 njohnston, I know
14:43:51 and if the MQ does not work, then we will have a stopped Neutron server
14:43:54 ralonsoh: exactly as njohnston said, in the case of cast() you will not have e.g. a message timeout error on the server side
14:44:07 ralonsoh: Instead of depending on the call() method, the agent sends a message saying "I processed this update", and neutron-server counts these acks.
14:44:07 (something very common in some bugs)
14:44:50 Similarly to how you design a reliable service on top of UDP, you don't have the transmission mechanism ensure reliability, you build it into the application layer
14:44:54 exactly: this should be like a UDP call, and the client should inform.....
14:45:01 exactly!
14:45:05 I was writing the same
14:45:30 and all of this is a synch mechanism, isn't it?
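
A hedged sketch of the application-layer acknowledgment idea described at 14:44:07: the server remembers which agents should confirm an FDB update and re-sends anything not acknowledged in time, instead of relying on blocking call()s. Everything below is invented for illustration and is not proposed Neutron code.

    # Standalone illustration of ack tracking for fanned-out updates; all names
    # and the timeout value are assumptions, not part of any existing API.
    import time

    ACK_TIMEOUT = 60  # seconds before an unacknowledged update is re-sent

    class FdbUpdateTracker:
        def __init__(self):
            # update_id -> {'agents': hosts still owing an ack,
            #               'sent_at': timestamp, 'payload': fdb entries}
            self._pending = {}

        def record_sent(self, update_id, agent_hosts, payload):
            self._pending[update_id] = {'agents': set(agent_hosts),
                                        'sent_at': time.time(),
                                        'payload': payload}

        def record_ack(self, update_id, agent_host):
            entry = self._pending.get(update_id)
            if entry:
                entry['agents'].discard(agent_host)
                if not entry['agents']:
                    # every expected agent confirmed this update
                    del self._pending[update_id]

        def overdue(self):
            # return (update_id, hosts, payload) tuples that a periodic task
            # should re-send because no ack arrived in time
            now = time.time()
            return [(uid, e['agents'], e['payload'])
                    for uid, e in self._pending.items()
                    if now - e['sent_at'] > ACK_TIMEOUT]
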
14:45:30 goal: do NOT block the server
14:45:33 that requires neutron to track which agent(s) should respond to this kind of request and keep an account of responses received
14:45:43 yes
14:46:05 in an async way
14:46:32 mlavalle: the value here is that the approach is not periodic or timer-based
14:46:52 sure
14:47:56 whenever two async entities need to cooperate (server and agent, for example) you need a way to find synchronization points, periodic or otherwise
14:48:23 nature of distributed systems
14:48:41 I'd say the idea has merit and we should explore it further with a spce
14:48:56 *spec
14:49:23 agree
14:49:43 My main question: the work of syncing to the database for FDB updates and then keeping an account of responses received, is it worth the effort? Compared to the simpler periodic sync mechanism.
14:50:10 IMO, you don't need to track the responses
14:50:18 ok, so to sum up what we discussed so far: we should continue the discussion on the sync (periodic or not) mechanism in the spec, and we don't want to switch from cast() to call(), right?
14:50:22 you'll have error responses or nothing
14:50:28 If you don't track the responses then you can't reissue the update when a response is not received
14:50:47 1) if the agent is not working, the server will notice that via periodic checks
14:51:17 2) if the agent message didn't work, the agent will reply with an error
14:51:32 3) if the MQ is unreliable.... well, this IS a problem
14:51:40 but not Neutron's problem
14:52:00 But addressing 3 is the point of the RFE, it is Neutron's problem
14:52:08 If AMQP drops the message then neither server nor client will know there was an error
14:52:33 drops a casted message, to be specific
14:52:38 I agree with njohnston here - we should do as much as we can to address such a case on our side
14:52:42 it is Neutron's problem in the sense that it has to at least cope with it
14:52:57 ok
14:55:25 so, I will sum up this discussion in the RFE and ask for a spec to continue the discussion there, right?
14:55:32 slaweq: I would point out in the RFE that in light of today's discussion, we lean towards some sort of synch problem
14:55:49 and that we would like to explore it in a spec
14:55:55 mlavalle: sure
14:56:06 +1
14:56:12 +1
14:56:16 +1
14:56:42 I meant synch mechanism
14:56:59 ok, thx for the discussion about this RFE - it was a good one today :)
14:57:22 as we are almost on top of the hour, I don't want to start discussion about the next RFE
14:58:03 but I want to ask all of you to check https://bugs.launchpad.net/neutron/+bug/1861529
14:58:04 Launchpad bug 1861529 in neutron "[RFE] A port's network should be changable" [Wishlist,New]
14:58:12 ok
14:58:31 and that's all for today from me
14:58:36 thx for attending
14:58:41 and have a great weekend
14:58:42 o/
14:58:45 o/
14:58:45 o/
14:58:45 bye
14:58:46 #endmeeting