14:04:06 #startmeeting neutron_drivers
14:04:06 Meeting started Fri Nov 15 14:04:06 2024 UTC and is due to finish in 60 minutes. The chair is ralonsoh. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:06 The meeting name has been set to 'neutron_drivers'
14:04:16 Ping list: ykarel, mlavalle, mtomaska, slaweq, obondarev, tobias-urdin, lajoskatona, amotoki, haleyb, ralonsoh
14:04:19 \o
14:04:19 hello all
14:04:22 o/
14:04:24 o/
14:04:31 o/
14:04:41 hi all
14:04:50 we have many RFEs to discuss today
14:05:01 and we have quorum, so I think we can start
14:05:09 hello all
14:05:13 First one: [RFE] Configurable Agent Termination on OVS Restart
14:05:16 https://bugs.launchpad.net/neutron/+bug/2086776
14:05:34 s3rj1k, please, present the RFE
14:06:50 ralonsoh: Hi, yes, so basically the idea is to automate the manual fixes that are currently done by a human on the agent in k8s envs
14:07:26 s3rj1k, my question, the same as Liu's in the bug, is: why don't we fix what is broken in the OVS agent code?
14:07:34 what exact flows are not recovered?
14:07:43 to do that, there is an idea to introduce opt-in functionality to terminate the agent instead of doing flow recovery
14:08:34 fixing is a separate thing, this is about making sure that prod envs keep working without human intervention
14:09:13 the OVS agent is capable of detecting when the vswitch has been restarted and tries to restore the flows
14:09:13 Sebastian Lohff proposed openstack/neutron master: docs: Fix typo in openstack create subnet command https://review.opendev.org/c/openstack/neutron/+/935182
14:09:29 Liu also mentioned that this is only related to containerized envs; for systemd setups similar restarts already happen, as I understand it
14:09:35 if we fix what is broken, that will solve your problem (and for anyone else hitting it)
14:10:04 Liu is stating, and he is right, that restarting the OVS agent has a high cost in the Neutron API
14:10:14 to fix that we need to repro the bug, and there is no time to do that on envs that have client workloads
14:10:47 > restarting the OVS agent has a high cost - agree, hence opt-in
14:11:07 I personally feel like this proposal is more of a workaround for some bug rather than a fix for it
14:11:12 exactly
14:11:23 agree, workaround
14:11:32 s3rj1k, can you share with the community what is broken when the OVS agent restores the flows?
14:11:41 because this is the critical point here but it is not documented
14:11:46 this is what needs to be fixed
14:12:20 I have no specific repros right now, the details that I have are in the ticket: one of the flows was not recovered
14:12:58 s3rj1k, so please, add this information to the LP bug, and how to reproduce it locally, if possible
14:12:59 when I have a repro I will create a bug for it, but this does not mean that the workaround is invalid
14:13:04 with this information we can try to fix it
14:13:07 Could you please check the suggestion from Liu (https://bugs.launchpad.net/neutron/+bug/2086776/comments/5 ), which seems reasonable and in sync with the current design?
14:13:32 that is a lightweight workaround too
14:13:41 no API involved in this case
14:14:52 but maybe it could be a solution: instead of the current code that restores the flows when OVS is restarted, restart the OVS agent monitoring mechanism
14:14:59 anyway, if we have a list of missing things, as ralonsoh suggested, that can help to find the places to fix
14:15:32 yeah perhaps
14:16:14 s3rj1k, so please, before proceeding on any decision, we need to know what is actually broken. Then we can propose a solution (fix the restore methods, lightweight restart or full restart)
14:16:54 are we sure we want to block a general workaround on a specific bug repro?
14:17:05 things can be done async
14:17:05 sorry, what does that mean?
14:17:51 continue with the terminate workaround and, as a separate bug, try to find a repro for that one specific flow
14:18:31 I am pretty sure that there are more cases for flow recovery but I have no info on that, only a vague report on one of them
14:18:50 if you ask me to implement a workaround for a bug I don't understand, I will say no
14:18:57 I need to know what is broken
14:19:03 personally I don't like the idea of such a workaround because it doesn't seem like a professional thing to me - it's a bit like implementing a restart of a service to fix a memory leak :) So I would personally vote for fixing the real bugs and handling the ovs restart properly instead of this rfe
14:19:03 as basically the fix is a manual restart, and nobody is triaging bugs in prod
14:20:08 nobody is triaging bugs in prod --> I do this for a living
14:20:16 slaweq: so is it better to ignore that and have line1 ops just restart all the things?
14:20:23 also, if you need a workaround like that, you can probably implement some small script which will be doing what you need - it doesn't need to be in the agent itself
14:20:46 s3rj1k no, IMO it's better to try to reproduce the issue on a dev env and try to fix it
14:20:47 sure, it can be done as some script
14:21:52 no need to reproduce it intentionally; just the next time manual intervention is needed (agent restart), dump the flows before and after the restart - that could be enough for investigation
14:21:57 was just coming in here to post about the outage we've had during 2 of our last 3 upgrade tests. Seems directly related to the OVS conversation you're having here now
14:22:29 obondarev: +1, good that you added it here
14:22:47 as I said, when some details come in, I will create a separate bug for that
14:23:05 we've had a 5 minute outage: kolla-ansible restarts the OVS containers (which triggers 2 separate flow re-creations) and we don't get FIP connectivity back until the neutron role runs, some 5 minutes later
14:23:24 this is more about whether we want some workaround in place or not
14:24:00 in our testing, both the restart of openvswitch_db and openvswitch-vswitchd trigger the neutron agent to recreate the flows; this happens within about 20 seconds of each other
14:25:23 IMHO it is better to have a script, e.g. in https://opendev.org/openstack/osops, for that rather than implement it in neutron
14:25:37 +1 to slaweq
14:25:46 s3rj1k, we want to fix this bug, for sure. We don't want to implement a workaround for something we don't understand. In order to push code, we need to know what is happening
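A minimal sketch of the before/after flow capture being asked for above, assuming the default br-int integration bridge; the script name and output paths are illustrative only:

    #!/usr/bin/env python3
    # Snapshot the OpenFlow tables of br-int before and after the manual
    # agent restart, so the flows that were not restored can be identified
    # and attached to the LP bug.
    import subprocess
    import sys

    def dump_flows(bridge, outfile):
        # ovs-ofctl prints the current flow table of the bridge
        result = subprocess.run(
            ["ovs-ofctl", "-O", "OpenFlow13", "dump-flows", bridge],
            capture_output=True, text=True, check=True)
        with open(outfile, "w") as f:
            f.write(result.stdout)

    if __name__ == "__main__":
        tag = sys.argv[1]  # run once with "before" and again with "after"
        dump_flows("br-int", "/tmp/br-int-flows-%s.txt" % tag)

Diffing the two files should show which flows the agent failed to recover.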
14:25:46 for some reason, but not always, the neutron ovs agent doesn't seem to like this and connectivity is lost until the agent is restarted during the neutron role
14:26:17 sorry for just injecting my recent experience, just excited that this is being discussed in here, hoping for some kind of mitigation
14:26:20 so, for now, we are not going to vote for this RFE until we have a better description of the current problem
14:26:39 s3rj1k, please, do not open a separate bug, provide this info in the current one
14:27:01 ok, we have more RFEs and just 35 mins left
14:27:05 greatgatsby_: do you think your issue is related to https://bugs.launchpad.net/neutron/+bug/2086776 ?
14:27:14 greatgatsby_: can you please add more info to the RFE if possible
14:28:06 ok, next RFE: [RFE] L3 Agent Readiness Status for HA Routers
14:28:07 ralonsoh: ok, if I have more details I'll add them to this RFE
14:28:12 #link https://bugs.launchpad.net/neutron/+bug/2086794
14:28:27 we had a similar bug: https://bugs.launchpad.net/neutron/+bug/2011422
14:28:39 that was also discussed in a PTG (I don't remember which one)
14:28:44 it can also be discussed together with 2086799
14:28:47 I have a general comment for this and the next one ([RFE] OVS Agent Synchronization Status for Container Environments)
14:29:09 first let's wait for the RFE proposal
14:29:14 s3rj1k, please, go on
14:29:41 Are these about monitoring our processes / agents etc.? Nova discussed something similar: https://etherpad.opendev.org/p/nova-2025.1-ptg#L255 perhaps worth checking before reinventing it
14:29:51 both RFEs are about extending the data available for k8s probes
14:30:28 it sounds very similar. We've only just identified the trigger, and with it not being 100% reproducible, it takes me a couple of days to get back to a fresh environment. In my next test I will log the flows during the whole upgrade. For now, we've just identified that it's caused by the 2 OVS container restarts; the neutron agent doesn't recover properly somehow, and connectivity to our VMs is lost until minutes later, when the agent is restarted as part of the neutron role in kolla-ansible
14:30:41 so that pod mgmt can do monitoring and life-cycling of containerized agents
14:31:16 we're using DVR and VLANs, if that matters
14:31:23 s3rj1k, but what is the information required?
14:31:31 what exactly do you need to monitor?
14:31:38 I think it is a good idea generally for all the agents to report in some file what they are doing, like "full_sync in progress", "sync finished", etc.
14:31:47 the OVS agent's synchronization status (started/ready/dead states)
14:32:02 slaweq, why a file? we have the Neutron API and heartbeats
14:32:04 as sometimes such a full sync after e.g. agent start can indeed take a very long time
14:32:10 and for the second one, the HA router status
14:32:29 ralonsoh IIUC it's about checking it locally
14:32:42 by e.g. a readiness probe in k8s
14:32:42 I can extend both RFEs with details if in general this sounds good
14:32:48 agree with the neutron API, we already have some healthcheck API, that should be extended
14:33:02 we were surprised that both OVS container restarts each trigger flow re-creation in rapid succession
14:33:10 Nova also moved in that direction, see the spec: https://specs.openstack.org/openstack/nova-specs/specs/2024.2/approved/per-process-healthchecks.html
14:33:14 https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/openvswitch/handlers/main.yml
14:33:15 we have a fantastic Neutron API that can be fed with anything we want
14:33:36 and for such a purpose, reporting this to the neutron db and checking each agent through the api will not scale at all
14:33:42 lajoskatona, exactly: this is a matter of improving the agent information mechanism
14:34:01 for k8s probes it is best to use something local to the pod
14:34:27 we can send such info in the heartbeat too, but saving it in a file locally shouldn't hurt anyone and can help in some cases
14:35:41 but we should probably have some standardized list of possible states and reuse them in all agents
14:35:48 I'm against storing local info if we don't push that to the API too
14:35:55 probes in general tend to run frequently, so anything that is network related can have a perf impact on the cluster; better to have local data
14:35:59 and not have each agent report something slightly different
14:36:16 slaweq, yes, that was the PTG proposal, having a set of states per agent, that could be configurable
14:36:32 DHCP: network status (provisioning), L3: router status, etc
14:36:45 that could be defined in each agent independently
14:37:11 but again, this info should go to the API. If we want to store it locally as an option, perfect
14:37:19 like a state machine
14:37:40 mlavalle, yes, and not only a global one per agent, but also per resource (network, router, etc)
14:37:53 do we need them per agent, shouldn't just a few standard ones be enough? Like e.g. "sync in progress", "operational" or something like that
14:37:56 ralonsoh: works for my case if both local and API
14:38:00 what else would be needed there?
14:38:20 slaweq, do you mean having standard states?
14:38:25 regardless of the resource/agent
14:38:42 ralonsoh yes, IMHO it can work that way
14:38:47 but maybe I am missing something
14:38:50 I think that makes sense
14:39:12 slaweq, we can always add new states (that don't need to be used by all resources/agents)
14:39:19 true
14:39:32 maybe there is a common set of states and then each agent has a few of its own
14:39:34 but yes, initially I would propose a set of states that could match the state of the common agents
14:39:44 and that way we can have them defined in neutron-lib so that everyone will be able to rely on them pretty easily
14:39:53 yes to both
14:39:57 so let's have some framework with basic common information, locally and on the API?
14:40:35 on the api it is very easy, as you can add whatever you want to the dict sent in the heartbeat IIRC
14:40:41 yes, some information that will be provided to the API and, via config, stored locally if needed
14:40:54 so yes, I would say - have them in both places - api and locally
14:41:21 I'm ok with this proposal, but of course we need a spec for sure
14:41:25 locally it would be similar to the ha state of the router stored in the file
14:41:32 ^ yes
14:41:52 +1
14:41:56 yes, a spec would be good. It doesn't need to be long but we should define there what states we want to have
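As a rough illustration of the local side of this proposal (the state names, file path and helper below are hypothetical; the actual set of states would be defined in the spec), an agent could persist its sync state in a small local file, much like the L3 agent's HA state file, and a k8s exec readiness probe would only read that file:

    # Hypothetical sketch only, not an existing Neutron interface.
    import os
    import tempfile

    STATES = ("started", "sync_in_progress", "operational", "dead")
    STATE_FILE = "/var/lib/neutron/ovs_agent_state"  # local to the pod/host

    def report_state(state):
        # Write atomically so a probe never reads a half-written file.
        assert state in STATES
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(STATE_FILE))
        with os.fdopen(fd, "w") as f:
            f.write(state)
        os.replace(tmp_path, STATE_FILE)

    # A k8s readiness probe could then be a simple exec check, e.g.:
    #   grep -qx operational /var/lib/neutron/ovs_agent_state

Per the discussion, the same state string would also be added to the dict sent in the agent heartbeat, so it stays visible through the API as well.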
14:42:13 so +1 with a spec
14:42:19 +1
14:42:24 +1 from me
14:42:26 +1
14:42:26 should we start proposing states in the RFE comments?
14:42:29 +1
14:42:43 s3rj1k, it would be better to describe it in the spec
14:42:45 s3rj1k: write a short spec
14:43:08 ok, will do
14:43:10 Like these: https://opendev.org/openstack/neutron-specs/src/branch/master/specs/2024.2 but under the 2025.1 folder
14:43:12 s3rj1k: do you know how to propose a spec?
14:43:47 s3rj1k, are you going to combine the RFEs into one single spec?
14:43:56 I'll walk through the docs; in case of questions I'll just ask them here on a regular day, so no problem with that, thanks
14:44:24 > into one single spec? - that can make sense, yes
14:44:41 perfect, I'll comment that in the LP bugs after this meeting
14:45:02 so we can skip the 3rd one and go for the last one
14:45:10 [RFE] Add OVN Ganesha Agent to setup path for VM connectivity to NFS-Ganesha
14:45:16 #link https://bugs.launchpad.net/neutron/+bug/2087541
14:45:38 Is Amir here now?
14:45:52 Yes I'm here
14:46:03 perfect, can you explain it a bit?
14:46:12 sure
14:46:26 According to the Manila documentation, there is a need for network connectivity between the NFS-Ganesha service and the client mounting the Manila share.
14:46:41 Currently, no existing solution addresses this requirement. This RFE proposes to solve it using Neutron and OVN.
14:47:28 first thing from me - instead of a new agent we can probably do this as an extension to the neutron-ovn-agent
14:47:59 I couldn't agree more (note: during this release I need to make the OVN agent the default one...)
14:48:14 +1
14:48:20 we already have a generic agent for OVN, with a pluggable interface
14:48:29 that was the whole point of it
14:48:34 it is just a matter of implementing the needed plugin and enabling it
14:49:09 amnik, what is actually needed? what does the agent need to do?
14:49:17 what will trigger these actions?
14:49:43 this solution is inspired by the ovn metadata agent
14:50:13 it detects ports for ganesha connectivity and plugs them on the compute nodes
14:50:45 these ports are distributed localports, like the metadata port
14:51:12 I don't know about Ganesha at all, can you maybe briefly explain what this port is needed for on the compute nodes, how vms will use it, etc.?
14:52:29 ganesha mediates traffic to cephFS. You can think of it like an NFS server between the vms and cephFS in the ceph cluster
14:53:23 after we plug the port on the compute nodes we add some iptables rules to dnat traffic to the ganesha
14:54:00 private_ip_port:2049 -> ganesha_ip:2049
14:54:16 is this port a VM port?
14:54:58 and is it one virtual port per network? So all vms connected to that network will use the same port?
14:55:23 It is like the metadata port. Not bound to any chassis.
14:55:36 but who is creating this port?
14:55:39 is it in the OVN database?
14:55:47 and another question: is ganesha_ip from the same private network or not?
14:57:12 apart from the technical questions, I'm a bit reluctant to add something so specific to the Neutron repository
14:57:19 slaweq: I think it is better to create a port for each share that vms want to connect to. So we can manage it per share (storage on cephFS)
14:58:26 ralonsoh: the port will be created with the API and after that the ganesha agent detects it
14:59:03 ralonsoh: yes, it is in the OVN database; the agent can detect it with an event.
14:59:09 how? this port won't be bound to a chassis
14:59:17 we use the SB and the local OVS database
14:59:37 same as an ovn-controller
15:00:05 could it be that the neutron-ovn-agent extension which would be responsible for this actually lives in Manila and is just loaded by the neutron-ovn-agent if needed? We can accept a new device type if needed, so that you can create a port with this new device type in the neutron api, but other things may actually be in Manila, as it seems that they are the team of experts on this topic, not us
15:00:39 slaweq: No, ganesha is a service that is deployed, for example, on our controller servers and has an IP from the network of that server.
15:02:10 ralonsoh: the agent can detect it by a device_id convention like ovnganesh- and the port type in the Port_Binding table
15:02:27 slaweq, yes, that could work
15:02:45 amnik, you said this port is not bound
15:03:04 who is physically creating this port?
15:03:37 if that would have to land in the neutron repository, we would probably need a spec, but I agree with ralonsoh that this may be a bit outside our expertise here, so maybe it would be better to keep as much as possible in manila and just add the necessary bits in neutron (neutron-lib) to handle that new type of ports correctly
15:04:26 agree with this
15:04:28 ralonsoh: This is a separate API call; the user should create the port (openstack port create ...)
15:04:41 amnik, no
15:04:49 who is creating the layer one port
15:04:51 ?
15:05:13 anyway, we are running out of time
15:05:19 we can continue after this meeting
15:05:32 what is the recommendation for this RFE?
15:05:53 I agree with slaweq's comment
15:06:27 me too
15:06:33 I would like to see a detailed spec with an exact description of how this is going to work in both the API and the backend
15:06:36 ralonsoh: This is a responsibility of the ganesha agent. The user creates the port; the agent will plug it on the compute node with veth devices, like the metadata port
15:07:00 but for now this RFE is not approved, right?
15:07:09 Then we can discuss there what has to be in neutron and maybe what can be somewhere else
15:07:28 I think the RFE needs to be updated at least; a new agent seems like overkill
15:07:30 ralonsoh IMO not yet, it's too early
15:07:43 ok, I'll comment that in the LP bug
15:07:49 I'm closing this meeting now
15:07:54 #endmeeting