14:00:36 #startmeeting networking
14:00:36 Meeting started Tue Jul 20 14:00:36 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:36 The meeting name has been set to 'networking'
14:00:41 o/
14:00:44 o/
14:01:12 o/
14:01:13 hi
14:01:14 o/
14:01:51 I don't think we will have more people, so let's start
14:01:59 #topic Announcements
14:02:02 * njohnston lurks
14:02:24 Xena-2 was reached last week, now we are in Xena-3
14:02:35 the next important date is the week of Aug 16th - final release for non-client libraries
14:02:35 o/
14:03:29 next one
14:03:36 October virtual PTG
14:03:45 announcement http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023370.html
14:03:50 etherpad https://etherpad.opendev.org/p/neutron-yoga-ptg
14:04:12 please add your ideas for topics to discuss there
14:04:31 and your name if You are going to attend our sessions
14:05:07 I had only a few responses to the doodle which I created
14:05:26 but based on that I booked 3h slots every day, 1300-1600 UTC
14:05:40 I think it should be enough for us
14:05:54 there will also be a TC & PTLs session on Monday or Tuesday, so I will probably cancel the Neutron time slots at the same time to be able to attend that other session
14:06:07 but for now all 5 days are booked for our sessions
14:06:28 +1
14:06:55 and the last one
14:07:00 PTL on vacation next week (again)
14:07:06 just FYI :)
14:07:34 but that should be the last week when I'm off this summer
14:07:41 and that's all from me
14:07:53 any other announcements/reminders from anyone?
14:09:20 if not, let's move on
14:09:26 #topic Blueprints
14:09:32 hi, sorry to be late
14:09:41 I updated the list of BPs for Xena-3
14:09:47 https://bugs.launchpad.net/neutron/+milestone/xena-3
14:09:51 hi obondarev :)
14:10:50 rubasov: I wanted to ask You for help with setting priorities for the 4 BPs proposed by You and Your colleagues
14:11:27 the specs for all of them are merged now, so should we focus on some of them as most important?
14:11:57 thanks for asking
14:12:20 I would say the explicit extra routes BP definitely has lower priority than the others
14:12:58 so I set "Low" priority for it
14:13:17 and as I mentioned before, beyond the reference implementation we hope to kick off vendor-specific plugin development too
14:13:52 for that to work we hope to propose the API extension patches as soon as possible
14:14:16 I think I saw some related to e.g. BFD already
14:14:28 if those could be merged (maybe as kind of experimental) that would be helpful
14:14:40 of course only if this workflow is okay for the team
14:15:18 IMO the workflow is ok - we have to have the api-ref and API extensions first
14:15:22 and then work on the implementation
14:16:26 that is very helpful to us, thank you
14:16:38 rubasov: but please keep in mind that the final release for neutron-lib needs to be in the week of Aug 16th
14:16:54 ack
14:16:54 so You have just about 1 month for that
14:17:13 if You want to have it in the neutron-lib release which will be used in Xena
14:18:14 nit: it would be nice if the api-ref had a comment until the reference implementation lands.
14:18:51 amotoki: I meant something like that by experimental
14:19:06 rubasov: sounds good. thanks
14:20:10 ok, anything else anyone wants to discuss regarding BPs?
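
(A note on the workflow agreed above: API extensions land in neutron-lib as declarative api-definition modules, with the implementation following later in neutron itself. Below is a minimal sketch of the shape such a module usually takes; the module name, alias, and attributes are hypothetical illustrations, not the actual BFD proposal.)

    # Hypothetical neutron-lib API definition, e.g. a module under
    # neutron_lib/api/definitions/. The alias and attributes are made
    # up for illustration; the real patches define the actual API.
    ALIAS = 'bfd-monitor'
    IS_SHIM_EXTENSION = False
    IS_STANDARD_ATTR_EXTENSION = False
    NAME = 'BFD monitor'
    DESCRIPTION = 'Illustrative extension exposing BFD session monitoring'
    UPDATED_TIMESTAMP = '2021-07-20T10:00:00-00:00'

    COLLECTION_NAME = 'bfd_monitors'
    RESOURCE_ATTRIBUTE_MAP = {
        COLLECTION_NAME: {
            'id': {'allow_post': False, 'allow_put': False,
                   'is_visible': True, 'primary_key': True},
            'mode': {'allow_post': True, 'allow_put': False,
                     'is_visible': True, 'default': 'asynchronous'},
        },
    }
    SUB_RESOURCE_ATTRIBUTE_MAP = {}
    ACTION_MAP = {}
    REQUIRED_EXTENSIONS = ['router']
    OPTIONAL_EXTENSIONS = []

(Merging a definition like this, together with the api-ref, is what lets vendor plugins start implementing against the API before the reference implementation exists.)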
14:21:15 let's move on
14:21:18 #topic Bugs
14:21:27 we have reports from rubasov and ralonsoh for this week
14:21:34 http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023590.html
14:21:38 http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023728.html
14:21:41 ?
14:21:56 It wasn't me this week?
14:21:58 last week
14:22:01 rubasov was bug deputy 2 weeks ago, but there was no meeting last week
14:22:09 ah sorry
14:22:12 LOL
14:22:15 so today I want to ask about the reports from the last 2 weeks :)
14:22:21 hehehehe
14:22:37 any bugs You want to discuss from those reports?
14:22:40 IIRC every bug in my report was taken by somebody
14:23:18 thx rubasov
14:23:22 just one: https://bugs.launchpad.net/neutron/+bug/1935959. But 1 hour ago someone told me that it could be solved in core OVN
14:23:45 this is the OVN bug: https://bugzilla.redhat.com/show_bug.cgi?id=1929901
14:23:51 that would be great for us
14:24:48 ralonsoh: can You add a comment about it in LP then?
14:24:53 sure
14:24:56 thx a lot
14:25:23 I would also like to discuss a few bugs today :)
14:25:25 first one:
14:25:36 https://bugs.launchpad.net/neutron/+bug/1934957 - it's sriov related
14:26:02 it seems easy to fix in code, but the question is if we really want to add a config option for that in neutron
14:27:12 IIUC in new versions of the i350 driver it should work fine and we shouldn't raise an exception there, is that correct?
14:27:51 that's kind of why I offered the fix to the reporter, it's probably only him/her who needs this
14:28:21 the intel ticket I saw talked about it as something not yet implemented
14:28:31 but maybe I missed a newer release note
14:28:53 so if it's not implemented, shouldn't we simply not care about that error at all?
14:28:58 slaweq, this is part of the standard ip link API
14:29:06 by default, the command should not fail
14:29:19 same as with some cards when setting the min BW QoS
14:30:08 ok, but in that case it fails due to the intel driver, right?
14:30:21 looks like it, yes
14:30:29 the root cause is definitely in the intel driver
14:30:39 shouldn't we simply handle such an exception and log e.g. a warning if that happens?
14:30:53 to avoid such an ugly stack trace in the logs
14:31:02 that's the point: that should never happen
14:31:23 this is a basic command, root should always be capable of setting the VF state
14:31:41 it's like being capable of setting a device status, UP or DOWN
14:31:55 ok, so IMO we shouldn't do anything with that on our side then
14:32:01 right
14:32:05 it's a bug in the Intel driver and should be fixed there
14:33:02 with that I would say to close it on our side, we shouldn't IMO add another config option just to work around an intel bug
14:33:07 wdyt?
14:33:20 for me that's okay
14:33:35 no, we should not add a new config knob for this
14:33:56 ok, I will update LP with what we discussed here
14:33:58 thx
14:33:59 sounds reasonable to me. the driver author should fix such a bug as long as the nic is supported.
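
(For context on the ip link discussion: setting a VF admin state is a standard iproute2 operation that root should always be able to perform; the i350 driver returning an error is what produced the stack trace in the bug. A minimal sketch of the operation, not Neutron's actual agent code; the device name and VF index are placeholders.)

    # Sketch of the SR-IOV VF link-state change discussed above; not
    # Neutron's actual code. PF device name and VF index are placeholders.
    import subprocess

    def set_vf_state(pf_dev: str, vf_index: int, enable: bool) -> None:
        # iproute2 syntax: ip link set dev <PF> vf <N> state auto|enable|disable
        state = 'enable' if enable else 'disable'
        subprocess.run(
            ['ip', 'link', 'set', 'dev', pf_dev,
             'vf', str(vf_index), 'state', state],
            check=True)  # a buggy driver makes this raise CalledProcessError

    # e.g.: set_vf_state('enp5s0f0', 0, True)  # needs root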
14:34:26 let's move to the next one
14:34:41 https://bugs.launchpad.net/neutron/+bug/1934666 - shushuda wants to talk about it today and he added it to the on demand section
14:34:50 but I think we can discuss it here as well
14:34:57 also ddeja and Labedz are here to talk about it
14:35:01 Takashi Kajinami proposed openstack/python-neutronclient master: DNM: Test functional tests https://review.opendev.org/c/openstack/python-neutronclient/+/801496
14:35:19 yup
14:35:46 the question is simple - do/should we support dvr-snat mode on compute nodes?
14:36:11 this is not explicitly defined in the docs
14:36:30 but there is no place in the docs where we allow it
14:36:34 that means:
14:37:00 a network node is just for routing and networking stuff, it won't run compute agents
14:37:16 https://docs.openstack.org/neutron/pike/admin/deploy-ovs-ha-dvr.html
14:37:16 that's also how I understood Liu's answer, that today it's not supported, but our docs are not clear about this
14:38:28 Is it not supported by design, or is it just not working/not tested for now?
14:38:45 or a bug? :)
14:38:53 both, it was neither designed for it nor tested
14:39:09 or at least nobody considered this option
14:39:09 What is preventing us from letting compute nodes run snat stuff as well? As in, why not?
14:40:27 Mamatisa Nurmatov proposed openstack/neutron master: use payloads for PORT and FLOATING_IP https://review.opendev.org/c/openstack/neutron/+/800604
14:41:02 IIRC such a case was not considered well in the design. the number of compute nodes is much larger than the number of network nodes, and the dvr-snat l3-agent is used to handle SNAT, which cannot be distributed.
14:41:13 I think that Liu mentioned a couple of potential issues with it in https://bugs.launchpad.net/neutron/+bug/1934666/comments/5
14:42:17 also the commit which Liu mentioned talks about some issues with spawning radvd on all backup nodes https://github.com/openstack/neutron/commit/2f9b0ce940099bcc82d2940b99bdc387db22d6fc
14:42:34 so it was done like that on purpose
14:42:56 exactly, that was done on purpose
14:42:58 still, will the decision be that they should stay separate in the (close) future?
14:43:10 considering services should be independent
14:43:30 that would make an exception in the design
14:43:41 the services are independent, not the underlying network
14:44:07 so the network node is not expecting packets from inside
14:44:18 The patch I have proposed affects only DVR HA routers, as in it spawns radvd only for DVR HA, not for legacy HA. Since regular DVR spawns radvd for dvr local routers anyway, would it affect much to also spawn it for backup routers?
14:45:26 I didn't explore that topic a lot, but IMO it would bring back the issues described in https://github.com/openstack/neutron/commit/2f9b0ce940099bcc82d2940b99bdc387db22d6fc for dvr-ha
14:47:42 ralonsoh: services are independent, not the underlying network - for me that's kind of the same
14:48:24 but one question - should radvd be spawned in the qrouter namespace on all computes to make it work?
14:48:24 it is not like we are insisting - it's just a matter of the future and plans for deployments
14:48:45 or only in the snat namespaces (or snat nodes)?
14:50:32 ok, sorry
14:50:36 there was no question
14:52:38 so I guess the conclusion for now is: do not put l3 dvr_snat on the same hosts as computes
14:52:48 ddeja: yes
14:52:56 I think so too :)
14:53:08 maybe let's note it in the docs?
14:53:17 +1
14:53:18 Labedz: we should
14:53:23 to not have confusion in the future
14:53:50 just a point: from the operator point of view it may be an additional cost for infra
14:54:31 but the number of compute nodes is much larger than the number of network nodes, so wouldn't the extra cost be small?
14:55:01 TBH we will see in the close future :)
14:55:32 Labedz: if there is a use case for that, I'm fine to explore it more, but we should do it carefully and check potential issues which may be there, like e.g. those mentioned by Liu in that LP
14:56:16 slaweq: sure, we will come back with feedback when we gain some experience
14:56:55 ok, I will update that bug with a summary of what we discussed today
14:57:00 thx for bringing it up
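
(To make the conclusion concrete: the role split is expressed through the l3 agent's agent_mode option in l3_agent.ini, so "do not put l3 dvr_snat on the same hosts as computes" boils down to keeping the two values below on separate node roles. A minimal sketch; everything except agent_mode is omitted.)

    # l3_agent.ini on dedicated network nodes: distributed routing
    # plus the centralized SNAT namespaces
    [DEFAULT]
    agent_mode = dvr_snat

    # l3_agent.ini on compute nodes: distributed routing only,
    # no SNAT namespaces
    [DEFAULT]
    agent_mode = dvr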
14:57:40 the bug deputy this week is lucasagomes and he's aware of that
14:57:46 next week it will be jlibosva
14:57:53 o/
14:57:56 o/
14:58:23 one last thing to mention: I will propose a new neutron-lib release this week
14:58:36 so if You have something which You would like to include there, please let me know asap
14:58:56 so we can prioritize review and land such patch(es) quickly :)
14:59:17 so far there is only https://review.opendev.org/c/openstack/neutron-lib/+/747523 on my list
14:59:46 and we are out of time
14:59:51 thx for attending the meeting today
14:59:55 bye
14:59:58 slaweq: https://review.opendev.org/c/openstack/neutron-lib/+/801084 may be worth it (removing a deprecation notice)
15:00:00 remember - next week I will cancel this meeting
15:00:09 thx bcafarel
15:00:11 o/
15:00:15 #endmeeting