14:05:02 #startmeeting neutron_drivers
14:05:02 Meeting started Fri Sep 10 14:05:02 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:05:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:05:02 The meeting name has been set to 'neutron_drivers'
14:05:06 #chair lajoskatona
14:05:06 Current chairs: lajoskatona slaweq
14:05:09 o/
14:05:10 hi
14:05:13 o/
14:05:15 hi
14:05:20 hi
14:05:49 ok, I will do it today for the last time :)
14:05:53 no worries
14:06:17 ok, I think we can start
14:06:20 #topic RFEs
14:06:29 we have 3 RFEs for today
14:06:34 first one
14:06:38 https://bugs.launchpad.net/neutron/+bug/1830014
14:07:28 For that I found an old merged spec: https://review.opendev.org/c/openstack/neutron-specs/+/662541
14:07:35 this is an old RFE which I thought would be good to get back to, and finally decide whether we want to approve it (as an idea) or not
14:07:51 lajoskatona: this spec isn't merged
14:08:08 oh, bad link: https://review.opendev.org/c/openstack/neutron-specs/+/308973
14:08:23 it's something similar
14:08:53 and it seems to be an old topic
14:09:25 but the idea here is a bit different, IIUC
14:09:35 I just quickly looked at that old spec
14:09:58 the old spec focuses on diagnostics of Neutron resources like agents, networks or subnets
14:10:24 and liuyulong's proposal now is to have a "probe" to locally check connectivity to the instance
14:10:30 that's my understanding at least
14:11:13 yeah, that's a difference, that's true
14:12:15 whose CI problems are we trying to address with this? ours?
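The "probe" idea described above — checking from the compute host whether a service in the guest is reachable — could be sketched roughly like this. This is only a minimal illustration of the concept; the IP address, port, and function are made up and are not part of any proposed Neutron API:

```python
# Rough sketch of the kind of local connectivity "probe" discussed above:
# run on the compute node hosting the VM, it checks whether a TCP service
# (e.g. SSH) in the guest is reachable from that host. The address and
# port below are purely illustrative.
import socket


def check_tcp_service(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    vm_ip = "192.0.2.10"  # illustrative guest address (TEST-NET-1 range)
    state = "reachable" if check_tcp_service(vm_ip, 22) else "unreachable"
    print(f"SSH on {vm_ip}: {state}")
```

As the discussion below notes, such a check only tells you whether the guest answers on a given port; it says nothing about where in the Neutron data path connectivity is broken.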
14:12:38 TBH I have a problem with liuyulong's proposal, as IMO it would actually do very little diagnosis
14:13:06 it can only check whether the VM is configured properly in the guest OS - it will really tell nothing about where connectivity is broken
14:13:34 please, let's focus on one spec
14:13:38 both are very different
14:13:40 mlavalle: TBH the original issue with SSH not being ready in our CI was solved/worked around some time ago with a simple patch which checks the console log of the VM
14:13:58 so the original use case given in that spec is not a problem anymore, IMO
14:14:32 that means spec 308973 is not relevant anymore?
14:15:08 not for this conversation
14:16:50 ralonsoh: when I was talking about the original use case, I meant the one described in Liu's spec: https://review.opendev.org/c/openstack/neutron-specs/+/662541
14:16:57 I didn't even read the old one :)
14:17:43 slaweq, ok. About Liu's spec, I think Skydive can provide this functionality
14:18:21 more or less
14:18:35 ralonsoh, will you please share a link?
14:18:45 obondarev, http://skydive.network/
14:19:00 thanks!
14:19:28 the only thing Liu's proposal can address is checking whether some service, e.g. SSH, works on the guest VM, or whether security groups are OK or not
14:19:49 as he wants to install the probe on the node where the VM is placed
14:20:31 to me it sounds like something similar to Tempest: like some subset of tests that could be run easily
14:20:48 with pings, ssh, etc.
14:21:11 obondarev: kind of, but it's for "real" VMs
14:21:20 so this is basically a probe with a set of tools
14:21:26 I can imagine this could be useful, e.g. for a support team at a public cloud company
14:22:19 I'm ok if, in the spec, we specify what the implementation will be able to do
14:22:24 maybe if the probe could be manually placed on any host, that could be useful - someone from support could then e.g. install the probe on the host where the customer's router is and check whether there is connectivity from there to the VM
14:24:03 the use case described in the RFE is upstream CI. you, lajoskatona, ralonsoh, slaweq are the prime use case targets. Do you find this useful?
14:24:17 described in the RFE and the spec ^^^^
14:24:33 the CI has its own tools; if this is the target, no
14:24:43 mlavalle: as I said, it isn't really needed in the CI now
14:25:25 ok, then we can speculate about possible use cases... but if we don't have users representing those use cases demanding this, why do it?
14:25:39 yeah, if I understand well, we would have to add / change existing tests to use the API and install the probe on test hosts
14:26:15 was the original debug tool only used in CI, or in production environments also?
14:27:37 additional lines of code add to the maintenance challenge (entropy) of the project. Why add to it if we don't have a clear use case?
14:28:45 mlavalle: I agree with you 100%
14:29:05 also, it could be proposed as a separate project if it is really useful for some use cases
14:29:20 but for now, I tend to vote -1 on this RFE
14:29:37 in fact Yulong said something about this back in January, looking at the spec
14:29:58 so I don't think it's a pressing issue even for him, on the evidence we have
14:31:21 true, this has been sitting there for a very long time :)
14:31:40 OpenStack Release Bot proposed openstack/neutron-lib stable/xena: Update .gitreview for stable/xena https://review.opendev.org/c/openstack/neutron-lib/+/808228
14:31:42 OpenStack Release Bot proposed openstack/neutron-lib stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/neutron-lib/+/808230
14:31:45 OpenStack Release Bot proposed openstack/neutron-lib master: Update master for stable/xena https://review.opendev.org/c/openstack/neutron-lib/+/808234
14:31:49 OpenStack Release Bot proposed openstack/neutron-lib master: Add Python3 yoga unit tests https://review.opendev.org/c/openstack/neutron-lib/+/808236
14:32:04 OpenStack Release Bot proposed openstack/os-ken stable/xena: Update .gitreview for stable/xena https://review.opendev.org/c/openstack/os-ken/+/808251
14:32:06 OpenStack Release Bot proposed openstack/os-ken stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/os-ken/+/808254
14:32:07 OpenStack Release Bot proposed openstack/os-ken master: Update master for stable/xena https://review.opendev.org/c/openstack/os-ken/+/808255
14:32:08 OpenStack Release Bot proposed openstack/os-ken master: Add Python3 yoga unit tests https://review.opendev.org/c/openstack/os-ken/+/808256
14:32:21 OpenStack Release Bot proposed openstack/ovsdbapp stable/xena: Update .gitreview for stable/xena https://review.opendev.org/c/openstack/ovsdbapp/+/808269
14:32:25 OpenStack Release Bot proposed openstack/ovsdbapp stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/ovsdbapp/+/808273
14:32:37 OpenStack Release Bot proposed openstack/ovsdbapp master: Update master for stable/xena https://review.opendev.org/c/openstack/ovsdbapp/+/808276
14:32:40 OpenStack Release Bot proposed openstack/ovsdbapp master: Add Python3 yoga unit tests https://review.opendev.org/c/openstack/ovsdbapp/+/808278
14:32:40 agree, it's not something that is a must in the upstream CI
14:33:05 OpenStack Release Bot proposed openstack/python-neutronclient stable/xena: Update .gitreview for stable/xena https://review.opendev.org/c/openstack/python-neutronclient/+/808304
14:33:21 are we ready to vote on this one?
14:33:28 -1
14:33:28 I'm -1 on that RFE
14:33:31 -1
14:33:40 -1
14:33:43 -1
14:33:48 thx
14:33:52 OpenStack Release Bot proposed openstack/python-neutronclient stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/python-neutronclient/+/808323
14:33:53 OpenStack Release Bot proposed openstack/python-neutronclient master: Update master for stable/xena https://review.opendev.org/c/openstack/python-neutronclient/+/808324
14:33:55 OpenStack Release Bot proposed openstack/python-neutronclient master: Add Python3 yoga unit tests https://review.opendev.org/c/openstack/python-neutronclient/+/808325
14:33:59 I will mark it as declined after the meeting
14:34:05 let's move on to the next one
14:34:15 https://bugs.launchpad.net/neutron/+bug/1916428
14:34:25 this is also an old one
14:34:48 not even a year old :-)
14:34:50 and TBH I'm not even sure we should treat it as a new RFE
14:34:57 this is a hard nut to crack
14:35:18 it's not about a new feature but a new driver for PD
14:35:50 so in general I'm totally fine with a new driver, especially as the dibbler client is not maintained anymore
14:36:36 yes, having a second driver would be perfect
14:37:24 so my only concern is about which driver it should be
14:37:43 I don't know that software very well, but I saw that lajoskatona did some research there
14:38:25 yeah, I just checked how we use dibbler today and whether we can have something similar with Kea
14:39:14 as I remember we use external scripts, and there is something similar called hooks in Kea
14:40:24 has the license issue you raised in the RFE been sorted out?
14:40:36 and another question is that Kea has many features while for Neutron we only need PD; another is package support - for Ubuntu 20 the hook-supporting version is not included
14:41:03 mlavalle: nope, for that we have to ask
14:41:28 I suppose OpenStack has somebody who has experience with these legal things
14:43:55 I agree that we should choose the backend software for the new driver wisely
14:44:18 Merged openstack/neutron-lib master: Add API shim extension "quota-check-limit" https://review.opendev.org/c/openstack/neutron-lib/+/807876
14:44:20 but TBH for me the RFE is about "having a new driver", and in that sense I'm ok with approving it
14:44:27 wdyt?
14:44:34 +1, we need it
14:44:57 +1
14:44:57 well, if we don't fix it, it will come back and bite us eventually, so yeah, +1
14:45:15 +1
14:45:33 actually, if we don't get a new driver we will probably need to drop support for PD at some point
14:45:38 ok, thx for voting
14:46:03 I will mark the RFE as approved, and we can continue the discussion there to choose the proper backend software for the new driver
14:46:26 ok, last one for today
14:46:27 https://bugs.launchpad.net/neutron/+bug/1870319
14:46:32 this is also something old :)
14:47:00 (I did some cleaning for lajoskatona :))
14:47:12 slaweq: thanks
14:47:17 it's on me, but I didn't have time for it earlier
14:47:34 basically it came from the Kuryr team, who would like to have such a feature
14:48:22 as for amotoki's questions, I don't think bulk port deletion would completely solve the issue for them
14:48:46 but in fact it can be a step towards the final goal, which is cascade network deletion
14:49:22 as you commented in the bug in c#3: "what response should be in case of partial failure?"
14:49:33 ralonsoh: that's a good question
14:49:36 because it is easy to retrieve the ports to be deleted
14:49:44 and send the bulk port deletion
14:49:48 I recently spent some time reading about it, and there is no single straightforward way to do that
14:50:12 I checked e.g. how Octavia is doing it, but their API is asynchronous
14:50:35 so if the user does "loadbalancer delete --cascade" it will return 204 immediately
14:50:47 and then will start processing deletion of the resources
14:50:52 that's not the problem, I think, having bulk port deletion
14:50:58 so the user will need to periodically check whether everything is deleted already
14:51:25 for example, in a Heat stack you track the resources created and deleted
14:51:32 ralonsoh, actually for bulk port deletion it would be a similar thing
14:51:35 and you can set this stack as failed or something similar
14:51:41 what if some ports are deleted and others are not?
14:52:19 right, so there should be a detailed output at the end of the command
14:52:39 in case of error, of course
14:52:49 in case of bulk deletion we can return 207 Multi-Status
14:52:53 https://httpstatuses.com/207
14:53:02 and details about each port in the body
14:53:11 right
14:53:25 maybe for cascade network deletion we can do something similar
14:53:30 207 Multi-Status
14:53:35 and then in the body something like
14:54:00 [{"resource": "port", "id": "....", "status": 204}, {....}]
14:54:08 hmm well, I don't think we should use a 200 return status
14:54:12 so we would have everything detailed in the body
14:55:00 ok: "Further elements contain 200, 300, 400, and 500 series status codes generated during the method invocation"
14:55:01 but I would of course write a spec where we can discuss such details
14:55:06 right
14:55:11 that's something for the spec
14:55:32 the easiest thing is to just fail on the first resource (port) delete failure, saying: the following port could not be deleted: "", right?
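The 207 Multi-Status idea floated above — one status entry per child resource in the response body — could look roughly like this. The field names and helper function are hypothetical illustrations of the discussion, not an agreed Neutron API:

```python
# Hypothetical sketch of the 207 Multi-Status body discussed above for
# cascade network deletion: the overall response says "mixed results",
# and the body carries one HTTP-style status per child resource. All
# field names and the helper are illustrative, not a settled Neutron API.
from typing import Any


def build_multistatus_body(results: list[tuple[str, str, int]]) -> dict[str, Any]:
    """Build a Multi-Status-style body from (resource, id, status) tuples."""
    entries = [
        {"resource": rtype, "id": rid, "status": status}
        for rtype, rid, status in results
    ]
    return {"status": 207, "resources": entries}


# Example: two resources deleted, one port still in use, so its delete failed.
body = build_multistatus_body([
    ("port", "a1b2", 204),    # deleted
    ("port", "c3d4", 409),    # conflict - could not be deleted
    ("subnet", "e5f6", 204),  # deleted
])
```

Unlike the fail-on-first approach mentioned in the discussion, a body like this still tells the caller exactly which deletions succeeded before the failure.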
14:55:56 that's also an option, obondarev
14:56:12 but then we would not return what was actually deleted before that port :)
14:56:38 agree this is a discussion for the spec :)
14:56:45 *I agree
14:57:26 I say approve the RFE and hash out the details in the spec
14:57:41 thx mlavalle
14:57:46 I will not vote on this one :)
14:57:57 +1 for the RFE (and waiting for the spec)
14:58:01 +1
14:58:29 +1
14:58:35 ok, thx
14:58:46 so I will mark that as approved too
14:59:02 and that's all from my side for this meeting
14:59:24 next week our new chair will be lajoskatona :)
14:59:46 thx a lot for this meeting and for all the others so far
14:59:50 \o/
15:00:02 it was really great for me to be chair of this meeting for about 2 years
15:00:13 and we are on top of the hour now
15:00:22 o/
15:00:27 o/
15:00:28 have a great weekend and see you all next week
15:00:29 o/
15:00:31 bye
15:00:36 #endmeeting