opendevreview | Merged openstack/neutron stable/wallaby: [stable-only] "_clean_logs_by_target_id" to use old notifications https://review.opendev.org/c/openstack/neutron/+/821563 | 00:17 |
---|---|---|
opendevreview | Ghanshyam proposed openstack/neutron master: Updating python testing as per Yoga testing runtime https://review.opendev.org/c/openstack/neutron/+/819201 | 02:13 |
opendevreview | Ghanshyam proposed openstack/os-vif master: Updating python testing classifier as per Yoga testing runtime https://review.opendev.org/c/openstack/os-vif/+/819204 | 02:20 |
opendevreview | Federico Ressi proposed openstack/neutron master: Change tobiko CI job in the periodic queue https://review.opendev.org/c/openstack/neutron/+/813977 | 08:45 |
bbezak | Hello, has anybody observed this bug? https://bugs.launchpad.net/neutron/+bug/1953510 - it looks pretty important, as the neutron OVN metadata agent cannot correctly reconnect to the new OVN SB raft leader - and OVS 2.16 does automatic leader transfer while taking snapshots. | 09:03 |
ralonsoh | bbezak, is it reproducible just restarting the OVN SB? | 09:08 |
bbezak | ralonsoh: yes, very much so - tested just now | 09:12 |
ralonsoh | bbezak, ok, I'll try to reproduce it locally and debug it | 09:12 |
ralonsoh | thanks for the report | 09:12 |
bbezak | furthermore I can see weird behaviour: old metadata agents that were removed before (like 6 months prior) came back in down status in openstack network agent list. Looks like a different issue I guess | 09:13 |
bbezak | but it is happening at the same time though | 09:14 |
ralonsoh | bbezak, removed how? from the API? the chassis? | 09:14 |
bbezak | removed via openstack network agent delete | 09:14 |
ralonsoh | ok | 09:14 |
ralonsoh | just a heads-up | 09:14 |
bbezak | and of course I stopped its services on the hosts beforehand | 09:14 |
ralonsoh | this is for clean-up purposes | 09:15 |
ralonsoh | that means: when you delete a chassis and the ovn-controller does not end gracefully | 09:15 |
ralonsoh | the chassis register will remain in the SB DB | 09:15 |
ralonsoh | that means Neutron will see it as an active chassis | 09:15 |
ralonsoh | the Neutron API allows removing the controller and metadata agents associated with this chassis | 09:15 |
ralonsoh | but as I said, for clean-up purposes, only when the Chassis and Chassis_Private registers do not exist | 09:16 |
bbezak | right, so what would be the proper procedure for removing old metadata agent entries then? remove them directly within OVN? | 09:18 |
opendevreview | Slawek Kaplonski proposed openstack/neutron-tempest-plugin master: Update irrelevant-files for api/scenario jobs https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821160 | 09:19 |
opendevreview | Slawek Kaplonski proposed openstack/neutron-tempest-plugin master: Update irrelevant-files for stadium project's tests https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821414 | 09:19 |
ralonsoh | bbezak, if you didn't properly stop the ovn-controller, then the metadata and controller agents will still be present in the DB | 09:20 |
ralonsoh | sorry, in the Neutron API | 09:20 |
bbezak | ok | 09:20 |
ralonsoh | you should remove the Chassis and Chassis_Private registers | 09:20 |
ralonsoh | that will trigger the agent deletion | 09:20 |
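ralonsoh's suggestion amounts to something like the following sketch ("compute-1" is a placeholder chassis name, and whether chassis-del also removes the Chassis_Private record can vary by OVN version, hence the explicit destroy at the end):

```shell
chassis="compute-1"          # placeholder: the stale chassis name

# Inspect the leftover records first.
ovn-sbctl list Chassis "$chassis"
ovn-sbctl list Chassis_Private "$chassis"

# Delete the chassis record; Neutron then drops the associated
# controller/metadata agents.
ovn-sbctl chassis-del "$chassis"

# If Chassis_Private survives on your OVN version, remove it with the
# generic destroy command.
ovn-sbctl destroy Chassis_Private "$chassis"
```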
bbezak | ok, but you mean in OVN SB - I need to get rid of only the metadata agent, as this node won't be a hypervisor anymore. So only Chassis_Private to remove then? | 09:31 |
ralonsoh | bbezak, if there are no hypervisors in a compute, then you need to stop the ovn-controller | 09:32 |
ralonsoh | that will remove the Chassis and the Chassis_Private | 09:32 |
ralonsoh | both are created and deleted together | 09:32 |
bbezak | but it is a controller/network node | 09:32 |
ralonsoh | you can't have one without the other | 09:32 |
bbezak | (supposed to be hyper-converged at the beginning) | 09:33 |
ralonsoh | I don't know offhand how to stop having a metadata agent on a node | 09:35 |
ralonsoh | but it doesn't matter | 09:35 |
ralonsoh | that won't affect your architecture | 09:35 |
ralonsoh | and the VMs will retrieve metadata from the local metadata servers | 09:35 |
bbezak | yeah, indeed, but I don't like agents down :), not a huge deal though, first issue is more important | 09:36 |
bbezak | sorry for sidetracking | 09:36 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Remove the expired reservations in a separate DB transaction https://review.opendev.org/c/openstack/neutron/+/821592 | 10:01 |
ralonsoh | slaweq, https://review.opendev.org/c/openstack/neutron/+/818399 | 10:03 |
ralonsoh | if you have a couple of mins to review it | 10:04 |
ralonsoh | thanks in advance | 10:04 |
ralonsoh | ah, and this one: https://review.opendev.org/c/openstack/neutron/+/821271 | 10:04 |
ralonsoh | obondarev, lajoskatona ^^ | 10:04 |
ralonsoh | if you have time | 10:04 |
slaweq | ralonsoh: sure. I will check it in few minutes | 10:10 |
ralonsoh | thanks! | 10:10 |
lajoskatona | ralonsoh: me too :-) | 10:10 |
ralonsoh | thanks! | 10:10 |
slaweq | ralonsoh are You sure that failed test_neutron_status_cli test in patch https://review.opendev.org/c/openstack/neutron/+/818399 is not related to that change? | 10:17 |
ralonsoh | slaweq, hmmm I saw the other one... sorry | 10:18 |
ralonsoh | it is related but I really don't know what happened | 10:18 |
ralonsoh | the DB connection is also used in other checks | 10:18 |
slaweq | maybe it was just some issue with MySQL backend | 10:19 |
slaweq | I will recheck it once again to see if it will be green | 10:19 |
ralonsoh | yes, but a bad coincidence with this new check... | 10:19 |
ralonsoh | as I said, there are other checks that use the DB backend | 10:20 |
ralonsoh | not only this one | 10:20 |
opendevreview | Lajos Katona proposed openstack/neutron-tempest-plugin master: Add logs for test_floatingip_port_details https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821557 | 10:23 |
opendevreview | Lajos Katona proposed openstack/neutron master: Add logs for port_bound_to_host https://review.opendev.org/c/openstack/neutron/+/821599 | 10:27 |
opendevreview | Merged openstack/neutron stable/victoria: [stable-only] "_clean_logs_by_target_id" to use old notifications https://review.opendev.org/c/openstack/neutron/+/821564 | 10:45 |
bbezak | ralonsoh: FYI: just updated bug with debug logs from metadata agent - https://bugs.launchpad.net/neutron/+bug/1953510 | 11:05 |
ralonsoh | thanks | 11:05 |
slaweq | ralonsoh | 11:06 |
slaweq | lajoskatona please check https://review.opendev.org/q/Ia3fbde3304ab5f3c309dc62dbf58274afbcf4614 when You have a few minutes | 11:06 |
slaweq | thx in advance | 11:06 |
ralonsoh | slaweq, sure | 11:06 |
opendevreview | Merged openstack/neutron stable/ussuri: [stable-only] "_clean_logs_by_target_id" to use old notifications https://review.opendev.org/c/openstack/neutron/+/821565 | 11:16 |
*** bluex_ is now known as bluex | 11:17 | |
bluex | I'm getting a warning from _is_session_semantic_violated caused (if I'm not wrong) by (ml2.plugin) delete_port calling (l3_db) disassociate_floatingips, which tries to send an AFTER_UPDATE notification while the parent transaction from delete_port is still active | 11:44 |
bluex | Because of that, FIP notifications from delete_port seem to never be sent | 11:44 |
bluex | Is this known issue? | 11:44 |
ralonsoh | bluex, when is this happening? | 11:45 |
ralonsoh | do you have logs? | 11:45 |
ralonsoh | slaweq, https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821079 can you check my comment there? | 11:45 |
ralonsoh | bluex, what backend are you using? | 11:46 |
ralonsoh | (that should have been my first question) | 11:46 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: [OVN][Placement] Attend to chassis configuration events https://review.opendev.org/c/openstack/neutron/+/821693 | 11:49 |
bluex | ralonsoh, sure give me sec, I'll post relevant logs somewhere | 11:51 |
bluex | as for backend, I'm not exactly sure what backend are you asking about | 11:52 |
ralonsoh | bluex, OVS, OVN, linux bridge... | 11:52 |
bluex | OVS | 11:53 |
bluex | btw https://paste.openstack.org/ gives me a 502, is there another recommended paste platform? | 11:55 |
ralonsoh | bluex, use this | 11:56 |
ralonsoh | https://etherpad.opendev.org/p/problem_fips | 11:56 |
slaweq | ralonsoh I just replied | 12:00 |
slaweq | I like Your idea of combining both tests there | 12:00 |
slaweq | thx | 12:00 |
ralonsoh | thanks! | 12:00 |
ykarel | many neutron jobs are red https://zuul.openstack.org/status | 12:01 |
ykarel | likely some infra issue, checking | 12:01 |
bluex | ralonsoh, relevant section of logs pasted | 12:01 |
bluex | A solution I'm thinking about is: no longer checking _is_session_semantic_violated when notification is published and instead performing open transaction check directly in _ObjectChangeHandler.dispatch_events | 12:02 |
bluex | Then if transaction is still ongoing we can put notification back on queue with some delay (1s for example) instead of discarding it. | 12:02 |
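bluex's requeue idea can be sketched roughly like this (ChangeHandler and FakeSession are toy names for illustration, not the actual neutron _ObjectChangeHandler API):

```python
import collections


class ChangeHandler:
    """Toy model of the dispatch loop; not the real _ObjectChangeHandler."""

    def __init__(self):
        self._pending = collections.deque()
        self.sent = []

    def enqueue(self, session, event):
        self._pending.append((session, event))

    def dispatch_events(self):
        # One pass of the loop running in the external worker thread.
        for _ in range(len(self._pending)):
            session, event = self._pending.popleft()
            if session.in_transaction:
                # Parent transaction still open: requeue for a later pass
                # instead of discarding the notification.
                self._pending.append((session, event))
            else:
                self.sent.append(event)


class FakeSession:
    """Stand-in for a DB session; only tracks transaction state."""

    def __init__(self, in_transaction):
        self.in_transaction = in_transaction
```

The point of the sketch is only that a notification raised inside an open transaction is deferred, not dropped, and goes out on a later loop iteration once the transaction closes.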
ralonsoh | bluex, where are the logs? | 12:03 |
ykarel | ok, checked: pip install fails for various modules, and it seems only a few providers are impacted, so jobs are going to fail unless it's fixed | 12:07 |
ykarel | will check which providers are impacted and will ping infra to get it cleared | 12:08 |
ykarel | ralonsoh, slaweq fyi ^ | 12:08 |
ralonsoh | I'm aware, yes | 12:09 |
ralonsoh | could be just a problem in pip mirror | 12:09 |
ralonsoh | that happens sometimes | 12:09 |
slaweq | thx ykarel for info and for taking care of it | 12:10 |
bluex | ralonsoh, sorry, etherpad didn't appreciate having to paste >300 lines of logs so I pasted only traceback (10 lines at once) for now | 12:14 |
ralonsoh | bluex, but which is the method that is handling this event and raising this exception? | 12:14 |
ralonsoh | that's the key point here | 12:14 |
ralonsoh | and we don't see that in the logs | 12:14 |
bluex | ok, added some previous logs | 12:19 |
bluex | traceback comes from plugins.ml2.ovo_rpc._ObjectChangeHandler._is_session_semantic_violated | 12:19 |
bluex | while plugins.ml2.plugin.Ml2Plugin.delete_port is being executed | 12:20 |
bluex | I should also mention it's running on an older version of the stable/stein branch, but the relevant code is the same as on master | 12:38 |
lajoskatona | ykarel: thanks for the headsup | 12:47 |
ralonsoh | bluex, what is networking_ovh? | 12:50 |
ralonsoh | this is not the Neutron ml2 plugin | 12:50 |
ralonsoh | this is your own plugin | 12:50 |
ykarel | lajoskatona, ralonsoh slaweq the identified pip modules on the affected providers have been refreshed for now | 12:51 |
ykarel | if it still happens for others need to reach to #opendev with details | 12:51 |
lajoskatona | ykarel: thanks, keep an eye on it :-) | 12:52 |
opendevreview | Lajos Katona proposed openstack/neutron master: Fix stackalytics' link https://review.opendev.org/c/openstack/neutron/+/821704 | 13:03 |
bluex | ralonsoh, yup, it's a proprietary OVH plugin extending some parts of Neutron; in the logs provided you can assume it is neutron instead of networking_ovh, since all of the code mentioned so far comes directly from neutron | 13:10 |
opendevreview | Luis Tomas Bolivar proposed openstack/neutron master: Use Port_Binding up column to set Neutron port status https://review.opendev.org/c/openstack/neutron/+/821544 | 13:18 |
ralonsoh | bluex, right, this is a bug. Please, can you report it? in launchpad | 13:37 |
ralonsoh | bluex, but please, identify the methods that this handler is calling | 13:38 |
bluex | sure | 13:40 |
bluex | btw any thoughts about solution I suggested? | 13:41 |
ralonsoh | to add a delay in any operation? No, not at all, under any circumstances | 13:42 |
bluex | it won't be a real delay (like a sleep); since we have a loop in dispatch_events in an external thread, we can use it to essentially send the notification after some time has passed | 13:44 |
bluex | I was more worried about no longer performing _is_session_semantic_violated check | 13:45 |
opendevreview | Maor Blaustein proposed openstack/neutron-tempest-plugin master: Add test_create_and_update_port_with_dns_name https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821079 | 13:48 |
bluex | * plugins.ml2.ovo_rpc._ObjectChangeHandler.dispatch_events | 13:48 |
bluex | but I don't see any reason why we should drop AFTER notifications when the transaction is not finished | 13:50 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Remove the expired reservations in a separate DB transaction https://review.opendev.org/c/openstack/neutron/+/821592 | 13:53 |
opendevreview | Bence Romsics proposed openstack/neutron master: Make the dead vlan actually dead https://review.opendev.org/c/openstack/neutron/+/820897 | 14:00 |
lajoskatona | #startmeeting networking | 14:00 |
opendevmeet | Meeting started Tue Dec 14 14:00:46 2021 UTC and is due to finish in 60 minutes. The chair is lajoskatona. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
opendevmeet | The meeting name has been set to 'networking' | 14:00 |
mlavalle | o/ | 14:00 |
lajoskatona | o/ | 14:00 |
slaweq | hi | 14:00 |
frickler | \o | 14:00 |
amotoki | hi | 14:00 |
isabek | Hi | 14:00 |
njohnston | o/ | 14:00 |
manub | hi | 14:01 |
damiandabrowski[m] | hi! | 14:01 |
obondarev | hi | 14:01 |
lajoskatona | #topic Announcements | 14:02 |
lajoskatona | The usual reminder for Yoga cycle calendar https://releases.openstack.org/yoga/schedule.html | 14:02 |
rubasov | o/ | 14:02 |
bcafarel | o/ | 14:02 |
lajoskatona | Last week the TC organized meeting series was started for operator pain points | 14:02 |
lajoskatona | #link http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026169.html | 14:03 |
lajoskatona | summary of the meeting from Rico: #link http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026245.html | 14:03 |
lajoskatona | and there is a nice etherpad: https://etherpad.opendev.org/p/pain-point-elimination | 14:03 |
ralonsoh | hi | 14:04 |
lajoskatona | hm, I can send the link to the line from where Neutron is mentioned: https://etherpad.opendev.org/p/pain-point-elimination#L260 | 14:04 |
lajoskatona | 4 topics were mentioned for neutron: | 14:05 |
slaweq | I created one LP bug from that etherpad last week | 14:05 |
slaweq | I think that ykarel is working on it now | 14:05 |
lajoskatona | neutron (via python calls) and OVN (via C calls) can have different ideas about what the hostname is, particularly if a deployer (rightly or wrongly) sets their hostname to be an FQDN | 14:05 |
lajoskatona | slaweq: thanks | 14:06 |
amotoki | https://bugs.launchpad.net/neutron/+bug/1953716 this is a bug slaweq filed | 14:06 |
lajoskatona | yes and we have another for "full" solution for network cascade deletion: https://bugs.launchpad.net/neutron/+bug/1870319 | 14:06 |
ykarel | o/ | 14:07 |
lajoskatona | The next one from the list is more an opinion: OpenVSwitch Agent excess logging at INFO level | 14:08 |
ralonsoh | this is the opposite to what we did 2 years ago | 14:08 |
slaweq | exactly | 14:08 |
ralonsoh | moving the debug messages to INFO | 14:08 |
lajoskatona | exactly :-) | 14:08 |
slaweq | Personally I think it's better how it's now | 14:08 |
ralonsoh | I have the same opinion | 14:08 |
ralonsoh | (maybe there is something that could improve) | 14:08 |
lajoskatona | perhaps we can do something which stops in the middle :-) | 14:09 |
lajoskatona | half info half debug :-) | 14:09 |
ralonsoh | that's an option, yes | 14:09 |
slaweq | for me personally it's a very hard issue, because when all works fine You hardly need any logs | 14:09 |
slaweq | but if something is going wrong, You basically need to have debug enabled always | 14:09 |
slaweq | so in case of the problem, our current INFO logging isn't even enough for me :) | 14:10 |
lajoskatona | yeah, it's more of a feedback that the agent loop is ok | 14:11 |
amotoki | from my operator experience, it would be nice if INFO messages mentioned what goes well and what goes wrong. Detail is not needed; DEBUG should help us in such cases. | 14:11 |
amotoki | I am not sure the loop message in ovs-agent really helps | 14:12 |
frickler | would it be an option to move some logs to WARNING, so that deployers getting too much log can turn off INFO? | 14:13 |
lajoskatona | amotoki: thanks | 14:13 |
lajoskatona | from an operator/deployer perspective it's an extra pain to have different (log) settings just for the ovs-agent, for example, and the easiest way from their perspective is to change the code :-) | 14:15 |
amotoki | I think we can start from the logging guideline https://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html | 14:16 |
amotoki | interpretation of the log levels might be different per person | 14:16 |
lajoskatona | amotoki: good that you mentioned, the guideline was mentioned also, but not sure if it is up-to-date | 14:17 |
lajoskatona | and another point for OVN: Spoofing of DNS responses seems very wrong behavior | 14:17 |
frickler | amotoki: from that, a lot of errors should likely only be warnings | 14:18 |
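The guideline amotoki links boils down to something like this sketch (the function and logger names are made up, not actual neutron code): keep per-item noise at DEBUG and emit a single operator-facing INFO line per state change or loop.

```python
import logging

LOG = logging.getLogger("neutron.agent.example")  # illustrative logger name


def process_ports(ports):
    # Per-item detail stays at DEBUG: only needed when troubleshooting.
    for port in ports:
        LOG.debug("processing port %s", port)
    # One INFO summary per loop: tells the operator the agent made progress.
    LOG.info("agent loop finished: %d ports processed", len(ports))
    return len(ports)
```

With that split, an operator running at INFO sees one line per loop instead of one per port, and enabling DEBUG restores the detail.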
lajoskatona | to tell the truth I don't know the details of it, because the meeting had finished, and I'm not sure if there was anybody with insight into this one | 14:18 |
* slaweq will be back in 5 minutes | 14:19 | |
ralonsoh | lajoskatona, is there a LP bug? | 14:20 |
ralonsoh | for the OVN DNS issue | 14:20 |
amotoki | we can add the labels defined at https://etherpad.opendev.org/p/pain-point-elimination#L10 | 14:21 |
lajoskatona | ralonsoh: nothing: https://etherpad.opendev.org/p/pain-point-elimination#L279 | 14:21 |
ralonsoh | ok | 14:21 |
amotoki | if we need more info, we can add the INFO NEEDED label | 14:21 |
lajoskatona | amotoki: I added now | 14:22 |
amotoki | thanks | 14:22 |
lajoskatona | for logging TC will work on it to have fresh logging guidelines, I just saw in the etherpad | 14:23 |
njohnston | I know a bit about OVN DNS spoofing, but I am not an expert. I think dalvarez knows more. | 14:24 |
dalvarez | hi, can i help somehow? o/ | 14:24 |
lajoskatona | njohnston, dalvarez: Hi, there's an "operator pain points" etherpad and related discussion | 14:25 |
lajoskatona | njohnston, dalvarez: and one point is: "OVN: Spoofing of DNS responses seems very wrong behavior" without more details | 14:25 |
lajoskatona | #link https://etherpad.opendev.org/p/pain-point-elimination#L279 | 14:25 |
lajoskatona | if you can think about it please check the etherpad if this issue is relevant at all | 14:26 |
njohnston | So when OVN sees a DNS request originating on a local net, it captures it and compares it against its local cache, answers it if it can, and allows it to proceed out via normal routing if there is no match | 14:27 |
dalvarez | lajoskatona: ack, i agree we need more info. DNS is not really spoofing anything but acting as a DNS server itself; not a full-fledged server though, so we may be missing some options or maybe it's not fully compliant with the standard... :? | 14:28 |
njohnston | that causes a change in functionality for private networks that don't have egress; in ML2/OVS they could resolve DNS via the local dnsmasq but in OVN it is not possible | 14:28 |
njohnston | ^ s/resolve DNS/resolve external DNS/ | 14:28 |
njohnston | where external DNS includes Designate | 14:28 |
ralonsoh | right, this is a feature gap, OVN still doesn't support external DNS servers | 14:29 |
ralonsoh | njohnston, is there a LP bug? | 14:30 |
lajoskatona | johnston, ralonsoh, dalvarez: thanks for the explanation | 14:30 |
njohnston | I'll have to go back and check, we last talked about it maybe 6-8 months ago | 14:30 |
frickler | IIUC the issue is more that the instance sends a query to e.g. 8.8.8.8, but the query never reaches that server, instead OVN creates a fake response that looks like 8.8.8.8 sent it | 14:31 |
njohnston | ralonsoh: https://bugs.launchpad.net/neutron/+bug/1902950 | 14:32 |
ralonsoh | njohnston, thanks! | 14:32 |
lajoskatona | I will add these insights to the etherpad, check if we have a gap written for it, and we can continue the discussion under a LP if there's already one | 14:32 |
opendevreview | Maor Blaustein proposed openstack/neutron-tempest-plugin master: Add test_create_and_update_port_with_dns_name https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821079 | 14:33 |
lajoskatona | That's all from the meeting last week | 14:33 |
lajoskatona | #topic Community Goals | 14:34 |
lajoskatona | it was again something the TC worked on and discussed: #link http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026280.html (look for "Community-wide goal updates") | 14:34 |
lajoskatona | mail from gmann: #link http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026218.html | 14:34 |
lajoskatona | Current Selected (Active) Goals: | 14:35 |
lajoskatona | Migrate from oslo.rootwrap to oslo.privsep | 14:35 |
lajoskatona | Consistent and Secure Default RBAC | 14:35 |
slaweq | regarding Secure RBAC I proposed patch with update of the policies: https://review.opendev.org/c/openstack/neutron/+/821208 | 14:35 |
slaweq | I forgot to reply to ralonsoh's comments there | 14:35 |
ralonsoh | regarding the first one, privsep, I think we are done | 14:36 |
lajoskatona | with privsep migration we are good, I think we even finished stadium project migration | 14:36 |
ralonsoh | we still use rootwrap to spawn the privsep daemons | 14:36 |
ralonsoh | this is like some kind of "inception" | 14:36 |
ralonsoh | I would remove this part | 14:36 |
ralonsoh | removing any dependency from rootwrap | 14:36 |
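The rootwrap-free setup ralonsoh describes can be sketched as an oslo.privsep configuration that points helper_command at plain sudo instead of a rootwrap-wrapped helper (the section name and paths below are illustrative and deployment-specific):

```ini
# Illustrative only: the exact section name depends on the privsep context.
[privsep]
# Spawn privsep-helper directly via sudo instead of going through
# oslo.rootwrap (the default helper_command in many projects wraps it
# in a rootwrap call).
helper_command = sudo privsep-helper --config-file /etc/neutron/neutron.conf
```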
lajoskatona | ralonsoh, slaweq: thanks | 14:36 |
lajoskatona | This was more a headsup (as we are progressing well /finished these) that we have these 2 goals for this cycle again | 14:39 |
lajoskatona | #topic Bugs | 14:40 |
lajoskatona | amotoki was bug deputy last week: http://lists.openstack.org/pipermail/openstack-discuss/2021-December/026335.html | 14:40 |
lajoskatona | As I checked there are a few unassigned bugs: | 14:41 |
lajoskatona | #link https://bugs.launchpad.net/neutron/+bug/1954384 | 14:41 |
lajoskatona | this one again a DNS related bug | 14:42 |
ralonsoh | I think we had the same discussion last week | 14:42 |
ralonsoh | this is for designate | 14:42 |
frickler | I can look into that | 14:42 |
amotoki | but it is a case with linux bridge (not OVN) | 14:42 |
ralonsoh | if you don't use it, then this dns-domain name should be the same as the VM name | 14:43 |
ralonsoh | I'll find the LP bug I'm talking about | 14:43 |
lajoskatona | frickler, amotoki, ralonsoh: thanks | 14:44 |
njohnston | I wonder if this behavior change in Nova may affect things: https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/configurable-instance-hostnames.html | 14:45 |
njohnston | In any event, it was not something I was aware of until recently, so good to boost exposure within the team | 14:45 |
opendevreview | Merged openstack/neutron-tempest-plugin master: Add test_create_port_with_dns_name https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/820456 | 14:45 |
lajoskatona | amotoki: do you have other highlights from last week's bugs? | 14:46 |
amotoki | one thing is https://bugs.launchpad.net/neutron/+bug/1953478 | 14:47 |
amotoki | it is related to the timing of the vif-plugged event. | 14:47 |
amotoki | we need to clarify when we send the event and what is expected from both nova and neutron perspective. | 14:48 |
amotoki | in addition, two bugs on OVN metadata agent. it would be nice if folks familiar with OVN can check them. | 14:48 |
ralonsoh | I'm checking the issues with the OVN metadata agent | 14:49 |
lajoskatona | https://bugs.launchpad.net/neutron/+bug/1953510 & https://bugs.launchpad.net/neutron/+bug/1953295 I think | 14:49 |
amotoki | lajoskatona: correct. thanks | 14:50 |
ralonsoh | at least the first one (I'll assign it to me) | 14:50 |
damiandabrowski[m] | I would really appreciate a helping hand here: https://bugs.launchpad.net/neutron/+bug/1952907 | 14:50 |
damiandabrowski[m] | I have already spent several hours on troubleshooting and proposed a few possible solutions, but that's probably all I can do. | 14:50 |
amotoki | regarding the vif-plugged event bug, I can look into it in detail, but more eyes would be appreciated, so I left it unassigned. | 14:51 |
lajoskatona | damiandabrowski[m]: I will check your proposals, thanks for working on it | 14:52 |
damiandabrowski[m] | thank You! | 14:52 |
lajoskatona | amotoki: is it written somewhere when we send events like vif-plugged? | 14:53 |
amotoki | lajoskatona: I don't think so.... unfortunately | 14:54 |
ralonsoh | no and this is something I said I was going to write | 14:54 |
ralonsoh | (in the PTG) but I didn't have time yet | 14:54 |
lajoskatona | perhaps we can start with syncing it with nova and writing down, and check each backend | 14:54 |
amotoki | +1 | 14:55 |
lajoskatona | ralonsoh: ok, thanks | 14:55 |
frickler | did anyone look at the discussion I had with mdbooth yesterday? tldr: trunk creation/deletion races with unplugging the baseport from a server | 14:57 |
frickler | no bug reported yet, I was waiting to ask mdbooth whether they would like to do that | 14:57 |
lajoskatona | frickler: it's a good idea to file a bug if they're having an issue | 14:58 |
lajoskatona | Ok, the CI meeting soon will start, so we have to finish now | 14:58 |
frickler | maybe it was known already or expected somehow, that's why I'm asking | 14:58 |
frickler | final question: anyone looking at the OVN/n-d-r issue yet? | 14:59 |
lajoskatona | #endmeeting | 15:00 |
opendevmeet | Meeting ended Tue Dec 14 15:00:10 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:00 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/networking/2021/networking.2021-12-14-14.00.html | 15:00 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/networking/2021/networking.2021-12-14-14.00.txt | 15:00 |
opendevmeet | Log: https://meetings.opendev.org/meetings/networking/2021/networking.2021-12-14-14.00.log.html | 15:00 |
slaweq | #startmeeting neutron_ci | 15:00 |
opendevmeet | Meeting started Tue Dec 14 15:00:21 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'neutron_ci' | 15:00 |
lajoskatona | sorry frickler :-) | 15:00 |
ralonsoh | hi again | 15:00 |
slaweq | welcome on another meeting :) | 15:00 |
lajoskatona | frickler: I'll try to find some time to check the trunk discussion | 15:00 |
lajoskatona | Hi | 15:00 |
slaweq | Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate | 15:01 |
obondarev | hi | 15:01 |
ykarel | hi | 15:01 |
mlavalle | o/o/ | 15:01 |
slaweq | I think we can start now | 15:01 |
bcafarel | hi again | 15:01 |
slaweq | #topic Actions from previous meetings | 15:01 |
slaweq | slaweq to check missing grafana logs for periodic jobs | 15:02 |
slaweq | actually ykarel already proposed fix https://review.opendev.org/c/openstack/project-config/+/820980 | 15:02 |
slaweq | thx ykarel | 15:02 |
slaweq | next one | 15:02 |
slaweq | ralonsoh to check https://bugs.launchpad.net/neutron/+bug/1953481 | 15:02 |
ralonsoh | #link https://review.opendev.org/c/openstack/neutron/+/820911 | 15:03 |
* dulek doesn't want to interrupt, but the CI is broken at the moment due to pecan bump in upper-constraints.txt. Ignore if you're already aware. | 15:03 | |
ralonsoh | actually this is the meeting to tell this update | 15:04 |
slaweq | thx ralonsoh for the fix | 15:04 |
mlavalle | thanks dulek | 15:04 |
lajoskatona | dulek: thanks, recheck is useless now? | 15:04 |
slaweq | and thx dulek for the info - it's very good timing for that :) | 15:04 |
dulek | I think it's useless. | 15:04 |
dulek | I mean recheck. | 15:04 |
lajoskatona | ok, thanks | 15:04 |
ralonsoh | good to know | 15:04 |
mlavalle | yeah, chances are it's useless :-) | 15:04 |
dulek | The conflict is caused by: | 15:05 |
dulek | The user requested pecan>=1.3.2 | 15:05 |
dulek | The user requested (constraint) pecan===1.4.1 | 15:05 |
dulek | This is the error. And yes, I can't really understand it either, because I don't think this conflicts. | 15:05 |
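For what it's worth, the two requirements quoted above are mutually satisfiable, so this is not a genuine version conflict:

```text
pecan>=1.3.2     # project requirement
pecan===1.4.1    # upper constraint; 1.4.1 satisfies >=1.3.2
```

A resolver failure like this usually points at stale or partial package metadata on the mirror rather than at the constraints themselves.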
ykarel | dulek, log link? just to check if it failed recently, as that issue was fixed for some provider some time back | 15:05 |
frickler | ah, that was likely an issue with pypi, see discussion in #opendev | 15:05 |
lajoskatona | mlavalle: https://youtu.be/SJUhlRoBL8M | 15:05 |
dulek | https://0d4538c7b62deb4c15ac-e6353f24b162df9587fa55d858fbfadc.ssl.cf5.rackcdn.com/819502/3/check/openstack-tox-pep8/a4f6148/job-output.txt | 15:05 |
frickler | hopefully solved by refreshing CDN | 15:05 |
ykarel | approx 2 hour back | 15:06 |
slaweq | ++ thx ykarel and frickler | 15:06 |
ykarel | ^ logs are older than that | 15:06 |
slaweq | ok, lets move on with the meeting | 15:07 |
slaweq | #topic Stable branches | 15:07 |
slaweq | bcafarel any updates? | 15:07 |
slaweq | I think it's not in bad shape recently | 15:08 |
bcafarel | indeed I think we are good overall | 15:08 |
bcafarel | I was off yesterday so still checking a few failed runs in train, but nothing that looks 100% reproducible :) | 15:08 |
slaweq | great, thx | 15:09 |
slaweq | #topic Stadium projects | 15:09 |
slaweq | anything in stadium what we should discuss today? | 15:09 |
slaweq | lajoskatona | 15:09 |
lajoskatona | everything is ok as far as I know | 15:09 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Remove the expired reservations in a separate DB transaction https://review.opendev.org/c/openstack/neutron/+/821592 | 15:09 |
slaweq | that's great, thx :) | 15:09 |
slaweq | #topic Grafana | 15:09 |
lajoskatona | perhaps some advertisement: if you have time please check tap-as-a-service reviews: https://review.opendev.org/q/project:openstack%252Ftap-as-a-service+status:open | 15:09 |
slaweq | lajoskatona sure | 15:10 |
slaweq | lajoskatona all of them? or just those which don't have merge conflicts now? | 15:10 |
mlavalle | several of them have merge conflicts | 15:10 |
lajoskatona | slaweq: sorry, the recent ones, for some reason that is not working for me now | 15:12 |
slaweq | k | 15:12 |
lajoskatona | https://review.opendev.org/q/project:openstack/tap-as-a-service+status:open+-age:8week | 15:13 |
lajoskatona | sorry for spamming the meeting.... | 15:13 |
slaweq | lajoskatona I will review them tomorrow | 15:14 |
lajoskatona | slaweq: thanks | 15:14 |
slaweq | ok, lets get back to grafana | 15:15 |
slaweq | http://grafana.openstack.org/dashboard/db/neutron-failure-rate | 15:15 |
slaweq | overall things don't look bad IMO | 15:16 |
slaweq | and I also saw the same while looking at results of the failed jobs from last week | 15:16 |
opendevreview | Maor Blaustein proposed openstack/neutron-tempest-plugin master: Add test_create_and_update_port_with_dns_name https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821079 | 15:17 |
slaweq | today I also ran my script to check the number of rechecks recently | 15:17 |
slaweq | and we are improving I think | 15:17 |
slaweq | week 2021-47 was 3.38 rechecks on average | 15:17 |
slaweq | week 2021-48 - 3.55 | 15:17 |
slaweq | week 2021-49 - 2.91 | 15:18 |
mlavalle | nice, trending in the right direction! | 15:18 |
ralonsoh | much better than before | 15:18 |
opendevreview | Bernard Cafarelli proposed openstack/neutron stable/train: DNM: test tempest train-last tag https://review.opendev.org/c/openstack/neutron/+/816597 | 15:18 |
slaweq | it's at least not 13 rechecks on average to get a patch merged, as it was a few weeks ago | 15:18 |
lajoskatona | +1 | 15:18 |
mlavalle | it's a big improvement | 15:18 |
slaweq | and one more thing | 15:18 |
obondarev | cool! | 15:18 |
slaweq | I spent some time last week, going through the list of "gate-failure" bugs | 15:19 |
ykarel | cool | 15:19 |
slaweq | #link https://tinyurl.com/2p9x6yr2 | 15:19 |
slaweq | I closed about 40 of them | 15:19 |
slaweq | but still we have many opened | 15:19 |
slaweq | so if You would have some time, please check that list | 15:19 |
ralonsoh | sure | 15:19 |
slaweq | maybe something is already fixed and we can close it | 15:19 |
slaweq | or maybe You want to work on some of those issues :) | 15:20 |
mlavalle | so the homework is to close as many as possible? | 15:20 |
mlavalle | ohh, now I understand | 15:20 |
slaweq | also, please use that list to check if You didn't hit some already known issue before rechecking patch | 15:20 |
lajoskatona | slaweq: thanks, and thanks for closing so many of old bugs | 15:20 |
slaweq | I want to remind You that we should not only recheck patches but try to identify reason of failure and open bug or link to the existing one in the recheck comment :) | 15:21 |
slaweq | that's what we agreed last week on the ci meeting | 15:21 |
slaweq | anything else You want to talk regarding Grafana or related stuff? | 15:22 |
slaweq | if not, I think we can move on | 15:22 |
lajoskatona | nothing from me | 15:22 |
ykarel | just that i pushed https://review.opendev.org/c/openstack/project-config/+/821706 today | 15:23 |
ykarel | to fix dashboard with recent changes | 15:23 |
lajoskatona | ykarel: thanks | 15:24 |
slaweq | thx | 15:25 |
slaweq | #topic fullstack/functional | 15:25 |
frickler | I merged that already, feel free to ping me for future updates | 15:25 |
ykarel | Thanks frickler | 15:26 |
slaweq | I opened one new bug related to functional job's failures: https://bugs.launchpad.net/neutron/+bug/1954751 | 15:26 |
slaweq | I noticed it I think twice this week | 15:26 |
slaweq | so probably we will need to investigate that | 15:26 |
slaweq | I will add some extra logs there to know better what's going on in that test and why it's failing | 15:27 |
slaweq | as for now it's not easy to tell exactly what was the problem there | 15:27 |
slaweq | #action slaweq to add some extra logs to the test, related to https://bugs.launchpad.net/neutron/+bug/1954751 to help further debugging | 15:27 |
slaweq | #topic Tempest/Scenario | 15:28 |
slaweq | regarding scenario jobs, the only issue which I noticed that is really impacting us now often are jobs' timeouts https://bugs.launchpad.net/neutron/+bug/1953479 | 15:29 |
slaweq | I saw at least 2 or 3 times such timeouts this week again | 15:29 |
slaweq | ykarel | 15:29 |
slaweq | proposed https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821067 | 15:29 |
slaweq | to use nested virt in those jobs | 15:30 |
slaweq | and wanted to discuss what do You think about it | 15:30 |
slaweq | IMO that sounds worth trying (again) | 15:30 |
ralonsoh | for sure yes | 15:30 |
slaweq | maybe this time it will work for us better | 15:30 |
ralonsoh | do we have figures? to compare performance | 15:30 |
lajoskatona | my concern/question is regarding the availability of these nodes | 15:31 |
slaweq | the only issue with that, IIUC, is that there is a limited pool of nodes/providers which provide that possibility | 15:31 |
ykarel | yes initial results are good with that but worth trying it in all patches | 15:31 |
slaweq | so jobs may be executed longer | 15:31 |
lajoskatona | if it will be a bottleneck, as jobs are waiting for nodes/resources | 15:31 |
ykarel | those scenario jobs are now taking approx 1 hour less to finish | 15:31 |
ralonsoh | and are less prone to CI failures, right? | 15:31 |
ykarel | yes i have not seen any ssh timeout failure in those tests yet | 15:32 |
lajoskatona | sounds good | 15:32 |
ralonsoh | hmmm I know the availability is an issue, but sounds promising | 15:32 |
obondarev | +1 let's try | 15:32 |
ralonsoh | +1 right | 15:32 |
mlavalle | +1 | 15:32 |
lajoskatona | +1, we can revisit if we see bottleneck :-) | 15:32 |
slaweq | yes, that's also my opinion about it - better wait a bit longer than recheck :) | 15:32 |
ralonsoh | right! | 15:33 |
ykarel | yes availability is a concern, but if we have less queue time then we should be good | 15:33 |
mlavalle | yes, the overall effect might be a net reduction in wait time and increase in throughput | 15:33 |
ykarel | and also another issue is when all the providers are down that provide those nodes | 15:33 |
ykarel | we will have issue | 15:33 |
ykarel | but if that happens rarely we can switch/switch-out from those nodes | 15:33 |
ralonsoh | another fire in OVH? I hope no | 15:34 |
slaweq | LOL | 15:34 |
mlavalle | LOL | 15:34 |
ykarel | will send a separate patch to easily allow switching/reverting from those nodes | 15:35 |
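ykarel's patch making the nodeset easy to switch is not shown in the log; a hypothetical sketch of how such a switchable setup can look in Zuul job configuration (job and nodeset names here are illustrative, not taken from the actual patch):

```yaml
# Illustrative only: defining the nested-virt nodeset once and
# referencing it from the job means reverting to standard nodes
# is a single-line change in the job definition.
- nodeset:
    name: neutron-nested-virt-ubuntu-focal
    nodes:
      - name: controller
        label: nested-virt-ubuntu-focal

- job:
    name: neutron-tempest-plugin-scenario-openvswitch
    parent: neutron-tempest-plugin-scenario
    nodeset: neutron-nested-virt-ubuntu-focal
```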
slaweq | ykarel thx a lot | 15:36 |
slaweq | ok, lets move on | 15:36 |
slaweq | #topic Periodic | 15:36 |
slaweq | I see that since yesterday neutron-ovn-tempest-ovs-master-fedora seems to be broken (again) | 15:36 |
slaweq | does anyone want to check it? | 15:36 |
slaweq | if not, I can do it | 15:36 |
mlavalle | I'll help | 15:37 |
slaweq | thx mlavalle | 15:37 |
slaweq | #action mlavalle to check failing neutron-ovn-tempest-ovs-master-fedora job | 15:37 |
slaweq | so that's all what I had for today | 15:38 |
* mlavalle has missed seeing his name associated with an action item in the CI meeting :-) | 15:38 |
slaweq | there is still that topic about ci improvements from last week | 15:38 |
slaweq | but TBH I didn't prepare anything for today as I thought that maybe it will be better to talk about it next week on a video call | 15:38 |
slaweq | wdyt? | 15:38 |
slaweq | should we continue discussion today or next week on video? | 15:38 |
mlavalle | sounds good | 15:38 |
ralonsoh | perfect, I like video calls for CI meetings | 15:38 |
mlavalle | what video service do we use? | 15:39 |
lajoskatona | mlavalle: jitsi | 15:39 |
mlavalle | cool! | 15:39 |
slaweq | ok, so lets continue that topic next week then | 15:40 |
lajoskatona | I like the video personally, though it's extra work for you to keep the logs written | 15:40 |
slaweq | if there are no other topics for today, I can give You back about 20 minutes | 15:41 |
mlavalle | +1 | 15:41 |
ralonsoh | see you tomorrow! | 15:41 |
slaweq | lajoskatona it's not big thing to do TBH, and I like video meetings too :) | 15:41 |
ykarel | +1 | 15:41 |
slaweq | thx for attending the meeting today | 15:41 |
slaweq | have a great day and see You online :) | 15:41 |
mlavalle | o/ | 15:41 |
slaweq | #endmeeting | 15:41 |
opendevmeet | Meeting ended Tue Dec 14 15:41:59 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:41 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/neutron_ci/2021/neutron_ci.2021-12-14-15.00.html | 15:41 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/neutron_ci/2021/neutron_ci.2021-12-14-15.00.txt | 15:41 |
opendevmeet | Log: https://meetings.opendev.org/meetings/neutron_ci/2021/neutron_ci.2021-12-14-15.00.log.html | 15:41 |
lajoskatona | o/ | 15:41 |
obondarev | o/ | 15:42 |
bluex | if you have some time please take a look at https://bugs.launchpad.net/neutron/+bug/1954785 | 15:43 |
bluex | Any feedback on provided possible solutions would be great | 15:43 |
ralonsoh | bluex, can you provide a way to reproduce it? | 15:44 |
ralonsoh | some steps, commands, ... | 15:44 |
mlavalle | +1, yeah, that always helps | 15:47 |
bluex | I'll try to figure that out, for now I'm not sure | 15:48 |
jhartkopf | Hey, I have a question regarding floating IP quota. Is it possible to set specific quota per FIP pool? For example, if I have two FIP pools "public" and "internal", can I set a quota of 1 for public and a quota of 5 for internal? | 15:54 |
ralonsoh | jhartkopf, you can specify a quota per (project, resource) | 15:55 |
opendevreview | Slawek Kaplonski proposed openstack/neutron master: Allocate IPs in bulk requests in separate transactions https://review.opendev.org/c/openstack/neutron/+/821727 | 15:55 |
ralonsoh | slaweq, ^^ checking this! | 15:55 |
slaweq | ralonsoh ^^ please check when You will have some time | 15:55 |
ralonsoh | hahaha | 15:55 |
slaweq | in my tests improvement was significant | 15:55 |
ralonsoh | perfect! | 15:55 |
slaweq | we can talk about it tomorrow if You want | 15:56 |
ralonsoh | no way, very cool figures! | 15:56 |
slaweq | dulek also, maybe You want to apply that patch in Your env and try https://review.opendev.org/c/openstack/neutron/+/821727 | 15:56 |
slaweq | it's related to that bug with timeout on haproxy which we talked about few weeks ago | 15:56 |
slaweq | ralonsoh yeah, but a lot of that improvement came when I moved the allocation for each IP to a separate transaction really | 15:57 |
slaweq | when allocation for e.g. 10 ports was in one transaction, it was about 1:45 on average | 15:57 |
dulek | slaweq: Oh man, Christmas came early this year! | 15:58 |
slaweq | dulek please first try :) | 15:59 |
slaweq | in my simple reproducer (without kuryr) the improvement is pretty good, but I would like to see feedback from You also | 15:59 |
dulek | slaweq: Sure, I should be able to try it. | 16:00 |
slaweq | thx a lot | 16:00 |
dulek | BTW - here's our approach to the problem: https://review.opendev.org/c/openstack/kuryr-kubernetes/+/821626 | 16:00 |
dulek | We'll get it in anyway - we got to support many OpenStacks that are there in the wild. | 16:01 |
dulek | slaweq: So the idea behind your patch is that with each transaction being separate there is less window for conflicts? | 16:01 |
slaweq | dulek the idea is to do IP allocation for ports in separate transactions - so if one of them fails, only that one will be retried | 16:02 |
slaweq | and when IPs for all ports are allocated, do everything else in a single transaction, as it was so far | 16:03 |
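The retry-per-allocation idea slaweq describes can be sketched roughly as follows; this is an illustration of the pattern only, not the actual Neutron code (all names here are hypothetical, and the real implementation wraps each allocation in a short DB transaction):

```python
class Conflict(Exception):
    """Stand-in for a DB constraint violation on a busy subnet."""

def allocate_all(ports, allocate, max_retries=5):
    """Allocate an IP for each port, each in its own 'transaction':
    a conflict retries only that single allocation, instead of
    rolling back and replaying the whole bulk request."""
    result = {}
    for port in ports:
        for _ in range(max_retries):
            try:
                result[port] = allocate(port)
                break
            except Conflict:
                continue  # retry just this port's allocation
        else:
            raise RuntimeError(f"could not allocate IP for {port}")
    # Everything else (port rows etc.) can then be committed in one
    # final transaction, as before.
    return result
```

With one big transaction, a single conflicting allocation forces the whole bulk request to be retried; here only the conflicting port pays the retry cost.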
jhartkopf | ralonsoh: So a resource would be one FIP? Or can this also be a whole FIP pool? | 16:05 |
ralonsoh | jhartkopf, no, a resource is FIP, port, security group, SG rules, network, router (I'm missing something) | 16:08 |
ralonsoh | that means you can set the quota for FIPs in a project, or the quota for networks in a project | 16:08 |
ralonsoh | $openstack quota show (that will list the quotas per project) | 16:08 |
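For reference, the per-project quota operations ralonsoh mentions look roughly like this; treat the exact flag spellings as a sketch, since they may vary between openstackclient versions:

```shell
# Show all quotas for a project
openstack quota show project_1

# Set different floating IP quotas for two projects independently
openstack quota set --floating-ips 100 project_1
openstack quota set --floating-ips 200 project_2
```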
jhartkopf | ralonsoh: Ok, so it doesn't seem to be possible to do this for whole FIP pools. Do you know if this feature has been discussed before? | 16:14 |
ralonsoh | jhartkopf, not that I'm aware | 16:14 |
ralonsoh | and this should be compatible not only with Neutron but with all projects using quota | 16:15 |
jhartkopf | ralonsoh: Do you think this would be a welcome feature to invest some time to create a spec for? | 16:22 |
jhartkopf | Or to further discuss this in general | 16:22 |
ralonsoh | jhartkopf, to be honest (but this is my particular opinion), there is no need for this feature right now (I know, I know, you do). And as I commented, that should be aligned with other projects | 16:23 |
ralonsoh | so this is more complex than you expect | 16:23 |
ralonsoh | can't you use the project quota assigned to FIP? | 16:24 |
ralonsoh | in any case, if you want to discuss it, you can raise a topic in neutron driver's meeting | 16:25 |
ralonsoh | on fridays | 16:25 |
ralonsoh | one sec | 16:25 |
ralonsoh | https://meetings.opendev.org/#Neutron_drivers_Meeting | 16:25 |
ralonsoh | you can amend the agenda and add this topic for discussion | 16:25 |
ralonsoh | you can initially propose this RFE in a bug, with a description | 16:26 |
ralonsoh | before spending too much time in a spec | 16:26 |
jhartkopf | ralonsoh: Ok, thank you. I didn't expect this to be simple though. | 16:27 |
jhartkopf | What do you mean with the project quota assigned to FIP? | 16:28 |
ralonsoh | you can assign a quota for FIP for a specific project | 16:28 |
ralonsoh | project_1, FIP: 100, project_2, FIP: 200 | 16:28 |
ralonsoh | you can have different users assigned to different projects with different quotas | 16:29 |
ralonsoh | if that works for you | 16:29 |
jhartkopf | Ah I see | 16:29 |
jhartkopf | I think the problem is that we'd like to set different quotas for different FIP pools | 16:30 |
jhartkopf | So we could allow usage of 1 publicly routable IP but 5 IPs that are routable internally only | 16:31 |
ralonsoh | jhartkopf, where are you using FIP pools? | 16:33 |
ralonsoh | this seems to be a resource of nova networks | 16:33 |
ralonsoh | (no, not nova networks but a compute API resource) | 16:34 |
ralonsoh | jhartkopf, https://github.com/openstack/nova/blob/master/api-ref/source/os-floating-ip-pools.inc | 16:35 |
ralonsoh | this is deprecated | 16:35 |
jhartkopf | ralonsoh: Yes, we are using them with Nova instances. | 16:40 |
jhartkopf | But I cannot imagine that the feature itself is deprecated | 16:41 |
opendevreview | Maor Blaustein proposed openstack/neutron-tempest-plugin master: Add test_create_and_update_port_with_dns_name https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821079 | 16:44 |
ralonsoh | jhartkopf, what version are you running? | 16:44 |
jhartkopf | ralonsoh: Ussuri | 16:45 |
opendevreview | Merged openstack/neutron-tempest-plugin master: Switch scenario jobs to nested-virt nodes https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821067 | 17:15 |
jhartkopf | ralonsoh: Could you give me a hint where to further discuss this feature? Would this be part of Neutron? Nova seems to use those APIs (thus deprecation of the proxy API). | 17:23 |
ralonsoh | jhartkopf, this is in Neutron now but I can't enable it. In any case, a FIP pool (according to the description) is just the list of FIPs per subnet | 17:25 |
jhartkopf | ralonsoh: Ok, will have to look further into that. Thanks a lot! | 17:30 |
opendevreview | Maor Blaustein proposed openstack/neutron-tempest-plugin master: Add test_create_and_update_port_with_dns_name https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821079 | 17:36 |
opendevreview | Merged openstack/neutron stable/train: Cleanup router for which processing added router failed https://review.opendev.org/c/openstack/neutron/+/817677 | 18:29 |
masterpe[m] | Hi, If I add a static route in a neutron router where the destination already exists but as a connected interface/route neutron does a replace of that route. And so it removes the connected route | 20:06 |
masterpe[m] | see https://bugs.launchpad.net/neutron/+bug/1954777 | 20:06 |
opendevreview | Adam Harwell proposed openstack/neutron master: Tweak port metadata test to be more reliable https://review.opendev.org/c/openstack/neutron/+/821779 | 22:36 |
opendevreview | Adam Harwell proposed openstack/neutron master: Tweak port metadata test to be more reliable https://review.opendev.org/c/openstack/neutron/+/821779 | 22:59 |