opendevreview | Merged openstack/neutron stable/wallaby: Delete SG log entries when SG is deleted https://review.opendev.org/c/openstack/neutron/+/810870 | 00:10 |
opendevreview | Federico Ressi proposed openstack/neutron master: Change tobiko CI job in the periodic queue https://review.opendev.org/c/openstack/neutron/+/813977 | 07:35 |
opendevreview | Manu B proposed openstack/os-ken master: Msgpack version upgrade to 1.0.0 https://review.opendev.org/c/openstack/os-ken/+/815784 | 08:02 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Replace "tenant_id" with "project_id" in OVO base https://review.opendev.org/c/openstack/neutron/+/815814 | 09:41 |
opendevreview | Manu B proposed openstack/os-ken master: Msgpack version upgrade to 1.0.0 https://review.opendev.org/c/openstack/os-ken/+/815784 | 09:41 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Replace "tenant_id" with "project_id" in metering service https://review.opendev.org/c/openstack/neutron/+/814807 | 09:43 |
tobias-urdin | ralonsoh: can i ask for a +w on https://review.opendev.org/c/openstack/neutron/+/815310 thanks! :) | 09:55 |
ralonsoh | tobias-urdin, done | 09:57 |
em__ | Is there any way to set the MTU of ovs-system? | 10:01 |
em__ | AFAICS it needs to be set on all nodes | 10:01 |
ralonsoh | em__ you can set the MTU of a network | 10:06 |
ralonsoh | all ports attached to this network will inherit it | 10:06 |
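The advice above can be sketched as a pair of CLI commands. This is a minimal, hedged sketch: the network name "provider-wan" is borrowed from later in this discussion and is an assumption here, not something the log confirms at this point.

```shell
# Sketch of ralonsoh's suggestion: set the MTU on the neutron network
# (ports attached to it inherit the value), not on any OVS interface.
# "provider-wan" is an assumed network name for illustration.
openstack network set --mtu 1400 provider-wan

# Confirm the value neutron now reports for the network.
openstack network show provider-wan -c mtu
```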
opendevreview | Rodolfo Alonso proposed openstack/neutron stable/xena: Allow to set the OpenFlow protocol when needed https://review.opendev.org/c/openstack/neutron/+/812162 | 10:08 |
em__ | ralonsoh, are you referring to global_physnet_mtu? | 10:13 |
em__ | or are you referring to 'ovs-vsctl set Interface br-int mtu_request=1400'? | 10:13 |
ralonsoh | em__, none of them | 10:20 |
ralonsoh | the global_physnet_mtu is for physical networks, this is the max limit | 10:20 |
ralonsoh | from the doc: | 10:21 |
ralonsoh | 'MTU of the underlying physical network. Neutron uses ' | 10:21 |
ralonsoh | 'this value to calculate MTU for all virtual network ' | 10:21 |
ralonsoh | 'components. For flat and VLAN networks, neutron uses ' | 10:21 |
ralonsoh | 'this value without modification. For overlay networks ' | 10:21 |
ralonsoh | 'such as VXLAN, neutron automatically subtracts the ' | 10:21 |
ralonsoh | 'overlay protocol overhead from this value. Defaults ' | 10:21 |
ralonsoh | 'to 1500, the standard value for Ethernet.' | 10:21 |
ralonsoh | you should not set the MTU of any interface of OVS manually | 10:21 |
ralonsoh | and br-int port (and interface) is an internal port | 10:22 |
ralonsoh | traffic does not use this port | 10:22 |
ralonsoh | so the MTU of this port is irrelevant | 10:22 |
em__ | ralonsoh, it was extremely relevant for us, maybe i could explain and you could correct what maybe was a wrong assumption? | 10:23 |
ralonsoh | em__, again, the MTU of br-int port is NOT relevant | 10:23 |
ralonsoh | traffic does not use this port | 10:23 |
ralonsoh | by setting the MTU of br-int port you are not limiting the MTU of br-int bridge | 10:24 |
em__ | ralonsoh, I'm not doubting you, just trying to explain so you can maybe point out where we had an issue | 10:24 |
ralonsoh | the MTU setting is per interface | 10:24 |
ralonsoh | you didn't explain what the issue is | 10:25 |
*** jp is now known as Guest4265 | 10:25 | |
em__ | ralonsoh, our management network is a vswitch of our provider. Since it is itself encapsulated in a VXLAN and we then use a VLAN inside, we have an MTU of 1400 on this interface. It is critical to use this MTU when we communicate on this vswitch | 10:25 |
em__ | So all our nodes (controller/computes) share this one vswitch, which is (AFAIU) not controlled by OpenStack at all - it exists and works pre-deployment. It is a usual linux interface with a vlan tag, so something like enp9s0.4002 | 10:26 |
ralonsoh | so for your OpenStack deployment, the physical MTU must be 1400 | 10:27 |
ralonsoh | set this value then | 10:27 |
em__ | we did that (in terms of global_physnet_mtu: 1400 in the neutron.conf of our ovn-controller) | 10:27 |
em__ | this seems to work just fine. We then have an additional vswitch, also provider, which is our subnet for the floating ips. Again MTU 1400 due to VXLAN+VLAN of the provider | 10:28 |
ralonsoh | and what's the problem then? | 10:28 |
em__ | this vswitch we only attached to the controller (since we do not want to use DVR). It seems like we ran into trouble with that and how the MTU is handled | 10:28 |
em__ | when we create the network in openstack we use an MTU of 1400 for it: openstack network create --external --share --mtu 1400 --provider-physical-network physnet1 --provider-network-type flat provider-wan | 10:29 |
em__ | now we also create a geneve (self-service) network (with no particular MTU setting) put an instance into that network, create a router ... assign a floating ip and all this. | 10:30 |
em__ | Now the problematic part: ICMP works (without -s, i.e. the default size), but anything other than ICMP does not. We can neither wget from the instance nor connect into the instance via SSH; the connection seems to stall | 10:31 |
em__ | that is why we suspected an MTU issue. | 10:31 |
em__ | also, the controller interface seems to be very slow when we do not set the MTU of br-int (ovs-vsctl set Interface br-int mtu_request=1400) - or at least we see a difference. We did not measure this yet | 10:32 |
em__ | I think that's it. Still not doubting your point, but where do you think we went down the wrong rabbit hole? | 10:32 |
ralonsoh | at this point I don't see where the traffic is being dropped. If I'm not wrong, from what you are saying it is at the GW interface, when connecting to the external network | 10:33 |
ralonsoh | and from there I don't know what type of overlaying network/VLAN config you have | 10:34 |
ralonsoh | check the physical network max MTU | 10:34 |
ralonsoh | then take into account the size of the overlaying network (not applied by OpenStack) | 10:34 |
ralonsoh | and use this MTU | 10:34 |
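The calculation being described can be sketched numerically. The figures are assumptions taken from this discussion: the provider's vswitch already caps the physical MTU at 1400 (its own VXLAN+VLAN overhead is not OpenStack's concern), and neutron then subtracts the Geneve-over-IPv4 overhead, typically 58 bytes in an ML2/OVN deployment, for self-service networks.

```shell
# Back-of-the-envelope MTU budget (values assumed from this discussion).
PHYSNET_MTU=1400          # set as global_physnet_mtu
GENEVE_OVERHEAD=58        # Geneve + UDP + IPv4 headers (assumed ML2/OVN default)
TENANT_MTU=$((PHYSNET_MTU - GENEVE_OVERHEAD))

echo "flat/VLAN network MTU: ${PHYSNET_MTU}"   # used without modification
echo "geneve network MTU:    ${TENANT_MTU}"    # overhead subtracted by neutron
```

Neutron performs this subtraction itself per network type; the sketch only makes the arithmetic visible.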
em__ | let me ask some quick questions so I get your terms right and do not confuse you with wrong answers: does the physical network MTU refer to the MTU used on the vswitches, so e.g. enp9s0.4001(mng)/enp9s0.4002(wan)? | 10:35 |
ralonsoh | in any case, setting the br-int MTU is irrelevant | 10:36 |
ralonsoh | that will do nothing | 10:36 |
em__ | i do not understand yet. Trying to. What is the physical network MTU you refer to? | 10:36 |
ralonsoh | the underlying physical network | 10:37 |
ralonsoh | that is the infrastructure network installed in the premises | 10:37 |
ralonsoh | something openstack cannot control | 10:37 |
ralonsoh | or set | 10:37 |
em__ | in this case, that is the vswitch we get from the provider. we have 2, both with MTU 1400 | 10:38 |
em__ | so setting global_physnet_mtu: 1400 - as you already confirmed, was the right thing. Now you continued with 'now calculate your MTU downwards from that point' - right? | 10:39 |
em__ | since every geneve / vxlan or vlan added on top will add an additional encapsulation header, the internal max MTU will become smaller and smaller; that is what you refer to, right? | 10:39 |
ralonsoh | em__, the mtu will be calculated by the driver type in Neutron | 10:40 |
ralonsoh | depending on the header size | 10:40 |
ralonsoh | you don't need to make any math there | 10:41 |
em__ | I understand. So from your POV, setting global_physnet_mtu:1400 is all we need to do? | 10:42 |
ralonsoh | yes | 10:42 |
em__ | i understand. Do you think setting the MTU when creating the network is correct? | 10:44 |
em__ | (still here) | 10:44 |
ralonsoh | if you are creating an external network, let neutron define this value, according to the external network provided and the network type | 10:45 |
ralonsoh | do not provide the mtu value | 10:46 |
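The advice above amounts to dropping the --mtu flag from the earlier create command and checking what neutron derives. This sketch reuses the names from the command em__ pasted (physnet1, provider-wan); for a flat network, the derived MTU should simply equal global_physnet_mtu.

```shell
# Create the external network WITHOUT --mtu and let neutron derive the value
# from global_physnet_mtu and the network type (names taken from the log).
openstack network create --external --share \
    --provider-physical-network physnet1 \
    --provider-network-type flat provider-wan

# Flat networks inherit global_physnet_mtu unchanged, so this should
# show 1400 in this deployment.
openstack network show provider-wan -c mtu
```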
em__ | understood | 10:46 |
em__ | ok so i guess i wipe the cluster and use global_physnet_mtu:1400 only and see what happens | 10:47 |
em__ | Thank you a lot for sorting things out | 10:47 |
em__ | I'm not sure if this is related, or if it is a neutron question at all (might rather be nova?). When we start any cloud-init based instance (debian-genericcloud or cirros), they seem not to be able to contact the meta-data service https://gist.github.com/EugenMayer/42a0f13ccf5f18076c4e2d84655bda66 - could that be a communication issue based on OVN/neutron, or the way meta-data has been deployed (on which nodes) - or can nobody tell? | 10:51 |
ralonsoh | em__, is the metadata present in this compute node? | 10:52 |
ralonsoh | check the metadata namespace | 10:52 |
em__ | we have a neutron_ovn_metadata_agent on each compute | 10:54 |
ralonsoh | and the metadata namespace? | 10:54 |
ralonsoh | is it there? | 10:54 |
ralonsoh | ovnmeta-<id_of_the_network> | 10:54 |
em__ | on the compute, there is nothing matching ip addr | grep ovnmeta | 10:55 |
ralonsoh | no | 10:55 |
ralonsoh | ip netns | 10:55 |
em__ | ip netns is empty, on all computes and controller | 10:56 |
ralonsoh | so you have a problem, the metadata agent didn't create the namespace | 10:56 |
ralonsoh | check the network | 10:56 |
ralonsoh | how many subnets do you have? | 10:56 |
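The check being asked for can be sketched as follows, assuming a standard ML2/OVN compute node: the metadata agent creates one namespace per network that has instances bound on the node.

```shell
# Does the OVN metadata agent's namespace exist on this compute node?
# Expect one ovnmeta-<neutron-network-uuid> namespace per such network.
ip netns | grep ovnmeta || echo "no ovnmeta namespace found"
```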
em__ | ok. What is the direction of this: does neutron_ovn_metadata_agent start up and try to talk back to another OVS service? (most of those run on our controller; we do not have dedicated gateway or db nodes) | 10:57 |
opendevreview | Merged openstack/neutron stable/victoria: [ovn] Stop monitoring the SB MAC_Binding table to reduce mem footprint https://review.opendev.org/c/openstack/neutron/+/814870 | 10:58 |
ralonsoh | no, ovn metadata agent only talks to neutron server | 10:58 |
em__ | ralonsoh, right now, we have 2 networks, 1 provider-wan with 1 subnet and one self-service, with 1 subnet (28 on the former, 24 on the latter) | 10:58 |
ralonsoh | and does this subnet have DHCP enabled? | 10:58 |
ralonsoh | the self-service one | 10:59 |
em__ | yes it has, but OVN has a speciality with DHCP, doesn't it? | 10:59 |
em__ | yes it has | 10:59 |
em__ | are you referring to neutron_ovn_dhcp_agent? it is off by default, we did not activate it | 10:59 |
ralonsoh | no, ovn metadata agent | 11:00 |
ralonsoh | there is no ovn DHCP agent | 11:00 |
em__ | https://docs.openstack.org/kolla-ansible/latest/reference/networking/neutron.html#ovn-ml2-ovn | 11:00 |
ralonsoh | this is not an OVN dhcp agent, this is a neutron DHCP agent | 11:01 |
ralonsoh | for other backends | 11:01 |
ralonsoh | for example, if you have ironic nodes with ovs | 11:01 |
ralonsoh | but there is no OVN DHCP agents | 11:01 |
em__ | you mean there are none (not yet implemented) - or there are none in our setup (which is the issue)? | 11:02 |
ralonsoh | DHCP is replied by OVN | 11:02 |
em__ | i think i lost you - though trying hard to follow | 11:03 |
ralonsoh | ovn deployments do not need DHCP agents for internal ports (or SRIOV ports) | 11:04 |
ralonsoh | but you can have multiple backends | 11:04 |
ralonsoh | or in this case, Ironic nodes | 11:04 |
ralonsoh | those ports are not bound to OVN | 11:04 |
ralonsoh | that means you need a neutron DHCP agent, spawning dnsmasq, to reply to DHCP requests from those ports | 11:05 |
ralonsoh | this is what "neutron_ovn_dhcp_agent" means | 11:05 |
em__ | so basically, as long as we deploy instances in an OVN-based geneve or similar network, a DHCP agent is not needed. If we then also used, let's say, linux bridges, we would need to set "neutron_ovn_dhcp_agent" in addition? | 11:06 |
ralonsoh | yes | 11:07 |
em__ | Understood - thinking about the above meta-data issue, I would say (and that is what you told me before too) it is unrelated to that | 11:08 |
opendevreview | Merged openstack/neutron stable/ussuri: [ovn] Stop monitoring the SB MAC_Binding table to reduce mem footprint https://review.opendev.org/c/openstack/neutron/+/814869 | 11:11 |
opendevreview | Merged openstack/neutron stable/ussuri: Fix OVN migration workload creation order https://review.opendev.org/c/openstack/neutron/+/815618 | 11:11 |
em__ | not sure what the issue with the meta-data service could be about. Found things like https://bugs.launchpad.net/charm-ovn-chassis/+bug/1907686 | 11:12 |
em__ | when we switched from OVS to OVN deployment (in our lab and on DC, fresh clean deployments via kolla) this issue started to appear | 11:13 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Replace "tenant_id" with "project_id" in OVO base https://review.opendev.org/c/openstack/neutron/+/815814 | 11:41 |
dmitriis | Hi Neutron Core Devs, could this spec https://review.opendev.org/c/openstack/neutron-specs/+/788821 be added to the next Neutron drivers meeting agenda? The RFE (https://bugs.launchpad.net/neutron/+bug/1932154) is in the rfe-approved state so this is about the spec only. | 11:48 |
ralonsoh | dmitriis, the RFE is approved | 12:46 |
ralonsoh | now what you need is to have the spec reviewed | 12:46 |
ralonsoh | the drivers meeting is on fridays | 12:46 |
ralonsoh | https://meetings.opendev.org/#Neutron_drivers_Meeting | 12:46 |
em__ | ralonsoh, started metadata in debug, those are the logs https://gist.github.com/EugenMayer/e2c5796f7c547c224a3aaccad2c35257 .. not sure what is failing. without debug https://gist.github.com/EugenMayer/bbda954d207fc03b1fd9ea0bb6d143cf | 12:51 |
ralonsoh | em__, can't connect to OVN, check that | 12:53 |
em__ | 10.0.0.3 is my controller - what service will i try to connect to? | 12:54 |
ralonsoh | actually no | 12:54 |
ralonsoh | agent can't connect to OVS DB | 12:54 |
em__ | ovs sb? | 12:54 |
ralonsoh | not SB or NB | 12:55 |
ralonsoh | OVS DB | 12:55 |
em__ | let me check that | 12:55 |
ralonsoh | according to those logs, the OVN NB controller is tcp:10.0.0.3:6642 | 12:55 |
em__ | i guess it is neither openvswitch_db, nor ovn_nb_db, nor ovn_sb_db | 12:55 |
ralonsoh | but cannot connect to OVS controller tcp:127.0.0.1:6640 | 12:55 |
ralonsoh | that's what I said | 12:56 |
ralonsoh | this is the local OVS DB | 12:56 |
em__ | ok then this should be it | 12:56 |
em__ | it is a kolla based deployment, where the meta-data agent is separated from the ovn controller | 12:56 |
em__ | 2 different docker containers. Thus if 127.0.0.1 is used, it will not work, for obvious reasons, from the meta-data container | 12:57 |
em__ | i refer to https://gist.github.com/EugenMayer/b6611b9725a7697d0a392c1b3c1a5683 | 12:57 |
em__ | since both run isolated in 2 different containers, 127.0.0.1 cannot be used to connect from meta-data to ovn controller | 12:58 |
ralonsoh | ovn controller is not the ovs service | 12:58 |
ralonsoh | ovsdb-server.service | 12:58 |
ralonsoh | this is the service you need to access | 12:58 |
em__ | hmm, so that one needs to run in the meta-data container too | 13:00 |
ralonsoh | what is the value of "ovsdb_connection"? | 13:01 |
em__ | in the configuration? | 13:01 |
ralonsoh | yes, in this compute | 13:02 |
em__ | ralonsoh, https://gist.github.com/EugenMayer/fda894a3c6bb2bde5b9d1756b234cd79 | 13:03 |
ralonsoh | change it to "tcp:10.0.0.3:6640" | 13:03 |
opendevreview | Oleg Bondarev proposed openstack/neutron master: Add Local IP L2 extension https://review.opendev.org/c/openstack/neutron/+/807116 | 13:05 |
opendevreview | Oleg Bondarev proposed openstack/neutron master: Add Local IP L2 extension flows https://review.opendev.org/c/openstack/neutron/+/815102 | 13:05 |
opendevreview | Slawek Kaplonski proposed openstack/neutron master: Fix expected exception raised when new scope types are enforced https://review.opendev.org/c/openstack/neutron/+/815837 | 13:06 |
em__ | ralonsoh, with | 13:10 |
em__ | cat /etc/kolla/neutron-ovn-metadata-agent/neutron_ovn_metadata_agent.ini | grep ovsdb | 13:10 |
em__ | ovsdb_connection = tcp:10.0.0.3:6640 | 13:10 |
em__ | ovsdb_timeout = 10 | 13:10 |
em__ | the meta-data agent no longer comes up at all | 13:11 |
ralonsoh | check the ovs db connection | 13:11 |
ralonsoh | ovs-vsctl list connection | 13:11 |
ralonsoh | and set this value there | 13:11 |
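The two-sided change being described can be sketched like this, under the assumption (taken from the log) that 10.0.0.3 is an address of the host running the local ovsdb-server and is reachable from the metadata agent's container.

```shell
# 1) Make the local ovsdb-server listen on TCP, not only the unix socket,
#    so a separate container can reach it:
ovs-vsctl set-manager ptcp:6640:10.0.0.3

# 2) Point the agent at the same endpoint -- in
#    /etc/kolla/neutron-ovn-metadata-agent/neutron_ovn_metadata_agent.ini:
#      [ovs]
#      ovsdb_connection = tcp:10.0.0.3:6640
```

Both sides must match: changing only ovsdb_connection (as tried above) fails if ovsdb-server itself is still listening only on 127.0.0.1 or the local socket.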
em__ | https://gist.github.com/EugenMayer/9256af4e6d82dd993838bdb73472cb53 | 13:12 |
ralonsoh | if you don't have access to the ovs db from this container, set an IP different from 127.0.0.1 and change ovsdb_connection and the connection value | 13:12 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Replace "tenant_id" with "project_id" in OVO base https://review.opendev.org/c/openstack/neutron/+/815814 | 13:13 |
em__ | ralonsoh, currently I'm a bit confused; over at kolla, 127.0.0.1 seems to be the expected value, and looking at lsof there seems to be something listening on that port too | 13:14 |
em__ | the entire setup is automatically orchestrated - while I'm currently not sure anybody in kolla runs an OVN+Xena stack | 13:15 |
opendevreview | Slawek Kaplonski proposed openstack/neutron master: Don't enforce scopes in the API policies UT temporary https://review.opendev.org/c/openstack/neutron/+/815838 | 13:15 |
em__ | I'm trying my best to make sense of it, thank you for helping all the way | 13:18 |
opendevreview | Lucas Alvares Gomes proposed openstack/neutron master: [OVN] Update check_for_mcast_flood_reports() to check for mcast_flood https://review.opendev.org/c/openstack/neutron/+/815843 | 13:21 |
opendevreview | Slawek Kaplonski proposed openstack/neutron master: Fix expected exception raised when new scope types are enforced https://review.opendev.org/c/openstack/neutron/+/815837 | 13:22 |
em__ | ralonsoh, I tried the exact same setup with OVS (https://github.com/EugenMayer/openstack-lab/tree/stable/ovs) and it works (meta-data), while the same setup using OVN has these meta-data issues (https://github.com/EugenMayer/openstack-lab/tree/stable/ovn). I would now test OVN+wallaby using the same configuration, since I got a hint in kolla that people run that in production right now. Would that help, and probably creating a bug in | 14:00 |
em__ | the launchpad, especially if it works in Wallaby but not in Xena? | 14:00 |
dmitriis | ralonsoh: ack, I'll attend and propose an on-demand topic just to have some slot to answer possible questions about the spec. | 14:12 |
zigo | yoctozepto: frickler: OpenVSwitch and OVN just got approved in the official Debian Bullseye backports, it will reach your local repository over night ! :) | 14:29 |
*** ricolin_ is now known as ricolin | 14:40 | |
yoctozepto | zigo: lovely! | 14:43 |
yoctozepto | thank you for letting us know | 14:44 |
yoctozepto | zigo: here's our gift to you https://review.opendev.org/c/openstack/governance/+/815851 | 14:44 |
opendevreview | Mamatisa Nurmatov proposed openstack/neutron master: Remove todo's in Y release https://review.opendev.org/c/openstack/neutron/+/815853 | 14:51 |
em__ | ralonsoh, wallaby worked without an issue, so it is Xena related, without changing any other parts. Where should I open a bug report and which parts do you need? | 14:55 |
ralonsoh | em__, in https://launchpad.net/neutron | 14:58 |
ralonsoh | provide agent logs in debug mode, at least the non working one | 14:58 |
ralonsoh | and provide the git hash of the versions used | 14:59 |
em__ | which versions of what? That will possibly be hard for me since I use kolla | 14:59 |
ralonsoh | the version of Neutron | 15:00 |
ralonsoh | kolla deploys the services using a tag, doesn't it? | 15:00 |
em__ | not yet for Xena, they are in RC, so not yet tagged (latest) | 15:01 |
em__ | ralonsoh, should I link the reproducer? I have two vagrant stacks - they only differ in terms of wallaby / xena. that's it | 15:04 |
em__ | https://github.com/EugenMayer/openstack-lab/tree/stable/ovn - ovn xena | 15:04 |
em__ | https://github.com/EugenMayer/openstack-lab/tree/stable/ovn-wallaby - ovn wallaby | 15:04 |
ralonsoh | ok, but more important is to know if the metadata agent creates the network namespace, the interfaces, adds the routes, etc | 15:05 |
ralonsoh | so all that is defined in https://github.com/openstack/neutron/blob/master/neutron/agent/ovn/metadata/agent.py#L400-L516 | 15:06 |
zigo | yoctozepto: \o/ | 15:08 |
zigo | Took 10 years until it happened ... :) | 15:08 |
zigo | FYI, I'm currently building arm64 builds for Linaro, that may be useful too ... | 15:09 |
yoctozepto | zigo: yeah, /me aware, thanks :-) | 15:10 |
zigo | yoctozepto: FYI, I got my jenkins builder up and running, and it's currently building OVS. | 15:11 |
zigo | I won't be building arch: all on it though... | 15:11 |
zigo | Unfortunately, I need the (not in bullseye) openvswitch-source binary to build OVN. | 15:12 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Replace "target_tenant" with "target_project" in RBAC OVOs and models https://review.opendev.org/c/openstack/neutron/+/815855 | 15:12 |
yoctozepto | zigo: I do not understand the last statement ;/ ovn has its own sources and what is "source binary"? | 15:13 |
zigo | yoctozepto: This: https://packages.debian.org/search?keywords=openvswitch-source | 15:13 |
zigo | openvswitch-source_2.15.0+ds1-8_all.deb | 15:14 |
yoctozepto | oh, weird | 15:14 |
zigo | OVN needs the sources of OVS to build ... | 15:15 |
zigo | That's how we provide it. | 15:15 |
zigo | Bullseye doesn't have it, which is why I backported it too. | 15:15 |
yoctozepto | ok, makes sense | 15:15 |
zigo | Otherwise, the Bullseye version of OVS is fine. | 15:15 |
em__ | ralonsoh, tried my best https://bugs.launchpad.net/neutron/+bug/1949097 | 15:18 |
ralonsoh | em__, "- Using Xena+OVS works." | 15:18 |
ralonsoh | ?? | 15:18 |
ralonsoh | ah ok | 15:18 |
ralonsoh | OVS | 15:18 |
ralonsoh | ok then | 15:18 |
em__ | Xena+OVS works | 15:19 |
em__ | Wallaby+OVN works | 15:19 |
em__ | Xena+OVN does not work | 15:19 |
ralonsoh | em__, but again, is metadata agent creating the namespace? | 15:20 |
ralonsoh | if the namespace is created, do we have interfaces inside? | 15:20 |
em__ | Maybe someone from kolla needs to get involved to get you a better insight about the exact version of neutron used- you can pull quay.io/openstack.kolla/ubuntu-source-neutron-metadata-agent:xena and check yourself though | 15:20 |
em__ | ralonsoh, on that topic, I'm not sure. The first time you asked me about the namespace, ip netns was empty. Then I fiddled around with the IP and all that - then I was asked in kolla to run it again, and then there was a namespace | 15:21 |
em__ | so I'm not sure about that. | 15:21 |
ralonsoh | that's mandatory to try to debug this problem | 15:22 |
em__ | The question 'do we have interfaces inside' - I'm not sure what you mean. Inside the docker container? AFAIK not needed since --network host is used | 15:22 |
ralonsoh | this is the first thing to check here | 15:22 |
mlavalle | slaweq: Happy birthday! | 15:22 |
ralonsoh | no, inside the namespace | 15:22 |
ralonsoh | ip netns exec ovnmeta-xxxxxxxx ip a | 15:22 |
ralonsoh | so check step by step what https://github.com/openstack/neutron/blob/master/neutron/agent/ovn/metadata/agent.py#L400-L516 is doing | 15:23 |
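The step-by-step check can be sketched as below. The network name "self-service" and the CLI lookup of its UUID are assumptions for illustration; the expectation that the namespace carries a tap device with 169.254.169.254 follows from the metadata agent's provisioning logic linked above.

```shell
# Resolve the neutron network UUID (assumed network name).
NET_ID=$(openstack network show self-service -f value -c id)

# Inspect what the metadata agent should have provisioned:
ip netns exec "ovnmeta-${NET_ID}" ip addr    # expect a tap device, incl. 169.254.169.254
ip netns exec "ovnmeta-${NET_ID}" ip route   # routes added by the agent
```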
em__ | I have no running stack right now. If you like, write what you need into the ticket, and I'll provide that information tomorrow | 15:24 |
em__ | (as far as I can) | 15:24 |
opendevreview | Rodolfo Alonso proposed openstack/neutron master: Replace "tenant_id" with "project_id" in OVO base https://review.opendev.org/c/openstack/neutron/+/815814 | 15:37 |
opendevreview | Merged openstack/neutron master: Set RPC timeout in PluginReportStateAPI to report_interval https://review.opendev.org/c/openstack/neutron/+/815310 | 15:56 |
em__ | ralonsoh, what is $meta in https://paste.opendev.org/show/810262/ ? | 16:00 |
ralonsoh | the metadata namespace name | 16:00 |
ralonsoh | ovnmeta-xxxx | 16:00 |
em__ | I see | 16:02 |
em__ | ip netns exec $meta tcpdump -vvnni tap40ede83a-61@if8 | 16:02 |
em__ | Cannot open network namespace "tcpdump": No such file or directory | 16:02 |
em__ | do i need to install tcpdump? | 16:02 |
ralonsoh | yes | 16:02 |
em__ | is that how one uses tcpdump in those ovn based networks? that's neat! | 16:03 |
em__ | ralonsoh, is that what you need? https://gist.github.com/EugenMayer/3b7d1fc4a42d7fc911229f38eec891dd | 16:07 |
em__ | i added it to the bug | 16:11 |
em__ | cu tomorrow | 16:12 |
opendevreview | Tobias Urdin proposed openstack/neutron stable/xena: Set RPC timeout in PluginReportStateAPI to report_interval https://review.opendev.org/c/openstack/neutron/+/815879 | 17:58 |
opendevreview | Mamatisa Nurmatov proposed openstack/neutron master: Remove todo's in Y release https://review.opendev.org/c/openstack/neutron/+/815853 | 18:13 |
opendevreview | Ghanshyam proposed openstack/neutron master: DNM: testing tempest test change https://review.opendev.org/c/openstack/neutron/+/815898 | 18:42 |
-opendevstatus- NOTICE: mirror.bhs1.ovh.opendev.org filled its disk around 17:25 UTC. We have corrected this issue around 18:25 UTC and jobs that failed due to this mirror can be rechecked. | 18:44 | |
opendevreview | Merged openstack/neutron stable/xena: Fix OVN migration workload creation order https://review.opendev.org/c/openstack/neutron/+/815621 | 19:29 |
opendevreview | Merged openstack/neutron stable/wallaby: Fix OVN migration workload creation order https://review.opendev.org/c/openstack/neutron/+/815620 | 19:30 |
opendevreview | Merged openstack/neutron stable/victoria: Fix OVN migration workload creation order https://review.opendev.org/c/openstack/neutron/+/815619 | 19:43 |
opendevreview | Merged openstack/neutron master: Networking guide: Add trunk limitation to min bandwidth https://review.opendev.org/c/openstack/neutron/+/815609 | 19:43 |
opendevreview | Merged openstack/neutron stable/victoria: Delete log entries when SG or port is deleted https://review.opendev.org/c/openstack/neutron/+/815299 | 22:32 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!