Friday, 2024-12-27

opendevreview  yatin proposed openstack/neutron master: [DNM] wsgi switch higher workers eventlet  https://review.opendev.org/c/openstack/neutron/+/937367  05:27
opendevreview  yatin proposed openstack/neutron master: Switch to OVS/OVN LTS branches  https://review.opendev.org/c/openstack/neutron/+/938279  07:26
opendevreview  yatin proposed x/whitebox-neutron-tempest-plugin master: Switch jobs to run on ubuntu noble  https://review.opendev.org/c/x/whitebox-neutron-tempest-plugin/+/938281  07:50
opendevreview  yatin proposed x/whitebox-neutron-tempest-plugin master: Switch jobs to run on ubuntu noble  https://review.opendev.org/c/x/whitebox-neutron-tempest-plugin/+/938281  08:06
opendevreview  yatin proposed x/whitebox-neutron-tempest-plugin master: Switch jobs to run on ubuntu noble  https://review.opendev.org/c/x/whitebox-neutron-tempest-plugin/+/938281  09:39
ozzzo_work  One of our busiest clusters was rebuilt on Wallaby a few months ago (kolla-ansible) and deletions are taking a long time for VMs on some hypervisors. When I look at ovs-vswitchd.log I see lots of warnings that apparently refer to a missing VM interface:  19:29
ozzzo_work  2024-12-27T19:06:10.482Z|01793|bridge|WARN|could not open network device qvo0430a2f7-cd (No such device)  19:29
ozzzo_work  For most network devices I see "added" and "deleted" lines in the log. This one has an "added" line but no "deleted".  19:30
ozzzo_work  I tried restarting the neutron_openvswitch_agent and openvswitch_vswitchd containers but that didn't make a difference. How can I get OVS to stop choking on this missing interface?  19:31
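[editor's note] A minimal sketch of the kind of cleanup being asked about here, assuming the stale qvo port from the warning above is still registered on the integration bridge br-int (bridge name is an assumption; the port name is taken from the log line quoted above, and the commands would be run wherever ovs-vsctl is available, e.g. inside the openvswitch_vswitchd container):

    # Confirm the stale entry is still registered on the integration bridge
    ovs-vsctl list-ports br-int | grep qvo0430a2f7-cd

    # Show the Interface record OVS still holds for the vanished device
    ovs-vsctl --columns=name,error list Interface qvo0430a2f7-cd

    # Remove the stale port record so ovs-vswitchd stops retrying the missing device
    ovs-vsctl --if-exists del-port br-int qvo0430a2f7-cd

Restarting the agent and vswitchd containers does not remove the port because the record lives in the OVSDB database, which persists across restarts; deleting the port record is what clears the repeated "could not open network device" warnings.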
rm_work  Thanks ihrachys  20:33
rm_work  I’ll try tracing this… though it’s looking like our problem may be something either in the switches or in our OVS stuff, because packets seem to just be 30-60s delayed, which is likely causing the real issue we’re seeing. Very odd.  20:34
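[editor's note] One way to localize that kind of delay is to capture the same flow at two points in the path and compare timestamps; a sketch follows, with hypothetical names throughout (eth0, tap1234abcd-ef, and 10.0.0.5 are placeholders, not taken from the discussion):

    # Capture the same flow on the physical NIC and on the VM-side tap/qvo port,
    # with absolute timestamps (-tttt) so the two captures can be lined up.
    tcpdump -ni eth0 -tttt 'host 10.0.0.5 and icmp'
    tcpdump -ni tap1234abcd-ef -tttt 'host 10.0.0.5 and icmp'

If both capture points see a packet at roughly the same time, the 30-60s delay is being added upstream in the physical switches; if the timestamps diverge, the delay is inside the host's OVS/bridge path.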
***  dmellado07553937 is now known as dmellado0755393  22:54
