15:30:17 #startmeeting neutron_ci
15:30:17 Meeting started Tue Feb 8 15:30:17 2022 UTC and is due to finish in 60 minutes. The chair is lajoskatona. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:30:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:30:17 The meeting name has been set to 'neutron_ci'
15:30:22 o/
15:30:34 hi (just in time)
15:31:56 o/ (almost just in time)
15:32:43 ok, let's start
15:33:02 #topic Actions from previous meetings
15:33:15 lajoskatona to check if we can use neutron from master in the networking-bgpvpn master branch jobs
15:33:39 It is merged, as I remember
15:34:02 I think I'll skip slaweq's action points
15:34:06 ralonsoh to check ssh failures and arp entry with 00:00:00:00:00:00
15:34:17 sorry, I started but never finished that
15:34:18 sorry
15:35:29 ralonsoh: no problem, isn't that related to https://bugs.launchpad.net/neutron/+bug/1959564 ?
15:35:51 could be, but I'm not sure
15:35:55 ok
15:36:03 in any case, I didn't see another error like that
15:36:09 mlavalle to investigate https://bugs.launchpad.net/neutron/+bug/1945283
15:37:12 mlavalle didn't join, so move on
15:37:35 #topic Stable branches
15:38:10 bcafarel: do you have any news? except for the pike/queens problem I haven't seen anything serious
15:38:43 there was a lot of noise last week with the tempest issues (solved) and devstack with py36
15:39:01 ralonsoh++ that was the one I was looking for
15:39:44 but those are merged (except https://review.opendev.org/q/I0391dd24224f8656a09ddb002e7dae8783ba37a4 which is under review)
15:40:12 on my list to review
15:40:49 +1
15:41:06 (btw, good patch!)
15:41:15 oh yes
15:41:35 I just checked the dashboard links in the etherpad (#link https://etherpad.opendev.org/p/neutron-ci-meetings#L30) and they seem not to be working
15:41:53 I will check later....
15:42:09 #topic Stadium projects
15:42:50 the only thing was to fix imports after the n-lib release with the moved ovs constants
15:44:11 #topic Grafana
15:44:21 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:44:52 (not found?)
15:45:00 hmmm, this dashboard is also missing
15:45:59 https://grafana.opendev.org/?orgId=1&search=open&query=neutron-failure-rate
15:46:07 there is nothing with this name...
15:46:56 I will ask on infra if we have to change something with our dashboards
15:47:11 https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1 works
15:47:28 anyway, it seems it's not only the wallaby and victoria ones that are missing
15:48:02 bcafarel: thanks
15:48:48 was there any change recently?
15:49:37 ok, back to grafana
15:50:00 probably an openstack/opendev change, it would be good to check with infra (but at least the dashboards are still here)
15:50:18 +1
15:50:44 We had a lot of tempest failures last week, but that was mostly due to the fix around dead vlans, I suppose (https://review.opendev.org/q/I0391dd24224f8656a09ddb002e7dae8783ba37a4 )
15:50:54 yes, I think so
15:51:13 but we still have some high failure rates in ovs ha dvr
15:51:30 33.3% failures
15:51:57 and the periodic fullstack job is 100% failing
15:52:46 yes, and functional is around ~25% for gate
15:53:55 I had this one for the timeouts: https://review.opendev.org/c/openstack/neutron/+/827488 , but it fails with a timeout :-) And I have had no time since to go back to it....
15:55:10 well, this is a timeout inside a test
15:55:16 I still think the patch is correct
15:55:28 (we should fix this failing test, for sure)
15:55:55 ralonsoh: exactly, I will go back to it this week, I hope
15:56:25 by zuul stats: https://zuul.openstack.org/builds?job_name=neutron-fullstack&pipeline=periodic&skip=0
15:56:45 wallaby fullstack is failing constantly
15:57:03 right
15:57:36 each time a different test
15:58:12 but test_east_west_traffic seems to be frequent
15:58:46 I'll open a LP bug for this one
15:59:14 ralonsoh: thanks
16:00:35 time is up, so it is time to close
16:01:02 #endmeeting