15:00:23 #startmeeting neutron_ci
15:00:23 Meeting started Tue Apr 26 15:00:23 2022 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:23 The meeting name has been set to 'neutron_ci'
15:00:24 hi
15:00:26 o/
15:00:30 hi
15:01:43 Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:01:47 o/
15:02:05 Merged openstack/neutron-vpnaas master: Fix failover with L3 HA https://review.opendev.org/c/openstack/neutron-vpnaas/+/823904
15:02:38 o/
15:02:57 #topic Actions from previous meetings
15:03:04 mlavalle to add note about rechecking with reason
15:03:10 I did
15:03:29 it turns out we already had a note
15:03:31 hi
15:03:37 hi
15:03:40 buried in the Neutron policies
15:03:56 so I moved it one level higher to make it more visible
15:04:14 good idea
15:04:15 thx mlavalle
15:04:19 fixed some wording and added some examples (shamelessly stolen from Cinder)
15:04:39 now the harder part - we need to enforce ourselves not to recheck without giving reasons
15:04:49 and also educate others during reviews
15:04:52 :)
15:05:01 do what we say, not what we usually do :)
15:05:18 https://docs.openstack.org/neutron/latest/contributor/index.html#gerrit-rechecks
15:05:18 bcafarel: exactly :)
15:05:18 link please?
15:05:57 ralonsoh: https://docs.openstack.org/neutron/latest/contributor/gerrit-recheck.html
15:06:02 thanks
15:06:20 ralonsoh: https://review.opendev.org/c/openstack/neutron/+/839107
15:06:24 patch^^^
15:07:01 ok, I think we can move on
15:07:04 next one
15:07:06 lajoskatona to check with QA team if neutron-tempest-plugin tag for victoria EM will be done together with tempest
15:07:24 yes, I asked gmann about it
15:07:58 I have to search for the logs, but in summary: they will prepare the tag after victoria is EM-ed, and they can do it for neutron-tempest-plugin
15:08:11 great
15:08:41 so as a last step we will update our victoria jobs in neutron-tempest-plugin to use those tagged versions of tempest and neutron-tempest-plugin
15:08:43 "we can do tempest first and then plugins. but I am waiting for victoria to be EM which should be in coming week or so" (https://meetings.opendev.org/irclogs/%23openstack-qa/%23openstack-qa.2022-04-25.log.html )
15:08:55 yes, exactly
15:08:56 and remove those jobs from the check/gate queue in the neutron-tempest-plugin repo
15:09:03 thx lajoskatona
15:09:30 next one
15:09:32 lajoskatona to continue checking failing py39 job in networking-odl
15:10:16 it's on the way, but many small things to fix there (https://review.opendev.org/c/openstack/networking-odl/+/838451 )
15:11:05 functional is now failing due to a recent change in devstack to collect stats from jobs; they import pymysql and that is not present for functional jobs
15:12:03 lajoskatona: for that one there is a fix proposed already: https://review.opendev.org/c/openstack/devstack/+/839217
15:12:42 slaweq: cool, thanks
15:13:23 I pinged gmann and other devstack cores to review it ASAP
15:13:54 ok, next one
15:13:55 mlavalle to check neutron-ovn-tempest-ovs-master-fedora periodic job
15:14:06 I checked it
15:14:32 the original problem was with ovs compiling / building
15:14:46 so it failed building
15:14:52 that was fixed last week
15:15:18 yeah, I saw that this job was green a few times this week :)
15:15:33 it now succeeds building, but is intermittently failing a test: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_15b/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-ovs-master-fedora/15b5850/testr_results.html
15:15:55 so I need to investigate this latter failure
15:15:58 but this test isn't really related to neutron
15:16:05 I know
15:16:32 I'll just keep an eye on it
15:17:28 yes, thx a lot
15:17:40 :-)
15:17:50 ok, last one from previous week
15:17:51 mlavalle to check failed fips periodic job
15:18:01 with that one I got lucky
15:18:20 it has been succeeding every single time since we talked about it last week
15:18:31 https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-release-fips
15:18:43 I think that failure was a one-off
15:19:08 great then :)
15:19:13 because even the few failures during the preceding days were build failures
15:20:45 Merged openstack/networking-ovn stable/train: Do not announce any DNS resolver if "0.0.0.0" or "::" provided https://review.opendev.org/c/openstack/networking-ovn/+/838987
15:21:22 Merged openstack/tap-as-a-service master: Add weekly jobs https://review.opendev.org/c/openstack/tap-as-a-service/+/834505
15:21:33 ok, those were all the actions from the last meeting
15:21:33 ah ok, https://github.com/ovn-org/ovn/commit/996ed75db776c59098b6539d0c03d43977202885 fixed that fedora ovn job
15:21:47 compilation issue
15:21:47 I think we can move on
15:21:51 #topic Stable branches
15:21:56 thx ykarel for the link
15:22:37 not much this week (aka no news is good news!)
15:22:53 main point is victoria switching to EM, which we already talked about
15:24:01 ok, thx bcafarel :)
15:24:10 so next topic then
15:24:12 #topic Stadium projects
15:25:35 lajoskatona: any new issues in the stadium's ci, except those in networking-odl?
15:26:05 yes, for weekly jobs: https://review.opendev.org/q/topic:weekly_job
15:26:27 I just realized that for sfc and bgpvpn we have to add neutron to the required-projects list
15:27:05 and I have 2 more patches for taas to make py310 green: https://review.opendev.org/q/topic:p310_for_taas
15:27:25 but otherwise the other stadiums seem to be green
15:27:32 Merged openstack/neutron stable/xena: Fix setting table monitoring conditions https://review.opendev.org/c/openstack/neutron/+/838783
15:28:17 I think You need to rebase those 2 taas patches
15:29:06 slaweq: true, I will do, 2 hours ago it was green....
15:29:18 :)
15:30:10 and for those weekly jobs in sfc and bgpvpn - do You have patches ready?
15:30:22 https://review.opendev.org/q/topic:weekly_job
15:30:39 ahh, sorry
15:30:42 thx
15:30:49 thanks for the attention
15:31:26 +2 on both
15:32:09 ok, I think we can move on
15:32:11 #topic Grafana
15:32:30 #link https://grafana.opendev.org/d/f913631585/neutron-failure-rate
15:32:39 I noticed there are a lot of gaps there recently
15:32:59 I'm not sure if that is going to be shut down together with services like logstash too
15:33:15 I hope grafana will still be up :)
15:33:50 I haven't heard about it, so I hope not
15:34:15 no logs, no bugs? :)
15:34:32 but yeah, hopefully it still works after that change
15:34:35 bcafarel: I wish :P
15:35:23 regarding the dashboard, I saw some spike to 100% of failures during the weekend
15:35:44 but there were not many runs, so it could be some kind of red herring
15:35:59 now it seems to be ok(ish) again
15:36:26 even the functional job is pretty low with failures this week
15:36:56 a little strange that all tempest job failure rates are 0
15:37:16 lajoskatona: I'm not worried about it :)
15:37:44 slaweq: if it is true I like it
15:37:54 :)
15:38:13 anything else regarding grafana?
15:38:17 or can we move on?
15:38:26 move on
15:38:38 +1
15:38:45 #topic Rechecks
15:39:03 I added it as a separate topic as we discuss it basically every week
15:39:11 +---------+----------+... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/fFJABtYBrfTVRoIFUycCRQHt)
15:39:25 we are pretty ok with the number of rechecks
15:39:44 above 1 on average this week, but I checked the patches which had the most rechecks
15:39:51 and there were only 3 above that average:
15:40:12 https://review.opendev.org/c/openstack/neutron/+/836863 - 8 rechecks - mostly hitting https://bugs.launchpad.net/neutron/+bug/1956958
15:40:20 are these numbers for master or all branches?
15:40:30 https://review.opendev.org/c/openstack/neutron/+/834952 - 5 rechecks - mostly hitting
15:40:34 https://bugs.launchpad.net/neutron/+bug/1956958
15:40:56 ^^ https://review.opendev.org/c/openstack/neutron/+/836140
15:40:59 and https://review.opendev.org/c/openstack/neutron/+/837143 - 2 rechecks, pretty old ones
15:41:03 the doc one was from me, it got boring in the end but I had the opportunity to force myself not to recheck blindly
15:41:05 I would need help with this patch
15:41:12 lajoskatona: yes, those are numbers from the master branch only
15:41:46 lajoskatona: no worries, it's not to blame anyone here
15:42:06 more to understand where and why we have those rechecks :)
15:42:13 slaweq: no offense, but that small patch for doc I keep in my memories :-)
15:42:17 and it seems that https://bugs.launchpad.net/neutron/+bug/1956958 is our main problem currently :)
15:42:43 and ralonsoh has the above patch for it: https://review.opendev.org/c/openstack/neutron/+/836140 ?
15:42:49 yes
15:43:01 ralonsoh: I remember about Your patch https://review.opendev.org/c/openstack/neutron/+/836140 and am looking forward to it :)
15:43:02 I've tested it manually
15:43:20 but I don't know why the script is not running when called from the L3 agent
15:43:31 do You need help with it?
15:43:37 I can take a look
15:43:45 that will be perfect
15:44:30 ok
15:44:36 I will try, but probably on Thursday or Friday
15:45:08 no rush
15:45:18 #action slaweq to check patch https://review.opendev.org/c/openstack/neutron/+/836140
15:45:42 anything else regarding rechecks today?
15:45:49 or are we moving on?
15:46:09 we can move
15:46:24 #topic fullstack/functional
15:46:44 here ykarel (probably) gave some examples of failures:
15:46:49 https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_955/834952/1/gate/neutron-functional-with-uwsgi/955ffd5/testr_results.html
15:46:54 https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b61/834952/1/gate/neutron-functional-with-uwsgi/b617dc3/testr_results.html
15:47:00 https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_211/837681/2/check/neutron-functional-with-uwsgi/2117823/testr_results.html
15:47:21 yes, I think all those are related to the keepalived thing?
15:47:22 but IMO all of them are probably related to the bug https://bugs.launchpad.net/neutron/+bug/1956958 and the timeout while waiting for the router transition
15:48:16 slaweq, is it different than what ralonsoh is trying to fix with https://review.opendev.org/c/openstack/neutron/+/836140?
15:48:17 and that's basically all I have for today for functional/fullstack jobs
15:48:40 ykarel: I think it's exactly the same
15:48:44 yeah
15:48:54 ok, thanks for the confirmation
15:49:43 also I noticed a failure related to a functional test stuck in stable/yoga and pushed a backport https://review.opendev.org/c/openstack/neutron/+/839189
15:50:13 thx
15:50:40 (nice patch!)
15:50:42 hehehe
15:51:00 :)
15:51:02 LoL
15:51:06 ok, let's move on
15:51:55 for periodic jobs all looks mostly good
15:52:04 we have a failing neutron-ovn-tempest-postgres-full job, but it's because of the same bug as the networking-odl functional jobs
15:52:07 and patch https://review.opendev.org/c/openstack/devstack/+/839217 should fix it
15:52:09 so one last item for today
15:52:16 #topic On Demand agenda
15:52:35 I have one quick thing here
15:52:49 next Tuesday will be a public holiday in Poland
15:52:53 so I will be off
15:53:08 do we want to cancel the ci meeting, or will someone else chair it?
15:53:25 not a public holiday here but I will be off too
15:53:32 if we have a critical number of bugs, we can have it
15:53:37 if not, for now, we can cancel
15:53:39 +1
15:54:02 ok, I will check on Monday how it looks and will ping You ralonsoh to chair it, or will cancel it
15:54:07 perfect
15:54:12 +1
15:54:12 thx
15:54:19 that's all from me for today
15:54:28 +1
15:54:28 anything else You want to discuss regarding our ci?
15:54:36 nothing from me
15:55:02 nothing from me
15:55:09 nothing here
15:55:20 ok, so thx for attending the meeting
15:55:24 and have a great week
15:55:25 o/
15:55:26 o/
15:55:26 #endmeeting