15:00:05 #startmeeting neutron_ci
15:00:05 Meeting started Tue Sep 14 15:00:05 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:05 The meeting name has been set to 'neutron_ci'
15:00:12 hi
15:00:17 hi
15:00:45 Hi
15:00:47 I think that attendance will not be high today as bcafarel and ralonsoh are off
15:00:53 so we can probably start
15:01:18 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01:28 #topic Actions from previous meetings
15:01:34 slaweq to check if tests can be marked as they can't run in parallel
15:02:01 I was checking it but it seems that we will need some changes in tempest to run some "advanced" tests in serial
15:02:07 and others in parallel
15:02:25 but now gmann proposed https://review.opendev.org/c/openstack/tempest/+/807790 which should help us with this
15:02:34 I will follow up on it with him
15:02:47 but this week I didn't see any oom-killers in our jobs
15:03:26 ok, next one
15:03:29 slaweq to start eol'ing vpnaas stein and older branches
15:03:36 I sent an email last week http://lists.openstack.org/pipermail/openstack-discuss/2021-September/024779.html
15:03:45 I didn't see any replies so far
15:03:57 so at the end of this week we should be good to move on with this
15:04:07 lajoskatona: do You want to continue that process as the new PTL?
15:04:12 or do You want me to do it?
15:04:14 sure
15:04:27 I have to propose a patch to releases?
15:04:45 and no need for more mails about it, am I right?
15:04:59 the guide is at https://docs.openstack.org/project-team-guide/stable-branches.html
15:05:12 ack
15:05:21 now we need to remove any related zuul jobs from other repos (if there are any)
15:05:55 and propose a patch similar to https://review.opendev.org/#/c/677478/ to create the eol tag
15:06:18 finally we need to contact elodilles to delete the old branches
15:06:32 if You have any questions, elodilles is the best guy to ask I think :)
15:06:53 slaweq: ok
15:06:58 ++
15:07:00 thx
15:08:13 ok, next one
15:08:19 ralonsoh to open bug about failing neutron-ovn-tempest-ovs-master-fedora periodic job
15:08:24 I know he checked it
15:08:28 and found 2 errors
15:08:33 First one, related to the OVN metadata port. Patch: https://review.opendev.org/c/openstack/tempest/+/808081, testing patch: https://review.opendev.org/c/openstack/neutron/+/808110
15:08:38 Second one, related to tempest.scenario.test_server_basic_ops.TestServerBasicOps, still under investigation
15:08:47 that's the update from ralonsoh :)
15:09:56 and those are all the actions from last week
15:09:59 #topic Stadium projects
15:10:06 any updates about stadium ci today?
15:10:36 nothing interesting
15:10:50 that's good news :)
15:10:52 things were green when I last checked
15:10:52 thx
15:11:17 so next topic
15:11:19 #topic Stable branches
15:11:28 bcafarel is not available today
15:11:49 but maybe You have something related to the stable branches' ci?
15:12:03 I don't think we have any major issues there but maybe I missed something
15:12:27 nothing from me
15:13:44 so let's move on
15:13:49 #topic Grafana
15:13:56 http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:14:28 seems that things are pretty ok recently
15:14:57 even the functional/fullstack jobs look better now :)
15:16:03 I think we can move on to the next topic
15:16:09 #topic fullstack/functional
15:16:19 here I have just one update from ralonsoh about fullstack
15:16:24 https://bugs.launchpad.net/neutron/+bug/1942190: I've tried to push https://review.opendev.org/c/openstack/neutron/+/807134 and other tests like https://review.opendev.org/c/openstack/neutron/+/808026 (to reduce the memory footprint), but I can't fix this problem. privsep daemons are still, sometimes, not being correctly spawned in the DHCP agent.
15:17:17 but I think we can get back to that when ralonsoh is back
15:18:24 I didn't find any other new issues in those jobs last week
15:18:27 #topic Tempest/Scenario
15:18:40 here I found one issue that may be interesting for us:
15:18:44 tempest.api.compute.admin.test_live_migration.LiveMigrationTest.test_live_migration_with_trunk
15:18:49 https://6f050c0b73606a153f16-9f552c22a38891cd59267376a7a41496.ssl.cf1.rackcdn.com/805849/9/check/neutron-ovs-tempest-multinode-full/de0bc9a/testr_results.html
15:18:56 did You see something similar before?
15:19:07 maybe You are already aware of the problem?
15:19:55 perhaps there was a bug from gibi, but I'm not sure
15:20:15 yeah, I reported that
15:20:40 gibi: do You have the link?
15:20:45 trying to find it...
15:20:45 https://bugs.launchpad.net/neutron/+bug/1940425
15:20:55 thx lajoskatona and gibi
15:21:00 but the fix for this was merged, I think
15:21:27 at least somebody started to work on it, but perhaps I'm mixing it up
15:21:44 so it's already reported
15:21:48 slaweq: https://bugs.launchpad.net/neutron/+bug/1940425
15:22:00 and indeed it's not very frequent but we will need to check that
15:22:27 and this seems related: https://bugs.launchpad.net/neutron/+bug/1874447
15:22:44 lajoskatona: there was a fix, but it did not solve the issue fully
15:22:54 gibi: ok
15:23:08 lajoskatona: this was the older bug that got a fix https://bugs.launchpad.net/tempest/+bug/1924258
15:23:14 gibi: the second one is/was for the ovn backend
15:23:20 so I don't think it's the same issue
15:23:32 slaweq: yeah, sorry
15:23:37 no problem :)
15:24:05 Terry Wilson proposed openstack/networking-ovn stable/train: Fix neutron_pg_drop-related startup issues https://review.opendev.org/c/openstack/networking-ovn/+/808967
15:24:57 we will need someone to check that issue probably
15:25:07 if anyone has any cycles, please try to check it
15:26:20 and that's basically all I have for today :)
15:26:29 do You have anything else related to our ci?
15:27:49 if not, I will give You about 30 minutes back today
15:28:50 ok, thx for attending the meeting today
15:28:55 and have a great week
15:28:57 o/
15:28:58 o/
15:28:59 #endmeeting
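
For reference, the eol-tag step discussed above (a patch "similar to https://review.opendev.org/#/c/677478/") is a change to the openstack/releases repository that appends a `<series>-eol` release entry to the deliverable file of the branch being retired, per the stable-branches guide linked in the meeting. A minimal sketch for the vpnaas stein case, assuming the usual deliverable layout; the file path and commit hash are illustrative placeholders, not taken from any actual patch:

```yaml
# Sketch only: adding an EOL tag for stable/stein of neutron-vpnaas.
# Assumed path: deliverables/stein/neutron-vpnaas.yaml in openstack/releases;
# the existing deliverable metadata and released versions stay unchanged.
releases:
  # ... previously released stein versions remain as-is ...
  - version: stein-eol
    projects:
      - repo: openstack/neutron-vpnaas
        hash: <last commit SHA on stable/stein>   # placeholder
```

Once the eol tag is created from such a patch, the remaining step discussed above is to contact elodilles (release team) to delete the old stable branches themselves.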