15:00:12 <slaweq> #startmeeting neutron_ci
15:00:12 <opendevmeet> Meeting started Tue Sep  7 15:00:12 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:12 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:16 <slaweq> hi again :)
15:00:19 <lajoskatona> Hi
15:00:20 <bcafarel> o/ long time no see :)
15:00:28 <ralonsoh> hi
15:00:31 <amotoki> slaweq: which review or place is good for discussion on the OVS constraints?
15:00:40 <amotoki> sorry for bothering the startmeeting :p
15:01:10 <obondarev> hi
15:01:43 <ralonsoh> next friday in the drivers meeting?
15:01:45 <slaweq> amotoki: one of the patches in the list https://review.opendev.org/q/topic:%22rehome-ovs-constants%22+(status:open%20OR%20status:merged)
15:01:55 <amotoki> slaweq: thanks
15:01:58 <slaweq> we need to discuss it somewhere :)
15:02:34 <amotoki> :)
15:02:50 <slaweq> ok, let's now start ci meeting :)
15:03:28 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:04:38 <slaweq> #topic Actions from previous meetings
15:04:44 <slaweq> ralonsoh to check fullstack issue https://bugs.launchpad.net/neutron/+bug/1942190
15:05:16 <ralonsoh> I didn't see the cause
15:05:35 <ralonsoh> but I didn't find any other related fail
15:05:36 <slaweq> isn't that this issue with dhcp agent?
15:05:44 <ralonsoh> (one sec)
15:05:55 <ralonsoh> (wrong link...)
15:06:02 <ralonsoh> Yes, the patch: https://review.opendev.org/c/openstack/neutron/+/807134
15:06:50 <slaweq> but this doesn't work as expected :/
15:07:01 <ralonsoh> nope, still working on this
15:07:26 <slaweq> ralonsoh: can I assign that bug to You then?
15:07:35 <ralonsoh> ok, perfect
15:07:37 <slaweq> thx
15:08:30 <slaweq> next one
15:08:34 <slaweq> slaweq to check how to run advanced image tests in serial
15:09:02 <slaweq> I was checking if there would be a possibility to run them the way we run tempest.scenario and slow tests in the "-full" jobs
15:09:23 <slaweq> so first all scenario tests except slow are run, and then as next step only slow tests are run
15:09:41 <slaweq> but for that we would need to have some new toxenv defined in tempest
15:09:52 <slaweq> there's no way to do it differently I think
15:09:59 <ralonsoh> so do you mean to create another CI job?
15:10:02 <slaweq> I spoke with gmann yesterday and he told me that he will try to check it
15:10:09 <slaweq> ralonsoh: nope
15:10:42 <slaweq> I wanted to mark tests which require the advanced image as "advanced" or something like that
15:10:45 <slaweq> maybe even "slow"
15:11:15 <lajoskatona> I wanted to ask if it is an option or not to add a new tag
15:11:19 <slaweq> and then do something like is in tempest here: https://github.com/openstack/tempest/blob/master/tox.ini#L115
15:11:32 <slaweq> lajoskatona: You can add whatever tag You want
15:11:41 <slaweq> basically using same decorator from tempest
15:11:55 <slaweq> then You can filter tests to be run based on that decorator
15:12:00 <ralonsoh> ahhh
15:12:18 <slaweq> but we don't have proper toxenv to be run by zuul
15:12:47 <slaweq> alternative would be to have extra ci jobs to run only those "slow/advanced" tests
15:12:55 <lajoskatona> it's the @attr decorator, am I right?
15:13:01 <slaweq> that's doable only with changes in neutron-tempest-plugin
15:13:05 <slaweq> lajoskatona: right
15:13:31 <slaweq> so it's doable with changes in neutron-tempest-plugin repo but we would have 4 extra scenario jobs
15:13:37 <slaweq> which isn't a great solution for sure
15:16:07 <slaweq> I will also check if there is a possibility to mark some tests as unable to run in parallel
15:16:14 <slaweq> maybe that would be enough
15:16:33 <slaweq> I will definitely continue investigation on that topic
15:17:24 <slaweq> #action slaweq to check if tests can be marked as they can't run in parallel
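[Editorial note: one existing mechanism for the action item above is stestr's `group_regex` option, which schedules all tests whose ids match the same regex group onto a single worker, so they never run concurrently with each other. The option itself is real stestr configuration; the file contents below are only an illustration, and whether it fits these particular tests is an assumption.]

```ini
# .stestr.conf -- illustrative; paths and pattern are hypothetical
[DEFAULT]
test_path=./neutron_tempest_plugin
# Tests whose ids yield the same first regex group run on one worker,
# serially with respect to each other (here: grouped by test class):
group_regex=([^\.]*\.)*
```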
15:17:34 <slaweq> ok, let's move on
15:17:40 <slaweq> #topic stadium projects
15:17:44 <slaweq> any updates here?
15:18:38 <lajoskatona> nothing, we have some fixes this week for n-d-r, but everything else seems ok
15:18:43 <bcafarel> one from me on neutron-vpnaas, I was looking into fixing stable branches so the periodic jobs do not generate as much spam
15:19:03 <bcafarel> train is almost there with https://review.opendev.org/c/openstack/neutron-vpnaas/+/805969 but I am bit stuck on the tempest error
15:19:38 <bcafarel> the job (old, automatically converted for zuulv3) does not find neutron-tempest-plugin, though it is listed and checked out :/
15:19:43 <bcafarel> pointers welcome :)
15:20:14 <slaweq> bcafarel: "all-plugin" tox env is used there
15:20:23 <slaweq> it's deprecated and I know it was causing some problems
15:20:28 <opendevreview> Oleg Bondarev proposed openstack/neutron master: Add Local IP Extension and DB  https://review.opendev.org/c/openstack/neutron/+/804523
15:20:29 <opendevreview> Oleg Bondarev proposed openstack/neutron master: Local IP RPC server-agent interface  https://review.opendev.org/c/openstack/neutron/+/807116
15:20:40 <slaweq> maybe You should try to change it to "all" toxenv?
15:20:47 <bcafarel> oh yes I remember that one!
15:20:52 <bcafarel> slaweq++ I will try that
15:20:55 <slaweq> ++
15:21:19 <slaweq> if it's not that, please ping me, I can take a look once again
15:21:24 <slaweq> and try to help :)
15:21:29 <bcafarel> will do thanks :)
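[Editorial note: the change suggested above is typically a one-line job variable in the zuul job definition, as sketched below. The job name, parent, and file layout here are hypothetical; only the `tox_envlist` switch from the deprecated "all-plugin" env to "all" reflects the actual suggestion.]

```yaml
# Illustrative zuul job definition for the converted legacy job
- job:
    name: neutron-vpnaas-tempest   # hypothetical job name
    parent: devstack-tempest
    vars:
      # "all-plugin" is deprecated and breaks tempest plugin discovery;
      # "all" is the supported env that finds installed plugins
      tox_envlist: all
```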
15:21:34 <bcafarel> and for older versions I wonder if we should move them to EOL (train already took 8-9 fixes to repair the gates)
15:22:12 <bcafarel> I know elod was motivated to at least keep stein running but it was a sea of red in last DNM test patch
15:22:14 <slaweq> You mean eol older than train or train?
15:22:40 <bcafarel> older than train (where hopefully I can get zuul +1 soon)
15:23:39 <slaweq> if they are broken, and there are no maintainers, I think we can eol them already
15:23:49 <slaweq> I can take care of it if You want
15:24:24 <bcafarel> yes please, you know the "paperwork" better
15:24:31 <slaweq> sure, I will
15:24:46 <slaweq> #action slaweq to start eol'ing vpnaas stein and older branches
15:25:04 <bcafarel> thanks!
15:25:10 <slaweq> ok, next one
15:25:12 <slaweq> #topic Stable branches
15:25:19 <slaweq> any issues with Neutron's stable branches?
15:25:43 <bcafarel> all good on "active" branches (recent backports)
15:25:54 <slaweq> great
15:26:02 <slaweq> thx for taking care of it :)
15:26:04 <bcafarel> the CVE backport for example went fine up to queens :)
15:26:12 <slaweq> yeah, I saw that one
15:27:14 <slaweq> ok, let's move on
15:27:16 <slaweq> #topic Grafana
15:27:24 <slaweq> http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:28:00 <slaweq> functional tests job is finally in better shape this week
15:28:12 <slaweq> thx to otherwiseguy fix which we merged last week
15:28:40 <slaweq> in fullstack the only issue I saw was this issue with port provisioning
15:28:51 <ralonsoh> which one?
15:29:14 <slaweq> which one what? patch from otherwiseguy or fullstack issue?
15:29:22 <ralonsoh> fullstack issue
15:29:43 <slaweq> https://bugs.launchpad.net/neutron/+bug/1942190
15:29:54 <ralonsoh> ah sorry, yes
15:30:54 <slaweq> anything else regarding grafana for today?
15:32:27 <slaweq> I guess we can move on then
15:32:45 <slaweq> regarding fullstack, I saw only that one issue which we already discussed
15:32:55 <slaweq> #topic tempest/scenario
15:33:04 <slaweq> here I found (again) a few oom-killer issues
15:33:19 <slaweq> and one failure of test_overlapping_sec_grp_rules in linuxbridge job:
15:33:23 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e2e/807134/3/check/neutron-tempest-plugin-scenario-linuxbridge/e2e97f7/testr_results.html
15:33:43 <slaweq> IIRC this was already seen by Alex who did that scenario test
15:33:56 <slaweq> and it's probably reported as Linuxbridge bug
15:34:11 <slaweq> I will need to look for it and eventually talk with Alex about it
15:35:54 <slaweq> #action slaweq to check test_overlapping_sec_grp_rules failure in linuxbridge
15:36:09 <slaweq> and that's basically all I had regarding tempest tests for today :)
15:36:13 <slaweq> it's not bad this week
15:36:20 <slaweq> anything else You want to add here?
15:36:28 <ralonsoh> no, thanks
15:36:49 <slaweq> so let's go to the last topic for today
15:36:50 <bcafarel> all good (and we saw a good rate of patches merged)
15:36:59 <slaweq> #topic Periodic
15:37:11 <slaweq> it seems that neutron-ovn-tempest-ovs-master-fedora is broken (again)
15:37:22 <slaweq> every day it fails with same errors, like https://zuul.openstack.org/build/c9ad43989706459f84e1b203092f32c4
15:37:38 <slaweq> and this time it seems to me like it could be a valid bug, as it's failing on the same tests every day
15:38:27 <ralonsoh> always failing the same tests?
15:39:08 <ralonsoh> yes
15:39:21 <slaweq> at least this week, in those jobs I checked today
15:39:51 <ralonsoh> so we did something on 2021-08-23
15:40:06 <slaweq> we or Fedora :)
15:41:33 <slaweq> ralonsoh: I wonder if errors from https://14bc9da0ccb50e1a0478-6b466cd24fe60c196767959ed246eea2.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-ovs-master-fedora/c9ad439/controller/logs/screen-q-svc.txt are somehow related
15:41:46 <ralonsoh> i was checking that
15:42:12 <slaweq> I'm talking about those: "oslo_db.exception.DBReferenceError: (pymysql.err.IntegrityError) (1452, 'Cannot add or update a child row: a foreign key constraint fails (`neutron`.`ovn_revision_numbers`, CONSTRAINT `ovn_revision_numbers_ibfk_1` FOREIGN KEY (`standard_attr_id`) REFERENCES `standardattributes` (`id`) ON DELETE SET NULL)')"
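[Editorial note: the IntegrityError quoted above means an INSERT/UPDATE on ovn_revision_numbers references a standardattributes row that no longer exists. Schematically (abbreviated, illustrative schema reconstructed from the constraint named in the error, not the full Neutron migrations):]

```sql
-- Parent table (abbreviated)
CREATE TABLE standardattributes (
    id BIGINT PRIMARY KEY
);
-- Child table with the constraint from the error message
CREATE TABLE ovn_revision_numbers (
    standard_attr_id BIGINT,
    CONSTRAINT ovn_revision_numbers_ibfk_1
        FOREIGN KEY (standard_attr_id)
        REFERENCES standardattributes (id) ON DELETE SET NULL
);
-- Fails with MySQL error 1452 when no parent row with id 42 exists,
-- e.g. a race between resource deletion and the OVN revision bump:
INSERT INTO ovn_revision_numbers (standard_attr_id) VALUES (42);
```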
15:43:59 <slaweq> any volunteer to check it? I think we should open new LP bug and check that one as it may be something in our code
15:44:28 <ralonsoh> I can open the LP bug
15:44:35 <ralonsoh> and maybe check it
15:44:41 <slaweq> thx ralonsoh
15:44:54 <slaweq> let's do it that way; if someone has time, they can check it
15:45:18 <slaweq> #action ralonsoh to open bug about failing neutron-ovn-tempest-ovs-master-fedora periodic job
15:45:41 <slaweq> and that's all I had for today
15:45:55 <slaweq> do You have anything else to discuss?
15:46:08 <slaweq> if not, I will let You go a bit earlier :)
15:46:15 <bcafarel> all good here
15:46:33 <lajoskatona> +1
15:46:41 <slaweq> ok, thx for attending the meeting
15:46:50 <ralonsoh> bye
15:46:51 <slaweq> see You online and have a great evening :)
15:46:53 <slaweq> o/
15:46:55 <slaweq> #endmeeting