15:01:03 #startmeeting neutron_ci
15:01:03 Meeting started Tue Sep 28 15:01:03 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:03 The meeting name has been set to 'neutron_ci'
15:01:08 Hi
15:01:09 hello again
15:01:10 welcome again :)
15:01:12 no break this time :)
15:01:17 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01:20 sorry for the long meeting
15:01:22 bcafarel: no mercy :P
15:01:59 we don't have any action items from last meeting, so I think we can quickly move to the next topics today
15:02:21 #topic Stadium projects
15:03:43 nothing from me, as I checked last time all is quiet
15:03:54 great
15:04:06 I also didn't notice anything really bad there recently
15:04:11 #topic Stable branches
15:04:43 nothing special either (except that placement issue in xena of course)
15:04:55 yeah, that one is also in master
15:05:32 bcafarel: You told me earlier about https://review.opendev.org/c/openstack/neutron-vpnaas/+/805969
15:05:50 ah yes, thanks for the reminder
15:05:56 :)
15:05:58 yw
15:06:19 as we are EOL'ing the earlier branches, I am starting to think we can just drop tempest in train (EM branch)
15:07:09 I am a bit out of ideas on why it does not find the tempest plugin (ralonsoh tried too) and at least train will have some working CI
15:07:51 bcafarel: because in train those tests were still in the neutron-vpnaas repo
15:07:55 not in the tempest plugin
15:08:02 so the job used there is probably wrong
15:08:44 slaweq: ah sorry, bad wording, the config does point to the tempest dir in the neutron-vpnaas repo (and the devstack log shows it configured in tempest as far as I can see)
15:09:04 but opinions/ideas welcome (sorry I have to run, have a nice rest of the meeting)
15:09:27 in that case isn't the problem that neutron-tempest-plugin is cloned as well and tempest finds the tests in 2 paths?
15:09:47 but in https://review.opendev.org/c/openstack/neutron-vpnaas/+/805969/11/.zuul.yaml#31 I see You defined the tempest regex neutron_tempest_plugin.vpnaas
15:09:54 which is wrong in train
15:10:06 why was this change made there?
15:10:47 the last change is mine, just for testing
15:10:51 focus on PS10
15:10:53 ah, ok
15:11:31 I will check that this week
15:11:35 thanks
15:11:40 maybe I will figure out something :)
15:11:54 #action slaweq to check vpnaas stable/train patch https://review.opendev.org/c/openstack/neutron-vpnaas/+/805969/10
15:12:32 is that all regarding stable branches' CI for today?
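For the stable/train neutron-vpnaas discussion above, a minimal sketch of the kind of .zuul.yaml override being talked about: pointing the test regex at the in-repo tests instead of the neutron_tempest_plugin.vpnaas path used on newer branches. The job name, parent, and the exact in-repo module path (neutron_vpnaas.tests.tempest) are assumptions for illustration, not taken from the actual patch.

```yaml
# Hypothetical sketch only: job name, parent and test path are placeholders.
- job:
    name: neutron-vpnaas-tempest-train
    parent: devstack-tempest
    required-projects:
      - openstack/neutron
      - openstack/neutron-vpnaas
    vars:
      devstack_plugins:
        neutron-vpnaas: https://opendev.org/openstack/neutron-vpnaas
      # Run the tests that still live inside the neutron-vpnaas repo in train.
      tempest_test_regex: ^neutron_vpnaas\.tests\.tempest
```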
15:12:36 if so, I think we can move on
15:14:20 ok, let's move on :)
15:14:24 #topic Grafana
15:14:31 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:14:56 we can see that all our tempest jobs are going to 100% failures today
15:15:06 and it's a known issue with apache and placement
15:15:38 basically all devstack based jobs (not only tempest) are affected by this
15:16:00 * slaweq wonders why things like that always happen in the RC weeks :/
15:16:11 karma
15:16:16 :D
15:17:07 :-)
15:17:11 other than that I don't see anything critical in the Grafana
15:18:39 let's move on to the specific jobs' issues
15:18:40 #topic fullstack/functional
15:19:05 I found the same issue twice in functional tests:
15:19:06 Rodolfo Alonso proposed openstack/neutron master: Execute the quota reservation removal in an isolated DB txn https://review.opendev.org/c/openstack/neutron/+/809983
15:19:10 https://450622a53297d10bce29-d3fc18d05d3e89166397d52e51106961.ssl.cf2.rackcdn.com/805637/8/check/neutron-functional-with-uwsgi/517e0d3/testr_results.html
15:19:17 https://195cae8c9d2de3edf3c4-c1cb7fb0c5617ff6ee09319b2539cf1a.ssl.cf5.rackcdn.com/803268/8/check/neutron-functional-with-uwsgi/8d2d4d9/testr_results.html
15:19:34 it's in different tests but it seems like it may be the same problem
15:19:57 the keepalived state change process, maybe
15:20:20 I could take a look at those errors, but maybe at the end of the week
15:20:58 thx ralonsoh
15:21:09 it's not very urgent for sure as it doesn't happen very often
15:21:30 #action ralonsoh to check functional tests issue with router's state transition
15:22:01 Rodolfo Alonso proposed openstack/neutron stable/xena: Execute the quota reservation removal in an isolated DB txn https://review.opendev.org/c/openstack/neutron/+/811124
15:22:29 ok, next one
15:22:29 #topic Tempest/Scenario
15:22:34 I reported a new bug today https://bugs.launchpad.net/neutron/+bug/1945283
15:22:47 I saw this test fail at least twice this week
15:22:52 but I also think I saw it earlier as well
15:23:20 I will try to look at it if I have some time
15:23:28 but I can't promise I will do it this week
15:23:37 so it's not assigned to me for now
15:23:48 we can ping Eran
15:23:49 if anyone wants to check it, feel free to take it
15:24:00 (if I'm not wrong, he implemented those tests)
15:24:15 ralonsoh: thx, I will check and talk with him about it
15:25:31 lajoskatona: You also wanted to talk about something related to the tempest jobs, right?
15:25:40 yes, thanks
15:25:56 it's about the api_extensions list in tempest.conf
15:26:01 * gibi lurks
15:26:43 it seems that earlier it was filled with a list, and now on master, xena, wallaby (citation needed) we have "all"
15:26:58 do You have an example job?
15:27:10 interestingly, for example on master I found that for ovn jobs we have a list (https://7ab77c8a0a641a7ff2df-ecc33ccaab89feb6b0f7a7737c56ac51.ssl.cf5.rackcdn.com/807707/4/check/neutron-ovn-tempest-slow/a706584/controller/logs/tempest_conf.txt )
15:27:29 and for ovs we have 'all': https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d36/807707/4/check/neutron-ovs-tempest-slow/d36b75e/controller/logs/tempest_conf.txt
15:28:00 ok, I copied the good links
15:29:47 I thought that the list is added from zuul.yaml, like this one: https://opendev.org/openstack/neutron-tempest-plugin/src/branch/master/zuul.d/master_jobs.yaml#L8
15:30:12 the list -> all change happened during the weekend somehow
15:30:35 hmm
15:30:46 and it makes nova-grenade-multinode run with 'all' now, but that job was never configured to run trunk tests
15:30:58 but 'all' means tempest wants to run trunk as well
15:31:00 Yeah, and for the nova grenade job I suppose the list is not from the Neutron zuul.yaml-s
15:31:42 I tried to fetch some repos which I thought might have some change related to this, but without success
15:31:57 the 'all' is coming from devstack: when $NETWORK_API_EXTENSIONS is not defined it gets defaulted to 'all'
15:32:22 but yeah, I got stuck there with lajoskatona
15:32:53 yeah, but that was not changed recently, so I suppose there was something in the env previously
15:33:35 I think I know where it's configured for ovn jobs
15:33:38 give me a sec
15:33:45 Elvira García Ruiz proposed openstack/neutron master: Fix misleading FixedIpsSubnetsNotOnSameSegment error https://review.opendev.org/c/openstack/neutron/+/811435
15:34:26 well, for OVN, the supported extensions are defined in a variable
15:34:44 ML2_SUPPORTED_API_EXTENSIONS
15:35:15 Elvira García Ruiz proposed openstack/neutron master: Fix misleading FixedIpsSubnetsNotOnSameSegment error https://review.opendev.org/c/openstack/neutron/+/811435
15:35:50 ralonsoh: yes, that was my thought as well
15:36:00 but I don't see it used in devstack anywhere :/
15:36:32 btw, what CI job backend is failing?
15:36:34 OVS or OVN?
15:36:36 https://opendev.org/openstack/devstack/src/branch/master/lib/neutron_plugins/ovn_agent#L421
15:36:38 here it is
15:36:50 right
15:36:58 so this is statically defined in the code
15:37:07 for ML2 yes
15:37:11 *ML2/OVN
15:37:31 gibi: grenade is running with ovs?
15:37:36 maybe we should do something similar for other backends as well?
15:37:37 yes I think so
15:38:03 slaweq: but how did this get broken just now?
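To make the devstack fallback above concrete: when a devstack-based job does not set NETWORK_API_EXTENSIONS, tempest.conf ends up with api_extensions = all. Below is a hedged sketch of pinning the list explicitly in a job definition; the job name and the heavily trimmed extension list are placeholders, not the real set carried by the neutron jobs.

```yaml
# Hypothetical sketch: pin the extension list instead of relying on
# devstack's NETWORK_API_EXTENSIONS=all fallback. Name and list are placeholders.
- job:
    name: neutron-ovs-tempest-example
    parent: tempest-integrated-networking
    vars:
      devstack_localrc:
        # Real jobs carry a much longer, backend-specific list.
        NETWORK_API_EXTENSIONS: address-scope,agent,allowed-address-pairs,router,trunk
```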
15:38:28 we didn't release any new API in the last few weeks
15:39:09 ralonsoh: yeah I know, so this is really strange
15:39:19 gibi: idk
15:40:26 lajoskatona: yepp, I confirm now that the grenade-base job has Q_AGENT: openvswitch defined
15:41:47 one thing nova can do is to try to configure the grenade job to be able to run the trunk test successfully; that would unblock us without explaining why this happened
15:41:48 gibi: lajoskatona so the easiest solution for now would be to define that list, even statically, for the jobs where it is broken
15:42:35 Terry Wilson proposed openstack/neutron master: Support SB OVSDB connections to non-leader servers https://review.opendev.org/c/openstack/neutron/+/803268
15:43:12 gibi: yeah, that is an option too - I think it should be enough to add neutron-trunk: true in the job definition
15:43:15 like https://github.com/openstack/neutron-tempest-plugin/blob/master/zuul.d/master_jobs.yaml#L554
15:43:24 slaweq: yep, lyarwood is trying that
15:43:29 and that would enable this service plugin so that the API will be available
15:43:29 adding neutron-trunk
15:44:09 gibi: and I will try to understand how it was done earlier in the ovs based jobs
15:44:29 slaweq, lajoskatona, ralonsoh: thanks, I think we can move on. If you have ideas later let me know
15:44:31 #action slaweq to check api extensions list in ovs based jobs, how it was generated
15:44:46 checking the q-svc logs, I see there are differences during the extension load process
15:44:57 I'll take a look after the meeting
15:45:24 ralonsoh: ok, please let me know if You find anything
15:45:42 slaweq, ralonsoh: thanks
15:46:45 ralonsoh: lajoskatona gibi could it be that the grenade job started testing from xena to master now, and in xena we have https://review.opendev.org/c/openstack/neutron/+/793141
15:47:11 slaweq: that feels related
15:47:13 which basically changes the way we calculate which extensions should really be loaded
15:47:33 slaweq: I checked that yesterday but had no proof
15:47:38 ok
15:47:40 thx lajoskatona
15:47:57 slaweq: during the weekend grenade was switched from testing wallaby -> master to xena -> master
15:48:23 gibi: ok, thx, that's something to check for sure
15:48:29 slaweq: yepp
15:48:35 good point
15:48:37 thanks
15:48:46 I will try to investigate it
15:49:13 any other topics related to the tempest jobs?
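A sketch of the workaround discussed above for the grenade job: adding neutron-trunk so the trunk API is actually available when tempest runs with 'all'. The child job name here is a placeholder, and the devstack_services toggle is assumed to follow the same pattern as the neutron-tempest-plugin job definitions linked at 15:43:15.

```yaml
# Hypothetical sketch: the job name is a placeholder; only the
# neutron-trunk toggle is the point of the example.
- job:
    name: nova-grenade-multinode-trunk
    parent: nova-grenade-multinode
    vars:
      devstack_services:
        # Enable the trunk service plugin so trunk API tests can pass.
        neutron-trunk: true
```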
15:49:42 not from me, thanks for the time
15:49:48 no
15:49:53 ok, let's move on
15:49:57 #topic Periodic
15:50:12 I noticed that openstack-tox-py36-with-neutron-lib-master has been failing for at least a couple of days
15:50:19 so I reported bug https://bugs.launchpad.net/neutron/+bug/1945285
15:50:31 and it's already fixed
15:50:44 thx lajoskatona for the quick fix
15:50:50 :)
15:51:17 except that one issue with periodic jobs, the others seem to be ok
15:51:30 the fedora based job is failing again, but it seems to be related to that placement/apache issue
15:52:14 well, there were two errors in this job
15:52:17 I solved one
15:52:19 one sec
15:52:37 https://9706dd0df42819547e88-7c45ade9796b4a439c7ceea8af4ef1aa.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-ovs-master-fedora/107c6b8/job-output.txt
15:52:48 now I see that it's failing as placement seems to be not running
15:52:52 yes, but before that
15:53:10 well, I can't find the bug now
15:53:37 I remember You had fixed some issue a few weeks ago
15:53:39 :)
15:53:59 let's now wait for the fix for the current problem with placement and we will see how it is then
15:54:07 https://review.opendev.org/c/openstack/tempest/+/808081
15:54:16 this is 50% of the solution
15:54:23 as commented in https://launchpad.net/bugs/1942913
15:54:56 tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops is still failing
15:55:05 (regardless of the placement problems right now)
15:55:12 and I didn't find the problem
15:55:18 ralonsoh: ok, thx
15:55:42 so when the placement problem is solved we can blacklist that one test which is broken
15:55:45 sure
15:55:57 to have the job green and then investigate this issue
15:56:05 thx for taking care of it so far
15:56:16 let's get back to it when placement is fine
15:56:41 and that's all from me for today
15:56:47 any last minute topics?
15:56:51 yes
15:57:03 https://review.opendev.org/c/openstack/neutron/+/809983
15:57:09 I've refactored this patch
15:57:18 in master and I've pushed the xena patch
15:57:22 based on this one
15:57:28 "this is the way"
15:57:50 thx
15:57:59 I will review it tonight or tomorrow morning
15:58:06 thanks!
15:58:19 Will do the same, thanks ralonsoh
15:58:25 ok, I have to finish now as I have another meeting in a minute
15:58:32 thx a lot for attending guys
15:58:36 and have a great week
15:58:40 you too
15:58:40 #endmeeting
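For the fedora periodic job discussed near the end, a sketch of the temporary blacklist mentioned at 15:55:42, assuming the tempest_exclude_regex knob of the devstack-tempest base job; it only keeps the job green while the test_server_basic_ops failure is investigated, and the variable name is an assumption rather than something confirmed in the log.

```yaml
# Hypothetical sketch: temporarily exclude the failing scenario test
# from the fedora periodic job until the underlying issue is understood.
- job:
    name: neutron-ovn-tempest-ovs-master-fedora
    vars:
      tempest_exclude_regex: tempest\.scenario\.test_server_basic_ops\.TestServerBasicOps\.test_server_basic_ops
```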