15:00:06 #startmeeting neutron_ci
15:00:06 Meeting started Tue Jun 29 15:00:06 2021 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:06 The meeting name has been set to 'neutron_ci'
15:00:12 hi
15:00:15 hi again :)
15:00:18 hi
15:00:32 First of all Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01:15 ok, let's start
15:01:18 #topic Actions from previous meetings
15:01:25 slaweq to check failing update mtu fullstack test: https://bugs.launchpad.net/neutron/+bug/1933234
15:01:32 I checked it today
15:02:14 and I think I know what is going on there, I described it in a comment on LP: https://bugs.launchpad.net/neutron/+bug/1933234/comments/2
15:02:31 but to confirm that I will need additional logs in the L3 agent: https://review.opendev.org/c/openstack/neutron/+/798648
15:02:51 Ilya Chukhnakov proposed openstack/neutron-specs master: [WIP] Add Node-Local Virtual IP Spec https://review.opendev.org/c/openstack/neutron-specs/+/797798
15:03:05 Hi
15:03:18 obondarev regarding Your question, I don't think it will be useful in that specific case to log those port ids there, but I can add it
15:03:43 Ilya Chukhnakov proposed openstack/neutron-specs master: [WIP] Add Node-Local Virtual IP Spec https://review.opendev.org/c/openstack/neutron-specs/+/797798
15:04:52 slaweq: got it
15:06:11 and that's basically all regarding this issue for now
15:06:27 Mamatisa Nurmatov proposed openstack/neutron master: use payloads for PORT AFTER_DELETE events https://review.opendev.org/c/openstack/neutron/+/797004
15:06:28 if it happens often I will propose marking this test as unstable for now
15:06:38 but maybe it will not be necessary :)
15:06:51 I think we can move on
15:06:56 next was:
15:06:59 lajoskatona to check with infra the status of the tap-as-a-service move
15:08:15 Merged openstack/neutron-tempest-plugin master: Add a test for overlapping SG rules https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/520251
15:08:19 Merged openstack/neutron-tempest-plugin master: Revert "Skip scenario tests if HA router will not be active" https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/796858
15:08:39 lajoskatona are You here?
15:08:51 Merged openstack/neutron stable/ussuri: Updates for python3.8 https://review.opendev.org/c/openstack/neutron/+/793417
15:09:36 sorry, yes
15:10:05 I asked around ( https://meetings.opendev.org/irclogs/%23openstack-infra/%23openstack-infra.2021-06-24.log.html#t2021-06-24T16:58:18 )
15:10:18 sorry, my client plays with me
15:11:14 so basically whenever the move/rename happens all open patches will be moved
15:12:04 so IIUC we are good to merge and send patches as we want now, right?
15:12:11 exactly
15:12:23 great, thx lajoskatona :)
15:13:34 ok, that's all actions from the previous meeting
15:13:37 #topic Stadium projects
15:13:55 lajoskatona any other updates about stadium CI?
15:14:32 the only one is from the mail http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023321.html to fix broken zuul config
15:15:04 that is mostly odl rocky perhaps, amotoki started to check that, thanks for that
15:15:29 all jobs are failing.
perhaps we can drop them
15:15:31 I saw amotoki volunteered for that on the ML so I didn't check it
15:15:37 https://review.opendev.org/c/openstack/networking-odl/+/798298
15:15:57 but a requirements change is also required and the requirements-check job fails too. I need to ask the requirements team about it.
15:17:01 maybe we should simply make networking-odl rocky EOL now?
15:17:04 wdyt?
15:17:07 elod this morning mentioned that isort is not in the rocky requirements, and that one is broken too, so it's hard
15:18:38 yeah, for ODL we seem to have even liberty (from http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023347.html & http://paste.openstack.org/show/806995/ )
15:18:39 lajoskatona: yes. we don't have a proper isort in the requirements, so if we'd like to fix it we also need a requirements change, but that job is failing (it is not specific to the neutron team)
15:19:02 amotoki: exactly
15:19:31 amotoki: so perhaps we can clean these old branches, not sure what we need for that
15:20:30 I am okay with EOL'ing networking-odl rocky and older
15:21:13 me too :)
15:21:15 lajoskatona: what do you mean by 'clean'? EOL?
15:21:23 do You want me to start that process?
15:21:23 yes
15:21:36 or do You want to do it?
15:21:43 but that means we put a tag on it and the branch is deleted, or am I wrong?
15:21:49 slaweq: me?
15:21:53 I can
15:22:05 I can help with it
15:22:15 amotoki: thanks
15:22:46 I think we can first drop the jobs which cause the zuul configuration error, along with the failing jobs.
15:23:26 in parallel, we can move forward with the EOL process for networking-odl's older branches.
15:23:32 amotoki: ok
15:23:36 sounds good to me
15:23:58 so amotoki will You propose removal of those broken jobs?
15:24:06 slaweq: yes
15:24:14 and lajoskatona You will start EOLing it, correct?
15:24:22 slaweq: yes
15:24:47 I will also check zuul errors in other repos this week.
15:24:51 #action amotoki to clean failing jobs in networking-odl rocky and older
15:25:11 #action lajoskatona to start EOL process for networking-odl rocky and older
15:25:18 thank You both :)
15:26:21 anything else or can we move on?
15:27:09 yes
15:27:34 lajoskatona "yes, something else" or "yes, we can move on"? :D
15:27:35 none from me either
15:28:03 sorry, we can move on
15:28:14 ok :)
15:28:35 As we don't have bcafarel today, I think we can skip the stable branches topic
15:28:50 I didn't see any serious issues there last week
15:28:53 #topic Grafana
15:29:30 https://grafana.opendev.org/d/BmiopeEMz/neutron-failure-rate?orgId=1
15:31:29 I see that today neutron-tempest-slow-py3 is going high, but there was a small number of runs so maybe it's nothing really serious
15:31:41 worth keeping an eye on it for now
15:31:58 other than that things look pretty ok
15:32:32 do You see any issues there?
15:33:46 ok, so let's move on
15:33:48 #topic fullstack/functional
15:34:05 I noticed (again) a functional test failure with db migration timeouts this week:
15:34:11 https://099638a2437d4a88b01b-b7b49a3857e4a968cf2542b58172db3c.ssl.cf2.rackcdn.com/798156/1/check/neutron-functional-with-uwsgi/6d49cc5/testr_results.html
15:34:21 ralonsoh this may be interesting for You :)
15:34:40 I saw it, yes
15:34:48 as I think You recently removed some skip-if-timeout decorator from those tests, right?
15:34:51 I'll keep an eye on the CI
15:34:56 thx
15:35:00 some weeks ago
15:35:15 for now I saw it only once so maybe it's nothing really serious, just a very overloaded node
15:35:15 and we don't execute those tests in parallel
15:35:21 but worth being aware of
15:35:24 agree
15:35:47 and that's all regarding functional tests
15:35:59 for fullstack I only saw this error with the mtu update
15:36:07 but that is already reported and in progress
15:36:12 so I think we can move on
15:36:14 #topic Tempest/Scenario
15:36:20 neutron-tempest-slow-py3 - again the nameservers issue
15:36:25 https://8a9b6f5914f633048b01-596b3ee18079cc72e9aa5b1ed231f9fc.ssl.cf5.rackcdn.com/795117/12/check/neutron-tempest-slow-py3/6518f8d/testr_results.html
15:36:46 again?
15:36:48 ralonsoh is Your fix related to that issue already merged?
15:36:56 let me check, I think so
15:37:08 I'll ping you later
15:37:16 ralonsoh yes, that was from yesterday so it's a fresh thing :)
15:37:38 ok, please let me know if we should reopen our bug for that
15:37:40 can be tomorrow
15:37:56 ah ok
15:38:05 but we use cirros there, right?
15:38:29 my patch introduced the use of resolvectl
15:38:29 not ubuntu?
15:38:35 that is not in cirros, only in the advanced images
15:38:40 yes, it's in ubuntu
15:39:06 in this test we use cirros
15:39:14 this is a tempest test
15:39:21 not neutron_tempest_plugin
15:39:25 yeah...
15:39:36 oh, and there we use only cirros...
15:39:41 I think this is just a problem with the OS
15:39:42 and in tempest there is no such "advanced image" thing
15:39:44 it's only in neutron-tempest-plugin
15:39:58 yes... now I realized that
15:40:35 I'll see if, in cirros, there is another way to check that
15:40:57 thx
15:41:11 and the reason behind using only cirros is to keep tempest fast?
15:41:12 ralonsoh to check if there is a better way to check dns nameservers in cirros
15:41:16 #action ralonsoh to check if there is a better way to check dns nameservers in cirros
15:41:34 lajoskatona yes, with ubuntu and without nested virt it's a nightmare
15:41:43 we had it like that in neutron_tempest_plugin jobs in the past
15:41:57 and that's why we added this advanced_image option to use ubuntu only for some tests
15:41:58 not for all
15:42:19 ok, thanks
15:43:21 ok, that's basically all I had for You today
15:43:37 generally CI looks pretty ok, I see many patches merged without rechecks recently
15:43:47 great job guys with all those CI issues :)
15:43:49 thank You!
15:44:23 do You have anything else You would like to discuss today?
15:44:30 or if not, we can finish a bit earlier today
15:45:31 I'm fine
15:46:02 me too
15:46:11 ok, thx for attending the meeting
15:46:13 bye
15:46:16 and have a great week
15:46:17 o/
15:46:21 #endmeeting
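
The #action above asks ralonsoh to find a better way to check DNS nameservers on cirros, since resolvectl (used by his earlier fix) only exists on the "advanced" Ubuntu image. Purely as an illustration of one possible direction, and not the actual neutron or tempest code, a lowest-common-denominator check could parse /etc/resolv.conf inside the guest; the `ssh_client` object and its `exec_command()` method are assumed here (modelled loosely on tempest-style remote clients) and may not match what the real test uses.

```python
# Illustrative sketch only: read the DNS nameservers configured inside a
# cirros guest without resolvectl. The ssh_client helper is assumed, not
# taken from the actual neutron/tempest code.

def get_guest_nameservers(ssh_client):
    """Return the nameserver IPs configured inside the guest.

    cirros has no resolvectl/systemd-resolved, but its DHCP client writes
    the offered nameservers to /etc/resolv.conf, so parsing that file works
    on both cirros and Ubuntu guests.
    """
    output = ssh_client.exec_command('cat /etc/resolv.conf')
    return [line.split()[1]
            for line in output.splitlines()
            if line.startswith('nameserver') and len(line.split()) > 1]


def assert_nameservers_configured(ssh_client, expected):
    # Fail if any expected nameserver is missing from the guest config.
    missing = set(expected) - set(get_guest_nameservers(ssh_client))
    assert not missing, 'nameservers %s not found in guest' % sorted(missing)
```

Whether this is what the tempest test should actually do is exactly what the action item is meant to investigate.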