15:00:52 #startmeeting neutron_ci
15:00:53 Meeting started Wed Oct 14 15:00:52 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:56 hi
15:00:57 The meeting name has been set to 'neutron_ci'
15:01:16 o/
15:01:38 hi
15:01:55 o/
15:02:15 o/
15:02:26 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:36 please open it now, before we start
15:02:43 #topic Actions from previous meetings
15:03:50 first one
15:03:52 bcafarel to update our grafana dashboards for stable branches
15:04:07 I sent a review (looking for the link, I think it is merged)
15:04:37 https://review.opendev.org/#/c/757102/ merged indeed!
15:05:05 thx bcafarel
15:05:25 ok, next one
15:05:34 slaweq to make neutron-tempest-plugin victoria template
15:05:38 Patch: https://review.opendev.org/#/c/756585/
15:05:50 it is missing +W :)
15:06:06 so if someone wants, feel free to approve it ;)
15:06:16 sone
15:06:18 done
15:06:29 after it is merged I will propose a patch to neutron to use that new template
15:06:43 thx ralonsoh
15:06:45 kind of related: https://review.opendev.org/#/c/756695/ drops *-master jobs in victoria (similar to other stable branches)
15:07:59 bcafarel: done
15:08:05 thanks!
15:08:11 ok, next one
15:08:14 slaweq to propose patch to check console log before ssh to instance
15:08:22 and tbh I totally forgot about that one
15:08:24 sorry
15:08:42 but I added it to my task list for this week, so I will not forget about it again
15:08:48 #action slaweq to propose patch to check console log before ssh to instance
15:09:23 ok, let's move on
15:09:26 #topic Switch to Ubuntu Focal
15:09:36 I checked https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal today
15:09:46 and the only missing jobs are 2 jobs from networking-midonet
15:10:10 but those jobs are now proposed to run on Bionic in https://review.opendev.org/#/c/754922/ and it is explained there that there is some issue on Focal
15:10:22 so I think that we can close this topic for now and consider it done
15:10:25 wdyt?
15:10:41 yes, I think we knew it would be problematic to move them to Focal (and there is good progress compared to the previous state)
15:10:52 and for the rest it seems done to me indeed \o/
15:10:55 I agree
15:11:21 ahh, and one more cleanup related to that: https://review.opendev.org/758103
15:11:27 please take a look :)
15:11:38 I didn't find a definition of those jobs anywhere
15:11:40 :)
15:12:05 nice catch :)
15:12:46 thx bcafarel
15:12:53 I have another one related to bgpvpn: https://review.opendev.org/755719
15:13:12 to make some tests unstable for now, and after that we can make that job voting again
15:13:42 I already +2'd it :)
15:13:45 thx lajoskatona
15:14:11 slaweq: thanks
15:14:28 ok, next topic
15:14:34 #topic standardize on zuul v3
15:14:46 I think that we can also consider it done and remove it from the agenda
15:14:49 do You agree?
15:14:57 I think so
15:15:05 +1
15:15:24 yay for topics completion
15:15:29 ok, great :)
15:15:38 * slaweq I need 2 minutes break
15:16:15 (he is preparing next week's PTG! hehehe)
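For context on the bgpvpn patch (https://review.opendev.org/755719) discussed above: in tempest-based suites, marking a flaky test as unstable is commonly done with the unstable_test decorator from tempest.lib, which turns a failure into a skip referencing a bug so the job can become voting again. This is only a minimal sketch under that assumption; the class name, base class, test name and bug number are placeholders, not the content of that patch.

    import testtools

    from tempest.lib import decorators


    class TestBgpvpnConnectivity(testtools.TestCase):
        # Stand-in for the real scenario base class used by the bgpvpn tests.

        @decorators.unstable_test(bug="0000000")  # placeholder bug reference
        def test_bgpvpn_basic_connectivity(self):
            # Stand-in body; the real test exercises BGPVPN connectivity.
            # If this raised, unstable_test would convert the failure into a
            # skip that mentions the bug instead of failing the whole job.
            self.assertTrue(True)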
15:16:34 lajoskatona: just commented on https://review.opendev.org/758103 please check (before it is merged)
15:16:48 bcafarel: I'll check it, thanks
15:17:41 let's remove the +W then
15:18:00 just as a reminder about the zuul v3 migration: if you want, you can propose backports too
15:18:25 I'm not sure I will have time for that in the next weeks, but at least ussuri and train could benefit from being fully zuulv3
15:18:29 EOF :)
15:18:35 :)
15:18:50 tosky: thx for the reminder
15:18:55 and thanks again for the push! Those jobs were not too easy to tackle
15:18:56 I will try to do them
15:18:59 tosky: and hopefully CI is still OK in most stadium projects up to train (older branches I am not so sure about)
15:20:58 ok, let's move on
15:21:01 #topic Stadium projects
15:21:15 do You have anything else regarding our stadium projects for today?
15:21:32 nothing from me
15:22:50 ok, so let's move on
15:22:52 #topic Stable branches
15:23:04 anything related to the stadium branches for today?
15:23:30 *stable - nothing either
15:23:43 I have a few reviews in my backlog to check but I think the others went fine
15:23:59 afaict it has been working pretty ok for stable branches recently
15:24:42 ok, so let's move on
15:24:44 #topic Grafana
15:25:00 http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:26:21 overall I don't see anything "special" there - it works "as usual" :)
15:26:37 a bit buggy during the last weeks
15:26:37 looks like we need an update for functional/fullstack? graphs are empty
15:26:44 but nothing special to point out
15:26:53 bcafarel: yes, I thought about it
15:27:11 we should remove those 2 jobs from the graph as they were removed from our queues some time ago
15:27:21 ah yes, with the -uwsgi move
15:27:29 exactly
15:27:41 and we need to add the neutron-ovn-grenade job to the graph
15:27:49 do You want to do that?
15:28:10 sure! should be smaller than the stable branches one :)
15:28:15 yes
15:28:17 :)
15:28:19 thx a lot
15:28:29 #action bcafarel to update grafana dashboard for master branch
15:29:42 anything else regarding grafana?
15:31:10 apparently no
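On the dashboard update action above: the Grafana graphs are generated from zuul's per-job result counters stored in Graphite, so adding or removing a job on the dashboard means adding or removing its metric targets. Purely as an illustration of what those panels plot, here is a sketch that computes a job's failure rate straight from the Graphite render API; the endpoint and the metric path are assumptions and may not match the exact statsd paths the real dashboard uses.

    import requests

    # Assumed endpoint and metric path -- illustrative only, not taken from
    # the actual dashboard definition.
    GRAPHITE_RENDER = 'https://graphite.opendev.org/render'
    METRIC = ('stats_counts.zuul.tenant.openstack.pipeline.check.'
              'project.opendev_org.openstack_neutron.master.job.%(job)s.%(result)s')


    def job_failure_rate(job, period='-7days'):
        """Return the job's failure percentage over the given period."""
        def count(result):
            params = {
                'target': METRIC % {'job': job, 'result': result},
                'from': period,
                'format': 'json',
            }
            series = requests.get(GRAPHITE_RENDER, params=params).json()
            if not series:
                return 0
            # datapoints are [value, timestamp] pairs; value is None when no data
            return sum(v for v, _ in series[0]['datapoints'] if v)

        failures = count('FAILURE')
        total = failures + count('SUCCESS')
        return 100.0 * failures / total if total else 0.0


    print('neutron-ovn-grenade: %.1f%% failures' % job_failure_rate('neutron-ovn-grenade'))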
15:31:30 ok, so let's move on
15:31:37 #topic fullstack/functional
15:31:49 I have one issue in fullstack tests for today
15:31:55 https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_095/756678/5/check/neutron-fullstack-with-uwsgi/095d3c9/testr_results.html
15:32:14 it happened only once but it seems similar to the other issue which lajoskatona was trying to address recently
15:32:40 this time it couldn't associate a FIP to the port because the port was already associated with a FIP from the same network
15:33:06 I think that this was simply slow processing of the first request, and then it was retried and failed
15:33:13 I think that happened before, a race condition between tests, the second one using the first one's FIP
15:33:28 ralonsoh: that is not possible in fullstack tests
15:33:38 because each test has got its own neutron server
15:33:42 you are right, this is fullstack
15:33:45 yeah, I recently learned that from slaweq :-)
15:33:46 (at least it shouldn't be possible)
15:34:00 not tempest
15:34:18 I can review this tomorrow morning
15:35:05 I just wanted to point to this issue here for lajoskatona :)
15:35:20 so he can take a look and maybe understand what exactly happened there
15:35:47 but tbh I don't think we will be able to do anything with that if it was just a slow node
15:37:03 and that's all regarding fullstack/functional jobs
15:37:14 slaweq: sure, I can check
15:37:33 in functional we most often have an issue with test_agent and it is already addressed by some patches IIRC
15:37:36 so that should be good
15:37:39 thx lajoskatona :)
15:37:49 next topic then
15:37:51 #topic Tempest/Scenario
15:38:12 here I have 2 things
15:38:31 1. I proposed patch https://review.opendev.org/758116 to disable the dstat service in ovn multinode jobs
15:38:41 as it seems that it is causing many failures of those jobs
15:38:57 thx ralonsoh for +2 :)
15:39:02 yw
15:39:13 lajoskatona: if You can take a look also, that would be great :)
15:39:28 hmm, where did I see an issue with dstat causing a failure recently?
15:39:43 it may have been the same ovn issue, nvm
15:40:18 and the second issue is in a qos test
15:40:25 https://1bf1346bda7ee67d9a02-58d93d7929b8e264fef7aa823e8ca608.ssl.cf5.rackcdn.com/734889/5/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/d140a79/testr_results.html
15:40:37 it is basically an ssh failure
15:40:46 but this time it seems that even metadata wasn't working
15:40:58 so maybe there is someone who wants to investigate it more
15:41:23 I can take a look
15:41:24 there is data about the network configuration from the node there
15:41:30 maybe it will be useful somehow
15:42:21 and that reminds me of my patch https://review.opendev.org/#/c/725526/
15:42:35 which may help in debugging such issues in the future :)
15:43:29 from other things, neutron-grenade-ovn is failing 100% of the time, but I will check that as I just moved this job to zuulv3 syntax recently :)
15:43:45 #action slaweq to check failing neutron-grenade-ovn job
15:43:55 and that's basically all from me for today
15:43:59 it may be some recent change, I saw it was green in the victoria backport
15:44:21 bcafarel: yes, I saw it green in victoria too
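Related to the "check console log before ssh" action item from earlier and the ssh/metadata failure above: a minimal sketch of what such a check could look like in a tempest-based scenario test, assuming a compute ServersClient is available; the helper name, timeouts and the 'login:' marker (which matches the CirrOS guests used in the gate) are illustrative, not the actual patch.

    import time

    from tempest.lib import exceptions


    def wait_for_guest_login_prompt(servers_client, server_id,
                                    timeout=300, interval=10):
        """Poll the instance console log until a login prompt shows up.

        Doing this before trying SSH makes it possible to tell a guest that
        never finished booting apart from a real networking problem.
        """
        start = time.time()
        while time.time() - start < timeout:
            output = servers_client.get_console_output(server_id)['output']
            if output and 'login:' in output:
                return output
            time.sleep(interval)
        raise exceptions.TimeoutException(
            'Server %s did not reach a login prompt within %s seconds' %
            (server_id, timeout))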
15:44:54 do You have anything else regarding CI for today?
15:45:05 if not, I will give You a few minutes back today
15:46:13 all good for me
15:46:21 same here
15:46:35 ok
15:46:40 ahh, one more announcement
15:46:54 next week I have a workshop at OpenInfra during the time of this meeting
15:47:02 so I will cancel this meeting
15:47:19 and the week after next is the PTG, so I will cancel the CI meeting then too
15:47:37 if there is anything urgent we can sync on the neutron channel
15:47:43 ok for You?
15:47:55 sounds good!
15:48:05 perfect
15:48:09 perfect
15:48:43 thx for attending the meeting
15:48:49 see You online :)
15:48:51 o/
15:48:55 #endmeeting