15:00:52 <slaweq> #startmeeting neutron_ci
15:00:53 <openstack> Meeting started Wed Oct 14 15:00:52 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:56 <slaweq> hi
15:00:57 <openstack> The meeting name has been set to 'neutron_ci'
15:01:16 <lajoskatona> o/
15:01:38 <ralonsoh> hi
15:01:55 <radez> o/
15:02:15 <bcafarel> o/
15:02:26 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:36 <slaweq> please open it now, before we start
15:02:43 <slaweq> #topic Actions from previous meetings
15:03:50 <slaweq> first one
15:03:52 <slaweq> bcafarel to update our grafana dashboards for stable branches
15:04:07 <bcafarel> I sent a review (looking for the link, I think it is merged)
15:04:37 <bcafarel> https://review.opendev.org/#/c/757102/ merged indeed!
15:05:05 <slaweq> thx bcafarel
15:05:25 <slaweq> ok, next one
15:05:34 <slaweq> slaweq to make neutron-tempest-plugin victoria template
15:05:38 <slaweq> Patch: https://review.opendev.org/#/c/756585/
15:05:50 <slaweq> it is missing +W :)
15:06:06 <slaweq> so if someone wants, feel free to approve it ;)
15:06:18 <ralonsoh> done
15:06:29 <slaweq> after it is merged I will propose a patch to neutron to use the new template
15:06:43 <slaweq> thx ralonsoh
15:06:45 <bcafarel> kind of related one https://review.opendev.org/#/c/756695/ drops *-master jobs in victoria (similar to other stable branches)
15:07:59 <slaweq> bcafarel: done
15:08:05 <bcafarel> thanks!
15:08:11 <slaweq> ok, next one
15:08:14 <slaweq> slaweq to propose patch to check console log before ssh to instance
15:08:22 <slaweq> and tbh I totally forgot about that one
15:08:24 <slaweq> sorry
15:08:42 <slaweq> but I added it to my task list for this week, so I will not forget about it again
15:08:48 <slaweq> #action slaweq to propose patch to check console log before ssh to instance
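For context, a minimal sketch of the idea behind that action item, assuming the usual tempest compute client (servers_client.get_console_output): poll the guest console log until the expected boot output shows up before attempting SSH, so SSH failures caused by a broken boot can be told apart from networking problems. The helper name and the pattern below are illustrative, not the actual patch:

    import time

    def wait_for_console_pattern(servers_client, server_id, pattern,
                                 timeout=300, interval=10):
        # Poll the nova console log until `pattern` shows up or we time out;
        # the returned log is also useful to attach to the failure message.
        start = time.time()
        while time.time() - start < timeout:
            output = servers_client.get_console_output(server_id)['output']
            if pattern in output:
                return output
            time.sleep(interval)
        raise AssertionError('pattern %r not found in console log within %ss'
                             % (pattern, timeout))

    # e.g. in a scenario test, before trying SSH:
    # wait_for_console_pattern(self.os_primary.servers_client,
    #                          server['id'], 'login:')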
15:09:23 <slaweq> ok, lets move on
15:09:26 <slaweq> #topic Switch to Ubuntu Focal
15:09:36 <slaweq> I checked https://etherpad.opendev.org/p/neutron-victoria-switch_to_focal today
15:09:46 <slaweq> and the only missing jobs are 2 jobs from networking-midonet
15:10:10 <slaweq> but those jobs are proposed to keep running on Bionic https://review.opendev.org/#/c/754922/ and it is explained there that there is some issue on Focal
15:10:22 <slaweq> so I think that we can close this topic for now and consider it as done
15:10:25 <slaweq> wdyt?
15:10:41 <bcafarel> yes, I think we knew it would be problematic to move them to Focal (and there is good progress compared to the previous state)
15:10:52 <bcafarel> and for the rest it seems done to me indeed \o/
15:10:55 <ralonsoh> I agree
15:11:21 <slaweq> ahh, and one more cleanup related to that https://review.opendev.org/758103
15:11:27 <slaweq> please take a look :)
15:11:38 <slaweq> I didn't find the definition of those jobs anywhere
15:11:40 <slaweq> :)
15:12:05 <bcafarel> nice catch :)
15:12:46 <slaweq> thx bcafarel
15:12:53 <lajoskatona> I have another one related to bgpvpn: https://review.opendev.org/755719
15:13:12 <lajoskatona> to mark some tests as unstable for now, and after that we can make that job voting again
15:13:42 <slaweq> I already +2 it :)
15:13:45 <slaweq> thx lajoskatona
15:14:11 <lajoskatona> slaweq: thanks
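As a side note, a minimal sketch of the "mark as unstable" approach mentioned above, assuming the unstable_test decorator from neutron_tempest_plugin (neutron.tests.base carries an equivalent for in-tree tests); the test case and bug number below are made-up placeholders, not the actual bgpvpn test touched in https://review.opendev.org/755719:

    import testtools

    from neutron_tempest_plugin.common import utils

    class ExampleScenarioTest(testtools.TestCase):

        @utils.unstable_test("bug 0000000: placeholder for a known flaky test")
        def test_sometimes_flaky(self):
            # If the body raises, the decorator turns the failure into a skip
            # that references the bug, so the job can stay (or become) voting
            # while the root cause is investigated.
            self.assertTrue(True)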
15:14:28 <slaweq> ok, next topic
15:14:34 <slaweq> #topic standardize on zuul v3
15:14:46 <slaweq> I think that we can also consider it as done and remove from the agenda
15:14:49 <slaweq> do You agree?
15:14:57 <ralonsoh> I think so
15:15:05 <lajoskatona> +1
15:15:24 <bcafarel> yay for topics completion
15:15:29 <slaweq> ok, great :)
15:15:38 * slaweq I need 2 minutes break
15:16:15 <ralonsoh> (he is preparing next week's PTG! hehehe)
15:16:34 <bcafarel> lajoskatona: just commented on https://review.opendev.org/758103 please check (before it is merged)
15:16:48 <lajoskatona> bcafarel: I'll check it, thanks
15:17:41 <ralonsoh> let's remove the +W then
15:18:00 <tosky> just as a reminder about the zuul v3 migration: if you want, you can propose backports too
15:18:25 <tosky> I'm not sure I will have time for that in the next few weeks, but at least ussuri and train could benefit from being fully zuulv3
15:18:29 <tosky> EOF :)
15:18:35 <bcafarel> :)
15:18:50 <slaweq> tosky: thx for reminder
15:18:55 <tosky> and thanks again for the push! Those jobs were not too easy to tackle
15:18:56 <slaweq> I will try to do them
15:18:59 <bcafarel> tosky: and hopefully CI is still OK in most stadium projects up to train (for older branches I am not so sure)
15:20:58 <slaweq> ok, lets move on
15:21:01 <slaweq> #topic Stadium projects
15:21:15 <slaweq> do You have anything else regarding our stadium projects for today?
15:21:32 <bcafarel> nothing from me
15:22:50 <slaweq> ok, so lets move on
15:22:52 <slaweq> #topic Stable branches
15:23:04 <slaweq> anything related to the stable branches for today?
15:23:30 <bcafarel> nothing either
15:23:43 <bcafarel> I have a few reviews in my backlog to check but I think others went fine
15:23:59 <slaweq> afaict it works pretty ok for stable branches recently
15:24:42 <slaweq> ok, so lets move on
15:24:44 <slaweq> #topic Grafana
15:25:00 <slaweq> http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:26:21 <slaweq> overall I don't see anything "special" there - it works "as usual" :)
15:26:37 <ralonsoh> a bit buggy during the last weeks
15:26:37 <bcafarel> looks like we need an update for functional/fullstack? graphs are empty
15:26:44 <ralonsoh> but nothing special to point out
15:26:53 <slaweq> bcafarel: yes, I thought about it
15:27:11 <slaweq> we should remove those 2 jobs from the graph as they were removed from our queues some time ago
15:27:21 <bcafarel> ah yes with the -uwsgi move
15:27:29 <slaweq> exactly
15:27:41 <slaweq> and we need to add neutron-ovn-grenade job to the graph
15:27:49 <slaweq> do You want to do that?
15:28:10 <bcafarel> sure! should be smaller than the stable branches one :)
15:28:15 <slaweq> yes
15:28:17 <slaweq> :)
15:28:19 <slaweq> thx a lot
15:28:29 <slaweq> #action bcafarel to update grafana dashboard for master branch
15:29:42 <slaweq> anything else regarding grafana?
15:31:10 <bcafarel> apparently no
15:31:30 <slaweq> ok, so lets move on
15:31:37 <slaweq> #topic fullstack/functional
15:31:49 <slaweq> I have one issue in fullstack test for today
15:31:55 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_095/756678/5/check/neutron-fullstack-with-uwsgi/095d3c9/testr_results.html
15:32:14 <slaweq> it happened only once but it seems similar to the other issue which lajoskatona was trying to address recently
15:32:40 <slaweq> this time it couldn't associate a FIP to the port because the port was already associated with a FIP from the same network
15:33:06 <slaweq> I think that this was simply slow processing of the first request and then it was retried and failed
15:33:13 <ralonsoh> I think that happened before, a race condition between tests, the second one using the first one's FIP
15:33:28 <slaweq> ralonsoh: that is not possible in fullstack tests
15:33:38 <slaweq> because each test has got its own neutron server
15:33:42 <ralonsoh> you are right, this is fullstack
15:33:45 <lajoskatona> yeah I recently learned that from slaweq :-)
15:33:46 <slaweq> (at least it shouldn't be possible)
15:34:00 <ralonsoh> not tempest
15:34:18 <ralonsoh> I can review this tomorrow morning
15:35:05 <slaweq> I just wanted to point to this issue here for lajoskatona :)
15:35:20 <slaweq> so he can take a look and maybe understand what happened exactly there
15:35:47 <slaweq> but tbh I don't think we will be able to do anything with that if that was just a slow node
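To illustrate the isolation slaweq describes, here is a rough sketch of how a fullstack test builds its own environment (each test spawns its own neutron server and agents, so FIPs cannot leak between tests); the class and argument names are from memory of neutron.tests.fullstack and may not match the current code exactly:

    from neutron.tests.fullstack import base
    from neutron.tests.fullstack.resources import environment

    class TestExampleConnectivity(base.BaseFullStackTestCase):

        def setUp(self):
            # Every test case gets a dedicated environment: its own neutron
            # server process, DB and agents, so resources such as FIPs are
            # never shared with other tests running in parallel.
            host_descriptions = [environment.HostDescription(l3_agent=True)
                                 for _ in range(2)]
            env = environment.Environment(
                environment.EnvironmentDescription(network_type='vxlan'),
                host_descriptions)
            super(TestExampleConnectivity, self).setUp(env)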
15:37:03 <slaweq> and that's all regarding fullstack/functional jobs
15:37:14 <lajoskatona> slaweq: sure I can check
15:37:33 <slaweq> in functional jobs the most frequent issue is with test_agent, and it is already addressed by some patches IIRC
15:37:36 <slaweq> so that should be good
15:37:39 <slaweq> thx lajoskatona :)
15:37:49 <slaweq> next topic then
15:37:51 <slaweq> #topic Tempest/Scenario
15:38:12 <slaweq> here I have 2 things
15:38:31 <slaweq> 1. I proposed patch https://review.opendev.org/758116 to disable dstat service in ovn multinode jobs
15:38:41 <slaweq> as it seems that it is causing many failures of those jobs
15:38:57 <slaweq> thx ralonsoh for +2 :)
15:39:02 <ralonsoh> yw
15:39:13 <slaweq> lajoskatona: if You can take a look also, would be great :)
15:39:28 <bcafarel> hmm, where did I recently see an issue with dstat causing failures?
15:39:43 <bcafarel> it may have been the same ovn issue, nvm
15:40:18 <slaweq> and the second issue is in qos test
15:40:25 <slaweq> https://1bf1346bda7ee67d9a02-58d93d7929b8e264fef7aa823e8ca608.ssl.cf5.rackcdn.com/734889/5/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid/d140a79/testr_results.html
15:40:37 <slaweq> it is basically ssh failure
15:40:46 <slaweq> but this time it seems that even metadata wasn't working
15:40:58 <slaweq> so maybe there is someone who wants to investigate it more
15:41:23 <ralonsoh> I can take a look
15:41:24 <slaweq> there is data about network configuration from the node there
15:41:30 <slaweq> maybe it will be useful somehow
15:42:21 <slaweq> and that reminds me of my patch https://review.opendev.org/#/c/725526/
15:42:35 <slaweq> which may help in debugging such issues in the future :)
15:43:29 <slaweq> from other things, neutron-grenade-ovn is failing 100% of the time, but I will check that as I just moved this job to zuul v3 syntax recently :)
15:43:45 <slaweq> #action slaweq to check failing neutron-grenade-ovn job
15:43:55 <slaweq> and that's basically all from me for today
15:43:59 <bcafarel> it may be some recent change, I saw it was green in the victoria backport
15:44:21 <slaweq> bcafarel: yes, I saw it green in victoria too
15:44:54 <slaweq> do You have anything else regarding CI for today?
15:45:05 <slaweq> if not, I will give You a few minutes back today
15:46:13 <bcafarel> all good for me
15:46:21 <ralonsoh> same here
15:46:35 <slaweq> ok
15:46:40 <slaweq> ahh, one more announcement
15:46:54 <slaweq> next week I have a workshop on OpenInfra during the time of this meeting
15:47:02 <slaweq> so I will cancel this meeting
15:47:19 <slaweq> and then the week after next is the PTG, so I will cancel the CI meeting too
15:47:37 <slaweq> if there is anything urgent we can sync on the neutron channel
15:47:43 <slaweq> ok for You?
15:47:55 <bcafarel> sounds good!
15:48:05 <ralonsoh> perfect
15:48:09 <lajoskatona> perfect
15:48:43 <slaweq> thx for attending the meeting
15:48:49 <slaweq> see You online :)
15:48:51 <slaweq> o/
15:48:55 <slaweq> #endmeeting