15:00:23 <slaweq> #startmeeting neutron_ci
15:00:23 <opendevmeet> Meeting started Tue Dec 20 15:00:23 2022 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:23 <opendevmeet> The meeting name has been set to 'neutron_ci'
15:00:27 <slaweq> o/
15:00:29 <ralonsoh> hi
15:00:39 <mlavalle> o/
15:00:45 <lajoskatona> Hi
15:00:51 <slaweq> Grafana dashboard: https://grafana.opendev.org/d/f913631585/neutron-failure-rate?orgId=1
15:00:51 <slaweq> Please open now :)
15:01:13 <slaweq> lets wait a bit more for ykarel, bcafarel and others to join
15:01:21 <ykarel> o/
15:01:23 <bcafarel> o
15:01:29 <bcafarel> or o/ even
15:01:29 <slaweq> :)
15:01:39 <slaweq> ok, I think we can start as we have our usual attendance
15:01:54 <slaweq> #topic Actions from previous meetings
15:02:04 <slaweq> lajoskatona to check dvr lifecycle functional tests failures
15:02:24 <lajoskatona> I spent some time with it yesterday but without much result
15:02:53 <lajoskatona> I added some comments to the bug, let me find it
15:03:26 <lajoskatona> https://bugs.launchpad.net/neutron/+bug/1995031
15:04:28 <lajoskatona> It seems that this happens less often recently, and I can't reproduce it locally even with touching the code to have the exception seen in CI
15:04:42 <opendevreview> Fernando Royo proposed openstack/ovn-octavia-provider master: Fix listener provisioning_status after HM created/deleted  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/867974
15:04:42 <slaweq> lajoskatona I got one new occurrence from this Monday https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/868164
15:04:52 <slaweq> sorry, wrong link
15:04:53 <slaweq> https://d29e095404ca86d71246-7daaa7daeb1030fb2122101d5d422feb.ssl.cf2.rackcdn.com/863780/2/check/neutron-functional-with-uwsgi/9da600e/testr_results.html
15:04:56 <slaweq> this one is correct
15:04:56 <opendevreview> Fernando Royo proposed openstack/ovn-octavia-provider master: Uncouple HM status of member statuses  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/868092
15:05:46 <lajoskatona> on stable branches it seems to happen more often, as I checked on Monday
15:06:31 <lajoskatona> but actually that's all I have for this today
15:06:40 <lajoskatona> I will check this occurrence you linked here
15:07:35 <slaweq> lajoskatona I used such query:
15:07:35 <slaweq> https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-7d,to:now))&_a=(columns:!(_source),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',key:message,negate:!f,params:(query:'line%20626,%20in%20_dvr_router_lifecycle'),type:phrase),query:(match_phrase:(message:'line%20626,%20in%20_dvr_router_lifecycle')))),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:''),sort:!())
15:07:40 <slaweq> and it found this result for me
15:08:08 <slaweq> maybe that will be helpful for You :)
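For reference, the long dashboard URL above just encodes a `match_phrase` filter over the last 7 days. A minimal sketch of the equivalent OpenSearch query body (the traceback phrase comes from the URL above; the helper name is illustrative, not part of any OpenStack tooling):

```python
import json

def build_failure_query(phrase, days=7):
    """Build an OpenSearch DSL body matching `phrase` in log messages
    from the last `days` days (sketch of the dashboard URL's filter)."""
    return {
        "query": {
            "bool": {
                "must": [
                    # exact-phrase match on the "message" field, as in the URL
                    {"match_phrase": {"message": phrase}},
                    # time window equivalent to from:now-7d,to:now
                    {"range": {"@timestamp": {"gte": f"now-{days}d", "lte": "now"}}},
                ]
            }
        }
    }

body = build_failure_query("line 626, in _dvr_router_lifecycle")
print(json.dumps(body, indent=2))
```

This body could be POSTed to the cluster's `_search` endpoint to get the same hits as the dashboard link.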
15:08:19 <slaweq> ok, I will add action item for You for next meeting
15:08:32 <slaweq> #action lajoskatona to check dvr lifecycle functional tests failures
15:08:41 <slaweq> thx for working on this
15:08:46 <slaweq> now, lets move on to the next one
15:08:52 <slaweq> slaweq to update grafana dashboard
15:08:56 <slaweq> Patch https://review.opendev.org/c/openstack/project-config/+/868068 - merged
15:09:05 <slaweq> and the last one
15:09:10 <slaweq> ykarel to fix periodic job neutron-ovn-tempest-ipv6-only-ovs-master
15:09:28 <ykarel> #link https://review.opendev.org/c/openstack/neutron/+/867522
15:09:33 <ykarel> fixed with ^
15:09:45 <ykarel> but now we need https://review.opendev.org/c/openstack/neutron/+/868201 from ralonsoh
15:09:53 <ralonsoh> hehehe core was fixed
15:09:57 <ralonsoh> core OVN *
15:10:09 <ykarel> yeap
15:10:35 <slaweq> thx ykarel and ralonsoh :)
15:10:41 <bcafarel> pinning to master again, nice :)
15:11:01 <slaweq> approved
15:11:05 <ralonsoh> cool
15:11:26 <slaweq> and those were all the action items from last week
15:12:21 <slaweq> #topic Stable branches
15:12:26 <slaweq> bcafarel any updates?
15:13:09 <bcafarel> no CI recheck needed on recent backports, it looks like stable branches are ready for holidays too
15:13:23 <slaweq> LOL, good to know
15:14:31 <slaweq> ok, so lets move on to the next topic
15:14:35 <slaweq> #topic Stadium projects
15:14:49 <slaweq> I see that still networking-odl is failing this week
15:14:53 <slaweq> any updates lajoskatona ?
15:15:08 <lajoskatona> not that ready for holidays :-)
15:15:13 <slaweq> LOL
15:15:34 <lajoskatona> I started to push tox4 patches where we need them
15:16:24 <lajoskatona> I will check odl, and will collect the patches for tox4
15:16:36 <slaweq> thx lajoskatona
15:16:41 <lajoskatona> I lost the mail; when will we finally switch to tox4? This week?
15:16:48 <ralonsoh> tomorrow
15:16:50 <slaweq> #action lajoskatona to check networking-odl periodic failures
15:17:04 <lajoskatona> ralonsoh: cool
15:17:05 <ralonsoh> btw, I forgot to bring this topic
15:17:18 <ralonsoh> if you don't mind, let's talk about this later in this meeting
15:17:26 <lajoskatona> ok
15:17:58 <slaweq> ralonsoh sure
15:18:01 <lajoskatona> that's it for stadiums
15:18:15 <slaweq> ok, thx lajoskatona for updates
15:18:18 <slaweq> lets move on
15:18:20 <slaweq> #topic Grafana
15:18:32 <slaweq> dashboard should be updated already with current jobs
15:18:56 <slaweq> and this week it doesn't look bad
15:19:24 <slaweq> I also removed periodic jobs from it, as they only run once per day and I always check them in zuul anyway
15:20:10 <slaweq> anything else regarding grafana?
15:20:32 <mlavalle> looks good to me
15:20:53 <slaweq> ok, so next topic
15:20:58 <slaweq> #topic Rechecks
15:21:11 <slaweq> number of rechecks is better this and last week
15:21:44 <bcafarel> less people using CI resources with approaching end of year?
15:21:49 <slaweq> but in bare rechecks we had 12 out of 24 without any reason in last 7 days
15:21:55 <slaweq> which is a pretty high number
15:22:35 <ralonsoh> slaweq, does your script record those patches?
15:22:53 <slaweq> do You want to check which patches had those bare rechecks?
15:23:01 <ralonsoh> yes
15:24:17 <slaweq> ralonsoh yes, I can have that list
15:24:23 <ralonsoh> cool
15:25:27 <slaweq> ralonsoh I pasted list in the meeting agenda https://etherpad.opendev.org/p/neutron-ci-meetings
15:25:32 <ralonsoh> thanks
15:26:53 <slaweq> ok, lets move on
15:26:55 <slaweq> next topic
15:26:59 <slaweq> #topic fullstack/functional
15:27:16 <slaweq> those are still our most unstable jobs :/
15:27:22 <slaweq> first functional
15:28:06 <slaweq> first failure
15:28:09 <slaweq> neutron.tests.functional.agent.test_ovs_flows.ARPSpoofTestCase.test_arp_spoof_doesnt_block_normal_traffic
15:28:14 <slaweq> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_462/865470/1/check/neutron-functional-with-uwsgi/462ae7e/testr_results.html
15:29:01 <slaweq> this one I saw first time
15:29:44 <slaweq> anyone saw it before?
15:29:48 <ralonsoh> no
15:29:50 <slaweq> or it's just single failure for now?
15:30:04 <ykarel> me too seeing first time
15:30:39 <slaweq> ok, lets just keep an eye on it
15:30:48 <slaweq> next one
15:30:49 <slaweq> neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_floatingip_mac_bindings
15:30:59 <slaweq> this one failed at least twice this week
15:31:03 <slaweq> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_66f/866225/4/check/neutron-functional-with-uwsgi/66fa82b/testr_results.html
15:31:03 <slaweq> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fd7/864000/5/check/neutron-functional-with-uwsgi/fd79a11/testr_results.html
15:31:26 <slaweq> ralonsoh maybe You know that one already?
15:31:37 <slaweq> You were fixing a lot of ovn related functional tests I think
15:31:41 <ralonsoh> yeah, I'll check it.
15:32:03 <slaweq> thx
15:32:18 <slaweq> #action ralonsoh to check failures of the test_floatingip_mac_bindings functional test
15:32:27 <slaweq> next one
15:32:33 <slaweq> neutron.tests.functional.agent.l3.test_dvr_router.TestDvrRouter.test_dvr_update_gateway_port_with_no_gw_port_in_namespace
15:32:42 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9d5/866489/5/check/neutron-functional-with-uwsgi/9d5a735/testr_results.html
15:32:52 <slaweq> again, it's first time I see such failure
15:33:49 <slaweq> if You also didn't see it before, lets just keep an eye on it
15:34:29 <slaweq> and the last one
15:34:31 <slaweq> neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_get_datapath_id
15:34:36 <slaweq> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_08d/865697/5/gate/neutron-functional-with-uwsgi/08d8152/testr_results.html
15:34:51 <ralonsoh> I'll check last two
15:34:58 <slaweq> thx ralonsoh
15:35:06 <ralonsoh> in the dvr one I'll add an extra log for "device_exists_with_ips_and_mac"
15:35:21 <slaweq> #action @ralonsoh to check failed test_get_datapath_id functional test
15:35:28 <ralonsoh> and in the last one, I'll check why this is happening (we should have the same dpid)
15:35:39 <slaweq> #action ralonsoh to add an extra log for "device_exists_with_ips_and_mac"
15:35:47 <slaweq> ok, now fullstack
15:35:51 <ralonsoh> do you have launchpad bugs?
15:35:55 <ralonsoh> just to know
15:35:57 <slaweq> nope
15:35:59 <ralonsoh> ok
15:36:17 <slaweq> regarding fullstack, I have one issue to talk about
15:36:21 <slaweq> and it happens pretty often recently
15:36:27 <slaweq> so I opened LP bug for it already
15:36:31 <slaweq> https://bugs.launchpad.net/neutron/+bug/2000150
15:36:35 <slaweq> failure examples:
15:36:40 <slaweq> https://36ccf3feef294bb781ed-88f13fe1d2cd9b64afb4d1818492b226.ssl.cf1.rackcdn.com/866225/4/gate/neutron-fullstack-with-uwsgi/d5c1b53/testr_results.html
15:36:40 <slaweq> https://51698b00592e877a0023-3a9e3dcf5065ad1abf1d1a27741d8ba4.ssl.cf1.rackcdn.com/865822/2/check/neutron-fullstack-with-uwsgi/e813f7e/testr_results.html
15:36:48 <slaweq> any volunteer to check it?
15:37:01 <slaweq> if not, I will take a look
15:37:13 <ralonsoh> all of them in linux bridge?
15:37:20 <lajoskatona> same question from me
15:37:54 <slaweq> ralonsoh no, it happens also for openvswitch
15:37:59 <ralonsoh> ok then
15:38:00 <slaweq> ok, I will take a look at this bug
15:38:00 <opendevreview> Merged openstack/networking-sfc master: Remove note about migration from lib/neutron-legacy to lib/neutron  https://review.opendev.org/c/openstack/networking-sfc/+/868168
15:38:12 <slaweq> #action @slaweq to check bug https://bugs.launchpad.net/neutron/+bug/2000150
15:38:39 <slaweq> that's all from me for functional and fullstack
15:38:56 <slaweq> anything else on this topic for today?
15:38:59 <slaweq> or can we move on?
15:39:23 <lajoskatona> nothing from me
15:39:26 <ralonsoh> nothing from me
15:39:41 <slaweq> ok
15:39:43 <slaweq> #topic Periodic
15:39:55 <slaweq> here I have one thing to mention
15:40:00 <slaweq> neutron-ovn-tempest-ovs-master-centos-9-stream and neutron-ovn-tempest-ipv6-only-ovs-master still failing every day
15:40:09 <ralonsoh> should be solved now
15:40:16 <slaweq> but I think it's related to ralonsoh's revert of ykarel's patch :)
15:40:23 <slaweq> which we discussed earlier already
15:41:10 <slaweq> so I think we can move on to the last topic for today
15:41:15 <slaweq> #topic On Demand
15:41:31 <ralonsoh> tox4, if you don't mind
15:41:45 <lajoskatona> I mind but we have to cook with it :P
15:41:48 <slaweq> yeah
15:41:50 <ralonsoh> I've been trying to solve this for fullstack
15:41:51 <slaweq> go on
15:42:00 <ralonsoh> https://review.opendev.org/c/openstack/neutron/+/867554
15:42:17 <ralonsoh> I can't reproduce the CI locally and fullstack logs are not saved
15:42:23 <ralonsoh> so I don't know what is happening
15:42:34 <ralonsoh> but this is, for sure, something related to rootwrap
15:42:51 <slaweq> I will try to check it if I will have some time
15:43:22 <ralonsoh> for now, I've pushed https://review.opendev.org/c/openstack/neutron/+/867977
15:43:28 <ralonsoh> we need ^^ before tomorrow
15:43:35 <ralonsoh> not to break the whole neutron CI
15:44:01 <slaweq> thx
15:44:03 <slaweq> +2
15:44:12 <ralonsoh> thanks, that's all from me
15:44:17 <ykarel> ralonsoh, so the zuul-jobs patch will merge tomorrow?
15:44:28 <ralonsoh> I read it in the mail, I think it's tomorrow
15:44:31 <ralonsoh> 21st
15:44:42 <ykarel> okk
15:45:02 <ralonsoh> This is going to be uncapped on Dec 21[5]
15:45:07 <ralonsoh> [5] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031440.html
15:45:52 <slaweq> thx ralonsoh
15:47:23 <slaweq> one last thing from me for today
15:47:35 <slaweq> it's last CI meeting of the year
15:47:45 <slaweq> next week's meeting is cancelled already by ralonsoh
15:47:58 <slaweq> so happy holidays everyone and see You back on the meeting in 2023 :)
15:48:16 <ralonsoh> happy holidays!
15:48:26 <lajoskatona> everybody: happy holidays🎄
15:48:39 <bcafarel> happy holidays \o/
15:49:09 <ykarel> happy holidays all
15:49:12 <slaweq> that's all from me for today
15:49:21 <slaweq> thx for attending the meeting
15:49:26 <slaweq> and see You o/
15:49:29 <mlavalle> o/
15:49:30 <slaweq> #endmeeting