16:00:18 <slaweq> #startmeeting neutron_ci
16:00:19 <openstack> Meeting started Tue Oct  1 16:00:18 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:21 <slaweq> hi
16:00:23 <openstack> The meeting name has been set to 'neutron_ci'
16:01:06 <ralonsoh> hi
16:01:15 <njohnston> o/
16:01:34 <bcafarel> o/
16:02:07 <slaweq> let's start
16:02:14 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:02:20 <slaweq> (for later use)
16:02:26 <slaweq> #topic Actions from previous meetings
16:02:37 <slaweq> njohnston to update python 3 stadium etherpad for bagpipe
16:03:13 <njohnston> I did not get to that; as we talked about yesterday I got stuck debugging the tempest jobs
16:03:40 <slaweq> njohnston: no problem
16:03:58 <slaweq> as I said yesterday, I will check this failing tempest job this week
16:04:13 <slaweq> and I can then update this etherpad if you want
16:04:37 <njohnston> That would be great, thanks
16:04:45 <slaweq> sure, np
16:05:18 <slaweq> #action slaweq to fix networking-bagpipe python3 scenario job and update etherpad
16:05:29 <slaweq> ok, next one
16:05:32 <slaweq> slaweq to investigate issue with neutron.tests.functional.agent.linux.test_iptables.IptablesManagerTestCase.test_tcp_output
16:05:51 <slaweq> I looked into this but I didn't find anything clearly wrong in the logs
16:06:02 <slaweq> it also happened only this one time in the last 30 days
16:06:15 <slaweq> so I gave up on this, at least for now
16:06:40 <slaweq> if it happens more often, I will report a bug and work on it again
16:07:18 <slaweq> next one:
16:07:28 <slaweq> ralonsoh to take a look at failing neutron.tests.functional.agent.linux.test_bridge_lib.FdbInterfaceTestCase.test_add_delete(no_namespace)
16:07:57 <ralonsoh> slaweq, yes, the problem is solved here
16:08:26 <ralonsoh> (I'll find it)
16:08:37 <slaweq> ralonsoh: sure :)
16:08:46 <slaweq> the most important thing is that it's solved now
16:09:02 <ralonsoh> yes, the problem was in the VXLAN interfaces
16:09:11 <bcafarel> ah the tests reusing the same ID?
16:09:17 <ralonsoh> you can't have more than one vxlan interface in the same VTEP with the same ID
16:09:19 <ralonsoh> yes
16:09:34 <ralonsoh> please go on, I'll find the patch
16:09:39 <slaweq> ok, thx a lot
16:09:42 <slaweq> so next one
16:09:43 <bcafarel> https://review.opendev.org/#/c/684805/
16:09:44 <slaweq> slaweq to ask rubasov if he can check neutron.tests.fullstack.test_agent_bandwidth_report.TestPlacementBandwidthReport.test_configurations_are_synced_towards_placement
16:10:02 <ralonsoh> bcafarel, exactly, thanks!
16:10:07 <slaweq> thx bcafarel for the link
16:10:13 <slaweq> and thx ralonsoh for the fix :)
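(A minimal sketch of the constraint ralonsoh describes above, for context: the kernel refuses a second VXLAN device with the same VNI on the same VTEP, which matches bcafarel's guess that the tests were reusing the same ID. The interface names, VNI, and dstport below are illustrative, not taken from the linked patch, and the commands need root.)

```python
import subprocess

# First VXLAN device: VNI 100, underlay bound to "lo" (illustrative values).
subprocess.run(
    ["ip", "link", "add", "vxlan-a", "type", "vxlan",
     "id", "100", "dev", "lo", "dstport", "4789"],
    check=True,
)

# A second device with the same VNI on the same VTEP is rejected by the
# kernel with EEXIST ("RTNETLINK answers: File exists"), so with
# check=True this raises subprocess.CalledProcessError.
subprocess.run(
    ["ip", "link", "add", "vxlan-b", "type", "vxlan",
     "id", "100", "dev", "lo", "dstport", "4789"],
    check=True,
)
```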
16:10:41 <slaweq> regarding the last action item, lajoskatona looked into this
16:11:00 <slaweq> but he also couldn't reproduce the issue, and it hasn't happened again since last week
16:11:14 <slaweq> so for now we are not working on this anymore
16:11:26 <slaweq> and the last one from last week:
16:11:29 <slaweq> slaweq to investigate neutron.tests.fullstack.test_l3_agent.TestLegacyL3Agent.test_mtu_update
16:11:51 <slaweq> a very similar case: I couldn't find anything in the logs clearly indicating the issue
16:12:06 <slaweq> and it hasn't happened again according to logstash
16:12:18 <slaweq> so I didn't spend more time on it for now
16:12:41 <slaweq> ok, that's all actions from last week which I have today
16:12:46 <slaweq> any questions/comments?
16:13:12 <ralonsoh> I don't think we should spend more time on this test
16:13:15 <ralonsoh> (for now)
16:13:37 <ralonsoh> and, btw, we have a patch to force the MTU status
16:13:43 <ralonsoh> so maybe we can wait
16:13:49 <ralonsoh> (force ==> not null)
16:13:57 <slaweq> yes, I remember that patch
16:14:20 <slaweq> I will not spend more time on it, I have other things to do
16:14:43 <slaweq> let's hope it won't happen anymore ;)
16:15:20 <bcafarel> :) yes especially with the not null MTU patch getting close
16:15:38 <slaweq> ok, let's move on
16:15:41 <slaweq> next topic
16:15:43 <slaweq> #topic Stadium projects
16:15:49 <slaweq> Python 3 migration
16:15:57 <slaweq> etherpad: https://etherpad.openstack.org/p/neutron_stadium_python3_status
16:16:08 <njohnston> we got an update from yamamoto yesterday, and we already talked about bagpipe
16:16:15 <slaweq> yep
16:16:53 <njohnston> I don't see lajoskatona on this channel
16:17:01 <slaweq> and I think that lajoskatona is slowly working on odl
16:17:31 <slaweq> My biggest concern for now related to this topic is midonet
16:17:44 <slaweq> as it doesn't even support Ubuntu Bionic yet
16:18:04 <slaweq> hi lajoskatona
16:18:08 <lajoskatona> Hi
16:18:22 <njohnston> thanks for joining lajoskatona
16:18:41 <lajoskatona> No problem
16:18:43 <njohnston> so we were talking about https://etherpad.openstack.org/p/neutron_stadium_python3_status
16:18:56 <njohnston> how do things fare with networking-odl?
16:19:16 <lajoskatona> Just a moment, I'll collect my thoughts :-)
16:19:21 <njohnston> of course!
16:19:58 <lajoskatona> So: The voting jobs are running on python3 now
16:20:26 <lajoskatona> With the non-voting ones we have issues integrating ODL with the latest OpenStack
16:20:41 <njohnston> We had defined the non-voting jobs as being out of scope anyway
16:21:10 <njohnston> so if all the voting jobs are on python3 then networking-odl is done
16:21:20 <slaweq> yes, so if we have voting jobs running on py3 then we can mark it as done
16:21:21 <lajoskatona> njohnston: yes, we discussed this with slaweq a few days ago, but anyway
16:21:42 <slaweq> lajoskatona: sorry, I may have forgotten :/
16:22:02 <lajoskatona> slaweq, njohnston: ok, and I'll continue with the local team to get the other jobs working, and perhaps voting again later
16:22:25 <lajoskatona> slaweq: it was perhaps last week, so a long, long time ago ;)
16:22:33 <slaweq> lajoskatona: sure, so for now I think we can mark networking-odl as done in the etherpad
16:22:44 <slaweq> lajoskatona: LOL, yes
16:22:46 <lajoskatona> slaweq: thanks
16:22:57 <slaweq> lajoskatona: thx for working on this and for the update
16:23:25 <lajoskatona> slaweq: I am happy to make things work :-)
16:23:27 <njohnston> thanks very much lajoskatona!
16:23:53 <slaweq> ok, let's move on
16:23:57 <slaweq> next is
16:24:01 <slaweq> tempest-plugins migration
16:24:08 <slaweq> Etherpad: https://etherpad.openstack.org/p/neutron_stadium_move_to_tempest_plugin_repo
16:24:34 <slaweq> the neutron-dynamic-routing patch for "phase1" https://review.opendev.org/#/c/652099 was finally merged
16:24:39 <slaweq> thx tidwellr for this
16:25:09 <slaweq> so now we only need to delete those tests from the neutron-dynamic-routing repo and use the job defined in neutron-tempest-plugin
16:25:24 <slaweq> except that we still need vpnaas, which is on mlavalle
16:25:35 <slaweq> and I know that he is willing to help with this
16:26:08 <slaweq> that's all from my side
16:26:21 <slaweq> any other questions/comments regarding stadium projects?
16:27:12 <slaweq> ok, so let's move on
16:27:17 <slaweq> #topic Grafana
16:27:23 <slaweq> #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:28:32 <slaweq> tbh for me grafana looks too good this week ;)
16:28:46 <ralonsoh> better than previous weeks
16:28:52 <slaweq> much better
16:29:23 <slaweq> and maybe because of that we also released RC1 quite fast, without big problems
16:29:28 <njohnston> the ironic and openstacksdk cross-gating jobs look totally healthy now, should we mark them as voting?
16:29:44 <slaweq> njohnston: I'm not sure
16:30:27 <slaweq> we can discuss it, but IIRC in Denver we agreed to add jobs from other projects as non-voting only
16:30:49 <njohnston> ok, that's fine
16:31:01 <slaweq> but of course it would be better to have them voting, as that is much more effective at preventing breaking changes
16:31:31 <slaweq> on the other hand, our gating becomes more dependent on other projects
16:32:15 <slaweq> IMO let's keep them as non-voting for now
16:32:45 <slaweq> if another team asks to promote such a job to voting, we can talk about it
16:32:50 <slaweq> what do you think?
16:33:26 <njohnston> +1
16:33:28 <ralonsoh> +1
16:33:47 <slaweq> ok, let's move on then
16:34:05 <slaweq> I don't have any new issues for functional/fullstack jobs now
16:34:19 <slaweq> so let's go directly to
16:34:23 <slaweq> #topic Tempest/Scenario
16:34:39 <slaweq> regarding scenario jobs, I have only one info to share
16:35:12 <slaweq> there was an issue in networking-ovn which caused random failures in the trunk_subport_lifecycle test
16:35:24 <slaweq> but it should be fixed once https://review.opendev.org/#/c/685206/ is merged
16:35:54 <slaweq> so if you see that test failing in networking-ovn-tempest-dsvm-ovs-release, that's the reason
16:36:04 <slaweq> and that's all from my side
16:36:13 <slaweq> the CI is really much better this week imo
16:36:29 <slaweq> do you have anything else to talk about?
16:36:35 <ralonsoh> no
16:36:50 <njohnston> all good here
16:36:59 <slaweq> ok
16:37:02 <slaweq> thx
16:37:08 <slaweq> #topic On Demand
16:37:15 <slaweq> I have one more small announcement
16:37:47 <slaweq> I was reviewing the list of Neutron lieutenants recently, and I talked with hongbin about his testing lieutenant role
16:37:57 <slaweq> he confirmed to me that he doesn't have enough time for it
16:38:08 <slaweq> so I asked ralonsoh if he could take care of it
16:38:20 <ralonsoh> -1
16:38:20 <slaweq> and ralonsoh agreed to be the testing lieutenant
16:38:26 <ralonsoh> -1
16:38:28 <slaweq> why?
16:38:32 <ralonsoh> a joke!!
16:38:39 <slaweq> LOL
16:38:42 <ralonsoh> hahaha
16:38:46 <njohnston> congrats ralonsoh!  May your grafanas always be flat and low.
16:38:47 <slaweq> You got me :P
16:39:01 <ralonsoh> njohnston, hahahaha
16:39:29 <slaweq> so, I will update these docs in the next few days, as I also want to confirm/make some other changes to them
16:39:40 <slaweq> but I wanted to announce it here already
16:39:47 <njohnston> +100
16:39:53 <slaweq> and thank you ralonsoh for taking care of it
16:39:59 <ralonsoh> my pleasure
16:40:17 <slaweq> and that's all from me for today
16:40:32 <slaweq> if you don't have anything else, I can give you back 20 minutes
16:40:45 <njohnston> I wonder if the infra liaison and the testing liaison might do well to be combined into one role
16:41:06 <njohnston> since the overlap between infra issues and testing issues is very large
16:41:24 <slaweq> njohnston: good point
16:41:51 <slaweq> TBH, in the last cycle I wasn't the testing lieutenant, but I was taking care of a lot of CI and infra things too
16:42:08 <slaweq> ralonsoh: do you want to take care of both of those roles?
16:42:15 <njohnston> yes, I was thinking about all that you did when I had that idea
16:42:46 <ralonsoh> ok
16:42:51 <slaweq> ralonsoh: thx a lot
16:43:01 <slaweq> so I will update this one too :)
16:43:10 <slaweq> and thx njohnston for raising this
16:43:18 <njohnston> \o/
16:44:06 <slaweq> ok, anything else for today?
16:44:12 <slaweq> last call :)
16:44:56 <slaweq> ok, have a nice day
16:44:58 <njohnston> o/
16:45:01 <slaweq> and thx for attending
16:45:03 <slaweq> o/
16:45:05 <ralonsoh> bye
16:45:06 <slaweq> #endmeeting