16:00:07 <slaweq> #startmeeting neutron_ci
16:00:08 <openstack> Meeting started Tue Feb 26 16:00:07 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:10 <slaweq> hi
16:00:12 <openstack> The meeting name has been set to 'neutron_ci'
16:00:17 <njohnston> o/
16:00:21 <mlavalle> o/
16:01:49 <slaweq> ok, lets start then
16:02:04 <slaweq> #topic Actions from previous meetings
16:02:15 <slaweq> first one was:
16:02:17 <slaweq> slaweq to fix patch with skip_if_timeout decorator
16:02:34 <haleyb> o/
16:02:48 <slaweq> It is working fine; the failed test in http://logs.openstack.org/92/636892/1/gate/neutron-functional/85da30c/logs/testr_results.html.gz was a PgSQL test which isn't marked with this decorator. Now the patch is merged and the new decorator should be used for some MySQL tests
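For reference, the idea behind the decorator can be sketched roughly like this (a minimal sketch; the real helper lives in neutron's test code and catches fixtures.TimeoutException, for which a local stand-in class is used here — the names FakeMySQLTestCase and the skip reason are illustrative assumptions):

```python
import functools
import unittest


class TimeoutException(Exception):
    """Stand-in for fixtures.TimeoutException, which the real decorator catches."""


def skip_if_timeout(reason):
    """Turn a DB-timeout failure into a test skip instead of a hard failure."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            try:
                return f(self, *args, **kwargs)
            except TimeoutException:
                # skipTest raises unittest.SkipTest, which the runner
                # records as a skip rather than an error
                self.skipTest(reason)
        return wrapper
    return decorator


class FakeMySQLTestCase(unittest.TestCase):
    """Hypothetical test case showing the decorator in use."""

    @skip_if_timeout("DB query timed out, skipping")
    def test_slow_query(self):
        raise TimeoutException()
```

Running test_slow_query records a skip rather than a failure, which is the behaviour wanted for the flaky MySQL functional tests.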
16:03:43 <slaweq> next one was:
16:03:45 <slaweq> slaweq to split patch https://review.openstack.org/633979 into two: zuulv3 and py3 parts
16:03:51 <slaweq> it's done
16:04:00 <slaweq> python3 patch: https://review.openstack.org/638626
16:04:20 <slaweq> and https://review.openstack.org/#/c/633979/ is only for the migration from legacy to zuulv3 job
16:04:48 <slaweq> one funny thing about this job neutron-tempest-dvr-ha-multinode-full
16:05:07 <slaweq> it has "ha" in the name so I assumed that it should have l3_ha=True and test HA routers
16:05:16 <slaweq> but it doesn't have it set like that
16:05:30 <slaweq> so it's in fact something like neutron-tempest-dvr-multinode-full job currently
16:05:49 <slaweq> unless I misunderstood it and HA in the name stands for something else
16:05:53 <slaweq> do You know maybe?
16:05:59 <mlavalle> no
16:06:06 <mlavalle> I think that's fine
16:06:13 <mlavalle> maybe haleyb knows why
16:06:48 <haleyb> yes, it should be testing dvr+ha
16:07:03 <njohnston> that is so weird
16:07:32 <slaweq> ok, so I have some problems with it: it's failing when I add l3_ha=True to the neutron config
16:07:43 <slaweq> I will continue digging into this one :)
16:07:56 <slaweq> and I will switch it to zuulv3 with HA set properly :)
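For context, enabling HA routers in a zuulv3 job is usually a matter of overriding neutron.conf through the devstack job variables; a hypothetical sketch of what the switched job could look like (the parent job name and the exact option set are assumptions, not the actual patch):

```yaml
# Hypothetical job definition; parent name and values are assumptions.
- job:
    name: neutron-tempest-dvr-ha-multinode-full
    parent: tempest-multinode-full
    vars:
      devstack_local_conf:
        post-config:
          $NEUTRON_CONF:
            DEFAULT:
              l3_ha: true               # what the "ha" in the job name implies
              router_distributed: true  # dvr
```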
16:08:33 <slaweq> ok, lets move on then
16:08:37 <slaweq> next one was
16:08:38 <slaweq> njohnston to ask stadium projects about python3 status
16:09:22 <njohnston> yeah, I have not done that; when I started to check I realized that many projects might say that they were fine because they merged the autogenerated changes, without thinking about functional/fullstack/etc.
16:09:54 <njohnston> So I started to do a sweep of those jobs to see how many there were; that is what I am working on right now.  That way I can provide targeted feedback.
16:10:12 <slaweq> njohnston: do You have any list of stadium projects which we should take care of?
16:11:01 <slaweq> njohnston: maybe You can create etherpad similar to https://etherpad.openstack.org/p/neutron_ci_python3 or add it to the bottom of this one?
16:11:20 <njohnston> I'll put the results in an etherpad; mlavalle I may need your help in corralling members of specific projects that I don't know the contact folks for
16:11:33 <slaweq> njohnston: thx
16:11:37 <mlavalle> njohnston: we'll go after them. np
16:11:58 <slaweq> #action njohnston to create etherpad with python3 status of stadium projects
16:11:58 <njohnston> slaweq: The official list of stadium projects is here I believe: https://governance.openstack.org/tc/reference/projects/neutron.html
16:12:16 <mlavalle> njohnston: That is correct
16:12:43 <slaweq> njohnston: thx
16:13:53 <slaweq> ok, lets move on then
16:14:03 <slaweq> next one was: mlavalle to check bug https://bugs.launchpad.net/neutron/+bug/1816489
16:14:04 <openstack> Launchpad bug 1816489 in neutron "Functional test neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.test_ha_router_lifecycle failing" [High,Confirmed]
16:14:48 <mlavalle> so I got feedback in https://review.openstack.org/#/c/636710
16:14:53 <mlavalle> which I agreed with
16:15:09 <slaweq> mlavalle: but it's different bug ;)
16:15:24 <mlavalle> ohhh
16:15:31 <mlavalle> I didn't have time to work on this one
16:15:44 <mlavalle> sorry
16:16:07 <slaweq> sure, no problem
16:16:17 <slaweq> I can try to take it from You if You don't mind
16:16:29 <mlavalle> no, go ahead
16:16:32 <slaweq> I recently looked into some functional tests issues so I can do this also
16:16:35 <slaweq> ok
16:16:49 <slaweq> #action slaweq to check bug https://bugs.launchpad.net/neutron/+bug/1816489
16:16:50 <openstack> Launchpad bug 1816489 in neutron "Functional test neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.test_ha_router_lifecycle failing" [High,Confirmed]
16:17:06 <slaweq> ok
16:17:11 <slaweq> and the last one was
16:17:13 <slaweq> slaweq to check bug https://bugs.launchpad.net/neutron/+bug/1815585
16:17:13 <openstack> Launchpad bug 1815585 in neutron "Floating IP status failed to transition to DOWN in neutron-tempest-plugin-scenario-linuxbridge" [High,Confirmed]
16:17:22 <slaweq> I was looking into this one
16:17:27 <slaweq> and it looks strange to me
16:17:44 <slaweq> I described everything in the bug comments
16:17:57 <slaweq> please read it if You have some time
16:18:04 <mlavalle> ok, will do
16:18:17 <slaweq> but basically what looks strange to me is that the port doesn't have a device_id but is active
16:18:28 <slaweq> how is that possible? do You know?
16:18:51 <mlavalle> are we going to talk about https://bugs.launchpad.net/neutron/+bug/1795870
16:18:52 <openstack> Launchpad bug 1795870 in neutron "Trunk scenario test test_trunk_subport_lifecycle fails from time to time" [High,In progress] - Assigned to Miguel Lavalle (minsel)
16:19:21 <slaweq> mlavalle: yes, I wanted to ask about it in tempest/scenario topic
16:19:26 <mlavalle> cool
16:19:31 <mlavalle> let's do it that way
16:19:33 <slaweq> but we can talk about it now if You want :)
16:19:43 <mlavalle> no, I'll follow your guidance
16:19:54 <slaweq> ok, thx
16:20:14 <slaweq> so, please read my comments in https://bugs.launchpad.net/neutron/+bug/1815585 - maybe You will have some ideas about it
16:20:15 <openstack> Launchpad bug 1815585 in neutron "Floating IP status failed to transition to DOWN in neutron-tempest-plugin-scenario-linuxbridge" [High,Confirmed]
16:20:31 <slaweq> it's not hitting us very often so it's not an urgent issue IMO
16:20:43 <slaweq> but would be good to fix it somehow :)
16:20:44 <slaweq> ok
16:21:06 <slaweq> any questions/comments on actions from last week? or can we move on to the next topic?
16:21:17 <mlavalle> not from me
16:21:46 <slaweq> ok, lets move on then
16:21:50 <slaweq> #topic Python 3
16:22:01 <slaweq> we already spoke about stadium projects
16:22:22 <slaweq> regarding neutron, only the switch of neutron-tempest-dvr-ha-multinode-full is missing
16:22:31 <slaweq> patch is ready https://review.openstack.org/638626
16:23:15 <slaweq> later we will have to switch experimental jobs to python3
16:23:31 <slaweq> but that may be hard as some of them are probably broken already :/
16:23:38 <slaweq> so we will have to fix them first
16:24:04 <njohnston> just for completeness with the stadium, here is the change I did to get ovsdbapp ready: https://review.openstack.org/#/c/637988/
16:24:17 <njohnston> if there are low hanging fruit I'll just take care of it, like I did there
16:24:48 <slaweq> thx njohnston :)
16:25:31 <slaweq> any other questions/comments related to python 3?
16:25:38 <mlavalle> not from me
16:26:09 <slaweq> ok, so next topic then
16:26:13 <slaweq> #topic Ubuntu Bionic in CI jobs
16:26:30 <slaweq> I just wanted to mention this email from gmann http://lists.openstack.org/pipermail/openstack-discuss/2019-February/003129.html
16:26:52 <slaweq> I already pushed test job for neutron https://review.openstack.org/#/c/639361/
16:26:58 <slaweq> lets see how it will work
16:26:59 <mlavalle> this is the same one we discussed earlier in the Neutron meeting, right?
16:27:08 <slaweq> mlavalle: correct
16:27:30 <slaweq> other thing here is again status of stadium projects
16:27:37 <mlavalle> yeap
16:27:45 <slaweq> I will try to push such test patches to each of them tomorrow
16:27:46 <njohnston> How do we want to handle the stadium for this?  That seems like the long pole in the tent, so to speak.
16:27:50 <slaweq> and we will see how it will work
16:27:59 <njohnston> sorry, laggy irc client :-)
16:28:04 <slaweq> :)
16:28:21 <mlavalle> laggy! new word for me
16:28:35 <slaweq> njohnston: I think I will first try with test patches and then we will see what needs to be fixed there
16:28:44 <slaweq> njohnston: mlavalle do You agree?
16:28:56 <mlavalle> yes, that makes sense
16:28:58 <njohnston> sounds good
16:29:19 <slaweq> #action slaweq to create bionic test patches for stadium projects
16:29:44 <slaweq> ok, so that's all about Bionic
16:29:52 <slaweq> can we move on?
16:29:57 <mlavalle> yes
16:30:02 <njohnston> for networking-bagpipe make sure your change is a child of https://review.openstack.org/#/c/636154/
16:30:19 <njohnston> fyi
16:30:33 <slaweq> njohnston: ok, thx for info
16:30:53 <slaweq> #topic Grafana
16:30:59 <slaweq> http://grafana.openstack.org/dashboard/db/neutron-failure-rate
16:31:18 <mlavalle> laggy browser
16:31:26 <slaweq> LOL
16:31:45 <mlavalle> got to practice my new word
16:33:14 <slaweq> there is one significant spike recently on openstack-tox-lower-constraints
16:33:30 <slaweq> but that should be fixed with  https://review.openstack.org/#/c/639074/
16:33:37 <mlavalle> yes
16:33:50 <mlavalle> that's the only big anomaly in grafana
16:34:10 <njohnston> I was wondering, why do we think the neutron-functional-python27 job has double the failure rate of the regular neutron-functional job in the check queue?  That seems weird to me since they run the same tests.
16:34:10 <slaweq> yes
16:34:30 <slaweq> and the other anomaly is that the neutron-tempest-plugin-dvr-multinode-scenario job wasn't failing 100% of the time :D
16:34:44 <slaweq> but only for short while and now it's back to normal :P
16:35:01 <njohnston> LOL
16:35:21 <slaweq> but more seriously, I think we have some problems with functional jobs - they're failing around 20-30% of the time
16:35:33 <slaweq> but there are couple of bugs identified already
16:35:45 <slaweq> so I hope it will be better when we merge the patches for that
16:35:50 <njohnston> +1
16:36:00 <mlavalle> ok
16:36:20 <slaweq> regarding grafana, I also pushed a small patch today https://review.openstack.org/639321 to move openstack-tox-lower-constraints to the unit jobs graph
16:36:34 <slaweq> please review and +1 if You are fine with that :)
16:37:11 <njohnston> +1
16:37:24 <slaweq> thx njohnston
16:37:29 <slaweq> ok, lets move on then
16:37:30 <mlavalle> done
16:37:31 <slaweq> #topic fullstack/functional
16:37:34 <slaweq> thx mlavalle
16:37:47 <slaweq> as I said, we have couple of bugs
16:37:55 <slaweq> first one is https://bugs.launchpad.net/neutron/+bug/1816239
16:37:56 <openstack> Launchpad bug 1816239 in neutron "Functional test test_router_processing_pool_size failing" [High,In progress] - Assigned to LIU Yulong (dragon889)
16:38:14 <slaweq> patch https://review.openstack.org/#/c/637544/ was merged but it still happens, there is new patch also: https://review.openstack.org/#/c/639034/
16:38:27 <slaweq> so this is still in progress and liuyulong is working on it
16:38:50 <slaweq> I recently found also one more issue (race) in functional tests
16:38:56 <slaweq> patch to fix it is in https://review.openstack.org/#/c/638635/
16:39:04 <slaweq> please review this one if You have some time :)
16:39:33 <slaweq> besides those 2, there is also the one mentioned earlier today and I will try to check it this week
16:39:37 <mlavalle> added to the pile
16:39:42 <njohnston> will do
16:39:43 <slaweq> so that's all related to functional tests
16:39:50 <slaweq> thx mlavalle and njohnston
16:39:55 <slaweq> any questions/comments here?
16:40:14 <njohnston> nope
16:40:15 <mlavalle> not from me
16:40:20 <slaweq> ok, so next one
16:40:21 <slaweq> #topic Tempest/Scenario
16:40:33 <slaweq> mlavalle: please go on with updates on https://bugs.launchpad.net/neutron/+bug/1795870 :)
16:40:34 <openstack> Launchpad bug 1795870 in neutron "Trunk scenario test test_trunk_subport_lifecycle fails from time to time" [High,In progress] - Assigned to Miguel Lavalle (minsel)
16:41:03 <mlavalle> so in https://review.openstack.org/#/c/636710/
16:41:18 <mlavalle> I got feedback that filters to kill python are too broad
16:41:23 <mlavalle> which I agree with
16:41:41 <mlavalle> so I just proposed https://review.openstack.org/#/c/639375/
16:42:33 <mlavalle> and based on that I will update https://review.openstack.org/#/c/636710/
16:43:01 <slaweq> ok, I will review it today evening or tomorrow morning :)
16:43:10 <mlavalle> Thanks
16:43:37 <slaweq> mlavalle: I hope You noticed my last comment in https://review.openstack.org/#/c/636710/2/etc/neutron/rootwrap.d/l3.filters
16:43:52 <slaweq> I also saw few times issues with killing keepalived
16:44:08 <mlavalle> I saw it
16:44:11 <slaweq> and e.g. in D/S we solved it by removing /usr/sbin/ from path
16:44:18 <slaweq> so maybe that will also be necessary
16:44:21 <slaweq> ok, thx
16:44:36 <mlavalle> but I think that changing the command name in the process solves it
16:44:52 <mlavalle> we won't filter pythonx anymore
16:44:52 <slaweq> but it's for different process
16:45:14 <slaweq> I'm now speaking about keepalived, not neutron-keepalived-state-change
16:45:21 <slaweq> those are 2 different things :)
16:45:23 <mlavalle> in that case, we need to change keepalived
16:45:34 <mlavalle> I'll take care of it as well
16:45:39 <slaweq> ok, thx :)
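For the record, the D/S-style fix mentioned above amounts to matching keepalived by program name instead of full path in the rootwrap l3.filters file; an illustrative sketch (the filter name and signal list here are assumptions, not the merged change):

```ini
[Filters]
# Match by bare program name -- works regardless of where the distro
# installs the binary:
kill_keepalived: KillFilter, root, keepalived, -HUP, -15, -9

# Matching the full path can fail when the resolved executable path
# differs (e.g. /usr/sbin/keepalived), which is the issue discussed above:
# kill_keepalived: KillFilter, root, /usr/sbin/keepalived, -HUP, -15, -9
```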
16:45:50 <mlavalle> thanks for slapping me ;-)
16:46:02 <slaweq> mlavalle: yw :D
16:46:53 <slaweq> ok, can we move on to the last topic for today? or are there any other comments/questions related to tempest jobs?
16:47:02 <mlavalle> let's do it
16:47:06 <slaweq> ok :)
16:47:08 <slaweq> #topic Open discussion
16:47:22 <slaweq> I wanted to ask You about one more thing and stadium projects (again)
16:47:48 <slaweq> in Denver we agreed with QA team to move tempest plugins from "main" repositories to tempest-plugin repos
16:48:04 <slaweq> I think that still many stadium projects didn't finish it
16:48:22 <njohnston> which ones took any action on it?
16:48:30 <slaweq> so question here is: how we should do it? should we start moving such plugins to neutron-tempest-plugin repo?
16:49:22 <slaweq> njohnston: I think that only midonet did something with this
16:49:29 <slaweq> list of tempest plugins is in https://docs.openstack.org/tempest/latest/plugin-registry.html
16:49:37 <mlavalle> ovn didn't?
16:50:02 <slaweq> each of them which points to main repo still needs to be moved
16:50:40 <slaweq> mlavalle: I think networking-ovn doesn't have its own tempest plugin
16:50:46 <slaweq> so they don't need to do anything
16:50:53 <mlavalle> ok
16:51:10 <njohnston> I don't even see midonet in the list
16:51:37 <slaweq> maybe I'm mistaken about midonet
16:52:09 <slaweq> njohnston: https://github.com/openstack/neutron-tempest-plugin/commit/5214b27c080208ff4fc6b47c997f8aa6a28a6d44
16:52:10 <mlavalle> it's not in the list
16:52:11 <slaweq> :)
16:52:16 <slaweq> I knew there was something
16:52:49 <slaweq> so basically my question is: how You think we should do this?
16:52:59 <slaweq> I guess we will have to do it mostly by ourselves
16:53:20 <slaweq> and in such case I think that the best way would be to move those tests to neutron-tempest-plugin repo
16:53:33 <slaweq> what do You think?
16:53:55 <mlavalle> what's the deadline?
16:54:19 <njohnston> I think that setting up a separate repo per subproject would be insane, so aggregating them in neutron-tempest-plugin cuts down on the work
16:54:20 <slaweq> I don't know about any deadline
16:54:41 <slaweq> gmann told me in Denver that it would be good to do it ASAP :)
16:54:51 <mlavalle> so if we can do it slowly and without much disruption
16:54:55 <njohnston> mlavalle: It's the same deadline as the deadline for migrating to neutron-lib :-)
16:55:02 <mlavalle> let's do it
16:55:07 <slaweq> ok, thx
16:55:18 <slaweq> so I will prepare some etherpad with list what we need to move
16:55:19 <mlavalle> give me a piece and I'll help
16:55:24 <slaweq> and will try to start this work slowly
16:55:28 <njohnston> give me a piece and I'll help too
16:55:39 <slaweq> then we can split this work between all of us later
16:55:41 <slaweq> ok?
16:55:43 <mlavalle> ok, I will take from the etherpad
16:55:47 <njohnston> sounds good
16:55:54 <slaweq> great, thx
16:56:02 <mlavalle> Thank you!
16:56:21 <slaweq> #action slaweq to prepare etherpad and plan of moving tempest plugins from stadium projects to neutron-tempest-plugin repo
16:56:46 <slaweq> so that was all from me for today
16:57:05 <slaweq> do You have anything else You want to talk quickly?
16:57:12 <mlavalle> I don't
16:57:21 <njohnston> me neither
16:57:30 <slaweq> ok, thx for attending
16:57:34 <mlavalle> o/
16:57:39 <slaweq> and have a nice week :)
16:57:44 <mlavalle> you too
16:57:44 <slaweq> #endmeeting