Wednesday, 2020-10-07

*** macz_ has joined #openstack-meeting-302:34
*** macz_ has quit IRC02:38
*** psachin has joined #openstack-meeting-303:41
*** macz_ has joined #openstack-meeting-304:22
*** macz_ has quit IRC04:26
*** ricolin_ has joined #openstack-meeting-305:15
*** ricolin_ has quit IRC05:26
*** ralonsoh has joined #openstack-meeting-306:54
*** slaweq has joined #openstack-meeting-307:05
*** e0ne has joined #openstack-meeting-307:07
*** slaweq has quit IRC07:09
*** haleyb has quit IRC07:10
*** haleyb has joined #openstack-meeting-307:11
*** slaweq has joined #openstack-meeting-307:14
*** njohnston has quit IRC07:33
*** tosky has joined #openstack-meeting-307:41
*** tosky_ has joined #openstack-meeting-308:12
*** tosky is now known as Guest9816008:13
*** tosky_ is now known as tosky08:13
*** Guest98160 has quit IRC08:15
*** lpetrut has joined #openstack-meeting-309:13
*** raildo has joined #openstack-meeting-311:04
*** belmoreira has joined #openstack-meeting-311:22
*** njohnston_ has joined #openstack-meeting-311:26
*** psahoo has joined #openstack-meeting-311:52
*** raildo_ has joined #openstack-meeting-313:03
*** raildo has quit IRC13:03
*** ttx has quit IRC13:04
*** ttx has joined #openstack-meeting-313:09
*** psahoo_ has joined #openstack-meeting-313:09
*** psahoo has quit IRC13:12
*** ttx has quit IRC13:16
*** ttx has joined #openstack-meeting-313:17
*** Adri2000 has joined #openstack-meeting-313:25
*** genekuo_ has joined #openstack-meeting-314:29
*** macz_ has joined #openstack-meeting-314:35
*** macz_ has quit IRC14:39
*** psahoo_ has quit IRC14:43
*** lpetrut has quit IRC14:45
*** njohnston_ is now known as njohnston14:52
*** priteau has joined #openstack-meeting-314:57
*** slaweq_ has joined #openstack-meeting-314:58
*** macz_ has joined #openstack-meeting-314:58
*** slaweq has quit IRC14:58
*** lajoskatona has joined #openstack-meeting-315:01
slaweq_#startmeeting neutron_ci15:01
openstackMeeting started Wed Oct  7 15:01:15 2020 UTC and is due to finish in 60 minutes.  The chair is slaweq_. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
*** openstack changes topic to " (Meeting topic: neutron_ci)"15:01
openstackThe meeting name has been set to 'neutron_ci'15:01
slaweq_Grafana dashboard:
slaweq_Please open now :)15:02
*** mdelavergne has joined #openstack-meeting-315:03
slaweq_#topic Actions from previous meetings15:03
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"15:03
slaweq_bcafarel to update our grafana dashboards for stable branches15:03
bcafarelin progress, not sent yet (I wanted to check jobs listed there)15:04
slaweq_ok, thx bcafarel15:04
slaweq_I will assign it to You for next week15:04
slaweq_just to remember about it15:04
bcafarelsounds good, also to have reviewers if it gets forgotten15:04
slaweq_#action bcafarel to update our grafana dashboards for stable branches15:04
slaweq_thx a lot15:04
slaweq_ok, next one15:05
slaweq_ralonsoh to report a bug and check failing openstack-tox-py36-with-ovsdbapp-master periodic job15:05
*** mlavalle has joined #openstack-meeting-315:05
ralonsohI sent a patch to try to solve it15:05
ralonsohone sec15:05
ralonsoh(should be on the etherpad)15:05
slaweq_I don't see it on etherpad15:06
ralonsohavoid monkey patching processutils15:06
ralonsohwell, use the original current_thread _active15:07
ralonsohbut we'll need a new version of oslo.concurrency15:07
slaweq_and it seems that it helped15:07
ralonsohat least locally15:08
ralonsohbut I can't say that in the CI15:08
slaweq_it helped15:08
ralonsohahh ok, this is another problem15:08
ralonsohthis is the patch15:09
ralonsohsorry again15:09
slaweq_don't need to sorry15:09
slaweq_good that it's fixed :)15:09
slaweq_thx ralonsoh15:09
slaweq_and thx otherwiseguy15:09
slaweq_ok, so I think we can move on to the next topics15:10
slaweq_#topic Switch to Ubuntu Focal15:10
*** openstack changes topic to "Switch to Ubuntu Focal (Meeting topic: neutron_ci)"15:10
slaweq_we still have some stadium projects to check/change15:10
slaweq_but I didn't have time this week15:10
slaweq_do You have any other updates on that?15:10
bcafarel longing for second +2 for sfc :)15:12
bcafarelelse topic:migrate-to-focal list looks good for us15:13
slaweq_bcafarel: I already gave +2 :)15:13
slaweq_so I can't help with that one now15:13
slaweq_ralonsoh: lajoskatona but You can ;)15:13
lajoskatonadone :-)15:13
bcafarelthanks :)15:14
lajoskatonaMay I ask a slightly related question: do we need this any more:  ?15:15
slaweq_lajoskatona: nope15:15
slaweq_it was an issue with pypi mirror15:15
lajoskatonaslaweq_: yeah that's why I asked :-) I'll abandon it then15:16
slaweq_and I think ralonsoh fixed it on devstack by capping setuptools version15:16
ralonsohbut that was rejected15:16
ralonsohthe problem was in the pypi server15:16
slaweq_ralonsoh: ahh, ok15:16
ralonsohadmins talked to pypi folks to solve that15:16
slaweq_most important is that problem is fixed now :)15:16
slaweq_thx ralonsoh and lajoskatona for taking care of it :)15:16
lajoskatonano problem15:18
slaweq_regarding standardizing on zuul v315:18
slaweq_we merged networking-odl patch
slaweq_so the last one missing is for neutron15:18
slaweq_and it just failed again, at least functional tests job:
toskywhich received its fair share of rechecks15:19
ralonsohslaweq_, that's the other related problem I was talking this morning15:20
ralonsohnow we don't fail in the OVN method15:21
ralonsohbut in the "old_method" --> L3 plugin15:21
ralonsohI need to check if this is related15:21
ralonsohI'll talk to otherwiseguy15:21
slaweq_ralonsoh: ok15:21
toskyplease remember to vote also on the networking-odl backport for stable/victoria:
slaweq_tosky: I already did15:22
slaweq_I think we need bcafarel's vote also15:22
toskyyeah, another stable core15:23
toskyor neutron stable core15:23
bcafarelreviewed and W+1 :)15:23
slaweq_so I think we can move on to the next topic now15:24
slaweq_#topic Stable branches15:24
*** openstack changes topic to "Stable branches (Meeting topic: neutron_ci)"15:24
slaweq_Ussuri dashboard:
slaweq_Train dashboard:
bcafarelone thing I remember now on stable dashboards, we will also need a victoria template for neutron-tempest-plugin15:25
bcafareland switch neutron stable/victoria to it15:25
slaweq_bcafarel: yes, true15:26
slaweq_I will do this template15:26
slaweq_thx for reminder15:26
slaweq_#action slaweq to make neutron-tempest-plugin victoria template15:26
bcafarelnp, I remembered when my test dashboard came up empty for them15:26
slaweq_btw. I have one new issue in stable/train15:29
openstackLaunchpad bug 1898748 in neutron "[stable/train] Creation of the QoS policy takes ages" [Critical,New]15:29
slaweq_did You see it already maybe?15:30
slaweq_it seems that it breaks devstack gate for stable/train :/15:30
bcafarelI don't think I saw it either15:31
slaweq_is there anyone who wants to check that maybe?15:31
slaweq_if not, I will try to check that15:32
ralonsohI'll try to take a look at this error tomorrow15:32
slaweq_thx ralonsoh :)15:32
slaweq_ok, lets move on15:33
slaweq_#topic Grafana15:33
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"15:33
slaweq_IMO worst thing from voting jobs is neutron-functional-with-uwsgi now15:34
slaweq_and we have couple of issues there15:34
slaweq_and also most of the ovn based jobs are failing 100% of times15:35
slaweq_anything else You have regarding grafana in general?15:36
slaweq_or should we move on to the specific job types?15:36
bcafarelnothing from me15:37
slaweq_ok, so lets move on15:37
slaweq_#topic functional/fullstack15:37
*** openstack changes topic to "functional/fullstack (Meeting topic: neutron_ci)"15:37
slaweq_I reported today
openstackLaunchpad bug 1898859 in neutron "Functional test neutron.tests.functional.agent.linux.test_keepalived.KeepalivedManagerTestCase.test_keepalived_spawns_conflicting_pid_vrrp_subprocess is failing" [High,Confirmed]15:38
slaweq_as I saw it at least twice recently15:38
slaweq_IIRC we already saw it in the past too but I wasn't sure if we have bug reported for that already15:38
ralonsohrelated to the ns deletion15:38
ralonsohplease, review ^^15:39
slaweq_ahh, right15:40
slaweq_now I remember :)15:40
slaweq_so I will mark as duplicate of
openstackLaunchpad bug 1898859 in neutron "Functional test neutron.tests.functional.agent.linux.test_keepalived.KeepalivedManagerTestCase.test_keepalived_spawns_conflicting_pid_vrrp_subprocess is failing" [High,Confirmed]15:40
ralonsohI think you can join both LP bugs15:40
openstackLaunchpad bug 1838793 in neutron ""KeepalivedManagerTestCase" tests failing during namespace deletion" [High,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)15:40
slaweq_lajoskatona: can You check that patch from ralonsoh?15:41
slaweq_I hope it will help us a bit with this functional tests job :)15:41
lajoskatonaslaweq_: sure, I checked it in the past, so I have some background :-)15:41
slaweq_lajoskatona: thx a lot15:42
slaweq_and for other issues with functional tests I know that ralonsoh told me that he will open LPs15:42
ralonsohthe one related to the agents15:42
*** belmoreira has quit IRC15:44
slaweq_yes, did You report it already?15:45
ralonsohnot yet15:45
ralonsohI'm still investigating the error15:45
slaweq_ok, lets move on then15:47
slaweq_#topic Tempest/Scenario15:47
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"15:47
slaweq_first, I reported today bug:
openstackLaunchpad bug 1898862 in neutron "Job neutron-ovn-tempest-ovs-release-ipv6-only is failing 100% of times" [High,Confirmed]15:47
slaweq_because neutron-ovn-tempest-ovs-release-ipv6-only is failing 100% of times and usually (or even always) there are 9 tests failing there15:48
slaweq_so it's very reproducible15:48
slaweq_I will try to ping lucasgomes or jlibosva to take a look at that one15:48
slaweq_there is also ovn related issue
openstackLaunchpad bug 1885900 in neutron "test_trunk_subport_lifecycle is failing in ovn based jobs" [Critical,Confirmed] - Assigned to Lucas Alvares Gomes (lucasagomes)15:49
slaweq_which I saw today again15:49
slaweq_and we still have some random ssh authentication failures15:50
slaweq_like e.g.
slaweq_and in those cases there is no "pattern", like always the same tests or always the same backend15:51
slaweq_it happens everywhere15:51
slaweq_and I tend to think that this is issue which ralonsoh found some time ago in our d/s ci15:51
slaweq_with paramiko and some race condition15:51
ralonsohwith paramiko15:51
slaweq_I couldn't reproduce that locally15:51
slaweq_but some race is there IMO15:52
ralonsohonce paramiko tries to log into a VM before the keys are configured, the SSH connection is not possible, even after the keys are installed15:52
slaweq_maybe we can try to check console log first to see if ssh key was configured already15:52
slaweq_before ssh to the instance15:52
slaweq_if that will fail for any reason (e.g. custom guest os which don't log things like cirros), we can always try ssh at the end15:53
slaweq_as "fallback" option15:53
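A hedged sketch of the fallback idea slaweq_ outlines above: poll the instance console log for evidence that boot/key setup progressed before attempting SSH, and fall back to a blind SSH attempt on timeout (some guest images log nothing useful). `get_console_log` and the default marker text are hypothetical placeholders, not real tempest or neutron-tempest-plugin APIs.

```python
import time

def wait_for_key_in_console(get_console_log, marker="Lease of",
                            timeout=60, interval=5):
    """Poll the instance console log until `marker` appears.

    get_console_log: a callable returning the current console log text
    (hypothetical; in a real test this would fetch it via the compute API).
    Returns True if the marker was seen, False on timeout -- in which
    case the caller should still try SSH as the fallback, since a
    custom guest OS may not log anything recognizable.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if marker in get_console_log():
            return True
        time.sleep(interval)
    return False
```

Usage would be: if `wait_for_key_in_console(...)` returns True, SSH immediately; if False, attempt SSH anyway and let the existing retry logic handle it.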
ralonsohit's worth trying15:54
slaweq_we can maybe propose that first in neutron-tempest-plugin15:54
bcafarelworth a try15:54
slaweq_and if that will work, then propose to tempest too15:54
slaweq_ok, I will give it a try15:54
ralonsoh(I was doing the opposite: reviewing the paramiko code)15:55
slaweq_#action slaweq to propose patch to check console log before ssh to instance15:55
slaweq_ralonsoh: if You will find issue on paramiko's side, we can always revert workaround from neutron-tempest-plugin :)15:55
ralonsohof course15:55
slaweq_ok, I have one more issue related to ovn jobs:
openstackLaunchpad bug 1898863 in neutron "OVN based scenario jobs failing 100% of times" [Critical,Confirmed]15:56
slaweq_did You see that before?15:56
bcafarelon dstat??15:56
slaweq_but I saw it only on ovn based jobs15:57
ralonsohno sorry, that's new to me15:57
slaweq_ok, anyone wants to take a look at that?15:57
slaweq_if not then it's also fine for now as it affects "only" non-voting jobs15:58
openstackLaunchpad bug 1866619 in dstat (Ubuntu) "OverflowError when machine suspends and resumes after a longer while" [Undecided,Confirmed]15:58
ralonsohDistroRelease: Ubuntu 20.0415:58
slaweq_so we will probably need to disable dstat as temporary workaround15:59
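A hedged example of the temporary workaround slaweq_ mentions: devstack runs dstat as an optional service, so it can be switched off until the Ubuntu 20.04 dstat OverflowError is resolved. This assumes the service name is `dstat` in the deployment being used; verify against the actual devstack/job configuration.

```shell
# In the [[local|localrc]] section of devstack's local.conf:
disable_service dstat
```

In a zuul job definition, the equivalent would likely be setting `dstat: false` under the job's `devstack_services` variable (again, an assumption to confirm against the job's base definition).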
slaweq_thx ralonsoh15:59
slaweq_we are out of time today16:00
slaweq_thx for attending the meeting16:00
*** openstack changes topic to "OpenStack Meetings ||"16:00
openstackMeeting ended Wed Oct  7 16:00:19 2020 UTC.  Information about MeetBot at . (v 0.1.4)16:00
openstackMinutes (text):
bcafarellengthy one :/16:00
*** lajoskatona has left #openstack-meeting-316:00
ttx#startmeeting large_scale_sig16:01
openstackMeeting started Wed Oct  7 16:01:37 2020 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:01
*** openstack changes topic to " (Meeting topic: large_scale_sig)"16:01
openstackThe meeting name has been set to 'large_scale_sig'16:01
ttx#topic Rollcall16:01
*** openstack changes topic to "Rollcall (Meeting topic: large_scale_sig)"16:01
ttxWho is here for the Large Scale SIG meeting ?16:01
ttxmdelavergne: hi!16:01
ttxgenekuo_: Hi!16:02
genekuo_It's my first time here16:02
genekuo_I'm an Infrastructure Engineer at LINE16:02
genekuo_masahito is my colleague16:02
ttxamorin: around?16:03
ttxIt might be just us 3 today16:04
ttxOur agenda for today is at:16:04
ttx#topic PTG/Summit plans update16:04
*** openstack changes topic to "PTG/Summit plans update (Meeting topic: large_scale_sig)"16:04
ttxA reminder on the Large Scale SIG activities around Summit and PTG16:04
ttxOur Forum session is Tuesday, October 20, 7:30am-8:15am CT16:04
ttxThat makes it super early for our US friends and a bit late for our APAC friends16:05
ttxgenekuo_: it must be super-late for you now16:05
genekuo_I usually sleep late16:05
genekuo_So it's fine16:05
ttxI'll moderate the discussion, but we'll also have active participants to help seed the discussion and encourage others to share16:06
ttxamorin and belmoreira said they would help16:06
ttxIn preparation for this session, please add to the etherpad at:16:06
*** slaweq has joined #openstack-meeting-316:06
*** slaweq_ has quit IRC16:07
ttxespecially if you have things you'd like to see covered16:07
ttxThe week after that during PTG week we will have two one-hour sessions:16:07
ttx#info PTG meeting Wednesday Oct 28 7UTC-8UTC and 16UTC-17UTC16:07
ttxThose will be more traditional meetings, the idea being to onboard any new recruit from that forum session16:07
ttxQuestions on that topic?16:07
mdelavergneNot from myself16:08
ttxalright, moving on16:08
ttx#topic Meaningful monitoring16:08
*** openstack changes topic to "Meaningful monitoring (Meeting topic: large_scale_sig)"16:08
ttxLast month we discussed forming a new workstream around "meaningful monitoring"16:09
ttxI tried to bootstrap it in the following etherpad:16:09
ttxgenekuo_: is that something that is of interest for you?16:09
genekuo_I'll probably be upstreaming the oslo.metrics code that we currently have16:10
ttxgenekuo_: ok, we will cover that in a minute16:10
ttxObviously we need to discuss what we mean by "meaningful monitoring"16:10
mdelavergneIt would be nice to have some feedback from those who launched this topic :(16:10
ttxIs it actionable monitoring, like opinionated/focused monitoring...16:10
ttxmdelavergne: yeah, I was hoping they would be here today16:11
ttxsince it's "their" time16:11
genekuo_This topic is interesting as we have a lot of notifications16:11
genekuo_Most of them are not that useful16:11
ttxright, so I could see a need for a more targeted monitoring that instead of showing everything, tracks golden signals16:11
ttx(as described in that etherpad)16:11
ttxBut yes I agree with mdelavergne it would be good to hear from those who raised that topic first and hear of their definition16:12
ttxmoving on to the next workstream16:12
ttx#topic Progress on "Scaling within one cluster" goal16:12
*** openstack changes topic to "Progress on "Scaling within one cluster" goal (Meeting topic: large_scale_sig)"16:12
ttxRegarding oslo.metrics, I did push a basic functional test so that we are reasonably sure that it actually works:16:13
ttxgenekuo_: would be good to get your review on it (or masahito's)16:13
genekuo_Got it16:13
ttxDo you know when you'll be able to push the latest version?16:13
genekuo_I'll also start writing tests once I upstream most of our code16:13
genekuo_There's not much left16:13
genekuo_I can probably finish it by next week16:13
ttxNote that according to my testing it seems to be missing the other side of the code -- the change in oslo.messaging to actually emit those metrics16:14
ttxgenekuo_: do you have the code for that too?16:14
genekuo_We currently don't have any tests yet, I think16:14
genekuo_Have to double check16:15
ttxok, because as far as I can tell, the oslo.metrics code only handles the reception of the message on the socket and its storage in a Prometheus metric16:15
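A hedged, stdlib-only sketch of the two sides ttx is distinguishing above: the emitting side (what the oslo.messaging hook would do) sends a small JSON metric event over a local datagram socket, and the receiving side (the oslo.metrics daemon) updates a counter that a Prometheus exporter would then expose. The names and payload shape are illustrative, not oslo.metrics' actual API.

```python
import json
import socket
from collections import Counter

# In the real design this counter would be a prometheus_client metric;
# a plain Counter keeps the sketch dependency-free.
COUNTERS = Counter()

def handle_metric(payload: bytes):
    """Receiving side: decode one datagram and bump the matching counter."""
    event = json.loads(payload)
    COUNTERS[(event["metric"], event["method"])] += 1

def emit(sock, addr, metric, method):
    """Emitting side: what an oslo.messaging hook would send per RPC call."""
    sock.sendto(json.dumps({"metric": metric, "method": method}).encode(), addr)

if __name__ == "__main__":
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.bind(("127.0.0.1", 0))          # ephemeral local port
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    emit(send, recv.getsockname(), "rpc_client_invocation", "create_port")
    handle_metric(recv.recv(4096))
    print(COUNTERS[("rpc_client_invocation", "create_port")])  # prints 1
```

The point of the discussion above is that only the `handle_metric` half existed upstream; the `emit` half in oslo.messaging was still to be pushed.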
ttxThe other side of this workstream is the collection of scaling stories16:16
ttxNothing new posted there... our next action is the forum session in two weeks16:16
ttxAnything else on this "Scaling within one cluster" goal? Questions? Comments?16:16
genekuo_I think I can add something to the scaling stories part16:17
genekuo_We did hit some issues when scaling, I'll discuss with masahito tomorrow16:17
ttxgenekuo_: perfect! Any story, even short, helps!16:17
ttxIt's basically about "what happens when we add nodes to a cluster, what failed first"16:17
ttx(and bonus points for telling how you solved it)16:18
genekuo_Got it16:18
ttxMoving on to next goal16:18
ttx#topic Progress on "Documenting large scale operations" goal16:18
*** openstack changes topic to "Progress on "Documenting large scale operations" goal (Meeting topic: large_scale_sig)"16:18
ttxamorin was working on pushing OSarchiver to the OSops repository16:18
ttxI guess we'll have to wait for an update on that16:19
*** psachin has quit IRC16:19
ttxSo for now, just let me know if you have questions on that goal, and if you can help with anything in it16:19
ttx#topic Next meeting16:20
*** openstack changes topic to "Next meeting (Meeting topic: large_scale_sig)"16:20
genekuo_Sounds clear to me for now16:20
ttxIn two weeks we'll have the Forum session and the week after the live meetings16:20
ttxSo I propose we get back to our regular rotation two weeks after that16:20
ttxNext IRC meeting will be EU+APAC Nov 10, 8utc, then US+EU Nov 24, 16utc.16:20
ttxDoes that work?16:20
ttxI'll probably have to send the personal reminder to jpenick and Erik next time16:21
ttxsince they seem to miss the one I send to the ML16:21
ttx#info next meetings: Nov 10, 8utc; Nov 24, 16utc16:21
mdelavergneprobably, yes!16:21
ttx#topic Open discussion16:21
*** openstack changes topic to "Open discussion (Meeting topic: large_scale_sig)"16:21
ttxAnything else you'd like to discuss?16:21
*** tosky has quit IRC16:21
ttxgenekuo_: anything you think this group should do, that is not covered in those 3 goals?16:21
genekuo_Haven't thought about it yet16:22
genekuo_Looks good to me for now16:22
genekuo_I'll think about it and provide more feedback if there is any16:22
ttxfeel free to think about it and let us know next time! This is really about what the participants want to do, and try to use the group to help them achieve those objectives16:23
ttxLike amorin is leading the doc effort, and you're leading the oslo.metric effort16:23
ttxand the rest of the group facilitates16:23
genekuo_Sounds good16:24
ttxAlright, if you have nothing else... I propose we close early and let genekuo_ go to bed :)16:24
ttxThanks everyone!16:24
mdelavergnethanks to you!16:24
ttxHopefully will see you at the PTG meeting in 3 weeks!16:24
ttx(and maybe at the Forum session in two weeks if you can make it!)16:24
genekuo_I will join if possible16:25
*** openstack changes topic to "OpenStack Meetings ||"16:25
openstackMeeting ended Wed Oct  7 16:25:06 2020 UTC.  Information about MeetBot at . (v 0.1.4)16:25
openstackMinutes (text):
mdelavergnesee you!16:25
*** mdelavergne has quit IRC16:25
*** genekuo_ has left #openstack-meeting-316:33
*** ralonsoh has quit IRC17:12
*** ralonsoh has joined #openstack-meeting-317:12
*** macz_ has quit IRC17:14
*** macz_ has joined #openstack-meeting-317:15
*** Adri2000 has quit IRC17:20
*** e0ne has quit IRC17:23
*** mlavalle has quit IRC17:56
*** mlavalle has joined #openstack-meeting-318:14
*** lpetrut has joined #openstack-meeting-318:18
*** lpetrut has quit IRC18:36
*** Adri2000 has joined #openstack-meeting-318:43
*** priteau has quit IRC19:12
*** tosky has joined #openstack-meeting-319:26
*** raildo_ is now known as raildo19:43
*** slaweq has quit IRC20:14
*** gmann is now known as gmann_lunch20:22
*** slaweq has joined #openstack-meeting-320:23
*** ralonsoh_ has joined #openstack-meeting-320:30
*** purplerbot has quit IRC20:31
*** ralonsoh has quit IRC20:33
*** purplerbot has joined #openstack-meeting-320:33
*** ralonsoh_ has quit IRC20:51
*** slaweq has quit IRC20:58
*** gmann_lunch is now known as gmann21:04
*** raildo has quit IRC21:11
*** njohnston has quit IRC22:04
*** tosky has quit IRC22:22
*** macz_ has quit IRC23:02
*** mlavalle has quit IRC23:59

Generated by 2.17.2 by Marius Gedminas - find it at!