Monday, 2018-12-17

14:00 <dmellado> #startmeeting kuryr
14:00 <openstack> Meeting started Mon Dec 17 14:00:31 2018 UTC and is due to finish in 60 minutes. The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: kuryr)"
14:00 <openstack> The meeting name has been set to 'kuryr'
14:00 <dmellado> Hi everyone! Who's around here for today's kuryr meeting? ;)
14:00 <dulek> o/
14:01 <yboaron_> o/
14:01 <dmellado> #chair dulek
14:01 <openstack> Current chairs: dmellado dulek
14:01 <dmellado> #chair yboaron_
14:01 <openstack> Current chairs: dmellado dulek yboaron_
14:01 <dmellado> All right, so let's do a quick roll call
14:02 <dmellado> first of all, due to the Christmas holidays there won't be kuryr meetings on the 24th or the 31st
14:02 * dulek is trying to summon celebdor as he might be able to give some insight on that kuryr-daemon mem usage.
14:02 <dmellado> I'll be sending an email as a reminder
14:02 <dmellado> dulek: could you give a brief summary about that?
14:02 <dmellado> on which gates is this happening?
14:02 <ltomasbo> o/
14:02 <dmellado> what's eating memory and so on?
14:02 <dulek> Oh maaan, we could wish ourselves happy holidays on the 24th and happy new year on the 31st. :D
14:02 <dmellado> #topic kuryr-kubernetes
14:02 *** openstack changes topic to "kuryr-kubernetes (Meeting topic: kuryr)"
14:03 <dmellado> dulek: lol
14:03 <dmellado> #chair ltomasbo
14:03 <openstack> Current chairs: dmellado dulek ltomasbo yboaron_
14:03 <dulek> Okay, let me explain then.
14:03 <dulek> So it's the issue of events being skipped and not passed to kuryr-controller, which is watching the K8s API.
14:03 <dulek> I've linked that to errors in kubernetes-api/openshift-master
14:04 <dulek> http://logs.openstack.org/27/625327/3/check/kuryr-kubernetes-tempest-daemon-openshift-octavia/cb22439/controller/logs/screen-openshift-master.txt.gz
14:04 <dmellado> like, random types of events? or is it tied to specific ones?
14:04 <dulek> Here's an example.
14:04 <dmellado> #link http://logs.openstack.org/27/625327/3/check/kuryr-kubernetes-tempest-daemon-openshift-octavia/cb22439/controller/logs/screen-openshift-master.txt.gz
14:04 <dulek> dmellado: Like the updates on pods, services or endpoints.
14:04 <dulek> Okay, so then I've linked those issues with etcd warnings: http://logs.openstack.org/27/625327/3/check/kuryr-kubernetes-tempest-daemon-openshift-octavia/cb22439/controller/logs/screen-etcd.txt.gz
14:04 <maysams> o/
14:04 <celebdor> I am here
14:05 <celebdor> what's up
14:05 <dmellado> dulek: so that's linked with http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000906.html
14:05 <celebdor> sorry for the delay, I was in a meeting
14:05 <dulek> dmellado: Yes!
14:05 <dmellado> hiya folks
14:05 <dulek> And then with some insight from infra I've started looking at dstat files from those gate runs.
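The failure mode dulek describes -- the controller's API watch silently missing pod/service/endpoint updates -- can be sketched with a toy gap detector. Real Kubernetes resourceVersions are opaque strings that must not be compared arithmetically; sequential integers are used here purely to illustrate the idea, and none of the names below are kuryr's actual code.

```python
# Toy illustration of the event-skip problem: a watcher that remembers the
# last event version it processed can at least *detect* that the API server
# (or something in between) dropped events, even if it cannot recover them.
# Real Kubernetes resourceVersions are opaque; integers are used for clarity.

def watch_with_gap_detection(events):
    """events: iterable of (version, obj) pairs. Returns (processed, gaps)."""
    processed, gaps = [], []
    last = None
    for version, obj in events:
        if last is not None and version > last + 1:
            # Everything between last and version was never delivered.
            gaps.extend(range(last + 1, version))
        processed.append(obj)
        last = version
    return processed, gaps
```

A watcher that detects such a gap would typically re-list the resources and restart the watch rather than keep consuming from a stale stream.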
14:05 <dulek> What immediately caught my attention is the column that shows the process using the most memory.
14:05 <dmellado> celebdor: dulek has been seeing odd things in dstat, you might have some experience with that
14:06 <dulek> It's kuryr-daemon with a value of 2577G.
14:06 <dmellado> how much memory was the process eating, dulek? xD
14:06 <celebdor> that looks far too high
14:06 <dulek> http://logs.openstack.org/27/625327/3/check/kuryr-kubernetes-tempest-daemon-openshift-octavia/cb22439/controller/logs/screen-dstat.txt.gz - see for yourself, it's the third column from the last.
14:06 <celebdor> dulek: only on the gates or locally as well?
14:06 <dulek> celebdor: I'll be checking that after the meeting.
14:06 <celebdor> ok
14:06 <celebdor> thanks
14:06 <dulek> So first of all I don't understand why the OOM killer is not kicking in.
14:07 <dmellado> dulek: is this also tied to specific cloud providers on the infra?
14:07 <dmellado> i.e. it happens on RAX but not on OVH and so on
14:07 <dmellado> ?
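The 2577G figure is suspicious on its face: gate VMs have on the order of 8 GB of RAM, so a value that large is more plausibly the process's virtual address space (VSZ) than resident memory, which would also explain why the OOM killer never fires. A quick converter for dstat's "most expensive memory process" field makes the mismatch obvious; the `name value` field layout assumed here matches the linked dstat log, and this is only a sketch.

```python
# Convert dstat's "most expensive memory process" field (e.g. "kuryr-daemon
# 2577G") into bytes so it can be compared against the host's physical RAM.
# If the value dwarfs physical memory, dstat is almost certainly reporting
# virtual address space rather than RSS.

UNITS = {'B': 1, 'k': 1024, 'M': 1024**2, 'G': 1024**3, 'T': 1024**4}

def parse_mem_field(field):
    """'kuryr-daemon 2577G' -> ('kuryr-daemon', <bytes as int>)."""
    name, value = field.rsplit(' ', 1)
    if value[-1] in UNITS:
        number, unit = value[:-1], value[-1]
    else:
        number, unit = value, 'B'
    return name, int(float(number) * UNITS[unit])
```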
14:07 <dulek> dmellado: Not really, no. The logs I'm sending you now are from INAP.
14:07 <dulek> Second - once kuryr-daemon is started it drains all the swap.
14:07 <dulek> So this might make some sense - the issues in etcd might be due to some IO delays.
14:08 <celebdor> dulek: s/drains/occupies?
14:09 <ltomasbo> dulek, didn't we have a healthcheck that was restarting kuryr-daemon if it was eating more than X MB?
14:09 <dulek> celebdor: Well, swap usage rises. But now that I look again it's only using up to 3 GB of it, so not all.
14:09 <dulek> ltomasbo: That's only on containerized.
14:10 <celebdor> dulek: really?
14:10 <celebdor> a docker leak then?
14:10 <yboaron_> dulek, Q: Does finding this string in the etcd log file mean that etcd lost requests? 'avoid queries with large range/delete range!'
14:10 <celebdor> that would be unusual, I suppose
14:10 <dulek> celebdor: In that case kuryr-daemon runs as a process, not a container.
14:11 <dmellado> dulek: I guess he's referring to etcd
14:11 <dulek> yboaron_: Not 100% sure really.
14:11 <dmellado> but if kuryr-daemon is the process eating up the memory, then it doesn't apply
14:11 <dulek> dmellado: We run etcd as a process as well.
14:11 <celebdor> dulek: so does this happen on containerized or not? Not sure I understood
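The health check ltomasbo mentions (restarting kuryr-daemon past a memory threshold, in containerized deployments) boils down to comparing the daemon's resident set size against a limit. A minimal, illustrative version reading `/proc/<pid>/statm` follows; this is not kuryr's actual implementation, just a sketch of the mechanism.

```python
import os

# Field 2 of /proc/<pid>/statm is the resident set size in pages. A liveness
# probe can report unhealthy (triggering a container restart) once RSS
# crosses a configured limit. Illustrative only; kuryr's real health checks
# live in its own modules.

def rss_bytes(statm_contents, page_size=None):
    """statm_contents: the text of /proc/<pid>/statm."""
    if page_size is None:
        page_size = os.sysconf('SC_PAGE_SIZE')
    return int(statm_contents.split()[1]) * page_size

def memory_healthy(statm_contents, limit_bytes, page_size=None):
    return rss_bytes(statm_contents, page_size) <= limit_bytes
```

Note that such a check reads RSS, not virtual size, which is one reason a huge dstat "memory" figure would not necessarily trip it.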
14:11 <dulek> celebdor: Ah, sorry. I saw it on non-containerized at the moment.
14:12 <dmellado> it could be due to the healthcheck
14:12 <dulek> But I can check…
14:12 <dmellado> it'd be interesting to see if we can replicate it if we disable that healthcheck on containerized
14:12 <dmellado> dulek: did you freeze an upstream VM to try to fetch some data?
14:13 <dulek> Looks the same in containerized: kuryr-daemon 2589G
14:13 <dulek> celebdor: Note that we run kuryr-daemon as a privileged container, so it's in the host's PID namespace, right?
14:13 <dulek> dmellado: Not yet, I'll try this locally first.
14:14 <dulek> Okay, so if nobody has an idea, I'll try to poke at it a bit more.
14:14 <dulek> Hopefully it's some dstat quirk
14:14 <dulek> But in any case I guess it's worth looking into.
14:14 <dmellado> dulek: in any case let me know if you get stuck trying this locally
14:14 <dmellado> and we'll take a look together in a VM, either upstream or on rdocloud
14:14 <celebdor> dulek: indeed
14:15 <dulek> Okay, cool!
14:15 <dmellado> thanks for bringing this up, dulek
14:16 <dmellado> so, I also wanted to let you know
14:17 <dmellado> that I've been working on getting python-openshift packaged for Fedora
14:17 <dmellado> which involves a hell of a lot of bureaucracy
14:17 <dmellado> and being sponsored into the fedora-packagers group
14:17 <celebdor> I thought it was just an rpmspec file
14:17 <dmellado> I'm not sure if, besides me, there's anyone on the team who also got those privileges
14:17 <dmellado> but if you could vote for this I'd be happy
14:17 <dmellado> celebdor: it involved a dependency hell xD
14:18 <dmellado> https://bugzilla.redhat.com/show_bug.cgi?id=1654833
14:18 <openstack> bugzilla.redhat.com bug 1654833 in Package Review "Review Request: python-kubernetes - add python-kuberenetes to EPEL 7" [Medium,New] - Assigned to nobody
14:18 <dmellado> https://bugzilla.redhat.com/show_bug.cgi?id=1654835
14:18 <openstack> bugzilla.redhat.com bug 1654835 in Package Review "Review Request: python-google-auth - Add python-google-auth to EPEL 7" [Medium,Post] - Assigned to zebob.m
14:18 <dmellado> https://bugzilla.redhat.com/show_bug.cgi?id=1659281
14:18 <openstack> bugzilla.redhat.com bug 1659281 in Package Review "Review Request: python-dictdiffer - Dictdiffer is a module that helps you to diff and patch dictionaries" [Medium,New] - Assigned to nobody
14:18 <dmellado> https://bugzilla.redhat.com/show_bug.cgi?id=1659282
14:18 <openstack> bugzilla.redhat.com bug 1659282 in Package Review "Review Request: python-string_utils - A python module containing utility functions for strings" [Medium,New] - Assigned to nobody
14:18 <dmellado> https://bugzilla.redhat.com/show_bug.cgi?id=1659286
14:18 <openstack> bugzilla.redhat.com bug 1659286 in Package Review "Review Request: python-openshift - Python client for the OpenShift API" [Medium,New] - Assigned to nobody
14:18 <dmellado> celebdor: ltomasbo maysams yboaron_ any topics from your side?
14:19 <celebdor> no
14:19 <celebdor> updates from NP?
14:19 <yboaron_> nothing from my side either
14:19 <maysams> dmellado: celebdor: yep
14:19 <dmellado> go for it maysams
14:19 <ltomasbo> I have a topic
14:20 <ltomasbo> https://bugs.launchpad.net/kuryr-kubernetes/+bug/1808506
14:20 <openstack> Launchpad bug 1808506 in kuryr-kubernetes "neutron-vif requires admin rights" [Medium,New] - Assigned to Luis Tomas Bolivar (ltomasbo)
14:20 <ltomasbo> sorry maysams, go ahead...
14:20 <maysams> I am working on updating the SG rules in the CRD when pods are created/deleted/updated
14:20 <maysams> https://review.openstack.org/#/c/625588
14:20 <maysams> ltomasbo: it's okay
14:20 <ltomasbo> great! I'll take a look asap!
14:20 <celebdor> ltomasbo: that's BM only
14:21 <celebdor> maysams: you also reported some bug, right?
14:21 <maysams> celebdor: yes
14:21 <maysams> https://bugs.launchpad.net/kuryr-kubernetes/+bug/1808787
14:21 <openstack> Launchpad bug 1808787 in kuryr-kubernetes "Pod Label Handler tries to handle pod creation causing controller restart" [Undecided,New]
14:21 <ltomasbo> celebdor, it's still broken... and it raises another concern, which is that we are using the wrong type of tenant both on devstack and on the gates
14:22 <yboaron_> ltomasbo, what do you mean by 'under a normal tenant' in the bug description, do we use a tenant with extra permissions in our gates?
14:22 <dmellado> ltomasbo: is that related to the devstack tenant permissions you told me about?
14:22 <celebdor> ltomasbo: that is my biggest concern
14:22 <maysams> celebdor: basically the pod label handler is waiting for the vif handler to handle the event, but due to resource-not-ready exceptions this will not work
14:22 <ltomasbo> yboaron_, unfortunately, yes
14:23 <yboaron_> ltomasbo, can you elaborate on that?
14:23 <celebdor> maysams: why wait and not drop?
14:23 <dmellado> ltomasbo: I'll need to spend some time investigating it
14:23 <celebdor> if it drops the event when it has no annotation
14:23 <ltomasbo> dmellado, it would be great if you could take a look, yes!
14:23 <celebdor> it will eventually get the event from the patch that the vif handler does
14:23 <dulek> celebdor: Hey, that makes sense.
14:23 <dmellado> ltomasbo: I'll sync with you after the meeting to work out the details, thanks!
14:23 <ltomasbo> dmellado, it is not only on the gates, but also on the k8s tenant that we create on devstack
14:23 <maysams> celebdor: indeed, that makes sense
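celebdor's suggestion -- drop the event instead of waiting -- works because the vif handler's own annotation patch generates a fresh update event that the label handler will see later. A minimal sketch of that pattern; the annotation key and handler shape here are illustrative, not kuryr's exact code.

```python
# If the pod has not been annotated by the vif handler yet, drop the event
# instead of blocking on a resource-not-ready retry loop: the vif handler's
# patch to the pod will itself produce a new update event, so no work is lost.

VIF_ANNOTATION = 'openstack.org/kuryr-vif'  # illustrative key

def on_pod_update(pod, update_sg_rules):
    annotations = pod.get('metadata', {}).get('annotations') or {}
    if VIF_ANNOTATION not in annotations:
        return 'dropped'   # not ready yet; a later event will retry for us
    update_sg_rules(pod)
    return 'handled'
```

ltomasbo's caveat still stands: this sidesteps the underlying issue (one handler blocking another on the same event stream) rather than fixing it.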
14:23 <celebdor> :O
14:24 <celebdor> let's try that then
14:24 <dmellado> incredible, something that celebdor says makes sense
14:24 <dmellado> xD
14:24 <maysams> celebdor: thanks :)
14:24 <celebdor> dmellado: I feel the same way
14:24 <celebdor> believe me
14:24 <dmellado> something really odd is going on
14:24 <ltomasbo> maysams, celebdor: yep, it is easy to just drop the event, but worth keeping in mind that we have a problem there
14:24 <dmellado> :D
14:24 <dmellado> ltomasbo: maysams we'll treat that as a bug
14:24 <dmellado> so if you haven't already, please open it in LP
14:24 <celebdor> ltomasbo: why is this a problem?
14:25 <ltomasbo> dmellado, it is a bug we hit on network policies, but it is not a bug produced by network policies...
14:25 <celebdor> the pod label handler only cares about annotated pods
14:25 <celebdor> I think it is fair
14:25 <celebdor> on the other hand
14:25 <ltomasbo> celebdor, if we have different handlers listening to the same events, why is one blocked by the other?
14:25 <celebdor> ltomasbo: dmellado: we need the tenant to be a regular neutron-policy tenant
14:25 <celebdor> like right now
14:25 <maysams> ltomasbo is right
14:26 <ltomasbo> celebdor, I agree with the workaround
14:26 <dmellado> ltomasbo: totally agree
14:26 <ltomasbo> celebdor, but that is just avoiding the problem, not fixing it
14:26 <celebdor> ltomasbo: well, I don't think it's that odd for one handler to depend on stuff done by the other
14:26 <ltomasbo> celebdor, no no, I don't mean the depends-on
14:26 <celebdor> just that the current handler design makes it weird
14:26 <ltomasbo> celebdor, what would happen in the case that one handler does not depend on the other?
14:27 <ltomasbo> celebdor, if that is the design, it does not make a lot of sense to have different handlers... perhaps simpler to have just one...
14:27 <dmellado> ltomasbo: but I think having several is better for granularity
14:27 <dmellado> if by any chance we have handlers a and b
14:27 <celebdor> ltomasbo: I don't understand the "what would happen if they do not depend on each other?"
14:28 <dmellado> and b depends on a then it's fine
14:28 <dmellado> but maybe some user would like to use just 'a' and it would use less memory
14:28 <yboaron> sorry guys, I was disconnected for the last few minutes .. network issues
14:28 <celebdor> they will just annotate the resource (if they need to) in some random order
14:28 <celebdor> what's the prob with that
14:28 <celebdor> ?
14:29 <ltomasbo> celebdor, imagine the vif handler was not doing pod annotations and other neutron actions
14:29 <celebdor> ok
14:29 <ltomasbo> celebdor, it would never get executed due to the other handler being scheduled first
14:30 <ltomasbo> anyway, we can discuss this on the kuryr channel after the meeting
14:31 <dmellado> ltomasbo: yep, let's do that
14:31 <celebdor> ltomasbo: is the handler now serializing per resource?
14:31 <celebdor> ok, ok
14:31 <celebdor> we can take it up later
14:32 <dmellado> gcheresh: itzikb juriarte
14:32 <dmellado> o/
14:32 <dmellado> anything you'd like to add to the meeting?
14:32 <dmellado> I'm pretty sure you'd be interested in the python-openshift thing
14:33 <dmellado> https://media1.tenor.com/images/184a80bf791b211b0e2f3f02badab20e/tenor.gif?itemid=12469549
14:33 <dmellado> awesome!
14:33 <dmellado> xD
14:34 <gcheresh> dmellado: had network issues as well in TLV (everyone did)
14:34 <dmellado> gcheresh: ouch, hope you get it fixed
14:34 <gcheresh> it just disconnected and now it's back again
14:34 <dmellado> just wanted to ask whether you can push for the python-openshift rpms to get accepted
14:35 <dmellado> I pointed the links out earlier and they'll be available in the logs
14:35 <dmellado> besides that, is there anything you'd like to cover? ;)
14:39 <dmellado> All right, so it seems that was it for the day!
14:39 <dmellado> ltomasbo: celebdor let's continue the discussion on the kuryr channel
14:39 <dmellado> thanks for attending!
14:39 <dmellado> #endmeeting
14:39 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
14:39 <openstack> Meeting ended Mon Dec 17 14:39:48 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:39 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/kuryr/2018/kuryr.2018-12-17-14.00.html
14:39 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/kuryr/2018/kuryr.2018-12-17-14.00.txt
14:39 <openstack> Log:            http://eavesdrop.openstack.org/meetings/kuryr/2018/kuryr.2018-12-17-14.00.log.html
