14:00:07 <dmellado> #startmeeting kuryr
14:00:07 <openstack> Meeting started Mon Mar 12 14:00:07 2018 UTC and is due to finish in 60 minutes.  The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:11 <openstack> The meeting name has been set to 'kuryr'
14:00:22 <dmellado> Hi fellow kuryrs, who's here today for the meeting?
14:00:27 <irenab> hi
14:00:37 <dmellado> We'd need to rush it here, before my flu remedy wears off xD
14:00:37 <ltomasbo> o/
14:00:42 <dmellado> hi irenab ltomasbo
14:00:49 <dmellado> #chair ltomasbo irenab
14:00:50 <openstack> Current chairs: dmellado irenab ltomasbo
14:01:00 <dulek> Hello!
14:01:06 <kaliya> o/
14:01:06 <dmellado> #chair dulek
14:01:07 <openstack> Current chairs: dmellado dulek irenab ltomasbo
14:01:13 <kaliya> sorry to hear that, dmellado, get better
14:01:14 <irenab> will have to leave in 30 mins or so
14:01:17 <dmellado> #chair kaliya
14:01:18 <openstack> Current chairs: dmellado dulek irenab kaliya ltomasbo
14:01:25 <dmellado> thanks kaliya ;)
14:01:37 <irenab> dmellado, time to let Dublin go ...
14:01:39 <dmellado> the way back from the ptg was quite terrible, tbh
14:01:40 <dmellado> xD
14:01:50 <dmellado> irenab: I know! I still have nightmares about it
14:02:00 <dmellado> so, let's start
14:02:29 <dmellado> now seriously, now that I'm back and somewhat healthy I'll schedule a ptg summary/post-mortem session
14:02:35 <dmellado> will send an email to the ML
14:02:43 <dmellado> #topic kuryr-kubernetes
14:03:49 <dmellado> so, on the topic, tomorrow vikasc will be hosting a session at 12:00 CET about multi networks
14:04:08 <leyal> o/
14:04:16 <yboaron> o/
14:04:17 <dmellado> and we've finally been able to finish the migration of our gates to zuulv3 native syntax
14:04:41 <ltomasbo> \o/
14:04:48 <dmellado> I guess we were one of the first projects and (sadly) other projects such as neutron are copying our setup, so let's see how everything falls apart
14:04:57 <celebdor> hello
14:05:07 <dmellado> hi fellas
14:05:23 <irenab> dmellado, is it about multi node?
14:05:53 <dmellado> irenab: on the multinode I'm basically waiting for andreaf to be done with his tempest-multinode parent gate
14:06:02 <dmellado> once that's done, ours should come along almost seamlessly
14:06:35 <irenab> ok
14:06:39 <dmellado> I won't be going into a recap of the PTG as we'll be having a session on that, so besides that
14:06:53 <dmellado> is there anything you'd like to share about kuryr-kubernetes today? ;)
14:07:14 <ltomasbo> I have this patch waiting for some reviews/comments https://review.openstack.org/#/c/548673
14:07:42 <ltomasbo> it is needed to avoid problems when the subnet-id used in kuryr.conf changes
14:08:06 <ltomasbo> as without it, the pools will make it use the wrong ports (on the wrong/previous network)
14:08:16 <irenab> ltomasbo, will take a look
14:08:22 <dmellado> ltomasbo: rebase it and I'll take a look once it passes CI again
14:08:26 <ltomasbo> we need to discuss if you are fine with using network_id instead of subnet-id
14:08:29 <celebdor> ltomasbo: I see "cannot merge"
14:08:36 <ltomasbo> really??
14:08:39 <celebdor> does it need rebase?
14:08:40 <dmellado> ltomasbo: yep
14:08:41 <celebdor> yes
14:08:45 <ltomasbo> damn firefox. I'm moving back to chrome
14:08:51 <dmellado> lol
14:09:04 <ltomasbo> I'll rebase it!
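(Editor's note: for context on ltomasbo's patch above, the port pools are currently keyed on the subnet UUID set in kuryr.conf, so recreating that subnet leaves the pools handing out ports on the old network. A minimal sketch of the relevant settings, assuming the usual [neutron_defaults] section; the network-keyed option is only the proposal under review and its name here is hypothetical.)

    [neutron_defaults]
    # current behaviour: pools and new pods are tied to this subnet UUID
    pod_subnet = <pod-subnet-uuid>
    service_subnet = <service-subnet-uuid>
    # proposed (hypothetical name): key the pools on the network instead,
    # so a recreated subnet does not get ports from the previous one
    # pod_network = <pod-network-uuid>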
14:10:00 <dulek> I have a few patches that need reviews as well:
14:10:08 <dulek> This one was discussed on the PTG: https://review.openstack.org/#/c/550170/
14:10:22 <dulek> That's deprecation of daemon-less deployment.
14:10:41 <dulek> This one updates DevStack to install K8s 1.9: https://review.openstack.org/#/c/531128/
14:11:04 <dulek> And this one is required to fix our kuryr/cni images on Dockerhub: https://review.openstack.org/#/c/551261/
14:11:24 <dmellado> Another thing I'd like to mention in our exciting world of CI is that the Octavia folks have been kind enough to provide us with nightly images, which means we won't need to create the amphora image each time we run CI; that should save us about 10 minutes on the Octavia-enabled environments
14:11:27 <celebdor> ltomasbo: I see it with firefox
14:11:32 <celebdor> are you still on firefox 3.0?
14:11:38 <dmellado> dulek: I'll take a look at those
14:11:49 <ltomasbo> celebdor, I see it https://review.openstack.org/#/q/project:openstack/kuryr-kubernetes
14:11:52 <dmellado> celebdor: he runs iceweasel
14:11:56 <ltomasbo> celebdor, but not on the patch itself...
14:12:00 <celebdor> iceweasel...
14:12:03 <celebdor> wait, wait
14:12:10 <celebdor> are you using debian?
14:12:14 <dmellado> lol
14:12:26 <ltomasbo> celebdor, Mozilla Firefox 58.0.2
14:12:49 <ltomasbo> I have fedora 27! xD
14:12:54 <dmellado> ltomasbo: F5?
14:13:04 <dmellado> seems to be loading just fine with same settings from my side
14:13:21 <yboaron> irenab, celebdor : I have these two patches waiting for your feedback : https://review.openstack.org/#/c/546784/ https://review.openstack.org/#/c/549739/
14:13:44 <irenab> yboaron, on my list, will try review by tomorrow
14:13:54 <yboaron> irenab, 10x
14:14:04 <ltomasbo> dmellado, perhaps mine wants to keep me in a good mood! xD no rebase needed...
14:14:22 <dmellado> lol
14:14:46 <celebdor> man... so much queue reviewing
14:14:59 <celebdor> I'm glad we have so many upcoming fixes
14:15:01 <dmellado> all right, another topic I wanted to tackle is that it looks that our dependencies will soon be ok in terms of the renaming services
14:15:05 <dmellado> q-foo to neutron-foo
14:15:14 <dmellado> I'll be submitting one patch as soon as the octavia one lands
14:15:20 <celebdor> okey dokey
14:15:42 <dmellado> https://review.openstack.org/#/c/544281/
14:15:46 <dmellado> thanks cgoncalves!
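(Editor's note: the q-foo to neutron-foo rename refers to devstack's service names; the kuryr gate jobs and local.conf need to switch from the legacy q-* services to the new-style neutron-* ones. A hedged sketch of what that looks like in local.conf; the exact new-style names below are assumptions, devstack's lib/neutron is the authoritative list.)

    # legacy devstack service names being phased out:
    #   enable_service q-svc q-agt q-dhcp q-l3 q-meta
    # assumed new-style equivalents:
    enable_service neutron-api neutron-agent neutron-dhcp neutron-l3 neutron-metadata-agent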
14:16:09 <celebdor> yboaron: https://review.openstack.org/#/c/549739/3 should be backported
14:16:29 <yboaron> celebdor, will take care of it
14:16:45 <dmellado> +1, yboaron please do ;)
14:16:47 <celebdor> and https://review.openstack.org/#/c/546784/ as well
14:17:04 <celebdor> in general, please, backport all the bug fixes
14:17:15 <cgoncalves> who? what? when? :)
14:17:18 <dmellado> celebdor: https://review.openstack.org/#/c/546784/ isn't even merged yet
14:17:19 <celebdor> unless the bug is not present in the stable branch
14:17:34 <celebdor> dmellado: I say it now so it won't be forgotten
14:17:36 <dmellado> cgoncalves: lol, thanks on keeping the q- to neutron- octavia patch alive
14:17:51 <irenab> celebdor, what is the policy about back porting?
14:17:56 <cgoncalves> dmellado: you're more than welcome :)
14:18:15 <ltomasbo> btw, I was also testing this patch last week https://review.openstack.org/#/c/550421/
14:18:34 <ltomasbo> celebdor, ^^ this fixes the problem with octavia and ovs-firewall
14:18:37 <dulek> ltomasbo: https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
14:18:52 <dmellado> ltomasbo: awesome
14:19:03 <celebdor> irenab: the policy I'd like is that we backport all the bugfixes for bugs present in the stable branch
14:19:04 <dmellado> btw, celebdor, now that you're around here
14:19:19 <celebdor> at least until the following release
14:19:20 <dmellado> could you please add dulek to the launchpad group so he can also tackle bug triaging
14:19:45 <irenab> celebdor, as far as I remember, patches that change config are not considered candidates for backporting
14:19:49 <celebdor> alrighty
14:19:57 <dulek> irenab: Fair point.
14:20:22 <celebdor> irenab: only bugfixes. If there was a config bug, I'd say we can consider it
14:20:22 <irenab> I guess we just need to follow the openstack backporting policies
14:20:24 <celebdor> but with care
14:20:34 <celebdor> irenab: you're right
14:20:40 <dmellado> dulek: "Each project team should designate a stable branch cross-project liaison as the main point of contact for all stable branch support issues in the team" I hereby...
14:20:41 <dmellado> xD
14:20:42 <celebdor> that should be the base
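(Editor's note: the backport flow being discussed is the standard OpenStack one: cherry-pick the merged fix onto the stable branch and push it for review. A minimal sketch, assuming stable/queens is the current stable branch and git-review is already set up:)

    git fetch origin
    git checkout -b queens-backport origin/stable/queens
    git cherry-pick -x <sha-of-merged-fix>   # -x records the original commit id
    git review stable/queens                 # push the backport for review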
14:21:02 <celebdor> dmellado: are you volunteering with that quote?
14:21:08 <dmellado> nope, look at it
14:21:15 <dulek> dmellado: Are you volunteering me? ;)
14:21:21 <dmellado> I meant to xD
14:21:31 <dmellado> but I don't want to overload you
14:21:39 <dulek> No problem at all, I'm already on #openstack-release, so I can join stable channel as well. :)
14:21:41 <dmellado> so either you or ltomasbo, up to you folks ;)
14:22:00 <celebdor> alright. Let's do this in the best possible way
14:22:06 <dmellado> celebdor: you do it
14:22:09 <celebdor> who wants to be the stable person?
14:22:14 <celebdor> I'm not stable myself...
14:22:27 <celebdor> irenab: dulek: ltomasbo: yboaron:
14:22:29 <celebdor> ?
14:22:34 <dulek> I can take that.
14:22:42 <celebdor> any other candidate?
14:22:47 <celebdor> I'm dying to do a coin flip
14:22:57 * dmellado is picturing a rolling plant...
14:23:29 <ltomasbo> xD
14:23:32 <dmellado> ltomasbo: heads or tails?
14:23:39 <ltomasbo> I can help too
14:23:50 <irenab> https://docs.openstack.org/project-team-guide/stable-branches.html#review-guidelines
14:24:20 <celebdor> alright, heads or tails?
14:24:33 <dulek> heads
14:25:52 <celebdor> kangaroo tails
14:26:06 * celebdor used an Australian coin from the summit
14:26:26 <dmellado> celebdor: ok, so who won
14:26:28 <dmellado> I'll trust you
14:26:45 <celebdor> ltomasbo
14:26:52 <dmellado> tia ra rannnn
14:26:57 <ltomasbo> xD
14:26:59 <celebdor> "won"
14:27:01 <celebdor> xd
14:27:14 <dmellado> in any case ltomasbo, we'll split the work, as the docs say it'd be the PTL's responsibility in the end
14:27:31 <celebdor> it may be because my house is very anti monarchy and the heads was the british queen
14:27:33 <ltomasbo> ok!
14:27:46 <ltomasbo> lol
14:28:46 <dmellado> all right, anything else besides the exciting announcement of our new stable branch liaison?
14:29:07 <celebdor> I don't have anything atm
14:29:19 <kaliya> dmellado: yes, I don't remember if someone already had a look at the sriov patch failure in zuul?
14:29:29 <celebdor> I did not
14:29:37 <celebdor> dmellado: dulek: either of you take that on
14:29:37 <dmellado> kaliya: not really, link to it?
14:29:39 <celebdor> *one
14:29:51 <dmellado> celebdor: you do, you love it xD
14:29:52 <kaliya> dmellado: here around
14:29:53 <kaliya> https://review.openstack.org/#/c/524592/
14:30:03 * dulek looks.
14:30:30 <dmellado> kaliya: I'd first rebase it again
14:30:39 <dmellado> and see how the patch behaves ;)
14:30:47 <kaliya> dmellado: ok, if needed we'll ask in IRC
14:31:04 <dmellado> sure, please ping us on the channel after it
14:31:16 <kaliya> thanks
14:31:20 <dmellado> the POST_FAILURE results were in any case due to infra errors
14:31:22 <dmellado> so rebase and recheck
14:31:29 <dmellado> yw!
14:31:48 <dmellado> gcheresh_: juriarte!
14:31:50 <dulek> kaliya: Looks like pod never gets spawned.
14:32:07 <dmellado> any exciting news on the testing area?
14:32:43 <ltomasbo> ohh, on the testing area, we need to fix the OVN gate, as it has some security group issues
14:32:51 <celebdor> ltomasbo: wait...
14:32:55 <dulek> kaliya: http://logs.openstack.org/92/524592/10/check/kuryr-kubernetes-tempest-octavia/2b987eb/controller/logs/screen-kuryr-kubernetes.txt.gz#_Feb_22_15_01_05_126731
14:32:57 <celebdor> What kind of security issues?
14:32:57 <ltomasbo> dmellado, we can try to take a look at it tomorrow
14:33:00 <dulek> kaliya: Here's the error message.
14:33:08 <dmellado> ltomasbo: sure, sounds like a plan
14:33:19 <ltomasbo> celebdor, I managed to make the VM -> pod ping work, as well as the pod to pod
14:33:23 <ltomasbo> but pod -> VM is failing
14:33:26 <kaliya> dulek: ty!
14:33:58 <ltomasbo> the initial problem was with the FIP not being properly configured, but that was fixed
14:34:16 <ltomasbo> celebdor, and I guess we are creating different networks for the VM and the pods, and that can be the problem
14:34:34 <dmellado> ltomasbo: yep
14:34:37 <ltomasbo> though I would have expected the problem to be in the other direction, VM -> pod, rather than pod -> VM
14:34:49 <dmellado> the vm is on a per-test created tenant + network
14:35:06 <dmellado> but icmp + ssh should be allowed directly there
14:35:10 <dmellado> ltomasbo: are you sure about it?
14:35:30 <ltomasbo> dmellado, no, I'm not
14:35:43 <dmellado> let's try to get some environment tomorrow and check
14:36:08 <ltomasbo> dmellado, sounds good, it must be some misconfiguration somewhere
14:36:44 <dmellado> irenab snapiri any news on the similar issue with dragonflow?
14:37:55 <leyal> I think that irenab already dropped off ..
14:38:07 <dmellado> oanson:
14:38:09 <dmellado> ^^
14:38:19 <dmellado> well, I'll sync with them after the meeting
14:38:22 <oanson> dmellado, hi. Gimme a sec to sync
14:38:29 <dmellado> oanson: sure, thanks!
14:38:38 <gcheresh_> By the way, is anyone familiar with how to delete a pod with --force? I don't see such an option in the API, and without it it takes 30 seconds to delete a pod
14:39:13 <celebdor> gcheresh_: you can always do it with the cli and check the k8s API to see what got passed to it
14:39:22 <celebdor> directly with the API I don't know
14:39:23 <oanson> dmellado, the bug-fix is stuck. Every time I solve one scenario, another breaks. I am still working on it.
14:39:37 <gcheresh_> celebdor: I did, but it still doesn't work in API
14:39:43 <dmellado> oanson: ack, thanks for updating us
14:39:49 <dmellado> let us know if we can help you in any way
14:40:18 <oanson> Sure. Will do. Thanks!
14:40:28 <celebdor> gcheresh_: the cli does use the API
14:41:13 <gcheresh_> I know, but magically it behaves differently when doing it with the CLI vs. the API, at least for me
14:41:28 <dmellado> gcheresh_: using the built-in function delete_pod it takes 30s?
14:41:42 <gcheresh_> dmellado: yes
14:41:44 <ltomasbo> gcheresh_, you mean python API, right?
14:41:51 <gcheresh_> ltomasbo: yes
14:41:57 <ltomasbo> celebdor, ^^
14:42:18 <ltomasbo> I guess the python client does not cover the complete k8s API
14:42:52 <celebdor> ltomasbo: it should let you pass params
14:43:03 <celebdor> gcheresh_: please, paste me what you tried
14:43:06 <ltomasbo> yep
14:43:25 <dmellado> gcheresh_: grace_period_seconds
14:43:27 <dmellado> change that
14:43:39 <dmellado> https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#delete_namespaced_pod
14:43:48 <gcheresh_> dmellado: with grace_periods=0 it takes 30 seconds
14:43:50 <leyal> ltomasbo, I think it should, as it's built from the swagger definitions of the k8s API
14:43:55 <dmellado> :\ really?
14:44:30 <gcheresh_> yes, doing it manually: only when you send it with --force does it execute immediately; without it, 30 seconds
14:45:04 <celebdor> gcheresh_: did you check which value is passed by the CLI to the API?
14:45:22 <dmellado> I was wondering about the garbage policy
14:45:23 <dmellado> 'Background' - allow the garbage collector to delete the dependents in the background
14:45:32 <gcheresh_> {"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":0}
14:46:19 <celebdor> gcheresh_: and when you do it from Python
14:46:20 <leyal> I also want to remind everyone about the network-policy patches - https://review.openstack.org/#/q/blueprint+k8s-network-policies, still waiting for reviews ..
14:46:25 <celebdor> what does the API see?
14:46:32 <celebdor> thanks leyal
14:46:49 <gcheresh_> celebdor: I tried to do the same with python and it didn't work
14:46:56 <dmellado> leyal: thanks, I'll take a look at them too
14:46:58 <gcheresh_> I mean took 30 sec
14:47:04 <dmellado> gcheresh_: so the python options and the cli options are the same?
14:48:03 <gcheresh_> in the CLI you have --force, which translates to what I sent; when not using --force it has grace-period=1 in the curl, even though you send grace-period=0
14:48:11 <celebdor> my point is, did python successfully get the params to the API
14:48:19 <celebdor> or did they get lost in the python world?
14:48:33 <gcheresh_> don't know, will check
14:48:39 <celebdor> thanks gcheresh_
14:48:44 <celebdor> update me on that later, please
14:48:48 <dmellado> gcheresh_: could you try to use this?
14:49:07 <gcheresh_> ok
14:49:11 <dmellado> api_response = api_instance.delete_namespaced_pod(name, namespace, body, pretty=pretty, grace_period_seconds=grace_period_seconds, orphan_dependents=orphan_dependents, propagation_policy=propagation_policy)
14:49:11 <dmellado> pprint(api_response)
14:49:25 <dmellado> let's try to compare the api_response with your json
14:50:22 <dmellado> gcheresh_: in fact, you can just capture the reply from delete_pod
14:50:36 <gcheresh_> dmellado: what params should I put for orphan_dependents, propagation_policy and pretty - the default ones?
14:51:03 <gcheresh_> I will send the API that is sent shortly
14:53:09 <dmellado> gcheresh_: so, here
14:53:10 <dmellado> https://github.com/openstack/kuryr-tempest-plugin/blob/master/kuryr_tempest_plugin/tests/scenario/base.py#L78-L84
14:53:35 <dmellado> you can just add an option to the kwargs so it'll be grace_period_seconds=foo
14:53:46 <dmellado> let's add that 0 wait option and see if this works
14:54:11 <gcheresh_> dmellado: That's what I did, that's how I sent grace-period=0
14:55:11 <gcheresh_> I even changed the body to {'kind': 'DeleteOptions', 'apiVersion': 'v1', 'grace_period_seconds': 0} as that is what I saw on the CLI call, but still no luck
14:55:16 <dmellado> gcheresh_: then let's capture that and see what's different
14:55:23 <dmellado> gcheresh_: I'll try to take a look too
14:55:29 <gcheresh_> thanks
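(Editor's note: for reference, a minimal sketch of the force-delete attempt with the Python kubernetes client, mirroring the DeleteOptions payload the CLI sends above; the pod name and namespace are placeholders, and whether this alone avoids the 30-second wait is exactly what is being debugged here.)

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Rough equivalent of `kubectl delete pod mypod --grace-period=0 --force`:
    # pass grace_period_seconds both in the DeleteOptions body and as a kwarg.
    body = client.V1DeleteOptions(grace_period_seconds=0,
                                  propagation_policy='Background')
    resp = v1.delete_namespaced_pod(name='mypod', namespace='default',
                                    body=body, grace_period_seconds=0)
    print(resp)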
14:55:36 <dmellado> so we're almost out of time
14:55:42 <dmellado> #topic open discussion
14:56:21 <dmellado> aaand I guess I'll close the meeting and save some minutes for a quick coffee for all
14:56:26 <dmellado> thanks for attending!
14:57:26 <dmellado> #endmeeting