14:00:07 #startmeeting kuryr
14:00:07 Meeting started Mon Mar 12 14:00:07 2018 UTC and is due to finish in 60 minutes. The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:11 The meeting name has been set to 'kuryr'
14:00:22 Hi fellow kuryrs, who's here today for the meeting?
14:00:27 hi
14:00:37 We'd need to rush it here, before my flu remedy wears off xD
14:00:37 o/
14:00:42 hi irenab ltomasbo
14:00:49 #chair ltomasbo irenab
14:00:50 Current chairs: dmellado irenab ltomasbo
14:01:00 Hello!
14:01:06 o/
14:01:06 #chair dulek
14:01:07 Current chairs: dmellado dulek irenab ltomasbo
14:01:13 sorry to hear, dmellado, get better
14:01:14 will have to leave in 30 mins or so
14:01:17 #chair kaliya
14:01:18 Current chairs: dmellado dulek irenab kaliya ltomasbo
14:01:25 thanks kaliya ;)
14:01:37 dmellado, time to let Dublin go ...
14:01:39 the way back from the ptg was quite terrible, tbh
14:01:40 xD
14:01:50 irenab: I know! I still have nightmares about it
14:02:00 so, let's start
14:02:29 now seriously, now that I'm back and somewhat healthy I'll schedule a PTG summary/post-mortem session
14:02:35 will send an email to the ML
14:02:43 #topic kuryr-kubernetes
14:03:49 so, on the topic, tomorrow vikasc will be hosting a session at 12:00 CET about multi networks
14:04:08 o/
14:04:16 o/
14:04:17 and we've finally been able to finish the migration of our gates to zuulv3 native syntax
14:04:41 \o/
14:04:48 I guess we were one of the first projects and (sadly) other projects such as neutron are copying our setup, so let's see how everything falls apart
14:04:57 hello
14:05:07 hi fellas
14:05:23 dmellado, is it about multi node?
14:05:53 irenab: on the multinode I'm basically waiting for andreaf to be done with his tempest-multinode parent gate
14:06:02 once that's done, ours should come by almost seamlessly
14:06:35 ok
14:06:39 I won't be doing a recap of the PTG as we'll be having a session on that, so besides that
14:06:53 is there anything you'd like to share about kuryr-kubernetes today? ;)
14:07:14 I have this patch waiting for some reviews/comments: https://review.openstack.org/#/c/548673
14:07:42 it is needed to avoid the problem of changing the subnet-id that kuryr.conf used
14:08:06 as without it, the pools will make it use the wrong ports (on the wrong/previous network)
14:08:16 ltomasbo, will take a look
14:08:22 ltomasbo: rebase it and I'll take a look once it passes CI again
14:08:26 we need to discuss if you are fine with using network_id instead of subnet-id
14:08:29 ltomasbo: I see "cannot merge"
14:08:36 really??
14:08:39 does it need a rebase?
14:08:40 ltomasbo: yep
14:08:41 yes
14:08:45 damn firefox. I'm moving back to chrome
14:08:51 lol
14:09:04 I'll rebase it!
14:10:00 I have a few patches that need reviews as well:
14:10:08 This one was discussed at the PTG: https://review.openstack.org/#/c/550170/
14:10:22 That's deprecation of daemon-less deployment.
14:10:41 This one updates DevStack to install K8s 1.9: https://review.openstack.org/#/c/531128/
14:11:04 And this one is required to fix our kuryr/cni images on Dockerhub: https://review.openstack.org/#/c/551261/
14:11:24 Another thing I'd like to mention in our exciting world of CI is that the Octavia folks have been kind enough to provide us with nightly images, which means that we won't need to create the amphora image each time we run CI, so that'll save us about 10 minutes on the Octavia-enabled environments
14:11:27 ltomasbo: I see it with firefox
14:11:32 are you still on firefox 3.0?
14:11:38 dulek: I'll take a look at those
14:11:49 celebdor, I see it https://review.openstack.org/#/q/project:openstack/kuryr-kubernetes
14:11:52 celebdor: he runs iceweasel
14:11:56 celebdor, but not on the patch itself...
14:12:00 iceweasel...
14:12:03 wait, wait
14:12:10 are you using debian?
14:12:14 lol
14:12:26 celebdor, Mozilla Firefox 58.0.2
14:12:49 I have fedora 27! xD
14:12:54 ltomasbo: F5?
14:13:04 seems to be loading just fine with the same settings on my side
14:13:21 irenab, celebdor: I have these two patches waiting for your feedback: https://review.openstack.org/#/c/546784/ https://review.openstack.org/#/c/549739/
14:13:44 yboaron, on my list, will try to review by tomorrow
14:13:54 irenab, 10x
14:14:04 dmellado, perhaps mine wants to keep me in a good mood! xD no rebase needed...
14:14:22 lol
14:14:46 man... so much queued reviewing
14:14:59 I'm glad we have so many upcoming fixes
14:15:01 all right, another topic I wanted to tackle is that it looks like our dependencies will soon be OK in terms of the service renaming
14:15:05 q-foo to neutron-foo
14:15:14 I'll be submitting one patch as soon as the octavia one lands
14:15:20 okey dokey
14:15:42 https://review.openstack.org/#/c/544281/
14:15:46 thanks cgoncalves!
14:16:09 yboaron: https://review.openstack.org/#/c/549739/3 should be backported
14:16:29 celebdor, will take care of it
14:16:45 +1, yboaron please do ;)
14:16:47 and https://review.openstack.org/#/c/546784/ as well
14:17:04 in general, please, backport all the bug fixes
14:17:15 who? what? when? :)
14:17:18 celebdor: https://review.openstack.org/#/c/546784/ isn't even merged yet
14:17:19 unless they are not present in the stable branch
14:17:34 dmellado: I say it now so it won't be forgotten
14:17:36 cgoncalves: lol, thanks for keeping the q- to neutron- octavia patch alive
14:17:51 celebdor, what is the policy about backporting?
14:17:56 dmellado: you're more than welcome :)
14:18:15 btw, I was also testing this patch last week: https://review.openstack.org/#/c/550421/
14:18:34 celebdor, ^^ this fixes the problem with octavia and ovs-firewall
14:18:37 ltomasbo: https://docs.openstack.org/project-team-guide/stable-branches.html#appropriate-fixes
14:18:52 ltomasbo: awesome
14:19:03 irenab: the policy I'd like is that we backport all the bugfixes present in the stable branch
14:19:04 btw, celebdor, now that you're around here
14:19:19 at least until the following release
14:19:20 could you please add dulek to the launchpad group so he can also tackle bug triaging
14:19:45 celebdor, as far as I remember, patches that change config are not considered candidates for backporting
14:19:49 alrighty
14:19:57 irenab: Fair point.
14:20:22 irenab: only bugfixes. If there was a config bug, I'd say we can consider it
14:20:22 I guess we just need to follow the openstack backporting policies
14:20:24 but with care
14:20:34 irenab: you're right
14:20:40 dulek: "Each project team should designate a stable branch cross-project liaison as the main point of contact for all stable branch support issues in the team" I hereby...
14:20:41 xD
14:20:42 that should be the base
14:21:02 dmellado: are you volunteering with that quote?
14:21:08 nope, look at it
14:21:15 dmellado: Are you volunteering me? ;)
14:21:21 I meant to xD
14:21:31 but I don't want to overload you
14:21:39 No problem at all, I'm already on #openstack-release, so I can join the stable channel as well. :)
14:21:41 so either you or ltomasbo, up to you folks ;)
14:22:00 alright. Let's do this in the best possible way
14:22:06 celebdor: you do it
14:22:09 who wants to be the stable person?
14:22:14 I'm not stable myself...
14:22:27 irenab: dulek: ltomasbo: yboaron:
14:22:29 ?
14:22:34 I can take that.
14:22:42 any other candidate?
14:22:47 I'm dying to do a coin flip
14:22:57 * dmellado is picturing a rolling plant...
14:23:29 xD
14:23:32 ltomasbo: heads or tails?
14:23:39 I can help too
14:23:50 https://docs.openstack.org/project-team-guide/stable-branches.html#review-guidelines
14:24:20 alright, heads or tails?
14:24:33 heads
14:25:52 kangaroo tails
14:26:06 * celebdor used an australian coin from the summit
14:26:26 celebdor: ok, so who won?
14:26:28 I'll trust you
14:26:45 ltomasbo
14:26:52 tia ra rannnn
14:26:57 xD
14:26:59 "won"
14:27:01 xd
14:27:14 in any case ltomasbo, we'll split the work, as it says it'd be the PTL's responsibility in the end
14:27:31 it may be because my house is very anti-monarchy and the heads was the British queen
14:27:33 ok!
14:27:46 lol
14:28:46 all right, anything else besides the exciting announcement of our new stable branch liaison?
14:29:07 I don't have anything atm
14:29:19 dmellado: yes, I don't remember if someone already had a look at the sriov patch failure in zuul?
14:29:29 I did not
14:29:37 dmellado: dulek: either of you take that on
14:29:39 *one
14:29:51 celebdor: you do, you love it xD
14:29:52 dmellado: here around
14:29:53 https://review.openstack.org/#/c/524592/
14:30:03 * dulek looks.
14:30:30 kaliya: I'd first rebase it again
14:30:39 and see how the patch behaves ;)
14:30:47 dmellado: ok, in that case we'll ask in IRC
14:31:04 sure, please ping us on the channel after it
14:31:16 thanks
14:31:20 the post_failure was in any case due to infra errors
14:31:22 so rebase and recheck
14:31:29 yw!
14:31:48 gcheresh_: juriarte!
14:31:50 kaliya: Looks like the pod never gets spawned.
14:32:07 any exciting news in the testing area?
14:32:43 ohh, on the testing area, we need to fix the OVN gate, as it has some security group issues
14:32:51 ltomasbo: wait...
14:32:55 kaliya: http://logs.openstack.org/92/524592/10/check/kuryr-kubernetes-tempest-octavia/2b987eb/controller/logs/screen-kuryr-kubernetes.txt.gz#_Feb_22_15_01_05_126731
14:32:57 What kind of security issues?
14:32:57 dmellado, we can try to take a look at it tomorrow
14:33:00 kaliya: Here's the error message.
14:33:08 ltomasbo: sure, sounds like a plan
14:33:19 celebdor, I managed to make the VM -> pod ping work, as well as pod to pod
14:33:23 but pod -> VM is failing
14:33:26 dulek: ty!
14:33:58 the initial problem was with the FIP not being properly configured, but that was fixed
14:34:16 celebdor, and I guess we are creating different networks for the VM and the pods, and that can be the problem
14:34:34 ltomasbo: yep
14:34:37 though I would have expected the problem to be in the other direction, VM -> pod, rather than pod -> VM
14:34:49 the vm is on a per-test created tenant + network
14:35:06 but icmp + ssh should be allowed directly there
14:35:10 ltomasbo: are you sure about it?
14:35:30 dmellado, no, I'm not
14:35:43 let's try to get some environment tomorrow and check
14:36:08 dmellado, sounds good, it must be some misconfiguration somewhere
14:36:44 irenab snapiri any news on the similar issue with dragonflow?
14:37:55 I think that irenab already dropped off...
14:38:07 oanson: ^^
14:38:19 well, I'll sync with them after the meeting
14:38:22 dmellado, hi. Gimme a sec to sync
14:38:29 oanson: sure, thanks!
14:38:38 By the way, is anyone familiar with how to delete a pod with --force? I don't see such an option in the API, and without it it takes 30 seconds to delete a pod
14:39:13 gcheresh_: you can always do it with the cli and check the k8s API to see what got passed to it
14:39:22 directly with the API I don't know
14:39:23 dmellado, the bug-fix is stuck. Every time I solve one scenario, another breaks. I am still working on it.
14:39:37 celebdor: I did, but it still doesn't work in the API
14:39:43 oanson: ack, thanks for updating us
14:39:49 let us know if we can help you in any way
14:40:18 Sure. Will do. Thanks!
14:40:28 gcheresh_: the cli does use the API
14:41:13 I know, but magically it behaves differently when doing it with the CLI and the API, at least for me
14:41:28 gcheresh_: using the built-in function delete_pod it takes 30s?
14:41:42 dmellado: yes
14:41:44 gcheresh_, you mean the python API, right?
14:41:51 ltomasbo: yes
14:41:57 celebdor, ^^
14:42:18 I guess the python client does not cover the complete k8s API
14:42:52 ltomasbo: it should let you pass params
14:43:03 gcheresh_: please, paste me what you tried
14:43:06 yep
14:43:25 gcheresh_: grace_period_seconds
14:43:27 change that
14:43:39 https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CoreV1Api.md#delete_namespaced_pod
14:43:48 dmellado: with grace_period_seconds=0 it takes 30 seconds
14:43:50 ltomasbo: I think it should, as it's built according to the swagger definitions of the k8s API
14:43:55 :\ really?
14:44:30 yes, doing it manually: only when you send it with --force does it execute immediately, without it - 30 seconds
14:45:04 gcheresh_: did you check which value is passed by the CLI to the API?
14:45:22 I was wondering about the garbage policy
14:45:23 'Background' - allow the garbage collector to delete the dependents in the background
14:45:32 {"kind":"DeleteOptions","apiVersion":"v1","gracePeriodSeconds":0}
14:46:19 gcheresh_: and when you do it from Python
14:46:20 I also want to remind you about the network-policy patches - https://review.openstack.org/#/q/blueprint+k8s-network-policies, still waiting for reviews...
14:46:25 what does the API see?
14:46:32 thanks leyal
14:46:49 celebdor: I tried to do the same with python and it didn't work
14:46:56 leyal: thanks, I'll take a look at them too
14:46:58 I mean it took 30 sec
14:47:04 gcheresh_: so the python options and the cli options are the same?
14:48:03 in the CLI you have --force, which translates to what I sent; when not using --force it has grace-period=1 in the curl, even though you send grace-period=0
14:48:11 my point is, did python successfully get the params to the API
14:48:19 or did they get lost in the python world?
14:48:33 don't know, will check
14:48:39 thanks gcheresh_
14:48:44 update me on that later, please
14:48:48 gcheresh_: could you try to use this?
14:49:07 ok
14:49:11 api_response = api_instance.delete_namespaced_pod(name, namespace, body, pretty=pretty, grace_period_seconds=grace_period_seconds, orphan_dependents=orphan_dependents, propagation_policy=propagation_policy)
14:49:11 pprint(api_response)
14:49:25 let's try to compare the api_response with your json
14:50:22 gcheresh_: in fact, you can just capture the reply from delete_pod
14:50:36 dmellado: what params should I put for orphan_dependents, propagation_policy and pretty - the default ones?
14:51:03 I will send the API request that is sent shortly
14:53:09 gcheresh_: so, here
14:53:10 https://github.com/openstack/kuryr-tempest-plugin/blob/master/kuryr_tempest_plugin/tests/scenario/base.py#L78-L84
14:53:35 you can just add an option to the kwargs so it'll be grace_period_seconds=foo
14:53:46 let's add that 0 wait option and see if this works
14:54:11 dmellado: That's what I did, that's how I sent grace-period=0
14:55:11 I even changed the body to {'kind': 'DeleteOptions', 'apiVersion': 'v1', 'grace_period_seconds': 0} as that is what I saw on the CLI call, but still no luck
14:55:16 gcheresh_: then let's capture that and see what's different
14:55:23 gcheresh_: I'll try to take a look too
14:55:29 thanks
14:55:36 so we're almost out of time
14:55:42 #topic open discussion
14:56:21 aaand I guess I'll close the meeting and save some minutes for a quick coffee for all
14:56:26 thanks for attending!
14:57:26 #endmeeting
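
For reference, a minimal sketch of the force-delete discussed from 14:38:38 onward, assuming the kubernetes Python client; the pod name and namespace are illustrative placeholders. It mirrors `kubectl delete pod demo-pod --force --grace-period=0` and the DeleteOptions JSON quoted at 14:45:32:

from kubernetes import client, config
from pprint import pprint

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# V1DeleteOptions serializes its snake_case attributes into the camelCase
# fields the API server expects, i.e. this body goes over the wire as
# {"gracePeriodSeconds": 0}, matching what the CLI was observed to send
# with --force.
body = client.V1DeleteOptions(grace_period_seconds=0)

# "demo-pod" and "default" are placeholders; grace_period_seconds is also
# accepted as a query parameter, per the delete_namespaced_pod docs linked
# at 14:43:39.
api_response = v1.delete_namespaced_pod(
    name="demo-pod",
    namespace="default",
    body=body,
    grace_period_seconds=0,
)
pprint(api_response)

# Tip: `kubectl delete pod demo-pod --force --grace-period=0 -v=8` prints
# the exact HTTP requests the CLI sends, handy for comparing against the
# Python call above.

One plausible (though unconfirmed in the log) culprit for the 30-second deletes: the raw dict body tried at 14:55:11 uses the snake_case key 'grace_period_seconds', which the Python client would send over the wire unconverted and the API server would silently ignore; using V1DeleteOptions, or the camelCase key 'gracePeriodSeconds', sidesteps that.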