14:00:09 #startmeeting kuryr
14:00:10 Meeting started Mon May 7 14:00:09 2018 UTC and is due to finish in 60 minutes. The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:14 The meeting name has been set to 'kuryr'
14:00:54 o/ Hi fellow kuryrs, who's here today to enjoy the exciting world of meetings?
14:00:57 I've made it. :)
14:01:16 o/
14:01:32 #chair dulek
14:01:32 Current chairs: dmellado dulek
14:01:49 Hi !
14:01:56 #chair yboaron
14:01:57 Current chairs: dmellado dulek yboaron
14:03:27 o/
14:03:35 #chair ltomasbo
14:03:36 Current chairs: dmellado dulek ltomasbo yboaron
14:03:39 all right
14:03:44 #topic kuryr-kubernetes
14:04:14 So, folks, besides all the terror stories we've been having as of late with the upstream infra, who has something to share on the topic? ;)
14:04:54 on the infra topic?
14:04:56 xD
14:05:03 I'm back from PTO today, I'll be catching up with reviews.
14:05:13 Don't worry, I don't delete any emails from Gerrit. ;)
14:05:28 dulek: lol
14:05:39 does that mean that you deleted mine? caught you
14:05:41 xD
14:05:51 ltomasbo: hopefully not on the infra topic
14:06:11 dmellado, then, reviews on https://blueprints.launchpad.net/kuryr-kubernetes/+spec/network-namespace are welcome!
14:06:11 I just noticed a race condition in the tempest plugin's devstack plugin
14:06:14 heh, metaplugin
14:06:20 I'm working on the tempest tests for that
14:06:39 when it tries to install the plugin, it ends up needing the docker image to be built before docker is installed, so it fails
14:07:00 I'm testing a patch that just delays that creation to the very end of the devstack installation
14:07:12 #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/network-namespace
14:08:15 now that we're speaking about reviews, danil, is there anything you need looked at?
14:08:24 yeah
14:08:27 I would like to ask some questions about the multi-vif & sriov patches in kuryr-k8s. I have fixed most of the problems the core developers have mentioned
14:08:36 What do you think, is it worth merging these patches sequentially? I mean, the first patch (https://review.openstack.org/#/c/471012/) is fully ready as far as I can see. It only changes the type of the pod annotation, without changes to other functionality (it is specified in the commit message)
14:08:38 danil: shoot then ;)
14:08:56 If not, I would like to hear what my next steps with these patches should be
14:09:34 #link https://review.openstack.org/#/c/471012/
14:10:30 Aww, I was supposed to propose an alternative to maybe_callback for that one… I'll need to do it ASAP.
14:10:36 danil: if it just adds a type of pod annotation I'd be fine
14:10:45 in any case zuul and dulek gave you a -1
14:10:47 :)
14:11:01 dulek: ltomasbo I see that you've been addressing comments there
14:11:10 could you follow up so we can iterate and start merging this soon?
14:11:20 dmellado: Heh, the maybe_callback thing can be addressed by a refactor later if I don't come up with a solution now. ;)
14:11:24 I've changed this patch according to dulek's remarks, as I remember
14:11:29 otherwise we'll be celebrating this patch's anniversary
14:11:38 dmellado: So no -1 until I know exactly if there's a better way.
14:11:42 dulek: in any case please do leave some comment regarding that
14:11:45 I have a question. When using kuryr, the kube-proxy is not necessary, is that right?
14:11:47 For now it's just a wild hunch. :P
14:11:57 danil: please address the patch so it passes CI and I'll get some eyes on it ;)
14:12:29 zhangoic, that's correct
14:12:47 yboaron: go ahead ;)
14:12:50 thanks
14:12:57 danil, I have another comment about the patch, as it only adds support for the daemon-based CNI, perhaps it is worth adding that to the documentation
14:12:58 dmellado: I don't understand what you mean, sorry
14:13:05 danil: the link that you passed to me
14:13:06 otherwise we will forget about it...
14:13:07 https://review.openstack.org/#/c/471012/
14:13:25 is currently not passing CI
14:13:31 see zuul verified -1 there
14:13:46 it could be a glitch on the CI but nothing will get merged until it gives +1 there ;)
14:14:13 yboaron: could you elaborate on this? otherwise it's just yes - no
14:14:15 xD
14:14:46 yeah, I see. What should I change for that?
14:14:53 how to pass CI ?
14:15:22 dmellado, kube-proxy is used in K8s for mapping the service name/IP to the service's pods (actually by setting iptables rules)
14:15:36 yboaron: I'm aware that we just use neutron for that
14:15:44 but I wonder if it would make sense to add that to the docs
14:15:59 so with kuryr we use Neutron LBaaS or Octavia for that purpose
14:16:46 does kuryr support the ClusterIP type for services?
14:17:05 danil: so, for example
14:17:06 http://logs.openstack.org/12/471012/37/check/openstack-tox-lower-constraints/0001367/ara-report/
14:17:20 zhangoic, that's right, and the LoadBalancer type
14:17:25 how about NodePort services in k8s?
14:17:34 that seems to be an issue on the infra
14:17:40 zhangoic, there is no support for NodePort
14:17:41 so just recheck it for now
14:17:55 sure, thanks
14:18:03 you can also go to the openstack-infra channel
14:18:07 and ask any cores
14:18:16 zhangoic, try this link #link https://docs.openstack.org/kuryr-kubernetes/latest/installation/services.html
14:18:25 dmellado: thanks a lot
14:18:25 OK
14:18:33 and also when you're waiting for it, you can check zuul.openstack.org
14:18:40 and see in real time what's going on with that exact gate
14:19:04 dmellado: sure, I see. Thanks. Hope this patch gets merged soon
14:19:26 danil: https://www.youtube.com/watch?v=fowBDdLGBlU
14:19:33 slightly old but I guess watching that would still be useful
14:20:23 danil: in any case if you get stuck at it feel free to ping us on the kuryr channel
14:20:32 I would also like to start getting the feature merged
14:20:49 dmellado: sure, thank you
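To make the kube-proxy discussion above concrete: with kuryr-kubernetes, a Service of type ClusterIP or LoadBalancer is backed by a Neutron LBaaS/Octavia load balancer created by the controller, so kube-proxy is not needed, and NodePort is not supported. The sketch below is a minimal illustration, assuming the official kubernetes Python client and a kubeconfig pointing at a kuryr-enabled cluster; the service name and selector are made up for the example.

    from kubernetes import client, config

    # Assumes a kubeconfig pointing at the kuryr-enabled cluster.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # A plain ClusterIP Service; nothing kuryr-specific is needed here.
    # The kuryr controller watches Services and backs this one with a
    # Neutron LBaaS/Octavia load balancer instead of kube-proxy iptables
    # rules. A type of NodePort would not work, as noted above.
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="demo-svc"),  # hypothetical name
        spec=client.V1ServiceSpec(
            type="ClusterIP",
            selector={"app": "demo"},  # hypothetical selector
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    v1.create_namespaced_service(namespace="default", body=service)

Once the load balancer is ACTIVE the ClusterIP should answer just as it would with kube-proxy; the services document linked above is the authoritative walkthrough.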
14:22:29 I can update on ingress controller patches..
14:22:53 yboaron: go ahead
14:22:56 thanks!
14:23:19 So, I received comments from Luis & Toni; Toni suggested having a single driver for both endpoints and ingress/routes
14:24:19 hm, it's true that I'd also like to tidy up the driver section as it's quite widespread, so overall +1 on that
14:24:34 and it sounds reasonable, my first patch just prepares the LBaaS driver before extending it to support ingress
14:24:48 #link https://review.openstack.org/#/c/566175/
14:25:19 yboaron: thanks, I will put that in my review queue too
14:25:26 the second patch just adds support for manual creation of the ingress controller LB for devstack deployment
14:25:37 'just' xD
14:25:45 lol
14:25:55 #link https://review.openstack.org/#/c/564051/
14:26:00 ltomasbo: don't tell me that is one of 'these' patches
14:26:06 * dmellado opens it waiting for 300+ loc
14:26:31 ltomasbo: 117, could've been much bigger
14:26:43 I have two more patches in the oven: one to support the ingress controller and a second for the ocp-route handler
14:26:49 * dmellado just recalls about .zuul.yaml and the fact that he needs to rebase the split patch
14:26:51 I thought it was the next one!
14:27:40 ltomasbo, did you mean the ingress controller patches or something else?
14:27:51 heh, you were surprised, just like me when I was told that apuimedo passed customs in the end
14:28:05 yep! I already took a look at both patches you just posted and they look ok
14:29:24 ltomasbo, thanks! I need to update #link https://review.openstack.org/#/c/536387/ with Toni's comments
14:29:50 yboaron: do you have a gerrit topic for the whole effort?
14:30:05 yboaron, yep, I saw that, please ping me when you have done so (I want to test it too!)
14:30:06 I have a BP
14:30:27 ltomasbo, I'll ping you
14:30:32 thanks!
14:30:34 yboaron: awesome, I'll put some comments as well
14:30:44 so, anything else folks?
14:31:01 Q: do we support devstack deployment for the nested case ?
14:31:21 yboaron: we're waiting for a patch on zuul to get merged
14:31:26 and then we'll have multinode support
14:31:36 please do notice that by multinode I mean several compute nodes
14:31:53 once that's done I'll add the nested scenario as we'll have enough capacity
14:31:56 yboaron, we have the local.conf files for it, but it is not tested right now on the gates
14:32:10 I'm not sure on the speed, though, as the upstream infra doesn't support nested kvm
14:32:14 so it might be quite sluggish
14:32:21 so that's the current status
14:32:31 ofc as ltomasbo said, you can try on your own if you've a machine big enough
14:32:33 ;)
14:32:33 ltomasbo, it seems that it's not tested
14:32:59 yboaron, it is not tested due to gate limitations
14:33:16 yboaron, but you can try it on your machine (if you have 16GB or so)
14:33:20 https://review.openstack.org/#/c/558762/
14:33:24 ^^
14:33:32 so, that's the patch which is waiting on its dependency to get merged
14:33:46 we'll be able to add more capacity and test nested upstream after it
14:33:55 as well as openshift-ansible installation, hopefully
14:33:59 ltomasbo, I meant from a code perspective: we don't 'clean' kuryr OpenStack infra resources (LBs, etc.) on unstack
14:34:29 ahh, that is most probably the case
14:34:42 yboaron: what use case do you have in mind?
14:34:50 if it's for CI I'd say we just 'don't care'
14:34:59 yboaron, but usually I have to do 2 stacks (one for the undercloud, one for the overcloud)
14:35:04 stack+unstack+stack for nested
14:35:20 so, it is hard to have a complete cleanup, though I guess if you remove the undercloud, everything is gone anyway
14:35:21 all for the overcloud
14:35:37 ahh, no, for just the overcloud it will not work
14:37:30 OK, 10x! Do you think we should support such a use case?
14:37:37 hmmm
14:37:44 yboaron: for the CI part, not really a need tbh
14:37:57 just for the dev side it'd be nice but I don't think it's a must-have
14:38:19 OK, I'll file a bug for that, and we'll see
14:38:28 please do!
14:38:54 yboaron: also ltomasbo looked like he wanted to volunteer for this
14:39:01 so you can assign him
14:39:03 :D
14:39:05 yboaron, though I guess that is not easy to do unless we have kuryr resources on CRDs
14:39:58 ltomasbo, I thought to 'clean' just the infra resources first
14:40:15 by infra I mean kuryr-kubernetes LBs, SGs, etc.
14:40:46 BTW, do we have a solution for that in the openshift-ansible case?
14:40:51 but how do you know which LBs belong to one or the other?
14:41:32 yboaron, in openshift-ansible I proposed this (as it is simpler since we deploy with heat templates): https://github.com/openshift/openshift-ansible/pull/8254/files
14:42:01 but it is slightly different, as there we have a real overcloud/undercloud; here we also call it undercloud/overcloud
14:42:11 but it is more like a worker node on a VM
14:42:11 the deployment code should know which resources were created for the deployment, right?
14:42:44 ltomasbo, anyhow, let's take it offline, is it OK with you?
14:42:46 yboaron, yep, but if you have an SG being used by a kuryr-created pod, it will not be able to delete it
14:42:51 sure sure
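To illustrate the cleanup question above: the resources kuryr creates (load balancers, security groups, ...) live in Neutron/Octavia, so an unstack-time cleanup has to identify them by some convention, which is exactly the "which LBs belong to one or the other" problem. A rough sketch, assuming openstacksdk and a clouds.yaml entry named "devstack-admin"; the name-based filters are heuristics for illustration, not an official kuryr contract.

    import openstack

    # Assumed cloud name from clouds.yaml; adjust for your deployment.
    conn = openstack.connect(cloud="devstack-admin")

    # Kuryr typically names Service load balancers after the Kubernetes
    # resource, e.g. "<namespace>/<service-name>"; treating the "/" as a
    # marker is an assumption, not a guaranteed contract.
    for lb in conn.load_balancer.load_balancers():
        if "/" in (lb.name or ""):
            print("candidate kuryr LB:", lb.name, lb.id)
            # conn.load_balancer.delete_load_balancer(lb, cascade=True)

    # Security groups are harder: one still attached to the ports of
    # running pods cannot be deleted, which is the problem pointed out
    # just above.
    for sg in conn.network.security_groups():
        if "kuryr" in (sg.name or "").lower():  # hypothetical filter
            print("candidate kuryr SG:", sg.name, sg.id)

Any variant of this still hits the same ordering issue: an SG in use by a kuryr-created pod cannot be removed until the pod's Neutron ports are gone.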
14:42:59 let's move to the next topic!
14:43:19 so the next topic is easy
14:43:33 #topic open discussion
14:44:01 Just wanted to let you know that after getting the feedback from the PTG
14:44:25 we won't be going there as a group; instead we'll talk about whether we do a mid-cycle elsewhere, a VTG again, or even both
14:45:13 I might go to Denver (choo-choo) again just to interact with other projects, such as infra and QA (and probably Octavia) though
14:45:40 if you didn't get to know the choo-choo then you were lucky and didn't get to attend that PTG xD
14:47:22 and I guess this finishes today's meeting
14:47:31 I'll save you 13 minutes of your life
14:47:35 thanks for attending everyone!
14:47:58 xD
14:48:08 #endmeeting
14:48:12 #endmeeting