14:04:23 #startmeeting kuryr
14:04:24 Meeting started Mon Jun 4 14:04:23 2018 UTC and is due to finish in 60 minutes. The chair is ltomasbo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:28 The meeting name has been set to 'kuryr'
14:04:44 o/
14:04:56 dmellado cannot handle the meeting today, so I'll try to take over; let's start
14:05:02 #topic kuryr-kubernetes
14:05:23 hello, folks. After merging https://review.openstack.org/#/c/471012/43, I would like to know your opinion about the sriov support patches (https://review.openstack.org/#/c/512280/21). What should I do?
14:06:06 danil, first, I guess you need to rebase it! :D
14:06:14 sure)
14:06:16 then, I'm a little concerned about the previous patch
14:06:34 as it missed some stuff, for instance: https://bugs.launchpad.net/kuryr-kubernetes/+bug/1774997
14:06:35 Launchpad bug 1774997 in kuryr-kubernetes "Failure recovering pre-created ports into pools" [Undecided,New]
14:06:48 I'll send a fix for it after the meeting
14:07:08 but some other cases may have been missed
14:07:28 so, it would be nice to add a tempest test to ensure it is working
14:07:32 ltomasbo: Shouldn't that come out on the pools test?
14:07:53 dulek, if there was enough coverage, yes!
14:08:08 dulek, the problem is this will happen after the kuryr-controller gets restarted
14:08:16 Okay, I get it.
14:08:37 ltomasbo: ok, I see. Thanks a lot
14:08:54 would be nice if I fixed it
14:09:00 sorry, I didn't mean that the ports pool tempest test needs to be done on the sriov patches!
14:09:12 yeah, I see
14:09:19 danil, I already have the fix for that bug, I'll send it in a few minutes
14:09:39 ltomasbo: thanks a lot
14:09:46 not sure if the notation change may affect other functionality though
14:10:23 I'll check, no problem
14:10:31 thank you
14:10:51 #action review https://review.openstack.org/#/c/512280 after rebase
14:11:28 any other topic on kuryr-kubernetes?
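[Editor's note: the pool-recovery bug discussed above (bug 1774997, pre-created ports not recovered after a kuryr-controller restart) boils down to rebuilding the pools from ports that already exist in Neutron. The sketch below is purely illustrative — the function and key layout are hypothetical, not kuryr's actual code — but it shows why a key-notation mismatch between creation and recovery breaks the grouping.]

```python
# Hypothetical sketch of recovering pre-created Neutron ports into pools after
# a controller restart: group the ports found in Neutron by a pool key built
# from their attributes. The key layout here is an assumption for illustration.
from collections import defaultdict

def recover_ports_into_pools(existing_ports):
    """Rebuild the available-ports pools from ports found in Neutron."""
    pools = defaultdict(list)
    for port in existing_ports:
        # The key must be built exactly the same way on recovery as on
        # creation; a notation mismatch here is the kind of bug discussed.
        key = (port['binding:host_id'],
               port['project_id'],
               tuple(sorted(port['security_groups'])))
        pools[key].append(port['id'])
    return dict(pools)

ports = [
    {'id': 'p1', 'binding:host_id': 'node1', 'project_id': 't1',
     'security_groups': ['sg-a', 'sg-b']},
    {'id': 'p2', 'binding:host_id': 'node1', 'project_id': 't1',
     'security_groups': ['sg-b', 'sg-a']},  # same groups, different order
]
pools = recover_ports_into_pools(ports)
```

Sorting the security groups before building the key ensures ports with the same groups in a different order land in the same pool.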
14:11:36 yes,
14:11:42 danil, do you want to discuss something specific about that patch
14:11:55 AlexeyPerevalov_, go ahead!
14:12:00 we have a use case where we're working with ovs ports of vhostuser type, it's for DPDK in containers.
14:12:16 Such a use case is covered by the CNIVhostUser plugin, but we need neutron's network info.
14:13:00 AlexeyPerevalov_, what info?
14:13:01 so from my point of view kuryr-kubernetes is the perfect CNI plugin for that.
14:13:09 great!
14:13:41 neutron port, ip address ranges, subnets.
14:13:59 AlexeyPerevalov_, ok
14:14:19 AlexeyPerevalov_, is that related to the dpdk patch set that was sent a few weeks ago?
14:14:31 AlexeyPerevalov_: Technically you could get the info from the pod annotation?
14:14:48 AlexeyPerevalov_, https://review.openstack.org/#/c/559363/
14:15:54 \o/
14:15:58 Do you mean Gary's patch set? I saw it, it's about the vhostuser port, yes, but that port was already passed into the VM, so in the VM it becomes a VIRTIO device.
14:16:32 I'm talking about bare metal, where vhostuser is still a unix domain socket.
14:17:33 what's the convo about?
14:17:44 so it's another use case.
14:17:48 ahh, great!
14:18:13 apuimedo: http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-4/latest.log.html#t2018-06-04T14:11:36
14:18:22 thanks dulek
14:19:39 read!
14:19:54 AlexeyPerevalov_, I assume we need to add a new binding driver to kuryr and then a new driver at kuryr-kubernetes, but it looks doable
14:19:55 It looks to me that the ideal would be to get that from the pod annotation, as dulek suggests
14:20:23 ltomasbo: yes. So we decided to add yet another binding driver, which will create an ovs bridge of netdev type, and an ovs port of vhostuser type.
14:20:38 ltomasbo: VIFOpenVSwitchDriver is already doing something similar, but it's not vhostuser, it's VETH.
14:20:49 AlexeyPerevalov_: I don't see an issue with that approach. That's what drivers are for.
14:21:10 #chair apuimedo
14:21:11 Current chairs: apuimedo ltomasbo
14:21:32 AlexeyPerevalov_: is it like the internal ovs type?
14:21:45 cause that's what we do for the devstack kubelet iface
14:22:28 apuimedo: yes, it is for creating the vhostuser unix domain socket, in case ovs-vswitchd is compiled with DPDK support.
14:22:50 right
14:23:21 so then, as you hinted, doing a class like VIFOpenVSwitchDriver should do it
14:23:25 dulek: I think we need to customize or extend (by inheritance) VIFOpenVSwitchDriver,
14:23:28 anything else that it will need?
14:23:57 AlexeyPerevalov_: Yep, that would work.
14:23:58 AlexeyPerevalov_: if you ask me, I prefer a separate class, or extend
14:24:15 only extend if it really shares almost all of the impl
14:24:19 apuimedo: I think, yes. That's all, I would like to reuse the multi-vif approach here.
14:24:40 very well
14:25:12 apuimedo: I think a separate class would be simpler at the initial prototyping stage.
14:25:30 sounds good
14:25:33 AlexeyPerevalov_: I agree
14:25:42 * apuimedo doesn't like most inheritance
14:25:54 except if it is property for me
14:25:56 :P
14:26:19 apuimedo: ;)
14:26:42 so I'm assigned to this task and now I'm doing a prototype.
14:26:58 great!
14:27:02 looking forward to seeing this
14:27:07 great! ping us on the kuryr channel if you need some help!
14:27:20 indeed
14:27:22 any other topic on kuryr-kubernetes?
14:27:23 sure
14:27:54 ltomasbo: I don't have any
14:29:38 ok, I just want to update on the status of the network-per-namespace handler/driver
14:29:59 very well
14:30:03 the first patch, adding network resources when creating the namespace, is already merged, and the ones for deleting those resources are in review
14:30:17 and that's all from my side
14:30:22 :)
14:30:27 apuimedo, dulek, do you have anything to discuss?
14:30:41 I have to say that I really like this crd look
14:30:50 better than the huge annotations we do for vif
14:31:02 apuimedo, thanks!
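[Editor's note: the separate-driver approach agreed above — an OVS bridge with `datapath_type=netdev` plus a port of vhostuser type, instead of the veth pair VIFOpenVSwitchDriver plugs — can be sketched roughly as below. The helper name and command layout are illustrative assumptions, not kuryr's actual binding API; the OVS attributes (`datapath_type=netdev`, interface type `dpdkvhostuser`) are standard OVS-DPDK settings.]

```python
# Hypothetical sketch of what a vhostuser binding driver would run: with a
# DPDK-enabled ovs-vswitchd, adding a dpdkvhostuser port makes OVS create a
# unix domain socket that can be mounted into the container.

def vhostuser_plug_commands(bridge, port_name):
    """Return the ovs-vsctl invocations to plug a vhostuser port (sketch)."""
    return [
        # The bridge must use the netdev (userspace/DPDK) datapath.
        ['ovs-vsctl', '--may-exist', 'add-br', bridge,
         '--', 'set', 'bridge', bridge, 'datapath_type=netdev'],
        # ovs-vswitchd then creates the vhostuser socket for this port,
        # typically under its run directory, named after the port.
        ['ovs-vsctl', '--may-exist', 'add-port', bridge, port_name,
         '--', 'set', 'Interface', port_name, 'type=dpdkvhostuser'],
    ]

cmds = vhostuser_plug_commands('br-int', 'vhu-pod1')
```

A real driver would execute these via its privileged helper and then annotate the pod with the socket path, alongside the Neutron port info already carried in the annotation.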
14:31:07 Not really, I'm sprinting now to finish the code for daemon VIF choice and A/P HA.
14:31:32 ltomasbo: https://review.openstack.org/#/c/562249/18/kuryr_kubernetes/controller/drivers/namespace_subnet.py
14:31:45 dulek, nice! Looking forward to taking a look at the patch sets
14:31:49 I do not think it is a good idea to delete the neutron resources after the CRD
14:31:54 dulek, are you planning to have a devref about it?
14:31:55 ltomasbo: Me too. :D
14:32:16 ltomasbo: Definitely, I think it'll be unreviewable without documentation.
14:32:22 if you fail to delete the neutron resources immediately, you won't have the mapping in k8s anymore
14:32:29 and you'll have untracked crap in Neutron
14:32:43 dulek: sounds like a challenge
14:33:16 apuimedo: Ah, I think I've pointed it out in one of the patch sets.
14:33:57 apuimedo, you are right
14:34:05 I'll change the order
14:34:26 thanks ltomasbo
14:34:55 cause half of the point of using CRDs is that cleanup is then just going through unreferenced CRDs
14:35:06 and deleting the resources they point to
14:35:26 yep, I thought I had it in the opposite order, perhaps I did it in the rollback method...
14:35:59 ok
14:36:40 ltomasbo: I would appreciate it if you could try https://review.openstack.org/#/c/571707/ when you are bored :P
14:37:06 apuimedo, I already did, and it failed
14:38:01 ltomasbo: at build or at runtime?
14:38:06 at build
14:38:39 :/
14:38:53 ltomasbo: if you can, give me the output in the patchset as a comment
14:38:56 and I'll address it
14:39:18 I quickly tried it, I'll give it another try and let you know!
14:39:34 I should not need any extra req, right? other than buildah
14:39:48 ltomasbo: which distro were you trying it on?
14:39:58 you need to have yum and el7 repos available
14:40:27 centos 7.4
14:40:33 ok
14:40:47 so yes, I really want a report of the failure
14:40:50 I was using 7.5
14:41:00 and also I think I tried it on my archlinux
14:41:07 xD
14:41:07 (which has buildah as well)
14:41:13 ok, I'll do!
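[Editor's note: the deletion-ordering point apuimedo raises above can be sketched with plain stand-in objects — none of the names below are kuryr's actual drivers. The idea: tear down the Neutron resources first and remove the CRD only once that succeeds, so a failure leaves the CRD in place as the record a later cleanup pass can retry from, instead of leaving untracked resources in Neutron with no mapping in k8s.]

```python
# Minimal sketch of "delete Neutron resources before the CRD" with recording
# stubs standing in for the real neutron and k8s clients.

class Recorder:
    """Stand-in client that records every call made on it."""
    def __init__(self, log, prefix):
        self._log, self._prefix = log, prefix
    def __getattr__(self, name):
        return lambda *args: self._log.append((self._prefix + name,) + args)

def delete_namespace_network(crd, neutron, k8s):
    # 1. Delete the Neutron resources the CRD points to. If this raises,
    #    the CRD survives and still references what needs cleaning up.
    neutron.delete_subnet(crd['spec']['subnetId'])
    neutron.delete_network(crd['spec']['netId'])
    # 2. Only then remove the mapping object itself.
    k8s.delete_crd(crd['metadata']['name'])

log = []
crd = {'metadata': {'name': 'ns-net-demo'},
       'spec': {'subnetId': 'sub-1', 'netId': 'net-1'}}
delete_namespace_network(crd, Recorder(log, 'neutron.'), Recorder(log, 'k8s.'))
```

Running the sketch records the Neutron deletions strictly before the CRD removal, which is the invariant the review comment asks for.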
14:41:45 should we move to other topics?
14:42:09 anyone here want to discuss any topic regarding kuryr, kuryr-libnetwork or kuryr-kubernetes?
14:43:52 sure
14:43:59 I don't have anything
14:44:55 so, I guess we can close the meeting sooner today and get 15 minutes back!
14:45:02 thanks everyone for attending!
14:45:50 looking forward to seeing the new A/P and vhostuser patch sets!
14:46:08 #endmeeting