14:04:23 <ltomasbo> #startmeeting kuryr
14:04:24 <openstack> Meeting started Mon Jun  4 14:04:23 2018 UTC and is due to finish in 60 minutes.  The chair is ltomasbo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:28 <openstack> The meeting name has been set to 'kuryr'
14:04:44 <AlexeyPerevalov_> o/
14:04:56 <ltomasbo> dmellado cannot handle the meeting today, so I'll try to take over. Let's start
14:05:02 <ltomasbo> #topic kuryr-kubernetes
14:05:23 <danil> hello, folks. After merging https://review.openstack.org/#/c/471012/43, I would like to know your opinion about sriov support patches (https://review.openstack.org/#/c/512280/21). What should I do?
14:06:06 <ltomasbo> danil, first, I guess you need to rebase it! :D
14:06:14 <danil> sure)
14:06:16 <ltomasbo> then, I'm a little concerned about the previous patch
14:06:34 <ltomasbo> as it missed some stuff, for instance: https://bugs.launchpad.net/kuryr-kubernetes/+bug/1774997
14:06:35 <openstack> Launchpad bug 1774997 in kuryr-kubernetes "Failure recovering pre-created ports into pools" [Undecided,New]
14:06:48 <ltomasbo> I'll send a fix for it after the meeting
14:07:08 <ltomasbo> but some other cases may have been missed
14:07:28 <ltomasbo> so, it would be nice to add a tempest test to ensure it is working
14:07:32 <dulek> ltomasbo: Shouldn't that come out on pools test?
14:07:53 <ltomasbo> dulek, if there was enough coverage, yes!
14:08:08 <ltomasbo> dulek, the problem is this will happen after the kuryr-controller gets restarted
14:08:16 <dulek> Okay, I get it.
14:08:37 <danil> ltomasbo: ok, I see. Thanks a lot
14:08:54 <danil> it would be nice if I fixed it
14:09:00 <ltomasbo> sorry, I didn't mean that the ports pool tempest test needs to be done in the sriov patches!
14:09:12 <danil> yeah, I see
14:09:19 <ltomasbo> danil, I already have the fix for that bug, I'll send it in a few minutes
14:09:39 <danil> ltomasbo: thanks a lot
14:09:46 <ltomasbo> I'm not sure if the notation change may affect other functionality, though
14:10:23 <danil> I'll check, no problem
14:10:31 <danil> thank you
14:10:51 <ltomasbo> #action review https://review.openstack.org/#/c/512280 after rebase
14:11:28 <ltomasbo> any other topic on kuryr-kubernetes?
14:11:36 <AlexeyPerevalov_> yes,
14:11:42 <ltomasbo> danil, do you want to discuss anything specific about that patch?
14:11:55 <ltomasbo> AlexeyPerevalov_, go ahead!
14:12:00 <AlexeyPerevalov_> we have a use case where we're working with OVS ports of vhostuser type; it's for DPDK in containers.
14:12:16 <AlexeyPerevalov_> That use case is covered by the CNIVhostUser plugin, but we need Neutron's network info.
14:13:00 <ltomasbo> AlexeyPerevalov_, what info?
14:13:01 <AlexeyPerevalov_> so from my point of view kuryr-kubernetes is the perfect CNI plugin for that.
14:13:09 <ltomasbo> great!
14:13:41 <AlexeyPerevalov_> neutron port, ip address ranges, subnets.
14:13:59 <ltomasbo> AlexeyPerevalov_, ok
14:14:19 <ltomasbo> AlexeyPerevalov_, is that related to the dpdk patch set that was sent a few weeks ago?
14:14:31 <dulek> AlexeyPerevalov_: Technically you could get the info from the pod annotation?
14:14:48 <ltomasbo> AlexeyPerevalov_, https://review.openstack.org/#/c/559363/
14:15:54 <celebdor1> \o/
14:15:58 <AlexeyPerevalov_> Do you mean Gary's patch set? I saw it; it's about a vhostuser port, yes, but that port was already passed into the VM, so in the VM it becomes a VIRTIO device.
14:16:32 <AlexeyPerevalov_> I'm talking about bare metal, where vhostuser is still a unix domain socket.
14:17:33 <celebdor1> what's the convo about?
14:17:44 <AlexeyPerevalov_> so it's another use case.
14:17:48 <ltomasbo> ahh, great!
14:18:13 <dulek> apuimedo: http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-4/latest.log.html#t2018-06-04T14:11:36
14:18:22 <apuimedo> thanks dulek
14:19:39 <apuimedo> read!
14:19:54 <ltomasbo> AlexeyPerevalov_, I assume we need to add some new binding driver to kuryr and then a new driver in kuryr-kubernetes, but it looks doable
14:19:55 <apuimedo> It looks to me that the ideal would be to get that from the pod annotation as dulek suggests
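A minimal sketch of what reading the networking info back from the pod annotation could look like, assuming the annotation key 'openstack.org/kuryr-vif' and the official kubernetes Python client; the exact key and the payload layout should be taken from kuryr_kubernetes constants rather than hardcoded as here:

    # Sketch only: fetch the kuryr VIF annotation from a pod and return the
    # decoded JSON envelope with the Neutron-side details (port id, subnets,
    # addresses) that a vhostuser consumer would need.
    import json

    from kubernetes import client, config

    KURYR_VIF_ANNOTATION = 'openstack.org/kuryr-vif'  # assumed key


    def get_pod_vif(namespace, pod_name):
        config.load_kube_config()  # or load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        pod = v1.read_namespaced_pod(pod_name, namespace)
        raw = (pod.metadata.annotations or {}).get(KURYR_VIF_ANNOTATION)
        if raw is None:
            return None
        # The annotation carries a serialized os-vif VIF object; only the
        # JSON envelope is decoded here, for inspection.
        return json.loads(raw)


    if __name__ == '__main__':
        print(get_pod_vif('default', 'my-dpdk-pod'))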
14:20:23 <AlexeyPerevalov_> ltomasbo: yes. So we decided to add yet another binding driver, which will create an OVS bridge of netdev type and an OVS port of vhostuser type.
14:20:38 <AlexeyPerevalov_> ltomasbo: VIFOpenVSwitchDriver already does something similar, but it's not vhostuser, it's VETH.
14:20:49 <dulek> AlexeyPerevalov_: I don't see an issue with that approach. That's what drivers are for.
14:21:10 <ltomasbo> #chair apuimedo
14:21:11 <openstack> Current chairs: apuimedo ltomasbo
14:21:32 <apuimedo> AlexeyPerevalov_: is it like the internal ovs type?
14:21:45 <apuimedo> cause that's what we do for the devstack kubelet iface
14:22:28 <AlexeyPerevalov_> apuimedo: yes, it is for creating a vhostuser unix domain socket, in the case when ovs-vswitchd is compiled with DPDK support.
14:22:50 <apuimedo> right
14:23:21 <apuimedo> so then, as you hinted, doing a class like the VIFOpenVSwitchDriver should do it
14:23:25 <AlexeyPerevalov_> dulek: I think we need to customize or extend (by inheritance) VIFOpenVSwitchDriver,
14:23:28 <apuimedo> anything else that it will need?
14:23:57 <dulek> AlexeyPerevalov_: Yep, that would work.
14:23:58 <apuimedo> AlexeyPerevalov_: if you ask me, I prefer a separate class or extend
14:24:15 <apuimedo> only extend if it really shares almost all impl
14:24:19 <AlexeyPerevalov_> apuimedo: I think, yes. That's all; I would like to reuse the multi-vif approach here.
14:24:40 <apuimedo> very well
14:25:12 <AlexeyPerevalov_> apuimedo: I think a separate class would be simpler at the initial prototyping stage.
14:25:30 <ltomasbo> sounds good
14:25:33 <apuimedo> AlexeyPerevalov_: I agree
14:25:42 * apuimedo doesn't like most inheritance
14:25:54 <apuimedo> except if it is property for me
14:25:56 <apuimedo> :P
14:26:19 <AlexeyPerevalov_> apuimedo: ;)
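A rough sketch of the separate-class approach discussed above, showing only the OVS side of a vhostuser binding via ovs-vsctl; the class and method names (connect/disconnect) and the vif representation are assumptions, not kuryr's actual binding driver interface, which should be modeled on VIFOpenVSwitchDriver in the tree:

    # Sketch: create a netdev (userspace/DPDK) bridge and a vhostuser port
    # instead of the veth pair that VIFOpenVSwitchDriver wires up.
    import subprocess


    def _ovs_vsctl(*args):
        # Thin wrapper around ovs-vsctl; raises CalledProcessError on failure.
        subprocess.check_call(('ovs-vsctl',) + args)


    class VIFVHostUserDriver(object):
        def connect(self, vif_id, bridge, port_name,
                    sock_dir='/var/run/openvswitch'):
            # The bridge must use the userspace (netdev) datapath for DPDK.
            _ovs_vsctl('--may-exist', 'add-br', bridge,
                       '--', 'set', 'bridge', bridge, 'datapath_type=netdev')
            # dpdkvhostuser makes ovs-vswitchd create the unix socket that the
            # DPDK application inside the container attaches to.
            _ovs_vsctl('--may-exist', 'add-port', bridge, port_name,
                       '--', 'set', 'Interface', port_name,
                       'type=dpdkvhostuser',
                       'external_ids:iface-id=%s' % vif_id)
            return '%s/%s' % (sock_dir, port_name)

        def disconnect(self, vif_id, bridge, port_name):
            _ovs_vsctl('--if-exists', 'del-port', bridge, port_name)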
14:26:42 <AlexeyPerevalov_> so I'm assigned to this task and I'm now working on a prototype.
14:26:58 <apuimedo> great!
14:27:02 <apuimedo> looking forward to seeing this
14:27:07 <ltomasbo> great! ping us on the kuryr channel if you need some help!
14:27:20 <apuimedo> indeed
14:27:22 <ltomasbo> any other topic on kuryr-kubernetes?
14:27:23 <AlexeyPerevalov_> sure
14:27:54 <AlexeyPerevalov_> ltomasbo: I don't have anything
14:29:38 <ltomasbo> ok, I just want to give an update on the status of the network-per-namespace handler/driver
14:29:59 <apuimedo> very well
14:30:03 <ltomasbo> the first patch, adding network resources when creating the namespace, is already merged, and the ones for deleting those resources are in review
14:30:17 <ltomasbo> and that's all from my side
14:30:22 <dulek> :)
14:30:27 <ltomasbo> apuimedo, dulek, do you have anything to discuss?
14:30:41 <apuimedo> I have to say that I really like this CRD look
14:30:50 <apuimedo> better than the huge annotations we do for the vif
14:31:02 <ltomasbo> apuimedo, thanks!
14:31:07 <dulek> Not really, I'm sprinting now to finish code for daemon VIF choice and A/P HA.
14:31:32 <apuimedo> ltomasbo: https://review.openstack.org/#/c/562249/18/kuryr_kubernetes/controller/drivers/namespace_subnet.py
14:31:45 <ltomasbo> dulek, nice! Looking forward to take a look at the patch sets
14:31:49 <apuimedo> I do not think it is a good idea to delete the neutron resources after the CRD
14:31:54 <ltomasbo> dulek, are you planning to have a devref about it?
14:31:55 <dulek> ltomasbo: Me too. :D
14:32:16 <dulek> ltomasbo: Definitely, I think it'll be unreviewable without documentation.
14:32:22 <apuimedo> if you delete the CRD first and then fail to delete the neutron resources, you won't have the mapping in k8s anymore
14:32:29 <apuimedo> and you'll have untracked crap in Neutron
14:32:43 <apuimedo> dulek: sounds like a challenge
14:33:16 <dulek> apuimedo: Ah, I think I've pointed it out in one of the patchsets.
14:33:57 <ltomasbo> apuimedo, you are right
14:34:05 <ltomasbo> I'll change the order
14:34:26 <apuimedo> thanks ltomasbo
14:34:55 <apuimedo> cause half of the point of using CRDs is that cleanup is then just going through unreferenced CRDs
14:35:06 <apuimedo> and deleting the resources they point to
14:35:26 <ltomasbo> yep, I thought I had it in the opposite order, perhaps I did it in the rollback method...
14:35:59 <apuimedo> ok
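A minimal ordering sketch of the point made above: delete the Neutron resources first and only then remove the CRD, so a Neutron failure leaves the k8s-side mapping in place for a later retry or cleanup. It assumes openstacksdk and the kubernetes client; the CRD group/version/plural below are placeholders, not necessarily what kuryr-kubernetes uses:

    from kubernetes import client, config
    import openstack

    CRD_GROUP = 'openstack.org'   # assumed
    CRD_VERSION = 'v1'            # assumed
    CRD_PLURAL = 'kuryrnets'      # assumed


    def delete_namespace_network(crd_name, net_id, subnet_id):
        conn = openstack.connect()
        # Neutron side first: if either call fails, we bail out before
        # touching the CRD, so the reference to the leaked resources stays.
        conn.network.delete_subnet(subnet_id, ignore_missing=True)
        conn.network.delete_network(net_id, ignore_missing=True)

        # Only once Neutron is clean do we drop the k8s-side record.
        config.load_kube_config()
        client.CustomObjectsApi().delete_cluster_custom_object(
            CRD_GROUP, CRD_VERSION, CRD_PLURAL, crd_name,
            body=client.V1DeleteOptions())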
14:36:40 <apuimedo> ltomasbo: I would appreciate it if you could try https://review.openstack.org/#/c/571707/ when you are bored :P
14:37:06 <ltomasbo> apuimedo, I already did, and it failed
14:38:01 <apuimedo> ltomasbo: at build or at runtime?
14:38:06 <ltomasbo> at build
14:38:39 <apuimedo> :/
14:38:53 <apuimedo> ltomasbo: if you can, give me the output in the patchset as a comment
14:38:56 <apuimedo> and I'll address
14:39:18 <ltomasbo> I quickly tried it, I'll give it another try and let you know!
14:39:34 <ltomasbo> I should not need any extra requirements other than buildah, right?
14:39:48 <apuimedo> ltomasbo: which distro were you trying it on?
14:39:58 <apuimedo> you need to have yum and el7 repos available
14:40:27 <ltomasbo> centos 7.4
14:40:33 <apuimedo> ok
14:40:47 <apuimedo> so yes, I really want a report of the failure
14:40:50 <apuimedo> I was using 7.5
14:41:00 <apuimedo> and also I think I tried it on my archlinux
14:41:07 <ltomasbo> xD
14:41:07 <apuimedo> (which has buildah as well)
14:41:13 <ltomasbo> ok, I'll do that!
14:41:45 <ltomasbo> should we move to other topics?
14:42:09 <ltomasbo> does anyone here want to discuss any topic regarding kuryr, kuryr-libnetwork or kuryr-kubernetes?
14:43:52 <apuimedo> sure
14:43:59 <apuimedo> I don't have anything
14:44:55 <ltomasbo> so, I guess we can close the meeting earlier today and get 15 minutes back!
14:45:02 <ltomasbo> thanks everyone for attending!
14:45:50 <ltomasbo> looking forward to seeing the new A/P and vhostuser patch sets!
14:46:08 <ltomasbo> #endmeeting