15:00:53 <apuimedo> #startmeeting kuryr
15:00:54 <openstack> Meeting started Mon Feb 15 15:00:53 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:57 <openstack> The meeting name has been set to 'kuryr'
15:01:08 <gsagie_> Hello all
15:01:32 <apuimedo> Hello and welcome to yet another kuryr weekly meeting
15:01:39 <apuimedo> who's here for the show?
15:01:49 <salvorlando> Me
15:02:00 <fawadkhaliq> o/
15:02:02 <salvorlando> But I am here to watch the show!
15:02:16 <apuimedo> salvorlando: I have a couple of questions to ask you :P
15:02:28 <apuimedo> so don't sit too far back
15:03:00 <apuimedo> #info gsagie_ salvorlando fawadkhaliq and apuimedo are here for the meeting
15:03:16 <apuimedo> thanks for joining
15:03:25 <apuimedo> #topic announcements
15:04:31 <apuimedo> After the recent discussion in the neutron community about what should and should not be in the Neutron big stadium, we will be submitting the request for Kuryr to be a project on the big tent
15:04:40 <apuimedo> proposing Gal Sagie as PTL
15:05:03 <fawadkhaliq> excellent!
15:05:03 <apuimedo> and we'll request a design room for the Austin summit
15:05:18 <apuimedo> :-)
15:05:34 <gsagie_> yeah, i think that the last day of the last summit was probably the most effective one when we managed to all sit together
15:05:40 <salvorlando> You will probably have more luck winning the Euromillions
15:05:41 <gsagie_> so hopefully we will have more time this summit
15:06:07 <gsagie_> salvorlando: why? what is the criteria to have a room?
15:06:11 <fawadkhaliq> salvorlando: rofl
15:06:19 <apuimedo> salvorlando: we have a failsafe plan!
15:06:32 <apuimedo> KFC design room
15:06:34 <apuimedo> :D
15:06:45 <fawadkhaliq> apuimedo: KFC design room ftw!
15:06:47 <apuimedo> #link https://vimeopro.com/midokura/734915331
15:06:59 <apuimedo> I made a webinar last week
15:07:25 <apuimedo> it shows the communication with lbaas and container <-> VMs
15:07:56 <gsagie_> cool
15:08:03 <apuimedo> salvorlando: I need more information on surrendering the email domain to the foundation, do you have some contact I could use for that?
15:08:32 <fawadkhaliq> apuimedo: very cool, thanks for sharing
15:08:39 <gsagie_> apuimedo: which domain? kuryr.org?
15:08:42 <gsagie_> :(
15:09:33 <apuimedo> yes, salvorlando suggested that I should give control of it to the foundation
15:09:42 <apuimedo> I created it to have cool links for the demos
15:09:54 <apuimedo> like http://webinar.kuryr.org:8000/
15:09:56 <apuimedo> :-)
15:10:16 <apuimedo> which has two containers running behind a load balancer
15:10:20 <salvorlando> I was joking btw
15:10:35 <apuimedo> oh, I thought it was a requirement salvorlando
15:10:40 <apuimedo> you totally got me!
15:11:02 <apuimedo> ok, just so you all know, if any of you want to use the domain for demos, just shoot me an email
15:11:08 <apuimedo> and I'll add the necessary dns entries
15:11:28 <apuimedo> #topic deployment
15:12:11 <apuimedo> #info apuimedo sent https://review.openstack.org/#/c/279320/
15:13:22 <apuimedo> salvorlando: fawadkhaliq: gsagie_: I'd like to hear more about how you bind
15:13:46 <apuimedo> to make sure that I'm not overlooking anything for the container
15:13:58 <apuimedo> to make the common base container as complete as possible
15:14:18 <apuimedo> I'll probably be submitting an ovs one soon
15:14:56 <gsagie_> apuimedo: what do you mean an ovs one? you have one container per backend?
15:15:13 <gsagie_> or better one Dockerfile
15:15:40 <apuimedo> there should be kuryr/libnetwork:midonet kuryr/libnetwork:ovs kuryr/libnetwork:ovn
15:16:05 <apuimedo> or dragonflow/kuryr:1.0.0
15:16:22 <apuimedo> but what I meant is that yes, vendors should have their own dockerfile
15:16:40 <apuimedo> that does a "from kuryr/libnetwork:1.0.0"
15:16:52 <apuimedo> and adds a layer with their binding dependencies
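The vendor layering described above can be sketched as a Dockerfile. This is a hypothetical example: the base image tag matches the naming discussed in the meeting, but the package installed here (Open vSwitch tools for an ovs-flavoured image) is only an illustration of "binding dependencies".

```dockerfile
# Hypothetical vendor layer on top of the common Kuryr base image
# (image tag and packages are illustrative, not published artifacts).
FROM kuryr/libnetwork:1.0.0

# Add only the vendor-specific binding dependencies; the base image
# already carries the Kuryr libnetwork driver itself.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openvswitch-switch && \
    rm -rf /var/lib/apt/lists/*
```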
15:16:58 <fawadkhaliq> apuimedo: for PLUMgrid/iovisor, it will follow similar mechanism as you have in there. I will review and comment.
15:17:10 <apuimedo> gsagie_: in your case, df-db would be added
15:17:25 <apuimedo> fawadkhaliq: very well
15:17:40 <apuimedo> the design suggestion, of course, is that the neutron agent is on a separate container
15:17:47 <apuimedo> where possible
15:17:47 <gsagie_> apuimedo: sounds good, the only thing i am wondering is how for example if we take ovn/dragonflow the code inside the container will perform the binding to OVS
15:17:50 <gsagie_> in the compute node
15:18:13 <apuimedo> gsagie_: we run the kuryr container on the host networking namespace
15:18:19 <apuimedo> so ovs-vsctl works fine
15:18:21 <apuimedo> ;-)
15:18:25 <apuimedo> this I already tested
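A minimal sketch of what "running the kuryr container on the host networking namespace" could look like; the image name, the privileged flag, and the OVS socket mount are assumptions for illustration, not a documented invocation.

```shell
# Run the Kuryr container in the host network namespace so that
# ovs-vsctl inside the container talks to the host's OVS daemon.
docker run -d --net=host --privileged \
    -v /var/run/openvswitch:/var/run/openvswitch \
    kuryr/libnetwork:ovs
```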
15:18:44 <gsagie_> okie cool
15:18:50 <apuimedo> fawadkhaliq: will that split containerization work for you guys?
15:19:39 <apuimedo> salvorlando: I'll be checking kolla this coming weekend
15:19:50 <fawadkhaliq> apuimedo: I will have to check that. Let me report back.
15:19:53 <salvorlando> apuimedo your approach to containers makes sense
15:19:55 <apuimedo> gotta talk with SamYaple
15:20:02 <apuimedo> fawadkhaliq: very well
15:20:03 <salvorlando> Let me know if I can help with kolla
15:20:29 <apuimedo> salvorlando: alright, so for that, since kolla only has ovs, I'll get ready the ovs version of the container
15:20:43 <apuimedo> salvorlando: or do you have ovn kolla support?
15:21:38 <gsagie_> i think that reaching ovn support from ovs should be fairly easy
15:22:14 <apuimedo> gsagie_: well, not so much, you'd need kolla to deploy all the components that ovn uses
15:22:17 <apuimedo> probably
15:22:25 <apuimedo> the extra agents and stuff
15:22:48 <apuimedo> anyway, let's try to get ovs+kolla in the coming two weeks
15:22:51 <apuimedo> :-)
15:22:54 <gsagie_> okie
15:23:03 <apuimedo> #action salvorlando apuimedo to try ovs + kolla
15:23:13 <salvorlando> Thanks apuimedo :-)
15:23:14 <apuimedo> (+kuryr, ofc)
15:24:08 <apuimedo> #topic nested
15:24:16 <apuimedo> fawadkhaliq: the floor is yours
15:24:26 <fawadkhaliq> apuimedo: thanks
15:24:50 <fawadkhaliq> I addressed most of the early comments.
15:24:58 <fawadkhaliq> magnum folks have provided some feedback
15:25:14 <apuimedo> fawadkhaliq: in the review?
15:25:18 <fawadkhaliq> I would like to discuss one point here that taku and I discussed last week as well, needed broader audience
15:25:22 <fawadkhaliq> apuimedo: correct.
15:25:33 <apuimedo> fawadkhaliq: raise it, raise it ;-)
15:26:07 <fawadkhaliq> the ask is to support networking via Kuryr when native container tools are used instead of the Magnum API, for example the Docker or Kubernetes CLIs.
15:26:36 <apuimedo> fawadkhaliq: oh, you mean that you deploy a bay
15:26:40 <fawadkhaliq> While it is possible and I have a way to make it work, it would take us back to the path of making the Kuryr agent communicate via the management plane to other endpoints :(
15:26:55 <fawadkhaliq> so here's what I am thinking..
15:27:06 <apuimedo> and you want to get the containers networked whether the user goes through magnum api or swarm/k8s, right?
15:27:06 <mspreitz> some of us plan to use Neutron and Docker without Magnum
15:27:24 <fawadkhaliq> apuimedo: correct
15:27:32 <apuimedo> mspreitz: welcome ;-)
15:27:32 <gsagie_> fawad: but i thought the plan was to integrate with the COE's themselves and not with Magnum
15:27:51 <fawadkhaliq> mspreitz: https://review.openstack.org/#/c/269039/ please review :-)
15:28:14 <gsagie_> What information do you need from Magnum?
15:28:29 <fawadkhaliq> gsagie_: COE via magnum currently :)
15:28:57 <apuimedo> gsagie_: well, my idea was, and fawadkhaliq I'll appreciate it if you tell me if I'm talking nonsense, that we have:
15:29:36 <fawadkhaliq> gsagie_: two paths. 1. if we bypass Magnum, then Kuryr agent communicates with other OpenStack endpoints. 2. If we go via Magnum API, we may be able to avoid it.
15:29:43 <apuimedo> 1. Magnum deploys the bay controller nodes with information as to where to reach Neutron and that such communication is allowed
15:30:07 <fawadkhaliq> so if we are okay with this communication, then we can update and make it happen.
15:30:19 <apuimedo> 2. We have an integration with bay types that speaks with neutron and does the ipam and port management
15:30:45 <apuimedo> 3. the worker nodes just get minimal information like what is the vlan to connect to
15:31:00 <mspreitz> I am trying to connect containers to Neutron networks without any VMs in sight.  Is anybody else here interested in that?
15:31:13 <fawadkhaliq> apuimedo: the idea is similar, except that more information is passed via Magnum
15:31:19 <gsagie_> mspreitz: this is not the use case we are talking about now
15:31:28 <fawadkhaliq> however..
15:31:54 <fawadkhaliq> apuimedo: I see your point and I see this communication seems reasonable to us..
15:31:59 <gsagie_> fawadkhaliq, apuimedo: ok that makes sense, but now why would anyone want to run this without Magnum? and if they do, why is this different than a regular "bare metal" integration with Docker/Kubernetes?
15:32:00 <fawadkhaliq> given that, let's update
15:32:07 <apuimedo> mspreitz: yes, that comes just after this topic ;-)
15:32:16 <apuimedo> we went with nested first today
15:32:40 <apuimedo> gsagie_: it's not to run it without magnum
15:32:58 <apuimedo> it's to let the magnum user consume the k8s/swarm api instead of the magnum one for service creation
15:33:05 <fawadkhaliq> apuimedo: +1, Magnum does one step and then rest can be done via native tools.
15:33:05 <apuimedo> did I get it right fawadkhaliq ?
15:33:21 <gsagie_> okie got it now
15:33:40 <fawadkhaliq> apuimedo: correct. essentially a hybrid workflow still gets the consumer the network.
15:33:54 <apuimedo> :-)
15:34:08 <fawadkhaliq> so I wanted to make sure you guys are onboard, before I propose the change.
15:34:17 <fawadkhaliq> Looks like we are all on the same page with this
15:34:20 <fawadkhaliq> so I will go ahead
15:34:25 <apuimedo> fawadkhaliq: can you summarize the change in one sentence?
15:34:36 <apuimedo> (I'll put it in info comment)
15:35:05 <fawadkhaliq> apuimedo: goal is to facilitate networking even when native tools are used.
15:35:24 <fawadkhaliq> apuimedo: that would require Kuryr agent to have communication with other OpenStack endpoints.
15:35:41 <apuimedo> #info nested goal: to provide networking whether magnum users consume magnum or bay specific apis
15:35:51 <gsagie_> fawadkhaliq: i personally think, without giving it enough thought that this is the correct path either way
15:35:52 <fawadkhaliq> apuimedo: +1
15:36:17 <apuimedo> fawadkhaliq: I think the communication can be very minimal and we should keep it like that
15:36:37 <apuimedo> for swarm it may be more complicated, but for k8s probably only the controller nodes will need it
15:36:59 <fawadkhaliq> apuimedo: gsagie_, I am concerned about the security aspects of this communication. But that's something we can improve and evolve.
15:37:09 <gsagie_> yep, the end points should only do the binding similar to our Kubernetes integration plan
15:37:14 <apuimedo> fawadkhaliq: exactly
15:37:38 <fawadkhaliq> thats all on nested.
15:37:45 <fawadkhaliq> very useful discussion :-)
15:37:46 <apuimedo> #info agreement to limit access to the management as much as possible
15:38:16 <apuimedo> #topic k8s integration
15:38:20 <apuimedo> mspreitz: here we go ;-)
15:38:26 <apuimedo> thanks fawadkhaliq
15:38:37 <fawadkhaliq> apuimedo: welcome
15:39:06 <apuimedo> alright then
15:39:15 <apuimedo> We do not have Irena here
15:39:21 <gsagie_> or banix ?
15:39:46 <apuimedo> mspreitz: did you move forward with the plan to make cni call libnetwork?
15:39:59 <banix> gsagie_: hi
15:40:00 <mspreitz> I have a draft on which I am working
15:40:06 <gsagie_> ohh hi :)
15:40:20 <mspreitz> Hope to get it running today
15:40:25 <apuimedo> mspreitz: :O
15:40:34 <banix> have been sick all week. getting back to things....
15:40:55 <fawadkhaliq> banix: sorry to hear. hope you feel well.
15:40:57 <apuimedo> are you having a network configured in CNI vendor on each worker machine
15:41:10 <gsagie_> banix: happy to hear you recovering
15:41:17 <apuimedo> and then having the cni driver call docker libnetwork attaching?
15:41:26 <apuimedo> banix: take care ;-)
15:41:29 <mspreitz> It will be more interesting to me once Kuryr can connect to existing networks instead of make new ones.
15:42:26 <mspreitz> My CNI plugin calls `docker network`
15:42:35 <gsagie_> mspreitz: yes that's definitely something we had in mind to do
15:42:49 <mspreitz> The `docker network create` needs to be done just once, thanks to Kuryr.
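The one-time `docker network create` that the CNI plugin relies on could look like this; the driver and IPAM driver names being `kuryr`, and the subnet, are assumptions for illustration.

```shell
# Create the network once through the Kuryr libnetwork driver;
# Kuryr materializes it as a Neutron network with a matching subnet,
# and every later attach on any node reuses it.
docker network create -d kuryr --ipam-driver=kuryr \
    --subnet 10.10.0.0/24 k8s-net
```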
15:42:55 <apuimedo> mspreitz: good point
15:43:14 <banix> apuimedo: gsagie_ fawadkhaliq thanks (and sorry for the interruption)
15:43:19 <apuimedo> mspreitz: couldn't you create the networks as part of the deployment?
15:43:31 <apuimedo> you add a worker node, you do a docker network create
15:43:44 <mspreitz> apuimedo: I am not sure I understand the question.
15:43:50 <mspreitz> which networks, which deployment?
15:44:07 <mspreitz> Networks are not per worker node
15:44:27 <apuimedo> mspreitz: which network topology are you using
15:44:29 <apuimedo> ?
15:44:36 <apuimedo> network per service?
15:44:51 <mspreitz> I am taking baby steps
15:44:55 <apuimedo> and what are the reasons you need to connect to pre-existing networks
15:44:58 <apuimedo> ?
15:45:03 <mspreitz> the one I want to take first is to connect containers to a provider network
15:45:11 <mspreitz> But I can't yet, Kuryr won't connect to that.
15:45:13 <apuimedo> aha
15:45:15 <apuimedo> :-)
15:45:19 <apuimedo> thanks mspreitz
15:45:40 <mspreitz> My next step would be to have a tenant network per K8s namespace, probably
15:45:54 <mspreitz> That is not the way k8s wants to go
15:46:02 <mspreitz> it's just a possible next baby step.
15:46:05 <apuimedo> mspreitz: is it okay for your draft to hack it?
15:46:14 <apuimedo> the first step
15:46:27 <apuimedo> what I mean is
15:46:42 <apuimedo> do `docker network create myprovidernet`
15:46:47 <apuimedo> you'll get a uuid
15:47:06 <apuimedo> you delete the network that is created in neutron with that uuid
15:47:16 <apuimedo> and update the provider network name to have that uuid
15:47:38 <apuimedo> It would be very useful to know if that would work
15:47:44 <mspreitz> Oh, rename the pre-existing provider nework.  Hadn't thought of that
15:47:54 <mspreitz> I can give it a try.
15:47:55 <apuimedo> mspreitz: I have a hunch that it should work
15:47:58 <apuimedo> if it does
15:48:34 <apuimedo> probably connecting to existing networks will be a bit easier in "hacky mode"
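The rename workaround described above, in CLI form. This is a sketch under the assumptions stated in the discussion: network names here are examples, and it presumes Kuryr names (and later looks up) the Neutron network after the Docker network ID.

```shell
# 1. Create the Docker network through the Kuryr driver and note its ID.
NET_ID=$(docker network create -d kuryr --ipam-driver=kuryr myprovidernet)

# 2. Delete the Neutron network Kuryr just created for that ID.
neutron net-delete "$NET_ID"

# 3. Rename the pre-existing provider network to that ID so that
#    subsequent Kuryr lookups by name resolve to it instead.
neutron net-update existing-provider-net --name "$NET_ID"
```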
15:48:41 <mspreitz> I suppose you folks can tell me: will future libnetwork calls to Kuryr, to attach containers, find the Neutron network by Neutron network name or by UUID?
15:48:59 <gsagie_> apuimedo: maybe you can add an action for me to add this support
15:49:13 <apuimedo> mspreitz: yes, you'll pass --neutron-net a450ae64-6464-44ee-ab20-a5e710026c47
15:49:21 <gsagie_> i don't think it should be too hard to add, i will write a spec
15:49:31 <apuimedo> well, more as a tag
15:49:42 <mspreitz> does `--neutron-net` take a name or a UUID?
15:49:50 <gsagie_> mspreitz: we can support both
15:49:56 <apuimedo> gsagie_: it can be a workaround while we can't put darned tags on resources
15:50:01 <apuimedo> mspreitz: uuid
15:50:03 <gsagie_> mspreitz: we meant to use the tags in Neutron once this feature is completed
15:50:06 <banix> gsagie_: i have the code that does that
15:50:16 <banix> there is one issue that we need to deal with
15:50:25 <apuimedo> banix: you mean the name change?
15:50:35 <gsagie_> apuimedo: we can just do according to the name right now
15:50:39 <gsagie_> i think thats what banix has
15:50:42 <banix> i meant using existing networks
15:50:53 <gsagie_> use existing network by Neutron name
15:51:08 <apuimedo> gsagie_: you should not do it by name if you are then changing the current name :P
15:51:09 <banix> the issue is right now we rely on neutron network name
15:51:18 <mspreitz> If work has to be done, I'd rather see banix  go for the jugular
15:51:19 <apuimedo> exactly
15:51:38 <banix> and for an existing neutron network, for the rest of our code to work, we have to set its name to what we want; not desirable but it works
15:51:58 <apuimedo> banix: yes, that's what I proposed mspreitz
15:51:59 <gsagie_> banix: or keep the mapping internally
15:52:02 <gsagie_> or in a DB
15:52:05 <banix> i think once we have the tagging, things will look much cleaner
15:52:17 <apuimedo> gsagie_: I'd rather knock my teeth out than adding a DB to Kuryr :P
15:52:30 <gsagie_> heh ok
15:52:37 <apuimedo> Kuryr is parasitic by nature
15:52:44 <banix> gsagie_: yeah, have been trying to see how to use the docker kv but it doesn't look promising
15:52:51 <apuimedo> it should use the DBs of what it connects to
15:53:09 <apuimedo> banix: the inflight restriction is a bit damning there
15:53:18 <banix> do you guys agree neutron tagging will solve this problem?
15:53:25 <apuimedo> banix: wholeheartedly
15:53:43 <mspreitz> What is "the inflight restriction" ?
15:53:55 <banix> the blueprint there seems almost approved; has one +2 last i checked
15:54:09 <apuimedo> mspreitz: in docker libnetwork you can't access a kv resource in the same operation that is creating it
15:54:09 <gsagie_> yes
15:54:27 <apuimedo> so, for example, let's say that we are creating a network to connect to a Neutron net
15:54:35 <mspreitz> I understand
15:54:37 <apuimedo> kuryr receives the call to the network creation
15:54:40 <mspreitz> yes, that's a pain
15:54:49 <apuimedo> but it can't go to the kv to modify it to add data
15:54:52 <apuimedo> mspreitz: indeed
15:55:04 <apuimedo> when I feel hacky, I'd add deferred actions xD
15:55:12 <apuimedo> that are executed just after we return
15:55:28 <mspreitz> that's a real pain for any higher level automation
15:55:31 <apuimedo> but then I think, well, tags should be coming soon to neutron
15:55:39 <apuimedo> mspreitz: and racy
15:55:58 <apuimedo> and you'd lose it if the kuryr daemon restarts
15:56:11 <mspreitz> like you said, "hacky"
15:56:13 <apuimedo> anyway, mspreitz let us know if the hacky way works
15:56:20 <apuimedo> the one of renaming
15:56:24 <apuimedo> #topic general
15:56:27 <mspreitz> hmm, I thought I was just told it will not
15:56:46 <apuimedo> mspreitz: banix said that changing the name in neutron will work
15:56:54 <mspreitz> `docker network attach` will cause Kuryr to find a Neutron network by its Neutron UUID, right?
15:57:22 <apuimedo> mspreitz: by docker network id
15:57:29 <apuimedo> (which maps to the neutron name)
15:57:41 <mspreitz> So it will look up the network in Neutron by Neutron network name?
15:57:41 <apuimedo> (while we don't have neutron resource tags)
15:58:01 <banix> mspreitz: what apuimedo said
15:58:03 <apuimedo> mspreitz: docker network attach dockernetname
15:58:08 <mspreitz> Then I'll give it a try
15:58:23 <apuimedo> dockernetname -> dockernetid -> (to be found on neutron net name)
15:58:31 <apuimedo> anybody have anything more for the last two minutes?
15:58:42 <banix> looking at the log of this meeting i see Kolla being mentioned; hui kang has worked on this
15:59:07 <banix> unfortunately he had to leave for a family emergency last week; i will check with him and see where he is
15:59:36 <banix> See what you did. Toni got upset!
16:01:33 <banix> any co chairs to close the call?
16:01:43 <fawadkhaliq> anyone else going to end the meeting :-)
16:02:03 <fawadkhaliq> #endmeeting