18:05:52 <daneyon> #startmeeting container-networking
18:05:53 <openstack> Meeting started Thu Jul 23 18:05:52 2015 UTC and is due to finish in 60 minutes.  The chair is daneyon. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:05:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:05:57 <openstack> The meeting name has been set to 'container_networking'
18:06:01 <adrian_otto> ok, that's fine
18:06:07 <daneyon> sorry all!
18:06:29 <daneyon> #topic roll call
18:06:34 <adrian_otto> Adrian Otto
18:06:37 <s3wong> Stephen Wong
18:06:38 <daneyon> here
18:06:39 <hongbin> o/
18:06:39 <Tango> Ton Ngo
18:06:44 <eghobo> o/
18:06:44 <arajagopal> Aditi Rajagopal
18:06:45 <sicarie> o/
18:06:46 <mestery> o/
18:06:56 <adrian_otto> thanks for joining mestery
18:07:18 <daneyon> thx all for joining
18:07:28 <suro-patz> o/
18:07:29 <daneyon> #topic Review Networking Spec Submission/Feedback
18:07:39 <daneyon> #link https://review.openstack.org/#/c/204686/
18:08:01 <daneyon> thx for everyone's input into the spec and thx for all the feedback
18:08:07 <adrian_otto> for those who have seen the review, we have a healthy debate in progress
18:08:24 <daneyon> i would like to open it up for questions
18:08:33 <adrian_otto> o/
18:09:01 <daneyon> I am still in the process of providing responses to the latest feedback
18:09:38 <mestery> I'm really confused by what the spec is trying to implement, it appears to be ANOTHER network layer to plug into, and as Kevin Fox indicated, it's unclear it's needed. :)
18:09:40 <adrian_otto> so I wanted to indicate that I'm sympathetic to the feedback offered by our Neutron team
18:09:42 <mestery> I'll open with that :)
18:10:01 <mestery> adrian_otto: ++, and thanks :)
18:10:08 <adrian_otto> and I wanted to give us a chance to consider an iterative approach
18:10:46 <adrian_otto> I want to better understand what we would need in order to allow both flannel and libnetwork to be gracefully supported in Magnum
18:11:01 <daneyon> mestery the spec is trying to implement a container networking model that provides choice in container networking implementations, while providing a simple user experience
18:11:07 <adrian_otto> and highlight why we are worried that libnetwork may not support Flannel
18:11:37 <daneyon> i think the biggest issue is by aligning with Kuryr, we align ourselves only to libnetwork
18:11:42 <mestery> I'm worried about having another place for networking plugins in the stack, how this will interact with the underlying system, and adding more encaps to packets as they traverse the network
18:11:52 <daneyon> which means we would no longer support flannel
18:12:19 <eghobo> daneyon: why?
18:12:20 <daneyon> if the magnum community is ok with that, I would be more open to aligning with kuryr
18:12:38 <daneyon> eghobo flannel is not a libnetwork remote driver
18:12:39 <mestery> Why won't flannel integrate with libnetwork?
18:12:57 <daneyon> that's not to say it could not become a remote, but it's not today and i have no indication from coreos that it will be
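(Editor's note: for context, a libnetwork "remote driver" is an out-of-process plugin that Docker talks to over a JSON/HTTP protocol. The sketch below shows the handshake calls such a flannel plugin would have to answer; the endpoint names follow libnetwork's remote driver protocol, and everything else is illustrative.)

```python
import json

# Minimal sketch of the libnetwork remote-driver handshake a hypothetical
# flannel plugin would have to implement. Endpoint names follow Docker's
# remote driver JSON/HTTP protocol; the handler itself is illustrative.
def handle(path):
    """Return the JSON body a remote driver would send for `path`."""
    if path == "/Plugin.Activate":
        # Tell Docker which plugin interfaces this process implements.
        return {"Implements": ["NetworkDriver"]}
    if path == "/NetworkDriver.GetCapabilities":
        # "global" scope means one network can span hosts, as flannel's does.
        return {"Scope": "global"}
    # A real driver also answers CreateNetwork, CreateEndpoint, Join, etc.
    return {"Err": "not implemented"}

print(json.dumps(handle("/Plugin.Activate")))
```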
18:12:58 <s3wong> daneyon: would it make sense to start another Neutron sub-project that can support Flannel model, or better yet, can we extend Kuryr to support Flannel?
18:13:12 <mestery> s3wong: ++
18:13:15 <eghobo> yes, but it's the operator's choice how to deploy kub
18:13:30 <adrian_otto> daneyon: you had indicated that there may be a licensing issue driving that reluctance. Did I remember that right?
18:13:53 <daneyon> adrian_otto to support both, we need an abstraction layer like what the spec proposes
18:14:20 <adrian_otto> daneyon: unless we contribute a flannel plugin for libnetwork, right?
18:14:21 <mestery> daneyon: And this is the problem, introducing another networking plugin layer into openstack doesn't seem to make sense to me.
18:14:30 <mestery> daneyon: Especially when we're consolidating to a single one at the moment.
18:14:46 <daneyon> mestery whether a magnum network_backend adds encap is based on the capabilities of the backend. many backends offer non encap options
18:15:21 <adrian_otto> that's true. Actually we have flannel configurations that use non-encap.
18:15:26 <eghobo> mestery: +1, it makes sense to use Neutron for all network management
18:15:31 <adrian_otto> based on nexthop setup
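(Editor's note: the non-encap, nexthop-based flannel setup mentioned here is selected through flannel's network config, which flannel reads as JSON from etcd, conventionally under /coreos.com/network/config. A hedged sketch of such a config; the CIDR is an example value.)

```python
import json

# Illustrative flannel network config. The "host-gw" backend programs
# next-hop routes on each host instead of encapsulating traffic -- the
# non-encap setup discussed above. "vxlan" or "udp" would be the
# encapsulating alternatives. The address range is an example.
config = {
    "Network": "10.244.0.0/16",      # cluster-wide container range
    "SubnetLen": 24,                 # each host leases a /24 from it
    "Backend": {"Type": "host-gw"},  # non-encap, nexthop-based backend
}
print(json.dumps(config))
```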
18:15:33 <daneyon> mestery i think because coreos has aligned themselves with google to battle Docker in the container space, but i would suggest pinging #coreos-dev
18:16:15 <eghobo> daneyon: but I believe Tectonic ships with Docker now
18:16:28 <daneyon> s3wong I would like to see Kuryr not be directly tied to libnetwork, but that is their decision
18:16:32 <adrian_otto> eghobo: I'm not 100% convinced of that yet. While I want to leverage neutron, we also want containers that run on Magnum to run on non-openstack clouds as well.
18:16:56 <daneyon> adrian_otto not sure about the lic issue. I think it may be more of a strategy issue.
18:17:09 <s3wong> daneyon: and that is also up to us as community in general though. I am sure Kuryr does NOT have the intention to NOT work with Magnum
18:17:13 <adrian_otto> daneyon: ok, thanks for the clarity
18:17:21 <arajagopal> echoing mestery, I think that this would set a bad precedent...
18:17:31 <mestery> arajagopal: ++
18:17:32 <daneyon> adrian_otto we could contribute a flannel plugin to libnetwork... i would need to investigate the details
18:17:42 <s3wong> daneyon: so Magnum should give requirements to Kuryr
18:17:51 <adrian_otto> so let's imagine for a moment that we have such a plugin
18:18:14 <adrian_otto> s3wong: let's come back to Kuryr in a moment, please.
18:18:43 <eghobo> adrian_otto: if I don't want to use Neutron, I will configure Kub directly through heat template
18:19:10 <adrian_otto> in that world with a flannel plugin for libnetwork, we restrict Magnum to supporting only container runtimes that integrate with libnetwork.
18:19:11 <daneyon> mestery can u help me understand this aspect of Kuryr: "We also plan on providing common additional networking services API's from other sub projects"
18:19:18 <adrian_otto> let's explore that for a moment
18:19:29 <daneyon> seems like Kuryr is providing another networking abstraction layer
18:19:38 <adrian_otto> today, all of our supported bay types do use the Docker runtime
18:19:45 <adrian_otto> so there would be no short term impact
18:19:51 <mestery> adrian_otto: ++ so far
18:19:56 <adrian_otto> long term, if we did add another integration point
18:20:07 <adrian_otto> that would be adding a new API (an additive action)
18:20:13 <adrian_otto> and not a contract breaking change
18:20:28 <arajagopal> adrian_otto: ++ i like where this is going...
18:20:43 <daneyon> eghobo i believe tectonic uses flannel for container networking
18:20:49 <adrian_otto> so if an OCP runtime bay arrived in the future, we could address that question again
18:21:11 <daneyon> #action danehans to verify the container networking tech used by tectonic
18:21:14 <adrian_otto> daneyon: yes, it does.
18:21:34 <adrian_otto> it's flannel, you can #undo the action
18:21:36 <daneyon> adrian_otto thx for the clarification
18:22:10 <daneyon> need a few moments to catch up with the thread
18:23:04 <adrian_otto> ok, so I'm not saying we should take an abstraction layer or integration point totally off the table, but I'm asking if we could sequence our work in a way that defers that to a time when we know if it's worth the additional engineering and user education needed to clear that hurdle.
18:24:03 <adrian_otto> do we have a neutron plugin for libnetwork today? (sorry, I have not looked. The last time I wanted this it did not yet exist)
18:24:06 <eghobo> mestery: have you heard anything from Google about Kub integration with Neutron?
18:24:34 <mestery> eghobo: I have not, but I've seen some folks have already done this or some preliminary work here (e.g. some folks on the OVS list)
18:24:53 <mestery> adrian_otto: We do not, but that is what Kuryr is going to do
18:25:08 <adrian_otto> mestery: tx
18:25:38 <hongbin> I think we should consider integrating to Kuryr when it becomes mature
18:25:44 <daneyon> adrian_otto no, a neutron libnetwork remote driver does not exist
18:26:18 <adrian_otto> daneyon: do we have a reluctance on taking a dependency on Kuryr? Does the Magnum team have skills that we could lend to Kuryr to help advance it?
18:26:31 <daneyon> adrian_otto I think in the long-term, it's better to add openstack networking integration into the upstream container projects.
18:26:42 <adrian_otto> daneyon: ++
18:26:51 <daneyon> for example, flannel already supports gce and aws, why not openstack?
18:27:01 <adrian_otto> that's what we discussed in Vancouver.
18:27:09 <adrian_otto> we had a rather strong consensus on that decision point
18:27:35 <s3wong> adrian_otto: so both efforts (magnum-networking and Kuryr) are in early stage... it isn't like the dependency really is something mature waiting on something new
18:27:45 <daneyon> mestery i don't understand why Kuryr is needed for contributing to the upstream container projects directly
18:28:01 <adrian_otto> s3wong: Magnum aims to be ready for production workloads in a few months
18:28:22 <adrian_otto> we are much further along
18:28:22 <mestery> daneyon: Nor do I, and like I said, some folks are playing with integrating neutron with container networking.
18:29:32 <daneyon> mestery if the best place to integrate container networking with openstack is within the upstream container networking projects, then can you help me better understand what Kuryr will provide?
18:30:05 <mestery> daneyon: We'd need to talk to gsagie a bit for that, there is some confusion around Kuryr at this point.
18:30:12 * mestery goes into lurk mode now and will be async
18:30:37 <adrian_otto> mestery let me know beforehand that he could only attend for the first half of this meeting
18:30:49 <mestery> adrian_otto: Yes, internal meeting now, but will be async here, apologies
18:31:34 <adrian_otto> so daneyon: do you feel that we have clarity on the neutron POV?
18:31:42 <daneyon> mestery I really do appreciate your input and perspective.
18:31:53 <mestery> daneyon: Likewise, thanks! :)
18:32:12 <daneyon> adrian_otto I feel that I understand the Neutron Community's POV.
18:32:55 <daneyon> However, I think we need to figure out what direction we take magnum networking... here are some options:
18:32:58 <adrian_otto> ok, so my suggestion is to have us apply some creativity and identify ways to accommodate that POV while still advancing toward our objectives.
18:33:19 <daneyon> 1. We focus dev on upstream, add flannel as a remote libnetwork driver and only support libnetwork
18:33:34 * adrian_otto nods
18:33:41 <daneyon> 2. We proceed with the current spec, making minor tweaks as needed
18:33:57 <daneyon> 3. We work with Kuryr and follow that direction.
18:34:17 <daneyon> 4. We abandon flannel support and choose options 2 or 3
18:34:21 <eghobo> daneyon: we should check weaveworks, they may support libnetwork as well
18:34:26 <daneyon> any other options?
18:34:30 <adrian_otto> 4 is not acceptable
18:34:36 <hongbin> One more option from me
18:34:44 <daneyon> eghobo weave is a libn remote driver
18:34:59 <daneyon> flannel is currently the big stumbling block
18:35:01 <hongbin> 5. Prototype a neutron bay in parallel with flannel bay
18:35:12 <daneyon> 1 moment pls
18:35:18 <s3wong> yeah, I believe Weave already committed to having a libnet driver
18:35:54 <adrian_otto> libnet == libnetwork ?
18:35:58 <daneyon> yes
18:36:08 <adrian_otto> let's say that explicitly to avoid any ambiguity
18:36:35 <daneyon> so, we need to figure out which option to take
18:36:37 <s3wong> adrian_otto: yes, libnetwork. Sorry about the abbreviation
18:36:44 <adrian_otto> I'm interested in your idea hongbin.
18:37:00 <adrian_otto> daneyon: did you need a moment, or could we explore this a bit?
18:37:11 <hongbin> I would suggest directly integrating with neutron (without touching libnetwork flannel) at the very beginning
18:37:24 <hongbin> One concern is that integrating neutron with something (libnetwork, flannel) will make the development process slow at the beginning
18:37:33 <daneyon> since goog is joining it would be nice to get their input re flannel support
18:37:46 <adrian_otto> daneyon: ++
18:37:53 <daneyon> adrian_otto give me a moment to catch up
18:37:58 <s3wong> hongbin: so the Neutron Bay would support libnetwork with Kuryr; but Magnum Networking would also have libnetwork bay to avoid dependency on Kuryr?
18:38:52 <hongbin> s3wong: I am not clear what is behind a neutron bay right now, it is a prototype of something that works
18:39:09 <daneyon> hongbin for 5 do you mean a baymodel that uses neutron and 1 that uses flannel?
18:39:24 <s3wong> hongbin: yeah... I don't know if we've tried Neutron on containers. Perhaps with OVN driver?
18:40:13 <hongbin> daneyon: Yes, neutron bay just directly use neutron, another bay use flannel
18:40:16 <daneyon> hongbin until neutron supports container networking, i don't see that as possible.
18:40:42 <daneyon> and it appears that neutron will support container networking thru Kuryr.. which will use libnetwork
18:40:49 <daneyon> so we're back at square 1
18:40:56 <hongbin> s3wong: Or just allocate neutron ports in the running vm, and attach the port to container
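(Editor's note: hongbin's idea maps onto Neutron's existing port API. A hedged sketch of the request body a Neutron port-create call (POST /v2.0/ports) expects; wiring the resulting IP/MAC into the container, e.g. via a veth pair inside the VM, is out of scope here, and the network id, naming scheme, and device_owner value are illustrative.)

```python
# Sketch of allocating a Neutron port to back a single container.
# build_port_request composes the JSON body for Neutron's port-create
# API; the caller would POST it to /v2.0/ports and then attach the
# returned address to the container's interface. Values are examples.
def build_port_request(network_id, container_name):
    return {
        "port": {
            "network_id": network_id,
            "name": "container-%s" % container_name,
            # Tag the port so operators can tell it backs a container
            # (hypothetical device_owner convention).
            "device_owner": "container:magnum",
        }
    }
```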
18:41:40 <hongbin> daneyon: if a neutron bay is not possible, we will figure it out and switch to libnetwork at that time
18:42:13 <adrian_otto> ok, I want to take a step back and think like a user
18:42:24 <daneyon> ok
18:42:31 <adrian_otto> in Vancouver we all agreed that our users want the ability to use native API's
18:42:41 <daneyon> agreed
18:42:55 <adrian_otto> and users should have the ability to specify the IP addresses used by their containers
18:43:16 <daneyon> adrian_otto i think that's 1 feature some users want
18:43:29 <adrian_otto> some want it to just work
18:43:37 <s3wong> hongbin: yeah, I remember two meetings ago (the first meeting), we talked about using Neutron network as a way to represent network for containers
18:43:39 <adrian_otto> some want to specify
18:43:40 <daneyon> right
18:44:08 <adrian_otto> so for those who want to specify, we want the user experience to be the same as if they were using the same native API *outside* of an OpenStack cloud
18:44:13 <daneyon> i think most users will never even touch the network_backend and just use the default of the coe
18:44:39 <adrian_otto> agreed that most prefer the default for this very reason
18:45:09 <adrian_otto> so the command I use to create and start (run) a container is the same when I do it locally, and when I do it in the cloud
18:45:11 <daneyon> adrian_otto to your point, that is why i did not include having magnum wrap container networking operations.
18:45:29 <adrian_otto> excellent, glad this is not a point of debate
18:45:58 <adrian_otto> that was my point, I'm done now.
18:46:01 <daneyon> I think it can become difficult to support the different container networking implementations.. it would be easier if we only supported libnetwork
18:46:50 <s3wong> daneyon: I thought adrian_otto stated above that not supporting Flannel is unacceptable?
18:47:01 <daneyon> adrian_otto so the network spec tried to minimize any changes to the user experience
18:47:04 <hongbin> What I don't like to integrate with libnetwork is that it is heavy
18:47:10 <eghobo> is libnetwork part of Open Container Initiative?
18:47:20 <daneyon> users that deploy containers in magnum today can continue in the same fashion
18:47:23 <adrian_otto> s3wong: I do maintain that our k8s bay type must use flannel
18:47:36 <adrian_otto> but I don't care if flannel does not work with every bay type
18:47:48 <adrian_otto> I'd prefer it, but I'm willing to compromise there
18:47:49 <daneyon> users that want to get fancy, can specify the network_backend and specify add'l config options for that backend
18:48:08 <suro-patz> adrian_otto: what is the reason for depending on flannel for k8-bay
18:48:11 <daneyon> eghobo it is not
18:48:28 <adrian_otto> I feel strongly about having proper support for flannel in our k8s bay because that's how people use k8s, and I don't want to be in a position where we are the only ones who do it differently.
18:48:30 <suro-patz> from what I understand, k8s can do multi-host networking of its own also
18:48:43 <daneyon> until libnetwork is managed outside of Docker, I don't see goog/coreos using the tech
18:48:53 <adrian_otto> suro-patz: my understanding is that k8s uses Flannel for that.
18:49:13 <daneyon> suro-patz k8s within gce has its own network model and tooling
18:49:15 <suro-patz> nope, k8s allocates a /24 network per node
18:49:31 <daneyon> flannel copied gce's network model.
18:49:47 <suro-patz> and does multi-host networking without flannel
18:49:47 <eghobo> suro-patz: I believe it's GCE specific
18:50:30 <suro-patz> I thought when we discussed this in summit, we had concluded that flannel just crept into the bay definition, without any particular intention
18:50:44 <adrian_otto> eghobo: I think you're right. There's what k8s does, and there's what GCE does.
18:51:13 <adrian_otto> I am pretty sure the user experience is consistent in both cases, but the implementation is different.
18:51:18 <daneyon> flannel is def the way k8s does multi-host networking
18:51:24 <eghobo> suro-patz: there's vxlan support as well, but you need OVS for that
18:51:50 <daneyon> you can use other tech for k8s, but flannel is the preferred atm
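(Editor's note: the "/24 per node" model debated above is the same subnetting scheme in both the GCE and flannel cases: a cluster-wide range is carved into fixed-size per-node subnets. A rough sketch, with the lease bookkeeping (etcd, in flannel's case) omitted and the CIDR chosen as an example.)

```python
import ipaddress

# Rough sketch of per-node subnet allocation as described above:
# carve a cluster-wide range into /24s, one leased to each node.
# Real systems track which node holds which lease; that is omitted.
def node_subnets(cluster_cidr, subnet_len=24):
    return list(
        ipaddress.ip_network(cluster_cidr).subnets(new_prefix=subnet_len))

subnets = node_subnets("10.244.0.0/16")
# The first two nodes would get 10.244.0.0/24 and 10.244.1.0/24.
```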
18:52:25 <adrian_otto> so have we reached a point of clarity that Magnum's preferred networking solution for k8s will be flannel?
18:52:26 <s3wong> Kubernetes does have OVS driver
18:52:47 <hongbin> adrian_otto: +1
18:53:00 <daneyon> +1 to continue having flannel be the k8s networking default
18:53:18 <suro-patz> I think we should bank more on libnetwork for docker than being dependent on a particular remote plugin
18:53:24 <eghobo> +1 for flannel, at least today
18:53:28 <s3wong> adrian_otto: yes, based on your point that Magnum wants to provide consistent user experience. Seems like flannel is the way to go there
18:53:31 <Tango> +1 for flannel
18:53:32 <adrian_otto> ok, so as hongbin mentioned, we could have the k8s bay template do essentially what it does now, and the other bay types could use libnetwork instead.
18:53:42 <daneyon> if others agree, then we are back to square 1.
18:54:10 <adrian_otto> our default driver for libnetwork could be a driver for neutron once that becomes available and is judged as stable enough.
18:54:30 <suro-patz> +1
18:55:04 <adrian_otto> so in that case the Swarm bay and the Mesos bay would both use libnetwork in their respective templates. Did I understand the proposal clearly?
18:55:17 <hongbin> I have concerns that libnetwork-neutron integration is going to be heavy
18:55:30 <s3wong> adrian_otto: yes, I think we shouldn't have multiple low level networking drivers for different projects in OpenStack
18:55:32 <adrian_otto> we can still allow for labels to pass through configuration specifics
18:55:36 <daneyon> then we lock container networking implementations to a COE instead of at the project level.
18:56:01 <hongbin> daneyon: +1
18:56:02 <daneyon> i would like users to have choice that is independent.
18:56:02 <adrian_otto> daneyon: that may give us more freedom down the road
18:56:19 <suro-patz> daneyon: +1
18:56:26 <Tango> Sounds right
18:56:57 <suro-patz> container networking should be in the COE and it may use some remote driver from OpenStack
18:57:09 <eghobo> daneyon: +1
18:57:15 <daneyon> if we lock the container networking implementation, then I would suggest supporting 1 for the entire magnum project. we can either make flannel work with swarm or we contribute upstream to get flannel into libnetwork
18:57:50 <s3wong> daneyon: how much effort would it be to get flannel into libnetwork?
18:57:58 <adrian_otto> we are approaching the end of our scheduled time
18:57:59 <s3wong> (I do realize we only have three minutes)
18:58:42 <eghobo> adrian_otto: do you think CoreOS folks will be fine with libnetwork flannel integration?
18:58:56 <adrian_otto> eghobo: we can ask them
18:58:59 <daneyon> s3wong not sure atm
18:59:01 <adrian_otto> I can ask.
18:59:38 <adrian_otto> time to end
18:59:53 <daneyon> eghobo i brought up the question yesterday in #coreos-dev and it didn't go anywhere
18:59:53 <s3wong> Thanks, guys! Good discussions
19:00:02 <daneyon> thx everyone
19:00:04 <daneyon> #endmeeting