18:03:36 #startmeeting container-networking
18:03:37 Meeting started Thu Jul 16 18:03:36 2015 UTC and is due to finish in 60 minutes. The chair is daneyon_. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:03:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:03:40 The meeting name has been set to 'container_networking'
18:03:47 suro-patz thx
18:03:51 hello again :-)
18:03:57 #topic roll call
18:04:02 o/
18:04:05 o/
18:04:06 o/
18:04:09 o/
18:04:15 o/
18:04:19 o/
18:04:33 o/
18:05:11 thx all for joining
18:05:15 #topic Continue the collaborative brainstorm session in etherpad
18:05:24 #link https://etherpad.openstack.org/p/magnum-native-docker-network
18:05:53 I have spent a good amount of time the last few days in the proposed change section
18:06:23 Let's spend the next 15 min reviewing the Problem description and proposed change sections
18:07:04 Please review Example 2 in the Problem description section
18:08:35 please also review the Design Principles, alternatives and data model sections
18:08:44 I just started on the data model impact
18:09:27 Those with a longer history with Magnum, I could really use your input on how the proposed changes will affect the data model
18:10:54 o/
18:11:25 the CIDR used in network_backend, is it going to be public or private?
18:11:56 daneyon_
18:12:34 we should provide an option to create a backend_network object/instance
18:12:59 and refer the same during baymodel creation
18:13:40 suro-patz can u pls elaborate on the object/instance?
18:13:58 it may be better to do so in the ep
18:14:25 I meant we could have 'magnum network_backend-create' with various options of KV pairs
18:14:48 that way we can use the same backend, if required, to create various baymodels
18:15:58 suro-patz take a look at where i talk about labels in the proposed changes section
18:15:58 You mean something like a model to represent the backend network, that can be shared/reused?
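The reusable backend object floated above (a hypothetical `magnum network_backend-create` taking KV pairs, with the same backend referenced by several baymodels) might look roughly like this sketch. All class and field names here are invented for illustration and are not Magnum's actual data model:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class NetworkBackend:
    """Hypothetical shared backend definition: a name plus KV options."""
    name: str
    options: dict = field(default_factory=dict)


@dataclass
class BayModel:
    """Hypothetical baymodel that references a shared backend instance."""
    name: str
    coe: str
    network_backend: Optional[NetworkBackend] = None


# One backend definition reused across several baymodels, as suro-patz suggested.
flannel = NetworkBackend("flannel", {"network_cidr": "10.0.0.0/16"})
dev = BayModel("k8s-dev", coe="kubernetes", network_backend=flannel)
prod = BayModel("k8s-prod", coe="kubernetes", network_backend=flannel)
```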
18:16:12 Tango|2
18:16:31 all, I am still debating whether to add the network_backend or just use labels to pass everything
18:17:05 I think network_backend makes sense
18:17:17 if we instantiate it becomes clear compartmentalization by individual instance and reference
18:17:30 then labels are used to pass in metadata/config to the backend
18:17:50 labels can be used for other purposes for similar reasons outside of network_backend
18:18:01 else to access the flannel params of a baymodel, baymodel will be the container of all the attributes
18:18:23 it seems like labels are becoming the standard way to pass md to docker daemon/containers/images, k8s, etc..
18:18:59 daneyon_: It is better to design the label for general purpose
18:19:09 i think so
18:19:22 For example, it can be used by the whole template, not just network
18:19:43 i wonder if general purpose labels should be a different spec that the network spec depends on????
18:19:59 Maybe a different BP
18:20:07 Sounds like it if we want to generalize
18:20:12 hongbin right. I called that out in the changes section of the ep spec.
18:20:46 I would like labels to be a top level object that is utilized by the entire magnum system
18:21:35 So to use labels, the keys would be well-defined constants documented somewhere?
18:22:03 the proposed changes section identifies 6 major changes
18:22:22 pls review 1-6 and let me know what you think
18:22:29 am i missing something?
18:22:31 To me the modeling looks very similar to the way we have it for --coe kubernetes
18:23:07 I will pull out labels, create a separate bp and have the net spec refer to the labels bp as a dependency
18:23:26 sounds good
18:24:14 Another question, are we going to have another network_backend called neutron, or not?
18:24:23 without labels, we could support multiple network backends but just not be able to configure the backends until labels is implemented
18:25:01 hongbin at this point I am unsure.
18:25:31 daneyon_: K.
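On the question above about label keys being well-defined documented constants: one general-purpose shape is a documented set of known keys that creation-time validation checks submitted labels against. The key names below are illustrative only, not taken from the spec:

```python
# Documented set of well-known label keys (names are illustrative only).
KNOWN_LABEL_KEYS = {
    "flannel_network_cidr",
    "flannel_backend",
    "flannel_network_subnetlen",
}


def unknown_label_keys(labels):
    """Return the sorted label keys that are not in the documented set,
    so callers can warn about or reject unrecognized labels."""
    return sorted(set(labels) - KNOWN_LABEL_KEYS)
```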
It may need a bit of research to figure it out
18:25:39 i see container networking and openstack networking as 2 entities and I am still trying to identify the integration point
18:25:51 hongbin: what does network_backend called neutron represent? An integration point?
18:26:09 there is 1 example that i have been thinking of that points to neutron integration
18:26:13 vlans
18:26:17 daneyon_: there's a neutron networking sub-project called Kuryr that's been proposed for integration
18:26:24 there is a neutron bp to implement vm-aware vlan's
18:27:14 if we want to map these vm vlan's to containers to extend the vlan-based isolation into the container, then we will need the container network to integrate with the neutron network
18:27:35 https://github.com/celebdor/neutron-docker-plugin
18:27:37 however, I don't see vlan-based isolation being that important
18:27:57 because of vxlan and other preferred overlays/isolation mechanisms
18:28:40 SourabhP thx for sharing
18:28:53 #link https://github.com/celebdor/neutron-docker-plugin
18:29:20 SourabhP can u pls add it to the ep with a brief description?
18:29:44 SourabhP i'll spend time diving in after the meeting, OK?
18:30:17 daneyon_: yes, will do. The relevant discussion happened on the openstack-dev ML.
18:30:21 SourabhP do you want to take a few min and discuss the project?
18:30:50 SourabhP it would be helpful to add a pointer to the ML discussion in the ep too
18:31:11 daneyon_: I haven't studied it in much detail yet. I will add the relevant links.
18:31:23 OK, thx
18:32:05 #action everyone on the subteam to review the neutron-docker-plugin project
18:33:08 seems to only have documentation but no code
18:33:24 I could really use some help from someone to understand the Data model impact from network_backend
18:33:47 yup
18:34:08 I believe Antoni has been on the magnum irc channel
18:35:04 this is the home stretch to submit the spec
18:35:27 I will be spending most of my time completing the initial draft and submitting it.
18:35:46 The data model makes sense to me.
18:36:02 #action danehans to submit draft network spec on July 22, 2015
18:36:16 Just one point, we don't have to use different heat templates for different network backends
18:36:32 hongbin should i elaborate on anything?
18:37:00 We may use a single heat template for all network backends.
18:37:41 pls look at line 250, where SourabhP discussed registering network_backends
18:38:08 again, this could be a separate bp, as I could see the need for magnum to have an overall plugin registration/mgt process
18:39:54 making networking in M pluggable is going to have an impact on the heat templates
18:40:09 for example, flannel can no longer be embedded
18:40:15 no
18:40:57 Actually I am thinking of the integration point
18:41:17 i see cloud-config is being used to explicitly set up flannel and flannel is embedded in the TL k8s yml's
18:41:18 An option is a Heat resource to abstract out different network backends
18:41:35 does anyone see any issues with making networking pluggable within the heat templates?
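The single-template idea above could work by feeding backend-specific parameters into one Heat template instead of forking a template per backend. A rough sketch of that parameter-merging step, with invented parameter names (these are not the real Heat template inputs):

```python
# Default Heat parameters per network backend (names invented for illustration).
BACKEND_DEFAULTS = {
    "flannel": {"network_driver": "flannel", "flannel_backend": "udp"},
    "none": {"network_driver": "none"},
}


def heat_params_for(backend, labels=None):
    """Merge a backend's default parameters with user-supplied label overrides,
    producing the parameter dict passed to the single shared Heat template."""
    if backend not in BACKEND_DEFAULTS:
        raise ValueError("unknown network backend: %s" % backend)
    params = dict(BACKEND_DEFAULTS[backend])
    params.update(labels or {})
    return params
```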
18:41:56 daneyon_ +1
18:42:14 +1
18:42:19 This could avoid splitting templates for different backends
18:42:52 hongbin that is what i'm thinking too
18:43:19 It will probably take some experimenting to arrive at a good template design
18:43:32 all, pls add your ideas to 5 in proposed changes section
18:43:48 Tango|2 agreed
18:44:12 I think the best 1st step is removing flannel from the core k8s templates
18:44:19 this is similar to what Docker did with libnetwork
18:44:40 provide the same functionality, but in a separate library
18:44:52 Yep, that would be a good exercise to see how to refactor further
18:46:10 I would like to take a quick vote then
18:46:57 on separating flannel from core templates as one of the 1st steps
18:47:06 from the spec for #5
18:47:54 So does this imply having an option that's not flannel?
18:48:16 or just refactoring?
18:48:19 yes.. this is what the magnum networking model is all about
18:48:52 magnum should have the ability to support various container networking tools and techniques
18:49:11 this is the batteries-included-but-replaceable approach.
18:49:13 btw, kubernetes can do multihost networking without flannel, on its own, by allocating an individual subnet on each host and adding routes for other subnets
18:49:35 although flannel will be pulled from core k8s templates, it will continue to be the default
18:50:06 +1 for me then - sensible default that's able to be swapped out
18:50:15 daneyon_: do we have a need to keep flannel as default? As kube can do by itself
18:50:16 so if a user creates a new baymodel, specifies the coe=kubernetes and does not specify a network_backend, flannel will be chosen
18:50:19 make sense?
18:50:30 +1
18:50:31 suro-patz pls see above
18:50:47 +1
18:51:32 hongbin does the above make sense to you too?
18:51:43 I agree to refactor and make flannel optional, but I am saying it need not be default
18:51:48 daneyon_: Yes, definitely
18:52:00 Do we know if anyone has tried a different overlay network with Kubernetes?
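The default-selection rule stated above (coe=kubernetes with no network_backend specified yields flannel) reduces to a tiny function. The "none" fallback for other COEs and the "calico" example are my assumptions for illustration, not anything decided in the meeting:

```python
def pick_network_backend(coe, requested=None):
    """Sketch of the proposed default rule: an explicitly requested backend
    always wins; otherwise kubernetes bays default to flannel."""
    if requested:
        return requested
    if coe == "kubernetes":
        return "flannel"
    return "none"  # assumed fallback; the meeting did not settle non-k8s COEs
```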
18:52:05 OK, I think we are in agreement.
18:52:20 Does anyone -1 this approach?
18:52:25 pls speak up if so.
18:52:46 Otherwise, feel free to add to the spec sections in the etherpad
18:53:26 hongbin and all what do you think about having to register plugins?
18:53:57 If so, I think it may make sense to have it as a separate bp
18:54:31 daneyon_: don't follow sorry
18:54:41 I could see the network plugin framework being adopted as a larger effort across magnum, and other plugins for storage, etc. would follow the same registration process
18:54:46 +1 to separate bp - it sounds like there's other stuff going on there that may need to be aligned with other efforts
18:55:03 hongbin look at line 250 of the ep
18:55:25 sure, let's punt that to another spec (and when Magnum has a more generic framework)
18:55:46 i think registering plugins makes sense, i just think it should be a separate bp bc it has a wider impact than just the network spec
18:56:03 We should bring this up for discussion on the ML
18:56:11 will do
18:56:35 SourabhP do you want to start the plugin registration ML thread or would you like me to?
18:57:04 daneyon_: pls go ahead
18:57:34 #action danehans to start a ML discussion related to plugin registration and an overall plugin framework
18:57:41 we are winding down the meeting
18:57:59 i really appreciate everyone's participation
18:58:17 almost unrelated news: http://googlecloudplatform.blogspot.tw/2015/07/Containers-Private-Cloud-Google-Sponsors-OpenStack-Foundation.html
18:58:27 Do we need another meeting to finalize?
18:58:57 +1 or -1 to meet at the same bat time on the same bat channel next week?
18:59:05 +1
18:59:06 +1
18:59:08 +1
18:59:11 +1
18:59:11 +1
18:59:11 +1
18:59:12 +1
18:59:21 will do
18:59:36 let's keep in touch on the magnum irc channel
18:59:50 thank you for your time!
19:00:03 and see you all back here next week
19:00:15 #endmeeting