18:00:24 <daneyon_> #startmeeting container-networking
18:00:25 <openstack> Meeting started Thu Jul 30 18:00:24 2015 UTC and is due to finish in 60 minutes.  The chair is daneyon_. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:29 <openstack> The meeting name has been set to 'container_networking'
18:00:36 <daneyon_> #topic roll call
18:00:39 <adrian_otto> Adrian Otto
18:00:48 <daneyon_> Daneyon here
18:01:30 <hongbin> o/
18:01:34 <daneyon_> Lets wait a couple minutes for others to join.
18:01:34 <s3wong> o/
18:01:59 <eghobo> o/
18:02:58 <daneyon_> #topic Review Networking Spec Submission/Feedback
18:03:04 <daneyon_> #link https://review.openstack.org/#/c/204686/
18:03:20 <daneyon_> We had a ton of feedback, mainly from the neutron community
18:03:35 <daneyon_> Lots of -1's from the neutron community
18:04:33 <daneyon_> It appears that mestery (neutron ptl) would +1 the spec if we did away with the network_backend abstraction and supported libnetwork and libnetwork only
18:04:46 <daneyon_> this would align us with the kuryr project
18:05:01 <daneyon_> #link https://github.com/openstack/kuryr/
18:05:41 <daneyon_> The biggest question mark is supporting flannel. flannel is not a libnetwork remote driver
18:06:05 <daneyon_> I have found that it should be possible to have flannel work with libnetwork
18:06:10 <adrian_otto> daneyon_: I think there is a middle ground
18:06:22 <daneyon_> flannel would use the libnetwork native bridge driver
18:06:23 <adrian_otto> we could state an intent to use libnetwork to the extent possible.
18:06:38 <adrian_otto> in the case of flannel, we only have the option of integrating using the bridge interface
18:06:41 <eghobo> daneyon_: how "serious" is the kuryr project? sorry, but there are no sources or documentation
18:07:07 <daneyon_> long-term someone could create a flannel remote driver; it would still use the native bridge driver for L2. This is because flannel is an L3-only solution
18:07:13 <adrian_otto> so that will need to be an exception until someone creates a viable solution to the absence of a libnetwork remote driver for flannel
18:08:05 <daneyon_> adrian_otto the solution may currently exist. IMO it's just a matter of validating that flannel can work with libnetwork's native bridge driver.
18:08:18 <daneyon_> if it does not pan out, then i agree that there needs to be an exception
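For context, the flannel-plus-native-bridge arrangement being validated works roughly as follows: flannel writes its per-host subnet lease to `/run/flannel/subnet.env`, and the docker daemon's built-in bridge driver is then started with `--bip`/`--mtu` taken from that file. The file path and `FLANNEL_*` variable names follow flannel's documented conventions; the helper function itself is a hypothetical illustration, not code from either project.

```python
# Sketch: turn flannel's subnet.env contents into docker daemon flags
# so containers on the native bridge land inside flannel's subnet.
# (Illustrative helper; file path and variable names follow flannel's
# conventions, the function is not part of flannel or docker.)

def docker_bridge_flags(subnet_env: str) -> list[str]:
    """Parse subnet.env text and return dockerd bridge-driver flags."""
    env = dict(
        line.split("=", 1)
        for line in subnet_env.splitlines()
        if "=" in line and not line.startswith("#")
    )
    flags = []
    if "FLANNEL_SUBNET" in env:
        flags.append("--bip=" + env["FLANNEL_SUBNET"])  # bridge IP/CIDR on this host
    if "FLANNEL_MTU" in env:
        flags.append("--mtu=" + env["FLANNEL_MTU"])     # MTU adjusted for the backend
    return flags

example = "FLANNEL_SUBNET=10.5.72.1/24\nFLANNEL_MTU=1450\n"
print(docker_bridge_flags(example))
# the docker daemon would then be launched with these flags appended
```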
18:08:51 <adrian_otto> ok, who can perform that validation, and on what timeframe?
18:09:22 <daneyon_> eghobo i believe the kuryr project has good intentions. It's super new... my issue is that code is dropping without any detailed design specs. I don't like that approach.
18:09:57 <daneyon_> however, i'm happy to see the neutron community addressing container networking.
18:10:09 <adrian_otto> daneyon_: does kuryr at least have regular team meetings?
18:10:10 <daneyon_> adrian_otto i am going to validate it
18:10:47 <daneyon_> i would be further along, but my lab was moved and i have been dealing with lab changes slowing me down, and i was also part of the kolla midcycle.
18:11:11 <daneyon_> i expect to finalize the validation and do a write-up by the end of next week
18:11:16 <sdake> daneyon_ rocked the midcycle btw ;)
* sdake thanks daneyon_ heartily :)
18:11:31 <adrian_otto> it does not appear in the wiki at all
18:11:40 <daneyon_> adrian_otto i have yet to see any details about kuryr meetings.
18:11:54 <daneyon_> if i don't see something soon, i will contact Gal to get the details
18:12:11 <adrian_otto> well look, we can't take a nonexistent thing seriously
18:12:14 <daneyon_> #action danehans contact Gal to get Kuryr meeting details.
18:12:24 <daneyon_> sdake happy to help.
18:12:55 <adrian_otto> if it's not an openstack team, has no specs, has no wiki page, does nothing yet, and just a code repo, then we can't be expected to take a dependency on it.
18:13:01 <daneyon_> adrian_otto does the eval time period i provided work for you?
18:13:14 <adrian_otto> so my guidance is to participate as it evolves, and advance with our plans in parallel
18:13:22 <sdake> we are not taking a dep on a stackforge project - I will -2 any such review imo :)
18:13:55 <sdake> we can have further discussions at our normal team meeting if someone wants to try to change my mind :)
18:14:10 <adrian_otto> daneyon_: I'm satisfied with the proposed timeframe. I'm actually not concerned about when we finish. What I care most about is that we are clear about delegating the work, and tracking it to completion.
18:14:19 <daneyon_> adrian_otto agreed. i don't plan to depend on the project anytime soon. If we can make the neutron community happy by losing the network_backend abstraction by standardizing on libnetwork, it may be worth our while to do the flannel/libnet eval
18:14:23 <hongbin> I also don't like depending on kuryr until it is mature
18:14:35 <daneyon_> my issue with standardizing on libnet goes beyond neutron/kuryr
18:14:44 <sdake> whether a project is mature or not is not a deal-killer for me hongbin, it's the namespace
18:14:52 <daneyon_> i don't think kuryr should standardize on libnet
18:15:07 <daneyon_> and i will continue to let that be known in that community
18:15:09 <hongbin> sdake: kuryr is already under openstack namespace
18:15:25 <sdake> oh ok, my objection is not valid then :)
18:15:34 <daneyon_> until there is a container networking industry standard, i think it's not wise to do so.
18:15:40 <adrian_otto> sdake: it was inserted without community discussion
18:16:03 <daneyon_> especially when k8s has a pluggable networking model, and I think COEs are where the long-term value is
18:16:05 <sdake> ack, I just want people to know where i stand with dependencies, and a stackforge project as a dep I am -2 on
18:16:08 <adrian_otto> I can summarize the process as an executive action by the Neutron PTL, to which there was no timely objection
18:16:41 <sdake> hindsight is 20/20, but there should at least have been a core reviewer vote
18:17:16 <adrian_otto> if Neutron wants to make a docker driver for OpenStack networking, that's their prerogative
18:17:17 <daneyon_> adrian_otto understood. if the eval goes as planned, we should have a good understanding of the implementation details. I'll create bp's for each and work with others to divide-up the tasks and track to completion.
18:17:31 <adrian_otto> but they can't expect us to standardize on that without talking with us about it first.
18:17:45 <adrian_otto> or showing any form of written plan
18:18:23 <daneyon_> sdake agreed. even if we standardize on libnetwork, we will not depend on kuryr anytime soon. btw kuryr went straight into the big tent?
18:18:23 <adrian_otto> daneyon_: tx!
18:18:46 <adrian_otto> daneyon_: This is actually a gap in our governance process
18:18:56 <sdake> daneyon_ the big tent permits a PTL to insert new repos related to their project
18:19:08 <adrian_otto> I don't think you should be allowed to just grab scope like that without any email to the dev list, or any prior discussion with stakeholders
18:19:11 <sdake> I think there is probably a discussion to be had about how that should happen
18:19:41 <sdake> the thing is we trust our ptls to "do the right thing"
18:19:42 <adrian_otto> yes, that's something I'm planning to raise with the TC, because it goes against our community values.
18:19:48 <daneyon_> if we standardize on libnetwork, then we will be in alignment with kuryr w/o depending (atm) on their code. it's just making sure both projects are marching in the same direction.
18:19:51 <sdake> but there is no written formality around what the right thing is
18:20:06 <sdake> adrian_otto ya - it depends on if the right thing is done/not done
18:20:07 <adrian_otto> in this case, we really needed to have a discussion before that governance review was merged
18:20:14 <daneyon_> sdake ack
18:20:36 <adrian_otto> FTR this subteam has been operating in the correct way.
18:21:13 <adrian_otto> this == containers_networking subteam
18:21:15 <sdake> which subteam adrian_otto
18:21:26 <sdake> thanks ;)
18:21:34 <daneyon_> adrian_otto would you be willing to own an action of requesting the neutron ptl to create a kuryr design spec?
18:21:50 <daneyon_> I asked in the review and got no feedback
18:21:53 <sdake> daneyon_ what adrian_otto said he would do is take it up with the tc
18:21:54 <adrian_otto> from what I can tell kuryr has not organized an openstack team yet.
18:22:02 <daneyon_> sdake ack
18:22:12 <daneyon_> i'm bringing up a different request
18:22:14 <sdake> which I think is an appropriate solution
18:23:00 <daneyon_> IMO it's important that Kuryr provide technical details. As of today, the project has a few sentences describing what it is and that's it.
18:23:01 <adrian_otto> daneyon_: yes. Assign me an action item to request a kuryr design spec
18:23:59 <daneyon_> #action adrian_otto To formally request that the Neutron/Kuryr PTL submit a Kuryr design spec.
18:24:05 <adrian_otto> tx
18:24:14 <daneyon_> thx
18:24:46 <daneyon_> any other details related to the magnum net spec that we should discuss?
18:25:25 <daneyon_> ok, then let's move on
18:25:33 <daneyon_> #topic open discussion
18:25:43 <daneyon_> I have been diving into libnetwork, primarily the remote drivers code.
18:25:47 <daneyon_> #link https://github.com/docker/libnetwork/tree/master/drivers/remote
18:26:16 <daneyon_> ^ I want to make sure i understand libnet, especially the remote driver code, in great detail.
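The remote driver code being studied implements a small JSON-over-HTTP protocol: docker POSTs to well-known endpoints (`/Plugin.Activate`, `/NetworkDriver.GetCapabilities`, `/NetworkDriver.CreateNetwork`, ...) and the plugin replies with JSON. The endpoint names below follow libnetwork's remote driver API as documented in its repo; the dispatcher and payload handling are a toy sketch, not a working driver.

```python
import json

# Minimal sketch of libnetwork's remote-driver JSON dispatch.
# Endpoint paths mirror libnetwork's remote API; the handlers are
# illustrative stubs, not a functional network driver.

networks = {}  # NetworkID -> options, tracked on behalf of libnetwork


def activate(_body):
    # Handshake: tell docker which plugin APIs this process implements.
    return {"Implements": ["NetworkDriver"]}


def get_capabilities(_body):
    # "local" scope: the driver only manages state on this host.
    return {"Scope": "local"}


def create_network(body):
    # libnetwork hands us an opaque NetworkID; remember its options.
    networks[body["NetworkID"]] = body.get("Options", {})
    return {}


HANDLERS = {
    "/Plugin.Activate": activate,
    "/NetworkDriver.GetCapabilities": get_capabilities,
    "/NetworkDriver.CreateNetwork": create_network,
}


def dispatch(path, raw_body):
    """Route one POSTed JSON request the way the plugin's HTTP server would."""
    handler = HANDLERS.get(path)
    if handler is None:
        return json.dumps({"Err": "unhandled endpoint: " + path})
    return json.dumps(handler(json.loads(raw_body)))
```

A real plugin serves these handlers over a unix socket that docker discovers under its plugin directory; join/leave endpoints for endpoints and sandboxes follow the same request/response shape.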
18:26:23 <daneyon_> I am also starting to see Kuryr code drop and have been reviewing the initial commits.
18:26:24 <eghobo> daneyon_: just curious why you are against taking the libnetwork model
18:26:30 <daneyon_> #link https://github.com/openstack/kuryr/commits/master
18:26:56 <daneyon_> eghobo I am for the libnetwork model. i'm just against standardizing on it atm.
18:27:16 <daneyon_> personally, i don't like standardizing on tech that's not a standard.
18:27:21 <daneyon_> what do others think?
18:27:29 <hongbin> +1
18:27:41 <daneyon_> again, especially since k8s has its own pluggable network model/implementation.
18:27:42 <adrian_otto> we don't need to standardize, but we can state an intent to use something
18:27:51 <eghobo> personally I agree with Kyle, and the libnetwork model looks right (no surprise, Docker has a strong networking team)
18:27:59 <adrian_otto> and if that turns out not to meet our needs, we will adjust expectations and change direction
18:28:06 <hongbin> +1 adrian_otto
18:28:47 <adrian_otto> we can merge a spec, and then have subsequent changes proposed against it.
18:28:51 <eghobo> we should agree that there are not so many standards in container world now ;)
18:29:03 <adrian_otto> we could even anticipate that and put a version number in it
18:29:12 <daneyon_> adrian_otto understood. If we intend to use libnetwork and intend to support k8s pluggable net, then that's where things get hairy and the neutron team is unhappy.
18:29:35 <daneyon_> because i think we would need to implement an abstraction such as network_backend
18:29:46 <adrian_otto> we don't have a compulsion to support k8s pluggable yet
18:29:55 <adrian_otto> so let's cross that bridge when we come to it
18:29:57 <daneyon_> if we don't then trying to support both could get messy
18:30:22 <daneyon_> adrian_otto good point
18:30:31 <adrian_otto> think of this as a sequence
18:30:47 <adrian_otto> we state what we expect the long term vision to be, and step 1 toward it
18:31:03 <adrian_otto> and set expectations that we could revise the direction or the vision based on what we learn
18:31:28 <daneyon_> adrian_otto until i can validate that flannel can work with libnet's native bridge driver, i want to pause the spec. then we can update it based on the results of the eval
18:31:39 <adrian_otto> that's totally appropriate
18:32:05 <adrian_otto> I suggest that you toggle the review to WIP
18:32:18 <daneyon_> adrian_otto agreed, we'll cross that bridge later. i just want the subteam to know where my head is at.
18:32:19 <adrian_otto> and just put a comment on the end to expect a revision
18:32:43 <daneyon_> adrian_otto I'll make the changes to the spec after our meeting
18:32:50 <adrian_otto> tx
18:33:14 <daneyon_> #action danehans to update network spec review to WIP and add comment to expect a revision
18:34:22 <daneyon_> adrian_otto the kuryr commits i linked earlier are one of the reasons why i am asking for a kuryr design spec
18:34:33 <daneyon_> It appears the Kuryr code is modeled on calico
18:34:38 <daneyon_> #link https://github.com/Metaswitch/calico-docker
18:35:16 <daneyon_> i'll give everyone 5 minutes for a quick review of the links
18:35:37 <daneyon_> Let me know if you have any questions, concerns, ideas, etc..
18:35:54 <eghobo> daneyon_: what's your opinion about calico?
18:36:18 <daneyon_> eghobo I really like their approach to container networking.
18:36:31 <daneyon_> no overlays, a router on each host
18:36:40 <daneyon_> uses bgp for routing
18:36:54 <daneyon_> i'm a big fan of bgp since it scales
18:37:11 <daneyon_> i've never been much for overlays
18:37:20 <eghobo> I was always scared of bgp in the dc, but maybe ;)
18:37:49 <daneyon_> network engineers understand tcp/ip, routing protocols (i.e. bgp) and calico seems to hit the spot there
18:38:08 <daneyon_> eghobo why?
18:39:36 <eghobo> mostly because I don't know it too well :( and one simple mistake can kill all traffic
18:40:09 <eghobo> but facebook and fastly folks think it's a good idea
18:40:58 <adrian_otto> who is working on Calico?
18:41:15 <adrian_otto> FB+Fast.ly?
18:41:33 <daneyon_> eghobo there is a fair bit of a learning curve with bgp. I think you can make a lot of different operational mistakes that can cause huge problems in a data center. fortunately bgp has been operating in dc's and the internet for a long time and the ops folks have it down. bgp also has preventative measures for reducing mistakes.
18:41:40 <adrian_otto> the description of the approach looks pretty compelling
18:41:54 <daneyon_> Metaswitch is behind Calico
18:42:21 <daneyon_> Some of their team have been involved in libnetwork from early on.
18:42:43 <daneyon_> adrian_otto I agree, I like their approach
18:43:15 <daneyon_> and hopefully we can support calico as a libnetwork driver when we get past this magnum networking design phase
18:43:34 <suro-patz> this approach is similar to that of distributed routing / vrouter / OpenContrail
18:44:35 <daneyon_> suro-patz It seems to be a design approach that several vendors are starting to get behind.
18:44:58 <daneyon_> IMO because the overlay approach has issues
18:45:13 <suro-patz> daneyon_: are you proposing calico-plugin instead of kuryr, for neutron to connect to libnetwork?
18:46:52 <daneyon_> suro-patz I am proposing that we eval flannel with libnetwork's native bridge driver. If it works, then we can focus on libnetwork and not implement the network_backend abstraction... at least initially. as adrian_otto stated, we could end up going down that road but it's not critical atm.
18:47:20 <daneyon_> if the eval goes as planned, i'll update the spec with my findings and modify the implementation details accordingly.
18:48:10 <daneyon_> then i'll create separate bp's for each implementation detail and work with the community to divide up the tasks, track them to completion and celebrate with a bottle of wine when we complete them all ;-)
18:48:16 <suro-patz> IMHO, what magnum networking team wants to achieve is integration of COE with OpenStack's networking mechanism, i.e. Neutron - Now what neutron uses to connect to libnetwork, is not a real problem for magnum
18:48:39 <suro-patz> we just want to have a default/prescribable plugin for neutron to do so
18:48:53 <daneyon_> suro-patz Calico is a libnetwork remote driver, so vendors should be able to easily add their driver if we do this right.
18:49:41 <daneyon_> My focus will be to make sure flannel works under this new model so we can use it for k8s and swarm coe's.
18:49:45 <suro-patz> daneyon_: Practically calico is a neutron plugin, so in my view a replacement for Kuryr
18:50:01 <suro-patz> similarly operators can use any of the SDN provider
18:50:22 <suro-patz> it can be plumgrid/contrail depending on what they have
18:50:49 <suro-patz> one thing to note is calico is a vendor plugin
18:51:28 <daneyon_> suro-patz atm i believe there is a separation between the container and cloud infra networking. I would like our focus to be on the container networking. that's why all the debate is related to libnetwork and flannel
18:51:49 <daneyon_> when container/cloud infra networking start to integrate, that's when the line will blur
18:52:39 <suro-patz> My understanding was magnum was trying to provide the integration platform for bridging cloud networking and container networking
18:52:43 <daneyon_> to your point though, with container networking in magnum, i want to focus on implementing flannel for k8s and either flannel or one of libnetwork's native drivers for swarm
18:53:08 <suro-patz> the way we have been providing identity/auto-scaling integration platform
18:53:09 <daneyon_> i want to sync the container networking default with the coe.
18:53:51 <suro-patz> looking forward to discussing in details at the MidCycle
18:54:12 <daneyon_> while making it easy for 3rd parties to add their libnetwork remote driver and make it easy for users to run containers, while allowing advanced users to perform advanced container networking functions
18:54:13 <adrian_otto> I would like to identify a volunteer to make a presentation on libnet and calico on day 2
18:54:18 <daneyon_> trying to strike a balance
18:54:23 <adrian_otto> possibly separate presentations on each
18:54:37 <adrian_otto> or identify a video we can watch as a team and discuss.
18:54:48 <daneyon_> adrian_otto I could do the ppt, but i will be remote
18:54:53 <adrian_otto> I'm referring to the Midcycle now
18:55:15 <daneyon_> instead of just calico, i would like to touch on each of the libnet remote drivers
18:55:36 <adrian_otto> I think that would be really helpful
18:55:55 <adrian_otto> we can find a way to make that work as a remote presenter
18:56:00 <daneyon_> adrian_otto understood. i can create and deliver the ppt. Unfortunately I will not be onsite for the midcycle
18:56:15 <daneyon_> adrian_otto should work just fine through webex
18:56:22 <daneyon_> i can be on video and share the ppt
18:56:55 <adrian_otto> ok, https://etherpad.openstack.org/p/magnum-liberty-midcycle-topics
18:57:08 <adrian_otto> let's juggle that around a bit to find the best time to fit that in on day 2
18:57:20 <daneyon_> we are down to our last few minutes
18:57:32 <daneyon_> Any parting questions, thoughts, etc?
18:58:16 <daneyon_> OK
18:58:25 <gangil> The midcycle meeting's location is not there on etherpad. Do you mind if new people interested in the project want to join in?
18:58:35 <daneyon_> I really appreciate everyone's participation.
18:58:47 <daneyon_> Have a great day!
18:58:59 <daneyon_> #endmeeting