15:02:58 <tmorin> #startmeeting bgpvpn
15:02:59 <openstack> Meeting started Tue Apr 11 15:02:58 2017 UTC and is due to finish in 60 minutes.  The chair is tmorin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:02 <openstack> The meeting name has been set to 'bgpvpn'
15:03:11 <pcarver> hi
15:03:18 <tmorin> hi pcarver!
15:03:35 <tmorin> I don't know who else might join this week
15:04:21 <tmorin> hi doude, timirnich, around for the bgpvpn IRC weekly meeting perhaps?
15:07:36 <tmorin> pcarver, it seems it's only the two of us
15:07:42 <pcarver> looks like it
15:07:52 <tmorin> I think matrohon is off this week
15:08:05 <pcarver> I pushed a new PS for the API def but we need someone else to review
15:08:30 <tmorin> yep, I'll ask matrohon to have a look (next week I guess)
15:09:26 <pcarver> If we don't have any other topics, I'd like to talk about multiple backend support
15:09:47 <tmorin> good topic
15:10:00 <pcarver> I had some discussions around Gluon/Proton last week and have been doing compare/contrast with the existing Neutron stadium projects
15:10:23 <tmorin> and I have another one worth discussing together I think: how to push people working on SDN controller to review the blueprint we have in the pipe...
15:10:26 <pcarver> One of the design criteria of Gluon/Proton is simultaneous support of multiple interoperable backends
15:10:43 <pcarver> e.g. for migration/upgrade scenarios or possibly other reasons
15:11:35 <tmorin> pcarver: yes, we are also concerned about what can help migrate from one solution to another
15:11:39 <pcarver> ML2 definitely supports hierarchical port binding, but I'm not sure how much other cases of multiple simultaneous controllers have been thought about
15:12:16 <tmorin> pcarver: even without the "hierarchy" part, ML2 supports having multiple drivers active at the same time
15:12:34 <tmorin> I'm, like you, unsure if there are other places
15:12:45 <tmorin> we have a blueprint to track this in BGPVPN
15:12:49 <pcarver> The Gluon demo from last year's OPNFV involved the multiple controllers peering with each other via BGP so that VMs could be created on the same network despite being managed by different controllers
15:13:12 <tmorin> https://blueprints.launchpad.net/bgpvpn/+spec/allow-multiple-drivers
15:13:58 <pcarver> Ok, that's good. Looking at it now.
15:14:15 <tmorin> I'm not sure it is the right starting point TBH
15:14:38 <pcarver> Gluon has had some architectural revision. At first, the idea was that the port would be used to look up *the* SDN controller.
15:14:51 <pcarver> But that doesn't support the equivalent of HPB
15:15:00 <tmorin> ah, we in fact have a bug for multiple driver support: https://bugs.launchpad.net/bgpvpn/+bug/1485515
15:15:01 <openstack> Launchpad bug 1485515 in networking-bgpvpn "multiple service drivers can't coexist" [Wishlist,Confirmed]
15:15:06 <pcarver> As I understand it, that has changed in the more recent architecture
15:16:05 <tmorin> this bug possibly isn't related to what you have in mind either
15:16:16 <pcarver> I'm reading it now
15:17:22 <bobmel> hi tmorin, pcarver: Sorry for my absence for a few meetings
15:17:32 <pcarver> If Contrail specifically proves challenging, we might look at running two ODLs side by side (perhaps different versions to simulate an upgrade scenario?) and see what challenges that might pose.
15:17:42 <tmorin> hi bobmel
15:17:50 <tmorin> np, you're welcome
15:17:59 <bobmel> I see you discuss multiple backend drivers
15:18:02 <tmorin> yep
15:18:30 <tmorin> pcarver: for BGPVPN, what about using the following very simple idea as a starting point:
15:19:09 <tmorin> having the bgpvpn service framework allow multiple simultaneously active drivers, and pass all information from API calls to all of them
15:19:57 <pcarver> Yes, I think that's basically what ML2 does.
15:20:15 <tmorin> each backend would then take care of the processing for the related Neutron ports which are under its control
15:20:22 <pcarver> Call all drivers and let the drivers determine if they can/should handle the event
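As an illustration of this "call all drivers" dispatch, here is a minimal Python sketch; the ServiceDriverManager class and its driver interface are hypothetical, not actual networking-bgpvpn code:

```python
# Hypothetical sketch of an "always-call-all-drivers" dispatch for the
# bgpvpn service plugin; class and method names are illustrative only.

class ServiceDriverManager(object):
    """Fan every BGPVPN API call out to all active drivers."""

    def __init__(self, drivers):
        # 'drivers' would be loaded from configuration, one per backend
        self.drivers = list(drivers)

    def call_all(self, method_name, *args, **kwargs):
        # Pass the API call to every driver; each backend then takes
        # care of the Neutron ports that are under its control.
        for driver in self.drivers:
            getattr(driver, method_name)(*args, **kwargs)

    def create_bgpvpn(self, context, bgpvpn):
        self.call_all('create_bgpvpn', context, bgpvpn)
```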
15:20:47 <tmorin> pcarver: well, not exactly, ML2 loops on all drivers, and one (only one) of them will be the binding driver
15:21:27 <pcarver> Are you sure about that? In the HPB case I'm pretty sure it can be multiple.
15:22:03 <pcarver> There was a demo at the Vancouver summit by folks from Cumulus and I discussed the demo with Sukhdev Kapur, who told me that Arista had implemented the same thing.
15:22:20 <tmorin> yes, but I'm trying to see if, for BGPVPN, just 'calling all drivers' would work without the "determine if they can/should handle the event" part
15:22:23 <pcarver> If I remember correctly it was for VLAN
15:22:43 <tmorin> yes, sorry, with HPB probably multiple
15:23:01 <pcarver> For certain BGPVPN events we would want to call multiple.
15:23:01 <tmorin> but without hierarchy only one I think
15:23:37 <tmorin> in fact, I don't see for which call we would *not* want to call all the drivers
15:23:56 <pcarver> Yes, actually I was just thinking that
15:24:52 <bobmel> For ml2 HPB, I think the binding continues until one of the drivers signals that the binding is complete
15:25:14 <tmorin> bobmel: ok
15:26:21 <tmorin> I think we can start a proposal around the simple "always-call-all-drivers" scheme
15:26:22 <bobmel> So for example the ToR driver can do its part of the binding, then the vswitch on the host, etc.
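For context, a rough sketch of the hierarchical binding bobmel describes, with a ToR-level driver continuing the binding and a host vswitch driver completing it; the driver classes are simplified illustrations, and the calls approximate ML2's PortContext API (continue_binding / set_binding):

```python
# Rough illustration of ML2 hierarchical port binding: binding continues
# down the hierarchy until one of the drivers signals it is complete.
# Calls approximate Neutron's PortContext API; helpers are simplified.

class TorSwitchMechanismDriver(object):
    """ToR-level driver: binds its level, then hands off (illustrative)."""

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            # Allocate a dynamic segment (e.g. a VLAN) for the next level
            # and continue the binding instead of completing it.
            next_segment = {'id': 'dynamic-segment-id',
                            'network_type': 'vlan'}
            context.continue_binding(segment['id'], [next_segment])
            return


class VswitchMechanismDriver(object):
    """Host vswitch driver: terminates the hierarchy (illustrative)."""

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            # Signal that the binding is complete at this level.
            context.set_binding(segment['id'], 'ovs',
                                {'port_filter': True})
            return
```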
15:26:36 <pcarver> An example use case would be creating a network that spans across multiple SDN controllers, using the route target to let the controllers know to exchange routes. Then ports bound to one controller could reach ports bound to a different controller.
15:27:08 <pcarver> This would be similar to the Gluon demo from last year.
15:27:20 <tmorin> bobmel: to handle transitioning or upgrade, for BGPVPN, I don't think we will need this kind of complex cooperation between drivers
15:27:42 <pcarver> They established BGP peering between multiple controllers. That allowed VMs handled by one controller to reach VMs handled by a different controller.
15:29:08 <tmorin> pcarver: yes, this can already be achieved between two OpenStacks with one L2 BGPVPN, we would just allow this to happen without requiring a different OpenStack for each SDN controller
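As an illustration of that interconnection, the sketch below creates an L2 BGPVPN whose route target matches the one configured on the other deployment and associates a network with it; the endpoint, token and network UUID are placeholders, and the paths follow the BGPVPN API extension:

```python
import requests

# Hedged sketch: create an L2 BGPVPN whose route target matches the one
# used on the other controller/deployment, so the two backends exchange
# routes over BGP, then associate a network with it. The endpoint,
# token and network UUID below are placeholders.

NEUTRON = 'http://controller-a:9696/v2.0'
HEADERS = {'X-Auth-Token': '<token>'}

resp = requests.post(
    NEUTRON + '/bgpvpn/bgpvpns',
    headers=HEADERS,
    json={'bgpvpn': {'name': 'interconnect',
                     'type': 'l2',
                     # same RT configured on both deployments
                     'route_targets': ['64512:1']}},
)
bgpvpn_id = resp.json()['bgpvpn']['id']

requests.post(
    NEUTRON + '/bgpvpn/bgpvpns/%s/network_associations' % bgpvpn_id,
    headers=HEADERS,
    json={'network_association': {'network_id': '<network-uuid>'}},
)
```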
15:31:07 <tmorin> the part that we still do not cover with this, but this is not specific to BGPVPN (and it's perhaps fair to keep it out of scope) is: how can the newly added controller know about things that were created/done before its driver was added
15:31:47 <pcarver> That's a good question
15:32:29 <tmorin> and I still don't have the whole picture / I'm not sure we can do something meaningful in BGPVPN alone without knowing what will happen for ports, networks and routers
15:32:47 <pcarver> Yes, I was thinking that as well.
15:33:02 <tmorin> we can make the assumption that ML2 is used (although we have to exclude Contrail...)
15:33:18 <pcarver> With HPB I would think ports may be at least somewhat covered, but networks and routers might need attention
15:33:40 <pcarver> There's an ML2 driver for Contrail, although not endorsed by Juniper
15:35:31 <tmorin> ok, could be an option
15:38:10 <tmorin> so we would have: on a given Neutron network, new ports get created on the new backend (whichever is given priority by the ML2 framework); they don't have connectivity to the previously created ports via this backend's base L2 functionality, but if this network is associated to a BGPVPN of type l2, then both the new and the old backend will do whatever is needed and the 'new' ports will get connectivity to the 'old' ones
15:38:22 <tmorin> is this the kind of idea you had, pcarver?
15:39:13 <pcarver> tmorin: maybe. I may need to think about this more.
15:40:13 <pcarver> I hadn't thought too carefully about the time aspect.
15:40:31 <pcarver> With backends being added to or removed from a longer-lived Neutron.
15:40:54 <pcarver> I'll discuss this with the Gluon team as well to see if anyone has thought about how that would work.
15:43:09 <tmorin> another possible thing would be to see if HPB would make sense: at the top of the hierarchy we would have a bgpvpn mech_driver talking with the bgpvpn service plugin, allocating an l2 bgpvpn for a network (new bgpvpnl2 type driver), and down the hierarchy, the ML2 drivers of each backend would do the rest of the work to connect the l2 bgpvpn to whatever they use internally to manage L2 ... that's probably harder to achieve than it sounds ...
15:43:45 <tmorin> to add a backend, you possibly need to replay old information to it, or have it sync back to neutron
15:44:12 <tmorin> to remove one, this is easier, you simply need to make sure first that it's not handling any port anymore
15:45:57 <tmorin> what I would tend to conclude from this discussion: we can add to n8g-bgpvpn an ability to support multiple drivers simultaneously with an "always-call-all-drivers" behavior; it seems it may help find interesting transition scenarios, but we aren't totally sure
15:46:43 <pcarver> I agree. That's a good place to start and see what works and what doesn't.
15:47:08 <tmorin> the behavior is not too complex to implement, except one thing: error handling (what to do when one driver raises an error and the others do not?)
15:47:34 <pcarver> I see potential convergence between Gluon and Neutron but we need to explore the boundaries to see if there are places that are clearly distinct.
15:47:45 <tmorin> but we could still start with something simple like: fail if any of the drivers fails
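One possible shape for that simple policy, extending the hypothetical dispatcher sketched earlier; this is only an illustration of "fail if any driver fails", with the rollback question tmorin raises left open:

```python
# Hypothetical "fail if any driver fails" policy for the multi-driver
# dispatch sketched earlier; names are illustrative only.

def call_all_or_fail(drivers, method_name, *args, **kwargs):
    errors = []
    for driver in drivers:
        try:
            getattr(driver, method_name)(*args, **kwargs)
        except Exception as exc:
            # Keep calling the remaining drivers, but remember failures;
            # rolling back the drivers that did succeed is the open issue.
            errors.append((driver, exc))
    if errors:
        raise RuntimeError('%d driver(s) failed on %s: %s'
                           % (len(errors), method_name, errors))
```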
15:48:13 <tmorin> ok
15:48:26 <tmorin> let's keep the ideas flowing and revisit the topic soonish
15:48:30 <pcarver> tmorin: Error handling is something that HPB and Gluon/Proton have to handle too, so I'll explore what the current status is.
15:48:43 <tmorin> pcarver: yes, good idea
15:50:17 <tmorin> the other thing I wanted to briefly discuss is:
15:51:05 <tmorin> we need feedback/participation on the blueprints we have in the pipe for: port associations (think hub'n'spoke) and fine-grained control of routing (static routes, local_pref and community control)
15:51:42 <tmorin> pcarver, bobmel: reviews welcome !
15:52:00 <tmorin> pcarver, bobmel: and if you can pass the word to people you know who are working on SDN controllers BGPVPN backend, that will help as well!
15:52:56 <pcarver> tmorin: will do. Got kind of swamped, especially with going to ONS last week, but trying to dig myself out.
15:53:33 <tmorin> :)
15:53:36 <tmorin> I understand
15:54:02 <tmorin> I'm unlucky to be so far from the US that I can't travel everywhere, but this gives time for other things as well :)
15:54:04 <pcarver> tmorin: I'm looking for the reviews
15:54:11 <pcarver> Are they open on Gerrit?
15:54:29 <tmorin> nope
15:54:35 <tmorin> launchpad blueprints
15:54:49 <pcarver> oh, ok. I'll take a look at Launchpad
15:54:55 <tmorin> https://blueprints.launchpad.net/bgpvpn/+spec/port-routes
15:55:06 <tmorin> https://blueprints.launchpad.net/bgpvpn/+spec/port-association
15:55:36 <tmorin> this one is not cleaned up yet
15:55:51 <tmorin> the discussion on local_pref and communities control does not have its own blueprint, but https://etherpad.openstack.org/p/bgpvpn_advanced_features has lots of things
15:56:00 <tmorin> I need to sort all of this a little better
15:57:43 <tmorin> ok...
15:57:47 <tmorin> done for today?
15:57:53 <tmorin> thanks pcarver, bobmel...
15:58:26 <tmorin> I'll be off next week, but you can still come, matrohon will possibly chair
15:58:37 <pcarver> sounds good. bye.
15:58:44 <tmorin> bye!
15:58:46 <tmorin> #endmeeting