15:02:58 #startmeeting bgpvpn
15:02:59 Meeting started Tue Apr 11 15:02:58 2017 UTC and is due to finish in 60 minutes. The chair is tmorin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:02 The meeting name has been set to 'bgpvpn'
15:03:11 hi
15:03:18 hi pcarver!
15:03:35 I don't know who else might join this week
15:04:21 hi doude, timirnich, around for the bgpvpn IRC weekly meeting perhaps?
15:07:36 pcarver, it seems it's only the two of us
15:07:42 looks like it
15:07:52 I think matrohon is off this week
15:08:05 I pushed a new PS for the API def but we need someone else to review
15:08:30 yep, I'll ask matrohon to have a look (next week I guess)
15:09:26 If we don't have any other topics, I'd like to talk about multiple backend support
15:09:47 good topic
15:10:00 I had some discussions around Gluon/Proton last week and have been doing compare/contrast with the existing Neutron stadium projects
15:10:23 and I have another one worth discussing together I think: how to push people working on SDN controllers to review the blueprints we have in the pipe...
15:10:26 One of the design criteria of Gluon/Proton is simultaneous support of multiple interoperable backends
15:10:43 e.g. for migration/upgrade scenarios or possibly other reasons
15:11:35 pcarver: yes, we are also concerned about what can help migrate from one solution to another
15:11:39 ML2 definitely supports hierarchical port binding, but I'm not sure how much other cases of multiple simultaneous controllers have been thought about
15:12:16 pcarver: even without the "hierarchy" part, ML2 supports having multiple drivers active at the same time
15:12:34 I'm, like you, unsure if there are other places
15:12:45 we have a blueprint to track this in BGPVPN
15:12:49 The Gluon demo from last year's OPNFV involved multiple controllers peering with each other via BGP so that VMs could be created on the same network despite being managed by different controllers
15:13:12 https://blueprints.launchpad.net/bgpvpn/+spec/allow-multiple-drivers
15:13:58 Ok, that's good. Looking at it now.
15:14:15 I'm not sure it is the right starting point TBH
15:14:38 Gluon has had some architectural revision. At first, the idea was that the port would be used to look up *the* SDN controller.
15:14:51 But that doesn't support the equivalent of HPB
15:15:00 ah, we in fact have a bug for multiple driver support: https://bugs.launchpad.net/bgpvpn/+bug/1485515
15:15:01 Launchpad bug 1485515 in networking-bgpvpn "multiple service drivers can't coexist" [Wishlist,Confirmed]
15:15:06 As I understand it, that has changed in the more recent architecture
15:16:05 this bug possibly isn't related to what you have in mind either
15:16:16 I'm reading it now
15:17:22 hi tmorin, pcarver: Sorry for my absence for a few meetings
15:17:32 If Contrail specifically proves challenging, we might look at running two ODLs side by side (perhaps different versions to simulate an upgrade scenario?) and look at what challenges that might have.
15:17:42 hi bobmel
15:17:50 np, you're welcome
15:17:59 I see you discuss multiple backend drivers
15:18:02 yep
15:18:30 pcarver: for BGPVPN, what about using the following very simple idea as a starting point:
15:19:09 having the bgpvpn service framework allow multiple simultaneously active drivers, and passing all information from API calls to all of them
15:19:57 Yes, I think that's basically what ML2 does.
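A minimal Python sketch of the "pass all API calls to all drivers" idea tmorin proposes here, assuming a hypothetical MultiDriverManager sitting between the bgpvpn service plugin and the configured backend drivers; the class and method signatures are illustrative and only loosely mirror the networking-bgpvpn driver API.

class MultiDriverManager(object):
    """Hypothetical fan-out: pass every bgpvpn API call to all active drivers."""

    def __init__(self, drivers):
        # 'drivers' would be loaded from configuration, one per backend/SDN controller
        self.drivers = list(drivers)

    def _call_all(self, method_name, *args, **kwargs):
        # every driver sees every call; each backend is expected to act only
        # on the Neutron ports it actually controls
        return [getattr(driver, method_name)(*args, **kwargs)
                for driver in self.drivers]

    # illustrative entry points mirroring the per-driver calls
    def create_bgpvpn(self, context, bgpvpn):
        return self._call_all('create_bgpvpn', context, bgpvpn)

    def create_net_assoc(self, context, assoc_id, network_association):
        return self._call_all('create_net_assoc', context, assoc_id,
                              network_association)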
15:20:15 each backend would then take care of the processing for the related Neutron ports which are under its control
15:20:22 Call all drivers and let the drivers determine if they can/should handle the event
15:20:47 pcarver: well, not exactly, ML2 loops on all drivers, and one (only one) of them will be the binding driver
15:21:27 Are you sure about that? In the HPB case I'm pretty sure it can be multiple.
15:22:03 There was a demo at the Vancouver summit by folks from Cumulus and I discussed the demo with Sukdev Kapur who told me that Arista had implemented the same thing.
15:22:20 yes, but I'm trying to see if, for BGPVPN, just 'calling all drivers' would work without the "determine if they can/should handle the event" part
15:22:23 If I remember correctly it was for VLAN
15:22:43 yes, sorry, with HPB probably multiple
15:23:01 For certain BGPVPN events we would want to call multiple.
15:23:01 but without hierarchy only one I think
15:23:37 in fact, I don't see for which call we would *not* want to call all the drivers
15:23:56 Yes, actually I was just thinking that
15:24:52 For ML2 HPB, I think the binding continues until one of the drivers signals that the binding is complete
15:25:14 bobmel: ok
15:26:21 I think we can start a proposal around the simple "always-call-all-drivers" scheme
15:26:22 So for example the ToR driver can do its part of the binding, then the vswitch on the host, etc.
15:26:36 An example use case would be creating a network that spans across multiple SDN controllers, using the route target to let the controllers know to exchange routes. Then ports bound to one controller could reach ports bound to a different controller.
15:27:08 This would be similar to the Gluon demo from last year.
15:27:20 bobmel: to handle transitioning or upgrade, for BGPVPN, I don't think we will need this kind of complex cooperation between drivers
15:27:42 They established BGP peering between multiple controllers. That allowed VMs handled by one controller to reach VMs handled by a different controller.
15:29:08 pcarver: yes, this can already be achieved between two OpenStacks with one L2 BGPVPN, we would just allow this to happen without requiring a different OpenStack for each SDN controller
15:31:07 the part that we still do not cover with this, but this is not specific to BGPVPN (and it's perhaps fair to keep it out of scope) is: how can the newly added controller know about things that were created/done before its driver was added
15:31:47 That's a good question
15:32:29 and I still don't have the whole picture / I'm not sure we can do something meaningful in BGPVPN alone without knowing what will happen for ports, networks and routers
15:32:47 Yes, I was thinking that as well.
15:33:02 we can make the assumption that ML2 is used (although we have to exclude Contrail...)
15:33:18 With HPB I would think ports may be at least somewhat covered, but networks and routers might need attention
15:33:40 There's an ML2 driver for Contrail, although not endorsed by Juniper
15:35:31 ok, could be an option
15:38:10 so we would have: on a given Neutron network, new ports get created on the new backend (whatever is given priority by the ML2 framework), they don't have connectivity to the ports that were created before via this backend's base L2 functionality, but if this network is associated to a BGPVPN of type l2, then both the new and the old backend will do whatever is needed and the 'new' ports will get connectivity to the 'old' ones
15:38:22 is this the kind of idea you had pcarver?
15:39:13 tmorin: maybe. I may need to think about this more.
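A rough illustration of the cross-controller scenario described just above, assuming the existing BGPVPN API resources: one BGPVPN of type 'l2' whose route target both backends import/export, plus a network association that, under the always-call-all-drivers scheme, every active driver would receive. The attribute names follow the BGPVPN API definition; the name, route target and UUID values are made up for the example.

# BGPVPN spanning two backends: both controllers exchange routes for this RT
bgpvpn_body = {
    "bgpvpn": {
        "name": "cross-backend-l2",   # illustrative name
        "type": "l2",
        "route_targets": ["64512:100"],
    }
}

# associating the shared Neutron network: with multiple active drivers,
# the old and the new backend would each program their own dataplane
net_assoc_body = {
    "network_association": {
        "network_id": "NETWORK_UUID",  # placeholder for the shared network
    }
}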
15:40:13 I hadn't thought too carefully about the time aspect.
15:40:31 With backends being added to or removed from a longer-lived Neutron.
15:40:54 I'll discuss this with the Gluon team as well to see if anyone has thought about how that would work.
15:43:09 another possible thing would be to see if HPB would make sense: at the top of the hierarchy we would have a bgpvpn mech_driver talking with the bgpvpn service plugin, allocating an l2 bgpvpn for a network (new bgpvpnl2 type driver), and down the hierarchy, the ML2 drivers of each backend would do the rest of the work to connect the l2 bgpvpn to whatever they use internally to manage L2 ... that's probably harder to achieve than it sounds ...
15:43:45 to add a backend, you possibly need to replay old information to it, or have it sync back to Neutron
15:44:12 to remove one, this is easier, you simply need to make sure first that it's not handling any port anymore
15:45:57 what I would tend to conclude from this discussion: we can add to n8g-bgpvpn an ability to support multiple drivers simultaneously with an "always-call-all-drivers" behavior; it seems it may help find interesting transition scenarios, but we aren't totally sure
15:46:43 I agree. That's a good place to start and see what works and what doesn't.
15:47:08 the behavior is not too complex to implement, except one thing: error handling (what to do when one driver raises an error and the others do not?)
15:47:34 I see potential convergence between Gluon and Neutron but we need to explore the boundaries to see if there are places that are clearly distinct.
15:47:45 but we could still start with something simple like: fail if any of the drivers fails
15:48:13 ok
15:48:26 let's keep the ideas flowing and revisit the topic soonish
15:48:30 tmorin: Error handling is something that HPB and Gluon/Proton have to handle too, so I'll explore what the current status is.
15:48:43 pcarver: yes, good idea
15:50:17 the other thing I wanted to briefly discuss is:
15:51:05 we need feedback/participation on the blueprints we have in the pipe for: port associations (think hub'n'spoke) and fine-grained control of routing (static routes, local_pref and community control)
15:51:42 pcarver, bobmel: reviews welcome!
15:52:00 pcarver, bobmel: and if you can pass the word to people you know who are working on SDN controller BGPVPN backends, that will help as well!
15:52:56 tmorin: will do. Got kind of swamped, especially with going to ONS last week, but trying to dig myself out.
15:53:33 :)
15:53:36 I understand
15:54:02 I'm unlucky to be so far from the US that I can't travel everywhere, but this gives time for other things as well :)
15:54:04 tmorin: I'm looking for the reviews
15:54:11 Are they open on Gerrit?
15:54:29 nope
15:54:35 launchpad blueprints
15:54:49 oh, ok. I'll take a look at Launchpad
15:54:55 https://blueprints.launchpad.net/bgpvpn/+spec/port-routes
15:55:06 https://blueprints.launchpad.net/bgpvpn/+spec/port-association
15:55:36 this one is not cleaned up yet
15:55:51 the discussion on local_pref and communities control does not have its own blueprint, but https://etherpad.openstack.org/p/bgpvpn_advanced_features has lots of things
15:56:00 I need to sort all of this a little better
15:57:43 ok...
15:57:47 done for today?
15:57:53 thanks pcarver, bobmel...
15:58:26 I'll be off next week, but you can still come, matrohon will possibly chair
15:58:37 sounds good. bye.
15:58:44 bye
15:58:46 #endmeeting
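A hedged sketch of the simple error-handling policy discussed above ("fail if any of the drivers fails"), written as a standalone helper rather than the actual networking-bgpvpn dispatch code; what to do about drivers that already succeeded (rollback or not) is left open, as it was in the discussion.

def call_all_drivers(drivers, method_name, *args, **kwargs):
    """Notify every driver; fail the whole API call if any driver failed."""
    errors = []
    for driver in drivers:
        try:
            getattr(driver, method_name)(*args, **kwargs)
        except Exception as exc:
            # keep notifying the remaining drivers, but record the failure
            errors.append((type(driver).__name__, exc))
    if errors:
        # simplest policy: the API call fails if any driver failed;
        # rolling back the drivers that succeeded is an open question
        raise RuntimeError("bgpvpn driver errors: %r" % (errors,))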