14:00:24 #startmeeting networking_ml2
14:00:24 hi
14:00:25 Meeting started Wed Jul 31 14:00:24 2013 UTC and is due to finish in 60 minutes. The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:28 The meeting name has been set to 'networking_ml2'
14:00:45 #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:01:00 rcurran, apech, asomya: here?
14:01:05 yup
14:01:16 rcurran here
14:01:40 OK, so today, I think there are a few discussion points we should cover first in the agenda.
14:01:55 #topic Discussion Items
14:02:04 Good morning
14:02:14 The first item is the directory structure for mechanism drivers.
14:02:21 sukhdev had sent an email to start that discussion.
14:02:34 Have most people seen this? I think we copied most everyone.
14:03:30 I think the consensus on the email thread was to create separate directories under neutron/plugins/ml2/drivers/ for each MechanismDriver.
14:03:36 Are people OK with this approach?
14:03:58 Yes - that sounds reasonable
14:04:08 so drivers can still be single modules or directories, right?
14:04:21 rkukura: Yes, we won't enforce a directory.
14:04:37 but same naming convention, indicating type vs. mech
14:04:40 but it reads like both cisco and arista will be creating dirs
14:04:56 rcurran: Yes, each vendor can create a directory for their files.
14:04:56 and ODL
14:05:09 neutron/plugins/ml2/drivers/[arista,cisco,odl]
14:05:42 rkukura: So, for single-file drivers, you mean something like "mech_*" and "type_*", right?
14:05:46 directories would be needed for multiple files for a mechanism driver
14:05:48 are we expecting these directories to contain a single driver, or multiple drivers?
14:06:10 long term for cisco, multiple
14:06:23 for arista, the main motivation is for drivers that want to split support into multiple files (with separate exceptions, config, db, and driver files)
14:06:36 yes, cisco will do the same
14:06:42 apech: Yes, and it makes sense to keep those in one directory. Same for Cisco.
14:07:25 For a single driver with multiple files, I think the directory should be mech_* or type_*
14:07:51 rkukura: So you want the directory named that way too? I think that sounds reasonable.
14:08:01 But for a set of related drivers, maybe even some mech and some type, naming the directory cisco is fine
14:08:21 so then it will be ....neutron/plugins/ml2/drivers/mech_arista, mech_cisco?
14:08:22 so as an example ../drivers/mech_cisco/mech_cisco_nexus.py?
14:08:27 my only concern would be the need to rename the directory when you add the first type driver
14:08:46 though at this point we have no plans of doing that, so it may be a bit academic
14:08:52 rcurran: I'd go with drivers/cisco/mech_cisco_nexus.py
14:09:22 ok, that's what I thought :-) .... didn't understand the mech_* comment from above
14:09:25 nice thing is that with Python entry points in the config files rather than class names, we can reorganize without breaking configs
14:09:39 So, to summarize: Single-file mech drivers are in drivers/mech_*.py. Single mech drivers with multiple files are in drivers/mech_*/. And multiple mech drivers from one vendor are in drivers/[vendor]/.
14:09:52 mestery: +1
14:10:11 okay, works for me
14:10:14 #info Single-file mech drivers are in drivers/mech_*.py. Single mech drivers with multiple files are in drivers/mech_*/. And multiple mech drivers from one vendor are in drivers/[vendor]/
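To make the agreed layout concrete, here is a sketch of the resulting tree; the specific driver and file names are illustrative, borrowed from the examples mentioned in the discussion above:

```text
neutron/plugins/ml2/drivers/
    type_vlan.py               # single-file type driver
    mech_linuxbridge.py        # single-file mechanism driver
    mech_arista/               # one mech driver split across files
        __init__.py
        config.py
        db.py
        exceptions.py
        mechanism_arista.py
    cisco/                     # several related drivers from one vendor
        __init__.py
        mech_cisco_nexus.py
```

As rkukura notes, because deployments reference drivers by Python entry-point name rather than class path, files can later be moved between these layouts without breaking existing configs.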
14:10:33 ok, but I should create cisco as drivers/cisco since we will (in the future) create other drivers
14:10:45 rcurran: Yes, that is the way to go.
14:10:53 got it
14:11:00 OK, let's move on to the next agenda item.
14:11:10 so, for us it will be mech_arista then
14:11:40 API Extensions in MechanismDrivers.
14:11:44 because arista only plans on supporting one driver in the future?
14:11:51 asomya, do you want to talk about this one?
14:12:09 mestery: sure
14:12:09 sukhdev: that's what I understood
14:12:40 So currently ML2 loads all extensions; we might want to make it selective by loading only the supported_extensions from the mechanism and type drivers that are loaded
14:13:45 asomya: Also, what if a particular MechanismDriver wants to implement additional API extensions?
14:14:41 We can read the list from each mechanism driver and load those extensions, including custom API extensions per mechanism driver. This list can be maintained per mech driver in the Mech manager
14:15:00 tricky part is defining what it means to "support an extension" when multiple drivers are involved that might not all support the extension
14:15:19 rkukura: Agreed.
14:16:13 OK, so any other comments on API extensions in MechanismDrivers?
14:16:23 I guess there might be a case where the extension is explicitly driver-specific, like maybe an API for managing the controller that driver supports or something
14:17:04 so allowing drivers to define completely new resources should be well-defined
14:17:37 but if an extension adds attributes to network or port, not all drivers may know what to do with these
14:18:18 do we have any short-term needs for any of this, or should we try to defer API extensibility to Icehouse?
14:18:51 I think we can defer this to Icehouse, frankly. apech, rcurran, asomya, what do you think?
14:19:12 no short-term need here
14:19:24 oops .. where did asomya go :-)
14:19:42 sorry, lost internet for a bit
14:20:19 arvind - I don't think I need an ext for cisco_nexus ... SVI support?
14:20:43 rcurran: nope, no extensions needed for that
14:21:00 ok, then for havana, I'm good
14:21:09 So I was suggesting maintaining a dictionary of supported extensions per driver in the Mechanism Manager and routing API calls to each mech driver based on that dict
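A minimal sketch of that proposal, assuming hypothetical attribute and method names (nothing below is actual ML2 code, and the idea was ultimately deferred):

```python
# Hypothetical sketch of asomya's proposal: the MechanismManager keeps
# a per-extension map of the drivers that support it and routes
# extension API calls only to those drivers.
class MechanismManager(object):
    def __init__(self, drivers):
        self.drivers = drivers
        # extension alias -> drivers that declared support for it
        self.supported_extensions = {}
        for drv in drivers:
            for alias in getattr(drv, 'supported_extension_aliases', []):
                self.supported_extensions.setdefault(alias, []).append(drv)

    def call_extension(self, alias, method, *args, **kwargs):
        # Dispatch an extension API call to each supporting driver.
        for drv in self.supported_extensions.get(alias, []):
            getattr(drv, method)(*args, **kwargs)
```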
14:21:12 #info Defer API extensibility for ML2 to Icehouse
14:22:08 asomya - you may have missed the question - does ODL need this for havana?
14:22:19 or can it wait until icehouse
14:22:30 rcurran: nope, ODL doesn't need this for havana
14:23:10 OK, so we will defer API extensibility to Icehouse then.
14:23:18 Moving on to the next topic for discussion.
14:23:24 mestery: +1, it's not critical to have this
14:23:36 RPC extensibility in ML2.
14:23:56 rkukura, rcurran, asomya and I spoke about this briefly on the phone yesterday.
14:24:03 asomya: Want to elaborate here? You hit this with ODL.
14:24:12 mestery: sure
14:25:07 The ODL implementation relies on some custom RPC calls from the agent to the driver to do its job. In ML2 all RPC calls are routed to the type_manager; we need a select few calls to also come to the mechanism_manager to be distributed to the mech drivers
14:26:13 We talked about a short-term solution of creating a second topic (e.g. mechanism) to be routed to the mechanism drivers and consumed by the mechanism driver instead of the type_driver
14:26:16 So for this we have an immediate need in Havana, though it's tempered by the fact that some changes in ODL may remove the need for an agent for ODL to work with Neutron.
14:26:28 the type driver can still consume the conventional dhcp, l3, etc. calls based on the old topic name
14:27:01 So right now the type_tunnel.TunnelRpcCallbackMixin is inherited by RpcCallbacks. Would a more general approach for drivers to supply callback mixins do the job?
14:28:21 rkukura: that might do the trick as well. So will the drivers themselves be inheriting the RPC callback classes in that case?
14:29:18 asomya: I was just thinking that if a driver wanted to define its own callback, there would be some way to register it
14:29:53 I'm not 100% clear on how the RPC dispatching works, so maybe drivers can already do this themselves by defining a new topic
14:30:50 I tried the conventional way of implementing a second RPC inside the ODL mech driver but that didn't work; the topic name was the same and everything got sent to the type driver. I'll try it with a new topic name
14:31:25 asomya: If some hooks are needed for the RPC topic registration to work, we can add that
14:31:59 rkukura: sounds great, lemme try a new topic and I'll get back to the group on what's needed
14:32:30 #action asomya to try out MechanismDriver-specific RPC calls with a new topic and report back on how that works
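As a rough sketch of what asomya is about to try, this is one way a second consumer topic could be registered so that selected agent calls reach the mechanism drivers. It assumes the Havana-era neutron RPC modules; the topic name, callback class, and the example call are all illustrative, not actual ML2 code:

```python
# Rough sketch: a second RPC topic consumed on behalf of the mechanism
# drivers, alongside the existing plugin topic that the type-driver
# oriented callbacks keep consuming.
from neutron.common import rpc as q_rpc            # assumed module path
from neutron.openstack.common import rpc as c_rpc  # assumed module path

MECHANISM_TOPIC = 'q-mechanism'  # hypothetical topic name


class MechanismRpcCallbacks(object):
    """Callbacks consumed on the new topic (illustrative only)."""
    RPC_API_VERSION = '1.0'

    def __init__(self, mechanism_manager):
        self.mechanism_manager = mechanism_manager

    def create_rpc_dispatcher(self):
        return q_rpc.PluginRpcDispatcher([self])

    def sync_controller_state(self, rpc_context, **kwargs):
        # Hypothetical agent-to-driver call, fanned out by the manager.
        pass


def start_mechanism_rpc(mechanism_manager):
    # Wire the new topic up next to the existing plugin consumer.
    conn = c_rpc.create_connection(new=True)
    callbacks = MechanismRpcCallbacks(mechanism_manager)
    conn.create_consumer(MECHANISM_TOPIC,
                         callbacks.create_rpc_dispatcher(),
                         fanout=False)
    conn.consume_in_thread()
    return conn
```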
14:32:34 The other idea kicked around was having the existing RPCs such as get_device_details() call into the MechanismDriver to which the port is bound so it can add any needed attributes to the response, or call a controller
14:33:57 rkukura: But that was very specific to the ODL use case; there could be other mech drivers that need even different RPC hooks
14:35:32 could certainly do similar driver calls for update_device_[up|down]
14:36:26 OK, so asomya, can you try out the new topic and see if that gets the ODL mech driver going in the short term here?
14:36:36 Let's see how the portbinding works out, and we can add MechanismDriver calls to the L2 agent RPCs if that seems to make sense
14:36:37 mestery: sure
14:36:49 So let's move on to the next topic.
14:36:54 #topic Blueprint Updates
14:37:03 #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 Port Binding
14:37:08 rkukura: You led us into the next topic. :)
14:37:25 finally started working on it
14:37:59 with luck, may have a patch at the end of the week
14:38:36 rkukura: Great! I know some MechanismDriver writers who will anxiously be waiting. :)
14:38:54 Any questions on ML2 Port Binding?
14:39:32 nope, looking forward to it, thanks rkukura
14:39:45 #link https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api ML2 Multi Segment API
14:39:51 So, this one is low priority for H3.
14:40:09 I'm not sure if we should prioritize this for H3 or make this an Icehouse thing. Thoughts from others?
14:40:50 mestery: Do any mechanism drivers need to implement this asap?
14:41:03 asomya: None that I'm aware of, no.
14:41:11 At least in the Havana timeframe.
14:41:23 Nope
14:41:56 I think our priority here should be to make sure the extension being added for NVP is defined properly, and implement it if we get a chance
14:42:22 rkukura: Agreed. That hasn't gone up yet; I'll make sure to stay on top of that since this BP is assigned to me.
14:42:43 #action mestery to monitor NVP extension for multiple provider networks in relation to ML2 multi-segment API
14:43:00 I think I was supposed to start an email thread, and still might when I review the NVP patch
14:43:09 rkukura: OK, thanks!
14:43:26 mestery: you are welcome to do that if you get to it first, and discussion seems to be needed
14:43:39 rkukura: OK, sounds good.
14:43:51 #link https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info ML2 TypeDriver Extra Port Info
14:43:57 ZangMingJie: Here?
14:44:10 #link https://review.openstack.org/#/c/37893/ Extra Port Info Review
14:44:11 yes
14:44:20 So, I provided some comments on this one based on what rkukura and I discussed Monday.
14:44:37 We thought instead of this, we could overload "physical network".
14:44:52 But, as ZangMingJie pointed out, this BP may provide more flexibility.
14:45:24 I think it may be a long-term process to replace the 3-tuple with a customizable arbitrary data struct
14:45:29 looks like this is doing with TypeDrivers what I was proposing to do for the bound MechanismDriver!
14:45:44 rkukura: Looking at it, I think you're right!
14:46:37 ZangMingJie: I will re-review this today, but I think I'm more inclined to approve it now after discussions over the last day.
14:46:50 One question is whether the ml2 schema should continue to store the 3-tuple, or maybe instead store an encoded dict for each segment
14:47:41 what about each type driver managing its own database store
14:48:06 ZangMingJie: That is also worth considering
14:48:28 I will definitely review this current patch in detail
14:48:48 ok, thank you
14:49:09 OK, we only have 11 minutes left, let's move on.
14:49:15 #topic Bugs
14:49:22 #link https://review.openstack.org/#/c/37963/ ML2 devstack
14:49:30 The good news is this change was merged yesterday.
14:49:37 So if you haven't tried it yet, now it's even easier.
14:49:58 The next step for devstack is to see if we can get it to default to ML2 instead of OVS.
14:50:14 I will likely open a bug for that and work with deantroyer and sdague to see how we can make that happen.
14:50:26 mestery: +1
14:50:58 mestery: thanks - this will help
14:51:07 In the meantime, maybe an email to openstack-dev encouraging people to try out ml2 in devstack would be a good idea
14:51:36 Along with some instructions on the ML2 wiki
14:51:39 #action mestery to send email to openstack-dev encouraging people to try ML2 out
14:51:47 #action mestery to update the wiki with ML2 devstack instructions
14:51:54 Good ideas rkukura!
14:52:19 #link https://review.openstack.org/#/c/38662/ Enable gre and vxlan with the same ID
14:52:21 matrohon: Here?
14:52:28 then we'll start getting some new bugs to work on ;-)
14:52:38 rkukura: Heh. :)
14:53:05 The review I just posted is for this bug: https://bugs.launchpad.net/neutron/+bug/1196963
14:53:06 Launchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Wishlist,In progress]
14:53:27 We should all review this, as it will be nice to get in for H3 to allow GRE and VXLAN tunnels to have the same tunnel ID, which they currently can't in ML2.
14:54:03 #topic Questions and wrap up
14:54:14 I skipped a few items due to time; is there anything else we should cover here in the last 5 minutes?
14:54:52 Can someone add a few lines to the wiki on how the pre- and post- calls work in the mech drivers?
14:55:06 asomya: sure, I can take that on
14:55:06 apech: Can you take that one?
14:55:12 apech: thanks :)
14:55:17 #action apech to add some notes to the wiki about how the pre- and post- calls work
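Until that wiki section exists, here is a bare sketch of the two-phase calls in question, using the create_port pair from the ML2 driver API (the base class is neutron.plugins.ml2.driver_api.MechanismDriver; the example class name and method bodies are placeholders):

```python
# Skeleton of the pre-/post-commit mechanism driver calls (create_port
# shown; update/delete and the network operations follow the same pattern).
from neutron.plugins.ml2 import driver_api as api


class ExampleMechanismDriver(api.MechanismDriver):
    def initialize(self):
        pass

    def create_port_precommit(self, context):
        # Runs inside the DB transaction creating the port: validate
        # the request and persist any driver state; raising an
        # exception here rolls the whole operation back.
        pass

    def create_port_postcommit(self, context):
        # Runs after the transaction commits: push the change to the
        # device/controller; a failure here cannot undo the DB commit.
        pass
```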
14:56:28 So, the Neutron feature cutoff is end of day August 23.
14:56:36 And the release date is September 5.
14:56:42 So we don't have a lot of time left in Havana.
14:56:53 Just FYI, no pressure or anything. :)
14:57:22 :-)
14:57:30 OK, so thanks everyone for all your efforts on ML2!
14:57:43 We'll do this again next week, same UTC-time, same IRC-channel.
14:57:45 #endmeeting