14:00:24 <mestery> #startmeeting networking_ml2
14:00:24 <rkukura> hi
14:00:25 <openstack> Meeting started Wed Jul 31 14:00:24 2013 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:28 <openstack> The meeting name has been set to 'networking_ml2'
14:00:45 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:01:00 <mestery> rcurran, apech, asomya: here?
14:01:05 <apech> dyup
14:01:16 <rcurran> rcurran here
14:01:40 <mestery> OK, so today, I think there are a few discussion points we should cover first in the agenda.
14:01:55 <mestery> #topic Discussion Items
14:02:04 <sukhdev> Good Morning
14:02:14 <mestery> The first item is the directory structure for mechanism drivers.
14:02:21 <mestery> sukhdev had sent an email to start that discussion.
14:02:34 <mestery> Have most people seen this? I think we copied most everyone.
14:03:30 <mestery> I think the consensus on the email thread was to create separate directories under neutron/plugins/ml2/drivers/ for each MechanismDriver.
14:03:36 <mestery> Are people ok with this approach?
14:03:58 <sukhdev> Yes - that sounds reasonable
14:04:08 <rkukura> so drivers can still be single modules or directories, right?
14:04:21 <mestery> rkukura: Yes, we won't enforce a directory.
14:04:37 <rkukura> but same naming convention, indicating type vs. mech
14:04:40 <rcurran> but it reads like both cisco and arista will be creating dirs
14:04:56 <mestery> rcurran: Yes, each vendor can create a directory for their files.
14:04:56 <rcurran> and ODL
14:05:09 <mestery> neutron/plugins/ml2/drivers/[arista,cisco,odl]
14:05:42 <mestery> rkukura: So, for single file drivers, you mean something like "mech_*" and "type_*", right?
14:05:46 <asomya> directories would be needed for multiple files for a mechanism driver
14:05:48 <rkukura> are we expecting these directories to contain a single driver, or multiple drivers?
14:06:10 <rcurran> long term for cisco, multiple
14:06:23 <apech> for arista, the main motivation is for drivers that want to split support into multiple files (with separate exceptions, config, db, and driver file)
14:06:36 <rcurran> yes, cisco will do the same
14:06:42 <mestery> apech: Yes, and it makes sense to keep those in one directory. Same for Cisco.
14:07:25 <rkukura> For a single driver with multiple files, I think the directory should be mech_* or type_*
14:07:51 <mestery> rkukura: So you want the directory named that way too? I think that sounds reasonable.
14:08:01 <rkukura> But for a set of related drivers, maybe even some mech and some type, naming the directory cisco is fine
14:08:21 <sukhdev> so then it will ....neutron/plugins/ml2/drivers/mech_arista, mech_cisco ?
14:08:22 <rcurran> so as an example ../drivers/mech_cisco/mech_cisco_nexus.py?
14:08:27 <apech> my only concern would be the need to rename the directory when you add the first type driver
14:08:46 <apech> though at this point we have no plans of doing that, so it may be a bit academic
14:08:52 <rkukura> rcurran: I'd go with drivers/cisco/mech_cisco_nexus.py
14:09:22 <rcurran> ok, that's what i thought :-) .... didn't understand the mech_* comment from above
14:09:25 <rkukura> nice thing is that with python entry points in the config files rather than class names, we can reorganize without breaking configs
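[editor's note: rkukura's point about entry points vs. class names could look something like the following hypothetical setup.cfg fragment — the namespace and module paths here are illustrative, not taken from the actual neutron tree:]

```ini
# Hypothetical setup.cfg fragment: the config file refers to the short
# entry-point name ("cisco_nexus"), not the class path, so the module
# can later move between directories without breaking deployments.
[entry_points]
neutron.ml2.mechanism_drivers =
    cisco_nexus = neutron.plugins.ml2.drivers.cisco.mech_cisco_nexus:CiscoNexusMechanismDriver
```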
14:09:39 <mestery> So, to summarize: Single file mech drivers are in drivers/mech_*.py. Single mech drivers with multiple files are in drivers/mech_*/. And multiple mech drivers from one vendor are in drivers/[vendor]/
14:09:52 <rkukura> mestery: +1
14:10:11 <apech> okay, works for me
14:10:14 <mestery> #info Single file mech drivers are in drivers/mech_*.py. Single mech drivers with multiple files are in drivers/mech_*/. And multiple mech drivers from one vendor are in drivers/[vendor]/
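[editor's note: the convention in the #info above would give a tree like this — file names are illustrative:]

```
neutron/plugins/ml2/drivers/
    mech_linuxbridge.py      # single-file mechanism driver
    mech_arista/             # one mechanism driver split across files
        mech_arista.py
        db.py
        exceptions.py
    cisco/                   # multiple (future) drivers from one vendor
        mech_cisco_nexus.py
```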
14:10:33 <rcurran> ok, but i should create cisco as drivers/cisco since we will (in the future) create other drivers
14:10:45 <mestery> rcurran: Yes, that is the way to go.
14:10:53 <rcurran> got it
14:11:00 <mestery> OK, lets move on to the next agenda item.
14:11:10 <sukhdev> so, for us it will mech_arista then
14:11:40 <mestery> API Extensions in MechanismDrivers.
14:11:44 <rcurran> because arista only plans on supporting one driver in the future?
14:11:51 <mestery> asomya, do you want to talk about this one?
14:12:09 <asomya> mestery: sure
14:12:09 <apech> sukhdev: that's what i understood
14:12:40 <asomya> So currently ML2 loads all extensions, we might want to make it selective by loading only supported_extensions from mechanism and type drivers loaded
14:13:45 <mestery> asomya: Also, what if a particular MechanismDriver wants to implement additional API extensions?
14:14:41 <asomya> We can read the list from each mechanism driver and load those extensions including custom API extensions per mechanism driver. This list can be maintained per mech driver in the Mech manager
14:15:00 <rkukura> tricky part is defining what it means to "support an extension" when multiple drivers are involved that might not all support the extension
14:15:19 <mestery> rkukura: Agreed.
14:16:13 <mestery> OK, so any other comments on API extensions in MechanismDrivers?
14:16:23 <rkukura> I guess there might be case where the extension is explicitly driver-specific, like maybe an API for managing the controller that driver supports or something
14:17:04 <rkukura> so allowing drivers to define completely new resources should be well-defined
14:17:37 <rkukura> but if an extension adds attributes to network or port, not all drivers may know what to do with these
14:18:18 <rkukura> do we have any short term needs for any of this, or should we try to defer API extensibility to icehouse?
14:18:51 <mestery> I think we can defer this to Icehouse, frankly. apech, rcurran, asomya, what do you think?
14:19:12 <apech> no short term need here
14:19:24 <rcurran> oops .. where did asomya go :-)
14:19:42 <asomya> sorry lost internet for a bit
14:20:19 <rcurran> arvind - i don't think i need ext for cisco_nexus ... svi support ?
14:20:43 <asomya> rcurran: nope no extensions needed for that
14:21:00 <rcurran> ok then for havana, i'm good
14:21:09 <asomya> So i was suggesting maintaining a dictionary of supported extensions per driver in the Mechanism Manager and routing API calls to each mech driver based on that dict
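[editor's note: asomya's suggestion of a per-driver extension dictionary in the Mechanism Manager could be sketched as follows — this is an illustrative standalone sketch, not actual Neutron code; the driver classes and the `handle` hook are hypothetical:]

```python
# Sketch: record which API extensions each mechanism driver advertises,
# and route extension API calls only to the drivers that support them.

class FakeCiscoDriver:
    supported_extension_aliases = ['cisco-nexus']  # hypothetical alias

    def handle(self, alias, data):
        return 'cisco handled %s' % alias

class FakeOdlDriver:
    supported_extension_aliases = []  # supports no extra extensions

    def handle(self, alias, data):
        return 'odl handled %s' % alias

class MechanismManager:
    def __init__(self, drivers):
        # Map each extension alias to the drivers that advertise it.
        self.extension_map = {}
        for driver in drivers:
            for alias in driver.supported_extension_aliases:
                self.extension_map.setdefault(alias, []).append(driver)

    def supported_aliases(self):
        # Only these extensions would be loaded by the plugin.
        return set(self.extension_map)

    def dispatch(self, alias, data):
        # Route the API call to supporting drivers; ignore the rest.
        return [d.handle(alias, data)
                for d in self.extension_map.get(alias, [])]

manager = MechanismManager([FakeCiscoDriver(), FakeOdlDriver()])
```

This also makes rkukura's concern concrete: a call for an alias no driver supports simply routes nowhere, which is one (debatable) way to define "support" with heterogeneous drivers.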
14:21:12 <mestery> #info Defer API extensibility for ML2 to Icehouse
14:22:08 <rcurran> asomya - you may have missed the question - does ODL need this for havana?
14:22:19 <rcurran> or can it wait until icehouse
14:22:30 <asomya> rcurran: nope, ODL doesn't need this for havana
14:23:10 <mestery> OK, so we will defer API extensibility to Icehouse then.
14:23:18 <mestery> Moving on to the next topic for discussion.
14:23:24 <asomya> mestery: +1, it's not critical to have this
14:23:36 <mestery> RPC extensibility in ML2.
14:23:56 <mestery> rkukura, rcurran, and asomya and I spoke about this briefly on the phone yesterday.
14:24:03 <mestery> asomya: Want to elaborate here? You hit this with ODL.
14:24:12 <asomya> mestery: sure
14:25:07 <asomya> The ODL implementation relies on some custom RPC calls from the agent to the driver to do its job. In ML2 all RPC calls are routed to the type_manager; we need a select few calls to also come to the mechanism_manager to be distributed to the mech drivers
14:26:13 <asomya> We talked about a short-term solution of creating a second topic (e.g. mechanism) to be routed to the mechanism drivers and consumed by the mechanism driver instead of the type_driver
14:26:16 <mestery> So for this, we have an immediate need in Havana for this functionality, though it's tempered by the fact that some changes in ODL may preclude us needing an agent for ODL to work with Neutron.
14:26:28 <asomya> the type driver can still consume the conventional dhcp, l3 etc. calls based on the old topic name
14:27:01 <rkukura> So right now the type_tunnel.TunnelRpcCallbackMixin is inherited by RpcCallbacks. Would a more general approach for drivers to supply callback mixins do the job?
14:28:21 <asomya> rkukura: that might do the trick as well. So will the drivers themselves be inheriting the rpccallback classes in that case?
14:29:18 <rkukura> asomya: I was just thinking that if a driver wanted to define its own callback, there would be some way to register it
14:29:53 <rkukura> I'm not 100% clear on how the RPC dispatching works, so maybe drivers can already do this themselves by defining a new topic
14:30:50 <asomya> I tried the conventional way of implementing a second RPC inside the ODL mech driver but that didn't work; the topic name was the same and everything got sent to the type driver. I'll try it with a new topic name
14:31:25 <rkukura> asomya: If some hooks are needed for the RPC topic registration to work, we can add that
14:31:59 <asomya> rkukura: sounds great, lemme try a new topic and i'll get back to the group on what's needed
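[editor's note: the second-topic idea discussed above can be sketched with a toy dispatcher — this is not the oslo/neutron RPC API, just an illustration of routing by topic; the topic names and callback classes are hypothetical:]

```python
# Sketch: route RPC messages by topic, so agent calls on a separate
# "mechanism" topic reach mechanism-driver callbacks instead of the
# conventional plugin callbacks.

class RpcDispatcher:
    def __init__(self):
        self.consumers = {}

    def register(self, topic, callback):
        # A callback object consumes all methods invoked on its topic.
        self.consumers.setdefault(topic, []).append(callback)

    def cast(self, topic, method, **kwargs):
        # Deliver the named method call to every consumer of the topic.
        return [getattr(cb, method)(**kwargs)
                for cb in self.consumers.get(topic, [])]

class PluginCallbacks:
    def get_device_details(self, device):
        return {'device': device, 'handled_by': 'plugin'}

class MechDriverCallbacks:
    def sync_flows(self, device):
        return {'device': device, 'handled_by': 'mechanism'}

dispatcher = RpcDispatcher()
dispatcher.register('q-plugin', PluginCallbacks())         # conventional topic
dispatcher.register('q-mechanism', MechDriverCallbacks())  # hypothetical new topic
```

asomya's earlier failure follows directly from this model: two consumers registered on the same topic both receive the call, so a distinct topic name is what separates the traffic.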
14:32:30 <mestery> #action asomya to try out MechanismDriver specific RPC calls with a new topic and report back on how that works
14:32:34 <rkukura> The other idea kicked around was having the existing RPCs such as get_device_details() call into the MechanismDriver to which the port is bound so it can add any needed attributes to the response, or call a controller
14:33:57 <asomya> rkukura: But that was very specific to the ODL use case, could be other mech drivers that need even different RPC hooks
14:35:32 <rkukura> could certainly do similar driver calls for update_device_[up|down]
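[editor's note: rkukura's alternative — letting the bound MechanismDriver extend the existing RPC responses — could look like this; an illustrative sketch only, with a hypothetical `extend_device_details` hook and made-up attribute names:]

```python
# Sketch: after building the standard get_device_details() response,
# call into the mechanism driver the port is bound to so it can append
# driver-specific attributes (or contact its controller).

class OdlMechDriver:
    def extend_device_details(self, details):
        # Hypothetical hook: the driver adds whatever its agent needs.
        details['odl_flow_version'] = 3
        return details

class RpcCallbacks:
    def __init__(self, bound_drivers):
        # Map port id -> mechanism driver the port is bound to.
        self.bound_drivers = bound_drivers

    def get_device_details(self, port_id):
        details = {'port_id': port_id, 'network_type': 'vxlan'}
        driver = self.bound_drivers.get(port_id)
        if driver is not None:
            details = driver.extend_device_details(details)
        return details

callbacks = RpcCallbacks({'port-1': OdlMechDriver()})
```

The same hook pattern would extend naturally to update_device_up/update_device_down, as rkukura notes above.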
14:36:26 <mestery> OK, so asomya, can you try out the new topic and see if that gets the ODL Mech driver going in the short term here?
14:36:36 <rkukura> Lets see how the portbinding works out, and we can add MechanismDriver calls to the L2 agent RPCs if that seems to make sense
14:36:37 <asomya> mestery: sure
14:36:49 <mestery> So lets move on to the next topic.
14:36:54 <mestery> #topic Blueprint Updates
14:37:03 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 Port Binding
14:37:08 <mestery> rkukura: You led us into the next topic. :)
14:37:25 <rkukura> finally started working on it
14:37:59 <rkukura> with luck, may have a patch at the end of the week
14:38:36 <mestery> rkukura: Great! I know some MechanismDriver writers who will anxiously be waiting. :)
14:38:54 <mestery> Any questions on ML2 Port Binding?
14:39:32 <apech> nope, looking forward to it, thanks rkukura
14:39:45 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api ML2 Multi Segment API
14:39:51 <mestery> So, this one is low priority for H3.
14:40:09 <mestery> I'm not sure if we should prioritize this for H3 or make this an Icehouse thing. Thoughts from others?
14:40:50 <asomya> mestery: Do any mechanism drivers need to implement this asap?
14:41:03 <mestery> asomya: None that I'm aware of, no.
14:41:11 <mestery> At least in the Havana timeframe.
14:41:23 <sukhdev> Nope
14:41:56 <rkukura> I think our priority here should be to make sure the extension being added for NVP is defined properly, and implement it if we get a chance
14:42:22 <mestery> rkukura: Agreed. That hasn't gone up yet, I'll make sure to stay on top of that since this BP is assigned to me.
14:42:43 <mestery> #action mestery to monitor NVP extension for multiple provider networks in relation to ML2 multi-segment API
14:43:00 <rkukura> I think I was supposed to start an email thread, and still might when I review the NVP patch
14:43:09 <mestery> rkukura: OK, thanks!
14:43:26 <rkukura> mestery: you are welcome to do that if you get to it first, and discussion seems to be needed
14:43:39 <mestery> rkukura: OK, sounds good.
14:43:51 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info ML2 TypeDriver Extra Port Info
14:43:57 <mestery> ZangMingjie: Here?
14:44:10 <mestery> #link https://review.openstack.org/#/c/37893/ ExtraPort Info Review
14:44:11 <ZangMingJie> ye
14:44:20 <mestery> So, I provided some comments on this one based on what rkukura and I discussed Monday.
14:44:37 <mestery> We thought instead of this, we could overload "physical network".
14:44:52 <mestery> But, as ZangMingJie pointed out, this BP may provide more flexibility.
14:45:24 <ZangMingJie> I think it may be a long term process to replace the 3-tuple with a customizable arbitrary data struct
14:45:29 <rkukura> looks like this is doing with TypeDrivers what I was proposing to do for the bound MechanismDriver!
14:45:44 <mestery> rkukura: Looking at it, I think you're right!
14:46:37 <mestery> ZangMingJie: I will re-review this today, but I think I'm more inclined to approve it now after discussions over the last day.
14:46:50 <rkukura> One question is whether the ml2 schema should continue to store the 3-tuple, or maybe instead store an encoded dict for each segment
14:47:41 <ZangMingJie> what about each type driver managing its own database store
14:48:06 <rkukura> ZangMingJie: That is also worth considering
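[editor's note: rkukura's "encoded dict for each segment" option can be sketched with plain JSON — illustrative only; the extra key is a made-up example of what a type driver might add beyond the fixed 3-tuple:]

```python
# Sketch: instead of fixed (network_type, physical_network,
# segmentation_id) columns, each segment is stored as one JSON-encoded
# dict, letting type drivers persist arbitrary extra keys.
import json

segment = {
    'network_type': 'vxlan',
    'segmentation_id': 1001,
    'endpoint_port': 4789,   # hypothetical extra key from a type driver
}

encoded = json.dumps(segment, sort_keys=True)  # value stored in one DB column
decoded = json.loads(encoded)                  # round-trips losslessly
```

The trade-off raised in the meeting is exactly this: an encoded column is flexible, while ZangMingJie's per-type-driver tables would keep each driver's schema queryable.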
14:48:28 <rkukura> I will definitely review this current patch in detail
14:48:48 <ZangMingJie> ok, thank you
14:49:09 <mestery> OK, we only have 11 minutes left, lets move on.
14:49:15 <mestery> #topic Bugs
14:49:22 <mestery> #link https://review.openstack.org/#/c/37963/ ML2 devstack
14:49:30 <mestery> The good news is this change was merged yesterday.
14:49:37 <mestery> So if you haven't tried it yet, now it's even easier.
14:49:58 <mestery> The next step for devstack is to see if we can get it to default to ML2 instead of OVS.
14:50:14 <mestery> I will likely open a bug for that and work with deantroyer and sdague to see how we can make that happen.
14:50:26 <rkukura> mestery: +1
14:50:58 <sukhdev> mestery: thanks- this will help
14:51:07 <rkukura> In the mean time, maybe an email to openstack-dev encouraging people to try out ml2 in devstack would be a good idea
14:51:36 <rkukura> Along with some instructions on the ML2 wiki
14:51:39 <mestery> #action mestery to send email to openstack-dev encouraging people to try ML2 out
14:51:47 <mestery> #action mestery to update the wiki with ML2 devstack instructions
14:51:54 <mestery> Good ideas rkukura!
14:52:19 <mestery> #link https://review.openstack.org/#/c/38662/ Enable gre and vxlan with the same ID
14:52:21 <mestery> matrohon: Here?
14:52:28 <rkukura> then we'll start getting some new bugs to work on;-)
14:52:38 <mestery> rkukura: Heh. :)
14:53:05 <mestery> The review I just posted is for this bug: https://bugs.launchpad.net/neutron/+bug/1196963
14:53:06 <uvirtbot> Launchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Wishlist,In progress]
14:53:27 <mestery> We should all review this, as this will be nice to get in for H3 to allow for GRE and VXLAN tunnels to have the same tunnel ID, which they currently can't in ML2.
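[editor's note: the intent of that fix can be sketched in a few lines — illustrative only, not the actual agent code; the port names are invented:]

```python
# Sketch: keying tunnel state by (network_type, tunnel_id) instead of the
# bare tunnel_id lets a GRE tunnel and a VXLAN tunnel share the same
# numeric ID without colliding.

tunnels = {}

def add_tunnel(network_type, tunnel_id, port):
    key = (network_type, tunnel_id)
    if key in tunnels:
        raise ValueError('tunnel already allocated: %r' % (key,))
    tunnels[key] = port

add_tunnel('gre', 1, 'gre-0a000001')
add_tunnel('vxlan', 1, 'vxlan-0a000001')  # same ID, different type: allowed
```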
14:54:03 <mestery> #topic Questions and wrap up
14:54:14 <mestery> I skipped a few items due to time, is there anything else we should cover here in the last 5 minutes?
14:54:52 <asomya> Can someone add a few lines to the wiki on how the pre- and post- calls work in the mech drivers?
14:55:06 <apech> asomya: sure, i can take that on
14:55:06 <mestery> apech: Can you take that one?
14:55:12 <asomya> apech: thanks :)
14:55:17 <mestery> #action apech to add some notes to the wiki about how the pre- and post- calls work
14:56:28 <mestery> So, the Neutron feature cutoff is end of day August 23.
14:56:36 <mestery> And the release date is September 5.
14:56:42 <mestery> So we don't have a lot of time left in Havana.
14:56:53 <mestery> Just FYI, no pressure or anything. :)
14:57:22 <sukhdev> :-)
14:57:30 <mestery> OK, so thanks everyone for all your efforts on ML2!
14:57:43 <mestery> We'll do this again next week, same UTC-time, same IRC-channel.
14:57:45 <mestery> #endmeeting