15:01:05 <mestery> #startmeeting networking_ml2
15:01:06 <openstack> Meeting started Wed Jun 12 15:01:05 2013 UTC.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:09 <openstack> The meeting name has been set to 'networking_ml2'
15:01:22 <mestery> #agenda https://wiki.openstack.org/wiki/Meetings/ML2
15:01:47 <mestery> So, first, apologies for the timing mistake on my part.
15:01:59 <mestery> #info Next week and going forward we will have the ML2 meeting at 1400 UTC Wednesday
15:02:39 <mestery> #topic MechanismDriver
15:02:54 <mestery> I thought we could start out with the MechanismDriver.
15:02:59 <mestery> apech_: You here?
15:03:02 <apech_> yup
15:03:07 <mestery> cool
15:03:18 <mestery> I think your email to openstack-dev is a good starting point here, hopefully people have had a chance to read it.
15:03:43 <apech_> thanks, I have a few questions to pose to the list if that's a good place to start, or can summarize quickly
15:03:53 <mestery> #link http://lists.openstack.org/pipermail/openstack-dev/2013-June/010252.html
15:04:09 <mestery> apech_: Lets go with questions, assuming folks have read your email.
15:04:56 <apech_> great
15:05:21 <apech_> maybe one place to start is on the question of calls to the mechanism driver within or after the transaction
15:05:53 <apech_> I think I actually misspoke in my email and our implementation will mostly use calls after the transaction
15:06:02 <apech_> but are there use cases out there for a call within the transaction?
15:06:25 <mestery> I think outside the transaction makes the most sense.
15:06:47 <mestery> Thoughts by others?
15:06:59 <apech_> and if there are, any suggestions on the function names? The best I can think of is "create_network_in_transaction" and "create_network_after_transaction", which makes me puke a bit :)
15:07:08 <rkukura> apech_: Outside would make sense for remote round-trip communications, inside for DB access.
15:07:43 <apech_> rkukura: ok, that makes sense
15:08:04 <mestery> So, sounds like we have our use case for in-transaction then.
15:08:04 <apech_> do you have any thoughts on distinguishing between the method names?
15:08:33 <rkukura> I had thought of _tx on the in-transaction ones.
15:08:51 <mestery> rkukura: I like that naming approach better than apech_'s. :)
15:08:51 <apech_> okay, that works for me
15:09:03 <apech_> haha
15:09:10 <apech_> good, I guilted people into better suggestions
15:09:31 <apech_> alternatively, I guess inside the transaction could be considered the allocation of resources for the action, while outside the transaction is the implementation
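The two-phase split discussed above (allocation inside the DB transaction, implementation after commit) could be sketched as follows. This is purely illustrative: the `_tx` suffix follows rkukura's naming suggestion, and the class and method names are hypothetical, not a settled API.

```python
# Illustrative sketch of the two-phase MechanismDriver calls discussed
# in the thread. The "_tx" suffix marks the in-transaction variant;
# all names here are hypothetical, not the final ml2 API.

class MechanismDriver:
    def create_network_tx(self, session, network):
        """Called inside the DB transaction: allocate resources and do
        additional DB work. Must not make remote round-trips."""
        pass

    def create_network(self, network):
        """Called after the transaction commits: push the change to the
        backend (remote round-trip communication is safe here)."""
        pass


class RecordingDriver(MechanismDriver):
    """Toy driver that records the order in which it is called."""
    def __init__(self):
        self.calls = []

    def create_network_tx(self, session, network):
        self.calls.append(("tx", network["id"]))

    def create_network(self, network):
        self.calls.append(("post", network["id"]))
```

The plugin would invoke `create_network_tx` while its session is still open, then `create_network` once the commit has succeeded, so a failed remote call never leaves the DB half-updated.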
15:09:45 <Sukhdev> sorry for being late - went into the wrong meeting :-)
15:09:48 <mestery> apech_: Conceptually that makes sense.
15:10:10 <mestery> Sukhdev: No problem. :)
15:10:51 <mestery> Any other MechanismDriver questions?
15:11:20 <apech_> yeah, i guess the other one that comes up (outside of the specific API), is how to order mechanism driver calls
15:11:46 <apech_> so if you enable two mechanism drivers, do we need to be able to specify the order in which they are called? is it registration order?
15:12:20 <mestery> Registration order would be easiest I guess, and would enable the user to order things, for better or worse.
15:12:25 <rkukura> apech_: ordering is something I've been thinking about for the ml2-portbinding BP
15:12:34 <rcurran> i guess i envisioned an ini file having the list of mech_drivers ... called in that order
15:12:54 <rkukura> We want to be able to prioritize which mechanism to try first, ...
15:13:08 <mestery> rkukura: So something like what rcurran suggested here?
15:13:17 <rcurran> and the order configured in the ini isn't good enough
15:13:19 <Sukhdev> How do you decide on the priority?
15:13:47 <mestery> Sukhdev: Order in the .ini file?
15:14:01 <rkukura> probably - but I'd want to look at stevedore again to see if the same mechanism_driver list can be used once they are loaded
15:14:18 <apech_> rkukura: are you envisioning a case where only one mechanism driver handles a given call, or they all do?
15:14:32 <rkukura> only for portbinding right now
15:14:32 <apech_> I was thinking of the latter, but the way you put your statement, I wonder if you're thinking of the former
15:14:52 <rkukura> I think the CRUD ops need to go to them all, and I'm not sure if order matters for that
15:15:05 <mestery> rkukura: Agreed on both accounts there.
15:15:31 <apech_> okay great, seems like we agree there
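The agreement above (all registered mechanism drivers see every CRUD call, in the order configured in the ini file) could be sketched like this. The `MechanismManager` name and its methods are hypothetical; the real loading would go through stevedore, as rkukura notes.

```python
# Hypothetical sketch of ordered dispatch to mechanism drivers: a
# configured list (e.g. "mechanism_drivers = first,second" in an ini
# file) determines call order, and every driver sees every CRUD call.

class MechanismManager:
    def __init__(self, drivers):
        # drivers: list of (name, driver) pairs in configured order
        self.ordered = list(drivers)

    def call_on_drivers(self, method_name, *args):
        """Invoke method_name on each driver, preserving order."""
        results = []
        for name, driver in self.ordered:
            handler = getattr(driver, method_name, None)
            if handler is not None:
                results.append((name, handler(*args)))
        return results


class EchoDriver:
    """Toy driver that just returns the network id it was given."""
    def create_network(self, network):
        return network["id"]


mgr = MechanismManager([("first", EchoDriver()), ("second", EchoDriver())])
```

In the real plugin the list would be populated by stevedore from the configured names, but the call order would still come from the configuration, as rcurran suggested.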
15:15:37 <rkukura> I'm at Red Hat Summit this week, and haven't had a chance to read the proposal in detail
15:16:17 <mestery> One other question I had related to all this is, the OVS and LinuxBridge stuff is currently embedded in ML2. Is the plan to make that a MechanismDriver as well?
15:16:19 <apech_> did anyone have any feedback for me after reading the proposal? Sounds like we need to add inside vs outside method calls. If that's it, we can get this coded up pretty quickly and send it out for review
15:16:22 <rkukura> please don't take anything I say as intentionally contradicting anything in that
15:16:25 <mestery> I see cases where some plugins won't want to run with that enabled in ML2.
15:17:09 <rkukura> I've been viewing the RPCs as a plugin service that can eventually be extended by MechanismDrivers
15:17:34 <rkukura> All the RPC mixins kind of need to come from the plugin (L3, firewall, ...)
15:17:57 <mestery> rkukura: So in the case of a controller MechanismDriver which won't want to use OVS/LinuxBridge, that will be supportable then?
15:18:12 <rkukura> Since the L2 RPC is so similar for all existing L2 agents, it made sense to share this, at least initially
15:18:13 <mestery> rkukura: I'm thinking of something like OpenDaylight for example.
15:18:47 <rkukura> One goal I've had for ml2 is to support all types and mechanisms concurrently
15:19:00 <mestery> Ok, that makes sense.
15:19:12 <mestery> Anything else on MechanismDriver?
15:19:20 <rkukura> so part of a datacenter might be using a controller, while some other part is using the L2 agent(s)
15:19:25 <feleouet> One more question: when a mechanism driver is called, will the plugin use the same process? (calling _notify_port_updated for example)
15:19:31 <rkukura> and a flat network or vlan might span these
15:20:27 <apech_> feleouet: I'm not sure I followed your question, sorry
15:20:31 <rkukura> feleouet: I wouldn't think so
15:21:17 <mestery> feleouet: Can you explain your concern a bit more?
15:21:20 <rkukura> If the portbinding results in an L2 agent being used for that port, then the L2 agent will initiate the RPCs, etc. If a controller "owns" the port, the RPCs won't occur
15:21:52 <feleouet> I'm concerned about that, and joining kyle's question about making OVS & LB mechanism drivers:
15:22:37 <rkukura> The ml2-portbinding BP does talk about adding MechDrivers for each L2 agent
15:22:37 <mestery> With regards to OVS & LB, from what rkukura is indicating here, if a "controller" MechanismDriver owns the port, the RPC calls won't happen for that port. Is that right?
15:22:58 <rkukura> mestery: That's what I was trying to say
15:23:24 <mestery> rkukura: OK, I guess I'm missing how a MechanismDriver claims ownership of a port then.
15:23:30 <mestery> But I can check the code out for that part.
15:23:45 <apech_> I think this is part of what rkukura is defining for the ml2 port binding bp
15:23:45 <rkukura> no code yet for that
15:24:00 <mestery> OK, makes sense, I'll keep my eye on the ml2 portbinding BP.
15:24:14 <apech_> for the generic mechanism driver bp, I'm not imagining adding this, as I think we'll call all registered mechanism drivers for all calls
15:24:30 <apech_> (all calls = create_network, delete_network, etc)
15:24:54 <mestery> apech_: We will need to coordinate this with Bob's blueprint then to ensure it's possible to make this happen though.
15:25:02 <rkukura> apech_: Agreed - I thought it made sense to treat what you are working on and ml2-portbinding as separate BPs so we could work in parallel
15:25:21 <mestery> Sounds like we're all on the same page here then.
15:25:21 <apech_> mestery: agreed
15:25:25 <apech_> rkukura: great
15:25:36 <mestery> OK, any other MechanismDriver questions or thoughts?
15:25:50 <rkukura> ml2-portbinding will result in the binding:vif_type being correct based on which MechDriver is used, and will also allow picking which segment of multi-segment networks that agent has access to
15:26:10 <apech_> that's all I had on the mechanism driver. I'll send out an updated proposal to my email after this
15:26:20 <mestery> apech_: OK, thanks.
15:26:25 <mestery> rkukura: That makes sense as well.
15:26:31 <rcurran> want to make sure i understand this ... so if a mdriver owns a port then the rpc calls made by the ovs driver would be stifled somehow
15:26:55 <rcurran> is it ok that the other code inside of these events is still executed?
15:28:08 <mestery> rcurran: I think that's the general idea, yes, though I wouldn't expect the code to be executed in the case of the port being owned by a different MechanismDriver.
15:28:44 <mestery> Makes sense rcurran?
15:29:08 <rcurran> so no code inside of say plugin.py create_port() would be called
15:29:10 <rkukura> rcurran: Not sure stifling them is needed, but we could consider that
15:29:51 <mestery> OK, lets move on to the TypeDrivers now.
15:30:04 <rkukura> we clearly could get the MechDrivers in the loop on the RPC calls, but lets see if that turns out to be needed
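The port-ownership idea discussed above has no code yet (as rkukura says), but it could be sketched like this. Everything here is hypothetical, including the `bound_driver` field; it only illustrates the notion that a port "owned" by a controller mechanism driver would skip the L2-agent RPC fan-out.

```python
# Purely illustrative sketch: if port binding says a controller
# mechanism driver "owns" the port, the plugin skips the L2-agent RPC.
# The "bound_driver" key and this helper are invented for illustration;
# the real behavior is to be defined in the ml2-portbinding blueprint.

def notify_port_updated(port, rpc_notifier):
    """Only ports bound to an agent-based mechanism trigger the
    agent RPC notification; controller-owned ports do not."""
    if port.get("bound_driver") == "controller":
        return False  # controller owns the port; no agent RPC
    rpc_notifier(port)
    return True
```
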
15:30:25 <mestery> #topic TypeDrivers
15:30:35 <mestery> Is Rohon here?
15:30:41 <matrohon> yep
15:30:51 <mestery> matrohon: Hi!
15:31:01 <mestery> #link Early ml2-gre code here: https://github.com/mathieu-rohon/quantum/commits/bp/ml2-gre
15:31:12 <mestery> matrohon: Thanks for sharing that early code, will review it today.
15:31:24 <mestery> matrohon: Anything you wanted to share or discuss around it today?
15:31:26 <matrohon> ok
15:31:47 <matrohon> it works fine for tenant networks
15:31:56 <matrohon> needs some more testing for other use cases
15:31:57 <mestery> matrohon: Awesome!
15:32:07 <rkukura> took a quick glance, will review more closely as soon as I can
15:32:23 <mestery> I'm going to look at it to see if some of it can be leveraged for the ml2-vxlan BP too :)
15:32:36 <matrohon> i've copied a lot of code from the ovs plugin
15:33:06 <mestery> matrohon: I think that makes some sense, given that the plan for ML2 may be to deprecate the OVS+LB plugins.
15:33:21 <matrohon> but i use gre-endpoints instead of global endpoints (shared for vxlan and gre)
15:33:31 <rkukura> seems the attribute validation and tenant pooling should be very similar to the VlanTypeDriver as well
15:34:01 <mestery> matrohon: I didn't follow your gre-endpoint vs. global endpoint?
15:35:00 <matrohon> you could have used the same endpoint management code for vxlan endpoints and gre endpoints
15:35:13 <matrohon> but that's not the way i chose
15:35:17 <mestery> matrohon: OK, got it.
15:35:40 <mestery> matrohon: Let me look at your code, maybe it makes sense to combine them? The OVS+VXLAN work was just approved today in fact.
15:37:02 <matrohon> an agent may be a gre endpoint but not a vxlan endpoint
15:37:15 <feleouet> But won't the current OVS VXLAN code use the GRE TypeDriver for now?
15:37:33 <mestery> matrohon: We can add that support to your code, or I can rather, let me know.
15:37:50 <rkukura> I think mestery is adding a VxlanTypeDriver that will mirror the GreTypeDriver
15:37:57 <mestery> feleouet: Why would it use the GRE TypeDriver for now? We will need a VXLAN TypeDriver I think.
15:38:07 <mestery> rkukura: Yes, exactly.
15:38:38 <rkukura> So the RPC interface might be shared, but need to dispatch to the right TypeDriver to manage the endpoints for that tunnel type
15:38:50 <feleouet> yes, eventually, but for now how will you allocate segments if they can't overlap?
15:38:55 <matrohon> rkukura : exactly
15:39:00 <rkukura> At that point, I think we can have the L2 agents pass tunnel_type in the RPCs
15:39:23 <matrohon> rkukura : I think so, in the tunnel_sync
15:39:40 <mestery> matrohon rkukura: Exactly, that is the plan.
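The plan above (a shared RPC interface where the L2 agent passes its tunnel_type, and the plugin dispatches to the matching TypeDriver's endpoint management) could be sketched as follows. The `tunnel_sync` signature and the registry class are assumptions for illustration, not the actual agent RPC.

```python
# Illustrative sketch of tunnel_sync dispatching on the tunnel_type
# reported by the L2 agent, so one shared RPC interface can serve
# separate gre and vxlan endpoint tables. Names are hypothetical.

class EndpointRegistry:
    """Stand-in for a tunnel TypeDriver's endpoint table."""
    def __init__(self):
        self.endpoints = []

    def add_endpoint(self, ip):
        if ip not in self.endpoints:
            self.endpoints.append(ip)


def tunnel_sync(type_drivers, tunnel_ip, tunnel_type):
    """Shared RPC entry point: route the agent's endpoint registration
    to the TypeDriver matching its tunnel_type."""
    driver = type_drivers.get(tunnel_type)
    if driver is None:
        raise ValueError("unknown tunnel_type: %s" % tunnel_type)
    driver.add_endpoint(tunnel_ip)
    return list(driver.endpoints)


drivers = {"gre": EndpointRegistry(), "vxlan": EndpointRegistry()}
```

This keeps matrohon's separate gre-endpoint table while still sharing the RPC plumbing, which is the compromise the discussion converges on.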
15:40:00 <mestery> feleouet: What do you mean the segments can't overlap? You could reuse a segment ID for both GRE and VXLAN type networks.
15:40:59 <rkukura> feleouet, mestery: these are segmentation_ids right? I agree they should be independently managed by ml2 TypeDrivers.
15:41:48 <mestery> rkukura feleouet: Yes, that's my thinking as well. No reason they can't overlap, and each TypeDriver can manage their own segment IDs.
15:42:08 <mestery> Would be configurable in the ini files per TypeDriver I believe.
15:42:19 <feleouet> yes, agree they have to be managed independently.
15:42:39 <matrohon> the overlapping will be hard to manage in the ovs agent
15:42:53 <matrohon> since flows are managed by tunnel_id
15:43:10 <mestery> matrohon: We'll need to make changes to the agent, yes, this was discussed during the review of my OVS+VXLAN patch.
15:43:30 <mestery> The idea is to change from tunnel_id to something like the partial mesh patch implements.
15:43:38 <feleouet> My remark was concerning the current VXLAN code, which doesn't support segmentation_id overlap
15:44:05 <matrohon> mestery : great
15:44:12 <feleouet> this is why i was thinking it would use the GRE typedriver until we use per-port flow rules in agents
15:44:18 <mestery> feleouet: The current code only supports VXLAN OR GRE tunnels, not both at the same time.
15:44:26 <mestery> feleouet: Got it, now I understand.
15:44:43 <mestery> I'll make changes to the agent side to support both at the same time next.
15:44:54 <mestery> So we can manage IDs using the two TypeDrivers separately.
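The consensus above (each TypeDriver manages its own segmentation_id space, so the same ID can exist for both gre and vxlan without clashing) could be sketched like this. The class and its methods are invented for illustration; the point is that keying allocations by the (network_type, segmentation_id) pair makes the overlap harmless.

```python
# Sketch of independently managed segmentation_id spaces: each tunnel
# TypeDriver owns its own range (configurable per driver in the ini
# files), and the globally unique key is (network_type, segmentation_id).
# Names here are illustrative, not the real ml2 code.

class TunnelTypeDriver:
    def __init__(self, network_type, id_range):
        self.network_type = network_type
        self.available = set(id_range)
        self.allocated = set()

    def allocate(self):
        """Hand out the lowest free ID in this driver's own space."""
        seg_id = min(self.available)
        self.available.discard(seg_id)
        self.allocated.add(seg_id)
        return (self.network_type, seg_id)


gre = TunnelTypeDriver("gre", range(1, 1001))
vxlan = TunnelTypeDriver("vxlan", range(1, 1001))
```

As matrohon points out, the harder part is the agent side, where flows are currently keyed by tunnel_id alone; that is what the per-port flow-rule changes are meant to address.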
15:45:01 <matrohon> actually we think that partial mesh stuff could be handled in the l2 population BP
15:45:22 <mestery> matrohon: That makes sense to me.
15:45:25 <rkukura> So the openvswitch plugin will remain one-or-the-other, while ml2 supports multiple types concurrently
15:45:37 <mestery> rkukura: Exactly.
15:46:42 <mestery> OK, so I think we're making good progress here on the TypeDrivers it seems.
15:46:46 <mestery> And coming to consensus on things.
15:46:55 <mestery> Anything else on TypeDrivers?
15:48:06 <mestery> #topic General Questions
15:48:32 <mestery> So the list of blueprints on ML2 is on the wiki.
15:48:44 <mestery> #link ML2 blueprints link: https://wiki.openstack.org/wiki/Meetings/ML2
15:49:11 <mestery> apech_: Getting the MechanismDriver out for review this weekend would be great considering its importance.
15:49:43 <apech_> yeah, agreed. Should be able to do that
15:49:45 <mestery> Once that and rkukura's portbinding BP are in, people can start writing MechanismDrivers.
15:50:00 <mestery> A question for people: We have an OpenDaylight plugin mostly working now.
15:50:14 <mestery> What are people's thoughts on making that the first Open Source ML2 MechanismDriver in the controller model?
15:51:03 <apech_> fine by me - we have to convert something in the controller model to make sure we have the right model
15:51:18 <apech_> (too many uses of model, but you hopefully get what I mean)
15:51:40 <mestery> apech_: My thoughts exactly, and we'll want an Open Source Controller MechanismDriver as well, which was my thinking here.
15:51:45 <mestery> apech_: Yes. :)
15:51:52 <mestery> OK, any other questions or thoughts by anyone?
15:51:57 <mestery> If not, we'll end the meeting.
15:52:05 <rcurran> i'm good
15:52:15 <apech_> yup, gotta run anyway
15:52:22 <apech_> thanks all
15:52:23 <rkukura> mestery: +1
15:52:31 <mestery> Ok, thanks everyone!
15:52:35 <Sukhdev> good meeting
15:52:50 <mestery> Reminder: Next week's meeting is at 1400 UTC Wednesday on #openstack-meeting.
15:52:51 <mestery> #endmeeting