14:00:26 <mestery> #startmeeting networking_ml2
14:00:27 <openstack> Meeting started Wed Jun 26 14:00:26 2013 UTC.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:30 <openstack> The meeting name has been set to 'networking_ml2'
14:00:44 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:01:11 <mestery> #topic MechanismDriver
14:01:17 <mestery> apech: Here?
14:01:23 <apech> yup
14:01:30 <mestery> #link https://review.openstack.org/33201 MechanismDriver review
14:01:35 <rkukura> good morning!
14:01:42 <mestery> apech: Great! Can you provide an update on MechanismDriver?
14:02:10 <mestery> rkukura: Good morning to you too!
14:02:21 <apech> Sure. I posted a new review on Sunday. I hope that I've addressed everyone's comments. rcurran took a look through, am hoping to get a few more eyeballs on it
14:02:39 <apech> It does not yet have subnet support, and unit testing could be better
14:02:45 <rkukura> I'll do a detailed review today or tomorrow at the latest.
14:02:46 <apech> but hopefully it's converging to a more stable point
14:02:57 <mestery> apech: It's on my list, I'll do a review tomorrow as I'm on PTO for the rest of the day after this meeting.
14:02:58 <apech> rkukura: that'd be great, thanks!
14:03:45 <mestery> rcurran had sent an email with some issues that may be of interest for MechanismDrivers, around things like DBs, etc.
14:03:53 <mestery> rcurran: Want to discuss that now quickly?
14:04:16 <rcurran> in creating the ml2 version of cisco nexus
14:04:35 <rcurran> i'm coming across code in the cisco version that may need to be added to ml2 common code
14:04:48 <rcurran> db, exceptions, xml snippets, etc
14:05:08 <rcurran> was curious if arista guys are coming across the same issues
14:05:32 <rkukura> are these common across different ml2 drivers, or more widely?
14:05:48 <rcurran> or more restrictive
14:06:01 <rcurran> i.e. may only apply to cisco
14:06:06 <rcurran> but can be made common
14:06:08 <rkukura> do you mean common across a subset of ml2 drivers?
14:06:13 <rcurran> right
14:06:52 <apech> rcurran: Sukhdev isn't on, he'd be a better person to comment on the Arista drivers. My guess is that they'll be common
14:06:56 <rkukura> have we decided on whether sets of related drivers (such as cisco) should be in a subdir of ml2/drivers?
14:06:58 <apech> setting up the separate directories seems right
14:07:11 <rcurran> or maybe, i should say ... i don't know anything about Arista's implementation of their plugins
14:07:14 <apech> rkukura: I thought we decided to put it all under drivers
14:07:22 <mestery> rkukura: I think the subdir makes sense for sets of common drivers.
14:07:36 <rkukura> I have no objection to a subdir for a closely-related set of drivers
14:07:44 <apech> yeah, subdir under drivers seems fine
14:07:59 <rkukura> type drivers should generally go at the top level since they are intended to apply across mechanisms
14:08:33 <rkukura> one goal would be to keep packaging easy - so the entire subdir can be packaged separately if needed
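For context, a plausible layout under the convention just agreed on, with type drivers at the top level and a closely-related vendor set in its own separately-packageable subdirectory. The file names here are illustrative, not the actual tree:

    plugins/ml2/drivers/
        type_vlan.py        # type drivers at the top level, applying
        type_gre.py         # across mechanisms
        mech_arista.py      # a standalone mechanism driver
        cisco/              # a closely-related set, packageable on its own
            __init__.py
            mech_nexus.py
            nexus_db.py        # driver-local db/exception/snippet helpers
            nexus_snippets.py  # until common pieces are factored out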
14:08:52 <apech> rcurran: to your original question, if you send us the files you had in mind, we can review and help provide feedback
14:09:19 <apech> but my guess is that we will end up with some common infrastructure for all mechanism drivers
14:09:41 <mestery> apech: Agreed, there is some common code there for sure.
14:09:52 <rcurran> ok, i'm still porting over cisco_nexus, when i get closer to being done i'll send out more detailed info on what i think can possibly be made common
14:09:54 <rkukura> anything shared with other plugins or agents should probably be in quantum/common, right?
14:10:12 <mestery> #action rcurran to send email or review notes for details on common code
14:10:22 <rkukura> great
14:10:51 <mestery> OK, anything else to discuss on MechanismDriver?
14:11:22 <mestery> #topic ml2-portbinding
14:11:30 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ml2-portbinding blueprint
14:11:38 <mestery> rkukura: portbinding update?
14:11:43 <rkukura> nothing new on portbinding - should start coding today or tomorrow
14:11:59 <rkukura> looks like other plugins are starting to address this as well
14:12:04 <mestery> rkukura: OK, cool! I think that will be the next piece that MechanismDriver writers will be waiting for.
14:12:10 <apech> rkukura: do you have a sense of when you'll have an initial version?
14:12:29 <rkukura> I'll shoot for Monday
14:12:41 <apech> rkukura: that'd be awesome, thanks!
14:12:44 <rkukura> Can we get the initial MechanismDrivers merged by then?
14:12:47 <mestery> rkukura: cool!
14:13:09 <mestery> rkukura: You mean the arista and cisco mech drivers, or the blueprint apech is working on?
14:13:20 <rkukura> the BP all these depend on
14:13:21 <apech> I would love to - appreciate the review
14:13:31 <apech> and I'll focus on beefing up the testing between now and then
14:13:40 <rkukura> so a question was raised about putting the nova instance name on the port
14:13:54 <apech> yeah - that's something we'd be interested in on the Arista side
14:13:56 <mestery> #action ML2 subteam to try to get MechanismDriver blueprint merged before Monday 7-1-2013.
14:14:04 <rkukura> or at least having ml2 make it available to the driver
14:14:28 <mestery> instance name on the port would be interesting, can we make that happen?
14:14:39 <rkukura> I was thinking the device_id might be sufficient, since a driver can use it to find the nova instance and get its name
14:15:06 <apech> would this just be using the nova api calls?
14:15:06 <rkukura> I'm not sure there are any other cases where ml2 code would be making nova API calls
14:15:45 <apech> This is what I wasn't sure about - is this ok, or should we be passing that information in as part of the call nova makes to quantum
14:15:53 <rkukura> Seems there are two ways to get the nova info - for nova to set it on port (like host_id), or neutron to use the nova API
14:15:53 <mestery> rkukura: If multiple MechanismDrivers would find this useful, it makes sense to do this in a generic spot in the code rather than per-driver.
14:15:57 <apech> if making nova api calls is okay, that's an easier thing to add for now
14:16:06 <rkukura> not sure about nova api calls being ok
14:16:21 <rkukura> do we do that anywhere else in neutron?
14:16:38 <mestery> rkukura: Having nova set it on the port seems to be the cleanest way for this to happen.
14:16:53 <rkukura> if we decide it is OK, then the context object could do it the first time a driver asks, and cache the result
14:17:22 <rkukura> it's taking forever just to get nova to set the host_id
14:17:39 <apech> that was exactly my thinking - this seems like a very similar patch to getting the host_id
14:17:42 <rkukura> certainly having nova also set the instance_name on port would be trivial
14:17:44 <Sukhdev_> Sorry - I joined late -
14:18:03 <mestery> rkukura: That seems like a process problem, not a technical one from what I can tell.
14:18:06 <rkukura> so what is the use case for a mech driver needing the instance name?
14:18:41 <apech> visibility - we'd like to be able to map how VMs map to physical ports
14:18:46 <mestery> apech: +1
14:18:57 <doude_> the instance name can be updated so we must be careful
14:18:58 <mestery> And it's nice to have that VM name for display purposes.
14:19:07 <apech> yeah, device_id is a bit unruly :)
14:19:23 <mestery> doude_: Yes, any caching on the Neutron side would need to take that into account.
14:19:41 <rkukura> so wouldn't a display tool be able to use the device_id with the nova API to get the instance name, and any other info it also needed?
14:19:47 <Sukhdev_> rkukura: if you display 100s of VMs it is a pain to see them by long IDs :-)
14:20:11 <mestery> rkukura: That's possible, but that means the display tool needs to talk Nova API. Sometimes that's not desirable.
14:20:20 <apech> mestery: +1
14:20:30 <mestery> So, how do we proceed here?
14:21:05 <rkukura> My suggestion is to let the current host_id patch get merged, then add instance_name as a followup nova patch.
14:21:24 <apech> rkukura: that sounds like a good plan
14:21:27 <mestery> rkukura: OK, I can file a bug for the instance_name as well.
14:21:32 <mestery> rkukura: I can work on that patch as well.
14:21:36 <rkukura> I guess that would also mean extending the portbinding extension, but that should be OK
14:21:36 <apech> mestery: thanks!
14:21:49 <mestery> #action mestery to file bug to get instance_name passed in from nova following the host_id patch
14:22:05 <rkukura> as a fallback plan, we could have the ml2 context object use the nova API if needed
14:22:16 <mestery> rkukura: Sounds like a plan.
14:22:28 <rkukura> ok
14:22:36 <rkukura> that's it for portbinding for now then
14:22:40 <mestery> #action If nova patch to add instance_name is rejected, have ml2 context object use the nova API instead.
14:22:43 <mestery> rkukura: thanks!
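A minimal sketch of the fallback rkukura describes, assuming a hypothetical PortContext class and placeholder nova credentials; the real ml2 context API may differ. The driver asks once, and the context resolves the name from the port's device_id via the nova API and caches it. Per doude_'s point, a longer-lived cache would need invalidation, since instance names can change:

    from novaclient.v1_1 import client as nova_client

    # placeholder credentials, not a real deployment config
    NOVA_USER, NOVA_PASS = 'user', 'pass'
    TENANT, AUTH_URL = 'tenant', 'http://keystone:5000/v2.0'

    class PortContext(object):
        """Illustrative stand-in for the ml2 port context object."""

        def __init__(self, port):
            self._port = port
            self._instance_name = None  # resolved lazily, then cached

        @property
        def instance_name(self):
            if self._instance_name is None:
                # device_id holds the nova instance UUID for compute ports
                nova = nova_client.Client(NOVA_USER, NOVA_PASS,
                                          TENANT, AUTH_URL)
                self._instance_name = nova.servers.get(
                    self._port['device_id']).name
            return self._instance_name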
14:22:50 <mestery> #topic ml2-gre
14:22:57 <mestery> #link https://review.openstack.org/33297 ml2-gre review
14:23:04 <matrohon> hi
14:23:05 <mestery> matrohon: hi!
14:23:14 <mestery> matrohon: Updates on ML2 GRE?
14:23:24 <matrohon> I just proposed two new patches today
14:23:35 <mestery> I saw that, I will review that tomorrow.
14:23:37 <matrohon> the first one adds unit tests
14:23:50 <matrohon> the second one removes the endpoint ID usage
14:24:04 <matrohon> as proposed by garyk in a review
14:24:16 <mestery> great progress!
14:24:19 <HenryG> Is it a separate review?
14:24:19 <rkukura> I'll review them both this week
14:24:41 <matrohon> no it's the same
14:24:42 <mestery> matrohon: I've started the vxlan type driver based on your patch since you implemented the RPC callbacks I'll need for that one as well.
14:25:01 <mestery> thanks for that matrohon. :)
14:25:03 <matrohon> but we definitely think that endpoint id doesn't make sense
14:25:04 <rkukura> correction, I'll review it this week;-)
14:25:54 <mestery> Any questions on ML2 GRE?
14:26:40 <mestery> #topic ml2-vxlan
14:26:46 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-vxlan ml2-vxlan blueprint
14:27:12 <mestery> So I've started on this now, utilizing matrohon's ml2-gre patch as a base.
14:27:25 <mestery> Shooting for Friday for an initial review.
14:27:36 <matrohon> mestery : great!
14:27:55 <mestery> matrohon: Thanks for all the RPC work in your patch, this made mine smaller than yours. :)
14:28:11 <matrohon> you will manage multicast?
14:28:32 <mestery> matrohon: That's the plan, I'm going to add in the option of configuring multicast as well.
14:28:52 <matrohon> mestery : ok, great!
14:29:03 <mestery> matrohon: Will appreciate your review once I push the review. :)
14:29:14 <matrohon> mestery : np
14:29:24 <mestery> Any more queries on ML2 VXLAN?
14:29:30 <feleouet> question regarding multicast:
14:30:06 <feleouet> shouldn't the option be set in agents
14:30:20 <feleouet> and plugin support implemented as a provider extension?
14:31:04 <mestery> feleouet: That may be the case, but if you're using multicast, you will end up mapping VNIs to multicast groups, so the type driver DB needs to store that info.
14:31:17 <mestery> The agents will determine if multicast VXLAN is supported or not I guess.
14:31:37 <mestery> E.g. the current OVS VXLAN code doesn't support it, but likely will soon, while the LinuxBridge agent (with Tomas's patch) would support it.
14:32:32 <feleouet> ok, makes sense
14:32:49 <mestery> feleouet: Cool!
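A sketch of how the type driver DB might record the VNI-to-multicast-group mapping mestery describes, using plain SQLAlchemy; the table and column names are assumptions, not the eventual ml2 schema:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class VxlanAllocation(Base):
        """One row per VNI; the multicast group is set when configured."""
        __tablename__ = 'ml2_vxlan_allocations'  # illustrative name

        vxlan_vni = sa.Column(sa.Integer, primary_key=True,
                              autoincrement=False)
        multicast_group = sa.Column(sa.String(64), nullable=True)
        allocated = sa.Column(sa.Boolean, nullable=False, default=False)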
14:33:03 <mestery> #topic Bugs and other blueprints
14:33:16 <mestery> Wanted to call attention to the "tunnel_types" OVS agent bug
14:33:22 <mestery> #link https://review.openstack.org/#/c/33107/ tunnel_types review
14:33:38 <mestery> matrohon: I got your latest feedback, will see if I can implement that tonight/tomorrow.
14:33:51 <rkukura> looks about ready to merge to me, but wanted to test in devstack before giving my +2
14:33:59 <mestery> rkukura: Thanks!
14:34:14 <matrohon> mestery : ok it's quite tricky
14:34:16 <mestery> matrohon: One question I had was, we could file a separate bug to address your comment and fix that separately.
14:34:30 <mestery> matrohon: Exactly, which is why maybe a different bug makes sense since this one is ready to merge.
14:34:37 <matrohon> mestery : it makes sense to me
14:34:38 <mestery> matrohon: Would you be ok with that?
14:34:42 <mestery> OK, great!
14:34:53 <mestery> #action mestery to file new bug to address matrohon's concerns on the tunnel_type fix
14:34:57 <matrohon> I can manage that bug if you want
14:35:04 <mestery> matrohon: Cool!
14:35:05 <rkukura> hadn't seen matrohon's comment though
14:35:08 <mestery> I'll assign it to you.
14:35:29 <mestery> rkukura: Can you review what's there now, we'll do a different bug to address matrohon's comment.
14:35:39 <mestery> matrohon: Once I file the bug, can you remove your -1? :)
14:35:44 <rkukura> mestery: I had one code nit if you plan to update, but not critical
14:35:49 <matrohon> mestery : ok
14:36:11 <mestery> rkukura: File your nit, and I can update the patch late tonight or tomorrow morning, up to you.
14:36:26 <rkukura> ok
14:36:50 <mestery> OK, wanted to bring everyone's attention to matrohon's L2 population blueprint as well.
14:36:56 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/l2-population L2 population blueprint
14:37:06 <mestery> matrohon: Is this something we should get in under the context of ML2?
14:37:37 <matrohon> mestery : we target it for havana3
14:37:55 <mestery> Sorry, this is under feleouet's name, not matrohon's. :)
14:38:00 <rkukura> is the idea to add the support to the agent, but only when used with ml2?
14:38:00 <mestery> matrohon: OK, fair enough.
14:38:25 <matrohon> rkukura : exactly
14:39:02 <rkukura> and keep backward compatibility with openvswitch plugin?
14:39:25 <matrohon> rkukura : this will be based on new RPC callbacks
14:39:32 <rkukura> makes sense to me
14:39:46 <mestery> Yes, this looks great matrohon.
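Since the blueprint was still being drafted at this point, the following is only a guess at the shape of the data the new RPC callbacks would carry: forwarding entries keyed by network and by the tunnel endpoint of the agent hosting each port, so peer agents can pre-populate their forwarding tables instead of flooding. The structure and all names are assumptions:

    # hypothetical fdb payload fanned out to L2 agents when a port goes up
    fdb_entries = {
        'net-uuid': {
            'network_type': 'gre',
            'segment_id': 5000,  # tunnel key (GRE key or VXLAN VNI)
            'ports': {
                # agent tunnel IP -> list of (mac, ip) pairs behind it
                '192.0.2.10': [['fa:16:3e:aa:bb:cc', '10.0.0.3']],
            },
        },
    }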
14:40:02 <mestery> OK, did I miss any other Ml2 related bugs or blueprints?
14:40:37 <mestery> BTW: If any new bugs or blueprints come up that you don't see on the agenda, please unicast me and I will add them. I'll do my best to keep on top of it so we can discuss urgent ones in the weekly meeting though.
14:41:26 <mestery> #topic Ported Mechanism Drivers
14:41:32 <rkukura> there may be some recent reviews and/or merges to openvswitch that need to be duplicated in ml2
14:41:32 <feleouet> One last bug concerning OVS agent
14:41:59 <feleouet> sorry, may I continue?
14:42:01 <mestery> rkukura: OK, good point, let's track those and file ML2 bugs to ensure we get the work in as well.
14:42:05 <mestery> feleouet: yes
14:42:06 <mestery> please
14:42:14 <feleouet> #link https://bugs.launchpad.net/quantum/+bug/1177973
14:42:28 <feleouet> I'll try to propose a first WIP for this one today
14:42:44 <feleouet> Using event-based detection rather than polling
14:42:59 <mestery> feleouet: Great! I'll add this to track for Ml2 as well, thanks!
14:43:02 <rkukura> polling?
14:43:32 <feleouet> Replacing the agent loop that polls for new devices
14:43:42 <rkukura> excellent!
14:43:44 <feleouet> (And is very resource consuming)
14:43:51 <mestery> feleouet: +1
14:44:36 <feleouet> It'd be great if you could give me initial feedback, then I'll work on UT
14:44:58 <feleouet> (depending on feedback, of course)
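A rough sketch of the event-based approach feleouet describes, assuming ovsdb-client's monitor command is available on the host; the actual patch may hook in differently. Instead of rescanning all ports every loop iteration, the agent blocks until the OVSDB Interface table actually changes:

    import subprocess

    def interface_events():
        """Yield a line per OVSDB Interface change instead of polling."""
        cmd = ['ovsdb-client', 'monitor', 'Interface',
               'name,ofport,external_ids', '--format=json']
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        for line in iter(proc.stdout.readline, b''):
            yield line.strip()  # one JSON row per insert/delete/update

    # the agent's rpc_loop would consume this generator and rescan only
    # when an event arrives, rather than sleeping and re-listing all ports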
14:44:58 <rkukura> One other review we should look at
14:45:02 <mestery> feleouet: Will keep an eye out for the patch.
14:45:27 <feleouet> mestery: thanks
14:45:46 <rkukura> https://review.openstack.org/#/c/33736/ seems to be related to our eventual multi-segment network API
14:46:31 <mestery> rkukura: I agree, this seems relevant to the multi-segment network API.
14:46:33 <rkukura> but does not seem as usable as what I was envisioning
14:46:44 <mestery> rkukura: I agree.
14:46:57 <rkukura> also seems to expose nicira's transport zones, which wouldn't apply
14:47:25 <mestery> rkukura: Can you provide feedback along these lines on the review itself, to alert them to our overlapping ML2 work and your concerns?
14:48:18 <rkukura> Does it make sense to clarify that their extension is nicira-specific?
14:48:23 <mestery> #action rkukura to provide ML2 related feedback on review https://review.openstack.org/#/c/33736/
14:48:27 <rkukura> ok
14:48:37 <mestery> rkukura: Yes, it does, I think pointing that out is the way to go.
14:48:57 <mestery> I don't think we should be exposing vendor-specific extensions in the core extension APIs; they can implement it as a plugin-specific extension API instead.
14:50:05 <mestery> OK, we have about 10 minutes left, how about if we quickly get an update for each of the 4 known MechanismDriver implementations out there?
14:50:17 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/sukhdev-8 Arista Mechanism Driver Blueprint
14:50:25 <mestery> Sukhdev_: Hi!
14:50:46 <Sukhdev_> Hi
14:50:53 <Sukhdev_> sorry was distracted
14:51:05 <mestery> Sukhdev_: No worries, any updates on Arista MD?
14:51:36 <Sukhdev_> I am stuck on testing because of the port binding issue - other than that I am good
14:51:55 <mestery> Sukhdev_: Thanks!
14:52:08 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-md-cisco-nexus Cisco Nexus Mechanism Driver
14:52:14 <mestery> rcurran: Cisco Nexus MD updates?
14:52:46 <rcurran> working on it. Have ported over most of the code. (Except common code already mentioned in this meeting)
14:53:04 <mestery> rcurran: Cool, no road blocks so far?
14:53:27 <rcurran> no, i have plenty to work on ... coding to andre's bp that should go in soon
14:53:35 <mestery> rcurran: Thanks!
14:53:42 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-opendaylight-mechanism-driver OpenDaylight Mechanism Driver
14:53:50 <Sukhdev_> mestery: are you still planning on putting out the wiki for ML2?
14:54:14 <mestery> Sukhdev_: I had thought I did that, let me check for that after the meeting and send a note.
14:54:25 <mestery> If I didn't, I'll make the wiki page and send out a note to openstack-dev
14:54:52 <apech> mestery: thanks
14:54:54 <mestery> So, the ODL MD update is that it's complicated. :)
14:54:55 <Sukhdev_> mestery: thanks
14:55:28 <mestery> We'll begin porting the code we had into a MD this coming week, but the broader question of how ODL integrates with Neutron is being worked on over in the ODL project.
14:55:47 <mestery> So the final result may look slightly different than what we have so far.
14:56:17 <rkukura> mestery: Is the plan from ODL to have both a monolithic plugin and ml2 MD support?
14:56:35 <mestery> rkukura: That's part of the complication. I would prefer only an ML2 MD one.
14:57:10 <mestery> I think after the ODL HackFest in July a more concrete plan will be known, though I realize this will affect Havana integration for ODL.
14:57:18 <rkukura> mestery: Running into anything difficult to do with ml2 as currently defined?
14:57:46 <mestery> rkukura: No, the ML2 side looks fine, it's more ODL issues and deficiencies we're hitting.
14:58:10 <mestery> The final MD for ML2 is the Tail-f NCS one
14:58:20 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/tailf-ncs Tail-f NCS Mechanism Driver
14:58:32 <mestery> I don't know Luke Gorrie's IRC nick though.
14:58:37 <mestery> rkukura: Do you?
14:58:46 <rkukura> mestery: no
14:59:02 <mestery> OK, I don't see an update, just calling it out for consistency here then.
14:59:05 <mestery> #topic Questions?
14:59:12 <mestery> OK, that's all for the agenda.
14:59:29 <mestery> Let's keep up on reviews for the next week as much as possible!
14:59:43 <apech> sounds great, thanks all
14:59:52 <rcurran> thanks
14:59:55 <mestery> #endmeeting