14:00:26 #startmeeting networking_ml2
14:00:27 Meeting started Wed Jun 26 14:00:26 2013 UTC. The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:30 The meeting name has been set to 'networking_ml2'
14:00:44 #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:01:11 #topic MechanismDriver
14:01:17 apech: Here?
14:01:23 yup
14:01:30 #link https://review.openstack.org/33201 MechanismDriver review
14:01:35 good morning!
14:01:42 apech: Great! Can you provide an update on MechanismDriver?
14:02:10 rkukura: Good morning to you too!
14:02:21 Sure. I posted a new review on Sunday. I hope that I've addressed everyone's comments. rcurran took a look through, am hoping to get a few more eyeballs on it
14:02:39 It does not yet have subnet support, and unit testing could be better
14:02:45 I'll do a detailed review today or tomorrow at the latest.
14:02:46 but hopefully it's converging to a more stable point
14:02:57 apech: It's on my list, I'll do a review tomorrow as I'm on PTO the rest of the day after this meeting.
14:02:58 rkukura: that'd be great, thanks!
14:03:45 rcurran had sent an email with some issues which may be interesting to the MechanismDriver around things like DBs, etc.
14:03:53 rcurran: Want to discuss that now quickly?
14:04:16 in creating the ml2 version of cisco nexus
14:04:35 i'm coming across code in the cisco version that may need to be added to ml2 common code
14:04:48 db, exceptions, xml snippets, etc
14:05:08 was curious if the arista guys are coming across the same issues
14:05:32 are these common across different ml2 drivers, or more widely?
14:05:48 or more restrictive
14:06:01 i.e. may only apply to cisco
14:06:06 but can be made common
14:06:08 do you mean common across a subset of ml2 drivers?
14:06:13 right
14:06:52 rcurran: Sukhdev isn't on, he'd be a better person to comment on the Arista drivers.
My guess is that they'll be common
14:06:56 have we decided on whether sets of related drivers (such as cisco) should be in a subdir of ml2/drivers?
14:06:58 setting up the separate directories seems right
14:07:11 or maybe, i should say ... i don't know anything about arista's implementation of their plugins
14:07:14 rkukura: I thought we decided to put it all under drivers
14:07:22 rkukura: I think the subdir makes sense for sets of common drivers.
14:07:36 I have no objection to a subdir for a closely-related set of drivers
14:07:44 yeah, subdir under drivers seems fine
14:07:59 type drivers should generally go at the top level since they are intended to apply across mechanisms
14:08:33 one goal would be to keep packaging easy - so the entire subdir can be packaged separately if needed
14:08:52 rcurran: to your original question, if you send us the files you had in mind, we can review and help provide feedback
14:09:19 but my guess is that we will end up with some common infrastructure for all mechanism drivers
14:09:41 apech: Agreed, there is some common code there for sure.
14:09:52 ok, i'm still porting over cisco_nexus, when i get closer to being done i'll send out more detailed info on what i think can possibly be made common
14:09:54 anything shared with other plugins or agents should probably be in quantum/common, right?
14:10:12 #action rcurran to send email or review notes for details on common code
14:10:22 great
14:10:51 OK, anything else to discuss on MechanismDriver?
14:11:22 #topic ml2-portbinding
14:11:30 #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ml2-portbinding blueprint
14:11:38 rkukura: portbinding update?
14:11:43 nothing new on portbinding - should start coding today or tomorrow
14:11:59 looks like other plugins are starting to address this as well
14:12:04 rkukura: OK, cool! I think that will be the next piece that MechanismDriver writers will be waiting for.
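[Editor's sketch] To give readers outside the meeting a feel for what the vendor mechanism drivers under discussion plug into, here is a minimal, purely hypothetical no-op driver. The class and method names are illustrative assumptions, not the API from the review under discussion:

```python
# Hypothetical sketch of an ML2 mechanism driver shape; names are
# illustrative assumptions, not the actual API under review.

class NoopMechanismDriver(object):
    """Does nothing; shows where a vendor driver would hook in."""

    def initialize(self):
        # One-time setup: load driver config, connect to a backend
        # controller, register any DB models the driver needs, etc.
        pass

    def create_port_precommit(self, context):
        # Hypothetically called inside the plugin's DB transaction;
        # raising here would abort the port creation.
        pass

    def create_port_postcommit(self, context):
        # Hypothetically called after the transaction commits; this is
        # where a driver would push the new port to its backend.
        pass
```

Vendor-specific drivers (cisco nexus, arista) would subclass or mirror this shape, which is why common pieces like DB helpers and exceptions keep surfacing across them.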
14:12:10 rkukura: do you have a sense of when you'll have an initial version?
14:12:29 I'll shoot for Monday
14:12:41 rkukura: that'd be awesome, thanks!
14:12:44 Can we get the initial MechanismDrivers merged by then?
14:12:47 rkukura: cool!
14:13:09 rkukura: You mean the arista and cisco mech drivers, or the blueprint apech is working on?
14:13:20 the BP all these depend on
14:13:21 I would love to - appreciate the review
14:13:31 and I'll focus on beefing up the testing between now and then
14:13:40 so a question was raised about putting the nova instance name on the port
14:13:54 yeah - that's something we'd be interested in on the Arista side
14:13:56 #action ML2 subteam to try to get MechanismDriver blueprint merged before Monday 7-1-2013.
14:14:04 or at least having ml2 make it available to the driver
14:14:28 instance name on the port would be interesting, can we make that happen?
14:14:39 I was thinking the device_id might be sufficient, since a driver can use it to find the nova instance and get its name
14:15:06 would this just be using the nova api calls?
14:15:06 I'm not sure there are any other cases where ml2 code would be making nova API calls
14:15:45 This is what I wasn't sure about - is this ok, or should we be passing that information in as part of the call nova makes to quantum
14:15:53 Seems there are two ways to get the nova info - for nova to set it on the port (like host_id), or for neutron to use the nova API
14:15:53 rkukura: If multiple MechanismDrivers would find this useful, it makes sense to do this in a generic spot in the code rather than per-driver.
14:15:57 if making nova api calls is okay, that's an easier thing to add for now
14:16:06 not sure about nova api calls being ok
14:16:21 do we do that anywhere else in neutron?
14:16:38 rkukura: Having nova set it on the port seems to be the cleanest way for this to happen.
14:16:53 if we decide it is OK, then the context object could do it the first time a driver asks, and cache the result
14:17:22 it's taking forever just to get nova to set the host_id
14:17:39 that was exactly my thinking - this seems like a very similar patch to getting the host_id
14:17:42 certainly having nova also set the instance_name on the port would be trivial
14:17:44 Sorry - I joined late -
14:18:03 rkukura: That seems like a process problem, not a technical one from what I can tell.
14:18:06 so what is the use case for a mech driver needing the instance name?
14:18:41 visibility - we'd like to be able to map how VMs map to physical ports
14:18:46 apech: +1
14:18:57 the instance name can be updated so we must be careful
14:18:58 And it's nice to have that VM name for display purposes.
14:19:07 yeah, device_id is a bit unruly :)
14:19:23 doude_: Yes, any caching on the Neutron side would need to take that into account.
14:19:41 so wouldn't a display tool be able to use the device_id with the nova API to get the instance name, and any other info it also needed?
14:19:47 rkukura: if you display 100's of VMs it is a pain to see them by long IDs :-)
14:20:11 rkukura: That's possible, but that means the display tool needs to talk the Nova API. Sometimes that's not desirable.
14:20:20 mestery: +1
14:20:30 So, how do we proceed here?
14:21:05 My suggestion is to let the current host_id patch get merged, then add instance_name as a followup nova patch.
14:21:24 rkukura: that sounds like a good plan
14:21:27 rkukura: OK, I can file a bug for the instance_name as well.
14:21:32 rkukura: I can work on that patch as well.
14:21:36 I guess that would also mean extending the portbinding extension, but that should be OK
14:21:36 mestery: thanks!
14:21:49 #action mestery to file bug to get instance_name passed in from nova following the host_id patch
14:22:05 as a fallback plan, we could have the ml2 context object use the nova API if needed
14:22:16 rkukura: Sounds like a plan.
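[Editor's sketch] The fallback plan discussed above (the ml2 context object fetching the instance name via the nova API the first time a driver asks, then caching it) could look roughly like the following. `PortContext` and the client shape are illustrative assumptions, not real ML2 or novaclient code:

```python
# Hypothetical sketch of lazy, cached instance-name lookup via device_id,
# as discussed as the fallback plan. Names are illustrative assumptions.

class PortContext(object):
    """Illustrative port context that resolves the instance name on demand."""

    def __init__(self, port, nova_client):
        self._port = port            # dict with at least a 'device_id' key
        self._nova = nova_client     # any object exposing servers.get(id)
        self._instance_name = None   # cached after the first lookup

    @property
    def instance_name(self):
        # Look the name up once via the port's device_id, then cache it
        # ("do it the first time a driver asks, and cache the result").
        # Note the caveat from the meeting: instance names can change,
        # so a real implementation would need cache invalidation.
        if self._instance_name is None:
            server = self._nova.servers.get(self._port['device_id'])
            self._instance_name = server.name
        return self._instance_name
```

This keeps the nova API call out of individual mechanism drivers, per the point that the lookup belongs in a generic spot rather than per-driver.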
14:22:28 ok
14:22:36 that's it for portbinding for now then
14:22:40 #action If nova patch to add instance_name is rejected, have ml2 context object use the nova API instead.
14:22:43 rkukura: thanks!
14:22:50 #topic ml2-gre
14:22:57 #link https://review.openstack.org/33297 ml2-gre review
14:23:04 hi
14:23:05 matrohon: hi!
14:23:14 matrohon: Updates on ML2 GRE?
14:23:24 I just proposed two new patches today
14:23:35 I saw that, I will review that tomorrow.
14:23:37 the first one adds unit tests
14:23:50 the second one removes the endpoint ID usage
14:24:04 as proposed by garyk in a review
14:24:16 great progress!
14:24:19 Is it a separate review?
14:24:19 I'll review them both this week
14:24:41 no it's the same
14:24:42 matrohon: I've started the vxlan type driver based on your patch since you implemented the RPC callbacks I'll need for that one as well.
14:25:01 thanks for that matrohon. :)
14:25:03 but we definitely think that endpoint id doesn't make sense
14:25:04 correction, I'll review it this week ;-)
14:25:54 Any questions on ML2 GRE?
14:26:40 #topic ml2-vxlan
14:26:46 #link https://blueprints.launchpad.net/quantum/+spec/ml2-vxlan ml2-vxlan blueprint
14:27:12 So I've started on this now, utilizing matrohon's ml2-gre patch as a base.
14:27:25 Shooting for Friday for an initial review.
14:27:36 mestery: great!
14:27:55 matrohon: Thanks for all the RPC work in your patch, this made mine smaller than yours. :)
14:28:11 you will manage multicast
14:28:26 ?
14:28:32 matrohon: That's the plan, I'm going to add in the option of configuring multicast as well.
14:28:52 mestery: ok, great!
14:29:03 matrohon: Will appreciate your review once I push the review. :)
14:29:14 mestery: np
14:29:24 Any more queries on ML2 VXLAN?
14:29:30 question regarding multicast:
14:30:06 shouldn't the option be set in the agents
14:30:20 and plugin support implemented as a provider extension?
14:31:04 feleouet: That may be the case, but if you're using multicast, you will end up mapping VNIs to multicast groups, so the type driver DB needs to store that info.
14:31:17 The agents will determine if multicast VXLAN is supported or not I guess.
14:31:37 E.g. the current OVS VXLAN code doesn't support it, but likely will soon, while the LinuxBridge agent (with Tomas's patch) would support it.
14:32:32 ok, makes sense
14:32:49 feleouet: Cool!
14:33:03 #topic Bugs and other blueprints
14:33:16 Wanted to call attention to the "tunnel_types" OVS agent bug
14:33:22 #link https://review.openstack.org/#/c/33107/ tunnel_types review
14:33:38 matrohon: I got your latest feedback, will see if I can implement that tonight/tomorrow.
14:33:51 looks about ready to merge to me, but wanted to test in devstack before giving my +2
14:33:59 rkukura: Thanks!
14:34:14 mestery: ok, it's quite tricky
14:34:16 matrohon: One question I had was, we could file a separate bug to address your comment and fix that separately.
14:34:30 matrohon: Exactly, which is why maybe a different bug makes sense since this one is ready to merge.
14:34:37 mestery: it makes sense to me
14:34:38 matrohon: Would you be ok with that?
14:34:42 OK, great!
14:34:53 #action mestery to file new bug to address matrohon's concerns on the tunnel_type fix
14:34:57 I can manage that bug if you want
14:35:04 matrohon: Cool!
14:35:05 hadn't seen matrohon's comment though
14:35:08 I'll assign it to you.
14:35:29 rkukura: Can you review what's there now? We'll do a different bug to address matrohon's comment.
14:35:39 matrohon: Once I file the bug, can you remove your -1?
:)
14:35:44 mestery: I had one code nit if you plan to update, but not critical
14:35:49 mestery: ok
14:36:11 rkukura: File your nit, and I can update the patch late tonight or tomorrow morning, up to you.
14:36:26 ok
14:36:50 OK, wanted to bring everyone's attention to matrohon's L2 population blueprint as well.
14:36:56 #link https://blueprints.launchpad.net/quantum/+spec/l2-population L2 population blueprint
14:37:06 matrohon: Is this something we should get in under the context of ML2?
14:37:37 mestery: we're targeting it for havana-3
14:37:55 Sorry, this is under feleouet's name, not matrohon's. :)
14:38:00 is the idea to add the support to the agent, but only when used with ml2?
14:38:00 matrohon: OK, fair enough.
14:38:25 rkukura: exactly
14:39:02 and keep backward compatibility with the openvswitch plugin?
14:39:25 rkukura: this will be based on new RPC callbacks
14:39:32 makes sense to me
14:39:46 Yes, this looks great matrohon.
14:40:02 OK, did I miss any other ML2 related bugs or blueprints?
14:40:37 BTW: If any new bugs or blueprints come up that you don't see on the agenda, please unicast me and I will add them. I'll do my best to keep on top of it so we can discuss urgent ones in the weekly meeting though.
14:41:26 #topic Ported Mechanism Drivers
14:41:32 there may be some recent reviews and/or merges to openvswitch that need to be duplicated in ml2
14:41:32 One last bug concerning the OVS agent
14:41:59 sorry, may I continue?
14:42:01 rkukura: OK, good point, let's track those and file ML2 bugs to ensure we get the work in as well.
14:42:05 feleouet: yes
14:42:06 please
14:42:14 #link https://bugs.launchpad.net/quantum/+bug/1177973
14:42:28 I'll try to propose a first WIP for this one today
14:42:44 Using event-based detection rather than polling
14:42:59 feleouet: Great! I'll add this to track for ML2 as well, thanks!
14:43:02 polling?
14:43:32 Replacing the agent loop that polls for new devices
14:43:42 excellent!
14:43:44 (And is very resource consuming)
14:43:51 feleouet: +1
14:44:36 It'd be great if you could give me initial feedback, then I'll work on UT
14:44:58 (depending on feedback, of course)
14:44:58 One other review we should look at
14:45:02 feleouet: Will keep an eye out for the patch.
14:45:27 mestery: thanks
14:45:46 https://review.openstack.org/#/c/33736/ seems to be related to our eventual multi-segment network API
14:46:31 rkukura: I agree, this seems relevant to the multi-segment network API.
14:46:33 but does not seem as usable as what I was envisioning
14:46:44 rkukura: I agree.
14:46:57 also seems to expose nicira's transport zones, which wouldn't apply
14:47:25 rkukura: Can you provide feedback of this nature on the review itself to alert them to our overlapping ML2 work and your concerns?
14:48:18 Does it make sense to clarify that their extension is nicira-specific?
14:48:23 #action rkukura to provide ML2 related feedback on review https://review.openstack.org/#/c/33736/
14:48:27 ok
14:48:37 rkukura: Yes, it does, I think pointing that out is the way to go.
14:48:57 I don't think we should be exposing vendor-specific extensions into the core extension APIs; they can implement it as a plugin-specific extension API I think.
14:50:05 OK, we have about 10 minutes left, how about if we quickly get an update for each of the 4 known MechanismDriver implementations out there?
14:50:17 #link https://blueprints.launchpad.net/quantum/+spec/sukhdev-8 Arista Mechanism Driver Blueprint
14:50:25 Sukhdev_: Hi!
14:50:46 Hi
14:50:53 sorry was distracted
14:51:05 Sukhdev_: No worries, any updates on Arista MD?
14:51:36 I am stuck on testing because of the port binding issue - other than that I am good
14:51:55 Sukhdev_: Thanks!
14:52:08 #link https://blueprints.launchpad.net/quantum/+spec/ml2-md-cisco-nexus Cisco Nexus Mechanism Driver
14:52:14 rcurran: Cisco Nexus MD updates?
14:52:46 working on it. Have ported over most of the code.
(Except common code already mentioned in this meeting)
14:53:04 rcurran: Cool, no roadblocks so far?
14:53:27 no, i have plenty to work on ... coding to andre's bp that should go in soon
14:53:35 rcurran: Thanks!
14:53:42 #link https://blueprints.launchpad.net/quantum/+spec/ml2-opendaylight-mechanism-driver OpenDaylight Mechanism Driver
14:53:50 mestery: are you still planning on putting out the wiki for ML2?
14:54:14 Sukhdev_: I thought I had done that, let me check for that after the meeting and send a note.
14:54:25 If I didn't, I'll make the wiki page and send out a note to openstack-dev
14:54:52 mestery: thanks
14:54:54 So, the ODL MD update is that it's complicated. :)
14:54:55 mestery: thanks
14:55:28 We'll begin porting the code we had into a MD this coming week, but the broader question of how ODL integrates with Neutron is being worked on over in the ODL project.
14:55:47 So the final result may look slightly different than what we have so far.
14:56:17 mestery: Is the plan from ODL to have both a monolithic plugin and ml2 MD support?
14:56:35 rkukura: That's part of the complication. I would prefer only an ML2 MD one.
14:57:10 I think after the ODL HackFest in July a more concrete plan will be known, though this will affect Havana integration for ODL I realize.
14:57:18 mestery: Running into anything difficult to do with ml2 as currently defined?
14:57:46 rkukura: No, the ML2 side looks fine, it's more ODL issues and deficiencies we're hitting.
14:58:10 The final MD for ML2 is the Tail-f NCS one
14:58:20 #link https://blueprints.launchpad.net/quantum/+spec/tailf-ncs Tail-f NCS Mechanism Driver
14:58:32 I don't know Luke Gorrie's IRC nick though.
14:58:37 rkukura: Do you?
14:58:46 mestery: no
14:59:02 OK, I don't see an update, just calling it out for consistency here then.
14:59:05 #topic Questions?
14:59:12 OK, that's all for the agenda.
14:59:29 Let's keep up on reviews for the next week as much as possible!
14:59:43 sounds great, thanks all
14:59:52 thanks
14:59:55 #endmeeting