14:00:03 #startmeeting networking_ml2
14:00:04 Meeting started Wed Jul 17 14:00:03 2013 UTC. The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 The meeting name has been set to 'networking_ml2'
14:00:16 #info https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:00:38 First off, wanted to congratulate and thank everyone for all the hard work over the last few weeks!
14:00:45 We managed to get 4 important merges in for H2!
14:01:30 #info MechanismDriver, tunnel_types, and GRE and VXLAN TypeDriver patches all merged for H2
14:01:55 champagne
14:02:26 We still have a good amount of work for H3, but ML2 is looking to be in great shape when Havana releases!
14:03:02 So, anything related to those merges anyone wants to share?
14:03:22 not really - thanks all for the review
14:03:53 we need to update devstack to handle several tunnel_types with ML2
14:03:58 So what remains for parity with the monolithic plugins?
14:04:05 matrohon: Yes, I have a bug for that and I'm working on it.
14:04:38 rkukura: Good question. I think we should pretty much be at parity for OVS and LinuxBridge now.
14:04:42 Good morning
14:05:12 We need devstack to fully support ml2, and need to start working on docs
14:05:45 rkukura: is the port-binding blueprint required for parity with the monolithic plugins?
14:05:47 #action mestery to work on devstack support for new ML2 functionality
14:05:59 #action ML2 team to work on documenting new ML2 functionality
14:06:18 The port-binding BP is important, but not specifically for parity - let's discuss it later in the agenda
14:06:30 Yes, let's move on in the agenda now
14:06:34 #topic Action Items
14:06:54 Looks like rkukura updated the ML2 wiki page, thanks! We still need to flesh that out a bit more I think.
14:06:56 We should now work towards switching devstack's default from openvswitch to ml2
14:07:02 #link https://wiki.openstack.org/wiki/Neutron/ML2 ML2 Wiki Page
14:07:19 #action mestery to make sure devstack defaults to ML2 instead of OVS
14:07:26 rkukura: Yes, I agree on that switch.
14:07:27 The wiki is just a start, based on the README (which needs updating for the tunnel types)
14:07:53 I'll file a bug for the README update and fix that.
14:08:08 #action mestery to file bug to update ML2 README to take into account tunnel types changes
14:08:47 rkukura: Did you get a chance to sync up with maru about the event-based polling?
14:09:08 I pinged him this morning, but no response yet
14:09:33 OK
14:09:39 BTW, while we are on this topic, I tried using devstack and localrc to bring up ML2 - I was almost there, but fell short a bit - so, yes, documentation is needed
14:09:52 I think the current polling is improved enough that this shouldn't be high priority for us
14:10:15 Sukhdev: Yes, right now you have to stop Neutron server and modify the ML2 config by hand for a few items; I have a bug open in devstack to fix that.
14:10:24 rkukura: Great!
14:10:47 Sukhdev: Using vlans for tenant networks should be fully supported in devstack now
14:11:17 I had a hard time getting the VLAN ranges to work correctly using localrc
14:11:51 Sukhdev: could be a bug
14:12:13 I have only been using GRE and VXLAN mode with devstack and ML2 for the last few weeks.
14:12:19 mestery: your devstack bug is regarding tunnels, right?
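The hand edits mestery mentions above land in the ML2 plugin config and the OVS agent config. Below is a minimal sketch of what a GRE/VXLAN setup might look like; the section and option names follow the Havana-era ml2_conf.ini and OVS agent files, but the specific values are illustrative assumptions, not anything agreed in the meeting:

    # ml2_conf.ini (illustrative sketch)
    [ml2]
    type_drivers = local,flat,vlan,gre,vxlan
    tenant_network_types = gre,vxlan
    mechanism_drivers = openvswitch

    [ml2_type_gre]
    tunnel_id_ranges = 1:1000

    [ml2_type_vxlan]
    vni_ranges = 1001:2000

    # OVS agent config (illustrative sketch)
    [ovs]
    local_ip = <tunnel endpoint IP of this host>
    enable_tunneling = True

    [agent]
    tunnel_types = gre,vxlan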
14:12:30 rkukura: setting the VLAN type works, but ranges do not work the way they did with ovs
14:12:47 rkukura: Actually, it was to update devstack to fully work with ML2, not just tunnels, and expose extra config support the same as LB and OVS.
14:12:59 rkukura: But that work mostly involves tunnels, so I guess you're right. :)
14:13:07 Sukhdev: I think I've seen what you are describing
14:13:42 Let's get the support for local/vlan/gre/vxlan tenant network types all working under one devstack patch/bug if possible
14:14:01 rkukura: I will do that.
14:14:10 great!
14:14:12 rkukura: The only way I have been able to make it work is by manually changing the config
14:14:29 #info Issues with VLAN ranges when running ML2 with devstack.
14:15:20 fmanco: Here?
14:15:29 yes
14:15:35 #link https://blueprints.launchpad.net/neutron/+spec/campus-network Campus Network BP
14:15:49 fmanco: Thanks for updating your BP to keep ML2 in mind.
14:16:03 I think the ML2 team should review this BP and see what is possible in H3 with regards to it.
14:16:07 rkukura: What do you think?
14:16:39 I haven't been able to review it in detail yet, but will for next meeting
14:16:53 and will try to post comments beforehand
14:17:07 #action rkukura to review Campus Network BP and provide feedback
14:17:10 Thanks rkukura!
14:17:11 #link https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
14:17:18 This is the BP regarding ML2
14:17:31 I didn't update the campus network one yet
14:17:42 fmanco: Thanks for sharing, we'll review this for next week's meeting.
14:18:14 mestery: Ok, thank you. If everyone agrees with this one I can update the Campus Network BP accordingly
14:18:31 fmanco: Sounds good!
14:18:52 OK, let's move on to the next agenda item.
14:19:00 #topic Blueprint Updates
14:19:14 #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 Port Binding
14:19:25 rkukura: Port Binding update?
14:19:50 arosen has been arguing on openstack-dev that the whole port binding approach should be reverted
14:20:31 I saw that, what was the outcome of that discussion?
14:20:32 I'll get working on the code this week assuming this will not happen
14:20:48 no real conclusion
14:21:20 I'll see if the ml2 port binding can be flexible enough to bind when the host_id is set, or when an RPC comes in from an L2 agent
14:22:02 rkukura: That sounds great if you can make it happen! Flexibility would be nice there if it's possible.
14:22:07 Even if we don't care about binding:vif_type, we still need to select a segment
14:22:37 rkukura: Agreed.
14:22:49 Anything else on Port Binding?
14:22:52 but I'm not sure if this will support Arista's use case or not
14:23:19 apech Sukhdev: Any comments?
14:23:23 It wouldn't hurt for others to chime in on the openstack-dev thread
14:23:33 At present we are using port binding to get the host-id and the instance-id
14:24:12 #action ML2 team to respond to port binding thread on openstack-dev
14:25:10 #link https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api ML2 Multi-Segment API
14:25:20 The next thing on the plate for H3 is multi-segment networks
14:25:29 rkukura: Do we have anyone signed up for this BP yet?
14:25:36 rkukura: I have a hack to work around port binding for now, and notice that I can get the host-id with the latest nova merge
14:26:08 The patch from nicira for the multiprovider extension is improved
14:26:29 rkukura: Does that look likely to be the basis for the ML2 API then?
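For context on the port binding exchange above: the portbindings extension is what carries the host information rkukura and Sukhdev refer to. A rough sketch of how those attributes can be exercised from the CLI (admin only) is below; the port UUID and host name are placeholders, the explicit port-update is normally done by nova rather than by hand, and whether ML2 binds when host_id is set or when the agent RPC arrives is exactly the open design question:

    # Set the host a port is bound to (an assumption for illustration; nova normally does this),
    # then inspect the binding:host_id and binding:vif_type fields in the output.
    neutron port-update <port-uuid> --binding:host_id compute-1
    neutron port-show <port-uuid>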
14:26:39 The main question right now is how multiprovider and provider coexist
14:26:49 I hope so
14:27:09 I'm thinking for single-segment networks, both extensions would reflect the same info
14:27:32 That makes sense.
14:27:36 But with multi-segments, the provider extension's network_type would be a special
14:27:46 'multi-segment' value
14:28:05 rkukura: Do we consider multi-segment to be critical for H3? I assume we do, but now that H2 is over, wanted to make sure.
14:28:29 I'd like to get it in, especially if the extension gets in for nvp
14:28:52 Agreed.
14:29:10 I hope the implementation involves just exposing the segment list we already manage
14:29:11 Anything else on multi-segment ML2?
14:29:30 I'm happy to implement it, but if someone else gets to it sooner, that's fine
14:29:56 rkukura: I may be able to take that BP on and implement it. Will sync up offline.
14:30:16 rkukura: will you be making changes to the Neutron API to support multi-segment networks?
14:30:38 Sukhdev: hopefully no major driver API changes would be required
14:30:57 rkukura: thanks
14:31:43 #topic ML2 Related Bugs and BPs
14:31:52 #link https://bugs.launchpad.net/devstack/+bug/1200767 ML2 devstack updates
14:31:55 Launchpad bug 1200767 in devstack "Add support for setting extra network options for ML2 plugin" [Undecided,In progress]
14:32:11 We already discussed this a bit.
14:32:24 But I'll try to get a patch for this posted by tomorrow to enable all the new ML2 functionality.
14:32:43 great!!
14:33:25 is anyone here a devstack core?
14:33:55 rkukura: I thought the only Neutron folks who are devstack cores are danwendlant and garyk, right?
14:34:17 I'd like to get its importance and target milestone set
14:34:30 rkukura: I'll talk to Sean and Dean and make sure they are aware.
14:34:36 thanks
14:34:43 #action mestery to talk to dean and gary about importance of ML2 devstack bug
14:34:47 just a nit, I'm unable to use the provider extension with the ML2 plugin
14:35:05 matrohon: Sounds like more than a nit!
14:35:25 yep, but it may be my conf which is wrong...
14:35:53 the syntax is this one:
14:35:56 neutron net-create net-gre1 --provider:network_type gre --provider:segmentation_id 2
14:35:58 ?
14:36:19 tenant?
14:36:26 with admin creds
14:36:45 tenant is needed with admin creds
14:38:15 same.. it looks like it doesn't consider the request as a provider network request
14:38:30 and tries to allocate a tenant network
14:38:50 have you tried with network_type vlan?
14:39:49 my current conf only supports the vxlan and gre type drivers, but I will investigate deeper
14:40:16 #action matrohon to investigate possible provider extension bug with ML2 tunnel TypeDrivers
14:40:23 OK moving on
14:40:30 #link https://blueprints.launchpad.net/quantum/+spec/l2-population OVS L2 Population BP
14:40:37 matrohon: This one is yours I believe.
14:41:05 mestery: yes the work should start today :)
14:41:12 matrohon: Great!
14:41:19 I have been using it with network_type VLAN, and it seems to be OK
14:41:23 matrohon: Does this also relate to this bug: https://bugs.launchpad.net/neutron/+bug/1196963
14:41:24 Launchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Wishlist,In progress]
14:42:17 I'm working on this bug, that's why I need a conf with vxlan and gre only
14:42:55 I hope to have a first patch before the end of the week
14:43:10 matrohon: Awesome, thanks!
14:43:27 We've touched on the other bugs in the agenda already in the meeting.
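On the provider network issue matrohon raised, a way to reproduce and check it, building on the command pasted above; the network name, tenant UUID, and segmentation ID are placeholders, and the expectation is that net-show reports the requested provider:segmentation_id (which, as rkukura confirms later, does not happen for gre):

    # As admin; --tenant-id is the piece pointed out as needed when using admin creds
    neutron net-create net-gre1 --tenant-id <tenant-uuid> \
        --provider:network_type gre --provider:segmentation_id 2
    neutron net-show net-gre1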
14:43:45 Are there any other bugs or BPs (outside of MechanismDrivers, which is the next section) people want to bring up?
14:44:51 #topic Ported Mechanism Driver Updates
14:45:01 Sukhdev: Arista update?
14:45:33 Still waiting on the port binding stuff - in the meantime working on UTs
14:46:04 Sukhdev: Thanks!
14:46:31 I know rcurran is on PTO today; the Cisco update is mostly the same, doing some UT. I believe rcurran was making good progress before going on PTO though.
14:46:47 The OpenDaylight update has some good news.
14:47:07 There is now an OVSDB project proposed in OpenDaylight, which means we'll be able to do more with the ODL MechanismDriver.
14:47:25 Over the next few weeks, we should see more progress on the ODL MechanismDriver.
14:47:48 Any other MechanismDriver updates? Or questions on the ones mentioned here?
14:48:31 #topic Open Discussion
14:48:46 Thanks again for everyone's hard work at the end of H2!
14:49:11 matrohon: I confirmed that creating a provider gre net does not use the specified segmentation_id, but vlan works fine
14:49:12 I think by next week, if we can ensure all of the ML2 H3 BPs have owners, we'll be in good shape for H3.
14:49:45 mestery: Agreed
14:49:54 rkukura: ok, I'm not crazy!
14:50:06 matrohon: Maybe file a bug for that issue?
14:50:15 mestery: ok
14:50:42 matrohon: Also, I was seeing some issues on Ubuntu 13.04 with ML2 and VXLAN/GRE where flows weren't being programmed correctly.
14:50:59 matrohon: Need to investigate more today. With Fedora 19 it was working, not sure what was different.
14:51:09 matrohon: trying to create a provider vxlan also results in a tenant gre network
14:51:39 mestery: I'm using 12.04
14:51:50 I don't think the OVS in Fedora supports tunnels yet the way the openvswitch agent uses OVS
14:51:51 matrohon: I'll try 12.04 out today and see how that works.
14:52:02 rkukura: I'm using a compiled upstream master OVS. :)
14:52:07 mestery: OK
14:52:15 rkukura: Appropriately patched for the 3.9 kernel in Fedora 19 as well. :)
14:52:19 rkukura: that's my issue, my workaround was to set very small tunnel ranges
14:53:14 OK, anything else to discuss here?
14:53:38 We need to get the basic functionality and devstack support solid before ml2 can become devstack's default
14:54:11 rkukura: Agreed. I'll leave the devstack patch defaulting to OVS until we get these issues sorted out.
14:54:30 And I expect switching devstack's default will become less likely as H-3 gets closer
14:55:07 If we can get these bugs sorted out by next week, we may be able to convince sean and dean to switch it soon after.
14:55:19 Anyone care to look into a migration tool from the monolithic plugins to ml2?
14:55:39 rkukura: You mean writing some python or scripts to handle this?
14:55:45 yes
14:56:00 reading one DB schema, writing the other, or something like that
14:56:16 rkukura: Can you file a bug for this to track it?
14:56:27 maybe that would deserve a BP
14:56:33 I think you may be right.
14:56:40 OK
14:56:46 #action rkukura to file a BP to track migration from OVS/LB to ML2
14:57:17 OK, thanks for everyone's hard work and for attending today's ML2 meeting!
14:57:36 We'll meet again next Wednesday and will likely continue these meetings into H3 as well to ensure we're tracking ML2 progress for the Havana release!
14:57:44 #endmeeting
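For whoever picks up the migration BP rkukura agreed to file, the "reading one DB schema, writing the other" idea might look roughly like the sketch below for the OVS-plugin-to-ML2 case. The table and column names are assumptions drawn from the Havana-era OVS and ML2 models and were not discussed in the meeting; a real tool would also have to handle ID allocations, tunnel endpoints, and port bindings:

    # Purely illustrative: copy OVS plugin network segment records into the ML2 segment table.
    # Verify table and column names against the live schema before running anything like this.
    mysql neutron <<'EOF'
    INSERT INTO ml2_network_segments
        (id, network_id, network_type, physical_network, segmentation_id)
    SELECT UUID(), network_id, network_type, physical_network, segmentation_id
    FROM ovs_network_bindings;
    EOF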