14:00:03 <mestery> #startmeeting networking_ml2
14:00:04 <openstack> Meeting started Wed Jul 17 14:00:03 2013 UTC.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:07 <openstack> The meeting name has been set to 'networking_ml2'
14:00:16 <mestery> #info https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:00:38 <mestery> First off, wanted to congratulate and thank everyone for all the hard work over the last few weeks!
14:00:45 <mestery> We managed to get 4 important merges in for H2!
14:01:30 <mestery> #info MechanismDriver, tunnel_types, and GRE and VXLAN TypeDriver patches all merged for H2
14:01:55 <matrohon> champagne
14:02:26 <mestery> We still have a good amount of work for H3, but ML2 is looking to be in great shape when Havana releases!
14:03:02 <mestery> So, anything related to those merges anyone wants to share?
14:03:22 <apech> not really - thanks all for the review
14:03:53 <matrohon> we need to update devstack to handle several tunnel_types with ML2
14:03:58 <rkukura> So what remains for parity with the monolithic plugins?
14:04:05 <mestery> matrohon: Yes, I have a bug for that and I'm working on it.
14:04:38 <mestery> rkukura: Good question. I think we should pretty much be at parity for OVS and LinuxBridge now.
14:04:42 <Sukhdev> Good morning
14:05:12 <rkukura> We need devstack to fully support ml2, and need to start working on docs
14:05:45 <apech> rkukura: is the port-binding blueprint required for parity with monolithic plugins?
14:05:47 <mestery> #action mestery to work on devstack support for new ML2 functionality
14:05:59 <mestery> #action ML2 team to work on documenting new ML2 functionality
14:06:18 <rkukura> The port-binding BP is important, but not specifically for parity - let's discuss it later on the agenda
14:06:30 <mestery> Yes, let's move on in the agenda now
14:06:34 <mestery> #topic Action Items
14:06:54 <mestery> Looks like rkukura updated the ML2 wiki page, thanks! We still need to flesh that out a bit more I think.
14:06:56 <rkukura> We should now work towards switching devstack's default from openvswitch to ml2
14:07:02 <mestery> #link https://wiki.openstack.org/wiki/Neutron/ML2 ML2 Wiki Page
14:07:19 <mestery> #action mestery to make sure devstack defaults to ML2 instead of OVS
14:07:26 <mestery> rkukura: Yes, I agree on that switch.
14:07:27 <rkukura> The wiki is just a start, based on the README (which needs updating for the tunnel types)
14:07:53 <mestery> I'll file a bug for the README update and fix that.
14:08:08 <mestery> #action mestery to file bug to update ML2 README to take into account tunnel types changes
14:08:47 <mestery> rkukura: Did you get a chance to sync up with maru about the event based polling?
14:09:08 <rkukura> I pinged him this morning, but no response yet
14:09:33 <mestery> OK
14:09:39 <Sukhdev> BTW, while we are on this topic, I tried using devstack and localrc to bring up ML2 - I was almost there but fell short a bit - so, yes, documentation is needed
14:09:52 <rkukura> I think the current polling is improved enough that this shouldn't be high priority for us
14:10:15 <mestery> Sukhdev: Yes, right now you have to stop Neutron server and modify the ML2 config by hand for a few items, I have a bug open in devstack to fix that.
14:10:24 <mestery> rkukura: Great!
14:10:47 <rkukura> Sukhdev: Using vlans for tenant networks should be fully supported in devstack now
14:11:17 <Sukhdev> I had a hard time getting the VLAN ranges to work correctly using localrc
14:11:51 <rkukura> Sukhdev: could be a bug
14:12:13 <mestery> I have only been using GRE and VXLAN mode with devstack and ML2 for the last few weeks.
14:12:19 <rkukura> mestery: your devstack bug is regarding tunnels, right?
14:12:30 <Sukhdev> rkukura: setting the VLAN type works, but ranges do not work the way they did with ovs
14:12:47 <mestery> rkukura: Actually, it was to update devstack to fully work with ML2, not just tunnels, and expose extra config support the same as LB and OVS.
14:12:59 <mestery> rkukura: But that work mostly involves tunnels, so I guess you're right. :)
14:13:07 <rkukura> Sukhdev: I think I've seen what you are describing
14:13:42 <rkukura> Let's get the support for local/vlan/gre/vxlan tenant network types all working under one devstack patch/bug if possible
14:14:01 <mestery> rkukura: I will do that.
14:14:10 <rkukura> great!
14:14:12 <Sukhdev> rkukura: The only way I have been able to make it work is by manually changing the config
14:14:29 <mestery> #info Issues with VLAN ranges when running ML2 with devstack.
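The localrc setup Sukhdev describes might look roughly like the sketch below. This is an illustration only: `Q_PLUGIN=ml2` reflects the devstack plugin selection discussed in the meeting, but the other variable names (`ENABLE_TENANT_VLANS`, `ML2_VLAN_RANGES`, `PHYSICAL_NETWORK`, `OVS_PHYSICAL_BRIDGE`) are assumptions about devstack's still-evolving ML2 support, which is exactly what the bug discussed above is meant to pin down.

```shell
# Hypothetical localrc fragment for ML2 with VLAN tenant networks.
# Variable names other than Q_PLUGIN are assumptions about devstack's
# Havana-era ML2 support and may need the manual config edits mentioned above.
Q_PLUGIN=ml2
ENABLE_TENANT_VLANS=True
ML2_VLAN_RANGES=physnet1:100:200
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1
```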
14:15:20 <mestery> fmanco: Here?
14:15:29 <fmanco> yes
14:15:35 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/campus-network Campus Network BP
14:15:49 <mestery> fmanco: Thanks for updating your BP to keep ML2 in mind.
14:16:03 <mestery> I think the ML2 team should review this BP and see what is possible in H3 with regards to it.
14:16:07 <mestery> rkukura: What do you think?
14:16:39 <rkukura> I haven't been able to review it in detail yet, but will for next meeting
14:16:53 <rkukura> and will try to post comments beforehand
14:17:07 <mestery> #action rkukura to review Campus Network BP and provide feedback
14:17:10 <mestery> Thanks rkukura!
14:17:11 <fmanco> #link https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
14:17:18 <fmanco> This is the BP regarding ML2
14:17:31 <fmanco> I didn't update the campus network one yet
14:17:42 <mestery> fmanco: Thanks for sharing, we'll review this for next week's meeting.
14:18:14 <fmanco> mestery: Ok, thank you. If everyone agrees with this one I can update the Campus network accordingly
14:18:31 <mestery> fmanco: Sounds good!
14:18:52 <mestery> OK, let's move on to the next agenda item.
14:19:00 <mestery> #topic Blueprint Updates
14:19:14 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 Port Binding
14:19:25 <mestery> rkukura: Port Binding update?
14:19:50 <rkukura> arosen has been arguing on openstack-dev that the whole port binding approach should be reverted
14:20:31 <mestery> I saw that, what was the outcome of that discussion?
14:20:32 <rkukura> I'll get working on the code this week assuming this will not happen
14:20:48 <rkukura> no real conclusion
14:21:20 <rkukura> I'll see if the ml2 port binding can be flexible enough to bind when the host_id is set, or when an RPC comes in from an L2 agent
14:22:02 <mestery> rkukura: That sounds great if you can make it happen! Flexibility would be nice there if it's possible.
14:22:07 <rkukura> Even if we don't care about binding:vif_type, we still need to select a segment
14:22:37 <mestery> rkukura: Agreed.
14:22:49 <mestery> Anything else on Port Binding?
14:22:52 <rkukura> but I'm not sure if this will support Arista's use case or not
14:23:19 <mestery> apech Sukhdev: Any comments?
14:23:23 <rkukura> It wouldn't hurt for others to chime in on the openstack-dev thread
14:23:33 <Sukhdev> At present we are using port binding to get host-id and the instance-id
14:24:12 <mestery> #action ML2 team to respond to port binding thread on openstack-dev
14:25:10 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api ML2 Multi-Segment API
14:25:20 <mestery> The next thing on the plate for H3 is multi segment networks
14:25:29 <mestery> rkukura: Do we have anyone signed up for this BP yet?
14:25:36 <Sukhdev> rkukura: I have a hack to work around portbinding now, and notice that i can get host-id with the latest nova merge
14:26:08 <rkukura> The patch from nicira for the multiprovider extension is improved
14:26:29 <mestery> rkukura: Does that look likely to be the basis for the ML2 API then?
14:26:39 <rkukura> Main question right now is how does multiprovider and provider coexist
14:26:49 <rkukura> I hope so
14:27:09 <rkukura> I'm thinking for single-segment networks, both extensions would reflect the same info
14:27:32 <mestery> That makes sense.
14:27:36 <rkukura> But with multi-segments, the provider extension's network_type would be a special
14:27:46 <rkukura> 'multi-segment' value
14:28:05 <mestery> rkukura: Do we consider multi segment to be critical for H3? I assume we do, but now that H2 is over, wanted to make sure.
14:28:29 <rkukura> I'd like to get it in, especially if the extension gets in for nvp
14:28:52 <mestery> Agreed.
14:29:10 <rkukura> I hope implementation involves just exposing the segment list we already manage
14:29:11 <mestery> Anything else on multi-segment ML2?
14:29:30 <rkukura> I'm happy to implement it, but if someone else gets to it sooner, that's fine
14:29:56 <mestery> rkukura: I may be able to take that BP on and implement it. Will syncup offline.
14:30:16 <Sukhdev> rkukura: will you be making changes to Neutron API to support multi-segmented networks?
14:30:38 <rkukura> Sukhdev: hopefully no major driver API changes would be required
14:30:57 <Sukhdev> rkukura: thanks
14:31:43 <mestery> #topic ML2 Related Bugs and BPs
14:31:52 <mestery> #link https://bugs.launchpad.net/devstack/+bug/1200767 ML2 devstack updates
14:31:55 <uvirtbot> Launchpad bug 1200767 in devstack "Add support for setting extra network options for ML2 plugin" [Undecided,In progress]
14:32:11 <mestery> We already discussed this a bit.
14:32:24 <mestery> But I'll try to get a patch for this posted by tomorrow to enable all the new ML2 functionality.
14:32:43 <matrohon> great!!
14:33:25 <rkukura> is anyone here a devstack core?
14:33:55 <mestery> rkukura: I thought the only Neutron folks who are devstack cores are danwendlant and garyk, right?
14:34:17 <rkukura> I'd like to get its importance and target milestone set
14:34:30 <mestery> rkukura: I'll talk to Sean and Dean and make sure they are aware.
14:34:36 <rkukura> thanks
14:34:43 <mestery> #action mestery to talk to dean and gary about importance of ML2 devstack bug
14:34:47 <matrohon> just a nit, I'm unable to use the provider extension with the ML2 plugin
14:35:05 <rkukura> matrohon: Sounds like more than a nit!
14:35:25 <matrohon> yep, but it may be my conf which is wrong...
14:35:53 <matrohon> the syntax is this one :
14:35:56 <matrohon> neutron net-create net-gre1 --provider:network_type gre --provider:segmentation_id 2
14:35:58 <matrohon> ?
14:36:19 <HenryG> tenant?
14:36:26 <matrohon> with admin creds
14:36:45 <rkukura> tenant is needed with admin creds
14:38:15 <matrohon> same.. it looks like it doesn't consider the request as a provider network request
14:38:30 <matrohon> and tries to allocate a tenant network
14:38:50 <rkukura> have you tried with network_type vlan?
14:39:49 <matrohon> my current conf only supports the vxlan and gre type drivers, but I will investigate deeper
14:40:16 <mestery> #action matrohon to investigate possible provider extension bug with ML2 tunnel TypeDrivers
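Putting together matrohon's command and rkukura's point that a tenant is needed with admin creds, the invocation under investigation would presumably look something like the sketch below; the tenant UUID is a placeholder, and whether this works with the tunnel type drivers is exactly the bug in the #action above.

```shell
# Sketch only: with admin credentials, the target tenant must be passed
# explicitly (per rkukura's comment); <tenant-uuid> is a placeholder.
neutron net-create net-gre1 --tenant-id <tenant-uuid> \
    --provider:network_type gre --provider:segmentation_id 2
```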
14:40:23 <mestery> OK moving on
14:40:30 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/l2-population OVS L2 Population BP
14:40:37 <mestery> matrohon: This one is yours I believe.
14:41:05 <matrohon> mestery: yes the work should start today :)
14:41:12 <mestery> matrohon: Great!
14:41:19 <Sukhdev> I have been using it with network_type VLAN, and it seems to be OK
14:41:23 <mestery> matrohon: Does this also relate to this bug: https://bugs.launchpad.net/neutron/+bug/1196963
14:41:24 <uvirtbot> Launchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Wishlist,In progress]
14:42:17 <matrohon> I'm working on this bug, that's why I need a conf with vxlan and gre only
14:42:55 <matrohon> I hope to have a first patch before the end of the week
14:43:10 <mestery> matrohon: Awesome, thanks!
14:43:27 <mestery> We've touched on the other bugs in the agenda already in the meeting.
14:43:45 <mestery> Are there any other bugs or BPs (outside of MechanismDrivers, which is the next section) people want to bring up?
14:44:51 <mestery> #topic Ported Mechanism Driver Updates
14:45:01 <mestery> Sukhdev: Arista update?
14:45:33 <Sukhdev> Still waiting on the portbinding stuff - in the meantime, working on UTs
14:46:04 <mestery> Sukhdev: Thanks!
14:46:31 <mestery> I know rcurran is on PTO today; the Cisco update is mostly the same, doing some UT. I believe rcurran was making good progress before going on PTO, though.
14:46:47 <mestery> The OpenDaylight update has some good news.
14:47:07 <mestery> There is now an OVSDB project proposed into OpenDaylight, which means we'll be able to do more with the ODL MechanismDriver.
14:47:25 <mestery> Over the next few weeks, we should see more progress on the ODL MechanismDriver.
14:47:48 <mestery> Any other MechanismDriver updates? Or question on the ones mentioned here?
14:48:31 <mestery> #topic Open Discussion
14:48:46 <mestery> Thanks again for everyone's hard work at the end of H2!
14:49:11 <rkukura> matrohon: I confirmed creating provider gre net does not use the specified segmentation_id, but vlan works fine
14:49:12 <mestery> I think by next week if we can ensure all of the ML2 H3 BPs have owners, we'll be in good shape for H3.
14:49:45 <rkukura> mestery: Agreed
14:49:54 <matrohon> rkukura: OK, I'm not crazy!
14:50:06 <mestery> matrohon: Maybe file a bug for that issue?
14:50:15 <matrohon> mestery: ok
14:50:42 <mestery> matrohon: Also, I was seeing some issues on Ubuntu 13.04 with ML2 and VXLAN/GRE where flows weren't being programmed correctly.
14:50:59 <mestery> matrohon: Need to investigate more today. With Fedora 19, it was working, not sure what was different.
14:51:09 <rkukura> matrohon: trying to create a provider vxlan also results in tenant gre network
14:51:39 <matrohon> mestery: I'm using 12.04
14:51:50 <rkukura> I don't think the OVS in fedora supports tunnels yet the way openvswitch agent uses OVS
14:51:51 <mestery> matrohon: I'll try 12.04 out today and see how that works.
14:52:02 <mestery> rkukura: I'm using a compiled upstream master OVS. :)
14:52:07 <rkukura> mestery: OK
14:52:15 <mestery> rkukura: Appropriately patched for the 3.9 kernel in Fedora 19 as well. :)
14:52:19 <matrohon> rkukura: that's my issue; my workaround was to set very small tunnel ranges
14:53:14 <mestery> OK, anything else to discuss here?
14:53:38 <rkukura> We need to get the basic functionality and devstack support solid before ml2 can become devstack's default
14:54:11 <mestery> rkukura: Agreed. I'll leave the devstack patch defaulting to OVS until we get these issues sorted out.
14:54:30 <rkukura> And I expect switching devstack's default will become less likely as H-3 gets closer
14:55:07 <mestery> If we can get these bugs sorted out by next week, we may be able to convince Sean and Dean to switch it soon after.
14:55:19 <rkukura> Anyone care to look into a migration tool from monolithic plugins to ml2?
14:55:39 <mestery> rkukura: You mean writing some python or scripts to handle this?
14:55:45 <rkukura> yes
14:56:00 <rkukura> reading one DB schema, writing the other, or something like that
14:56:16 <mestery> rkukura: Can you file a bug for this to track it?
14:56:27 <rkukura> maybe that would deserve a BP
14:56:33 <mestery> I think you may be right.
14:56:40 <rkukura> OK
14:56:46 <mestery> #action rkukura to file a BP to track migration from OVS/LB to ML2
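The "reading one DB schema, writing the other" idea rkukura raises could start with a per-row transformation like the sketch below. The table and column names (`ovs_network_bindings` as the source, `ml2_network_segments` as the target) are assumptions about the Havana-era schemas; a real migration tool would also need to cover the linuxbridge plugin and the port, binding, and allocation tables.

```python
# Sketch of the per-row transform a monolithic-plugin -> ML2 migration tool
# might perform. Assumed schemas (not verified against the meeting):
#   source: ovs_network_bindings(network_id, network_type,
#                                physical_network, segmentation_id)
#   target: ml2_network_segments(id, network_id, network_type,
#                                physical_network, segmentation_id)
import uuid

def ovs_binding_to_ml2_segment(binding):
    """Map one ovs_network_bindings row to an ml2_network_segments row."""
    return {
        'id': str(uuid.uuid4()),  # ML2 segments carry their own UUID
        'network_id': binding['network_id'],
        'network_type': binding['network_type'],
        'physical_network': binding['physical_network'],
        'segmentation_id': binding['segmentation_id'],
    }
```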
14:57:17 <mestery> OK, thanks for everyone's hard work and for attending today's ML2 meeting!
14:57:36 <mestery> We'll meet again next Wednesday and likely continue these into H3 as well to ensure we're tracking ML2 progress for the Havana release!
14:57:44 <mestery> #endmeeting