14:00:03 <mestery> #startmeeting networking_ml2
14:00:04 <openstack> Meeting started Wed Jul 24 14:00:03 2013 UTC.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:08 <openstack> The meeting name has been set to 'networking_ml2'
14:00:25 <rkukura> hi
14:00:27 <mestery> Hi, apech, matrohon, rkukura here?
14:00:38 <apech> hi, yup
14:00:45 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2 Agenda
14:00:53 <pcm_> hi
14:01:11 <mestery> OK, lets get started
14:01:17 <mestery> #link https://wiki.openstack.org/wiki/Neutron/ML2 ML2 Wiki Page
14:01:24 <mestery> Thanks to rkukura for starting to add information to the ML2 Wiki
14:02:04 <mestery> I'm going to add some devstack information once we settle on the devstack direction for the patch I have in review
14:02:06 <Sukhdev> good morning
14:02:13 <mestery> Sukhdev: Hi!
14:02:49 <mestery> So, any additional information people can add to the wiki would be great!
14:03:04 <mestery> I've noticed an uptick in the number of people asking for instructions on how to run ML2, for example.
14:03:24 <mestery> So, any other questions or concerns on the wiki?
14:04:04 <mestery> #topic Blueprint Updates
14:04:15 <mestery> #link https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding ML2 portbinding
14:04:21 <mestery> rkukura: Any updates on portbinding?
14:04:39 <rkukura> I've been bogged down with other work this past week, so have made no progress coding
14:05:05 <mestery> rkukura: OK, no worries, thanks for the update!
14:05:28 <rkukura> After today, I hope to free up some time. If I can't get started by next weeks meeting, someone else may want to pick it up
14:05:40 <mestery> That's what I was going to suggest as well.
14:05:47 <mestery> I'll make a note of that here.
14:06:03 <mestery> #action rkukura If no progress on portbinding BP by next week, find a new owner.
14:06:11 <matrohon> hi
14:06:26 <mestery> rkukura: Along these lines, are we all in sync with arosen on the host_id thread on the ML?
14:07:03 <rkukura> mestery: Can you summarize the conclusion?
14:07:38 <mestery> rkukura: I think the summary was everyone wants to move port create to nova-api, but compute will still call update with the host_id afterwards.
14:07:57 <rkukura> thats what I thought
14:08:14 <mestery> We're ok with that I believe from an ML2 perspective, right?
14:08:27 <rkukura> seems there is also talk of replacing the update with a more explicit bind operation later, which I think is a good idea
14:08:41 <mestery> Yes, that makes sense to me as well.
14:08:49 <mestery> So the direction there looks good in general and for ML2 specifically.
14:08:59 <rkukura> I see no issue from ml2-portbinding perspective
14:09:40 <mestery> OK, anything else on portbinding from anyone?
14:09:54 <Sukhdev> can I ask a question for clarification -
14:10:04 <mestery> Sukhdev: Yes, please go ahead.
14:10:36 <Sukhdev> by the time ML2 driver's port_create_precommit() is invoked, we will have the host-id info, right?
14:11:32 <apech> my understanding is that we'll still be able to get it, though it might be part of the update_port call. Is that right?
14:11:33 <rkukura> Sukhdev: I don't think you can count on that - the host_id will likely be supplied in a later update()
14:11:59 <Sukhdev> Oh I see - that changes the model a bit then
14:12:05 <apech> slightly
14:12:17 <apech> are arosen's changes going to land for H3?
14:12:21 <Sukhdev> thanks for clarification -
14:12:34 <mestery> apech: Don't know, seems like he'll start working on them soon though.
14:13:07 <apech> mestery: thanks
14:13:33 <rkukura> I think ml2's port binding can occur in three places: 1) port_create if host-id supplied, 2) port_update, 3) RPC processing for the port
14:14:00 <mestery> rkukura: That makes sense, and the BP you're working on will handle when that happens, right?
14:14:21 <rkukura> yes
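[Editor's note: rkukura's three binding opportunities above could be sketched roughly as below. This is hypothetical illustrative Python, not the actual Neutron ML2 MechanismDriver interface; all function and field names are assumptions.]

```python
# Illustrative sketch of the three points where ML2 could attempt port
# binding, per rkukura's list above. Names are hypothetical, not the
# real Neutron ML2 driver API.

def try_bind(port):
    """Bind the port to a segment once a host has been identified."""
    host = port.get('binding:host_id')
    if host:
        port['bound_segment'] = pick_segment_for_host(host)

def pick_segment_for_host(host):
    # Placeholder: a real driver would inspect agent DB data here.
    return {'network_type': 'vxlan', 'segmentation_id': 100}

# 1) port create, if nova supplied binding:host_id up front
def create_port(port):
    try_bind(port)
    return port

# 2) port update, when nova-compute later sets binding:host_id
def update_port(port, updates):
    port.update(updates)
    try_bind(port)
    return port

# 3) RPC processing, when the L2 agent reports the device on a host
def handle_agent_rpc(port, agent_host):
    port.setdefault('binding:host_id', agent_host)
    try_bind(port)
    return port
```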
14:14:37 <mestery> OK, thanks!
14:14:48 <mestery> Lets move on to the next BP.
14:15:02 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api ML2 multi-segment-api
14:15:15 <mestery> I don't think anyone is assigned to this BP yet.
14:15:36 <mestery> So unless someone really wants it, I can take this one.
14:15:49 <mestery> rkukura: This would be nice to have for Havana I think, do you agree?
14:16:04 <rkukura> I agree, especially if the extension goes in for nvp
14:16:28 <mestery> #action mestery to assign ML2 multi-segment BP to himself and begin working on it
14:16:39 <rkukura> Only question I have is how the new extension interacts with the current provider extension
14:17:01 <mestery> Good question, I'll explore that and see what I can come up with.
14:17:15 <rkukura> I've got some thoughts on it too, so lets discuss
14:17:27 <mestery> Absolutely!
14:18:06 <mestery> OK, moving along to the last BP to discuss today.
14:18:12 <rkukura> we should do this via email on openstack-dev, CCing arosen
14:18:31 <mestery> rkukura: Agreed, lets start a thread there. Can you start the thread since you already have some thoughts on this?
14:18:38 <rkukura> OK
14:19:03 <mestery> #action rkukura to start thread on ML around multi-segment API extension and how it interacts with existing providernet extension
14:19:21 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info ML2 TypeDriver extra port info
14:19:36 <mestery> So, this was registered last week by Zang MingJie.
14:19:42 <ZangMingJie> this is simple one
14:19:47 <mestery> And code was posted, but I thought we as a team should discuss this one here.
14:19:51 <ZangMingJie> I'm already on it
14:19:53 <mestery> ZangMingJie: Welcome!
14:20:31 <mestery> Not sure people have had a chance to eyeball this BP yet, but wanted to make sure the ML2 team saw this.
14:20:31 <rkukura> Would this also apply to QoS?
14:21:42 <ZangMingJie> I don't think there is any relation to QoS
14:22:45 <mestery> My only concern with the patch as pushed so far is it relies on an agent_id for the query. Not all ML2 MechanismDrivers will have agents.
14:23:35 <rkukura> There wouldn't be RPCs unless there are agents, right?
14:23:46 <mestery> rkukura: Ah, right, good call.
14:24:07 <rkukura> The ml2-portbinding will be adding mechanism drivers for the agent-based mechanisms
14:24:27 <mestery> rkukura: You mean OVS and LB?
14:24:57 <rkukura> yes - they will inspect the agent-db data to decide if/what segment can be bound
14:25:32 <apech> i could see this being useful for non agent-based mechanisms (if what we're talking about is getting the ip multicast address associated with a VXLAN)
14:26:23 <apech> so (maybe specific to this piece of info) it'd be great if this could be exposed to mechansim drivers
14:26:39 <apech> though i guess this wouldn't be a direct call to the type driver anyway
14:26:52 <rkukura> I'd been thinking we'd eventually make the segment dictionary a bit more extensible - maybe this extra info belongs there?
14:27:15 <apech> rkukura, yeah that would work
14:27:22 <mestery> Does the VXLAN multicast IP belong as part of the port or the segment? To me, I think the segment.
14:28:06 <rkukura> Do all ports bound to the segment need to use the same multicast IP?
14:28:31 <mestery> Yes
14:28:40 <mestery> Each segment can have its own multicast IP though
14:28:58 <rkukura> Is there good use case for different segments having different IPs?
14:29:10 <rkukura> Maybe existing provider VXLANs?
14:29:20 <apech> yeah - it limits broadcast domains
14:29:23 <ZangMingJie> multiple groups can reduce multicast domain size
14:29:24 <mestery> Yes, that is the use case I was thinking of.
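[Editor's note: the model being discussed here, one multicast group per VXLAN segment, shared by every port bound to that segment, could look roughly like the sketch below. The dict layout and field names are hypothetical, not Neutron's real segment dictionary.]

```python
# Hypothetical per-segment multicast addressing: every port bound to a
# segment shares that segment's multicast IP, while different segments
# may use different groups to limit broadcast domains.

segments = {
    'seg-a': {'network_type': 'vxlan', 'segmentation_id': 1001,
              'multicast_ip': '239.1.1.1'},
    'seg-b': {'network_type': 'vxlan', 'segmentation_id': 1002,
              'multicast_ip': '239.1.1.2'},
}

def multicast_ip_for_port(port, segments):
    """All ports on the same segment resolve to the same group."""
    return segments[port['segment']]['multicast_ip']
```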
14:30:20 <mestery> OK, lets continue this discussion on the ML
14:30:25 <mestery> I'd like to spend some time on a few other items now.
14:30:34 <mestery> #topic Bugs
14:30:36 <rkukura> agreed, and its related to the multi-segment API
14:30:51 <mestery> #link https://review.openstack.org/#/c/37516/ Validate Provider networks correctly
14:30:54 <mestery> matrohon: Here?
14:31:03 <matrohon> yep
14:31:15 <mestery> Hey, looks like you've made some progress on this one.
14:31:28 <matrohon> I was OOO at the beginning of the week
14:31:29 <mestery> Only minor issues left and then hopefully we can get this fix in.
14:31:35 <mestery> Ah, ok.
14:31:41 <matrohon> will propose a new patch tomorrow
14:31:45 <mestery> Great!
14:31:54 <matrohon> should be merged quickly now
14:31:58 <HenryG> matrohon: nice cleanup, thanks!
14:32:26 <mestery> #link https://bugs.launchpad.net/devstack/+bug/1200767 devstack patch for ML2
14:32:28 <uvirtbot> Launchpad bug 1200767 in devstack "Add support for setting extra network options for ML2 plugin" [Undecided,In progress]
14:32:46 <mestery> I know rkukura has taken a look at this one, but I'd appreciate more ML2 folks looking at this too.
14:33:03 <mestery> My goal with this patch was to make the existing simple use case for GRE tunnel networks work with ML2.
14:33:14 <matrohon> will do soon
14:33:15 <mestery> But also allow for advanced ML2 configuration
14:33:33 <mestery> I've been running this with multi-node and ML2 and VXLAN with no problems, FYI.
14:33:54 <rkukura> So this maintains compatibility with selecting tenant network types with the other plugins, right?
14:34:10 <mestery> Yes, all existing devstack config variables should work with the latest rev of the patch.
14:34:22 <mestery> Please have a look and test it out if you can.
14:34:31 <matrohon> mestery: did you try with both VXLAN and GRE?
14:34:34 <rkukura> I plan to ASAP
14:34:53 <matrohon> i will test it tomorrow too
14:35:00 <mestery> matrohon: I've run with both TypeDrivers loaded, but not with both networks at the same time. Will try that out and see how it goes though.
14:35:43 <matrohon> it should be ok, if you don't specify the same tunnel id
14:35:54 <mestery> matrohon: Yes, agreed.
14:36:00 <mestery> Any more devstack+ML2 questions?
14:36:11 <ZangMingJie> https://blueprints.launchpad.net/neutron/+spec/openvswitch-kernel-vxlan
14:36:24 <ZangMingJie> I have made some changes to the BP
14:37:16 <mestery> ZangMingJie: That wasn't on the agenda for today.
14:37:29 <mestery> ZangMingJie: I'll add it to the ML2 page, but we're following the agenda on the meeting page.
14:37:33 <rkukura> My understanding is that the linux kernel upstream Open vSwitch implementation will be using this VXLAN implementation?
14:37:54 <mestery> rkukura: Yes, work is ongoing to make the upstream OVS integrate with the upstream VXLAN implementation.
14:38:11 <mestery> So eventually we will have to collapse the Neutron OVS VXLAN work as well.
14:38:24 <ZangMingJie> the kernel implementation is different from the OVS VXLAN one
14:38:40 <mestery> ZangMingJie: For now it is, but upstream OVS is integrating with the kernel implementation.
14:38:48 <rkukura> I think the OVS tree and kernel tree differ on this
14:39:04 <ZangMingJie> it doesn't need tunnel endpoint manipulation, so no tunnel sync call
14:39:08 <mestery> For now, yes. Work ongoing to collapse the two.
14:39:23 <mestery> ZangMingJie: There are pros and cons to multicast with Vxlan.
14:39:40 <mestery> I'd like to get the meeting back to the agenda now though.
14:40:08 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/l2-population L2 Population
14:40:17 <mestery> matrohon: Will you start work on this BP soon?
14:40:28 <rkukura> ZangMingJie: Can you please update the BP to clarify how this relates to the current OVS VXLAN support and the planned OVS cut-over to using kernel VXLAN support?
14:40:43 <matrohon> mestery: it's started
14:40:51 <mestery> matrohon: Great!
14:41:17 <mestery> matrohon: Any chance you'll have a patch for this by next week?
14:41:22 <mestery> I think this will be a nice optimization
14:41:49 <matrohon> I'll be on vacation, so my colleagues will propose a patch before the end of next week
14:41:56 <mestery> Thanks.
14:42:04 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1177973 OVS L2 agent polling
14:42:07 <uvirtbot> Launchpad bug 1177973 in neutron "OVS L2 agent polling is too cpu intensive (dup-of: 1194438)" [Medium,In progress]
14:42:08 <uvirtbot> Launchpad bug 1194438 in neutron/grizzly "compute node's OVS agent takes long time to scan sync all port's stat and update port security rules" [High,In progress]
14:42:11 <mestery> Along those lines, there is this bug as well.
14:42:21 <mestery> matrohon: Your colleague was working on this one too? Francois?
14:42:24 <matrohon> ZangMingJie: please have a look at l2-population, for vxlan
14:42:26 <mestery> But now it's unassigned.
14:42:46 <matrohon> mainly safchain
14:43:00 <feleouet_> Hi
14:43:13 <mestery> feleouet_: Hi! You are no longer working on this bug?
14:43:31 <feleouet_> I had proposed a first patchset, but the bug turned out to be fixed via a duplicate...
14:43:50 <mestery> Ah yes, seeing that now.
14:44:12 <rkukura> I spoke with marun, who filed the L2 agent polling bug
14:44:16 <ZangMingJie> matrohon: yes, I already got that; with l2-population, the control plane will be totally managed by the agent
14:45:43 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1196963
14:45:44 <uvirtbot> Launchpad bug 1196963 in neutron "Update the OVS agent code to program tunnels using ports instead of tunnel IDs" [Wishlist,In progress]
14:45:53 <mestery> matrohon: Is this duplicated or fixed with your L2 population work?
14:46:38 <matrohon> mestery : not really, we need a first patch to have gre and vxlan with the same id on the same agent
14:46:55 <mestery> matrohon: OK, thanks.
14:46:58 <matrohon> mestery: but l2-population will improve this functionality
14:47:32 <mestery> matrohon: Got it.
14:48:10 <mestery> #topic Ported MechanismDriver Updates
14:48:22 <mestery> Sukhdev_: Arista driver update?
14:50:02 <mestery> For the Cisco update, rcurran is still on PTO for another week, so not much to report there.
14:50:16 <apech> i'm happy to update there - mostly the same as last week, working on unit tests and waiting to make the final changes on the port-binding bp
14:50:26 <mestery> apech: Thanks for the update!
14:50:46 <mestery> For OpenDaylight, the OVSDB support for ODL was approved, and we are now integrating that with our ODL MechanismDriver.
14:50:57 <mestery> We hope to have a POC by next week which ties all of this together.
14:51:09 <mestery> asomya is working on this with me at the moment.
14:51:35 <mestery> If Luke Gorrie is here, he can provide an update on the Tail-f NCS MechanismDriver.
14:52:22 <mestery> #topic Questions?
14:52:31 <mestery> Anything else ML2 related this week from anyone?
14:52:39 <fmanco> Hi everyone
14:52:51 <rkukura> Congrats to mestery on becoming neutron core!
14:52:58 <fmanco> Just want to know if someone looked at my BP
14:52:59 <mestery> rkukura: Thanks!
14:53:23 <mestery> fmanco: Sorry, I think I missed that in the meeting.
14:53:32 <mestery> #link https://blueprints.launchpad.net/neutron/+spec/campus-network Campus Network BP
14:53:37 <fmanco> mestery: no prob
14:53:48 <mestery> fmanco: I think rkukura was going to have a look at this in detail and provide feedback.
14:53:50 <fmanco> And btw congrats for the core position
14:54:11 <mestery> fmanco: thank you!
14:54:53 <rkukura> fmanco: Sorry - I have not got to this yet. Others should review as well.
14:55:20 <mestery> fmanco: Did I post the link to the correct BP?
14:55:23 <mestery> Or was there an ML2 specific one?
14:55:29 <mestery> Can you add it here if I didn't post the right one?
14:55:50 <ZangMingJie> https://blueprints.launchpad.net/neutron/+spec/ml2-external-port this one ?
14:55:51 <fmanco> #link https://blueprints.launchpad.net/quantum/+spec/ml2-external-port
14:56:02 <fmanco> ZangMingJie: Yes
14:56:33 <mestery> Thanks ZangMingJie.
14:56:43 <mestery> #action ML2 team to review https://blueprints.launchpad.net/neutron/+spec/ml2-external-port
14:56:54 <mestery> fmanco: Will provide feedback on ML.
14:57:05 <rkukura> We should create a "punch list" leading to ml2 becoming default plugin in devstack
14:57:17 <fmanco> Just an update: I already started some code. I hope I can submit at least a sketch for review
14:57:22 <mestery> rkukura: Yes, agreed.
14:57:33 <rkukura> need to document ml2, get into CI, etc.
14:57:36 <mestery> rkukura: I will ping sean and dean about this as well to give them a heads up.
14:58:03 <HenryG> General question -- what different things do these try to accomplish: providernet, multi-segment, external-port?
14:58:34 <HenryG> Seems to be a lot of overlap?
14:59:02 <mestery> HenryG: Lets take that to the ML I think, we only have < 2 minutes left. :)
14:59:18 <HenryG> Of course.
14:59:20 <mestery> OK, thanks folks! Remember: H3 freeze is about 4 weeks out.
14:59:30 <mestery> So lets try to finish up the ML2 BPs and bugs in the next few weeks if we can!
14:59:32 <mestery> #endmeeting