16:00:09 <mestery> #startmeeting networking_ml2
16:00:09 <openstack> Meeting started Wed Apr  2 16:00:09 2014 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:12 <openstack> The meeting name has been set to 'networking_ml2'
16:00:16 <banix> Hello
16:00:21 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_April_2.2C_2014 Agenda
16:00:24 <mestery> Hi banix!
16:00:34 <mestery> rcurran: Welcome!
16:00:38 <banix> mestery: hi :)
16:01:00 <rcurran> mestery, yikes - that was a quick welcome ... hello
16:01:11 <mestery> rcurran: :P
16:01:17 <mestery> #topic Action Item Review
16:01:25 <mestery> banix, you had our only AI from last week.
16:01:33 <banix> #link https://review.openstack.org/#/c/83217/
16:01:50 <mestery> Thanks banix!
16:02:04 <banix> The patch is essentially 3 lines (not counting the comments and the unit tests)
16:02:14 <rkukura> I’ll review that today
16:02:28 <banix> great
16:02:49 <Sukhdev> I will review it as well
16:02:49 <mestery> banix: How does that one relate to this one? https://review.openstack.org/#/c/69792/
16:02:53 <mestery> They both have similar titles.
16:03:25 <banix> mestery: last week we discussed that probably we need to do more on this front
16:03:40 <banix> the new patch is just a stopgap measure for this cycle (if it gets in)
16:04:05 <banix> we need to build on the other patch that I have marked as WORKINPROGRESS after discussing a bit more
16:04:07 <mestery> banix: Got it, thanks for clarifying for me.
16:04:13 <mestery> banix: OK, makes sense.
16:04:41 <rkukura> any reason not to backport this one if it doesn’t make icehouse?
16:04:42 <banix> Sukhdev: thanks
16:04:54 <mestery> Was just going to ask the same thing rkukura
16:05:26 <amotoki> hi
16:05:35 <Sukhdev> A quick question on this patch for clarification
16:05:36 <mestery> banix: Should the priority of the bug be changed to be higher? And a tag added for icehouse-rc2? https://bugs.launchpad.net/neutron/+bug/1227336
16:06:06 <banix> mestery: I think that would make sense
16:06:22 <mestery> banix: Looks like markmcclain put comments in saying it's not a release blocker, so I'll add backport potential instead.
16:06:26 <Sukhdev> Do we have a dependence among the MDs - i.e. one depends on the work done by the other?
16:07:08 <rcurran> cisco_nexus must have an md loaded that supports bind_port() for vlans
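[editor's note: a minimal illustrative sketch of the bind_port() behavior rcurran describes — a driver that only claims VLAN segments. Class and attribute names are simplified assumptions, not the actual cisco_nexus or ML2 code.]

```python
# Illustrative sketch only: the general shape of an ML2 mechanism
# driver's bind_port() that binds only VLAN segments. FakePortContext
# stands in for neutron's PortContext; names are assumptions.

class FakePortContext:
    """Holds the candidate segments offered for binding."""
    def __init__(self, segments):
        self.segments_to_bind = segments
        self.bound = None

    def set_binding(self, segment_id, vif_type, vif_details):
        self.bound = (segment_id, vif_type, vif_details)


class VlanBindingDriver:
    """Claims only segments whose network_type is 'vlan'."""
    def bind_port(self, context):
        for segment in context.segments_to_bind:
            if segment['network_type'] == 'vlan':
                context.set_binding(segment['id'], 'ovs',
                                    {'port_filter': True})
                return  # first matching segment wins


ctx = FakePortContext([{'id': 's1', 'network_type': 'vxlan'},
                       {'id': 's2', 'network_type': 'vlan'}])
VlanBindingDriver().bind_port(ctx)
print(ctx.bound[0])  # the vlan segment is chosen
```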
16:07:12 <rkukura> We should tag it with either icehouse-rc-potential or icehouse-backport-potential
16:07:17 <mestery> banix: Just added "icehouse-backport-potential" to track this one.
16:07:18 <banix> mestery: are we talking about the longer patch? sorry got confused.
16:07:30 <mestery> banix: The shorter one.
16:07:40 <rkukura> both are tied to the same bug, right?
16:07:42 <banix> mestery: yes that would make sense
16:07:51 <banix> rkukura: yes
16:08:11 <banix> but I marked the longer one as workinprogress
16:08:21 <mestery> OK, so lets see if we can get the short one into Juno and then backport it to the first stable icehouse release.
16:08:23 <amotoki> in my understanding *-rc-potential is higher priority.
16:09:02 <banix> i think we can still get it into icehouse, considering how short the patch is; depending on your reviews of course
16:09:17 <rkukura> I’d suggest we treat this as the fix for the current bug, and decide at summit whether to take the undo approach from the longer patch or maybe the resync approach Sukhdev and I are proposing for juno
16:10:07 <mestery> So, for the shorter patch, we're now considering RC2 if there is one?
16:10:20 <banix> rkukura: agree. we will need both the undo + resync or something to that effect
16:10:34 <banix> rkukura: for the next cycle that is
16:10:43 <rkukura> don’t know about other distributions, but Red Hat’s is generally based on the 1st stable release, so we should think about whether the current fix is sufficient for that
16:10:54 <banix> rkukura: just saying a resync won't be enough on its own. that's all.
16:11:06 <rkukura> banix: I’m not disagreeing
16:11:10 <amotoki> rkukura: 1st stable "update"?
16:11:38 <rkukura> amotoki: The 1st stable branch release after the initial icehouse release
16:11:45 <amotoki> i see.
16:12:04 <rkukura> Don’t know if that is current plan, but thats been done in the past
16:12:44 <shivharis> 2014.1.1
16:12:59 <rkukura> shivharis: that would be it
16:13:29 <mestery> OK, for now I'll leave it as backport candidate, if we get this review pushed up soon, we can see about having it a part of a potential RC2.
16:13:36 <rkukura> my point is just that the period between 2014.1 and 2014.1.1 is the time to shake out bugs, etc.
16:13:47 <rkukura> mestery: +1
16:13:55 <mestery> Agreed rkukura.
16:14:49 <mestery> #topic Reviews/Bugs Being Tracked
16:15:10 <mestery> I know we just covered one of these, but the main other review we're tracking is rkukura's here: https://review.openstack.org/#/c/82945/
16:15:42 <rkukura> this is the final part of the port binding work proposed back in February
16:15:59 <mestery> rkukura: Thanks for all your work here! The last leg of the journey is almost complete. :)
16:16:44 <rkukura> would have liked to get this into rc1, but I don’t think markmcclain will accept it for rc2
16:17:01 <mestery> Agreed. But maybe as a backport potential, so lets get it into Juno now.
16:17:13 <mestery> There is one other bug which was just reported.
16:17:17 <mestery> https://bugs.launchpad.net/neutron/+bug/1301449
16:17:21 <mestery> This is related to the ODL ML2 driver.
16:17:34 <mestery> amotoki and myself were discussing in-channel prior to the meeting.
16:17:42 <mestery> I'm going to try recreating this post-meeting.
16:18:07 <banix> mestery: what amotoki was suggesting would do the trick :)
16:18:23 <mestery> banix: Good! I'll have a patch out soon!
16:18:36 <rkukura> was this in #openstack-neutron?
16:19:00 <amotoki> yes. 00:48:27 <amotoki> i haven't tried ODL after event-notification was implemented .... If ODL detects port plugging immediately, I think we can set the initial port status to ACTIVE.
16:19:10 <rkukura> I agree setting the initial status to ACTIVE should do it
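[editor's note: a hypothetical sketch of amotoki's suggestion — report newly created ports as ACTIVE instead of the default DOWN, assuming the controller detects port plugging immediately. Names are illustrative, not the real ODL driver or neutron API.]

```python
# Sketch of the suggested fix: give a newly created port status ACTIVE
# rather than leaving it DOWN waiting for an agent report, on the
# assumption the controller reacts to plugging immediately.
# FakePlugin and create_port_postcommit are illustrative stand-ins.

class FakePlugin:
    """Stand-in for the core plugin's status-update entry point."""
    def __init__(self):
        self.statuses = {}

    def update_port_status(self, port_id, status):
        self.statuses[port_id] = status


def create_port_postcommit(plugin, port):
    # Without the fix the port would stay DOWN until something reported
    # it up; with an immediately-reacting controller we can go ACTIVE.
    plugin.update_port_status(port['id'], 'ACTIVE')


plugin = FakePlugin()
create_port_postcommit(plugin, {'id': 'port-1'})
print(plugin.statuses['port-1'])  # ACTIVE
```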
16:19:25 <mestery> #action mestery to fire off a quick patch to fix https://bugs.launchpad.net/neutron/+bug/1301449
16:19:33 <mestery> OK, thanks for the consult on that one everyone!
16:19:41 <mestery> Any other bugs we should be tracking at the moment?
16:19:47 <rcurran> yes https://review.openstack.org/#/c/83546/
16:20:11 <rcurran> pardon my ignorance - is juno "open" for commits?
16:20:21 <mestery> rcurran: Yes, Juno is open now.
16:20:46 <rcurran> ok, then i'm fine w/ that bug i just listed getting into juno
16:20:48 <Sukhdev> I pushed a very minor change - hoping to get into RC2 - can you please look https://review.openstack.org/#/c/84603/
16:21:21 <mestery> rcurran Sukhdev: I'll review both of those yet today.
16:21:37 <mestery> Sukhdev: Can you push a new version which passes the Arista Jenkins as well?
16:21:38 <Sukhdev> mestery: Thanks
16:21:45 <rcurran> mestery, for rc2 and/or juno
16:21:47 <mestery> Sukhdev: Or at least retrigger a run on the current patch?
16:21:53 <Sukhdev> mestery: already done
16:21:55 <mestery> rcurran: Yours is Juno.
16:22:04 <mestery> Sukhdev: Just saw that, thanks!
16:22:16 <mestery> Sukhdev: I'm not sure this one will get into RC2 either, frankly.
16:22:26 <mestery> It's up to markmcclain I think at this point.
16:22:44 <Sukhdev> mestery: considering the change is very minor - I will keep my fingers crossed :-)
16:22:53 <rkukura> Sukhdev: I’ll review that today
16:23:03 <Sukhdev> rkukura: Thanks
16:23:04 <amotoki> Sukhdev: it looks incompatible with older version. is it intended?
16:23:20 <rkukura> seems like a necessary change, and no impact outside the one driver
16:23:58 <Sukhdev> amotoki: We are rolling out a new version of EOS - but, it will continue to work with the older version - thought we'd match the new EOS version with Icehouse - hence the patch
16:24:26 <amotoki> Sukhdev: thanks for the clarification. sounds good.
16:25:40 <mestery> #topic Juno Summit Discussion
16:25:49 <mestery> I wanted to spend some time talking about ML2 talks for the Juno Summit.
16:26:02 <mestery> We have until April 20th to submit talks, wanted to get an idea of what the team is looking to submit.
16:26:13 <mestery> Where we want to take ML2, etc.
16:26:36 * mestery opens the floor for discussion.
16:26:58 <asadoughi> i've submitted http://summit.openstack.org/cfp/details/155 regarding ovs-based security groups (ovs-firewall-driver blueprint)
16:27:18 <mestery> asadoughi: Great! This would be really nice to get into Juno!
16:27:44 <asadoughi> yes, i agree
16:27:50 <rkukura> +1
16:27:56 <Sukhdev> Bob and I are planning one on ML2 Sync mechanism
16:28:24 <mestery> Sukhdev: OK, that's good too!
16:29:06 <amotoki> I am exploring db migration per ML2 driver in Juno. many unrelated tables are created now. (not sure for summit session)
16:29:20 <mestery> amotoki: That's an interesting idea.
16:29:34 <mestery> In the past, we've also had people looking to refactor type drivers (asomya is working on that).
16:29:42 <rkukura> I’ve got a couple completion/refactoring/enhancement kind of things that aren’t new features, so don’t need full sessions
16:29:44 <mestery> And there was the thought of per-MD extensions at some point I think?
16:29:55 <mestery> Extensible RPC calls?
16:30:14 <rkukura> Giving drivers control over the RPC handling was one I was suggesting
16:30:22 <Sukhdev> mestery: do you have link to asomya's work?
16:30:25 <banix> +1 on per-MD extensions
16:30:26 <mestery> rkukura: OK, cool.
16:30:41 <amotoki> Re per-MD extension, i am still interested in it, but is an API framework change planned in juno?
16:30:41 <HenryG> #link https://blueprints.launchpad.net/neutron/+spec/ml2-type-driver-refactor
16:30:45 <rkukura> Another was real support for bulk ops, where the precommits are all one transaction
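[editor's note: a toy sketch of rkukura's bulk-ops idea — all precommits run inside one transaction, so one failure rolls back the whole batch. The transaction object is a stand-in, not neutron's session handling.]

```python
# Sketch of "bulk ops with one transaction": if any item's precommit
# raises, nothing from the batch is committed. FakeTransaction is an
# illustrative stand-in for a DB session transaction.

class FakeTransaction:
    def __init__(self):
        self.committed = []
        self._pending = []

    def add(self, record):
        self._pending.append(record)

    def run(self, precommits):
        try:
            for fn in precommits:
                fn(self)
        except Exception:
            self._pending.clear()   # roll back the whole batch
            raise
        self.committed.extend(self._pending)


txn = FakeTransaction()
ok = lambda t: t.add('net-1')
boom = lambda t: (_ for _ in ()).throw(ValueError('md failed'))

try:
    txn.run([ok, boom])
except ValueError:
    pass
print(txn.committed)  # [] -- nothing committed when one precommit fails
```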
16:31:09 <Sukhdev> HenryG: Thanks
16:31:10 <mestery> Thanks HenryG
16:31:28 <rkukura> did the partial spec work get in, or should we put that on the list?
16:32:03 <mestery> rkukura: Not sure, we should probably add it to the list.
16:32:20 <mestery> I was going to start an etherpad for this before the meeting, but didn't have time. I will do that post meeting for us all to post ideas to.
16:32:29 <mestery> #action mestery to start etherpad for Juno Summit ML2 idea collaboration
16:32:31 <HenryG> rkukura: you mean for the provider attributes?
16:32:35 <rkukura> I think a couple people have mentioned dynamic (VLAN) segments from host to ToR switch
16:32:58 <mestery> rkukura: +1 to that idea
16:33:01 <rkukura> HenryG: that’s the partial spec work I mentioned
16:33:53 <rkukura> so are partial specs and type driver refactoring closely related?
16:34:06 <Sukhdev> rkukura: yes - that is why I want to see what asomya is doing and then build upon it - I have a strong interest in dynamic VLAN area
16:34:30 <rkukura> One more from my list - driver-specific and shared API extensions
16:34:35 <HenryG> Partial specs BP: https://blueprints.launchpad.net/neutron/+spec/provider-network-partial-specs
16:34:38 <mestery> We should have asomya present his work at next week's meeting, I'll work with him on that.
16:34:46 <rkukura> mestery: +`
16:34:47 <rkukura> +1
16:34:48 <mestery> #action asomya to present his type-driver refactoring at next week's meeting
16:35:29 <Sukhdev> mestery: +1
16:35:43 <rkukura> There’s alway the modular agent - we had a bit of discussion in HK on that, but is anyone interested in pushing that forward?
16:36:10 <mestery> With the addition of the ofagent into the mix, I think a modular agent makes some sense here for drivers which require that architecture.
16:36:55 <Sukhdev> rkukura, mestery: I remember you mentioning ML3 at the Portland summit - any work on that?
16:37:03 <amotoki> i need request-id between server and agent. modular agent framework really helps it.
16:37:50 <mestery> Sukhdev: I don't think much has happened with ML3 yet to be honest.
16:38:05 <mestery> We should probably work with the L3 team in that area, which carl_baldwin is leading.
16:38:55 <mestery> OK, anything else here or should we wrap up a bit early today?
16:39:05 <overlayer> I'd like to know if this BP still makes sense as of the current state of ML2: https://blueprints.launchpad.net/neutron/+spec/ml2-external-port, if it was developed for Juno.
16:39:11 <Sukhdev> I have a question to ask
16:39:13 <banix> A quick question?
16:39:19 <mestery> :)
16:39:35 <rkukura> mestery, Sukhdev: Might be good to come up with some ML2+ML3 use cases to discuss with the L3 team
16:39:47 <mestery> overlayer: I would have to look at that in detail, I remember reading it at one point. Are you interested in working on that for Juno?
16:39:55 <mestery> +1 to that rkukura
16:40:10 <Sukhdev> I asked earlier - did not get an answer, asking differently - Does the ML2 plugin enforce ordering of MDs?
16:40:38 <mestery> Sukhdev: Yes, based on order in the config file.
16:40:43 <rkukura> Sukhdev: yes, MDs are called in the order listed
16:40:48 <overlayer> mestery, Yes I'm interested in working on that or on something similar that might come from it.
16:40:52 <amotoki> Sukhdev: we need to manage the list manually now.
16:41:15 <Sukhdev> In that case banix's fix may create issues
16:41:40 <Sukhdev> an ordered list would imply one MD is dependent upon the work done by another MD…
16:41:46 <mestery> overlayer: I'll review your proposal in more detail, but filing a summit session for it may not be a bad idea.
16:42:03 <Sukhdev> if the next one is called when the previous failed, doesn't that create an issue?
16:42:27 <rkukura> overlayer: Would like to see proposals based on current ML2
16:42:48 <banix> Don't lose your tempers here :) but is it reasonable/advisable to backport a simple patch to Havana that defines portbindings.VIF_DETAILS as CAPABILITIES?
16:43:11 <rkukura> Sukhdev: Initial rationale for the ordering was to prioritize which MD gets to bind the port
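[editor's note: a simplified sketch of the ordering rationale rkukura gives — the manager tries drivers in the configured order and the first one that binds the port wins. These are illustrative stand-ins, not the real ML2 manager.]

```python
# Sketch of why the mechanism_drivers ordering (from ml2_conf.ini)
# matters for binding: drivers are tried in listed order, and the first
# driver that binds the port wins. Simplified stand-ins only.

class Context:
    def __init__(self):
        self.bound_by = None

    def set_binding(self, driver_name):
        self.bound_by = driver_name


def make_driver(name, can_bind):
    def bind_port(context):
        if can_bind and context.bound_by is None:
            context.set_binding(name)
    return bind_port


# Order mirrors the mechanism_drivers list in the config file.
drivers = [make_driver('openvswitch', True),
           make_driver('cisco_nexus', True)]

ctx = Context()
for bind in drivers:   # called in listed order
    bind(ctx)
    if ctx.bound_by:
        break          # first successful binding wins
print(ctx.bound_by)  # openvswitch -- earlier in the list
```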
16:43:34 <Sukhdev> therefore, I believe our present implementation (without banix fix ) is good
16:43:35 <overlayer> rkukura, Okay, do you see anything in the original blueprint that stands out as outdated?
16:43:56 <amotoki> overlayer: is there any detail on your bp? is it a kind of gateway?
16:43:57 <rkukura> banix: Wasn’t the attribute called binding:capabilities in havana?
16:44:19 <rkukura> overlayer: Its been a while since I looked, so not sure
16:44:58 <banix> rkukura: yes, something to make a plugin in Icehouse work with Havana if the only difference is the change from capabilities to vif_details
16:44:59 <overlayer> amotoki, rkukura, the BP isn't mine but I'd be willing to start working from there
16:46:24 <rkukura> overlayer: You should propose a session
16:47:03 <amotoki> banix: vif_detail is used to select VIF driver in nova. In havana nova has the configuration for vif driver.
16:47:03 <overlayer> in order to, eventually, merge different legacy networks into one overlay one (https://blueprints.launchpad.net/neutron/+spec/campus-network)
16:47:44 <overlayer> mestery, rkukura, Okay, thank you :)
16:47:49 <rkukura> banix: do you mean to make plugins or ml2 drivers work with havana?
16:48:25 <Sukhdev> overlayer: you may want to look at this as well http://summit.openstack.org/cfp/details/74
16:49:20 <Sukhdev> Folks what about the point that I raised regarding the banix fix?
16:49:23 <banix> rkukura: in my particular case a monolithic plugin we have in Icehouse can work with a Havana OpenStack that does not have our plugin, except for requiring a change in code from CAPABILITIES to VIF_DETAILS.
16:50:03 <overlayer> Sukhdev, Awesome, thanks!
16:50:28 <amotoki> banix: in havana nova-compute has the configuration for the VIF driver, so i think icehouse plugins work with havana nova-compute (from this point of view)
16:50:41 <rkukura> banix: Would it be much more work to do the renaming VIF_DETAILS->CAPABILITIES in a havana-specific version of the plugin?
16:51:27 <banix> rkukura: no except that the plugin has been added in Icehouse. Did not exist in Havana.
16:52:07 <banix> amotoki: yeah but the plugin used VIF_DETAILS that is not defined in Havana.
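[editor's note: a hedged sketch of the compatibility workaround banix describes — an Icehouse plugin referencing portbindings.VIF_DETAILS can fall back to the Havana-era CAPABILITIES constant when VIF_DETAILS is absent. The module here is faked; the string value matches Havana's constant.]

```python
# Compat-shim sketch: prefer the Icehouse VIF_DETAILS constant, fall
# back to the Havana CAPABILITIES constant if it doesn't exist.
# HavanaPortbindings fakes the Havana-era portbindings module.

class HavanaPortbindings:
    CAPABILITIES = 'binding:capabilities'   # Havana-era name


portbindings = HavanaPortbindings()

# On Icehouse this picks up VIF_DETAILS; on Havana it falls back.
VIF_DETAILS = getattr(portbindings, 'VIF_DETAILS',
                      getattr(portbindings, 'CAPABILITIES', None))
print(VIF_DETAILS)  # binding:capabilities
```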
16:52:16 <rkukura> Sukhdev: Is your point only relevant if one MD depends on something done by another eariler on the list?
16:52:26 <banix> If not related to ML2 I can discuss this later or on the ML.
16:52:40 <Sukhdev> rkukura: correct
16:52:46 <amotoki> banix: sounds good.
16:53:00 <mestery> Thanks banix.
16:53:07 <rkukura> banix: Not sure if your goal is to get the icehouse plugin working on havana upstream stable branch, or in havana distros, or what?
16:53:13 <mestery> Sukhdev: May be best to continue your discussion on the ML as well.
16:53:18 <overlayer> I'm kind of new around here, so... What does exactly mean submitting a summit session? Some of them will be accepted for a public presentation in Atlanta? Is it just a simple discussion with all the folks gathered together? Or does every session get discussed, even if not in Atlanta?
16:53:44 <banix> Sorry guys if I distracted the discussion...
16:53:55 <Sukhdev> mestery: no worries - but, just wanted to raise everybody's awareness about the potential issue :-)
16:53:57 <mestery> overlayer: Only a small subset get selected for a slot in Atlanta. You are supposed to have a BP to discuss, with the intention of doing the work in Juno.
16:54:01 <rkukura> Sukhdev: I guess MDs could/should be written defensively if they have that kind of dependency
16:54:25 <Sukhdev> rkukura: I agree
16:55:02 <rkukura> Sukhdev: Ideally, MDs would just not throw exceptions!
16:55:12 <rkukura> at least in post-commit
16:55:34 <rcurran> rkukura, then how do you report back errors?
16:55:50 <Sukhdev> rkukura: only worry I have about banix's fix is that there may be a cascade effect - and many MDs start to throw exceptions
16:55:59 <rcurran> if a postcommit fails on its action that's bad news to the admin
16:56:13 <rkukura> rcurran: I think this is where we need to do some work - putting the resource into an error state, scheduling a resync, or whatever
16:56:22 <rcurran> in my book, LOG.error isn't good enough
16:56:55 <rkukura> rcurran: I agree, but is the error recoverable?
16:56:56 <Sukhdev> rcurran: log helps - but, sync mechanism will hopefully help better
16:57:11 <rcurran> assuming the sync works
16:57:34 <rcurran> overall i agree - this needs to be scrubbed more
16:57:36 <rkukura> Seems one session devoted to ML2 driver error handling/recovery may be needed
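[editor's note: a sketch of the defensive postcommit handling discussed above — log the failure and flag the resource (error state / resync) rather than letting one driver's exception abort the others. Helper names are assumptions, not neutron API.]

```python
# Illustrative sketch of defensive postcommit dispatch: keep calling
# the remaining drivers after a failure, then flag the resource for
# error handling / resync. mark_error is a hypothetical callback.

import logging

LOG = logging.getLogger(__name__)


def call_postcommits(drivers, resource, mark_error):
    """Call every driver's postcommit; record failures, don't re-raise."""
    failed = False
    for driver in drivers:
        try:
            driver(resource)
        except Exception:
            LOG.exception('postcommit failed for %s', resource)
            failed = True            # keep calling the remaining drivers
    if failed:
        mark_error(resource)         # e.g. set status ERROR, queue resync


errors = []
ok = lambda r: None
bad = lambda r: (_ for _ in ()).throw(RuntimeError('backend down'))
call_postcommits([bad, ok], 'port-1', errors.append)
print(errors)  # resource flagged even though one driver failed
```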
16:58:05 <rkukura> Do we think banix’s current patch helps or hurts?
16:58:35 <Sukhdev> rkukura: if MDs have inter-dependence, then it hurts otherwise no
16:58:44 <mestery> Folks, 2 minutes left here, just FYI, we'll have to cut things off really quickly here.
16:58:44 <rcurran> honestly i didn't care one way or another. we're in exception handling code and something went wrong that the admin needs to know about
16:58:53 <mestery> I propose we continue this current discussion on the ML.
16:59:01 <rkukura> +1
16:59:05 <banix> I come in peace :)
16:59:09 <rcurran> :-)
16:59:16 <Sukhdev> :-)
16:59:16 <mestery> banix: Of course. :)
16:59:24 <mestery> OK, thanks everyone, lots of good discussions today.
16:59:25 <rkukura> mestery: What’s the followup on the session proposals - are you creating an etherpad?
16:59:35 <mestery> rkukura: Yes, I gave myself an AI for that.
16:59:40 <mestery> I'll send out the link once I do that today.
16:59:41 <rkukura> thanks
16:59:45 <banix> thanks everybody
16:59:47 <mestery> We'll see everyone next week!
16:59:49 <mestery> #endmeeting