21:01:55 <mestery> #startmeeting networking
21:01:55 <openstack> Meeting started Mon Jun  2 21:01:55 2014 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:59 <openstack> The meeting name has been set to 'networking'
21:02:07 <mestery> #link https://wiki.openstack.org/wiki/Network/Meetings Agenda
21:02:12 <mestery> #topic Announcements
21:02:24 <mestery> Juno-1 is next Thursday, June 12.
21:02:27 <mestery> #link https://launchpad.net/neutron/+milestone/juno-1
21:03:09 <mestery> I know Juno-1 is coming very fast after the summit, but I think we have a good number of BPs and bugs settled for it.
21:03:29 <mestery> I'm going to make a special request for people to prioritize reviewing DVR patches to see how many we can still land for Juno-1.
21:03:39 <mestery> DVR is one of the most critical pieces for nova-network parity.
21:04:04 <armax> https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/neutron-ovs-dvr,n,z
21:04:06 <salv-orlando> mestery: I think the API-side patch is fast approaching maturity. The other question is whether people think it’s ok to merge without backend support
21:04:08 <mestery> Any questions on Juno-1 items?
21:04:10 <mestery> Thanks armax!
21:04:12 <armax> mestery: ^
21:04:24 <mestery> salv-orlando: That was something to discuss here, honestly.
21:04:30 <salv-orlando> I think it is ok.
21:04:41 <mestery> I was hoping to start landing DVR patches to show the progress, honestly.
21:05:02 <markmcclain> progress is good, but I'm a little concerned if we can't test the backend
21:05:03 <salv-orlando> yeah I can also leverage that to get rid of the NSX-specific extension
21:05:32 <armax> salv-orlando: I was going to do that next, but you’re more than welcome to take it off my plate :P
21:05:45 <mestery> markmcclain: I understand. I think though if that's the case, we won't land DVR patches until Juno-2, because I don't think the backend will land in Juno-1.
21:05:51 <regXboi> A bunch of the patches on that search are -1 on the CR...
21:06:02 <salv-orlando> armax: I would not dare taking work off the hands of a very respected colleague ;)
21:06:15 <mestery> regXboi: Yes, I think there are comments being addressed, and some are marked WIP.
21:06:28 <mestery> OK, I don't want to get too bogged down with this as there is a lot to cover today.
21:06:32 <armax> salv-orlando: nice way to dodge a hot one
21:06:40 <mestery> If people have Juno-1 questions, please send them to the ML or ping me in-channel.
21:06:45 <salv-orlando> armax: that would be a warm one honestly
21:06:55 <mestery> I also wanted to point out the two mid-cycle meetups one more time, please see the agenda.
21:07:09 <mestery> markmcclain and I are looking for one more core to join us in San Antonio in two weeks if that's possible. :)
21:07:30 <mestery> I hear it's hot there this time of year if that helps nudge anyone. ;)
21:07:36 <mestery> OK, moving on.
21:07:38 <mestery> #topic Bugs
21:07:43 <salv-orlando> I’m unlikely to be there, but I was wondering if we’ve decided how to handle the API changes from an API versioning perspective
21:07:46 <markmcclain> mestery: you're supposed to tell them the Alamo is something to see :)
21:07:48 <mestery> #undo
21:07:50 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x23cffd0>
21:08:09 <salv-orlando> mestery: no need to address now, I just wanted to know if we’re thinking about that as the model and the API are going to be rewritten
21:08:09 <mestery> salv-orlando: That's on the agenda for later today I believe, I think I added it.
21:08:17 <salv-orlando> cool, let’s move on then
21:08:18 <mestery> salv-orlando: We are, I will send an email to the ML.
21:08:21 <mestery> #topic Bugs
21:08:32 <mestery> OK, salv-orlando, enikanorov, and I have been looking into the ssh failure issues, which are on the rise again.
21:08:38 <mestery> salv-orlando: Did you want to update people here?
21:08:38 <enikanorov> yep
21:08:44 <enikanorov> i've filed the bug for this case
21:08:45 <mestery> enikanorov: Also feel free to update.
21:08:59 <enikanorov> public connectivity issue after reboot: https://bugs.launchpad.net/neutron/+bug/1325737
21:09:00 <uvirtbot> Launchpad bug 1325737 in neutron "Public network connectivity check failed after VM reboot" [High,Confirmed]
21:09:03 * salv-orlando lets enikanorov do the talk
21:09:23 <mestery> salv-orlando also has this patch to aid in debugging via tempest: https://review.openstack.org/#/c/97269/
21:09:32 <enikanorov> so far the reboot is the condition that makes this different from what we've seen previously
21:09:52 <salv-orlando> I’ve looked at a few logs. There’s nothing suspicious in the agent.
21:09:56 <enikanorov> +
21:10:02 <arosen> enikanorov is the reboot a nova reboot; or nova reboot --hard?
21:10:05 <salv-orlando> Also - I’ve seen the same failure on vmware minesweeper for NSX
21:10:12 <arosen> the latter does a rebuild of the instance, i think
21:10:26 <mestery> arosen: I *think* it was a soft reboot, but I could be wrong.
21:10:28 <salv-orlando> that pretty much confirms the problem lies within the instance. The console log might tell us more.
21:10:35 <enikanorov> arosen: i don't know; i saw 'instance was destroyed' in the nova logs
21:10:44 <salv-orlando> arosen: the vif is never unplugged, so it’s soft I guess
21:10:52 <arosen> salv-orlando:  right
21:11:17 <arosen> I think the agent might not detect that the vif was removed and re-added during the rebuild.
21:11:55 <markmcclain> arosen: I thought we addressed that late in Icehouse
21:11:56 <mestery> It's worth noting this is also affecting stable/icehouse as well.
21:12:07 <arosen> markmcclain: how?
21:12:20 * markmcclain looks up review
21:12:32 <salv-orlando> arosen: after reboot I also see the dhcp server distributing the IP to the instance
21:12:54 <arosen> markmcclain:  if i remember correctly the agent polls by running ovs-vsctl list-ports; if the vif is removed and re-added to the bridge during the poll period, i think we fail to reprogram the flows.
21:13:04 <armax> mestery, salv-orlando: that’s why I believe the trigger is this one https://review.openstack.org/#/c/90427/
21:13:20 <armax> because tempest does not have a stable branch
21:13:21 <markmcclain> arosen: https://review.openstack.org/#/c/66375/
21:13:26 <mestery> arosen: I think you may be correct per our discussions earlier.
21:13:47 <salv-orlando> armax: if you can prove to me that it triggers a failure after the instance is shut off, I’ll believe it
21:14:03 <salv-orlando> anyway - we should probably discuss this after the meeting
21:14:10 <mestery> Agreed salv-orlando.
21:14:16 <mestery> Lets continue discussion after the meeting in-channel on this.
21:14:19 <armax> salv-orlando: yup
21:14:28 <mestery> OK, any other bugs we should be tracking as a team enikanorov?
21:14:31 <enikanorov> ok, another bug worth mentioning is https://bugs.launchpad.net/neutron/+bug/1314313
21:14:33 <uvirtbot> Launchpad bug 1314313 in neutron "Firewall fails to become active within 300 seconds" [High,Confirmed]
21:14:45 <mestery> salv-orlando: Is SridarK still looking at that one?
21:14:49 <mestery> SridarK: any progress?
21:14:58 <SridarK> mestery: yes working this
21:15:12 <salv-orlando> I think enikanorov is following this bug and another that is popping up in the gate too
21:15:20 <SridarK> seems like a corner-case timing issue - it does not happen in normal operation
21:15:25 <enikanorov> yep
21:15:28 <SridarK> will try to get some closure this week
21:15:35 <enikanorov> i've also pushed a patch to help troubleshoot it in the gate
21:15:39 <mestery> SridarK: OK, lets discuss in-channel if needed.
21:15:41 <mestery> enikanorov: Thank you!
21:15:48 <SridarK> mestery: sure
21:15:50 <mestery> OK, lets move on, there is a lot to cover still. :)
21:15:57 <mestery> #topic Team Discussion Topics
21:16:00 <marun> re: vif plugging, if we switched to using the monitor....
21:16:09 <marun> sorry, save for open discussion
21:16:10 <mestery> #undo
21:16:10 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x247aa50>
21:16:16 <mestery> marun: I thought we used the monitor by default?
21:16:18 <mestery> Isn't that the case?
21:16:20 <arosen> marun:  right -- i think that's the correct fix :)
21:16:24 <marun> we don't use it to track vif state
21:16:29 <marun> only to note that we need to poll for it
21:16:30 <mestery> marun: Argh!
21:16:35 <arosen> just notifications directly.
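(A minimal sketch of the race being discussed, in Python; the helper names are illustrative, not Neutron's actual agent API. Polling with ovs-vsctl only compares snapshots, so a vif removed and re-added between two polls looks unchanged, while streaming events from ovsdb-client monitor surfaces the delete/add pair explicitly.)

    import subprocess

    def poll_ports(bridge):
        # Snapshot of the current ports; anything removed and re-added
        # between two calls is invisible to the caller.
        out = subprocess.check_output(['ovs-vsctl', 'list-ports', bridge])
        return set(out.decode().split())

    def watch_interfaces():
        # Stream Interface table changes instead; a rebuild shows up
        # as an explicit delete event followed by an add event.
        proc = subprocess.Popen(
            ['ovsdb-client', 'monitor', 'Interface', 'name,ofport'],
            stdout=subprocess.PIPE)
        for line in proc.stdout:
            yield line.decode().rstrip()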
21:16:50 <mestery> OK.
21:16:53 <mestery> #topic Team Discussion Points
21:17:01 <mestery> I added this section to discuss a few things which didn't fit elsewhere.
21:17:04 <mestery> Is ihrachyshka here?
21:17:08 <ihrachyshka> mestery: yep
21:17:16 <mestery> ihrachyshka: You wanted to discuss oslo.messaging for a minute I believe.
21:17:22 <salv-orlando> marun: I am actually doing that. But if you have other people doing that, I will stop where I am.
21:17:35 <ihrachyshka> mestery: hey. so re: oslo.messaging port, not much to update here really. I've worked on splitting the patch in pieces, have some progress here (see https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/oslo-messaging,n,z)
21:17:49 <ihrachyshka> it's still WIP, so read with caution :)
21:17:52 <markmcclain> ihrachyshka: thanks for splitting it
21:17:53 <mestery> #link https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:bp/oslo-messaging,n,z
21:17:53 <salv-orlando> marun: I meant processing directly what the monitor says should be processed
21:18:07 <mestery> ihrachyshka: Was there an issue with splitting the patches up?
21:18:11 <ihrachyshka> mestery: and my concern about not being able to split it into pieces has resolved itself :)
21:18:19 <mestery> ihrachyshka: Great!
21:18:23 <ihrachyshka> mestery: I've found a way to split it
21:18:31 <salv-orlando> a boring process thing re: oslo.messaging
21:18:59 <ihrachyshka> I hope we'll have something for the next week
21:19:00 <salv-orlando> do we formally need to review a spec? I think not, but let me know if you feel otherwise
21:19:20 <mestery> salv-orlando: I was leaning toward not as well.
21:19:22 <marun> salv-orlando: if you're working on it, great.
21:19:51 <mestery> OK, next sub-topic here: OVS firewall driver.
21:19:52 <marun> salv-orlando: is there a spec or are you prototyping?
21:19:57 <salv-orlando> marun: yes I am; I will poke you as soon as I’ve something I can show you (working slowly in spare time).
21:20:01 <ihrachyshka> salv-orlando: I think we can push something very high level
21:20:04 <marun> salv-orlando: ok
21:20:10 * mestery is confused by the cross talk.
21:20:13 <mestery> Which spec are we referring to?
21:20:22 <asadoughi> hi.
21:20:22 <salv-orlando> marun: honestly I am prototyping. I’ll do the spec once I see something working
21:20:30 <salv-orlando> mestery: we were talking about ovs monitor and ovs agent.
21:20:34 <salv-orlando> stopping that now.
21:20:35 <asadoughi> #link https://review.openstack.org/#/c/89712/ ovs-firewall-driver blueprint
21:20:36 <mestery> salv-orlando: OK, thus my confusion.
21:20:42 <mestery> asadoughi: Hi!
21:20:51 <mestery> So, I wanted to discuss OVS firewall driver here with the broader team.
21:20:56 <mestery> There is an issue with it as it stands now:
21:21:04 <asadoughi> last week in the ml2 meeting we discussed moving forward with the blueprint given the constraints of OVS functionality, i.e. no connection tracking
21:21:19 <mestery> asadoughi: OK, and that gets us 80% of the iptables driver, right?
21:21:59 <asadoughi> mestery: effectively what we can have today is the iptables driver minus the RELATED feature, with stateless rules for non-TCP flows
21:21:59 <mestery> I just wanted the broader core team's opinion on moving forward with this, given the limitations.
21:22:03 <mestery> So this is the chance to chime in. :)
21:22:45 <chuckC> timetable for the last 20%?
21:22:46 <marun> What is the alternative?  Waiting until it's 100%?
21:22:50 <asadoughi> to make this more explicit to the cloud operator and end user, we've added a feature configuration to provide API validation around security group rules, rejecting rules that the OVS implementation would not match but the iptables implementation would
21:22:52 <kevinbenton> this changes the way security groups need to be configured. i think the API needs to somehow indicate whether they are stateful or stateless
21:23:02 <asadoughi> chuckC: 2015, k or l cycle
21:23:05 <mestery> marun: connection tracking is being implemented in OVS now, I've seen the design doc on the mailing list there.
21:23:26 <rkukura> asadoughi: Any additional thoughts on having the server reject attempts to create rules that it won’t be able to enforce when the ovs-firewall-driver is in use?
21:23:26 <mestery> kevinbenton: That's the main concern, yes.
21:23:30 <marun> So ovs firewall is experimental in juno...
21:23:38 <mestery> marun: That's one approach, yes.
21:23:42 <marun> it won't be feature complete and/or well-tested
21:23:46 <asadoughi> rkukura: no, i've expressed all i have in the blueprint as-is today
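(A minimal sketch of the kind of server-side validation being discussed, with hypothetical field and constant names; the actual proposal lives in the blueprint linked above.)

    # Capabilities the OVS driver lacks without connection tracking
    # (hypothetical constants for illustration).
    UNSUPPORTED_STATES = {'RELATED'}
    STATEFUL_PROTOCOLS = {'tcp'}

    def validate_rule_for_ovs(rule):
        # Reject security group rules the OVS firewall driver could not
        # enforce, rather than silently under-enforcing them.
        if rule.get('state') in UNSUPPORTED_STATES:
            raise ValueError('RELATED matching requires connection tracking')
        protocol = rule.get('protocol')
        if rule.get('stateful') and protocol not in STATEFUL_PROTOCOLS:
            raise ValueError('%s rules are stateless in the OVS driver'
                             % protocol)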
21:24:04 <mestery> I am ok with calling it experimental in Juno. asadoughi, is this ok?
21:24:09 <asadoughi> kevinbenton: sure, if you could comment on the review what you'd like the request to look like with that in mind
21:24:14 <asadoughi> mestery: sure
21:24:22 <nati_ueno> I'm OK if there is clear documentation & if it is experimental
21:24:24 <marun> and when it is no longer experimental, what is the advantage over iptables?  easier to configure remotely?
21:24:29 <marun> faster?
21:24:31 <mestery> #info OVS firewall driver will be labeled experimental in Juno
21:24:34 <markmcclain> I just think the incomplete implementation will cause confusion
21:24:40 <asadoughi> marun: better performance w/o extra bridge (minus 2 hops)
21:24:44 <mestery> marun: One less bridge when using OVS for VIFs, for one thing.
21:24:56 <marun> but we don't have numbers to back that up....
21:24:57 <marun> so...
21:25:02 <mestery> marun: It's less confusing to users who look and wonder why so many bridges.
21:25:03 <marun> we *think* it will be faster
21:25:06 <asadoughi> marun: vthapar is working on numbers
21:25:11 <mestery> marun: Even if it isn't faster, it's simpler.
21:25:19 <marun> kind of
21:25:19 <sweston> does this have any bearing on how I am handling https://bugs.launchpad.net/neutron/+bug/1163569 ?
21:25:20 <uvirtbot> Launchpad bug 1163569 in neutron "security groups don't work with vip and ovs plugin" [High,Triaged]
21:25:33 <marun> there is still an educational burden
21:25:37 <banix> vthapar working on getting some perf numbers afaik
21:25:37 <markmcclain> sweston: yes
21:25:47 <kevinbenton> mestery: trading troubleshooting firewall rules for troubleshooting openflow rules. :-)
21:25:48 <marun> debugging will be more complicated for people already familiar with iptables
21:25:50 <sweston> markmcclain: I thought so
21:25:50 <asadoughi> sweston: wasn't aware, i'll take a look
21:26:00 <markmcclain> even if it is the most performant, we won't have parity with nova
21:26:08 <marun> ^^ +1
21:26:23 <mestery> Right, which is why I'm advocating for calling it experimental.
21:26:40 <marun> so long as it doesn't take resources away from parity and is labeled experimental, I'm ok with it.
21:26:44 <sweston> so, why not have the implementation of iptables vs openflow rules as a configuration option
21:26:45 <mestery> marun: +1
21:26:51 <salv-orlando> I want to throw more meat on the sec group plate… has anybody thought about whether it would be possible to pass packets in and out openvswitch ports to netfilter for iptables processing?
21:26:51 <asadoughi> markmcclain: specifically you're referring to dvr being ovs only?
21:26:55 <markmcclain> the problem is that the community doesn't really listen to experimental designation
21:27:14 <kevinbenton> salv-orlando: is that how vmware handles it? ;-)
21:27:21 <marun> markmcclain: what would you propose instead?  live out of tree until ready?
21:27:25 <marun> markmcclain: or...?
21:27:30 <salv-orlando> kevinbenton: we use upstream ovs.
21:27:43 <salv-orlando> and I’ve been here two years and still haven’t understood how it works
21:27:49 <mestery> salv-orlando: Ha!
21:27:52 <salv-orlando> that’s black magic for me.
21:27:55 <markmcclain> marun: sadly I think we may have to sit on it a cycle until the upstream project is ready
21:28:20 <marun> markmcclain: what do you mean by 'sit on it'?  not merge it?
21:28:47 <markmcclain> yeah… merging it doesn't really buy us anything
21:28:48 * nati_ueno wondering what black magic salvatore has...
21:28:51 <markmcclain> because we can't test it
21:29:09 <markmcclain> experimental items aren't tested in the gate
21:29:20 <mestery> markmcclain: There is that too, yes.
21:29:51 <mestery> So, is the team's consensus to shelve this for Juno until OVS implements connection tracking?
21:30:05 <markmcclain> +1
21:30:11 <nati_ueno> It looks like we need to discuss what the 'experimental' policy is
21:30:23 <marun> +1 the testing limitation is what pushed me in that direction.
21:30:28 <sweston> I think that's reasonable; why would we implement it in OVS when the functionality works perfectly fine in netfilter?
21:30:30 <banix> mestery: that won’t happen until 2015
21:30:37 <asadoughi> nati_ueno: yeah, this is the first time i'm hearing experimental as well
21:30:44 <mestery> banix: Yes, that's the downside.
21:30:49 <marun> nati_ueno: I think making sure new features can live outside the tree with minimal churn should be the meat of that policy
21:31:03 <mestery> My concerns are around the fact this will only implement 80% of the existing API.
21:31:17 <nati_ueno> marun: This is a driver, so it fits your definition, IMO
21:31:19 <marun> My concern is in the testing...
21:31:41 <markmcclain> mestery: same here, my concern is that it will be 80% and untested
21:31:54 <mestery> OK, lets move on from this for now.
21:32:02 <mestery> We should start a ML thread on this to close on it.
21:32:02 <asadoughi> 80% of the api today, but 100% of the new api extension
21:32:08 <asadoughi> mestery: ok i'll do that
21:32:18 <mestery> asadoughi: Thank you, I was going to ask you to send that. :)
21:32:23 <mestery> OK, lets move on to the next topic.
21:32:25 <mestery> #topic Docs
21:32:28 <mestery> emagana: Howdy!
21:32:32 <mestery> emagana:  Any updates today?
21:32:34 <emagana> hi there!
21:32:50 <emagana> I updated the wiki.. nothing new.. and we are short on time, so let's move on
21:33:00 <salv-orlando> regarding docs, I might be missing stuff on gerrit, but where are we pushing developer docs?
21:33:17 <mestery> salv-orlando: Good question, was sc68cal working on those?
21:33:21 <sc68cal> I was
21:33:43 <emagana> sc68cal: do you want me to look over there?
21:33:55 <salv-orlando> sc68cal: I should really help you. If I do not get in touch with you by Friday feel free to tell everybody on twitter I’m a chicken
21:34:12 <sc68cal> LOL - well I have neglected the devref part of the tree for a while now
21:34:14 * mestery thinks about #info'ing that one from salv-orlando. :)
21:34:19 <mestery> OK, lets move on then.
21:34:22 <mestery> #topic Tempest
21:34:23 <mestery> mlavalle: Hi!
21:34:29 <mestery> #undo
21:34:30 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x23d1f90>
21:34:30 <mestery> mlavalle: Sorry
21:34:34 <mestery> Missed parity. :)
21:34:37 <mestery> #topic Parity
21:34:49 <mestery> markmcclain: Any updates here this week?
21:35:04 <mestery> We covered DVR reviews already I think, which are of high importance for the team.
21:35:05 <markmcclain> the db subteam met to discuss healing
21:35:20 <mestery> HenryG: Thanks for leading the DB meetings!
21:35:51 <mestery> I'll just use this slot to plug the mid-cycle meeting in July around parity efforts as well: https://etherpad.openstack.org/p/neutron-juno-mid-cycle-meeting
21:36:03 <HenryG> A spec is in review https://review.openstack.org/95738
21:36:15 <mestery> Thanks HenryG! It would be great to get core reviewers on that spec.
21:36:29 <mestery> If we could get that merged by next week it puts us in a good place to get the DB work done early in Juno-2.
21:36:52 <mestery> OK, thanks markmcclain and HenryG!
21:36:54 <mestery> #topic Tempest
21:36:55 <salv-orlando> HenryG: do you think any further attempt at getting healing in line with the migration timeline would be a waste of time?
21:37:03 <mestery> #undo
21:37:04 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x237cd90>
21:37:11 <salv-orlando> undo-fest today
21:37:21 <mestery> salv-orlando: Heh :)
21:37:24 <HenryG> salv-orlando: Can you attend the meeting tomorrow at 1300 UTC to discuss?
21:37:26 <salv-orlando> no worries, I’ll discuss that tomorrow at the meeting set up specifically for this
21:37:27 <mestery> salv-orlando: I'm too quick on the draw today.
21:37:37 <salv-orlando> I had another question on parity...
21:37:42 <mestery> salv-orlando: Please go ahead.
21:37:51 <salv-orlando> any update from obondarev regarding migration from nova-net?
21:37:57 <obondarev> sure
21:38:05 <obondarev> after vacation and catching up on everything I've started implementing the migration
21:38:15 <obondarev> as a first step I think it will be a migration of one chosen instance, for simplicity
21:38:29 <obondarev> I decided to go with ML2 OVS for now as DVR will be based on it
21:38:36 <obondarev> and DVR is one of the major parts of nova parity
21:38:40 <mestery> obondarev: Good choice. :)
21:38:45 <obondarev> during this first step I hope the design will be refined
21:39:03 <obondarev> I plan to come up with something working by J-1 at least
21:39:09 <salv-orlando> makes sense to me. So you will have a solution for migrating instances one-by-one from a nova-network network to a neutron one?
21:39:21 <nati_ueno> +1 for that idea
21:39:31 <obondarev> salv-orlando: correct
21:39:43 <salv-orlando> just asking here about requirements - is 0 data plane downtime still a requirement?
21:39:47 <nati_ueno> so we run both nova-network & neutron at the same time?
21:40:06 <markmcclain> salv-orlando: we can drop connections as we move devices
21:40:06 <mestery> salv-orlando: Yes, it is, though brief interruptions that TCP sessions can survive are acceptable.
21:40:27 <obondarev> nati_ueno: yes
21:40:29 <salv-orlando> ah cool. just good to know the rules.
21:40:42 <nati_ueno> so the requirement is no packet drops in TCP?
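(A rough sketch of the per-instance flow obondarev describes, with entirely hypothetical helper names; the real design was still being refined at this point.)

    def migrate_instance(server_id):
        # Move one nova-network instance onto a Neutron (ML2/OVS) network,
        # keeping its MAC and IP so TCP sessions that can survive a short
        # outage are able to resume after the move.
        info = get_nova_net_info(server_id)          # hypothetical helper
        port = create_neutron_port(                  # hypothetical helper
            network_id=info['target_network'],
            mac_address=info['mac'],
            fixed_ip=info['ip'])
        detach_from_nova_network(server_id)          # hypothetical helper
        attach_neutron_port(server_id, port['id'])   # hypothetical helper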
21:40:46 <mestery> OK, 20 minutes left and a few more things to cover yet.
21:40:57 <mestery> Lets move details of this to the ML or in-channel at this point.
21:41:14 <mestery> #topic Tempest (third try)
21:41:21 <mestery> mlavalle: Hi there! Any quick tempest updates for the team?
21:41:24 <mlavalle> Hi, a few things. First of all, please take a look at the qa-spec I have created for the scenario tests we'll be developing this cycle. Any feedback is welcome: https://review.openstack.org/#/c/95600
21:41:33 <mestery> #link https://review.openstack.org/#/c/95600
21:42:08 <mestery> Thanks mlavalle!
21:42:18 <mlavalle> second, I've been interacting over the ML with the team working on the new LBaaS. I want to make sure we have an API defined by the code sprint in San Antonio so tests can be developed
21:42:46 <mestery> mlavalle: That makes sense, and we're converging in that direction.
21:42:53 <sc68cal> I'm working on developing scenario tests that do not rely on the l3 agent, since that will be important for IPv6 scenarios (NO NAT)
21:42:59 <sc68cal> so I'll keep an eye on that BP
21:43:05 <mestery> sc68cal: Good idea!
21:43:13 <mestery> Anything else mlavalle?
21:43:17 <marun> mlavalle: just a reminder that it's possible to develop scenario tests in tree and only push them to the tempest repo when the api has stabilized
21:43:05 <mlavalle> third: I've also been interacting with the FWaaS team, and it seems we can develop a scenario on one node
21:43:48 <mlavalle> marun: thanks
21:43:56 <SumitNaiksatam> mlavalle: yes, thanks
21:43:57 <marun> mlavalle: given how important usage is to api design, I think it would be a good idea to try this approach this cycle at least once
21:44:03 <mlavalle> for the sake of brevity, that's it this week
21:44:14 <mestery> mlavalle: Thank you for the update!
21:44:21 <mestery> #topic L3
21:44:25 <carl_baldwin> mestery: hi
21:44:26 <mestery> carl_baldwin: Hi! Any updates this week?
21:44:35 <carl_baldwin> We’ve already discussed DVR.
21:44:43 <carl_baldwin> Everything else is on the team page.
21:45:02 <mestery> carl_baldwin: OK, thanks!
21:45:09 <carl_baldwin> yw
21:45:11 <mestery> carl_baldwin: And yes, DVR is at the front of everyone's minds right now. :)
21:45:18 <carl_baldwin> Agreed.
21:45:27 <mestery> #topic Advanced Services
21:45:42 <mestery> SumitNaiksatam: hi! Any updates here?
21:45:57 <mestery> I think enikanorov also had a request in this slot around the flavor framework.
21:46:10 <enikanorov> yep
21:46:25 <enikanorov> i've sent an email regarding two aspects of flavor fw implementation
21:46:39 <enikanorov> which i think need participation from markmcclain
21:47:02 <mestery> enikanorov: OK, will read the ML thread there.
21:47:12 <mestery> OK, lets move on due to time then.
21:47:12 <enikanorov> yep
21:47:15 <mestery> #topic IPv6
21:47:17 <markmcclain> enikanorov: I'll catch up on the thread
21:47:18 <sc68cal> Hello
21:47:20 <mestery> sc68cal: Hi! Any updates?
21:47:26 <enikanorov> markmcclain: thanks
21:47:35 <sc68cal> Yes, I met with some cores from tempest and devstack to lay some groundwork for tempest testing for v6
21:47:39 <sc68cal> on Friday.
21:47:48 <mestery> sc68cal: Great! I saw the twitter pics. ;)
21:47:55 <sc68cal> In addition, we have a bug report that is going to be filed soon regarding a behavior for floating ips
21:48:22 <sc68cal> if you have two floating ip pools, v4 and v6, the provisioning happens randomly - see baoli's comments in the transcript from last week's meeting for more info
21:48:30 <sc68cal> sometimes you get a v4, sometimes a v6
21:48:38 <sc68cal> waiting on a bug report for full details
21:49:00 <markmcclain> sc68cal: it's dependent on db natural ordering
21:49:15 <sc68cal> agree
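(A sketch of why natural ordering makes the pool selection arbitrary, assuming a hypothetical SQLAlchemy session and ExternalNetwork model; without an explicit filter or ORDER BY, the database returns whichever row it happens to yield first.)

    # Nondeterministic: sometimes the v4 pool, sometimes the v6 pool.
    pool = session.query(ExternalNetwork).first()

    # Deterministic: let the caller pick the address family explicitly.
    pool = (session.query(ExternalNetwork)
            .filter_by(ip_version=4)
            .order_by(ExternalNetwork.id)
            .first())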
21:49:32 <sc68cal> hopefully baoli can write up a bug report so we can share
21:49:39 <sc68cal> I'm trying to reach out to a subteam member about a patch that was abandoned
21:49:52 <sc68cal> that implemented the changes in dnsmasq for the two attributes
21:50:01 <mestery> sc68cal: Is this for provider net SLAAC?
21:50:17 <sc68cal> No, this is for doing subnets with ra_mode = 'slaac', etc..
21:50:29 <mestery> sc68cal: OK
21:50:29 <sc68cal> where dnsmasq and the l3 agent handle forwarding
21:50:40 <sc68cal> It was a big patch that needs to be broken up.
21:50:46 <mestery> sc68cal: Got it
21:50:47 <sc68cal> That's it for me, unless there are q's
21:51:01 * mestery waits a bit for any questions.
21:51:16 <mestery> #topic ML2
21:51:24 <mestery> rkukura: Hi! Any updates for the team this week?
21:51:54 <rkukura> mestery: I can’t think of anything we need to cover here. Sukhdev, can you?
21:52:03 <Sukhdev> Yes - I can
21:52:23 <Sukhdev> We have a bunch of specs, which we are prioritizing and reviewing
21:52:52 <Sukhdev> https://etherpad.openstack.org/p/Neutron_ML2_Juno_Spec_Tracking
21:53:09 <mestery> Sukhdev: OK, thanks!
21:53:18 <Sukhdev> Our plan is to go through all these specs and move on to the next phase
21:53:30 <Sukhdev> that is it really
21:53:37 <mestery> Sukhdev rkukura: Thanks!
21:53:43 <mestery> #topic Open Discussion
21:53:48 <mestery> nati_ueno: You had an item here in the agenda.
21:53:57 <mestery> nati_ueno: Around BGPVPN and OpenContrail.
21:54:03 <nati_ueno> mestery: yours is more important, so I can discuss it later
21:54:10 <mestery> nati_ueno: OK, cool. :)
21:54:15 <mestery> #link https://wiki.openstack.org/wiki/NeutronPolicies
21:54:24 <mestery> I started a wiki to document all the "procedures and policies" in Neutron
21:54:32 <mestery> I've asked some people to start filling in pages (thanks to those who have!)
21:54:39 <mestery> But wanted to broaden this to the team now.
21:54:53 <mestery> It would be great to get people to review, provide feedback, fill in holes, etc.
21:55:04 <marios> mestery: i only came across this today on the ML. should be good for newcomers. i'll try to take a closer look tomorrow
21:55:05 <mestery> The idea is to document everything we do as a team.
21:55:18 <nati_ueno> +1
21:55:24 <mestery> marios: Thank you! There is a section for new contributors; that's something we can fill in as a team.
21:55:30 <SumitNaiksatam> mestery: +1
21:55:44 <chuckC> As a newcomer, thanks!
21:55:59 <mestery> OK, nati_ueno, now it's your turn for BGPVPN :)
21:56:01 <armax> mestery: how do we go about it though?
21:56:10 <mestery> armax: Updating it?
21:56:16 <nati_ueno> markmcclain: can we talk about your -2 in BGPVPN bp?
21:56:19 <mestery> armax: Sync with me offline if you have questions.
21:56:23 <armax> will do
21:56:26 <nati_ueno> so my question is related to the policy
21:56:30 <markmcclain> nati_ueno: yes in 4 mins or less :)
21:56:43 <nati_ueno> (1) Can we merge bp without dependent code merged or not
21:56:51 <nati_ueno> (2) Can we use 3rd party OSS in neutron
21:57:01 <nati_ueno> markmcclain: Thanks
21:57:08 <arosen> OSS?
21:57:15 <mestery> nati_ueno: Is the issue having Contrail as the OSS backend for the BGPVPN work?
21:57:16 <nati_ueno> armax: open source project
21:57:27 <VijayB_> mestery: sorry for the interruption, but since we're short on time, raising my hand for lbaas - are we discussing it in this meeting?
21:57:27 <nati_ueno> mestery: yes it is an example
21:57:52 <armax> nati_ueno: ?
21:58:03 <markmcclain> nati_ueno: re (1): it doesn't make sense to merge a blueprint if we do not have an acceptable path to test it
21:58:04 <nati_ueno> sorry typo
21:58:22 <mestery> VijayB_: We have 2 minutes left, lets discuss in #openstack-neutron since salv-orlando wanted to be a part of that.
21:58:23 <nati_ueno> arosen: open source project
21:58:50 <nati_ueno> markmcclain: I agree if it is code, but a bp is for planning, right?
21:58:56 <salv-orlando> mestery: yeah, maybe just give me 5 minutes to get a coffee
21:58:57 <nati_ueno> markmcclain: the dependent bp is merged already
21:59:10 <markmcclain> nati_ueno: blueprint implies that it is intended work for the cycle
21:59:12 <mestery> salv-orlando: +1 to that. :)
21:59:27 <markmcclain> we really should not accept BPs for items that we won't complete
21:59:27 <VijayB_> mestery: sure, let's do that - what time is the next #openstack-neutron scheduled for?
21:59:49 <banix> VijayB: it’s 24/7 :)
21:59:51 <nati_ueno> markmcclain: IMO, that's an independent topic
21:59:59 <VijayB_> banix: ok :)
22:00:00 <mestery> VijayB_: Lets discuss in 5 minutes from now.
22:00:08 <mestery> OK, folks this meeting is coming to a close now!
22:00:10 <markmcclain> nati_ueno: so until there is consensus on the usage of contrail as the testing medium, approving the bp can't happen
22:00:11 <VijayB_> mestery: ok
22:00:12 <amotoki> regarding (2), i think we need to ensure neutron works with the open source reference implementation, even if the backend is itself open source.
22:00:39 <markmcclain> amotoki: for now I agree
22:00:41 <Sukhdev> It's time folks… bye, need to run to another meeting
22:00:47 <mestery> Reminder: Next week is Juno-1!
22:00:56 <mestery> Keep up the BP and bug work (and reviews).
22:00:58 <mestery> #endmeeting