21:00:37 <mestery> #startmeeting networking
21:00:38 <openstack> Meeting started Mon Jul  6 21:00:37 2015 UTC and is due to finish in 60 minutes.  The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:39 <dougwig> o/
21:00:40 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:42 <openstack> The meeting name has been set to 'networking'
21:00:47 <mestery> #link https://wiki.openstack.org/wiki/Network/Meetings Agenda
21:00:49 <amotoki> hi
21:00:57 <mestery> We have an "On-Demand" heavy agenda today, so let's get rolling!
21:01:05 <mestery> #topic Announcements
21:01:06 <yamamoto> hi
21:01:20 <mestery> I actually have nothing huge to announce today, which is a first. Does anyone have an announcement for the team this week?
21:01:52 <ZZelle> Hi everyone
21:02:22 <mestery> Must be the US holiday weekend, let's move along
21:02:25 <mestery> #topic Bugs
21:02:27 <mestery> First up:
21:02:34 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1465434
21:02:34 <openstack> Launchpad bug 1465434 in neutron "DVR issues with supporting multiple subnets per network on DVR routers - PortbindingContext does not have the status." [Critical,Confirmed] - Assigned to Swaminathan Vasudevan (swaminathan-vasudevan)
21:02:38 <mestery> Swami: Any updates on the DVR issues plaguing the gate?
21:02:49 <sballe> hi
21:02:55 <annp> hi
21:03:16 <mestery> Looks like the first patch for 1465434 needs some review love
21:03:20 <Swami> mestery: I have a patch up for review and I will be pushing another one today. That would fix the problem that we are seeing now in upstream with respect to the tempest failure.
21:03:31 <mestery> Swami: Excellent!
21:03:33 <mestery> #link https://review.openstack.org/#/c/196908/
21:03:37 <mestery> Swami: That's it, right?
21:04:00 <Swami> Yes, once that patch goes in then we can make the job voting.
21:04:15 <mestery> Also, once we get this fixed, let's monitor the DVR job very closely with the goal of getting it voting once it's stabilized
21:04:23 <mestery> It will prevent us from getting into this situation again once it's voting
21:04:24 <Swami> mestery: Sure
21:05:06 <mestery> Swami: Thanks!
21:05:07 <mestery> Next up
21:05:10 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1465837
21:05:10 <openstack> Launchpad bug 1465837 in neutron "Linux bridge: Dnsmasq is being passed None as an interface" [Critical,New] - Assigned to Sean M. Collins (scollins)
21:05:14 <mestery> sc68cal: How goes the Linuxbridge struggle?
21:05:59 <Sukhdev> hello
21:06:12 <mestery> sc68cal: Is this one still Critical?
21:06:49 <mestery> Also, for what it's worth for those following along in the linuxbridge saga:
21:06:50 <mestery> #link https://bugs.launchpad.net/neutron/+bugs?field.tag=linuxbridge-gate-parity
21:07:04 <sc68cal> mestery: I think 1465837 can be de-escalated to medium
21:07:11 <mestery> sc68cal: Cool, doing it!
21:07:43 <mestery> sc68cal: Any other LB parity bugs you'd like to highlight here?
21:08:10 <sc68cal> Nope, that tag has pretty much what we're seeing so far
21:08:14 <mestery> cool
21:08:44 <mestery> Any other bugs anyone wants to highlight? Those are the main ones I wanted to highlight this week
21:09:21 <john-davidge> #link https://bugs.launchpad.net/neutron/+bug/1471316
21:09:21 <openstack> Launchpad bug 1471316 in neutron "_get_subnetpool_id does not return None when a cidr is specified and a subnetpool_id isn't." [Undecided,In progress] - Assigned to John Davidge (john-davidge)
21:09:33 <john-davidge> would like to get some more eyes on that
21:09:52 <john-davidge> some differing opinions so far on what is considered a breaking API change
21:10:13 <mestery> john-davidge: I've marked it as "High" for now while we're investigating
21:10:31 <john-davidge> mestery: thanks
21:10:45 <mestery> Let's move along and see if emagana has any doc updates for us this week
21:10:47 <mestery> #topic Docs
21:10:56 <emagana> Hi mestery!
21:10:56 <mestery> emagana: Hi there! I hope you're getting some sleep now ;)
21:11:06 <emagana> mestery: getting better....  ;-)
21:11:12 <mestery> awesome
21:11:33 <emagana> Last Friday was a holiday in the USA so we did not have the Doc meeting... besides what I have already updated in the wiki, I've got nothing else
21:11:46 <emagana> mestery: you can take this time for the rest of the agenda
21:11:53 <mestery> Awesome! I'll refer folks to the wiki for updates
21:12:03 <mestery> #info Doc updates for this week on the wiki
21:12:03 <mestery> Thanks emagana!
21:12:34 <mestery> OK, let's move along and try not to wake many more people up as we finish this thing
21:12:39 <mestery> #topic ACLs for Neutron sub-projects
21:12:45 <mestery> #link https://review.openstack.org/#/c/198749/
21:13:08 <mestery> The tl;dr here is that I'm proposing passing control of things like merge commits, tags, and releases to the OpenStack release team for all Neutron sub-projects
21:13:15 <mestery> Please review the patch series if you have interest here
21:13:41 <dougwig> mestery: i think it should just be a requirement of being in the neutron tent.  and if we haven't stated that, let's update the devref.
21:13:42 <mestery> My thinking is that projects in the Neutron Stadium are under OpenStack governance, and this makes things more consistent, especially with regards to stable backports
21:13:54 <russellb> mestery: i assume you talked to thierry/doug about this (release team) ?
21:13:57 <mestery> #action mestery to update devref with information on being in the tent
21:14:14 <mestery> russellb: I tried this morning but got no response; I added them to the review and will ping them during office hours tomorrow
21:14:17 <russellb> otherwise seems fine ... it's so many that it might be good to have someone from neutron with access to all of that
21:14:25 <Sukhdev> dougwig: +1 for devref documentation
21:14:38 <mestery> russellb: I am good with updating the team which can do this as people have interest
21:14:55 <russellb> neutron release team or something, that works with global release team
21:15:03 <mestery> Yup
21:15:07 <russellb> anyway, intention all seems good
21:15:11 <mestery> thanks russellb
21:16:05 <mestery> #topic Partial Upgrade Support
21:16:08 <mestery> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067156.html
21:16:22 <mestery> armax: You put this on the agenda, I know you sent an email, can you update the team?
21:16:30 <armax> mestery: sure
21:16:33 <mestery> I think this in reply/conjunction with russellb as well
21:16:33 <mestery> thanks armax
21:16:44 <armax> I was hoping we could get some momentum back into this
21:17:35 <mestery> Agreed
21:17:35 <russellb> to be completely transparent, i was happy to whip up the patches to make a job that does the same thing as the nova job
21:17:51 <russellb> but if it requires a big rework to do it a new way, i'm not likely to do it
21:18:00 <russellb> so anyone else interested should feel free to jump in
21:18:12 <mestery> I understand completely, thanks for jumping in on this russellb
21:18:15 <armax> mestery: this is the mail I sent this morning:
21:18:16 <armax> http://lists.openstack.org/pipermail/openstack-dev/2015-July/068693.html
21:18:17 <russellb> patches i did are still up for review, but blocked basically
21:18:23 <mestery> #link http://lists.openstack.org/pipermail/openstack-dev/2015-July/068693.html
21:18:27 <dougwig> that was kinda dirty pool, IMO.
21:18:30 <armax> I tried to summarize the state of the union
21:18:42 <armax> it’d be good to get a sense from sdague on where we are
21:18:47 <mestery> I appreciate russellb and armax trying to move the ball forward here given the constraints
21:19:03 <mestery> armax: Hopefully your ML post generates some discussion. Thanks for that.
21:19:22 <sc68cal> I think we're talking in -infra currently about some parts of that work
21:19:25 <armax> mestery: until we find a volunteer to work on adding multinode support to Grenade, then we’re in limbo
21:19:37 <mestery> armax: Agreed
21:19:43 <mestery> sc68cal: Thanks for keeping an eye on that
21:19:44 <sc68cal> the main thing is if DVR is a component in the multinode grenade job
21:20:03 <mestery> sc68cal: For that to happen, we need Swami's fixes and a stable DVR itself
21:20:05 <sc68cal> meaning, is that the point of the job, or is a configuration without DVR sufficient just to test partial upgrades?
21:20:06 <mestery> *stable DVR job
21:20:14 <armax> sc68cal: that can be added at a later date, it can be just a config switch
21:20:15 <sc68cal> and then later enable DVR
21:20:18 <Swami> mestery: yes I agree
21:20:41 <armax> the building blocks to wire everything together are there, afaik
21:20:56 <mestery> Yes, someone needs the Ikea instruction sheet to move forward at this point
21:21:14 <mestery> OK, let's move further discussion to the ML
21:21:22 <mestery> #topic Flow Classifier Work
21:21:23 <mestery> #link https://etherpad.openstack.org/p/flow-classifier
21:21:31 <mestery> vikram: Are you around?
21:22:45 <mestery> OK, let's table that one for next week or Open Discussion in case vikram shows up
21:23:03 <mestery> #topic Packet Logging API for Neutron
21:23:04 <mestery> #link https://bugs.launchpad.net/neutron/+bug/1468366
21:23:04 <openstack> Launchpad bug 1468366 in neutron "RFE - Packet logging API for Neutron" [Undecided,Confirmed] - Assigned to Yushiro FURUKAWA (y-furukawa-2)
21:23:23 <mestery> I'm not sure who added this to the agenda to be honest
21:23:25 <hoangcx> sc68cal: I got sc68cal's explanation about the difference between our SG API and Amazon AWS's on the mailing list.
21:23:33 <mestery> hoangcx: Excellent!
21:23:48 <hoangcx> So I would like to continue the discussion and get a decision about which approach [support a NEW packet logging API or extend the current SG API] should be OK for approval?
21:24:25 <hoangcx> mestery: It is Yushiro's item. I am here on behalf of Yushiro; he is unwell today :-(
21:24:27 <mestery> If we're leaning towards not deviating from AWS, then it seems like the natural way forward is a new API
21:24:52 <hoangcx> mestery: yeah. I think so. Thanks
21:25:03 <amotoki> precisely speaking, as sc68cal says, there are differences between the Neutron and AWS SG APIs, so I think we cannot make the SG API fully compatible.
21:25:18 <hoangcx> So I will concentrate on supporting a NEW logging API.
21:25:19 <ZZelle> hoangcx, it should also be coherent with https://review.openstack.org/#/c/136760 ?
21:25:20 <sc68cal> most of the differences are the attribute names
21:25:22 <amotoki> imho we should keep the basic concept of the SG API compatible with AWS.
21:25:41 <sc68cal> but the concepts are fairly, if not completely, similar
21:26:26 <hoangcx> ZZelle: I will look on it. Thanks
21:26:46 <hoangcx> amotoki: Thanks. I see.
21:26:46 <ZZelle> hoangcx, great
21:26:57 <amotoki> at least we should not consider applying SGs to a network like a firewall :-)     I am not sure we need to keep everything compatible.
21:27:06 <mestery> amotoki: Point is well taken
21:27:43 <mestery> #topic Open Discussion
21:27:44 <amotoki> perhaps we don't have a consensus and a broader discussion seems needed.
21:28:03 <mestery> amotoki: Let's iterate on the RFE bug for that
21:28:10 <amotoki> yes. go ahead for open discussion
21:28:48 <ajmiller> I have two open discussion items:
21:28:57 <ajmiller> #link https://review.openstack.org/#/c/139758/
21:29:03 <mestery> ajmiller: Please, go ahead!
21:29:27 <mestery> #link https://review.openstack.org/#/c/139758/ Flavor Framework review
21:29:31 <ajmiller> ^^Flavor framework patch.  madhu_ak has just fixed merge conflicts and it is in the check tests now.  We would appreciate reviews on it, and feel it is ready
21:29:49 <mestery> ajmiller: Ack on that, it would be good to get this one in. I encourage folks to review this ASAP
21:30:03 <mestery> This slipped Kilo and would be ideal for Liberty
21:30:04 <ajmiller> Also FWaaS Use case discussion https://etherpad.openstack.org/p/fwaas_use_cases
21:30:20 <ajmiller> Please read the use cases, comment, add as necessary!
21:30:28 <pc_m> Would be nice to get review of RFE #link https://review.openstack.org/#/c/191944/4
21:30:34 <ajmiller> Thanks
21:30:47 <mestery> #link https://etherpad.openstack.org/p/fwaas_use_cases
21:30:53 <mestery> ajmiller: That's a good one to bring up as well!
21:30:53 <dougwig> it's worth noting that the fwaas use cases/api gap discussion will be happening at the services midcycle next week.
21:31:05 <pc_m> Could use direction on three options proposed.
21:31:09 <mestery> #info FWaaS use case/API gap analysis at services midcycle next week
21:31:25 <mestery> dougwig: Will you provide a way for remote participants to take part in this?
21:32:00 <s3wong> dougwig: remote participation?
21:32:03 <mestery> dougwig: Given a lot of people can't travel, this would be ideal if possible
21:32:26 <dougwig> mestery: absolutely, if there's interest, which it sounds like there is.
21:32:42 <mestery> #action dougwig to ensure remote vehicle for participation for FWaaS discussion next week
21:32:59 <regXboi> can we get a link for the service midcycle into the minutes?
21:33:07 <mestery> dougwig: ^^^
21:33:40 <russellb> where's the meetup
21:33:52 <s3wong> Seattle?
21:34:09 <blogan> yes seattle
21:34:37 <kevinbenton> where is seattle?
21:34:41 <blogan> https://etherpad.openstack.org/p/LBaaS-FWaaS-VPNaaS_Summer_Midcycle_meetup
21:34:56 <Sukhdev> kevinbenton: in Siberia :-)
21:35:02 <ajmiller> Seattle - I'm looking for the link...
21:35:09 <blogan> ajmiller: ^^
21:35:16 <s3wong> kevinbenton: where IS Seattle, or where IN Seattle?
21:35:22 <ajmiller> HP office
21:35:23 <mestery> #link https://etherpad.openstack.org/p/LBaaS-FWaaS-VPNaaS_Summer_Midcycle_meetup
21:35:26 <mestery> rofl
21:35:35 <ajmiller> 701 Pike St, Suite 900, corner of 7th and Pike
21:36:03 <mestery> It's a wonderfully awesome office within a 30 minute train ride straight north FWIW
21:36:03 <mestery> :)
21:36:07 <ajmiller> We are in an office tower attached to the Washington State Convention Center.  Details in the link mestery pasted
21:36:27 <ajmiller> Yes, anyone coming, don't bother renting a car.
21:36:49 <blogan> good, no chance of getting towed
21:37:01 * kevinbenton thinks he planned his trip home poorly. missing seattle and the nova mid cycle...
21:37:40 <mestery> kevinbenton: For some definition of horrible, perhaps
21:37:45 <mestery> OK, thanks everyone!
21:38:12 <mestery> Let's keep the train rolling towards Liberty-2, with a final touchdown near Liberty-3 for most things and a bumpy landing for things landing at the FF. :)
21:38:16 <mestery> See you all on the ML and IRC!
21:38:18 <mestery> #endmeeting