22:01:57 <armax> #startmeeting neutron_drivers
22:01:58 <openstack> Meeting started Thu Jul 14 22:01:57 2016 UTC and is due to finish in 60 minutes.  The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:02:02 <openstack> The meeting name has been set to 'neutron_drivers'
22:02:28 <amuller> o/
22:04:36 <dougwig> ?
22:04:45 <kevinbenton> [][][][][][][][]
22:05:42 <kevinbenton> armax: alive?
22:06:04 <johnsom> Can you hear us now?  grin
22:06:22 <kevinbenton> netsplit?
22:07:18 <dougwig> i don't see one.  shall we start on the rfe list, or wait?
22:07:26 <dougwig> yep, there he went.
22:07:33 <dougwig> anyone have the link handy?
22:07:38 <amuller> https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Confirmed&field.tag=rfe
22:07:45 <amuller> That's bugs in confirmed to be vetted for discussion
22:08:23 <armax> hello
22:08:27 <armax> anyone there?
22:08:31 <kevinbenton> hi
22:08:34 <amuller> there you are
22:08:35 <armax> ok
22:08:40 <armax> I dunno what happened
22:08:48 <dougwig> oh, there he is
22:08:48 <amuller> you forgot to type :(
22:08:49 <carl_baldwin> Glad you're back.
22:08:55 <armax> I was saying
22:08:56 <armax> per special request from kevinbenton
22:09:02 <armax> I would like to start with this announcement
22:09:07 <armax> #link http://lists.openstack.org/pipermail/openstack-dev/2016-July/099281.html
22:09:19 <armax> please share your thoughts on
22:09:20 <armax> #link https://etherpad.openstack.org/p/neutron-project-mascot
22:09:30 * HenryG sneaks in late
22:09:42 <dougwig> committee naming.  this will end well.
22:09:47 <armax> yak shave away
22:10:00 <dougwig> i put in 'newt', because i'm corny and unoriginal.
22:10:04 <armax> but let’s do it offline :)
22:10:13 <HenryG> CHANGE THE TOPIC NOW!!!
22:10:18 <armax> for now let’s chew through the RFE list
22:10:29 <armax> #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
22:10:29 <dougwig> mascot bike shedding sounds like more fun
22:10:39 <armax> we only have 5 RFEs to go through
22:10:42 <armax> easy peasy
22:10:50 <armax> bug #1583694
22:10:50 <openstack> bug 1583694 in neutron "[RFE] DVR support for Allowed_address_pair port that are bound to multiple ACTIVE VM ports" [Wishlist,Triaged] https://launchpad.net/bugs/1583694 - Assigned to Swaminathan Vasudevan (swaminathan-vasudevan)
22:11:12 <johnsom> easy peasy and he starts here.....
22:11:19 <armax> we talked about this last week
22:11:21 <dougwig> kill dvr.  done, easy.
22:11:24 <armax> and the week before
22:11:36 <armax> dougwig: if it were that simple!
22:12:52 <armax> I think that the need to have this work on DVR has unveiled a much larger problem where FIP and allowed address pairs work implicitly together
22:12:54 <johnsom> Would this "redundant FIP" be provisioned differently via the API or magically happen behind the scenes?
22:13:34 <armax> johnsom: I would lean toward the former, but we could explore what magically behind the scenes means
22:13:37 <kevinbenton> armax: more generally, it's that before DVR, FIPs could target arbitrary IP addresses. after dvr, FIPs became restricted to actual port IPs
22:14:42 <johnsom> I lean towards magic as it would mean a different API experience based on if DVR is enabled or not
22:14:51 <armax> kevinbenton: yes, it feels to me that the distributed nature of DVR has unveiled a potential issue with the way FIP can work with VRRP clusters
22:14:59 <armax> johnsom: not per se
22:15:07 <armax> johnsom: we could make this work for both CVR and DVR
22:15:13 <amuller> Is this a DVR specific issue? Wouldn't say ODL and OVN have the same issue?
22:15:19 <johnsom> Ok
22:15:41 <armax> johnsom: it feels to me that the FIP + allowed address pairs is a trick to make a VRRP use case work
22:15:47 <carl_baldwin> amuller: which ones of those have floating ips?
22:16:12 <dougwig> what is CVR?  and the previous 10 lines have an amusing acronym density
22:16:14 <armax> johnsom: when ideally a user could be oblivious of allowed address pairs to associate the FIP to multiple ports
22:16:15 <amuller> I'm not sure but if they don't have now they will with OVS 2.5 soon enough
22:16:16 <johnsom> armax Yes, but the user creating the FIP may or may not know if the endpoint is VRRP enabled.
22:16:24 <armax> dougwig: Centralized
22:16:27 <russellb> both have it in some degree
22:17:09 <armax> johnsom: I’d argue he/she does, as he/she is the one creating the FIP and associating it to a dangling port for a very specific reason
22:17:13 <kevinbenton> russellb: in OVN, everything is statically programmed so it would break on this as well, right?
22:17:29 <kevinbenton> russellb: i.e. there is no mechanism to discover where the IP is living
22:17:44 <amuller> I think this is a challenging use case to cover in any distributed routing solution so I wouldn't give DVR a bad rep here :)
22:17:45 <russellb> right, it's bound
22:18:03 <amuller> (without involving the control plane on failover)
22:18:09 <armax> associating a  FIP to a dangling (unbound) port and then pair with a port via allowed address pairs
22:18:18 <armax> is not exactly what I would call it kosher
22:18:22 <johnsom> armax My use case is, user asks for load balancer, we give them a VIP for that load balancer.  They then turn around and create a FIP that points to the VIP.  They don't necessarily know if we created the VIP VRRP or not.
22:18:39 <armax> johnsom: agreed
22:19:19 <armax> right now the VIP is the IP you give to the unbound port, that’s gonna be the entry into your allowed address pair attributes for whichever ports you want to sit behind the vip correct?
22:19:43 <johnsom> Correct
22:19:50 <armax> now, I am thinking, if we make this somewhat explicit, then it’s easier and less error prone implementing the logic for DVR
22:20:18 <armax> because we’re not reusing a construct that wasn’t exactly designed to work with this set of hacks
22:20:33 <carl_baldwin> I think the issue, as armax is trying to state it, is an API issue that isn't specific to any implementation.
22:20:36 <johnsom> It seems like we can query the VIP port, see AAP is enabled, and create a "redundant FIP"
22:21:12 <johnsom> I think I'm getting into implementation details too much here.
22:21:16 <armax> if DVR’s FIP is by design placed on the compute node, moving it around, locking it to the central snat, and potentially having to deal with moving it again if the VM migrates or gets deleted would be incredibly brittle logic
22:21:54 <kevinbenton> armax: wouldn't it just be "all unbound ports are on central snat" ?
22:22:02 <armax> whereas for a ‘redundant FIP’ or whatever we want to call it, we can assume by design that the object is confined to a central network node and it’s limited to being there for its life cycle
22:23:10 <dougwig> one of the only compelling use cases of FIPs is for VIPs.  (the issue them for everything mentality is madness.)  your previous statements just relegated the common use case for elastic IPs back into a non-DVR world.
22:24:17 <armax> dougwig: FIPs are typically associated with a single port at any given time; in this case the FIP is serving multiple ports at any given time
22:24:23 <armax> dougwig: or am I talking nonsense?
22:24:38 <armax> dougwig: in other words, you take allowed address pairs out the picture
22:24:49 <armax> FIP for VIPs breaks even for CVR
22:25:00 <armax> centralized virtual router
22:25:01 <dougwig> FIPs are just a deployment aid.  bridging to rare public IPs, or when moving the IP is more important/needs to be faster than DNS ttl.  that we map them to single port is implementation.
22:25:43 <armax> true but the implementation is expressed through the API
22:25:47 <dougwig> allowed address pair is just ACLs; i'm not sure how that's relevant.
22:25:47 <armax> the association is one to one
22:25:48 <carl_baldwin> I think it'd be nice if there were an explicit relationship between the IP and the ports that should be allowed to receive traffic for it.  Instead of using allowed_address_pairs for an implicit one.
22:25:53 <armax> FIP <-> PORT
22:26:12 <armax> carl_baldwin: +1
22:26:44 <armax> I wonder if by adopting a more explicit relationship, the DVR solution looks simpler and less prone to regressions
22:26:46 <dougwig> then how to do FIP to a vrrp?
22:27:07 <armax> I am totally sold on the use case, I am unconvinced of the proposed solution as it stands
22:27:16 <kevinbenton> dougwig: is it possible on EC2 to do that? How does their API look for this?
22:27:28 <armax> dougwig: not sure I understand the question
22:27:31 <dougwig> kevinbenton: i've never tried.
22:27:48 <armax> so my question to you guys is
22:28:20 <armax> in order to address this RFE, do you think it’s sensible to explore ways to model the need of FIP for VIP a little more formally?
22:28:24 <johnsom> The current proposed patch does not solve the problem in a good way
22:28:35 <armax> before diving into the implementation details?
22:28:42 <amuller> armax: exploring ways is always sensible, that's almost a tautology =p
22:28:48 <armax> johnsom: and I am hoping that revisiting the model might address that
22:29:27 <amuller> I wonder if we could use distributed port bindings somehow, by binding the fixed IP port to two hosts
22:29:29 <dougwig> armax: yes, because you're right, the proposed fix feels like a duct tape band-aid.
22:29:37 <amuller> then having that info in Neutron would let DVR do the right thing
22:30:14 <armax> amuller: at this point I am happy to brainstorm, the mid-cycle is approaching and perhaps we can nail this down
22:30:31 <carl_baldwin> amuller: Each instance still needs its own port.  I think we need to think about this like moving a VIP between ports.
22:30:40 <amuller> carl_baldwin: yeah, for the VIP
22:30:48 <amuller> carl_baldwin: each instance has its own port + a port that is bound to both
22:31:09 <armax> so have I managed to seed the weed?
22:31:26 <armax> I don’t know what it means :)
22:31:39 <armax> seed the germ of doubt :)
22:31:40 <dougwig> what is that? are we all supposed to light up now?
22:31:43 <carl_baldwin> amuller: Ports have other extra stuff that wouldn't make sense, like macs, vif binding, etc.
22:32:05 <amuller> that's true
22:32:08 <kevinbenton> need to make IP allocations first class citizens
22:32:29 <amuller> we already have ports in neutron that float between nodes
22:32:30 <armax> we should probably go ahead and crunch the rest of the list
22:32:30 <carl_baldwin> kevinbenton: That's never come up before.  ;)
22:32:33 <kevinbenton> :)
22:32:35 <amuller> the HA routers qr interfaces
22:32:50 <amuller> anyway
22:33:00 <armax> but at this point I guess we’re all in agreement that we cannot accept the proposed solution as is
22:33:04 <kevinbenton> maybe the work to bind a port to multiple hosts can help here
22:33:12 <armax> and more thinking has to go into how this use case is solved
22:33:16 <kevinbenton> then the floating IP can be associated to a port bound to multiple hosts
22:33:18 <dougwig> armax +1 +1
22:33:20 <armax> kevinbenton: amuller said that
22:33:24 <armax> kevinbenton: pay attention!
22:33:39 <carl_baldwin> kevinbenton: and I expressed some doubt.
22:33:39 <johnsom> armax +1
22:34:06 <armax> ok
22:34:10 <armax> shall we move on?
22:34:34 <kevinbenton> can we just pitch CVR as a feature that solves this?
22:34:47 <armax> kevinbenton: shut up
22:35:04 <armax> kevinbenton is sitting next to me, he knows I am joking
22:35:14 <amuller> Before we move on, what is supposed to happen before next week?
22:35:14 <kevinbenton> "ever thought your routing was getting out of hand being everywhere? centralize it!"
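The FIP-for-VIP workaround debated above can be sketched in a few lines of plain Python (this is an illustrative model, not Neutron code; all class and field names here are hypothetical). It shows why the relationship is implicit: the FIP targets an unbound VIP port, so the only way to find the hosts that may answer for the VIP is to scan every port's allowed_address_pairs — exactly the kind of indirection that breaks DVR's per-compute-host FIP placement.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Port:
    id: str
    fixed_ip: str
    host: Optional[str] = None          # None -> unbound (no VIF binding)
    allowed_address_pairs: list = field(default_factory=list)

@dataclass
class FloatingIP:
    address: str
    port_id: str                        # the one-to-one FIP <-> port association

def candidate_hosts(fip, ports):
    """Hosts that may legitimately carry traffic for the FIP's inner IP.

    With the VRRP trick, the FIP's own port is unbound, so locating the
    traffic requires scanning all ports' allowed_address_pairs: the
    VIP-to-member relationship is implicit, never stated in the API.
    """
    vip_port = next(p for p in ports if p.id == fip.port_id)
    return {p.host for p in ports
            if vip_port.fixed_ip in p.allowed_address_pairs and p.host}

ports = [
    Port("vip", "10.0.0.100"),                        # unbound VIP port
    Port("vm1", "10.0.0.11", "compute-1", ["10.0.0.100"]),
    Port("vm2", "10.0.0.12", "compute-2", ["10.0.0.100"]),
]
fip = FloatingIP("203.0.113.5", "vip")
print(candidate_hosts(fip, ports))    # the VIP may live on either compute host
```

An explicit model, as carl_baldwin suggests, would record the VIP-to-member relationship directly instead of recovering it from a scan like this.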
22:35:21 <armax> bug #1586056
22:35:21 <openstack> bug 1586056 in neutron "[RFE] Improved validation mechanism for QoS rules with port types" [Wishlist,Triaged] https://launchpad.net/bugs/1586056 - Assigned to Slawek Kaplonski (slaweq)
22:35:37 <armax> #undo
22:35:38 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x7f5bbf33c9d0>
22:35:54 <armax> amuller: we’ll get together with carl_baldwin and lay out a plan
22:35:57 <armax> bug #1586056
22:36:44 <HenryG> you have put the meeting bot to sleep
22:36:45 <dougwig> kevinbenton: now you're talking!
22:36:48 <amuller> johnsom: do you feel like you have a way to move forward?
22:37:26 <armax> ajo and/or ihar might not be around
22:37:46 <amuller> they're both returning from travel
22:37:53 <armax> ok
22:37:54 <johnsom> amuller Not sure I follow.  I'm here to give input, help understand the problem.  I think the next step is more design work.
22:38:39 <armax> johnsom: yes, I’d rather nail down the design now than deal with the pain later
22:38:40 <amuller> johnsom: alright
22:38:47 <armax> as for QoS
22:39:16 <armax> and this RFE
22:39:28 <armax> I am unclear how this is being tackled
22:39:29 <armax> anyone?
22:40:09 <amuller> we need QoS team leaders for this, I suggest we skip
22:40:29 <armax> ok
22:40:41 <armax> bug #1592028
22:40:41 <openstack> bug 1592028 in neutron "[RFE] Support security-group-rule creation with address-groups" [Wishlist,Triaged] https://launchpad.net/bugs/1592028 - Assigned to Roey Chen (roeyc)
22:41:28 <armax> this didn’t seem outrageous to me, but I wonder if this is something that could be handled by FWaaS, any fwaas expert in the room?
22:41:45 <kevinbenton> security groups themselves already represent a group of addresses
22:41:54 <kevinbenton> (all of the ports that are a member of that group)
22:42:04 <kevinbenton> is this to handle cases where the IP addresses don't belong to neutron ports?
22:42:19 <armax> that’s a good question to ask on the bug report
22:42:56 <dougwig> i think it's meant to be, "allow all from rfc1918" style groups.
22:43:20 <kevinbenton> "An Openstack cloud may require connectivity between instances and external services which are not provisioned by Openstack, each service may also have multiple endpoints."
22:43:20 <kevinbenton> yeah
22:43:24 <kevinbenton> i didn't read close enough
22:43:30 <kevinbenton> I'm fine with this idea
22:43:58 <dougwig> at one time sc68cal wanted SG's frozen, and all of this going into fwaas. i think he reversed that position.
22:44:00 <armax> so that means a spec to understand the API proposal etc
22:44:29 <armax> dougwig: I think we’ve all come to the realization that we diverged from Amazon’s style security groups anyway
22:44:30 <sc68cal> haven't reversed
22:44:40 <sc68cal> but I appear to be a minority viewpoint
22:44:47 <kevinbenton> so a destination/source can be a CIDR/securitygroup right now
22:44:59 <kevinbenton> and the adjustment would be that it could be an 'addressgroup'
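kevinbenton's adjustment could be sketched roughly as follows (a hypothetical shape, not the eventual Neutron API): a named address group is just a set of CIDRs, and a rule whose remote is an address group expands into one ordinary per-CIDR rule — the "allow all from rfc1918" style dougwig mentions.

```python
# Hypothetical data shapes: 'remote_address_group' and the rule dict keys
# below are illustrative, not actual Neutron API fields.
address_groups = {
    "rfc1918": ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"],
}

def expand_rule(rule):
    """Expand a rule's remote_address_group into one rule per member CIDR."""
    group = rule.get("remote_address_group")
    if group is None:
        return [rule]                      # plain CIDR/security-group remote
    base = {k: v for k, v in rule.items() if k != "remote_address_group"}
    return [dict(base, remote_ip_prefix=cidr)
            for cidr in address_groups[group]]

rule = {"direction": "ingress", "protocol": "tcp", "port_range": (22, 22),
        "remote_address_group": "rfc1918"}
for r in expand_rule(rule):
    print(r["remote_ip_prefix"])
```

The point of the RFE is that the group is managed once and referenced by many rules, so updating the group's membership updates every rule that uses it.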
22:45:06 <kevinbenton> also
22:45:18 <armax> and that OpenStack relaxed its EC2 compliance somewhat
22:45:33 <armax> so say we’ll give this the go ahead
22:45:57 <armax> is there anyone in the group willing to volunteer for taking care of the review process?
22:46:06 <armax> we can ask on the bug report
22:46:21 <armax> but if we don’t find volunteers this will have to go on the back burner
22:46:58 <armax> I assume that lack of comment means back burner, does it?
22:47:02 <armax> moving on
22:47:05 <armax> bug #1592918
22:47:05 <openstack> bug 1592918 in neutron "[RFE] Adding Port Statistics to Neutron Metering Agent" [Wishlist,Triaged] https://launchpad.net/bugs/1592918 - Assigned to Sana Khan (sana.khan)
22:47:11 <dougwig> is there anyone not so overcommitted that it's hollow anyway?  i mean, i'd love to say yes, but...
22:47:19 <armax> dougwig: fair point
22:47:46 <armax> I am still waiting for your braindump on https://bugs.launchpad.net/neutron/+bug/1524916
22:47:46 <openstack> Launchpad bug 1524916 in neutron "neutron-ns-metadata-proxy uses ~25MB/router in production" [Medium,In progress] - Assigned to Doug Wiegley (dougwig)
22:47:48 <armax> but I digress
22:48:01 <dougwig> heh.
22:48:12 <armax> dougwig: I’ll bug you until you comply :)
22:48:20 <kevinbenton> dougwig promised a simple nginx solution! :)
22:48:27 <dougwig> armax: an effective strategy.
22:48:34 <armax> dougwig: it’s not working right now
22:48:39 <armax> dougwig: not sure it’s effective
22:48:51 <armax> so talking about the port stats thingy
22:48:53 <dougwig> i have the bug open in a tab, and i feel guilt when i see it.  that's more than last week.
22:49:02 <armax> ah
22:49:04 <armax> ok so progress!
22:49:20 <amuller> We don't have a scenario test for metering do we
22:49:23 <amuller> We have API and unit tests
22:49:27 <armax> let’s see how you feel next week when I’ll ship a dead horse’s head to your house
22:49:29 <amuller> nothing that checks that it works end to end, *I think*
22:49:48 <amuller> I think we should require  such a test before any RFEs in the metering area
22:49:48 <armax> amuller: we have some tests in the gate
22:49:54 <armax> right
22:50:02 <armax> but that goes back to a staffing problem
22:50:10 <amuller> it would be on the bug reporter
22:50:12 <armax> rossella_s volunteered but she’s spread thin
22:50:46 <armax> I gather from this meeting and the past one that no one has objections or reservations on this request
22:50:50 <amuller> forget staffing for a sec, does anyone care about metering and wants to discuss the usefulness of the RFE?
22:51:04 <armax> though someone has to help the reporter through the review pipeline
22:51:32 <armax> amuller: I sense that all of us are lukewarm when it comes to metering
22:52:06 <amuller> in that case I wouldn't block the RFE, just let it through on a best effort basis, and if the author is persistent enough maybe he or she will actually get reviews :)
22:52:17 <armax> but that’s not a reason to reject when there’s some degree of usefulness in the counters collected
22:52:37 <amuller> armax: agreed
22:52:49 <armax> ok
22:53:04 <armax> no rejection, best effort basis assumed the right steps are taken
22:53:13 <armax> amuller: can you point out your demands on the bug report?
22:53:18 <amuller> will do
22:53:29 <armax> and we can figure out how stringent or painful they are :)
22:53:39 <armax> bug #1599488
22:53:39 <openstack> bug 1599488 in neutron "[RFE] Enhance Quota API calls to return resource usage per tenant" [Wishlist,Triaged] https://launchpad.net/bugs/1599488 - Assigned to Sergey Belous (sbelous)
22:53:53 <armax> this one seems ‘easy'
22:54:02 <armax> I have asked salv_orlando to look at it
22:54:05 <armax> but he’s worse than dougwig
22:54:08 <armax> erm :)
22:54:14 * dougwig sighs.
22:54:53 <armax> I think this also deserves a spec to better understand what the proposal looks like, being an API change required
22:55:26 <armax> unless there are complications in making sure that the attribute reliably represents the value the reporter is interested in
22:55:43 <armax> then this seems a nice addition to the quota API
22:55:52 <armax> any thoughts?
22:55:59 <armax> lukewarm anyone?
22:56:08 <amuller> consistency with the other core projects would be... nice.
22:56:26 <dougwig> it would also be downright un-openstack!
22:56:27 <armax> amuller: quota API is not exactly consistent across projects
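The shape of the enhancement is simple enough to sketch (made-up data, and the per-resource `{"limit", "used"}` shape is an assumption in the spirit of other projects' detailed quota calls, not the spec this RFE would produce):

```python
# Hypothetical sketch of "quota API returns resource usage per tenant":
# pair each resource's configured limit with its current usage count.
def quota_detail(limits, used):
    return {res: {"limit": limit, "used": used.get(res, 0)}
            for res, limit in limits.items()}

limits = {"network": 10, "port": 50, "floatingip": 5}   # illustrative values
used = {"network": 3, "port": 17}
detail = quota_detail(limits, used)
print(detail["floatingip"])   # resources with no recorded usage default to 0
```

The hard part the meeting flags is not this arithmetic but making sure the usage counters are reliable, which is where a spec would earn its keep.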
22:57:09 <sc68cal> I poked sbelous to file the RFE.
22:57:26 <sc68cal> so I can be the sherpa for it I guess
22:57:31 <armax> ok
22:57:52 <armax> let’s refresh the bug report with these notes
22:57:58 <armax> anything anyone wants to add?
22:58:08 <armax> if not let’s get 3 minutes back
22:58:16 <armax> so that dougwig can do his braindump
22:58:20 <armax> 2 minutes now
22:58:59 <armax> #stopmeeting
22:59:01 <carl_baldwin> bye
22:59:05 <kevinbenton> bye
22:59:06 <armax> #endmeeting