22:01:57 #startmeeting neutron_drivers
22:01:58 Meeting started Thu Jul 14 22:01:57 2016 UTC and is due to finish in 60 minutes. The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:02:02 The meeting name has been set to 'neutron_drivers'
22:02:28 o/
22:04:36 ?
22:04:45 [][][][][][][][]
22:05:42 armax: alive?
22:06:04 Can you hear us now? grin
22:06:22 netsplit?
22:07:18 i don't see one. shall we start on the rfe list, or wait?
22:07:26 yep, there he went.
22:07:33 anyone have the link handy?
22:07:38 https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Confirmed&field.tag=rfe
22:07:45 Those are the bugs in Confirmed, to be vetted for discussion
22:08:23 hello
22:08:27 anyone there?
22:08:31 hi
22:08:34 there you are
22:08:35 ok
22:08:40 I dunno what happened
22:08:48 oh, there he is
22:08:48 you forgot to type :(
22:08:49 Glad you're back.
22:08:55 I was saying
22:08:56 at the special request of kevinbenton
22:09:02 I would like to start with this announcement
22:09:07 #link http://lists.openstack.org/pipermail/openstack-dev/2016-July/099281.html
22:09:19 please share your thoughts on
22:09:20 #link https://etherpad.openstack.org/p/neutron-project-mascot
22:09:30 * HenryG sneaks in late
22:09:42 committee naming. this will end well.
22:09:47 yak shave away
22:10:00 i put in 'newt', because i'm corny and unoriginal.
22:10:04 but let's do it offline :)
22:10:13 CHANGE THE TOPIC NOW!!!
22:10:18 for now let's chew on the RFE list
22:10:29 #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe
22:10:29 mascot bike shedding sounds like more fun
22:10:39 we only have 5 RFEs to go through
22:10:42 easy peasy
22:10:50 bug #1583694
22:10:50 bug 1583694 in neutron "[RFE] DVR support for Allowed_address_pair port that are bound to multiple ACTIVE VM ports" [Wishlist,Triaged] https://launchpad.net/bugs/1583694 - Assigned to Swaminathan Vasudevan (swaminathan-vasudevan)
22:11:12 easy peasy and he starts here.....
22:11:19 we talked about this last week
22:11:21 kill dvr. done, easy.
22:11:24 and the week before
22:11:36 dougwig: if it were that simple!
22:12:52 I think that the need of having this work on DVR has unveiled a much larger problem where FIP and allowed address pairs work implicitly together
22:12:54 Would this "redundant FIP" be provisioned differently via the API or magically happen behind the scenes?
22:13:34 johnsom: I would lean toward the former, but we could explore what magically behind the scenes means
22:13:37 armax: more generally, it's that before DVR, FIPs could target arbitrary IP addresses. after dvr, FIPs became restricted to actual port IPs
22:14:42 I lean towards magic as it would mean a different API experience based on whether DVR is enabled or not
22:14:51 kevinbenton: yes, it feels to me that the distributed nature of DVR has unveiled a potential issue with the way FIP can work with VRRP clusters
22:14:59 johnsom: not per se
22:15:07 johnsom: we could make this work for both CVR and DVR
22:15:13 Is this a DVR-specific issue? Wouldn't, say, ODL and OVN have the same issue?
22:15:19 Ok
22:15:41 johnsom: it feels to me that the FIP + allowed address pairs is a trick to make a VRRP use case work
22:15:47 amuller: which ones of those have floating ips?
22:16:12 what is CVR? and the previous 10 lines have an amusing acronym density
22:16:14 johnsom: when ideally a user could be oblivious of allowed address pairs to associate the FIP with multiple ports
22:16:15 I'm not sure, but if they don't have them now they will with OVS 2.5 soon enough
22:16:16 armax: Yes, but the user creating the FIP may or may not know if the endpoint is VRRP enabled.
22:16:24 dougwig: Centralized
22:16:27 both have it to some degree
22:17:09 johnsom: I'd argue he/she does, as he/she is the one creating the FIP and associating it with a dangling port for a very specific reason
22:17:13 russellb: in OVN, everything is statically programmed so it would break on this as well, right?
22:17:29 russellb: i.e. there is no mechanism to discover where the IP is living
22:17:44 I think this is a challenging use case to cover in any distributed routing solution so I wouldn't give DVR a bad rep here :)
22:17:45 right, it's bound
22:18:03 (without involving the control plane on failover)
22:18:09 associating a FIP with a dangling (unbound) port and then pairing with a port via allowed address pairs
22:18:18 is not exactly what I would call kosher
22:18:22 armax: My use case is, user asks for a load balancer, we give them a VIP for that load balancer. They then turn around and create a FIP that points to the VIP. They don't necessarily know if we created the VIP VRRP or not.
22:18:39 johnsom: agreed
22:19:19 right now the VIP is the IP you give to the unbound port, and that's going to be the entry in your allowed address pair attributes for whichever ports you want to sit behind the vip, correct?
22:19:43 Correct
22:19:50 now, I am thinking, if we make this somewhat explicit, then implementing the logic for DVR becomes easier and less error prone
22:20:18 because we're not reusing a construct that wasn't exactly designed to work with this set of hacks
22:20:33 I think the issue, as armax is trying to state it, is an API issue that isn't specific to any implementation.
22:20:36 It seems like we could query the VIP port, see AAP is enabled, and create a "redundant FIP"
22:21:12 I think I'm getting into implementation details too much here.
22:21:16 if DVR's FIP is by design placed on the compute node, moving it around, locking it to the central snat, and potentially having to deal with moving it again if the VM migrates or gets deleted would be incredibly brittle logic
22:21:54 armax: wouldn't it just be "all unbound ports are on central snat"?
22:22:02 whereas for a 'redundant FIP' or whatever we want to call it, we can assume by design that the object is confined to a central network node and it's limited to being there for its life cycle
22:23:10 one of the only compelling use cases of FIPs is for VIPs. (the issue-them-for-everything mentality is madness.) your previous statements just relegated the common use case for elastic IPs back into a non-DVR world.
22:24:17 dougwig: FIPs are typically associated with a single port at any given time; in this case the FIP is serving multiple ports at any given time
22:24:23 dougwig: or am I talking nonsense?
22:24:38 dougwig: in other words, you take allowed address pairs out of the picture
22:24:49 FIP for VIPs breaks even for CVR
22:25:00 centralized virtual router
22:25:01 FIPs are just a deployment aid. bridging to rare public IPs, or when moving the IP is more important/needs to be faster than DNS ttl. that we map them to a single port is implementation.
22:25:43 true, but the implementation is expressed through the API
22:25:47 allowed address pairs are just ACLs; i'm not sure how that's relevant.
22:25:47 the association is one to one
22:25:48 I think it'd be nice if there were an explicit relationship between the IP and the ports that should be allowed to receive traffic for it. Instead of using allowed_address_pairs for an implicit one.
22:25:53 FIP <-> PORT
22:26:12 carl_baldwin: +1
22:26:44 I wonder if, by adopting a more explicit relationship, the DVR solution looks simpler and less prone to regressions
22:26:46 then how to do FIP to a vrrp?
22:27:07 I am totally sold on the use case, I am unconvinced by the proposed solution as it stands
22:27:16 dougwig: is it possible on EC2 to do that? How does their API look for this?
22:27:28 dougwig: not sure I understand the question
22:27:31 kevinbenton: i've never tried.
22:27:48 so my question to you guys is
22:28:20 in order to address this RFE, do you think it's sensible to explore ways to model the need of FIP for VIP a little more formally?
22:28:24 The current proposed patch does not solve the problem in a good way
22:28:35 before diving into the implementation details?
22:28:42 armax: exploring ways is always sensible, that's almost a tautology =p
22:28:48 johnsom: and I am hoping that revisiting the model might address that
22:29:27 I wonder if we could use distributed port bindings somehow, by binding the fixed IP port to two hosts
22:29:29 armax: yes, because you're right, the proposed fix feels like a duct-tape band-aid.
22:29:37 then having that info in Neutron would let DVR do the right thing
22:30:14 amuller: at this point I am happy to brainstorm, the mid-cycle is approaching and perhaps we can nail this down
22:30:31 amuller: Each instance still needs its own port. I think we need to think about this like moving a VIP between ports.
22:30:40 carl_baldwin: yeah, for the VIP
22:30:48 carl_baldwin: each instance has its own port + a port that is bound to both
22:31:09 so have I managed to seed the weed?
22:31:26 I don't know what that means :)
22:31:39 seed the germ of doubt :)
22:31:40 what is that? are we all supposed to light up now?
22:31:43 amuller: Ports have other extra stuff that wouldn't make sense, like macs, vif binding, etc.
22:32:05 that's true
22:32:08 need to make IP allocations first-class citizens
22:32:29 we already have ports in neutron that float between nodes
22:32:30 we should probably go ahead and crunch the rest of the list
22:32:30 kevinbenton: That's never come up before. ;)
22:32:33 :)
22:32:35 the HA routers' qr interfaces
22:32:50 anyway
22:33:00 but at this point I guess we're all in agreement that we cannot accept the proposed solution as is
22:33:04 maybe the work to bind a port to multiple hosts can help here
22:33:12 and more thinking has to go into how this use case is solved
22:33:16 then the floating IP can be associated with a port bound to multiple hosts
22:33:18 armax +1 +1
22:33:20 kevinbenton: amuller said that
22:33:24 kevinbenton: pay attention!
22:33:39 kevinbenton: and I expressed some doubt.
22:33:39 armax +1
22:34:06 ok
22:34:10 shall we move on?
22:34:34 can we just pitch CVR as a feature that solves this?
22:34:47 kevinbenton: shut up
22:35:04 kevinbenton is sitting next to me, he knows I am joking
22:35:14 Before we move on, what is supposed to happen before next week?
22:35:14 "ever thought your routing was getting out of hand being everywhere? centralize it!"
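
For readers following the FIP-for-VIP discussion above: the workaround being debated looks roughly like this. A minimal sketch using python-neutronclient; all IDs, names, and addresses are placeholders.

    # A sketch of the FIP-for-VIP workaround discussed above, using
    # python-neutronclient. All IDs, names, and addresses are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # 1. Create an unbound ("dangling") port whose fixed IP is the VIP.
    vip_port = neutron.create_port(
        {'port': {'network_id': 'NET_ID', 'name': 'vrrp-vip'}})['port']
    vip_ip = vip_port['fixed_ips'][0]['ip_address']

    # 2. Associate a floating IP with the unbound VIP port.
    neutron.create_floatingip(
        {'floatingip': {'floating_network_id': 'EXT_NET_ID',
                        'port_id': vip_port['id']}})

    # 3. Allow each VRRP member port to claim the VIP address
    #    via allowed_address_pairs.
    for member_port_id in ['PORT_1_ID', 'PORT_2_ID']:
        neutron.update_port(
            member_port_id,
            {'port': {'allowed_address_pairs': [{'ip_address': vip_ip}]}})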
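
And, purely as an illustration of the "explicit relationship" idea floated by carl_baldwin and armax, one hypothetical shape such an API could take (nothing like this exists in Neutron; the 'vip_port_ids' field is invented for illustration):

    # Purely hypothetical: an explicit VIP-to-ports relationship instead of
    # the implicit one inferred from allowed_address_pairs. No such field
    # exists in the Neutron API; 'vip_port_ids' is invented for illustration.
    redundant_fip = {
        'floatingip': {
            'floating_network_id': 'EXT_NET_ID',
            # The set of backend ports allowed to receive traffic for the
            # VIP, rather than a single port_id.
            'vip_port_ids': ['PORT_1_ID', 'PORT_2_ID'],
        }
    }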
22:35:21 bug #1586056
22:35:21 bug 1586056 in neutron "[RFE] Improved validation mechanism for QoS rules with port types" [Wishlist,Triaged] https://launchpad.net/bugs/1586056 - Assigned to Slawek Kaplonski (slaweq)
22:35:37 #undo
22:35:38 Removing item from minutes:
22:35:54 amuller: we'll get together with carl_baldwin and lay out a plan
22:35:57 bug #1586056
22:36:44 you have put the meeting bot to sleep
22:36:45 kevinbenton: now you're talking!
22:36:48 johnsom: do you feel like you have a way to move forward?
22:37:26 ajo and/or ihar might not be around
22:37:46 they're both returning from travel
22:37:53 ok
22:37:54 amuller: Not sure I follow. I'm here to give input and help understand the problem. I think the next step is more design work.
22:38:39 johnsom: yes, I'd rather nail down the design now than deal with the pain later
22:38:40 johnsom: alright
22:38:47 as for QoS
22:39:16 and this RFE
22:39:28 I am unclear how this is being tackled
22:39:29 anyone?
22:40:09 we need the QoS team leaders for this, I suggest we skip
22:40:29 ok
22:40:41 bug #1592028
22:40:41 bug 1592028 in neutron "[RFE] Support security-group-rule creation with address-groups" [Wishlist,Triaged] https://launchpad.net/bugs/1592028 - Assigned to Roey Chen (roeyc)
22:41:28 this didn't seem outrageous to me, but I wonder if this is something that could be handled by FWaaS, any fwaas expert in the room?
22:41:45 security groups themselves already represent a group of addresses
22:41:54 (all of the ports that are a member of that group)
22:42:04 is this to handle cases where the IP addresses don't belong to neutron ports?
22:42:19 that's a good question to ask on the bug report
22:42:56 i think it's meant to be "allow all from rfc1918" style groups.
22:43:20 "An Openstack cloud may require connectivity between instances and external services which are not provisioned by Openstack, each service may also have multiple endpoints."
22:43:20 yeah
22:43:24 i didn't read closely enough
22:43:30 I'm fine with this idea
22:43:58 at one time sc68cal wanted SGs frozen, and all of this going into fwaas. i think he reversed that position.
22:44:00 so that means a spec to understand the API proposal etc
22:44:29 dougwig: I think we all came to the realization that we diverged from Amazon-style security groups anyway
22:44:30 haven't reversed
22:44:40 but I appear to be a minority viewpoint
22:44:47 so a destination/source can be a CIDR/securitygroup right now
22:44:59 and the adjustment would be that it could be an 'addressgroup'
22:45:06 also
22:45:18 and that OpenStack relaxed its EC2 compliance somewhat
22:45:33 so say we'll give this the go-ahead
22:45:57 is there anyone in the group willing to volunteer to take care of the review process?
22:46:06 we can ask on the bug report
22:46:21 but if we don't find volunteers this will have to go on the backburner
22:46:58 I assume that lack of comment means backburner, does it?
22:47:02 moving on
22:47:05 bug #1592918
22:47:05 bug 1592918 in neutron "[RFE] Adding Port Statistics to Neutron Metering Agent" [Wishlist,Triaged] https://launchpad.net/bugs/1592918 - Assigned to Sana Khan (sana.khan)
22:47:11 is there anyone not so overcommitted that it's hollow anyway? i mean, i'd love to say yes, but...
22:47:19 dougwig: fair point
22:47:46 I am still waiting for your braindump on https://bugs.launchpad.net/neutron/+bug/1524916
22:47:46 Launchpad bug 1524916 in neutron "neutron-ns-metadata-proxy uses ~25MB/router in production" [Medium,In progress] - Assigned to Doug Wiegley (dougwig)
22:47:48 but I digress
22:48:01 heh.
22:48:12 dougwig: I'll bug you until you comply :)
22:48:20 dougwig promised a simple nginx solution! :)
22:48:27 armax: an effective strategy.
22:48:34 dougwig: it's not working right now
22:48:39 dougwig: not sure it's effective
22:48:51 so talking about the port stats thingy
22:48:53 i have the bug open in a tab, and i feel guilt when i see it. that's more than last week.
22:49:02 ah
22:49:04 ok so progress!
22:49:20 We don't have a scenario test for metering, do we
22:49:23 We have API and unit tests
22:49:27 let's see how you feel next week when I ship a dead horse's head to your house
22:49:29 nothing that checks that it works end to end, *I think*
22:49:48 I think we should require such a test before any RFEs in the metering area
22:49:48 amuller: we have some tests in the gate
22:49:54 right
22:50:02 but that goes back to a staffing problem
22:50:10 it would be on the bug reporter
22:50:12 rossella_s volunteered but she's spread thin
22:50:46 I gather from this meeting and the past one that no-one has objections or reservations about this request
22:50:50 forget staffing for a sec, does anyone care about metering and want to discuss the usefulness of the RFE?
22:51:04 though someone has to help the reporter through the review pipeline
22:51:32 amuller: I sense that all of us are lukewarm when it comes to metering
22:52:06 in that case I wouldn't block the RFE, just let it through on a best-effort basis, and if the author is persistent enough maybe he or she will actually get reviews :)
22:52:17 but that's not a reason to reject it when there's some degree of usefulness in the counters collected
22:52:37 armax: agreed
22:52:49 ok
22:53:04 no rejection, best-effort basis, assuming the right steps are taken
22:53:13 amuller: can you point out your demands on the bug report?
22:53:18 will do
22:53:29 and we can figure out how stringent or painful they are :)
22:53:39 bug #1599488
22:53:39 bug 1599488 in neutron "[RFE] Enhance Quota API calls to return resource usage per tenant" [Wishlist,Triaged] https://launchpad.net/bugs/1599488 - Assigned to Sergey Belous (sbelous)
22:53:53 this one seems 'easy'
22:54:02 I have asked salv_orlando to look at it
22:54:05 but he's worse than dougwig
22:54:08 erm :)
22:54:14 * dougwig sighs.
22:54:53 I think this also deserves a spec to better understand what the proposal looks like, an API change being required
22:55:26 unless there are complications in making sure that the attribute reliably represents the value the reporter is interested in
22:55:43 then this seems a nice addition to the quota API
22:55:52 any thoughts?
22:55:59 lukewarm anyone?
22:56:08 consistency with the other core projects would be... nice.
22:56:26 it would also be downright un-openstack!
22:56:27 amuller: the quota API is not exactly consistent across projects
22:57:09 I poked sbelous to file the RFE.
22:57:26 so I can be the sherpa for it I guess
22:57:31 ok
22:57:52 let's refresh the bug report with these notes
22:57:58 anything anyone wants to add?
22:58:08 if not let's get 3 minutes back
22:58:16 so that dougwig can do his braindump
22:58:20 2 minutes now
22:58:59 #stopmeeting
22:59:01 bye
22:59:05 bye
22:59:06 #endmeeting
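
Going back to bug 1592028 for a moment: a sketch of what an address-group-aware security group rule might look like. This is speculative; the address_group resource and the remote_address_group_id field are invented for illustration and were not part of the Neutron API at the time.

    # Hypothetical payloads for the address-group idea in bug 1592028;
    # these resources and fields did not exist in the Neutron API then.

    # An address group naming IPs that do not belong to Neutron ports,
    # e.g. external service endpoints:
    address_group = {
        'address_group': {
            'name': 'external-db-endpoints',
            'addresses': ['192.0.2.10/32', '192.0.2.11/32'],
        }
    }

    # A security group rule referencing the group, instead of a single
    # remote_ip_prefix CIDR or a remote_group_id:
    sg_rule = {
        'security_group_rule': {
            'security_group_id': 'SG_ID',
            'direction': 'egress',
            'protocol': 'tcp',
            'port_range_min': 5432,
            'port_range_max': 5432,
            'remote_address_group_id': 'ADDRESS_GROUP_ID',
        }
    }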
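
And for bug 1599488, one plausible shape for a per-tenant quota usage call, assuming a hypothetical /details sub-resource on the existing quotas endpoint; the endpoint and the response format are illustrative, not an existing API at the time of this meeting.

    # Hypothetical per-tenant quota usage call (illustrative only).
    import requests

    resp = requests.get(
        'http://controller:9696/v2.0/quotas/TENANT_ID/details',
        headers={'X-Auth-Token': 'TOKEN'})

    # A response could report limit, usage, and reservations per resource:
    # {"quota": {"network": {"limit": 10, "used": 3, "reserved": 0},
    #            "port":    {"limit": 50, "used": 17, "reserved": 0}}}
    print(resp.json())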