20:00:03 #startmeeting Octavia
20:00:04 Meeting started Wed Mar 29 20:00:03 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:07 The meeting name has been set to 'octavia'
20:00:14 Hi folks
20:00:47 Not a big agenda today, will probably be mostly talking about security groups
20:00:48 hello
20:01:01 Hi jniesz, welcome
20:01:06 o/
20:01:40 o/
20:01:41 #topic Announcements
20:01:52 I don't have much for announcements this week.
20:02:12 The octavia v2 API is merged/merging through pools, so that is awesome.
20:02:19 o/
20:02:36 The diskimage-builder project is now an infrastructure project. Hopefully that will bring some stability.
20:02:59 Any other announcements?
20:03:17 #topic Brief progress reports / bugs needing review
20:03:32 not so fast, wanted to brag about being +2 on openstack-ansible-octavia
20:03:40 I have been focused on testing the API patches and working on PTL-ish things
20:03:52 Ha, ok congrats!
20:04:07 yes that has helped us a lot. thanks xgerman!
20:04:15 y.w.
20:04:26 still some work to do over there :-)
20:04:50 o/
20:05:01 Really nice to have OSA support. I want to try that out.
20:05:51 I should also mention the service type is now load-balancer. That patch went in for infra and octavia. Maybe still pending on OSA
20:06:02 Hopefully that is behind us now.
20:06:20 Any other updates/patch discussions before we jump into security groups?
20:06:58 #topic Continue discussion about security groups
20:07:03 only update I had is that I am working on the L3 active-active spec
20:07:06 #link https://review.openstack.org/445274
20:07:22 have most of the framework
20:07:31 still need to work on the data model changes
20:07:36 jniesz Excellent. Is it posted for review yet?
20:08:43 did you want me to post what I have, or wait until more of the details for the data model and specific Octavia changes are in?
20:09:06 jniesz incremental is fine. You can post and mark it WIP.
20:09:12 +1
20:09:24 ok
20:09:26 That way if we have time we can get started reading it.
20:09:40 Work In Progress (WIP) is a good way to go
20:09:59 That way we know it's not yet ready for full review
20:10:10 but being worked on
20:10:30 ok, I will mark it as WIP
20:10:31 Ok, so security groups. We ran out of meeting time last week discussing this.
20:10:55 yeah, so I checked with the author of the patch
20:11:25 Ok, it doesn't look like there are updated comments on the patch
20:11:56 no, he basically wants to restrict access to a list of servers
20:12:16 so access control lists would solve his use case
20:12:20 So source IP ACLs basically
20:12:25 yep
20:12:40 Ok, that makes sense to me.
20:12:58 https://usercontent.irccloud-cdn.com/file/lCaLWl9C/lbchain.png
20:13:06 he drew me a picture
20:13:23 We had a proposal last week about doing a policy/rules implementation similar to how we do L7. Is that still on the table?
20:13:38 I think this is what we ought to do
20:14:11 so... they're using LBs as Firewalls?
20:14:17 That is all I see here <_<
20:14:20 the security_group model is just too rich to reasonably map to a load balancer ACL, and most people only want an ACL
20:14:31 I like that approach as well.
20:15:16 rm_work we currently firewall down to the TCP port(s) on the load balancers. This would extend that to be able to restrict the source IPs for the VIP port as well.
20:15:25 right, but
20:15:27 +1
20:15:32 isn't what they're looking for ... FWaaS?
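For context, a sketch of what the L7-style policy/rules proposal mentioned above might look like from the API side. Everything here is hypothetical: the acl_rules endpoint, the resource name, and its fields do not exist in Octavia and simply mirror the shape of the existing L7 policy/rule API; the endpoint, token, and IDs are placeholders.

    # Hypothetical sketch only: the /acl_rules endpoint and its fields do not
    # exist in Octavia; they mirror the shape of the L7 policy/rule API.
    import requests

    OCTAVIA_ENDPOINT = "http://203.0.113.5:9876"  # assumption: an Octavia v2 API endpoint
    TOKEN = "gAAAA..."                            # assumption: a valid Keystone token
    LISTENER_ID = "8a1b..."                       # assumption: an existing listener

    acl_rule = {
        "acl_rule": {
            "action": "ALLOW",            # hypothetical: permit matching sources
            "type": "SOURCE_IP",          # hypothetical rule type
            "value": "203.0.113.0/24",    # the servers the patch author wants to allow
        }
    }

    # Hypothetical URL, by analogy with /v2.0/lbaas/l7policies/<id>/rules
    resp = requests.post(
        f"{OCTAVIA_ENDPOINT}/v2.0/lbaas/listeners/{LISTENER_ID}/acl_rules",
        json=acl_rule,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()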
20:15:47 I mean, we'd just be adding in FWaaS-Lite
20:15:57 sure, it's something we CAN do at our layer
20:16:12 but it violates the UNIX pipeline principle IMO
20:16:17 do one thing and do it well
20:16:25 that has always been the argument: whether we should just slap the other service on our port, expose our port to the other service, or roll our own and make the other service a driver
20:16:34 I think the issue with FWaaS is the VIP ports are owned by octavia/neutron and are not visible to the user to add FWaaS to them.
20:16:35 adding in firewalls-lite to LBs seems odd
20:16:39 we also don't make you pass in a nova vm
20:16:46 hmm
20:17:08 ok, so if FWaaS *isn't usable* with LBs, that's a problem
20:17:17 maybe we should be addressing that?
20:17:30 Implementation-wise, the default would just be to implement them as security groups on the port.
20:17:31 or is there just no way to make that work, and adding this is just the compromise?
20:18:12 FWaaS/SG/QoS all work on the port
20:18:13 I just want to caution against increasing our scope to overlap the scope of another project
20:18:21 so we would need to expose the port somehow
20:18:36 unless we can show a clear reason why we need to
20:18:43 xgerman I'm not up to speed on FWaaS v2, is the blocker there that they don't own the port? Would our SG still be enforced under FWaaS?
20:18:44 ok, so maybe that's the clear reason? It is impossible to allow FWaaS to work with an LB?
20:19:22 johnsom FWaaS and SG complement each other, so only what both allow on the port would come through
20:20:17 So the issue is the user can't see the port to apply FWaaS to it
20:20:25 yep
20:21:02 now if we look at the use case: the user wants an ACL. he doesn't care if we do it with haproxy, SG, or FWaaS
20:21:09 which looks like a driver to me
20:21:14 maybe this could be implemented as an optional driver somehow? (as suggested above)
20:21:30 i mean, an optional driver in octavia
20:22:07 Yeah, I think with ACLs there is flexibility as to how it's implemented. It could be an SG on the port or could be implemented by the vendor appliance.
20:22:10 i mean, i'm not saying we ABSOLUTELY CAN'T implement it the way suggested above
20:22:18 I just want us to be very certain it makes sense to do so
20:22:26 +1
20:22:37 +1
20:22:49 +1 I think being able to apply SG, FWaaS, etc. on our port is also valuable
20:23:07 but I think we can split that into a different problem
20:23:20 and we might be able to leverage SFC
20:23:38 I think allowing the user to pass in a SG is out of the question. I think it has too much potential to do harm to the amps.
20:24:08 Plus it gets very strange with the port owned by project A and the SG owned by project B
20:24:10 yep, I think those things on a port should be solved through SFC
20:24:13 like, actual harm somehow? or just "shoot self in foot" harm
20:24:33 like opening holes and making security hard
20:24:35 actually yeah i'm not sure neutron ALLOWS that
20:24:47 well, again, that'd be of the "shoot self in foot" variety
20:24:53 no?
20:24:59 rm_work, the latter, i think. for example an SG that conflicts with a lb listener
20:25:03 Well, right now we lock it down to the listener ports. We know that if any of the packages in the namespace open something it won't be exposed to the internet, etc.
20:25:12 right
20:25:31 so ... how much do we want to protect the user from themselves, but in the process reduce functionality
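For reference, a minimal sketch of the default implementation discussed above, an SG on the VIP port, using openstacksdk. The cloud name, port name, listener port, and CIDR are illustrative; roughly speaking, today's lockdown only opens the listener ports, and the ACL proposal would narrow those same rules with a remote_ip_prefix.

    # A minimal sketch, assuming openstacksdk; names and CIDRs are illustrative.
    import openstack

    conn = openstack.connect(cloud="devstack")  # assumption: a configured cloud

    sg = conn.network.create_security_group(
        name="lb-vip-acl",
        description="Listener ports restricted to an allowed source CIDR",
    )

    # One rule per listener port; the remote_ip_prefix is the ACL part.
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        protocol="tcp",
        port_range_min=443,
        port_range_max=443,
        remote_ip_prefix="203.0.113.0/24",  # allowed clients only
    )

    # Attach the group to the (operator-owned) VIP port.
    vip_port = conn.network.find_port("loadbalancer-vip-port")  # illustrative name
    conn.network.update_port(vip_port, security_group_ids=[sg.id])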
20:25:33 Since we own the lifecycle of the amp, it seems risky
20:25:34 but if the user manages the SG, he has to know whether there are ports in our implementation which should never be exposed
20:25:48 well
20:25:50 it's for the VIP right?
20:25:53 so...
20:25:56 VIP
20:25:58 we don't ever expose anything
20:26:01 or rather
20:26:03 there's nothing else open
20:26:09 johnsom, plus, think what happens if the sec group blocks the traffic between the amp agent and the octavia services.. yikes.
20:26:13 Shouldn't be
20:26:34 nmagnezi That is a different port, so it wouldn't be impacted
20:26:35 nmagnezi: again, it's just the VIP
20:26:37 ah, though they could block HM
20:26:42 HM goes over VIP, no?
20:26:49 or just vrrp
20:26:50 keepalived goes over vip
20:26:54 oh, right.
20:26:55 No, but the VRRP and sticky do
20:26:57 yeah k
20:27:02 so they could block vrrp
20:27:07 which is a "shoot self in foot" situation
20:27:21 which triggers failover, which....
20:27:22 well, which requires them to know what we do under the hood
20:27:35 does it trigger failover?
20:27:40 so let's get back on the rails
20:27:47 yeah k agree
20:27:47 1) ACL - yes/no
20:28:04 2) Way to apply FWaaS, etc. on the port - which might be SFC
20:28:56 With FWaaS and SFC it gets back to the user experience argument of having to do something in three places, whereas an ACL is all in one place/call.
20:29:16 Granted, I think that is why there are automation tools, but I hear this argument
20:29:18 that's why I separated them into two issues
20:29:20 :-)
20:29:54 if we really can't TECHNICALLY expose the port for them to add SGs to
20:29:59 which I think is actually true
20:30:05 then ... 1) Yeah, I guess that's fine
20:30:09 we can have ACLs in Octavia and expose SFC for more complex use cases
20:30:26 Yeah, maybe
20:30:39 one day we need to play nice with others
20:30:51 HA
20:30:58 We try to....
20:32:12 What it comes down to is that we want the ability to share a port, with restrictions, with the user. But I don't think OpenStack is there yet.
20:32:55 well, I wouldn't necessarily share the port; as long as we can bring it into an SFC they can slap FWaaS or whatever on it
20:33:08 what is SFC again?
20:33:21 Service Function Chain
20:33:34 ah k
20:33:35 yeah
20:34:12 yeah, they basically forward packets through different services, which is basically what we want. Full control of our port, and the traffic gets filtered elsewhere
20:34:43 but there might be other ways to achieve that since SFC isn't doing that hot
20:35:09 but I'd like to tackle that independently from the ACL problem
20:36:37 I haven't kept up with SFC, so I'm not sure what the state is or how easy/hard that would be
20:37:04 yeah, I think how we expose ourselves to FWaaS and other services is a future thing
20:39:11 I think it makes sense to have ACLs in Octavia since haproxy supports them
20:39:26 deep-packet-inspection likely not
20:40:29 It does support both, actually. I think how it's implemented should be flexible. The default is an SG in neutron on the port, with a way for drivers to do something different if necessary
20:41:11 ok
20:41:23 as long as we make it a driver we should be good
20:41:37 Do we want another week to investigate other options, or are we at a point where we can give guidance on the RFE?
20:42:35 Don't everyone speak up at once....
20:43:12 I think having ACLs in Octavia makes a lot of sense
20:43:39 we are going a different route with QoS but I think that's ok
20:43:46 Me too. Does anyone else disagree?
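Since drivers would be free to enforce the ACL somewhere other than Neutron, here is a sketch of what haproxy-level enforcement could look like, in the spirit of the "haproxy supports it" point above. It assumes jinja2, and the frontend fragment is illustrative rather than Octavia's actual haproxy template.

    # A driver-side alternative: enforce the source-IP ACL in haproxy itself.
    # The fragment is illustrative, not Octavia's real template.
    from jinja2 import Template

    FRONTEND = Template(
        "frontend {{ listener_id }}\n"
        "    bind {{ vip }}:{{ port }}\n"
        "    acl allowed_src src {{ allowed_cidr }}\n"
        "    tcp-request connection reject unless allowed_src\n"
        "    default_backend {{ pool_id }}\n"
    )

    print(FRONTEND.render(
        listener_id="listener-1",        # illustrative names
        vip="198.51.100.10",
        port=443,
        allowed_cidr="203.0.113.0/24",   # the ACL: only these clients may connect
        pool_id="pool-1",
    ))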
20:44:21 (though in QoS we had comments to go the same route - add them to Octavia instead of taking a policy id)
20:44:30 I almost miss Stephen, as he wasn't shy....
20:44:39 yep
20:44:56 but he might come up with something crazy
20:45:13 rm_work?
20:45:28 The ACL path would allow for more advanced ACLs to be implemented in the future, such as on HTTP header fields, etc.
20:45:35 yeah, that's fine
20:46:30 diltram nmagnezi jniesz Any comments?
20:46:46 johnsom: I'm ok with this
20:46:48 I think it makes sense
20:47:06 it is a missing gap, and users might need to program ACLs on the VIP port
20:47:10 and since it's all going to be a driver, we can always change this behaviour
20:47:25 Ok, thanks for the feedback.
20:48:12 johnsom, i don't know much about haproxy ACLs, but from what I do understand it seems more of an "in house" solution for octavia that answers the requested use case, so i tend to agree with xgerman and rm_work
20:48:29 We will give guidance down the ACL path.
20:48:40 #topic Open Discussion
20:48:48 Any other topics for today?
20:48:55 do we need to chat about QoS? Or are we good?
20:49:01 I just had one issue to bring up
20:49:08 I think we decided that last week
20:49:13 k
20:49:17 Unless something new came up
20:49:23 reedip commented
20:49:30 jniesz Go ahead
20:49:36 macros.j2 template for the newton stable branch
20:49:48 the timeout check is missing the 's'
20:50:00 yes, that could be
20:50:03 timeout check {{ pool.health_monitor.timeout }}
20:50:08 i noticed this was fixed
20:50:17 but never backported
20:50:19 so we need to backport
20:50:27 is that something we could backport in?
20:50:38 absolutely!
20:50:48 Yeah, we can do that.
20:50:59 I noticed on our LBs today the backends were timing out : )
20:51:17 health checks set at 2 ms
20:51:20 vs 2s
20:53:02 Yeah, that falls into the stable policy for newton, so we can do that. I will take a look. I think we also need to fix some DIB issues on the stable branch and will need to do a release for that.
20:53:18 awesome, thanks!
20:54:00 Any other topics?
20:55:05 nop :)
20:55:15 I am good as well
20:55:27 Ok, thanks folks. Catch you next week or in our IRC channel
20:55:34 thx
20:55:36 cu
20:55:37 #endmeeting
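For reference on the macros.j2 issue raised near the end: haproxy interprets a bare integer in a timeout directive as milliseconds, so a 2 second health monitor timeout rendered as "timeout check 2" becomes 2 ms. A small sketch of the bug and the unit-suffix fix, assuming jinja2; the fragments are simplified from the template line quoted in the log rather than reproducing the exact upstream patch.

    # Demonstrates the Newton macros.j2 unit bug: haproxy reads a bare integer
    # as milliseconds, so the rendered value needs the 's' suffix.
    from jinja2 import Template

    timeout = 2  # health monitor timeout, intended to be seconds

    buggy = Template("timeout check {{ timeout }}").render(timeout=timeout)
    fixed = Template("timeout check {{ timeout }}s").render(timeout=timeout)

    print(buggy)  # "timeout check 2"  -> haproxy treats this as 2 ms
    print(fixed)  # "timeout check 2s" -> the intended 2 seconds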