20:00:52 #startmeeting Octavia
20:00:53 Meeting started Wed Nov 21 20:00:52 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:55 :P
20:00:56 o/
20:00:57 The meeting name has been set to 'octavia'
20:01:05 rm_work, only because you asked
20:01:05 Sorry, was slow on the draw there.
20:01:06 o/
20:01:13 lol
20:01:20 o/
20:01:25 Trying to research what I know the hot topic is today...
20:01:38 As we have a LONG history of discussing the solution
20:01:57 #topic Announcements
20:02:17 Today is my last working day at GD, will start at Oath on Dec 3 :)
20:02:30 The summit was last week. It sounds like it went well for us. We had two keynotes mentioning Octavia, so that is great!
20:02:31 rm_work, congratulations!!
20:02:47 rm_work Congrats!
20:02:58 https://j.apps.cgoncalves.pt/UI/Dashboard
20:03:02 oops
20:03:02 rm_work, Congrats and good luck! :)
20:03:04 thanks :)
20:03:05 I know they are using Octavia there so maybe I will have more time?
20:03:19 cgoncalves, Bad Gateway, Bad!
20:03:42 Sadly, I guess the conference venue in Berlin didn't have the bandwidth for the videos, so they had to ship hard drives back to the states. They are estimating mid-December for the non-keynote sessions to be posted.
20:03:54 lol wat
20:04:05 johnsom, that's actually a first. wow.
20:04:14 cgoncalves nginx/1.14.1 ??? Really..... sigh
20:04:22 such sneakernet
20:04:46 aww, cert not trusted? where's your letsencrypt?
20:04:48 johnsom, wasn't pwd protected yet, so I shut it off immediately
20:04:57 lol
20:05:12 Don't want to share your cat videos?
20:05:13 it's letsencrypt'ed
20:05:24 I'm a dog person, sorry
20:05:32 Still, nginx?
20:05:36 hmm, wonder if my root certs are crazy old on this machine
20:05:42 what's wrong with nginx?
:P
20:06:10 johnsom, he's secretly coding an Octavia driver for it
20:06:13 Also, if you weren't aware, the openstack-dev and ops mailing lists are merging and going away.
20:06:31 Please make sure you are subscribed to the new mailing list (if you are interested).
20:06:37 #link http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
20:06:45 All of the details are at that link
20:06:59 i wish they had one for nginx... would be nice to really proof out our interface for the amp driver
20:07:18 w00t, I was not aware of this. thank you
20:07:25 Yeah, but I'm not approving an API so you can load your license key....
20:08:09 There is some talk of new [dev] tags, but I didn't get a chance to read up on that.
20:08:09 the link in that mail seems to go to the wrong place
20:08:14 it's this one I think? http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
20:08:34 Yeah, it's now "discuss"
20:08:37 yeah, subscribed now
20:08:50 johnsom: i assume that'd be a config variable specific to nginx? :P
20:09:15 I guess you could add a flavor for it... sigh
20:09:32 and we'd need to split out config stuff specific to backends into their own sections (ie, haproxy stuff would need to be in [haproxy])
20:10:34 Ok, last thing I have:
20:10:44 Octavia priority review list....
20:10:48 #link https://etherpad.openstack.org/p/octavia-priority-reviews
20:10:55 I have updated the list for Stein
20:11:01 We are way behind on reviews
20:11:14 ~56 reviews or so
20:12:15 Just a reminder. This is a list of patches that *appear* ready for reviews. It is not a list of patches I think should merge. It is roughly ordered in a priority I think makes sense (patches that have children first, impact on the project, etc.)
20:12:33 If it has a -1 or WIP it is not on this list
20:12:44 That is by design
20:13:13 If there are things you think should be on the list, add them to "Awaiting prioritization" so I can order it and be aware of it.
20:13:59 Hopefully this is useful. At least it highlights the need for reviews on patches and our backlog.
20:14:17 Feel free to poke me if you think we should re-order, etc.
20:14:26 Any questions/comments on the review list?
20:15:01 Any other announcements today?
20:15:26 #topic Brief progress reports / bugs needing review
20:15:42 I proposed tagging octavia-tempest-plugin 0.2.0: https://review.openstack.org/#/c/619314/
20:15:45 I am on vacation this week, so not a ton of things going on.
20:16:02 I finished the work for creating octavia-lib and moving the code out to it.
20:16:43 I also finished the first patch in the flavors work, which adds APIs for managing flavors. Still a lot of work to do there, but a start.
20:16:44 as a followup to what cgoncalves just wrote, we are working to promote the RPM for octavia-tempest-plugin for both OSP13 and OSP14 (which is in the works).
20:17:05 Carlos Goncalves proposed openstack/octavia master: Fix devstack plugin for /var/log/dib-build exists https://review.openstack.org/617838
20:17:08 Oh I think this one should be easy (speaking of tempest) https://review.openstack.org/#/c/607382/
20:17:10 Cool.
20:18:10 For whomever is interested, that is also going to be the case in RDO
20:18:49 rm_work FYI, I did include that in the review list
20:18:56 ah k
20:19:08 I also spent time creating the list and doing some reviews last week.
20:19:17 I'm gonna be out until like mid-December prolly for most stuff still, but I REALLY hope I can get back to reviewing at that point
20:19:29 We hope so too!
20:20:13 one of the first things I'll ask my new team is "so, do you guys want me to run for PTL of this project?" :P
20:20:15 Any other updates of note?
20:20:24 Sweet!
20:20:40 It's time for some new leadership around here... lol
20:20:42 though I'm secretly hoping cgoncalves wants to do it :P
20:20:49 I still *hate* paperwork
20:21:00 with a fiery passion
20:21:08 rm_work, there's no paperwork.
we are 100% digital
20:21:10 (enough that it tends to incinerate any paperwork i come in contact with)
20:21:17 lol well that's good then
20:21:37 Ok so we're going to vote now? :)
20:21:40 There is *plenty* of bit-work to do however
20:22:02 Ok, moving on to the hot topic today......
20:22:13 #topic VIP ACLs/SGs
20:22:35 So there is a patch posted:
20:22:38 didn't we discuss this at length at the last PTG?
20:22:56 #link https://review.openstack.org/#/c/602564/
20:23:05 And a patch to immediately deprecate it
20:23:25 rm_work Yes, we have had lengthy conversations about this at multiple PTGs and IRC meetings.
20:23:56 However, this patch is not the approach we decided on at those previous meetings.
20:24:08 #link https://review.openstack.org/#/c/612631/2
20:24:11 deprecation patch
20:24:27 lol k
20:24:41 I had hoped to pull up the meeting logs and have a list of links for the history, but with vacation I didn't get time.
20:24:51 Here is my summary of the issue:
20:25:13 Users want to be able to restrict the source addresses allowed to access the VIP of their load balancers.
20:25:22 Do we agree on the problem statement?
20:25:41 Yup
20:25:42 johnsom, yep
20:25:44 yes
20:25:59 Cool! Just want to set a common groundwork
20:27:11 The challenge comes in that Octavia "owns" the VIP and security group today. It will update it as ports are added for listeners, etc. It will also rebuild it should a failover occur and the SG needs to be rebuilt.
20:27:20 also FYI, we have customers requesting this feature in our queens-based product
20:27:27 Neutron only supports an "OR" operation with SGs on ports.
20:28:06 This means if we stacked a user SG and the octavia SG on a port, whichever is more open wins.
20:29:50 We did move the VIP port itself into the tenant project to allow floating IP assignment.
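The "OR" behavior described above (why stacking a user SG next to the Octavia-managed SG cannot restrict anything) can be illustrated with a toy model. This is not neutron code; the rule shape and function names are invented for the example, and the CIDRs are documentation ranges.

```python
from ipaddress import ip_address, ip_network

def ingress_allowed(security_groups, protocol, port, source_ip):
    """Toy model of neutron SG evaluation on a port (ingress only).

    A security group is a list of rules (protocol, port, source CIDR).
    Traffic passes if ANY rule in ANY attached group matches -- i.e. the
    effective rule set is the union ("OR") of all stacked groups.
    """
    for sg in security_groups:
        for rule in sg:
            if (rule["protocol"] == protocol
                    and rule["port"] == port
                    and ip_address(source_ip) in ip_network(rule["cidr"])):
                return True
    return False

# Octavia's managed SG: the listener port, open to the world.
octavia_sg = [{"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"}]

# A user SG that tries to restrict the VIP to one subnet.
user_sg = [{"protocol": "tcp", "port": 443, "cidr": "203.0.113.0/24"}]

# With both groups stacked on the port, the user's restriction has no
# effect: the more permissive octavia rule still matches any source.
print(ingress_allowed([octavia_sg, user_sg], "tcp", 443, "198.51.100.7"))  # True
print(ingress_allowed([user_sg], "tcp", 443, "198.51.100.7"))              # False
```

This is why simply letting tenants attach their own SG alongside the managed one was never a viable option: restriction would require editing or replacing the Octavia-owned group itself.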
However, this has proven to be a horrible idea, as people are using automation tools with "delete all" logic to manage their projects, and we (at least us) are seeing a large number of support calls where they have shot themselves in the foot.
20:30:37 johnsom, but in this case the user will not be able to delete the SG if the amphora VM is using it
20:31:02 So, due to this, and the fact that moving the SG into the tenant means they can open ports/protocols against our managed amphora, it is not a popular idea for some.
20:31:19 johnsom, which is different to the VIP, as the VIP is not 'attached' to any VM in the amphora driver (just used through allowed address pair)
20:31:22 ltomasbo Yes you can
20:32:05 johnsom, if the SG is in use by another port (as is the case) you cannot remove it, neutron will reject it
20:32:07 If the SG is owned by the tenant they can do anything to it. Common is to open it wide up, no rules
20:32:29 ltomasbo They already own the ports
20:32:55 johnsom, how? the SG is attached to the vrrp port
20:33:03 and that is owned by the octavia tenant
20:33:08 so, the SG cannot be removed
20:34:00 plus, the user is able to define listeners opening any port he wants on the octavia-created SG, therefore the security concern is not a real one, as the user could do just the same without owning the SG
20:34:08 what about the thing i was pushing last time -- having a user port and an internal port? It's been so long i actually forgot the details of my own proposal though, unfortunately
20:34:42 Ah, but they cannot open protocols, and any port they open as a listener is configured and has the appropriate haproxy listening on the port.
20:35:01 openstack security group rule create ec2b4ea4-fe88-41ed-904c-5be2b88d188c
20:35:11 That is really the worst issue
20:35:26 That opens everything basically
20:36:56 johnsom, I don't know enough about octavia, but I though you can specify tcp/udp on the listeners, and create as many as you want
20:37:04 thought
20:37:07 The approach we eventually agreed upon was adding an ACL API to the listener that allows users to add the source ACLs they want applied. We can then manage those, and restore them on failover. It is also a very explicit "here are the rules" interface.
20:37:37 It would be set up similar to adding the ACLs via neutron, but with a limited set of features, down to source IP, etc.
20:38:07 ltomasbo You can, but you can't open protocol 50 for example.
20:38:40 Or the VRRP protocols
20:39:34 johnsom, that is true
20:40:13 We looked at accepting a user SG and cloning it, but then the user experience gets horrible: if they change the SG rules, the LB would not get updated. That is, unless we added a dependency that ties us into the neutron event stream to watch for those update events, which gets super ugly quickly.
20:40:29 johnsom, is there a draft somewhere of what that ACL API extension would look like?
20:40:40 but if you allow the user to pass the SG you are in the same situation, and reimplementing the SG API (restricted) in octavia...
20:40:51 I think there is in a story.
20:41:14 But adding an ACL API is super simple
20:42:06 johnsom, celebdor1 started a PoC to handle that: https://review.openstack.org/#/c/619193/
20:42:15 the downside of the ACL API is reinventing the wheel, sort of
20:42:48 Also not backportable
20:42:50 Yeah, we went down that path and decided it was a bad idea. (allowing the passing of an SG in)
20:42:53 cgoncalves, I agree
20:43:01 But I guess most suggestions won't be anyways
20:43:16 nmagnezi None of these options are backportable
20:43:29 nmagnezi, let's forget about that for a moment.
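The per-listener ACL API discussed above had no published draft at the time, so the following is purely a hypothetical sketch of the idea: the user supplies a plain list of source CIDRs, and the driver decides how to enforce them (neutron SG rules, haproxy ACLs, or hardware ACLs). The function names, and the haproxy rendering as one possible backend, are this sketch's assumptions, not Octavia's actual API.

```python
from ipaddress import ip_network

def validate_allowed_cidrs(cidrs):
    """Validate a user-supplied list of source CIDR strings.

    This is the whole user-facing surface of the proposal: no protocols,
    no remote groups, just "which sources may reach this listener".
    """
    validated = []
    for cidr in cidrs:
        try:
            validated.append(str(ip_network(cidr)))
        except ValueError:
            raise ValueError(f"Invalid CIDR in listener ACL: {cidr!r}")
    return validated

def render_haproxy_acl(cidrs):
    """One possible enforcement path for the amphora driver: an haproxy
    frontend snippet rejecting any source outside the allowed list."""
    allowed = " ".join(validate_allowed_cidrs(cidrs))
    return (f"    acl allowed_src src {allowed}\n"
            "    tcp-request connection reject unless allowed_src\n")

print(render_haproxy_acl(["203.0.113.0/24", "2001:db8::/32"]))
```

Because the API only ever sees CIDRs, a hardware driver (the F5 case mentioned above) could map the same list onto ASIC ACLs instead, which is the portability argument made in the meeting.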
let's focus on the right fix to the problem
20:43:39 Agreed.
20:43:55 johnsom, reimplementing the SG API is a bad idea too, it already exists...
20:44:23 johnsom, I cannot find the story for the ACL API, sorry. could you please share?
20:44:23 ltomasbo: and took a long time and man-hours to get right and sort out the bugs
20:44:25 The ACL option has an advantage as well, one that the F5 guys liked. It allows us to implement it in different ways. For example, the amphora driver *may* use neutron SGs, or may use the haproxy ACLs. The F5 could opt to use the hardware ACLs.
20:44:33 storyboard is not easy to navigate/search
20:44:58 it'd be a simplified version of the existing ACL-type APIs tho right?
20:45:12 cgoncalves, yeah I would have to dig. The title is not ACL, the comment is.... So their search is useless
20:45:15 or do we just THINK it'd start that way, but it'd eventually grow to be the same huge mess?
20:45:49 rm_work: I bet on the latter
20:45:50 I think it would be very simple. We already handle all of the port stuff
20:46:17 unless you really restrict it to just CIDRs
20:46:19 So there is no real need for anything more complex than the source ACLs
20:46:22 which is not too powerful
20:46:53 on the pro side of the ACL API is that we wouldn't be tightly coupled to neutron, which is good for e.g. octavia standalone
20:47:44 The only downside we came up with is you can't use the transitive trust neutron SGs provide. I.e. if ports are members of the same SG they have full access. However, we felt that was better, as this way the access is explicit and not implied.
20:48:12 johnsom: it is not implied in Neutron IIRC
20:48:30 Right. I think the hardware ACLs is a good case too. Why use slow iptables when you have an ASIC that can do the work
20:48:44 you have a rule allowing from same SG, right ltomasbo?
20:48:53 just that it is always put in the 'default' SG
20:49:07 yep, you can simply allow from other ports having a specific SG, not a specific CIRD
20:49:10 CIDR
20:49:48 In neutron, two ports can be part of the same SG. That SG may exclude the IP of one port. However, the traffic will pass
20:50:40 johnsom, in neutron everything is blocked by default, unless you open it
20:50:43 (ingress)
20:50:54 If you don't share the SG, yes
20:51:22 of course, as everything is deny by default, the policy is that it is denied unless you specifically allow it
20:51:43 so, if you add a rule allowing it... it will pass
20:51:57 johnsom: even if you share the SG
20:52:06 If the ports share an SG, it always passes
20:52:18 johnsom, that is not true, it depends on the rule
20:52:32 http://paste.openstack.org/show/735917/
20:52:46 those two rules that allow all ingress and egress from the same group
20:52:49 johnsom, if they don't have the allow from remote_group_id SG_ID, then it will be blocked
20:52:51 are not special
20:52:54 they can be there
20:52:59 They would have had to change that recently if that is the case. Plus a bunch of k8s people will get really grumpy quickly if that stops working
20:53:04 (and they are for the 'default' group)
20:53:08 but they don't need to be there
20:53:31 this has been the case for as long as I can remember
20:53:40 yep, same here
20:53:44 Anyhow, I think that is a bit of a side issue, and not that important for this conversation.
20:53:48 that has been there since newton at least
20:55:53 johnsom: it was important to point out, so that if it would affect its usage we'd not be left with the wrong impression of how it works
20:56:03 but it's not the main part of the topic
20:56:05 of course
20:56:19 What I am talking about is not remote SGs.
20:57:04 Yeah, the ACL list approach would not be affected by neutron's behavior, as the tenant would not have access to the SG for other ports.
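The point ltomasbo is making above is that same-group trust is not automatic: it comes from an explicit `remote_group_id` rule (which neutron happens to put in every tenant's 'default' SG). Extending the earlier toy model with remote-group rules shows the distinction; the rule shape here is invented for illustration and is not neutron's actual schema.

```python
from ipaddress import ip_address, ip_network

def ingress_allowed(dest_sg_rules, source_ip, source_port_groups):
    """Toy model of ingress evaluation with remote-group rules.

    A rule is either {"remote_cidr": <cidr>} or {"remote_group": <sg name>}.
    Everything is deny-by-default; a remote-group rule matches when the
    SENDING port belongs to the named group.
    """
    for rule in dest_sg_rules:
        if ("remote_cidr" in rule
                and ip_address(source_ip) in ip_network(rule["remote_cidr"])):
            return True
        if ("remote_group" in rule
                and rule["remote_group"] in source_port_groups):
            return True
    return False

# Neutron's 'default' SG ships with an allow-from-same-group rule, so two
# ports in it can always talk -- the behavior johnsom describes.
default_sg = [{"remote_group": "default"}]
print(ingress_allowed(default_sg, "10.0.0.5", {"default"}))  # True

# A custom SG with only a CIDR rule: even though both ports are in "web",
# off-list traffic is blocked. Sharing the group alone implies nothing --
# the trust lives entirely in the remote_group rule.
web_sg = [{"remote_cidr": "203.0.113.0/24"}]
print(ingress_allowed(web_sg, "10.0.0.5", {"web"}))          # False
```

Under this model both sides of the exchange are right: k8s-style same-group trust keeps working because the 'default' group carries the rule, while a group without it blocks same-group traffic like any other source.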
20:57:50 johnsom, I don't get that
20:58:58 By adding the ACL API, Octavia would retain ownership of the SG applied to the amphora. A tenant would not be able to add the SG to their other ports.
21:00:08 ahh, but by contrast you will be redoing the SG API (and that was not a simple task)
21:00:43 A very small part of the SG API. Probably could be done in a week or so
21:01:21 johnsom: ltomasbo: I probably missed it before (I was putting the toddler to bed). Why does it matter if they put the SG on other ports if it is one SG that they precreated, like in the patch I sent
21:02:25 If the VIP and other ports share an SG, there are no rules applied. But again, this isn't really the bigger issue.
21:02:31 johnsom: is that small part of the API just a single CIDR for all the listeners in an LB? Or a list of CIDRs? or different CIDRs per listener?
21:03:16 johnsom: I already showed that even if they share the SG, unless the SG has explicit rules to allow same-SG traffic, the rules apply
21:03:18 celebdor1, and the remote_group_id...
21:03:40 It would have to be a list. I would assume per listener for flexibility. CIDRs only.
21:03:46 ltomasbo: no, I don't think the ACL proposal Octavia pushes for has remote group
21:04:00 since it can't refer to Neutron stuff if it is a reimplementation
21:04:07 Dang, we are out of time.
21:04:10 #endmeeting