20:00:00 <johnsom> #startmeeting Octavia
20:00:01 <openstack> Meeting started Wed Mar 22 20:00:00 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:05 <openstack> The meeting name has been set to 'octavia'
20:00:14 <johnsom> Hi folks
20:00:31 <xgerman> o/
20:00:51 <johnsom> #topic Announcements
20:01:33 <johnsom> Just an FYI, we are rescheduling one of our sessions at the Boston summit to be earlier in the day.  They had scheduled two of them back-to-back, so I asked for a change.
20:01:44 <johnsom> The schedule should be updated soon.
20:02:16 <johnsom> Other than that, the only thing I have for announcements is we merged the load balancer API for Octavia v2 API....
20:02:40 <johnsom> Any other announcements today?
20:03:16 <johnsom> Oh, I probably should mention the logging thing...
20:03:36 <johnsom> Per the mailing list it has been decided to stop localizing log entries.
20:04:02 <johnsom> This means no more _LI, _LW, _LE for log entries.
20:04:17 <rm_work> o/
20:04:25 <johnsom> However, there will still be localization for exception strings that may end up being displayed to the user.
20:04:36 <johnsom> Those should now be tagged with just _
20:05:29 <johnsom> We should clean this up, but the two patches posted are search/replace kind of things and not clean.  Plus we need to intelligently update the hacking rules (not just delete).
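[Editor's note: a minimal sketch of the logging convention discussed above. The `_` stub and the `fail_over` helper are illustrative, not Octavia code; in-tree code would import `_` from the project's i18n module.]

```python
import logging

def _(msg):
    # Stand-in for the oslo_i18n translation marker; real code would
    # import `_` from octavia.i18n rather than defining it like this.
    return msg

LOG = logging.getLogger(__name__)

def fail_over(amphora_id):
    # Log messages are now plain strings, with no _LI/_LW/_LE wrapper.
    LOG.warning("Failing over amphora %s.", amphora_id)
    # Exception strings that may be shown to the user keep the `_` marker.
    raise RuntimeError(_("Amphora %s could not be failed over.") % amphora_id)
```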
20:05:36 <ankur-gupta-f4> sorry I'm late

20:05:49 <johnsom> Ok, now that is all of my announcements.
20:06:02 <johnsom> #topic Brief progress reports / bugs needing review
20:06:32 <johnsom> I am just about done with the load balancer section of the API reference doc.  I just have the status tree left to add.
20:06:47 <johnsom> Other than that, a bunch of reviews, etc.
20:06:51 <johnsom> Any other updates?
20:07:03 <ankur-gupta-f4> The two minor bug fixes I tossed up for v2 LB API fixes: https://review.openstack.org/#/c/448307/ and https://review.openstack.org/#/c/448317/
20:07:30 <ankur-gupta-f4> also initial frameworks for the python-octaviaclient are up
20:07:41 <johnsom> Cool, glad to get a few of those bugs closed out.  Will review after the meeting
20:07:47 <ankur-gupta-f4> side note: can't do a "pip install python-octaviaclient" yet
20:08:01 <johnsom> Right, I would have to cut a release for that
20:08:08 <ankur-gupta-f4> ah. duh
20:08:18 <johnsom> grin
20:08:24 <rm_work> ah i had a similar thing https://review.openstack.org/#/c/448626/
20:08:55 <johnsom> Is this in a state we could start reviewing the CLI syntax?  I am hoping we can clean up a few things that neutron client did not handle well
20:09:05 <ankur-gupta-f4> not even close
20:09:11 <ankur-gupta-f4> the base framework all needs to go in
20:09:16 <johnsom> rm_work Ok, added to my after-the-meeting list
20:09:25 <ankur-gupta-f4> starting with cutting a release so OSC knows that octaviaclient exists
20:09:27 <johnsom> ankur-gupta-f4 ok
20:09:56 <ankur-gupta-f4> https://review.openstack.org/#/c/447068/ is failing everything  because of it
20:10:07 <ankur-gupta-f4> once that is in, the actual work can begin
20:10:09 <johnsom> ankur-gupta-f4 Is the client plugin self-documenting, does sphinx generate docs for it based on the code?
20:10:40 <ankur-gupta-f4> not sure. haven't gotten to that step yet
20:11:05 <ankur-gupta-f4> started this. https://review.openstack.org/#/c/446223/
20:11:20 <ankur-gupta-f4> but had to pause since the skeleton for the client hasn't been made yet, so went back and started it
20:11:21 <ankur-gupta-f4> here:
20:11:30 <ankur-gupta-f4> https://review.openstack.org/#/c/448331/
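[Editor's note: OSC discovers plugin commands through setuptools entry points, which is why a python-octaviaclient release must exist before `openstack loadbalancer list` resolves. The module paths and class names below are assumptions for illustration, not the actual python-octaviaclient layout.]

```ini
[entry_points]
openstack.cli.extension =
    load_balancer = octaviaclient.osc.plugin

openstack.load_balancer.v2 =
    loadbalancer_list = octaviaclient.osc.v2.load_balancer:ListLoadBalancer
    loadbalancer_show = octaviaclient.osc.v2.load_balancer:ShowLoadBalancer
```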
20:12:19 <johnsom> I posted a comment on the first patch.  I think "load balancer" should be "loadbalancer"
20:12:49 <ankur-gupta-f4> currently my progress... if everything is installed correctly, "openstack load balancer list" is acknowledged by OSC as a command, but does nothing yet since the octaviaclient isn't bound to the Octavia API yet. :D
20:13:06 <ankur-gupta-f4> as in the namespace should be "loadbalancer"
20:13:25 <johnsom> As in the command "openstack loadbalancer list" would work
20:13:47 <ankur-gupta-f4> I can ping out to see what dtroyer thinks, but I have a feeling he will want it to be 'load balancer' and so do i
20:14:25 <johnsom> https://etherpad.openstack.org/p/octavia-ptg-pike line 94 is what dtroyer and I came up with at the PTG
20:14:45 <johnsom> So, yes, ping him and let us know if that is changing
20:15:00 <ankur-gupta-f4> hate it. but if that's what came up at the PTG that's what it will be
20:15:15 <johnsom> What do the other cores here think?
20:16:31 <johnsom> rm_work xgerman?
20:16:36 <nmagnezi> I'm not a core, but I think loadbalancer is preferable.
20:16:50 <xgerman> +1
20:16:55 <johnsom> Thanks nmagnezi
20:17:09 <johnsom> Sorry, I should have not said core other than to get their attention
20:17:13 <ankur-gupta-f4> done, will update all
20:17:16 <rm_work> same, I think
20:17:20 <rm_work> would PREFER lb tho
20:17:25 <rm_work> openstack lb list :P
20:17:29 <xgerman> maybe they support alias?
20:17:30 <nmagnezi> johnsom, np
20:17:45 <nmagnezi> rm_work, lb is very much lbaasv1 like :)
20:17:47 <johnsom> Yeah, we were going to ask about that I think
20:17:55 <ankur-gupta-f4> tab completion can be used if you don't want to type it all out rm_work :P
20:18:00 <rm_work> heh k
20:18:00 <johnsom> Yeah, I have the same feeling about lb
20:18:07 <rm_work> then definitely loadbalancer
20:18:09 <johnsom> lb could be linux bridge
20:18:12 <rm_work> because otherwise it'd be a space
20:18:15 <rm_work> yeah yeah fine
20:18:33 <ankur-gupta-f4> loadbalancer possible alias of lb. done-done
20:18:38 <johnsom> Ok
20:18:47 <johnsom> how about alias of lbaas
20:18:50 <ankur-gupta-f4> will reach out to the OSC cores to confirm their thoughts to double check
20:19:04 <johnsom> ankur-gupta-f4 Thanks!
20:19:24 <johnsom> #topic When should we lock neutron-lbaas for new features? Revisited
20:19:35 <johnsom> So, I am bringing this up again
20:20:10 <johnsom> The more I thought about where we are in the API work and how short Pike-1 is, I'm thinking we should reconsider and maybe do Pike-2
20:20:19 <nmagnezi> feature freeze != bug fixes, right?
20:20:24 <johnsom> Correct
20:20:25 <xgerman> yep
20:20:36 <johnsom> This is just blocking new features
20:21:05 <johnsom> I would also strongly encourage dependent patches for the same change in Octavia
20:21:06 <nmagnezi> ack. do we have any new features currently up for review at the moment?
20:21:13 <johnsom> Yes, we do
20:21:22 <nmagnezi> I know about a single patch for manual rescheduling
20:21:35 <johnsom> Including the next topic, QoS and security groups
20:21:51 <nmagnezi> #link https://review.openstack.org/#/c/447177/
20:21:52 <johnsom> Thoughts?  Comments?
20:21:55 <nmagnezi> oh, alright
20:22:41 <xgerman> so QoS is pretty straight forward and we have a spec
20:22:47 <nmagnezi> mmm.. not that I have any specific feature in mind, but when lbaasv2 moves to Octavia (pass-through), will new features get accepted there?
20:22:48 <johnsom> If I don't hear any comments/discussion I will move forward with feature freeze at Pike-2 milestone
20:23:13 <xgerman> yep, P-2 seems right
20:23:17 <rm_work> feature freeze at ocata-3 is out of the question?
20:23:21 <johnsom> nmagnezi The idea is the new features should be done against octavia
20:23:38 <nmagnezi> no objections here (for freeze in P-2)
20:23:42 <johnsom> rm_work funny
20:23:45 <xgerman> indeed our goal is to phase out LBaaS
20:23:48 <johnsom> I kind of wish
20:24:00 <nmagnezi> johnsom, ack. sounds good.
20:24:16 <xgerman> so if you do a new feature in Octavia it won’t be available in Neutron LBaaS
20:24:31 <xgerman> or are we allowing adding API calls
20:24:35 <nmagnezi> I was asking this because I noticed some stuff gets sent both to Octavia and the neutron-lbaas repo. which is a bad thing..
20:25:00 <johnsom> xgerman My thought is if it's a new feature, it is octavia only
20:25:06 <xgerman> agreed
20:25:09 <johnsom> Pass through ignores it
20:25:10 <nmagnezi> +1
20:25:16 <johnsom> Or, I guess rejects it
20:25:21 <xgerman> yep
20:25:48 <johnsom> Right the point here is to stop duplicating work and stop porting nightmares
20:26:25 <johnsom> Ok, I think we are agreed on this
20:26:35 <xgerman> cool
20:26:42 <johnsom> #agreed Feature freeze neutron-lbaas at the Pike-2 milestone
20:27:06 <johnsom> #topic Discuss QoS and Security Groups
20:27:21 <johnsom> xgerman Do you want to talk to this?
20:27:25 <xgerman> sure
20:27:31 <johnsom> #link https://review.openstack.org/#/c/441912/
20:27:35 <johnsom> for QoS
20:27:42 <johnsom> #link https://review.openstack.org/445274
20:27:47 <johnsom> for security groups
20:27:53 <nmagnezi> johnsom, maybe it would be a good idea to submit comments about the feature freeze in P-2 on the relevant patches so people will know it is coming and should finalize their work
20:28:08 <xgerman> +1
20:28:19 <johnsom> nmagnezi Yes, good idea.  I will also send out e-mail to the mailing list.
20:28:34 <nmagnezi> :)
20:29:03 <xgerman> So the challenge with those two patches (and other potential ones working on the port) is that this ties us pretty closely to Neutron, and it is unclear how hardware vendors can implement that
20:29:32 <johnsom> Yes.  I also have concerns since our service account current "owns" the port
20:30:25 <xgerman> biggest problem is that users can change the rules outside Octavia and we won’t know about it
20:30:49 <xgerman> or a hardware vendor would need to track any innovations in those systems
20:31:53 <xgerman> QoS is simple enough so we could swing it, but security groups are really complex and orthogonal to a load balancer's access control features
20:32:29 <xgerman> #link https://www.haproxy.com/doc/aloha/7.0/haproxy/acls.html
20:32:49 <xgerman> so one idea is to have our own API and then a driver for security groups, etc.
20:33:16 <johnsom> #link http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4-acl
20:33:28 <xgerman> thanks johnsom
20:34:15 <johnsom> So maybe we discuss them one at a time
20:34:22 <xgerman> ok
20:34:25 <johnsom> For QoS I see two options:
20:35:05 <johnsom> We add parameters to our API for the actual settings.  Reference driver sets these via neutron, vendor drivers do what ever they want.
20:36:28 <johnsom> Or, we accept a neutron qos policy ID.  Reference applies that to the port (using neutron super powers since it's cross project) and vendor gets to interact with neutron to either set or query the settings and/or register for some kind of change notifications.
20:36:38 <johnsom> Does that sound about right for the options?
20:36:45 <xgerman> yes
20:37:24 <xgerman> #vote?
20:38:02 <johnsom> I was waiting to see if there was discussion....
20:38:42 * xgerman crickets
20:38:53 <johnsom> I lean towards option one as it keeps the mechanics clean.  The only con I see is extra parameters
20:39:27 <xgerman> ok, seems fine for me — though then people can’t share QoS policies and update at once…
20:39:50 <johnsom> Option two has the advantage of, one parameter and if new QoS features come into neutron, we inherit.  Probably less code on our side
20:40:08 <johnsom> Yep.  Sharing is a good point.
20:40:39 <johnsom> Option two just puts more work on the drivers if they don't want to just use neutron for QoS
20:40:59 <johnsom> Ok, I guess I lean more towards option two the more I think about it
20:41:40 <johnsom> Anyone else want to comment or should we put it to a vote?
20:42:03 <xgerman> well, if a hardware vendor doesn’t implement QoS and can’t use neutron — it will just silently fail?
20:42:13 <m-greene-> does this push Barbican API support back into each vendor driver?
20:42:17 <m-greene-> to be consistent?
20:42:19 <johnsom> I guess a vote is kind of silly, we should all just comment on the spec
20:42:34 <xgerman> no, we will keep barbican as is ;-)
20:42:35 <nmagnezi> please link the spec :)
20:43:00 <m-greene-> if octavia owns the API queries back into neutron, we know it will do the right thing (neutron client, and not direct DB access :)
20:43:04 <xgerman> this is limited to things which happen on the vip port
20:43:07 <m-greene-> can’t trust us vendors to not violate things
20:43:13 <johnsom> m-greene- No, I think option two would actually be more consistent with how we want to do barbican, i.e. interact with it from the octavia code
20:43:25 <johnsom> #link https://review.openstack.org/#/c/441912/
20:43:40 <m-greene-> i like octavia maintaining point of strategic control
20:44:02 <johnsom> m-greene-  Ok, good feedback.
20:44:17 <m-greene-> we need to learn a single API, instead of rummaging through every other project API
20:45:03 <johnsom> Right and if you are fine with neutron's QoS implementation, you basically have to do nothing.  It's only if you want to not use neutron's implementation that you have to put in the extra work....
20:45:42 <johnsom> Ok, good discussion.  All comment on the spec.
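[Editor's note: a minimal sketch of option two as discussed above, where the reference driver applies a user-supplied neutron QoS policy ID to the VIP port. The helper name is illustrative, not Octavia's driver interface; `qos_policy_id` is the standard port attribute when neutron's QoS extension is enabled.]

```python
def build_vip_qos_update(qos_policy_id):
    # Request body for neutron's port update call attaching a QoS
    # policy to the VIP port.
    return {'port': {'qos_policy_id': qos_policy_id}}

# With python-neutronclient the reference driver would apply it roughly as:
#   neutron.update_port(vip_port_id, build_vip_qos_update(policy_id))
```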
20:45:49 <johnsom> On to security groups....
20:46:46 <johnsom> This one is a bit more tricky
20:46:58 <xgerman> yes, and we still haven’t seen the use case
20:47:49 <johnsom> In my opinion it would be bad if we allowed users to open ports that are not defined as listener ports.
20:48:02 <xgerman> or close ports defined as them
20:48:15 <johnsom> It would only expose what is in the network namespace (netns)
20:48:41 <johnsom> True, that is a shoot yourself in the foot scenario.  Adds to operator support overhead (which I don't want).
20:49:15 <johnsom> I think the interesting use case is if you want to restrict the ACL by subnets/source IPs
20:49:16 <m-greene-> user ::= end-user of lb service?
20:49:24 <xgerman> yes
20:49:26 <johnsom> Yes, end user
20:50:05 <johnsom> We have the same kind of options:
20:50:23 <nmagnezi> johnsom, xgerman, I also don't know what the expected use case is here. the only use case that comes to my mind is if someone wants to have a load balancer that is accessible only from specific subnets
20:50:27 <xgerman> well, its a bit different since we would like people not have the power to change ports
20:51:03 <nmagnezi> in this case I *think* this can be achieved with floating IPs, but I'm not 100% sure
20:51:04 <johnsom> Add an ACL api similar to the L7 api.  Reference driver validates and updates neutron security group on port, vendor driver gets the choice to implement themselves or use neutron security groups.
20:51:14 <m-greene-> who or what originated this request for SG?
20:51:33 <m-greene-> is there a blueprint?
20:51:34 <xgerman> #link https://review.openstack.org/#/c/445274/
20:51:41 <m-greene-> :)
20:51:44 <xgerman> #link https://blueprints.launchpad.net/heat/+spec/security-groups-lbaas
20:51:53 <johnsom> Yeah, and that blueprint won't fly
20:52:00 <johnsom> As I have commented
20:52:28 <m-greene-> gosh- we can do that, just as an API parameter
20:52:30 <xgerman> yeah, I am not even sure if we want to do security groups and not haproxy ACLs
20:53:00 <johnsom> Option two is to accept a neutron security group ID and clone it.  The problem here is tracking changes in neutron.
20:53:14 <xgerman> and that they can shoot themselves in the foot
20:53:24 <johnsom> Option three, blindly apply their security group with some initial validation.
20:53:38 <m-greene-> the review link seems weird to me
20:53:48 <xgerman> yeah, it is weird
20:53:58 <m-greene-> they want to specify the SG as an lb-create param, and have us proxy that request to the neutron-port
20:54:05 <m-greene-> what if multiple LBs use the same subnet
20:54:20 <m-greene-> well, this is per-port
20:54:29 <johnsom> We currently manage a security group on each port we create/manage
20:54:29 <xgerman> yep
20:54:50 <xgerman> and mostly we open the port you put the listener on
20:54:51 <johnsom> It only allows the TCP ports that are defined for the listener
20:55:07 <johnsom> (s)
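[Editor's note: a sketch of the behaviour johnsom describes above: the managed security group opens only the TCP ports defined for the listeners. The helper name is illustrative, not Octavia's actual network driver code; the rule bodies follow neutron's security-group-rule API shape.]

```python
def listener_sg_rules(security_group_id, listener_ports):
    # One ingress TCP rule per listener port; nothing else is opened
    # on the VIP port's managed security group.
    rules = []
    for port in listener_ports:
        rules.append({'security_group_rule': {
            'security_group_id': security_group_id,
            'direction': 'ingress',
            'protocol': 'tcp',
            'port_range_min': port,
            'port_range_max': port,
        }})
    return rules

# Each body would be passed to neutron roughly as:
#   neutron.create_security_group_rule(rule)
```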
20:56:02 <m-greene-> we create other ports as part of our vendor implementation, not necessarily related to the LB service.  Attaching the SG might break things.
20:56:04 <johnsom> It is true, if we do the API approach, we could either use neutron security groups or use HAProxy ACLs.  HAProxy ACLs and vendor ACLs probably have more capability
20:56:38 <johnsom> This would only be on the VIP port we hand you.  I think....
20:56:39 <m-greene-> What problem does this solve that cannot be solved by attaching the SG youself to the port?
20:57:00 <johnsom> They don't own the VIP port, our service account does
20:57:03 <m-greene-> seems like a workflow change
20:57:07 <johnsom> They can't see the current security group/port
20:57:20 <xgerman> nor can they attach something to it
20:57:26 <m-greene-> ok
20:57:38 <nmagnezi> what I still don't understand is how will we validate that the sec group does not conflict with the listeners?
20:57:47 <johnsom> So, we are about out of time.  Maybe we should put this one on the next agenda to continue
20:58:06 <johnsom> Yeah, it would be a lot of ugly parsing
20:58:24 <xgerman> I think (1) feels the most feasible to me
20:58:39 <m-greene-> they ::= person invoking heat stack-create?
20:58:47 <johnsom> I kind of like 1 myself.  I think it gives us flexibility
20:58:48 <nmagnezi> yeah, also one can add additional listeners in the future that might conflict. so this will not be so simple to resolve
20:59:05 <xgerman> +1
20:59:07 <johnsom> m-greene- They == end user creating a load balancer
20:59:07 <m-greene-> so someone with elevated permissions within the tenant, not necessarily for the LB service
20:59:27 <xgerman> just LB service
20:59:39 <xgerman> if you are elevated you can slap stuff on our port today
20:59:55 <xgerman> (though we will remove it when we do changes)
21:00:10 <johnsom> End users, creating the load balancer, will not see the VIP neutron port if they do a "neutron port-list" as it is owned by the octavia/neutron service account
21:00:13 <m-greene-> that’s what I am trying to understand.  Can this be solved by the operator creating the right user permissions within keystone?
21:00:25 <xgerman> no
21:00:30 <m-greene-> o
21:00:31 <m-greene-> k
21:00:39 <johnsom> Ok.  Out of time.
21:00:42 <johnsom> Thanks folks!
21:00:46 <xgerman> Thanks
21:00:48 <johnsom> #endmeeting