20:00:10 <johnsom> #startmeeting Octavia
20:00:11 <openstack> Meeting started Wed Jan 10 20:00:10 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:15 <openstack> The meeting name has been set to 'octavia'
20:00:19 <cgoncalves> o/
20:00:22 <johnsom> Hi folks
20:00:31 <johnsom> Another fine week working on Octavia
20:01:10 <jniesz> hi
20:01:15 <johnsom> #topic Announcements
20:01:24 <johnsom> Feature freeze - Queens MS3 is coming January 22nd
20:01:29 <longstaff> hi
20:01:35 <johnsom> Just a reminder, 12 days to feature freeze
20:01:45 <johnsom> #link https://releases.openstack.org/queens/schedule.html
20:02:05 <johnsom> Rocky (Dublin) PTG planning etherpad
20:02:10 <johnsom> #link https://etherpad.openstack.org/p/octavia-ptg-rocky
20:02:26 <johnsom> I have set up an etherpad for the Rocky PTG coming up next month.
20:02:49 <johnsom> Please indicate if you will be attending or not and any topics you think we should discuss at the PTG.
20:03:06 <johnsom> I will then take those and try to make a rough schedule we can use in Dublin.
20:03:42 <johnsom> Also of note, zuul has been having a very rough week.
20:04:23 <johnsom> If you are seeing RETRY_LIMIT, POST_FAILURE, TIMEOUT, etc. errors, about all we can do is wait a while and try a "recheck".
20:04:59 <johnsom> It sounds like some of this is due to the hosting providers rolling out patches; some of it is other zuul issues.
20:05:10 <johnsom> I hope they can be resolved soon.
20:05:22 <johnsom> Any other announcements this week?
20:06:12 <johnsom> Oh, I should mention, the discussion about changing to one-year release cycles is on hold.  Rocky will be a "normal" release cycle.  Let me see if I can pull up a link to the email.
20:06:50 <johnsom> #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126080.html
20:07:38 <johnsom> #topic Brief progress reports / bugs needing review
20:08:18 <johnsom> Moving on: while checking on the status of an SDK release for our horizon work, I discovered that our functional test gates for the OpenStackSDK had been disabled.
20:09:13 <nmagnezi> o/
20:09:16 <johnsom> I have been fighting with zuul and the gate code to get those re-enabled and optimized to use our noop drivers (since it is just testing the API). I think I have that handled now, but that took much longer than expected.
20:09:41 <johnsom> I plan to get back to focus on the active/active work today.
20:10:33 <johnsom> I also did a big push to catch up on reviews after the break. The team was busy! Which is awesome.  I think we merged a bunch of that stuff already, with more in flight.
20:10:48 <johnsom> Any other progress updates?
20:11:44 <johnsom> Ok
20:12:04 <johnsom> #topic Octavia project quota consumption (nmagnezi)
20:12:15 <nmagnezi> hey
20:12:27 <johnsom> Nir added a topic to the agenda about quota usage.
20:12:35 <nmagnezi> yup
20:12:49 <johnsom> I put a short off-the-cuff response below it.
20:12:54 <nmagnezi> johnsom, and you provided feedback in the agenda
20:12:56 <johnsom> #link https://wiki.openstack.org/wiki/Octavia/Weekly_Meeting_Agenda#Meeting_2018-01-10
20:13:27 <johnsom> Basically Octavia resources (VMs, ports, security groups, etc.) all use quota from the service account defined in the octavia.conf.
20:13:42 <nmagnezi> exactly
20:13:52 <johnsom> nmagnezi Did that answer your question, or is there more we should discuss?
20:14:13 <johnsom> This is definitely a topic I want to add to the install guide once I can get that started.
20:14:23 <nmagnezi> so, best practices are not listed in our docs IIRC
20:14:24 <cgoncalves> johnsom: I take it that by account you mean project, not (keystone) user
20:15:26 <johnsom> #link https://github.com/openstack/octavia/blob/master/etc/octavia.conf#L300
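For context, the service account referenced here is defined in the [service_auth] section of octavia.conf (the sample config linked above). A minimal sketch, with purely illustrative endpoint and credential values:

    [service_auth]
    auth_url = http://keystone.example.com:5000/v3
    auth_type = password
    username = octavia
    password = <service-password>
    project_name = service
    user_domain_name = Default
    project_domain_name = Default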
20:15:37 <nmagnezi> johnsom, so in that account (project) you simply set quotas to -1? not sure I followed how RBAC comes into play here
20:15:38 <johnsom> cgoncalves It includes a user
20:16:52 <nmagnezi> johnsom, say I create an "Octavia" project and all amphoras live there, I'm still limited by the compute quotas for that project, right?
20:17:22 <cgoncalves> johnsom: right. so best practice should be a separate project (e.g. called octavia), not using the 'admin' or 'service' project
20:17:38 <johnsom> nmagnezi Correct on the quotas.  The RBAC part is that this service account requires some RBAC configuration in neutron.  It needs to have permission to plug ports/networks from tenants into its own amphorae. So, to set up a special service account for Octavia to use, some RBAC configuration is required in other services.  Similar in barbican, depending on how you deploy it.
20:17:59 <johnsom> cgoncalves Yes
20:18:24 <nmagnezi> johnsom, alright. and in that dedicated project I will just set quotas to -1 ?
20:18:36 <johnsom> nmagnezi Yes, you need to set those quotas appropriately for your deployment.  Many will use -1, some might want to set a limit.  Up to the operator
20:19:07 <nmagnezi> johnsom, thank you. i imagined so, but wanted to hear from you since you already run Octavia in prod :)
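As a concrete example of the -1 approach, the limits on a dedicated service project (here hypothetically named "octavia") could be lifted with the standard openstack CLI:

    openstack quota set --instances -1 --cores -1 --ram -1 octavia
    openstack quota set --ports -1 --secgroups -1 --secgroup-rules -1 octavia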
20:19:51 <cgoncalves> why aren't we creating a dedicated project in the devstack plugin then, if that's the recommended setting?
20:19:51 <johnsom> Any more discussion on quota before we move on to the next question?
20:20:45 <nmagnezi> johnsom, I personally have no additional questions. cgoncalves might
20:20:46 <johnsom> cgoncalves Mostly because it isn't truly needed, simplicity for testing, etc.  devstack != production configuration from any perspective
20:21:17 <nmagnezi> +1 i think we can / should simply document this
20:21:37 <cgoncalves> johnsom: understood. currently devstack defaults to 'admin' project
20:21:42 <cgoncalves> no further questions :)
20:21:53 <johnsom> devstack is really for testing and development, where speed is a benefit.  It's a fair argument to set it up that way, it just hasn't happened.
20:23:16 <johnsom> nmagnezi I am really itching to write that "step-by-step"/"The hard way to install" Octavia installation guide. Sadly I only have so much time and it's not at the top of my priority list right now.  I suspect after I get act/act done/mostly done, it will pop to the top of my list.
20:23:53 <johnsom> I just know it will take some time and we committed to making progress on Active/Active for Queens.
20:23:57 <nmagnezi> johnsom, once we are done with the TripleO stuff we can also assist with this
20:24:03 <johnsom> +1
20:24:07 <nmagnezi> we are getting close btw
20:24:21 <johnsom> #topic Amphora certificates
20:24:32 <johnsom> nmagnezi Good to hear
20:24:58 <johnsom> So, this question was about the certificates issued to the amphorae.  I think there is some confusion about how these work.
20:25:12 * nmagnezi listens
20:25:26 <nmagnezi> i actually read your reply, anything else I was wrong about?
20:25:40 <johnsom> When we create an amphora, each amphora gets issued a unique certificate whose common name (CN) is its amphora UUID.
20:26:31 <nmagnezi> ack. thank u for that correction
20:26:38 <johnsom> This is pushed to the amp, along with the CA cert.  Combined, those are used for two-way TLS/SSL authentication between the controller and the amphora. This is our secure command/control channel.
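As a quick way to see the per-amphora CN described above, one could inspect the certificate with openssl (the file path is hypothetical):

    openssl x509 -noout -subject -in /etc/octavia/certs/client.pem
    # subject=CN = <amphora UUID>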
20:27:49 <nmagnezi> but a question that still remains is: what happens if an amphora lives long enough for that cert to expire?
20:27:52 <johnsom> Since many companies have certificate rotation guidelines and limited lifetimes, we added a certificate rotation component to the housekeeping process.  It monitors the DB for amphorae with expiring certificates and issues renewed certificates to them.
20:28:33 <johnsom> #link https://github.com/openstack/octavia/blob/master/octavia/cmd/house_keeping.py#L69
20:28:37 <nmagnezi> aha. can you point me to that part? I was not aware of it
20:28:37 <johnsom> here and...
20:28:54 <nmagnezi> johnsom, thanks again :)
20:29:03 <johnsom> #link https://github.com/openstack/octavia/blob/master/octavia/controller/housekeeping/house_keeping.py#L105
20:29:36 <johnsom> It uses a normal taskflow flow, via the controller worker library, to rotate those certs.
20:29:58 <johnsom> So, that is how it's intended to work.
20:30:29 <nmagnezi> johnsom, so when an amp runs with an expiring cert it will simply stop working (health) and will get swapped with a new one?
20:31:07 <nmagnezi> by "swapped" I mean the currently running amp, not the cert. will it generate a new amp with a rotated cert?
20:31:09 <johnsom> nmagnezi, no, this is for command/control only.  The health heartbeats do not use this certificate.
20:31:38 <johnsom> The amps will continue to run, but the controllers will no longer be able to control them as the trust will be broken.
20:32:19 <nmagnezi> johnsom, so how can an operator manually swap a given amp? kill it via nova and let Octavia spawn a new one?
20:32:49 <johnsom> heartbeats use an HMAC shared key that is nonced with the amp ID.
20:32:50 <nmagnezi> if it's an HA config I guess the operator can perform a failover (and fail back)
20:33:03 <johnsom> sorry, nonced -> salted
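Conceptually, the heartbeat signing described here is a plain HMAC-SHA256 over the status payload using the shared key; something along these lines (placeholders throughout, not Octavia's actual wire format):

    echo -n "<status payload>" | openssl dgst -sha256 -hmac "<heartbeat_key>"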
20:33:37 <jniesz> shouldn't the cert swap happen prior to expiring?
20:33:56 <johnsom> Right, if, for some reason, the cert expires (which it shouldn't, given the housekeeping setup), the operator can either manually issue a cert or failover the amp
20:33:59 <johnsom> via the API
20:34:19 <johnsom> jniesz It does; it starts two weeks before expiration, per the default config setting.
20:34:28 <johnsom> It tries until it is successful
20:34:36 <jniesz> ok, that makes sense
20:35:04 <nmagnezi> johnsom, so thanks a lot for your answers. I will play with this for a bit, to learn it better.
20:35:17 <nmagnezi> johnsom, do we have anything about cert rotation in the docs?
20:35:26 <cgoncalves> well thought ;)
20:36:14 <johnsom> #link https://docs.openstack.org/octavia/latest/admin/guides/operator-maintenance.html#rotating-cryptographic-certificates
20:36:23 <johnsom> This section, but it could probably use some enhancement
20:37:30 <nmagnezi> when I spend time on this, I'll try to add information there
20:38:00 <nmagnezi> i have no further questions
20:38:01 <johnsom> #link https://docs.openstack.org/octavia/latest/configuration/configref.html#house_keeping.cert_interval
20:38:12 <johnsom> These are the certificate rotation config settings
20:39:02 <johnsom> interval is how often it looks for expiring certs, buffer is how far before expiration it should rotate them, threads is how many concurrent rotations the housekeeping process should be doing.
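Put together, those settings live in the [house_keeping] section of octavia.conf. A sketch with what should be the defaults (values in seconds; verify the exact names and defaults against the config reference linked above):

    [house_keeping]
    cert_interval = 3600          # how often to scan for expiring certs
    cert_expiry_buffer = 1209600  # rotate within 14 days of expiry
    cert_rotate_threads = 10      # concurrent rotation jobs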
20:39:23 <johnsom> nmagnezi Cool, thanks
20:39:43 <nmagnezi> johnsom, thank you :)
20:40:02 <johnsom> #topic Open Discussion
20:40:10 <johnsom> Any other topics for today?
20:40:32 <cgoncalves> johnsom: you skipped 'provider driver' spec on purpose?
20:40:46 <johnsom> Oops, nope, oversight
20:40:56 <johnsom> #link https://review.openstack.org/509957
20:41:18 <johnsom> There has been an update to the provider driver spec.  Please re-review the changes.
20:41:32 <johnsom> This is a priority spec to get merged, so all votes are very important.
20:41:33 <johnsom> Thanks
20:41:41 <nmagnezi> will do
20:42:07 <cgoncalves> I had a look at it today and it seems good to go. will vote
20:42:34 <johnsom> Thank you
20:43:34 <johnsom> No other topics for today?
20:44:19 <johnsom> Since we have a few RH folks here, jniesz had a question about secondary IPs on interfaces.
20:44:42 * nmagnezi listens
20:44:51 <johnsom> Is the "alias" config file still the only way to stack IPs on a single interface? Or is there a new/better way to do that?
20:44:55 <jniesz> we want to implement multiple IP addresses without alias
20:45:01 <jniesz> as that seems to be deprecated and the old method
20:45:12 <jniesz> so assign multiple IP addresses to single interface
20:45:31 <jniesz> https://www.irccloud.com/pastebin/VimANHhU/
20:45:38 <jniesz> ^looks like that is the new method
20:45:46 <johnsom> He referenced these paywalled articles:
20:45:49 <johnsom> #link https://access.redhat.com/solutions/8672
20:45:59 <johnsom> #link https://access.redhat.com/solutions/127223
20:46:08 * nmagnezi looks
20:46:08 <jniesz> #link https://wiki.debian.org/NetworkConfiguration#Multiple_IP_addresses_on_one_Interface
20:46:13 <jniesz> there is a free one for Ubuntu
20:46:16 <jniesz> : )
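For the Debian/Ubuntu method from the wiki link above, stacking addresses on one interface looks roughly like this in /etc/network/interfaces (addresses are examples):

    auto eth0
    iface eth0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        up ip addr add 192.0.2.11/32 dev eth0
        down ip addr del 192.0.2.11/32 dev eth0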
20:47:18 <johnsom> I am too rusty to help with this one and I don't have a RH account any longer.
20:47:19 <nmagnezi> one of those articles does not offer a solution
20:47:21 <nmagnezi> the second one
20:47:34 <nmagnezi> offers the same approach that's already implemented
20:47:58 <nmagnezi> meaning an alias ifcfg file, eth0:1..
20:48:19 <nmagnezi> jniesz, where did you see it got deprecated?
20:48:21 <johnsom> Ok, so the :# syntax via "alias" files is still the method.  I suspected as much.
20:49:29 <cgoncalves> actually you can use IPADDRn, yes
20:49:29 <jniesz> they do work differently, as it is not just syntax
20:49:38 <cgoncalves> IPADDR2=172.31.33.1
20:49:43 <jniesz> one creates a virtual sub interface
20:49:43 <cgoncalves> NETMASK2=255.255.255.0
20:50:04 <jniesz> seems cleaner to me to just add multiple IPs
20:50:09 <jniesz> like above with IPADDR2
20:50:14 <jniesz> IPADDR3
20:50:20 <cgoncalves> you would get a single ethX with multiple IP addressses
20:50:28 <jniesz> yea, which is what I want
20:50:41 <jniesz> to store all the /32 or /128 anycast VIPs
20:50:55 <jniesz> so the host will accept traffic for the VIP Ip
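Assembling cgoncalves' fragments, the single-interface ifcfg file being discussed would look roughly like this (addresses are examples):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10
    PREFIX=24
    IPADDR2=172.31.33.1
    NETMASK2=255.255.255.0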
20:50:58 <nmagnezi> I'm not sure what the benefit is here, but I'm not against this solution
20:51:25 <johnsom> It's for a dummy interface for the BGP based L3 Act/Act solution
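For reference, the dummy-interface pattern mentioned here can be created with iproute2; a sketch with an example anycast VIP:

    ip link add dummy0 type dummy
    ip link set dummy0 up
    ip addr add 203.0.113.5/32 dev dummy0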
20:52:01 <johnsom> Cool, so we have an answer for that.  Thanks!
20:52:12 <johnsom> Helping us help you...  grin
20:52:52 <nmagnezi> jniesz, if you send a patch, list me as a reviewer. I'll give it some cycles :)
20:52:53 <johnsom> Any other quick topics in the last few minutes?
20:53:15 <jniesz> i can send to you, thanks
20:54:08 <johnsom> Thanks folks!  Chat with you all next week if not before.
20:54:10 <johnsom> #endmeeting