20:00:49 <johnsom> #startmeeting octavia
20:00:50 <openstack> Meeting started Wed Mar  6 20:00:49 2019 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:53 <openstack> The meeting name has been set to 'octavia'
20:01:03 <cgoncalves> hi folks
20:01:05 <xgerman> o/
20:01:10 <nmagnezi> o/
20:01:29 <johnsom> Sorry folks, got distracted working on a dashboard patch
20:01:39 <xgerman> n.p,
20:01:51 <xgerman> exciting week
20:01:59 <johnsom> #topic Announcements
20:02:05 <xgerman> TC election results
20:02:20 <johnsom> Yep, the TC election is complete
20:03:20 <xgerman> https://twitter.com/SeanTMcGinnis/status/1103086613322641408
20:03:44 <johnsom> #link https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_6c71f84caff2b37c
20:03:53 <johnsom> Sorry, took me a minute to find the link
20:04:03 <xgerman> congrats to friends of LBaaS mnaser mugsie dewsday
20:04:05 <johnsom> Yeah, looks like a great TC
20:04:16 <xgerman> +1
20:05:00 <xgerman> big disturbance in the force
20:05:34 <johnsom> Also of note, the PTL election cycle for Train is now open.
20:05:36 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003505.html
20:05:47 <xgerman> 4 more years!!
20:05:59 <johnsom> Ha, well, we will get to that later in the agenda
20:06:46 <johnsom> The big item of note this week:
20:06:47 <johnsom> It is feature freeze week. No new features will be merged until the open of Train. (Tempest tests and documentation are exempt)
20:07:14 <johnsom> We will talk a bit later in the agenda about where we are and what items we can get into Stein.
20:07:16 <xgerman> https://techcrunch.com/2019/03/01/rackspace-announces-it-has-laid-off-200-workers/
20:07:29 <johnsom> Well, yeah, ^^^^ that
20:07:42 <johnsom> We can talk more about that later in the meeting
20:08:47 <johnsom> Also, there is a call to review the two potential community goals for Train:
20:08:47 <xgerman> +1
20:08:50 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003549.html
20:09:19 <johnsom> One is about cleaning up all of the resources a project owns across the whole cloud.
20:09:40 <johnsom> The other is about OpenStack client migration.
20:10:03 <johnsom> I suspect the client goal may be a no-op for us as we are already 100% OpenStack client
20:10:27 <johnsom> We might need to play whack a mole on any neutron-lbaas patches that show up, but otherwise should be good.
20:10:44 <johnsom> Any other announcements today?
20:11:14 <johnsom> #topic Brief progress reports / bugs needing review
20:11:46 <johnsom> I have been heads down, aside from some other distractions, getting patches ready and merged for the feature freeze.
20:12:02 <johnsom> TLS is all done and in. Some other enhancements are close.
20:12:14 <johnsom> I'm working on the dashboard patch for flavors now.
20:12:26 <cgoncalves> great work, you and Zhao!
20:12:30 <xgerman> "full stack engineer"
20:12:39 <xgerman> cgoncalves: +10
20:12:49 <nmagnezi> cgoncalves, totally agree
20:13:06 <johnsom> Most of the credit goes to Zhao.  It is awesome that he could get backend re-encrypt in as well. That was a special ask we had from the PTG.
20:13:45 <johnsom> It also sets us up to support TLS for non-HTTP protocols, so good stuff there.
20:13:56 <xgerman> +1
20:14:39 <johnsom> Bonus on the TLS stuff, you don't need to roll the amp image to get it. (Though other patches will likely make you want to.)
20:15:07 <xgerman> Sweet!
20:15:50 <johnsom> Once we declare feature freeze, I plan to pivot to looking at bugs. I think we have a few we should address in Stein.
20:16:16 <johnsom> RC1 will be the week of the 18th.
20:16:17 <xgerman> happy to help - now as I have some time on my hands
20:16:35 <johnsom> Any other updates from folks?
20:17:27 <nmagnezi> RHEL8 amphora support is on the way
20:17:37 <xgerman> ARM, too?
20:17:40 <nmagnezi> cgoncalves, made some additions to Octavia. I work on other places as well
20:17:41 <cgoncalves> well, yet to be seen
20:17:41 <colin-> o/
20:17:41 <johnsom> Cool. I assume that DIB patch did not land?
20:17:43 <colin-> sorry i'm late
20:17:48 <nmagnezi> Including dib, SELinux and other stuff
20:18:05 <nmagnezi> Sadly no but we will keep pushing it
20:18:32 <cgoncalves> no. my idea is to create a rhel8 element and have it merged in DIB
20:18:32 <johnsom> Ok. Just checking if we could land the 8 support patch
20:18:46 <cgoncalves> the rhel-minimal is for rhel 8 and uses the beta repos
20:18:49 <johnsom> Cool
20:18:56 <nmagnezi> yup
20:19:30 <nmagnezi> Also nice to see 'Encrypt certs and keys' in
20:19:36 <nmagnezi> Not a "feature" but still a good one
20:19:48 <xgerman> yeah, that seemed nice...
20:19:52 <johnsom> +1, yeah, that was important to get in.
20:20:14 <johnsom> It also will help when we do persistent flow storage for the sub-flow recovery work.
20:20:21 <cgoncalves> there has been some work in octavia-lib to sync data models that still exist in octavia. hopefully we can merge the open patch and make a release asap
20:20:39 <xgerman> we probably can :-)
20:20:40 <johnsom> Yeah, that is my plan. We can talk about the in/out here in a minute.
20:20:53 <johnsom> Ok, let's move on to the next topic.....
20:21:00 <nmagnezi> cgoncalves, +1. I'll look at it after the meeting
20:21:04 <cgoncalves> I also worked on a patch to fix creation of TLS-terminated listeners via horizon
20:21:06 <cgoncalves> #link https://review.openstack.org/#/c/640686/
20:21:38 <johnsom> Ah, yeah, thanks for that bug fix. I plan to test it right after I finish up the flavor dashboard patch review
20:21:49 <johnsom> #topic PTL role update/discussion
20:22:16 <johnsom> So, if you have not yet heard, Rackspace had a layoff and halted work on some projects.
20:22:52 <johnsom> This has impacted my employment, so I am now looking for a new job.
20:23:06 <xgerman> I am also impacted :-(
20:23:10 <johnsom> Others here have also been impacted
20:23:19 <colin-> sorry to hear that you guys, that's really unfortunate :(
20:23:37 <johnsom> My plan is to of course, look for a new job....
20:23:54 <johnsom> But I will also try to fulfill my PTL commitment through the end of my term.
20:24:24 <johnsom> This means I will continue to work on patches and reviews, lead meetings, and generally be the PTL while I hunt for what is next.
20:24:25 <cgoncalves> that is very generous of you. thank you very much, Michael
20:24:34 <colin-> hear hear
20:24:35 <xgerman> +100
20:25:23 <johnsom> At this time, since I do not know if I will find an OpenStack related job, I do not plan to run for PTL for the Train release.
20:25:36 <johnsom> It would be unfair to run and then need to resign right away.
20:25:39 <nmagnezi> Best of luck with finding your next jobs guys!
20:25:48 <xgerman> thanks
20:26:39 <johnsom> If magic happens and I have an offer by the PTL deadline, and the employer would like me to run, I would. However there are a lot of "if"s in that sentence.... lol
20:27:08 <johnsom> Any questions/comments?
20:27:57 <johnsom> Ok, thanks folks for understanding.
20:28:03 <nmagnezi> johnsom, first, thank you for keeping this up even now. Secondly, I'm really sad that this is the situation but fully get what you meant
20:28:06 <johnsom> #topic Stein feature freeze
20:28:43 <johnsom> So, looking at the priority list
20:28:48 <johnsom> #link https://etherpad.openstack.org/p/octavia-priority-reviews
20:29:13 <johnsom> I have put a blank line in the list where I think we are going to be able to get things in.
20:29:49 <johnsom> Stuff below the line seems like a long shot. They either fail tests or have other issues to address.
20:30:08 <johnsom> Any comments or concerns about that list?
20:30:34 <colin-> none here
20:30:37 <johnsom> I had really hoped we could get volume backed amps in, but in light of my reduced time, I wasn't able to get into it and fix the bugs.
20:31:22 <johnsom> Again, this is feature freeze. We can still add tempest tests and documentation. We can also continue to work on bug fixes.
20:31:47 <johnsom> The idea here is to stabilize and focus on bug fix/stability for the Stein release.
20:31:48 <xgerman> also FFE
20:31:59 <cgoncalves> I added https://review.openstack.org/#/c/640825/ (octavia-lib), a dependency of  https://review.openstack.org/613709
20:32:36 <johnsom> True, if there was a critical feature, we can go through the FFE process, but that has a pretty high bar and I don't see anything on the horizon that would need/meet that.
20:32:46 <johnsom> cgoncalves yes, good call
20:33:49 <johnsom> I have to say, congratulations team on a pretty nice release for Stein. Though I haven't polished the release notes yet, it's a pretty nice list of new capabilities:
20:33:52 <johnsom> #link https://docs.openstack.org/releasenotes/octavia/unreleased.html
20:34:41 <cgoncalves> a list that is only made possible when folks include release notes in their patches ;-)
20:35:12 <johnsom> Ha, true. I try to make sure patches have them.  They really are useful for folks.
20:35:39 <xgerman> +1
20:35:52 <johnsom> Ok, it sounds like we are all aligned on the Stein features list.
20:36:03 <johnsom> #topic Open Discussion
20:37:02 <johnsom> Other topics this week?
20:38:49 <cgoncalves> again, apologies for not having been successful thus far in fixing the rocky grenade job. there are 11 open backport requests
20:39:19 <colin-> have we ever considered the value in a "please update the whole fleet" style operation against our amphorae?
20:39:21 <cgoncalves> I gave it a try last week (or two). I couldn't reproduce the issue seen in upstream CI
20:39:24 <johnsom> No worries, sometimes these things are hard to find
20:39:26 <colin-> adding new HMs to my config over the weekend caused me to wonder
20:40:06 <colin-> what i'm imagining would likely leverage failovers to perform it gracefully or something
20:40:08 <johnsom> cgoncalves After we feature freeze today/tomorrow remind me and I will focus on that for a bit.
20:40:23 <cgoncalves> johnsom, thank you, much appreciated
20:40:27 <johnsom> colin- Short answer is yes. Though there is a longer answer
20:41:11 <johnsom> The use case of adding HMs is now, in Stein, an API call per amp; it pushes out a new config and the amp adopts it without requiring a failover.
20:41:39 <colin-> i saw that, that's going to be nice
20:41:50 <colin-> it also made me wonder if we ever considered having the amps periodically pull config from a centralized location?
20:42:00 <colin-> maybe when they go to do their heartbeats, for example
20:42:24 <johnsom> We have put "bulk" actions on the back burner and have gone down the path of enabling that via the API, leaving the exercise to the operator and their favorite automation tool. This is for a few reasons:
20:43:23 <johnsom> 1. Bulk operations can be dangerous. If the process hasn't been tested well (i.e. the operator loaded a bad custom image), we don't want to be responsible for runaway breakage.
20:43:42 <colin-> yeah that would be horrifying, good point
20:43:46 <johnsom> 2. We don't have mechanisms built in to "cancel/abort" these actions after they start.
20:44:00 <johnsom> 3. We don't have a good way to track/monitor success/failure/progress.
20:44:07 <colin-> yeah that is a fairly stateful job
20:44:08 <colin-> ok
20:44:18 <johnsom> 4. We haven't had anyone that had time to go after that problem....
20:44:38 <johnsom> Mostly #4
20:44:51 <xgerman> for me #1-3 are also problematic
20:45:02 <xgerman> but I can see us having a contrib folder with common "scripts"
20:45:46 <xgerman> I also know that having systems get their config from an API endpoint is quite common these days (envoy)
20:45:58 <johnsom> As for amps, pulling configs is something that breaks our trust model.  Amps are un-trusted in our model. We try to push from "more trusted" to "less trusted" and never rely on the amp being what it says it is.
20:46:14 <xgerman> yep, that was my next point how to make that secure
20:46:23 <colin-> cool, that's a really succinct way to phrase it ty
20:46:26 <johnsom> So, for example, we don't want a rogue amp asking for the certs and keys  from another tenant's load balancer.
20:47:41 <xgerman> but there are a ton of startups working in the zero trust identity space
20:47:43 <johnsom> So, maybe that would be something to consider in the future, but right now we are in keep it simple mode.
20:48:20 <colin-> understood
20:48:42 <johnsom> We have discussed using something like etcd, consul, etc. but there were a bunch of trade offs and extra deployment overhead.
20:49:01 <xgerman> yeah, and you still have the trust problem
20:49:16 <johnsom> Again, if someone has a use case, need, and willing to work on it, please feel free to post a spec.
20:49:51 <xgerman> yep, happy to review it. There is exciting stuff out there and our CA inside the worker is just a baby step ;-)
20:50:17 <johnsom> Right, I agree. The certs model we put in place is a good start for this.
20:50:48 <colin-> >
20:50:48 <colin-> Adds an administrator API to access per-amphora statistics.
20:50:50 <colin-> connection stats?
20:50:57 <colin-> like data plane
20:51:10 <johnsom> This is actually one of the problems in Trove. One DB instance can shutdown another by sending commands back up to the control plane.  (Maybe they have since fixed this)
20:51:36 <johnsom> colin- No, it exposes the listener traffic stats "per-amphora".
20:52:02 <colin-> ah ok
20:52:05 <johnsom> Just another way to query the data, which happens to help us with testing.
20:52:49 <johnsom> The driver for that was testing active/standby VRRP failover. However cgoncalves has posted an alternative option too.
20:53:14 <colin-> thanks for humoring my wandering attention :)
20:53:32 <johnsom> Since VRRP failover occurs autonomously inside the amphorae it's hard to track which amp is passing traffic at any one time.
20:53:52 <johnsom> Sure, no problem.
20:54:02 <johnsom> Any other topics in the last few minutes today?
20:54:02 <cgoncalves> #link https://review.openstack.org/#/c/584681/
20:54:07 <cgoncalves> #link https://review.openstack.org/#/c/637073/
20:54:19 <xgerman> yeah, we have a lot of ideas but no time...
20:54:53 <johnsom> Yeah, it was a lot easier to do things when the active team was 20+ people
20:55:38 <xgerman> :-)
20:56:04 <johnsom> Ok then. Thanks again for all of the hard work for Stein, we are in the home stretch.
20:56:14 <johnsom> Have a great week!
20:56:22 <nmagnezi> o/
20:56:23 <cgoncalves> let's do it!! :D
20:56:24 <xgerman> o/
20:56:36 <johnsom> #endmeeting