20:00:49 #startmeeting octavia
20:00:50 Meeting started Wed Mar 6 20:00:49 2019 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:53 The meeting name has been set to 'octavia'
20:01:03 hi folks
20:01:05 o/
20:01:10 o/
20:01:29 Sorry folks, got distracted working on a dashboard patch
20:01:39 n.p.
20:01:51 exciting week
20:01:59 #topic Announcements
20:02:05 TC election results
20:02:20 Yep, the TC election is complete
20:03:20 https://twitter.com/SeanTMcGinnis/status/1103086613322641408
20:03:44 #link https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_6c71f84caff2b37c
20:03:53 Sorry, took me a minute to find the link
20:04:03 congrats to friends of LBaaS mnaser mugsie dewsday
20:04:05 Yeah, looks like a great TC
20:04:16 +1
20:05:00 big disturbance in the force
20:05:34 Also of note, the PTL election cycle for Train is now open.
20:05:36 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003505.html
20:05:47 4 more years!!
20:05:59 Ha, well, we will get to that later in the agenda
20:06:46 The big item of note this week:
20:06:47 It is feature freeze week. No new features will be merged until the opening of Train. (Tempest tests and documentation are exempt)
20:07:14 We will talk a bit later in the agenda about where we are and what items we can get into Stein.
20:07:16 https://techcrunch.com/2019/03/01/rackspace-announces-it-has-laid-off-200-workers/
20:07:29 Well, yeah, ^^^^ that
20:07:42 We can talk more about that later in the meeting
20:08:47 Also, there is a call to review the two potential community goals for Train:
20:08:47 +1
20:08:50 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003549.html
20:09:19 One is about cleaning up all of the resources a project owns across the whole cloud.
20:09:40 The other is about OpenStack client migration.
20:10:03 I suspect the client goal may be a no-op for us as we are already 100% OpenStack client
20:10:27 We might need to play whack-a-mole on any neutron-lbaas patches that show up, but otherwise we should be good.
20:10:44 Any other announcements today?
20:11:14 #topic Brief progress reports / bugs needing review
20:11:46 I have been heads down, aside from some other distractions, getting patches ready and merged for the feature freeze.
20:12:02 TLS is all done and in. Some other enhancements are close.
20:12:14 I'm working on the dashboard patch for flavors now.
20:12:26 great work, you and Zhao!
20:12:30 "full stack engineer"
20:12:39 cgoncalves: +10
20:12:49 cgoncalves, totally agree
20:13:06 Most of the credit goes to Zhao. It is awesome that he could get backend re-encryption in as well. That was a special ask we had from the PTG.
20:13:45 It also sets us up to support TLS for non-HTTP protocols, so good stuff there.
20:13:56 +1
20:14:39 Bonus on the TLS stuff: you don't need to roll the amp image to get it. (Though other patches will likely make you want to.)
20:15:07 Sweet!
20:15:50 Once we declare feature freeze, I plan to pivot to looking at bugs. I think we have a few we should address in Stein.
20:16:16 RC1 will be the week of the 18th.
20:16:17 happy to help - now that I have some time on my hands
20:16:35 Any other updates from folks?
20:17:27 RHEL8 amphora support is on the way
20:17:37 ARM, too?
20:17:40 cgoncalves, made some additions to Octavia. I work on other places as well
20:17:41 well, yet to be seen
20:17:41 o/
20:17:41 Cool. I assume that DIB patch did not land?
20:17:43 sorry i'm late
20:17:48 Including dib, SELinux and other stuff
20:18:05 Sadly no, but we will keep pushing it
20:18:32 no. my idea is to create a rhel8 element and have it merged in DIB
20:18:32 Ok. Just checking if we could land the 8 support patch
20:18:46 the rhel-minimal element is for rhel 8 and uses the beta repos
20:18:49 Cool
20:18:56 yup
20:19:30 Also nice to see 'Encrypt certs and keys' in
20:19:36 Not a "feature" but still a good one
20:19:48 yeah, that seemed nice...
20:19:52 +1, yeah, that was important to get in.
20:20:14 It also will help when we do persistent flow storage for the sub-flow recovery work.
20:20:21 there has been some work in octavia-lib to sync data models that still exist in octavia. hopefully we can merge the open patch and make a release asap
20:20:39 we probably can :-)
20:20:40 Yeah, that is my plan. We can talk about what is in/out here in a minute.
20:20:53 Ok, let's move on to the next topic.....
20:21:00 cgoncalves, +1. I'll look at it after the meeting
20:21:04 I also worked on a patch to fix creation of TLS-terminated listeners via horizon
20:21:06 #link https://review.openstack.org/#/c/640686/
20:21:38 Ah, yeah, thanks for that bug fix. I plan to test it right after I finish up the flavor dashboard patch review
20:21:49 #topic PTL role update/discussion
20:22:16 So, if you have not yet heard, Rackspace had a layoff and halted work on some projects.
20:22:52 This has impacted my employment, so I am now looking for a new job.
20:23:06 I am also impacted :-(
20:23:10 Others here have also been impacted
20:23:19 sorry to hear that you guys, that's really unfortunate :(
20:23:37 My plan is, of course, to look for a new job....
20:23:54 But I will also try to fulfill my PTL commitment through the end of my term.
20:24:24 This means I will continue to work on patches and reviews, lead meetings, and generally be the PTL while I hunt for what is next.
20:24:25 that is very generous of you. thank you very much, Michael
20:24:34 hear hear
20:24:35 +100
20:25:23 At this time, since I do not know if I will find an OpenStack related job, I do not plan to run for PTL for the Train release.
20:25:36 It would be unfair to run and then need to resign right away.
20:25:39 Best of luck with finding your next jobs, guys!
20:25:48 thanks
20:26:39 If magic happens and I have an offer by the PTL deadline, and the employer would like me to run, I would. However, there are a lot of "if"s in that sentence.... lol
20:27:08 Any questions/comments?
20:27:57 Ok, thanks folks for understanding.
20:28:03 johnsom, first, thank you for keeping this up even now. Secondly, I'm really sad that this is the situation but I fully get what you mean
20:28:06 #topic Stein feature freeze
20:28:43 So, looking at the priority list
20:28:48 #link https://etherpad.openstack.org/p/octavia-priority-reviews
20:29:13 I have put a blank line in the list where I think we are going to be able to get things in.
20:29:49 Stuff below the line seems like a long shot. Those patches either fail tests or have other issues to address.
20:30:08 Any comments or concerns about that list?
20:30:34 none here
20:30:37 I had really hoped we could get volume-backed amps in, but in light of my reduced time, I wasn't able to get into it and fix the bugs.
20:31:22 Again, this is feature freeze. We can still add tempest tests and documentation. We can also continue to work on bug fixes.
20:31:47 The idea here is to stabilize and focus on bug fixes/stability for the Stein release.
20:31:48 also FFE
20:31:59 I added https://review.openstack.org/#/c/640825/ (octavia-lib), a dependency of https://review.openstack.org/613709
20:32:36 True, if there were a critical feature, we could go through the FFE process, but that has a pretty high bar and I don't see anything on the horizon that would need/meet that.
20:32:46 cgoncalves yes, good call
20:33:49 I have to say, congratulations team on a pretty nice release for Stein. Though I haven't polished the release notes yet, it's a pretty nice list of new capabilities:
20:33:52 #link https://docs.openstack.org/releasenotes/octavia/unreleased.html
20:34:41 a list that is only made possible when folks include release notes in their patches ;-)
20:35:12 Ha, true. I try to make sure patches have them. They really are useful for folks.
20:35:39 +1
20:35:52 Ok, it sounds like we are all aligned on the Stein features list.
20:36:03 #topic Open Discussion
20:37:02 Other topics this week?
20:38:49 again, apologies for not having been successful thus far in fixing the rocky grenade job. there are 11 open backport requests
20:39:19 have we ever considered the value in a "please update the whole fleet" style operation against our amphorae?
20:39:21 I gave it a try last week (or two). I couldn't reproduce the issue seen in upstream CI
20:39:24 No worries, sometimes these things are hard to find
20:39:26 adding new HMs to my config over the weekend caused me to wonder
20:40:06 what i'm imagining would likely leverage failovers to perform it gracefully or something
20:40:08 cgoncalves After we feature freeze today/tomorrow remind me and I will focus on that for a bit.
20:40:23 johnsom, thank you, much appreciated
20:40:27 colin- Short answer is yes. Though there is a longer answer
20:41:11 The use case of adding HMs will be an API call per amp now in Stein, which pushes out a new config that the amp adopts without requiring a failover.
20:41:39 i saw that, that's going to be nice
20:41:50 it also made me wonder if we ever considered having the amps periodically pull config from a centralized location?
20:42:00 maybe when they go to do their heartbeats, for example
20:42:24 We have put "bulk" actions on the back burner and have gone down the path of enabling that via the API and leaving the exercise to the operator and their favorite automation tool. This is for a few reasons:
20:43:23 1. Bulk operations can be dangerous. If the process hasn't been tested well (e.g. the operator loaded a bad custom image), we don't want to be responsible for runaway breakage.
20:43:42 yeah that would be horrifying, good point
20:43:46 2. We don't have mechanisms built in to "cancel/abort" these actions after they start.
20:44:00 3. We don't have a good way to track/monitor success/failure/progress.
20:44:07 yeah that is a fairly stateful job
20:44:08 ok
20:44:18 4. We haven't had anyone that had time to go after that problem....
20:44:38 Mostly #4
20:44:51 for me #1-3 are also problematic
20:45:02 but I can see us having a contrib folder with common "scripts"
20:45:46 I also know that having systems get their config from an API endpoint is quite modern these days (envoy)
20:45:58 As for amps, pulling configs is something that breaks our trust model. Amps are untrusted in our model. We try to push from "more trusted" to "less trusted" and never rely on the amp being what it says it is.
20:46:14 yep, that was my next point: how to make that secure
20:46:23 cool, that's a really succinct way to phrase it ty
20:46:26 So, for example, we don't want a rogue amp asking for the certs and keys from another tenant's load balancer.
20:47:41 but there are a ton of startups working in the zero trust identity space
20:47:43 So, maybe that would be something to consider in the future, but right now we are in keep-it-simple mode.
20:48:20 understood
20:48:42 We have discussed using something like etcd, consul, etc., but there were a bunch of trade-offs and extra deployment overhead.
20:49:01 yeah, and you still have the trust problem
20:49:16 Again, if someone has a use case, a need, and is willing to work on it, please feel free to post a spec.
20:49:51 yep, happy to review it. There is exciting stuff out there and our CA inside the worker is just a baby step ;-)
20:50:17 Right, I agree. The certs model we put in place is a good start for this.
20:50:48 > Adds an administrator API to access per-amphora statistics.
20:50:50 connection stats?
20:50:57 like data plane
20:51:10 This is actually one of the problems in Trove. One DB instance can shut down another by sending commands back up to the control plane. (Maybe they have since fixed this)
20:51:36 colin- No, it exposes the listener traffic stats "per-amphora".
20:52:02 ah ok
20:52:05 Just another way to query the data, which happens to help us with testing.
20:52:49 The driver for that was testing active/standby VRRP failover. However, cgoncalves has posted an alternative option too.
20:53:14 thanks for humoring my wandering attention :)
20:53:32 Since VRRP failover occurs autonomously inside the amphorae, it's hard to track which amp is passing traffic at any one time.
20:53:52 Sure, no problem.
20:54:02 Any other topics in the last few minutes today?
20:54:02 #link https://review.openstack.org/#/c/584681/
20:54:07 #link https://review.openstack.org/#/c/637073/
20:54:19 yeah, we have a lot of ideas but no time...
20:54:53 Yeah, it was a lot easier to do things when the active team was 20+ people
20:55:38 :-)
20:56:04 Ok then. Thanks again for all of the hard work for Stein, we are in the home stretch.
20:56:14 Have a great week!
20:56:22 o/
20:56:23 let's do it!! :D
20:56:24 o/
20:56:36 #endmeeting