16:00:07 #startmeeting Octavia
16:00:08 Meeting started Wed May 8 16:00:07 2019 UTC and is due to finish in 60 minutes. The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 The meeting name has been set to 'octavia'
16:00:18 o/
16:00:20 hello
16:00:30 hey!
16:00:31 Ha, I forgot. That is a nice feeling
16:00:53 welcome back y'all
16:01:17 yeah i'm about to pass out though so uhh... hopefully i make it through the whole thing
16:01:29 FYI, I have an agenda item today.
16:01:36 k
16:01:38 #topic Announcements
16:02:09 I don't have anything today really...
16:02:32 But, the PTG was good, lots of stuff covered
16:02:34 There is an e-mail thread about some stable branch requirements version changes coming
16:02:49 johnsom, can you update your stresstest tool :p
16:02:58 good morning
16:03:02 or evening
16:03:13 eandersson Almost done
16:03:39 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005956.html
16:03:45 hello
16:04:12 That might break things, so if you see requests/urllib issues on the stable branch you know why.
16:04:43 Otherwise, yes, the PTG was great. If you couldn't join us, check out the notes on the etherpad.
16:04:56 #link https://etherpad.openstack.org/p/octavia-train-ptg
16:05:01 any other announcements?
16:05:18 I also spent time yesterday fighting storyboard to open RFEs and update stories based on our discussions.
16:06:00 That is all I can think of.
16:06:06 #topic Brief progress reports / bugs needing review
16:06:12 well, some of that is probably this ^^
16:07:11 True
16:07:23 I have mostly been catching up since the Summit/PTG.
16:07:43 I made some progress on multi-vip, but ran into a schema issue i was hoping i'd be able to avoid, so will need to do a lot more work on that... putting it on hold for a week or so to get some internal stuff done to unblock some other folks
16:07:46 I opened the stories (storyboard gave 500s on 4 out of 5), etc.
16:08:12 I do have one change I'm about to push up, which is to make the amp/worker communication use TLSv1.2 instead of SSLv2/3
16:08:35 Backward compat I assume?
16:08:41 because SSLv2/3 is really broken, and it was brought to my attention by our internal security team that we're still using that
16:08:52 it's a change on the amp side
16:08:58 so it'll just work
16:09:26 nice, glad to hear that
16:09:27 Hmmm, I would have thought it would be on the controller side to be compatible....
16:09:28 maybe can do it on both sides, and then yes
16:09:34 thanks for doing that
16:09:52 I split the PoC for jobboard, and looked at transitioning flows' DB objects to dicts a bit more
16:09:56 because they all SUPPORT it, just need to force it
16:09:56 I.e. ask for TLSv1.2 on the initiation
16:10:01 yes
16:10:20 will submit that patch later today
16:10:30 ataraday_ Awesome, you probably saw that I broke out the tasks as we discussed at the PTG.
16:10:53 we've also seen one customer disabling SSLv3 and TLS 10 ciphers in listeners
16:11:03 johnsom, yes, thanks! I already linked the changes to the proper tasks
16:11:04 ataraday_ Do you need me to do an example patch of what I'm talking about with the provider dictionaries?
16:11:25 sorry, I meant to write TLS 1.0
16:11:48 cgoncalves Yeah, letting users pick ciphers and protocols was on my "wish I had time to implement" list. I have a plan, but no time right now.
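A minimal sketch of what "ask for TLSv1.2 on the initiation" could look like on the amphora agent side, assuming the agent builds a standard Python ssl.SSLContext for its listening socket (the context wiring and cert path below are illustrative assumptions, not the actual patch that was discussed):

    import ssl

    # Illustrative sketch only: refuse SSLv3 and TLS 1.0/1.1 so the
    # controller <-> amphora connection can only negotiate TLSv1.2 or newer.
    # The cert path is a placeholder, not Octavia's real default.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.options |= ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
    context.load_cert_chain(certfile="/etc/octavia/certs/server.pem")

Doing the equivalent with an ssl.PROTOCOL_TLS_CLIENT context on the controller side would cover both directions, matching the "maybe can do it on both sides" comment above.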
16:12:59 johnsom, if you have time for this, this would be nice
16:13:02 I have some internal paperwork to do this week, but hope to get back to fixing the unsets "soonish"
16:13:33 ataraday_ Ok, sounds like it is not a blocker for you, so I will probably work on that early next week.
16:14:13 johnsom, I don't see ver in Rocky https://github.com/openstack/octavia/blob/stable/rocky/octavia/amphorae/backends/health_daemon/health_daemon.py#L118
16:14:30 johnsom, yes, I will be off for Thursday and Friday anyway :)
16:14:50 eandersson I think I just messaged you that.
16:15:07 ataraday_, I very, very quickly checked your patches. a question I have is if you want zookeeper to be the default or redis. I recall we discussed going with redis as default and adding support for it in our devstack plugin
16:15:12 I just moved from home to the office so I might have missed the message.
16:15:16 *that
16:15:23 eandersson https://github.com/openstack/octavia/commit/2170cc6c459b7ae8461a09a0b6fd754ecef9654e
16:15:43 Can we back port that one to Rocky?
16:15:47 Yeah, we decided to do Redis in the interim
16:15:54 cgoncalves, we agreed on redis
16:16:21 eandersson No, I don't think so, it changes the protocol so.... This is why there is the backward compatibility on the controller side.
16:16:31 ataraday_, right. I asked because I saw zookeeper as default in your patch
16:17:05 I see - we need to update to Stein because the performance hit is pretty large there.
16:17:41 yeah, I used it to try things out, now will switch to redis
16:17:53 eandersson Yeah, is it enough to improve that backwards compat code in Rocky? What I saw you had was .2 seconds per heartbeat instead of .006
16:18:02 ok, just wanted to make sure we were on the same page. thanks!
16:18:21 eandersson I will make the stress tool configurable for versioned or not.
16:18:52 i advised that we could probably intelligently skip that code path in most cases
16:18:58 which i think would be fine?
16:19:26 though i think you also proposed a patch to improve the speed by skipping the db refresh, not sure if that is a problem or not
16:19:30 Yeah, it gets skipped in Stein forward. It was just Rocky that had UDP and amphora agents that didn't send the version.
16:19:38 right but even in rocky
16:19:51 you can skip that code path if the db_lb has no UDP listeners on it
16:19:55 right?
16:20:20 What little I recall, it does, but maybe not?...
16:20:29 didn't look like it
16:20:39 and the code has the right info already
16:20:56 would just be a quick loop & check which is super fast compared to the guaranteed DB pull
16:21:16 certainly sounds preferable
16:21:38 ok well we're beyond quick updates, should we make this a topic>
16:21:39 ?
16:21:41 * xgerman lurking
16:22:36 It's up to you guys. I can probably keep a patch internally or maybe look at updating to Stein.
16:22:40 or should we move on, I think this just needs to be fixed... maybe I can do it today if eandersson doesn't do it in the next 10 minutes :D
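For reference, a rough sketch of the kind of cheap gate being discussed, assuming the Rocky health manager could learn the listener protocols without the full lb_repo.get() round trip (whether that info is already available is exactly the open question in the discussion, so the names and message layout here are assumptions, not the actual code):

    # Hypothetical sketch of the proposed Rocky-only optimization: only fall
    # back to the expensive DB pull when an unversioned heartbeat might
    # actually involve UDP listeners. Field names are assumptions.
    def needs_udp_compat_lookup(health, listener_protocols):
        if health.get('ver', 0) >= 1:
            # Stein+ amphorae include a version field, so the backward
            # compatibility path can be skipped entirely.
            return False
        # Quick in-memory loop & check, far cheaper than the guaranteed
        # lb_repo.get() round trip on every heartbeat.
        return any(proto == 'UDP' for proto in listener_protocols)

Either something like this, or updating the query to include the protocol as proposed below, would keep the common non-UDP case off the expensive code path in Rocky.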
16:22:57 rm_work, https://review.opendev.org/#/c/657756/
16:23:08 yeah i saw you did that one, but not sure if that causes issues or not
16:23:12 was hoping johnsom would review
16:23:15 So it is a loop that skips non-UDP listeners
16:23:20 which he did
16:24:15 johnsom: yeah but the issue is that *triggering the loop* is the expensive part
16:24:31 the lb_repo.get
16:24:47 Yeah, it's probably the DB round trips for the listeners
16:24:51 yes
16:24:59 so could skip before that, or do what eandersson did
16:25:02 But, again, I'm just curious if .2 seconds is a problem....
16:25:16 well it's an order of magnitude time increase
16:25:17 so
16:25:23 that's not ideal
16:25:48 0.006 -> 0.2? two orders? lol
16:26:03 and that's exacerbated by load
16:26:05 and more DB load
16:26:15 so yes i would say it could definitely be a problem
16:26:15 Yeah, otherwise, we need to do what was proposed, update the query to include the protocol. (please only in Rocky as it's a waste on Stein forward).
16:27:53 ah it's the special get, so i guess not
16:29:10 what was your topic johnsom ?
16:30:02 User survey
16:30:19 #topic User survey
16:30:56 So, for yet another year the foundation ignored us and did not ask us if we wanted a question on the user survey.
16:32:03 They only went to the "represented" projects (grumble, the privileged ones evidently).
16:32:08 don't we know multiple people on the TC? do they not have any say in this either?
16:32:13 So I raised the issue on the mailing list
16:32:19 saw that heh
16:32:23 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005959.html
16:32:35 Yeah, this is good feedback for mugsie etc.
16:32:57 This is literally the fourth year they have ignored us.
16:33:08 Sorry, fourth cycle, not year.
16:33:51 and jroll
16:33:51 Anyway, even though the survey has already been advertised and we likely will not get many responses for 2019, my question for the team is what do we want to ask?
16:34:32 johnsom: unfortunately we are not in control of that survey - it is the UC
16:34:33 The question we had wanted to ask prior to the neutron-lbaas retirement this year was "What load balancing providers are you using?"
16:34:53 two questions? 1) most wanted new features, 2) most wanted provider drivers
16:34:55 mugsie It's all foundation folks from the e-mail chain.
16:35:17 damn
16:35:41 cgoncalves +1
16:36:08 Anyone else have input?
16:36:31 personally i'd like to have a better sense of deployed scale
16:37:05 colin- You can probably already get that from the 2018 survey. We are at least now listed as a project.
16:37:11 it's difficult to ascertain when we discuss complex topics if we're speaking figuratively/academically or from experience, and getting a better picture of how broadly deployed octavia is (vexxhost for example?) would help me understand how unique our problems are
16:37:22 just trying to offer some feedback
16:37:26 #link https://www.openstack.org/analytics
16:37:48 Yep, I understand
16:37:56 so a question that informs that would help me. will review the analytics, thanks
16:38:28 From a quick glance at the 2018 survey, 13% of deployments have octavia in production.
16:38:54 I would have to spend some time with the tool to map that to compute cores for example.
16:39:55 I am in favor of the two questions cgoncalves asked.
16:40:49 yep
16:40:55 rm_work do you want to vote on this or ???
16:41:15 do we need to?
16:41:24 Probably not.
16:41:27 I haven't seen any disagreement
16:41:34 rm_work Do you want to reply to the mailing list with the questions?
16:41:47 not particularly? :D
16:42:03 do I have a choice? ^_^
16:42:09 Lol
16:42:29 I can do it. I just thought it might be nice to have another voice from Octavia on the chain
16:42:42 you seem to be a good ML Liaison
16:42:45 perhaps cgoncalves could do so, as he posed the questions?
16:43:03 (sorry carlos!)
16:43:11 I don't mind :)
16:43:14 * johnsom notes it is interesting, for queens deployments we jump to 29% in prod
16:43:38 cgoncalves Thanks!
16:43:51 johnsom: do i understand right that one copy of each octavia service and one VIP/listener/member would show up the same on that analytics page as a deployment 1000x that size?
16:44:18 Correct
16:44:21 thx
16:44:29 It's per-cloud, not instance
16:45:17 ok so
16:45:44 #action cgoncalves reply to ML user survey email with octavia questions
16:46:21 can probably move on
16:46:37 I didn't have anything else, and wiki didn't have anything, so
16:46:46 #topic Open Discussion
16:47:06 I don't think I have any other topics for today.
16:47:25 Too much engineering via google docs for me this week
16:47:42 no fun
16:49:56 yeah, me either
16:52:29 1. Which OpenStack load balancing (Octavia) provider drivers would you like to see supported?
16:52:35 2. Which new features would you like to see supported in OpenStack load balancing (Octavia)?
16:52:55 +1/-1, comments, suggestions, ... please
16:53:04 That looks good to me, +1
16:53:25 yeah just need to decide the options
16:53:37 right? cause... doesn't it have to be multi-choice?
16:54:32 good point. johnsom, do you remember how it was for the 2017 survey?
16:54:52 These are fill-in questions I think. Let's see what they say.
16:55:02 ok
16:55:14 ah
16:58:04 I see in their e-mail about the 2019 survey opening a question from ironic: "What would you find most useful if it was part of Ironic?"
16:58:10 So, open ended should be ok
17:01:35 rm_work Are we done with the meeting?
17:01:45 AH yeah, we're at time
17:01:47 #endmeeting