16:00:07 <rm_work> #startmeeting Octavia
16:00:08 <openstack> Meeting started Wed May  8 16:00:07 2019 UTC and is due to finish in 60 minutes.  The chair is rm_work. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 <openstack> The meeting name has been set to 'octavia'
16:00:18 <johnsom> o/
16:00:20 <cgoncalves> hello
16:00:30 <rm_work> hey!
16:00:31 <johnsom> Ha, I forgot.  That is a nice feeling
16:00:53 <rm_work> welcome back y'all
16:01:17 <rm_work> yeah i'm about to pass out though so uhh... hopefully i make it through the whole thing
16:01:29 <johnsom> FYI, I have an agenda item today.
16:01:36 <rm_work> k
16:01:38 <rm_work> #topic Announcements
16:02:09 <rm_work> I don't have anything today really...
16:02:32 <rm_work> But, the PTG was good, lots of stuff covered
16:02:34 <johnsom> There is an e-mail thread about some stable branch requirements version changes coming
16:02:49 <eandersson> johnsom, can you update your stresstest tool :p
16:02:58 <colin-> good morning
16:03:02 <colin-> or evening
16:03:13 <johnsom> eandersson Almost done
16:03:39 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005956.html
16:03:45 <ataraday_> hello
16:04:12 <johnsom> That might break things, so if you see requests/urllib issues on the stable branch you know why.
16:04:43 <johnsom> Otherwise, yes, the PTG was great. If you couldn't join us, check out the notes on the etherpad.
16:04:56 <johnsom> #link https://etherpad.openstack.org/p/octavia-train-ptg
16:05:01 <rm_work> any other announcements?
16:05:18 <johnsom> I also spent time yesterday fighting storyboard to open RFE's and update stories based on our discussions.
16:06:00 <johnsom> That is all I can think of.
16:06:06 <rm_work> #topic Brief progress reports / bugs needing review
16:06:12 <rm_work> well, some of that is probably this ^^
16:07:11 <johnsom> True
16:07:23 <johnsom> I have mostly been catching up since the Summit/PTG.
16:07:43 <rm_work> I made some progress on multi-vip, but ran into a schema issue i was hoping i'd be able to avoid, so will need to do a lot more work on that... putting it on hold for a week or so to get some internal stuff done to unblock some other folks
16:07:46 <johnsom> I opened the stories (4 out of 5 storyboard gave 500's), etc.
16:08:12 <rm_work> I do have one change I'm about to push up, which is to make the amp/worker communication use TLSv1.2 instead of SSLv2/3
16:08:35 <johnsom> Backward compat I assume?
16:08:41 <rm_work> because SSLv2/3 is really broken, and our internal security team pointed out that we're still using it
16:08:52 <rm_work> it's a change on the amp side
16:08:58 <rm_work> so it'll just work
16:09:26 <colin-> nice, glad to hear that
16:09:27 <johnsom> Hmmm, I would have thought it would be on the controller side to be compatible....
16:09:28 <rm_work> maybe can do it on both sides, and then yes
16:09:34 <colin-> thanks for doing that
16:09:52 <ataraday_> I split the PoC for jobboard, and looked a bit more at transitioning the flows' DB objects to dicts
16:09:56 <rm_work> because they all SUPPORT it, just need to force it
16:09:56 <johnsom> I.e. ask for TLSv1.2 on the initiation
16:10:01 <rm_work> yes
16:10:20 <rm_work> will submit that patch today later
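A minimal sketch of the kind of change being discussed, assuming the Python ssl module is used to build the TLS context on the amphora agent side (illustrative only, not the actual Octavia patch):

    import ssl

    # Only offer TLS 1.2; refuse the broken SSLv2/SSLv3 protocols outright.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3  # belt and braces on older OpenSSL builds

Since the controller side already supports TLS 1.2, forcing it on the amphora side is enough for both ends to negotiate it, which is why the change can "just work" without a matching controller patch.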
16:10:30 <johnsom> ataraday_ Awesome, you probably saw that I broke out the tasks as we discussed at the PTG.
16:10:53 <cgoncalves> we've also seen one customer disabling SSLv3 and TLS 10 ciphers in listeners
16:11:03 <ataraday_> johnsom, yes, thanks! I already linked the changes to proper tasks
16:11:04 <johnsom> ataraday_ Do you need me to do an example patch of what I'm talking about with the provider dictionaries?
16:11:25 <cgoncalves> sorry, I meant to write TLS 1.0
16:11:48 <johnsom> cgoncalves Yeah, letting users pick ciphers and protocols was on my "wish I had time to implement" list.  I have a plan, but no time right now.
16:12:59 <ataraday_> johnsom, if you have time for this this would be nice
16:13:02 <johnsom> I have some internal paperwork to do this week, but hope to get back to fixing the unsets "soonish"
16:13:33 <johnsom> ataraday_ Ok, sounds like it is not a blocker for you, so I will probably work on that early next week.
16:14:13 <eandersson> johnsom, I don't see ver in Rocky https://github.com/openstack/octavia/blob/stable/rocky/octavia/amphorae/backends/health_daemon/health_daemon.py#L118
16:14:30 <ataraday_> johnsom, yes, I will be off Thursday and Friday anyway :)
16:14:50 <johnsom> eandersson I think I just messaged you that.
16:15:07 <cgoncalves> ataraday_, I very, very quickly checked your patches. a question I have is if you want zookeeper to be the default or redis. I recall we discussed going for redis as default and adding support for it in our devstack plugin
16:15:12 <eandersson> I just moved from home to the office so I might have missed that message.
16:15:23 <johnsom> eandersson https://github.com/openstack/octavia/commit/2170cc6c459b7ae8461a09a0b6fd754ecef9654e
16:15:43 <eandersson> Can we back port that one to Rocky?
16:15:47 <johnsom> Yeah, we decided to do Redis in the interim
16:15:54 <ataraday_> cgoncalves, we agreed on redis
16:16:21 <johnsom> eandersson No, I don't think so, it changes the protocol so.... This is why there is the backward compatibility in the controller side.
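A rough sketch of the controller-side branch being described, assuming the heartbeat arrives as a dict and newer (Stein+) agents include the "ver" field added by that commit (the handler names below are hypothetical):

    def handle_heartbeat(health):
        if 'ver' in health:
            # Newer amphora agent: fast path, no extra DB lookups needed.
            return handle_versioned_message(health)
        # Rocky-era agent without 'ver': slower backward-compatibility path
        # (roughly 0.2 s per heartbeat versus 0.006 s on the fast path).
        return handle_legacy_message(health)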
16:16:31 <cgoncalves> ataraday_, right. I asked because I saw zookeeper as default in your patch
16:17:05 <eandersson> I see - we need to update to Stein because the performance hit is pretty large there.
16:17:41 <ataraday_> yeah, I used it to try things out, now will switch to redis
16:17:53 <johnsom> eandersson Yeah, is it enough to improve that backwards compat code in Rocky?  What I saw you had was .2 seconds per heartbeat instead of .006
16:18:02 <cgoncalves> ok, just wanted to make sure we were on the same page. thanks!
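A minimal sketch of pointing a TaskFlow jobboard at Redis rather than ZooKeeper, which is the swap agreed above (the board name and connection options are illustrative; check the TaskFlow Redis jobboard docs for the exact keys):

    import contextlib
    from taskflow.jobs import backends as job_backends

    conf = {
        'board': 'redis',      # was 'zookeeper' in the original PoC
        'host': '127.0.0.1',
        'port': 6379,
    }

    with contextlib.closing(job_backends.fetch('octavia-jobboard', conf)) as board:
        board.connect()
        # Jobs posted here are persisted in Redis, so another controller
        # worker can claim and resume them after a failure.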
16:18:21 <johnsom> eandersson I will make the stress tool configurable for versioned or not.
16:18:52 <rm_work> i advised that we could probably intelligently skip that code path in most cases
16:18:58 <rm_work> which i think would be fine?
16:19:26 <rm_work> though i think you also proposed a patch to improve the speed by skipping the db refresh, not sure if that is a problem or not
16:19:30 <johnsom> Yeah, it gets skipped in Stein forward. It was just Rocky that had UDP and amphora agents that didn't send the version.
16:19:38 <rm_work> right but even in rocky
16:19:51 <rm_work> you can skip that code path if the db_lb has no UDP listeners on it
16:19:55 <rm_work> right?
16:20:20 <johnsom> What little I recall, it does, but maybe not?...
16:20:29 <rm_work> didn't look like it
16:20:39 <rm_work> and the code has the right info already
16:20:56 <rm_work> would just be a quick loop&check which is super fast compared to the guaranteed DB pull
16:21:16 <colin-> certainly sounds preferable
16:21:38 <rm_work> ok well we're beyond quick updates, should we make this a topic?
16:21:41 * xgerman lurking
16:22:36 <eandersson> It's up to you guys. I can probably keep a patch internally or maybe look at updating to Stein.
16:22:40 <rm_work> or should we move on, I think this just needs to be fixed... maybe I can do it today if eandersson doesn't do it in the next 10 minutes :D
16:22:57 <eandersson> rm_work, https://review.opendev.org/#/c/657756/
16:23:08 <rm_work> yeah i saw you did that one, but not sure if that causes issues or not
16:23:12 <rm_work> was hoping johnsom would review
16:23:15 <johnsom> So it is a loop that skips non-UDP listeners
16:23:20 <rm_work> which he did
16:24:15 <rm_work> johnsom: yeah but the issue is that *triggering the loop* is the expensive part
16:24:31 <rm_work> the lb_repo.get
16:24:47 <johnsom> Yeah, it's probably the DB round trips for the listeners
16:24:51 <rm_work> yes
16:24:59 <rm_work> so could skip before that, or do what eandersson did
16:25:02 <johnsom> But, again, I'm just curious if .2 seconds is a problem....
16:25:16 <rm_work> well it's an order of magnitude time increase
16:25:17 <rm_work> so
16:25:23 <rm_work> that's not ideal
16:25:48 <rm_work> 0.006 -> 0.2? two orders? lol
16:26:03 <colin-> and that's exacerbated by load
16:26:05 <rm_work> and more DB load
16:26:15 <rm_work> so yes i would say it could definitely be a problem
16:26:15 <johnsom> Yeah, otherwise, we need to do what was proposed, update the query to include the protocol. (please only in Rocky as it's a waste on Stein forward).
16:27:53 <rm_work> ah it's the special get, so i guess not
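A short illustration of the early-skip idea floated above (names are hypothetical, not the real Octavia repository API): check the listener protocols cheaply first, and only fall into the expensive full load balancer get when a UDP listener actually exists. The thread notes it may not apply cleanly to the "special" get, in which case the alternative is eandersson's patch, folding the protocol into the query itself.

    protocols = listener_repo.get_protocols(session, lb_id)   # hypothetical cheap query
    if 'UDP' not in protocols:
        return                                 # fast path: skip the guaranteed DB pull
    db_lb = lb_repo.get(session, id=lb_id)     # the expensive "special get" discussed above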
16:29:10 <rm_work> what was your topic johnsom ?
16:30:02 <johnsom> User survey
16:30:19 <rm_work> #topic User survey
16:30:56 <johnsom> So, for yet another year the foundation ignored us and did not ask us if we wanted a question on the user survey.
16:32:03 <johnsom> They only went to the "represented" projects (grumble, the privileged ones evidently).
16:32:08 <rm_work> don't we know multiple people on the TC? do they not have any say in this either?
16:32:13 <johnsom> So I raised the issue on the mailing list
16:32:19 <colin-> saw that heh
16:32:23 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005959.html
16:32:35 <johnsom> Yeah, this is good feedback for mugsie etc.
16:32:57 <johnsom> This is literally the fourth year they have ignored us.
16:33:08 <johnsom> Sorry, fourth cycle, not year.
16:33:51 <rm_work> and jroll
16:33:51 <johnsom> Anyway, even though the survey has already been advertised and we likely will not get many responses for 2019, my question for the team is what do we want to ask?
16:34:32 <mugsie> johnsom: unfortunately we are not in control of that survey - it is the UC
16:34:33 <johnsom> The question we had wanted to ask prior to the neutron-lbaas retirement this year was "What load balancing providers are you using?"
16:34:53 <cgoncalves> two questions? 1) most wanted new features, 2) Most wanted provider drivers
16:34:55 <johnsom> mugsie It's all foundation folks from the e-mail chain.
16:35:17 <mugsie> damn
16:35:41 <johnsom> cgoncalves +1
16:36:08 <johnsom> Anyone else have input?
16:36:31 <colin-> personally i'd like to have a better sense of deployed scale
16:37:05 <johnsom> colin- You can probably already get that from the 2018 survey. We are at least now listed as a project.
16:37:11 <colin-> it's difficult to ascertain when we discuss complex topics if we're speaking figuratively/academically or from experience. getting a better picture of how broadly deployed octavia is (vexxhost for example?) would help me understand how unique our problems are
16:37:22 <colin-> just trying to offer some feedback
16:37:26 <johnsom> #link https://www.openstack.org/analytics
16:37:48 <johnsom> Yep, I understand
16:37:56 <colin-> so a question that informs that would help me. will review the analytics thanks
16:38:28 <johnsom> From a quick glance at the 2018 survey, 13% of deployments have octavia in production.
16:38:54 <johnsom> I would have to spend some time with the tool to map that to compute cores for example.
16:39:55 <johnsom> I am in favor of the two questions cgoncalves asked.
16:40:49 <rm_work> yep
16:40:55 <johnsom> rm_work do you want to vote on this or ???
16:41:15 <rm_work> do we need to?
16:41:24 <johnsom> Probably not.
16:41:27 <rm_work> I haven't seen any disagreement
16:41:34 <johnsom> rm_work Do you want to reply to the mailing list with the questions?
16:41:47 <rm_work> not particularly? :D
16:42:03 <rm_work> do I have a choice? ^_^
16:42:09 <johnsom> Lol
16:42:29 <johnsom> I can do it.  I just thought it might be nice to have another voice from Octavia on the chain
16:42:42 <rm_work> you seem to be a good ML Liaison
16:42:45 <colin-> perhaps cgoncalves could do so, as he posed the questions?
16:43:03 <colin-> (sorry carlos!)
16:43:11 <cgoncalves> I don't mind :)
16:43:14 * johnsom notes it is interesting, for queens deployments we jump to 29% in prod
16:43:38 <johnsom> cgoncalves Thanks!
16:43:51 <colin-> johnsom: do i understand right that one copy of each octavia service and one VIP/listener/member would show up the same on that analytics page as a deployment 1000x that size?
16:44:18 <johnsom> Correct
16:44:21 <colin-> thx
16:44:29 <johnsom> It's per-cloud, not instance
16:45:17 <rm_work> ok so
16:45:44 <rm_work> #action cgoncalves reply to ML user survey email with octavia questions
16:46:21 <rm_work> can probably move on
16:46:37 <rm_work> I didn't have anything else, and wiki didn't have anything, so
16:46:46 <rm_work> #topic Open Discussion
16:47:06 <johnsom> I don't think I have any other topics for today.
16:47:25 <johnsom> Too much engineering via google docs for me this week
16:47:42 <colin-> no fun
16:49:56 <rm_work> yeah, me either
16:52:29 <cgoncalves> 1. Which OpenStack load balancing (Octavia) provider drivers would you like to see supported?
16:52:35 <cgoncalves> 2. Which new features would you like to see supported in OpenStack load balancing (Octavia)?
16:52:55 <cgoncalves> +1/-1, comments, suggestions, ... please
16:53:04 <johnsom> That looks good to me, +1
16:53:25 <rm_work> yeah just need to decide the options
16:53:37 <rm_work> right? cause... doesn't it have to be multi-choice?
16:54:32 <cgoncalves> good point. johnsom, do you remember how it was for the 2017 survey?
16:54:52 <johnsom> These are fill in questions I think.  Let's see what they say.
16:55:02 <cgoncalves> ok
16:55:14 <rm_work> ah
16:58:04 <johnsom> I see in their e-mail about the 2019 survey opening a question from ironic "What would you find most useful if it was part of Ironic?"
16:58:10 <johnsom> So, open ended should be ok
17:01:35 <johnsom> rm_work Are we done with the meeting?
17:01:45 <rm_work> AH yeah, we're at time
17:01:47 <rm_work> #endmeeting