20:00:03 #startmeeting Octavia
20:00:04 Meeting started Wed May 2 20:00:03 2018 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:08 The meeting name has been set to 'octavia'
20:00:16 Hi folks!
20:00:49 hi
20:00:52 o/
20:01:00 #topic Announcements
20:01:12 The only announcement I have this week is that we have a new TC elected:
20:01:17 +1
20:01:18 #link https://governance.openstack.org/election/results/rocky/tc.html
20:02:06 Oh, and there is now an Octavia ingress controller for Kubernetes
20:02:13 #link https://github.com/kubernetes/cloud-provider-openstack/tree/master/pkg/ingress
20:02:35 Any other announcements this week?
20:03:14 #topic Brief progress reports / bugs needing review
20:03:49 I have been busy working on the provider driver. The Load Balancer part is now complete and up for review comments.
20:03:56 #link https://review.openstack.org/#/c/563795/
20:04:16 It got a bit big due to single-call-create being part of load balancer.
20:04:30 o/
20:04:32 So, I'm going to split it across a few patches (and update the commit to reflect that)
20:05:01 johnsom, thank you for taking the lead on this. I will review it.
20:05:06 Ha, I guess there is that announcement as well
20:05:33 I have been working on the octavia tempest plugin. Two patches are ready for review (although I need to address johnsom's comments)
20:05:36 I think the listener one will be a good example for what needs to happen with the rest of the API. It's up next for me
20:05:53 +1 on tempest plugin work
20:07:06 Any updates on Rally or grenade tests?
20:07:53 sorry, I still need to resume the grenade patch
20:08:24 Ok, NP. Just curious for an update.
20:08:32 johnsom, the rally scenario now works. i have some other internal fires to put out and then I'll iterate back to run it and report the numbers. it had a bug with the loadbalancers cleanup which is fixed now. so we are in good shape there overall.
20:08:47 Cool!
20:09:11 Any other updates this week or should we move on to our next agenda topic?
20:09:24 yeah :) it took quite a few tries but it was worth the effort i think.
20:09:36 #topic Discuss health monitors of type PING
20:09:44 #link https://review.openstack.org/#/c/528439/
20:09:53 nmagnezi This is your topic.
20:10:04 open it ^^ while gerrit still works :)
20:10:13 PING is dumb and should be burned with fire
20:10:17 so, rm_work submitted a patch to allow operators to block it
20:10:26 I can give a little background on why I added this feature.
20:10:39 rm_work: wait for it. I think you will like it ;)
20:10:46 1. Most load balancers offer it.
20:10:49 johnsom: because you want users to suffer?
20:10:52 i commented that I understand rm_work's point, but I don't know if adding a config option is a good idea here
20:11:02 rm_work, lol
20:11:33 we're handing them a gun and pointing it at their foot for them
20:11:34 anyhow, the discussion I think we should have is whether or not we want to deprecate and later remove this option from our API
20:11:47 cgoncalves: you're right :)
20:11:57 2. I was doing some API load testing with members and wanted them online, but not getting HTTP hits to skew metrics.
20:12:53 you could also just ... not use HMs in a load test... they'll also be "online"
20:13:02 or use an alternate port
20:13:10 Well, they would be "no monitor"
20:13:35 does TCP Connect actually count for stats?
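Editor's note: for readers following these minutes, the debate above is between an ICMP-only check and a check that actually exercises the application. A hedged sketch with the OpenStack client (the pool name is a placeholder and exact flag spellings may vary between client releases):

    # Illustrative only -- "web-pool" is made up.

    # The PING type under debate: the amphora only checks ICMP
    # reachability, so a member whose web server has crashed can still
    # report healthy.
    openstack loadbalancer healthmonitor create \
        --type PING --delay 5 --timeout 3 --max-retries 3 web-pool

    # The kind of check the team keeps steering people toward instead:
    # hit the service and require a known-good response.
    openstack loadbalancer healthmonitor create \
        --type HTTP --url-path /healthcheck --expected-codes 200 \
        --delay 5 --timeout 3 --max-retries 3 web-pool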
20:13:36 It was basically, ping localhost so they all go online no matter what.
20:14:17 So, I'm just saying there was a reason I went to the trouble to fix that (beyond the old broken docs that listed it)
20:15:11 we could rename it to "DO_NOT_USE_PING"
20:15:16 johnsom, your opinion is that we should keep ping hm as is?
20:15:38 Now, I fully understand that joe-I-don't-know-jack-but-am-a-load-balancer-expert will use PING for all of the wrong reasons.... I have seen it with my own eyes.
20:16:18 in *most openstack clouds* the default SG setup is to block ICMP
20:16:29 though I guess I can't back that up with actual survey data
20:16:47 Nice, so they instantly fail and they don't get too burned by being dumb
20:16:54 grin
20:16:56 so people are like "all my stuff is down, your thing is broken"
20:17:41 I dislike most openstack clouds — there are some wacky clouds out there
20:17:46 lol
20:18:02 My stance is, most, if not all, of our load balancers support it. There was at least one use case for adding it. It's there and works (except on centos amps). Do we really need to remove it?
20:18:05 johnsom, in your eyes, what are the right reasons for using ping hm?
20:18:16 * xgerman_ read about people using k8s to loadbalance since they don’t want to upgrade from Mitaka
20:18:27 Testing purposes only... Ha
20:18:34 lol
20:19:19 i'm not asking if we should or shouldn't remove this because of the centos amps. I'm asking this because it seems that everyone agrees with rm_work's gentle statements about ping :)
20:19:38 * rm_work is so gentle and PC
20:20:17 tremendously gentle, everyone says so. anyone who doesn't is fake news
20:20:27 #link http://andrewkandels.com/easy-icmp-health-checking-for-front-end-load-balanced-web-servers
20:20:29 lol
20:20:34 +1. unless there's a compelling use case for keeping ping, I'm for removing it
20:20:48 we SHOULD probably check with some vendors
20:20:54 I wish we had more participation from them
20:20:58 the point i'm trying to make here is that if ping is something we would want to keep, i don't think we need a config option to block it.
20:21:06 +1
20:21:12 I don't even see most of our vendor contacts in-channel anymore
20:21:20 if we agree that it should be removed, we don't need that config option either :)
20:21:26 that’s why we are doing providers
20:21:38 nmagnezi: yeah, this was supposed to be a compromise
20:21:53 you could argue that all compromise is bad and we should just pick a direction
20:21:54 anyhow, I think ping has value — not everybody runs HTTP or TCP
20:22:00 we have UDP coming up
20:22:06 Yeah, from what I see, all of our vendors support ICMP
20:22:15 alright
20:22:16 well
20:22:37 just trying to think through a UDP healthmonitor
20:22:39 This is true, UDP is harder to check
20:22:42 yes
20:22:57 Maybe someone will want us to load balance ICMP....
20:22:58 grin
20:23:03 but that's why there's TCP_CONNECT and alternate ports
20:23:03 HAHA
20:23:33 any reason a UDP member wouldn't allow a TCP_CONNECT HM with the monitor_port?
20:23:53 Yes, if they don't have any TCP code....
20:24:09 rm_work, that might depend on the app you run on the members
20:24:51 i mean
20:24:54 Yeah, so F5, A10, radware, and netscaler all have ICMP health check options
20:24:56 you would run another app
20:25:04 that is a health check for the UDP app
20:25:08 to make sure it is up, etc
20:25:33 so combo of connectable + 200OK response == good
20:25:43 I was pretty sure that was the standard for healthchecking stuff and why we added the monitor_port thing to begin with
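Editor's note: a hedged sketch of the monitor_port pattern described above. Every name, address and port here is invented for illustration; the point is that the health check targets a side-channel port rather than the (possibly UDP) service port:

    # The member serves its real workload on 5060 but exposes a small
    # HTTP health endpoint on 8080.
    openstack loadbalancer member create \
        --address 192.0.2.10 --protocol-port 5060 \
        --monitor-port 8080 media-pool

    # The monitor then checks the side-channel port: connectable plus a
    # 200 OK response == healthy.
    openstack loadbalancer healthmonitor create \
        --type HTTP --url-path /health --expected-codes 200 \
        --delay 5 --timeout 3 --max-retries 3 media-pool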
20:25:48 Well, some of this UDP stuff is for very dumb/simple devices. That was what the use-case discussion at the PTG around the need for UDP was about
20:26:02 rm_work, sounds a little bit redundant. if you want to check the health of your ACTUAL app, why have another one just to answer the lb?
20:26:06 probably not too dumb for ICMP
20:26:25 (but you could argue the same for ICMP, but at least it checks networking.. ha)
20:26:28 So, if the concern is for users mis-using ICMP, should we maybe just add a warning print to the client and dashboard?
20:26:37 johnsom, +!
20:26:40 johnsom, +1
20:26:48 +1
20:26:58 k T_T
20:27:01 I am ok with this
20:27:03 johnsom, i would add another warning to the logs as well
20:27:25 +1, plus a warning msg on the server side?
20:27:32 eh, logs just go to ops, and they can see it in the DB
20:27:33 which is easier to check
20:27:34 and they already know it's dumb
20:27:38 i wouldn't bother with the server side
20:27:45 Eh, not sure operators would care that much what health monitors the users are setting. Does that cross the "INFO" log level?????
20:27:46 it's users we need to reach
20:28:28 johnsom, a user being dump sounds like a warning to me :)
20:28:34 dumb*
20:29:12 Yeah, I just want us to strike a balance between filling up log files with noise and having actionable info in there.
20:29:38 well, you only print it once, when it's created
20:29:50 so it's not spamming the logs that bad
20:30:02 Ha, I have seen projects with 250 LBs in them. Click-deploy....
20:30:27 I am ok with logging it, no higher than INFO, if you folks think it is useful
20:30:41 fair enough.
20:30:51 wait, isn't info the one that always prints?
20:31:01 or, i guess that was your point
20:31:02 k
20:31:08 It would be some "fanatical support" to have agents call the user that just did that.... Grin
20:31:40 I would set up an automated email job
20:31:46 lol
20:32:02 That was flux...
20:32:21 Ha, ok, so where are we at with the config patch?
20:32:22 "We noticed you just created a PING Health Monitor for LB #UUID#. We recommend you reconsider, and use a different method for the following reasons: ...."
20:33:04 I mean... I would still like to be able to disable it, personally, but I grant that it should probably remain an option at large (however reluctantly)
20:33:07 I can open a story to add warnings to the client and dashboard
20:33:32 I can put WIP on this one or DNM or whatever, and just continue to pull it in downstream I guess <_<
20:33:48 I just figured a config couldn't hurt
20:34:11 the way I designed it, it would explain to the user when it blocks the creation
20:34:17 rm_work, if everyone else agrees on that, I will not be the one to block it. Just wanted to raise a discussion around this topic
20:34:22 I am ok with empowering operators myself
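Editor's note: the shape of the operator knob being debated, as a hedged sketch. The section and option name below are guesses for illustration; the real spelling is whatever rm_work's patch (review 528439) settles on:

    # octavia.conf -- illustrative only
    [api_settings]
    # When disabled, API requests to create a health monitor of type PING
    # would be rejected, with the error message explaining why (per
    # rm_work's description of the patch above).
    allow_ping_health_monitors = False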
20:34:55 can we get CentOS to 1.8? :P
20:35:05 I'd have a much weaker case then
20:35:13 \me wrong person to ask
20:35:14 +1, still knowing nmagnezi is not a fan of adding config options like this
20:35:15 cgoncalves and myself are working on it. it's not easy but we are doing our best :)
20:35:25 rm_work: soon! ;)
20:35:33 k
20:35:33 rm_work, we'll keep you posted
20:35:34 I mean
20:35:35 if we got a more official repo
20:35:40 we don't even need it in the main repo
20:35:49 we could merge my patch to the amp agent element
20:35:54 err, amp element
20:36:10 (which I already pull in downstream)
20:36:21 rm_work: short answer is: likely to have 1.8 in OSP14 (Rocky)
20:36:32 in what way?
20:36:41 CentOS amps based on CentOS8?
20:36:51 Official repo for OpenStack HAProxy?
20:37:00 HAProxy 1.8 backported into CentOS7?
20:37:34 cross tag. haproxy rpm in osp repo, same rpm as from openshift/pass repo
20:37:43 ok
20:37:53 so we would update and merge my patch
20:38:06 we will keep haproxy 1.5 but add a 'haproxy18' package
20:38:11 yeah
20:38:18 #link https://storyboard.openstack.org/#!/story/2001957
20:38:54 rm_work: you could then delete the repo add part from your patch
20:39:01 ok
20:39:08 i wish i could look up that CR now >_>
20:39:14 great timing on the gerrit outage for us, lol
20:40:55 So, I guess to close out the PING topic, vote on the open patch. (once gerrit is back)
20:41:16 #topic Open Discussion
20:41:23 Any topics today?
20:41:33 Multi-AZ?
20:41:42 I have a patch, it is actually reasonable to review
20:42:01 the question is... since it will only work if every AZ is routable on the same L2... is this reasonable to merge?
20:42:26 At least one other operator was doing the same thing and even had some similar patches started
20:42:28 We have a bionic gate, it is passing, but I'm not sure how, given the networking changes they made. It must have a backward compatibility feature. It's on my list to go update the amphora-agent for bionic's new networking.
20:43:28 I have not looked at the AZ patch, so can't really comment at the moment
20:43:32 (or if they're using an L3 networking driver)
20:43:44 k, it's more about whether the concept is a -2 or not
20:45:27 In general multi-AZ seems great to me. However the details really get deep
20:47:06 yeah
20:47:33 though if you have a routable L2 for all AZs, or you use an L3 net driver... then my patch will *just work*
20:47:37 +1
20:47:39 and the best part is that the only required config change is ... adding the additional AZs to the az config
20:47:51 :)
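Editor's note: a hedged sketch of the kind of configuration change rm_work describes. The multi-AZ option below is invented for illustration; the real option names live in the patch and may differ:

    # octavia.conf -- illustrative only
    [nova]
    # Existing behaviour: all amphorae land in a single compute
    # availability zone.
    availability_zone = az1

    # The patch under discussion would let the controller spread amphorae
    # across several AZs, e.g. something along the lines of:
    # availability_zones = az1,az2,az3
    #
    # As noted above, this only works if every AZ can reach the VIP and
    # member networks: a shared routable L2, or an L3-capable networking
    # driver.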
20:48:21 Would love nova to do something reasonable but in the interim…
20:49:19 Yeah, so I think it's down to review
20:49:39 Which brings me to a gentle nag....
20:49:39 +1
20:49:49 #link ttps://review.openstack.org/#/q/(project:openstack/octavia+OR+project:openstack/octavia-dashboard+OR+project:openstack/python-octaviaclient+OR+project:openstack/octavia-tempest-plugin)+AND+status:open+AND+NOT+label:Code-Review%253C0+AND+NOT+label:Verified%253C%253D0+AND+NOT+label:Workflow%253C0
20:50:09 Well, when gerrit is back up.
20:50:10 johnsom, forgot an 'h'
20:50:28 ono
20:50:29 There are a ton of open un-reviewed patches....
20:50:38 #undo
20:50:39 Removing item from minutes: #link ttps://review.openstack.org/#/q/(project:openstack/octavia+OR+project:openstack/octavia-dashboard+OR+project:openstack/python-octaviaclient+OR+project:openstack/octavia-tempest-plugin)+AND+status:open+AND+NOT+label:Code-Review%253C0+AND+NOT+label:Verified%253C%253D0+AND+NOT+label:Workflow%253C0
20:50:42 so many
20:50:50 I need to go review too, but
20:50:53 #link https://review.openstack.org/#/q/(project:openstack/octavia+OR+project:openstack/octavia-dashboard+OR+project:openstack/python-octaviaclient+OR+project:openstack/octavia-tempest-plugin)+AND+status:open+AND+NOT+label:Code-Review%253C0+AND+NOT+label:Verified%253C%253D0+AND+NOT+label:Workflow%253C0
20:50:55 not just me :P
20:51:15 Yeah, please take a few minutes and help us with reviews.
20:51:41 Any other topics today?
20:52:30 Ok then. Thanks everyone!
20:52:35 #endmeeting