19:59:14 #startmeeting Octavia
19:59:15 Meeting started Wed Apr 3 19:59:14 2019 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:59:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:59:18 The meeting name has been set to 'octavia'
19:59:24 hello
19:59:28 Hi folks
19:59:46 poke rm_work our future PTL
20:00:14 #topic Announcements
20:00:57 This is the final RC week. I think we need an RC2 for octavia, so we will try to get those fixes in today and hopefully do the RC2 today as well.
20:01:10 We are also close to doing some stable branch releases.
20:01:37 Other than that, I don't have any other announcements this week. Anyone else?
20:02:35 #topic Proposal to change meeting time (cgoncalves)
20:02:50 cgoncalves Do you want to talk to this?
20:02:59 sure, thanks
20:03:48 so currently our weekly meetings are at 1pm PST, which means 10 pm CEST and 11 pm IST (Israel)
20:04:04 2000 UTC
20:04:20 in Asia, it is in the middle of the night
20:04:55 I was wondering if we could have our meetings earlier to be more friendly to folks in EMEA and Asia
20:05:25 thanks for the correction
20:05:30 Yes, as long as we get quorum with the change.
20:05:36 agreed
20:05:44 (I was just adding to the conversation, grin)
20:05:48 sure, what time are you proposing?
20:06:05 So how this works, community process wise:
20:06:07 also we should make sure rm_work is available
20:06:14 1. we propose some times/days
20:06:16 right
20:06:28 2. I will create a doodle for those times/days
20:06:42 3. We e-mail the openstack list with the details and the doodle.
20:06:48 s/I/rm_work/g
20:07:33 4. We let that soak a week, then if we have quorum for a new time, I will go update all the places that need updating and we have a new time.
20:08:11 sounds good to me
20:08:13 Questions/comments on the process?
20:08:27 +1 (other than we should have rm_work own more of the process)
20:09:01 I think it will stretch into when he takes over
20:09:02 I would hope that rm_work would participate in the proposals.
20:09:13 * johnsom wonders how many times we can ping him.... grin
20:09:20 perhaps the current meeting time is also not very convenient for rm_work
20:09:40 yeah, who knows which time zone he lives by nowadays
20:09:58 So I will start, 1600 UTC is a nice time for me
20:10:05 true. it's 5 am in Japan
20:12:15 since the time works for the people here, we are the wrong ones to ask to begin with
20:13:03 let's throw more time options into the doodle. say 1500 UTC
20:13:20 +1
20:13:51 let's also make sure we include the current meeting time
20:14:01 Ok, fair point
20:14:28 +1
20:14:37 Do we have any particular days that we should propose, or any that are no-go for folks?
20:15:07 Let's stick with Wednesday; Friday/Monday are funny in a lot of time zones
20:15:12 Fridays are no go for Israelis
20:15:16 I am guessing Friday, Saturday, Sunday are bad
20:15:36 Yeah, so Tue-Wed-Thur
20:15:38 Tue-Thu
20:15:43 +1
20:16:21 Ok, any other proposed times for the doodle?
20:16:22 cool. thank you, all!
20:17:05 johnsom, there's an option in doodle that allows anyone to add new rows (= times), no?
20:17:24 s/rows/columns/
20:17:25 Ok. I will get the process going.
20:17:31 +1
20:17:39 yes, I think so. you think we should leave it open?
20:17:44 but really rm_work...
20:17:49 why not
20:17:54 * johnsom notes it's been a year or two since I did this
20:18:19 Ok, will do
20:18:34 thanks
20:18:42 It just means folks need to check back to it in case new times are added
20:19:06 #topic Brief progress reports / bugs needing review
20:20:08 I have worked on removing the last references to oslosphinx, which is broken with sphinx 2, deprecated, and won't be fixed.
20:20:35 For the most part we have already done that, but there were two references we missed. You should not see any major changes in the docs/release notes.
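(For context on the oslosphinx removal above: a minimal sketch of what the swap typically looks like in an OpenStack project's doc/source/conf.py, assuming the docs build moves to openstackdocstheme, the usual replacement. The extension list shown here is illustrative, not Octavia's actual configuration.)

    # doc/source/conf.py -- illustrative fragment only, not the real Octavia file.
    # The deprecated 'oslosphinx' extension is dropped; openstackdocstheme
    # supplies both the theme and the OpenStack-specific Sphinx integration.
    extensions = [
        'sphinx.ext.autodoc',
        'openstackdocstheme',
    ]

    # Theme provided by the openstackdocstheme package.
    html_theme = 'openstackdocs'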
20:21:30 I helped figure out a solution to our grenade issue.
20:21:53 Lots of reviews, etc.
20:22:30 johnsom, thank you for your help troubleshooting and proposing a fix to the grenade issue. really appreciate it!
20:22:31 Currently I'm working on adding the "unset" option to our openstack client. This will make it clearer for users how to clear settings.
20:22:58 I'm going to go through the main options first, then come back and do tags.
20:23:14 I sent back my laptop, took a week of vacation… slowly getting my 2008 Mac into 2019's software
20:23:22 I need to move a module out of neutron in OSC up to osc-lib so we can share the tags code.
20:23:54 Yeah, I am also running on "alternate" hardware now. Seems to be working ok though.
20:24:32 I also fixed a security-related issue in the OSA role.
20:24:39 #link https://review.openstack.org/648744
20:24:52 xgerman You might want to do a quick review on that
20:25:01 on it
20:25:45 So, that is my plan for the next few days, work on unset and then tags for the client.
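(To illustrate the "unset" client work described above: a minimal sketch of an unset-style command in the osc-lib/cliff pattern that OpenStack clients use. The class name, flag, and client call below are hypothetical placeholders, not the actual octaviaclient patch; clearing a field is modeled as sending it back as null.)

    # Hypothetical sketch only; the real octaviaclient code will differ.
    from osc_lib.command import command


    class UnsetLoadBalancer(command.Command):
        """Clear settings on a load balancer (illustrative example)."""

        def get_parser(self, prog_name):
            parser = super().get_parser(prog_name)
            parser.add_argument('loadbalancer',
                                help='Load balancer to modify (ID).')
            parser.add_argument('--description', action='store_true',
                                help='Clear the load balancer description.')
            return parser

        def take_action(self, parsed_args):
            # Only the attributes being cleared go into the update body,
            # each set to None so the API resets them to their defaults.
            body = {}
            if parsed_args.description:
                body['description'] = None
            if body:
                # 'load_balancer' as the client_manager attribute is an
                # assumption about how the Octavia OSC plugin is wired up.
                client = self.app.client_manager.load_balancer
                client.load_balancer_set(parsed_args.loadbalancer,
                                         json={'loadbalancer': body})

A real command would also resolve names to IDs and cover every clearable attribute; the point is just that "unset" maps a flag to a null value in the update request.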
20:26:11 Also, I will be travelling and not available much Sun-Wed.
20:26:14 Just as a heads up
20:26:25 there is a patch in master and stein that broke spare pools. Change https://review.openstack.org/#/c/649381/ will fix it. we need to backport it to stein and release Stein RC2 this week
20:26:58 Yeah, I am going to take a stab at that after lunch.
20:27:05 it would be nice if we could merge a tempest test for spare pools to prevent regressions like this in the future
20:27:09 #link https://review.openstack.org/#/c/634988/
20:27:14 I think we just need to do another migration and we can fix it that way
20:28:03 migration? as in DB migration?
20:28:10 Yep. Good stuff. I had previously +2'd, will circle back
20:28:13 a patch set, I take it
20:28:14 Yeah
20:28:52 Any other updates this week?
20:28:54 I also addressed reviews on https://review.openstack.org/#/c/645817/
20:29:08 I have a customer waiting for it
20:29:52 +2, you addressed my only issue with it.
20:30:01 looking
20:30:09 thank you, thank you!
20:30:17 need to see what happened to my patches...
20:31:04 #topic Open Discussion
20:31:09 Ok, other topics this week?
20:32:10 some folks were discussing here on the channel earlier today about an issue where the health manager would trigger failover
20:32:28 while the network was still being configured, i.e. flows, etc
20:32:31 yeah, it can do that ;-)
20:32:54 how do we know that the network is configured
20:32:54 no resolution yet
20:32:59 ?
20:33:06 precisely, that's the question
20:33:15 Really? The HM honors the lock on the objects, so it should not be able to start a failover if another controller owns the resource
20:33:37 Oh, you mean the neutron networking....
20:33:39 Right
20:33:41 I was thinking neutron was pulling sh*t
20:33:43 yes
20:34:08 would it work not to fail over amps unless at least one heartbeat from any amp is received by the HM on start up?
20:34:17 that goes back to should we go into ERROR and tell the operator neutron is broken, or keep retrying
20:34:46 see [1] http://blog.eichberger.de/posts/yolo_cloud/
20:35:00 we don't know if neutron is "broken". all we know is the HM hasn't received a heartbeat within the heartbeat_timeout (60 seconds)
20:35:11 I suspect the issue is around net splits where some hosts and racks are working, but others are not, so any heartbeat would likely have the same issue
20:35:21 cgoncalves: we had it not fail over stuff when there was no heartbeat, which caused other problems
20:35:40 (mainly the amp doesn't come up right and we never know)
20:35:45 xgerman, we still do not fail over newly created LBs
20:36:03 I thought we fixed that a while back...
20:36:33 I thought someone was looking at that again and proposed a fix as well. Not positive though.
20:36:46 my understanding was that it was a feature/desired behavior, not a bug
20:36:47 It's a trickier problem than it seems on the surface
20:37:20 yeah, it's either wait or throw our hands up in the air
20:38:34 just wanted to bring this up in case anyone had some thoughts
20:38:54 this is affecting some customers on this side
20:39:12 how? OSP 13 is not HA...
20:39:14 I heard other people are also facing the same issue
20:39:21 Personally, if the amp isn't talking to us, it seems like it is the right answer to fail it over. The question is what to do after that, specifically if the neutron outage causes the failover to not be successful. Right now we fail "safe", in that it's marked ERROR, but the secondary amp is still passing traffic.
20:39:41 lol, ouch
20:40:33 having an HA tempest job would go a long way toward having it supported in OSP
20:41:34 We could have a periodic task that looks for ERROR LBs with an amp in ERROR and attempts an amp failover again. We would just need to figure out the right back-off and make super sure we don't bother the other functional amp.
20:42:06 I think we have some work to do on the failover flows in general actually.
20:42:07 mmh, I would leave that for the PTG…
20:42:23 I don't like that, to be honest. we would be killing an amphora that is actually up by failing over not once but twice
20:42:23 Yes, good topic for the PTG
20:42:25 yep, agree on failover flows
20:42:36 been beating that drum for a year now
20:43:55 Added a topic to the PTG etherpad
20:44:00 #link https://etherpad.openstack.org/p/octavia-train-ptg
20:44:31 is there a reason you guys don't like my proposed approach?
20:45:39 The one-heartbeat one, or did I miss something?
20:45:52 "would it work not to fail over amps unless at least one heartbeat from any amp is received by the HM on start up?"
20:46:01 let me know if I'm not being clear
20:46:15 I responded: "I suspect the issue is around net splits where some hosts and racks are working, but others are not, so any heartbeat would likely have the same issue"
20:46:50 ah, sorry. I read that but didn't read it as a reply to my message
20:47:09 We could also try to set some threshold, say if more than x amps are "down", pause failovers
20:47:59 I guess now that we can mutate the config this could be more feasible. It would allow the operator to have a knob to turn.
20:48:15 it would be expected that sometime "soon" the network would be back again so HMs would start receiving heartbeats
20:49:08 Well, there is always the "rack got zapped by an evil mastermind's laser" scenario where it will not come back
20:50:23 Or the scenario I saw once, where the host was just powered off for a day
20:51:36 hmm...
20:51:59 That one was nice, it led to a bunch of zombie instances showing up again
20:52:24 the health manager kills them now, thanks to xgerman's patch
20:52:36 Stuff to think about. I captured a few on the etherpad, please add more!
20:52:53 Yeah, that is some of the background on that patch
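(A rough sketch of the "pause failovers above a threshold" idea floated above, using an oslo.config option marked mutable so operators get a runtime knob. The option name, group, and the helper function are invented for illustration and are not the health manager's actual logic.)

    # Illustrative only: if a large fraction of amphorae miss their heartbeat
    # at the same time, it probably indicates a network problem, so failovers
    # could be paused instead of churning through healthy amphorae.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        # mutable=True allows the value to be changed by editing the config
        # file and asking the running service to mutate its config.
        cfg.FloatOpt('failover_pause_ratio', default=0.5, mutable=True,
                     help='Hypothetical knob: pause failovers when this '
                          'fraction of amphorae have missed a heartbeat.'),
    ], group='health_manager')


    def should_pause_failovers(stale_amps, total_amps):
        """Return True when so many amps look dead that failover is suspect."""
        if total_amps == 0:
            return False
        ratio = stale_amps / total_amps
        return ratio >= CONF.health_manager.failover_pause_ratio

Any real version would still need the back-off handling and the "don't touch the healthy amp" guarantees mentioned earlier in the discussion.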
20:53:53 We have about 5 minutes, were there other topics we needed to discuss?
20:55:14 Ok, just wanted to check. Sometimes we run out of time before we discuss everything.
20:55:51 Oh, BTW, the devstack patch still hasn't merged, so the barbican job is still going to fail.
20:56:22 which devstack patch?
20:56:27 #link https://review.openstack.org/648951
20:58:04 thanks
20:58:28 Ok, sounds like we are wrapping up. Thanks folks! Have a great week.
20:58:30 #endmeeting