16:00:40 <gthiemonge> #startmeeting Octavia
16:00:40 <opendevmeet> Meeting started Wed Jun 30 16:00:40 2021 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:40 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:40 <opendevmeet> The meeting name has been set to 'octavia'
16:00:46 <gthiemonge> Hi
16:01:22 <johnsom> o/
16:01:25 <haleyb> hi
16:02:19 <gthiemonge> #topic Announcements
16:02:25 <gthiemonge> CI is RED today
16:02:51 <gthiemonge> 1) PEP8 job is broken
16:03:07 <gthiemonge> A new pylint release introduced new checkers
16:03:21 <gthiemonge> and some older checkers detect new errors
16:03:34 <gthiemonge> I proposed a fix, I think it's already CR+2 W+1
16:03:45 <johnsom> Yay for linters....
16:03:47 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia/+/798811
16:04:17 <gthiemonge> I got it when rechecking a patch after the neutron SG rule deletion issue was merged :/
16:04:45 <gthiemonge> 2) octavia-tox-functional-py37-tips is unstable
16:04:57 <gthiemonge> there's a timeout issue when testing the DriverAgent
16:05:09 <gthiemonge> Probably related to the slowness of the CI host
16:05:18 <gthiemonge> I have a workaround, I'll propose it after the meeting
16:05:33 <johnsom> Do you have a link to the logs of one of these failures?
16:05:36 <gthiemonge> but I still don't understand why it occurs only on the -tips job
16:06:06 <gthiemonge> #link https://zuul.openstack.org/build/d7abd5b0ef7b49c69e3325313fb46603
16:06:22 <johnsom> This is a "learning opportunity" as that is my code.
grin
16:06:24 <johnsom> Thanks
16:06:32 <gthiemonge> basically there's a settimeout(5) on a socket in the driver_lib
16:06:42 <gthiemonge> and we're hitting the timeout
16:07:13 <gthiemonge> in my env, receiving a message in the functional tests takes less than 200ms
16:07:21 <rm_work> there's some stuff that periodically I realize is not mocked that probably should be in certain situations
16:07:26 <rm_work> this is functional tho?
16:07:29 <gthiemonge> I managed to reproduce it with the cpulimit tool
16:07:40 <gthiemonge> functional yes
16:07:44 <rm_work> hmm k
16:08:20 <johnsom> Yeah, it's a very simple unix domain socket protocol, so ... interesting that it can't send data in 5 seconds
16:08:51 <gthiemonge> yeah i agree
16:10:40 <gthiemonge> any other announcements?
16:10:57 <johnsom> PTG is scheduled and registration is open.
16:11:24 <gthiemonge> Oh, I missed that :D
16:11:28 <johnsom> #link https://www.openstack.org/ptg/
16:12:08 <gthiemonge> johnsom: thanks
16:12:24 <rm_work> I need these to be in-person again T_T
16:12:47 <gthiemonge> in... person... what is that?
16:13:06 <johnsom> We are all virtual AIs
16:14:06 <johnsom> There is also a lively discussion of community goals for the next release on the discuss list.
16:14:30 <johnsom> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-June/023350.html
16:14:49 <johnsom> If you are interested in the goal process
16:17:58 <gthiemonge> #topic Brief progress reports / bugs needing review
16:19:08 <gthiemonge> I found an issue regarding haproxy worker cleanup in active standby
16:19:19 <gthiemonge> I opened a story that describes the behavior:
16:19:27 <gthiemonge> #link https://storyboard.openstack.org/#!/story/2009005
16:19:34 <gthiemonge> and I proposed a patch:
16:19:44 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia/+/797882
16:20:17 <johnsom> That one is super interesting and a good find!
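[Editor's note: the timeout discussed above is a settimeout(5) on a Unix domain socket in Octavia's driver_lib. The sketch below is an illustrative reproduction of that failure mode, not Octavia code; all names (fetch_status, the socket path, the toy server) are hypothetical, and a short timeout stands in for the real 5 seconds.]

```python
import os
import socket
import tempfile
import threading
import time


def fetch_status(timeout, server_delay):
    """Read one reply from a toy status server over a Unix domain socket.

    Returns the reply bytes, or None when recv() hits `timeout` -- the
    same failure mode as driver_lib's settimeout(5) expiring when a
    starved CI host delivers the message late.
    """
    path = os.path.join(tempfile.mkdtemp(), "status.sock")  # hypothetical path
    ready = threading.Event()

    def serve_once():
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(path)
        srv.listen(1)
        ready.set()
        conn, _ = srv.accept()
        time.sleep(server_delay)      # simulate a slow, CPU-starved host
        try:
            conn.sendall(b"OK")
        except OSError:
            pass                      # client already gave up
        conn.close()
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    ready.wait()

    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.settimeout(timeout)        # driver_lib uses settimeout(5)
    client.connect(path)
    try:
        return client.recv(1024)
    except socket.timeout:
        return None
    finally:
        client.close()


print(fetch_status(timeout=1.0, server_delay=0.05))  # b'OK'
print(fetch_status(timeout=0.2, server_delay=1.0))   # None (timed out)
```

This also illustrates why the failure is load-dependent rather than a protocol bug: the exchange itself is trivial, so the timeout only fires when the sending side is scheduled too late.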
16:20:24 <gthiemonge> I wanted to get some reviews, but it seems it's already W+1
16:20:33 <johnsom> In theory broken for quite a while.
16:21:12 <gthiemonge> yeah the stick tables are not synchronized after a config reload
16:21:19 <johnsom> I still need to go back and read up as I thought that was done over unix domain sockets, so wouldn't need lo, but that is probably something I remember wrong.
16:21:56 <johnsom> This goes back to design sessions we had in Seattle with a HAProxy core
16:22:10 <gthiemonge> I read somewhere that haproxy uses the peers in the peer list even for the local sync
16:22:47 <johnsom> I have been working on changing how we use tempest service clients to align with some "new-ish" guidelines for tempest plugins.
16:22:59 <johnsom> #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/797715
16:23:21 <johnsom> Huge thanks to g-mann (no need to ping him) for helping with that.
16:23:22 <gthiemonge> johnsom: and it started with a one-line patch
16:23:28 <johnsom> It did.... sigh
16:24:05 <johnsom> A conversation in #openstack-qa with the ironic team about service client credentials....
16:24:48 <johnsom> Ah, well, that should be pretty close if not good. I need to re-review today
16:25:22 <johnsom> Otherwise my time has been on reviews and Designate things
16:25:28 <gthiemonge> I'm still working on the two-node job, a commit was in review, but it was broken by the switch to ML2/OVN
16:25:55 <gthiemonge> with some help from the neutron folks, I managed to fix the last issue (floating ips not reachable)
16:26:04 <gthiemonge> #link https://review.opendev.org/c/openstack/octavia-tempest-plugin/+/773888
16:26:29 <gthiemonge> but now: TIMED_OUT
16:26:30 <gthiemonge> great
16:27:03 <johnsom> Grin
16:27:41 <johnsom> We should look into the "parallel" devstack install code. I thought that was default now, but maybe not? It would in theory stop devstack from installing each node sequentially.
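[Editor's note: for context on the stick-table exchange above — HAProxy synchronizes stick-table entries through a `peers` section, and, as gthiemonge says, the same mechanism covers "local" sync: on reload, the old process pushes its entries to the new process via the peer whose name matches the local peer name (the hostname, or the `-L` command-line option). A minimal illustration follows; the names and addresses are made up, not Octavia's generated configuration.]

```cfg
# Peer name "lb1" must match this instance's local peer name
# (hostname or haproxy -L lb1) for reload-time table handover.
peers mypeers
    peer lb1 10.0.0.10:10007

backend servers
    stick-table type ip size 10k peers mypeers
    stick on src
    server web1 192.168.0.11:8080 check
```

Without the `peers` reference on the stick-table, a config reload starts the new process with an empty table, which matches the "not synchronized after a config reload" behavior described in the story.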
16:29:03 <johnsom> Hmm, looks like that is working there, so... maybe another issue
16:29:19 <gthiemonge> yeah
16:29:39 <gthiemonge> it looks like controller2 is ready 50 min after the start of the job
16:29:45 <opendevreview> Merged openstack/octavia stable/wallaby: Fix race conditions between API and worker DB calls https://review.opendev.org/c/openstack/octavia/+/798255
16:29:50 <johnsom> Hmm yeah, so maybe not
16:30:40 <gthiemonge> I'll take a look at it
16:30:42 <johnsom> 50 minutes.... I wonder if devstack still says "will be done in 10 minutes". lol
16:32:01 <gthiemonge> #topic Open Discussion
16:32:10 <gthiemonge> any other topics?
16:34:24 * haleyb is still neck deep in the ovn provider, so nothing else from me
16:35:08 <gthiemonge> ok folks
16:35:11 <gthiemonge> thank you!
16:35:20 <gthiemonge> #endmeeting
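[Editor's note: on the "parallel devstack install" point above — devstack's parallel execution mode is opt-in via local.conf; to the best of my recollection the toggle is `DEVSTACK_PARALLEL`, but treat the exact variable name as an assumption to verify against the devstack documentation. Note it parallelizes service setup within one node; in a multinode job each node's stack.sh still runs per the job's node ordering.]

```cfg
[[local|localrc]]
# Assumed devstack setting: run independent setup phases asynchronously.
DEVSTACK_PARALLEL=True
```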