16:00:42 #startmeeting Octavia
16:00:42 Meeting started Wed Jul 7 16:00:42 2021 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:42 The meeting name has been set to 'octavia'
16:00:48 o/
16:00:50 Hi everyone
16:01:34 Some folks are on vacation I think, might be a quiet week
16:01:35 hi
16:01:50 Don't be shy
16:02:32 #topic Announcements
16:02:48 Next week is the Xena-2 milestone
16:03:02 perhaps we need to publish an intermediate release for python-octaviaclient
16:03:05 johnsom: ^
16:03:16 we missed it for Xena-1
16:03:28 Yeah, we have some good backports/patches as well, so it's worth a release
16:04:15 ok, I'll need your help for it
16:04:23 Ok, ping me
16:04:32 johnsom: thanks
16:04:46 Shameless plug:
16:04:48 #link https://review.opendev.org/q/project:openstack/python-octaviaclient+status:open+owner:johnsomor%2540gmail.com
16:05:20 yeah, we need to get those commits in
16:06:40 the other core reviewers are on vacation
16:07:45 any other announcements?
16:09:25 #topic Brief progress reports / bugs needing review
16:09:41 I have spent most of my time on downstream stuff
16:10:03 but I've also been working on fixing the two-node job and the CentOS Stream job
16:10:35 CentOS Stream support is currently broken in devstack (an issue with the libvirt bindings); I hope it will be fixed soon
16:10:43 Yeah, I had some vacation time. Not much new on Octavia
16:10:50 and the two-node job is still V-1 because of unrelated CI issues
16:12:02 I've been working on the health monitor backport for OVN and trying to fix its gates; I plan to get to the multi-VIP rebase this week
16:12:20 haleyb: great!
16:12:41 gthiemonge: I have to review your network interface change again as well
16:13:31 I'll have to rebase it, it's in merge conflict :/
16:14:01 but the conflict is probably in a file that I have deleted
16:14:06 Yeah, I don't think I got back to that review yet either
16:14:08 sigh
16:14:59 FYI, a revert has been proposed in devstack
16:15:03 #link https://review.opendev.org/c/openstack/devstack/+/799251
16:15:20 merging this commit would break our IPv6-based scenario tests
16:16:53 Joy
16:18:02 #topic Open Discussion
16:18:09 one last topic about CI issues:
16:18:29 some jobs (noop-api) have been frequently failing with timeouts since the beginning of June
16:18:46 it seems that the duration of those jobs has increased a lot
16:19:08 for instance, the duration of the scoped-tokens jobs was between 1h30 and 1h45 last month
16:19:14 it is now around 2h
16:19:36 #link https://zuul.openstack.org/builds?job_name=octavia-v2-dsvm-noop-api-scoped-tokens&project=openstack%2Foctavia-tempest-plugin
16:19:58 and sometimes it hits the timeout (2h15)
16:20:43 it seems that something changed between June 6th and 11th
16:20:59 sqlalchemy 1.4 was added to upper constraints on June 9th
16:21:27 #link https://review.opendev.org/c/openstack/requirements/+/788339
16:22:59 yeah, I checked the new octavia/octavia-tempest-plugin/devstack commits, I didn't see anything there
16:23:29 so sqlalchemy might be a suspect
16:24:15 My first idea was to increase the timeout for the noop-api jobs (+15 or 30 min)
16:24:37 but I believe that johnsom has some smarter ideas ;-)
16:24:43 That means we wait longer.... sad face
16:25:34 My thoughts would be to try to narrow down what is slow. If it's the SQL calls, see if there is some new magic incantation we missed (probably not). Then I might explore moving the sqlite file to /dev/shm so it runs in RAM vs. the slow disks on these test hosts. Though RAM is certainly at a premium on these test nodes, so that may not be a good idea. Frankly, I have no idea what size that file gets during a test run.
16:25:42 Or we split the jobs up more.
16:26:28 These are no-op, so they really should be darn fast.
16:26:54 573 tests :D
16:27:21 That isn't that much. grin
16:28:01 johnsom: I can work on your first point: analyzing what is slow
16:28:13 Cool. I just can't spare the time right now
16:28:19 perhaps we could isolate one part of the code that is problematic
16:31:00 I would be curious to know.
16:31:07 if I don't find anything, we can try moving the sqlite file
16:31:13 or just increase the timeout
16:31:20 yeah
16:32:31 FYI, there is a comment on the discuss mailing list:
16:32:32 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-July/023502.html
16:32:45 About barbican integration in the dashboard.
16:33:08 I don't have a stack with Octavia and the dashboard at the moment to test this out.
16:33:19 this is weird, because barbicanAPI.getCertificates is part of octavia-dashboard
16:33:34 johnsom: I tried to reproduce it on master, I didn't find anything
16:34:11 There is a code check in the plugin that looks in the service catalog for key-manager, which should disable any calls to the barbican client. But this seemed like some javascript issue I don't understand and would need to run it myself.
16:34:21 hmm
16:34:29 It's possible that message comes out on the console all the time, but doesn't impact functionality
16:36:08 I will take another look
16:36:27 In my experience, almost every page spews something into the browser console these days. Most of it doesn't matter.
16:41:29 any other topics for this meeting?
16:41:49 That is all I have today.
16:42:46 ok
16:42:51 thanks everyone!
16:42:53 #endmeeting
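
Editor's note: the log above attributes the slowdown to a change landing between June 6th and 11th (the sqlalchemy 1.4 constraints bump was June 9th). One way to check that hypothesis is to compare median job durations on either side of the suspect date. This is a minimal sketch; the build records below are fabricated examples, and real numbers would come from the Zuul builds page linked in the log.

```python
from datetime import datetime
from statistics import median

# Fabricated example records; a real check would pull duration/end_time
# for the job from the Zuul builds listing linked in the meeting.
builds = [
    {"end_time": "2021-06-01T12:00:00", "duration": 5700},
    {"end_time": "2021-06-04T09:30:00", "duration": 6100},
    {"end_time": "2021-06-12T15:00:00", "duration": 7200},
    {"end_time": "2021-06-20T11:45:00", "duration": 7500},
]

def split_medians(builds, cutoff):
    """Return (median_before, median_after) durations around a cutoff date."""
    before = [b["duration"] for b in builds
              if datetime.fromisoformat(b["end_time"]) < cutoff]
    after = [b["duration"] for b in builds
             if datetime.fromisoformat(b["end_time"]) >= cutoff]
    return median(before), median(after)

# June 9th: the day sqlalchemy 1.4 entered upper constraints.
before_m, after_m = split_medians(builds, datetime(2021, 6, 9))
print(f"median before: {before_m}s, after: {after_m}s")
```

A clear step between the two medians would support the constraints-bump theory; a gradual drift would point elsewhere (e.g. node performance).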
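
Editor's note: johnsom's suggestion of moving the sqlite file to /dev/shm (a Linux tmpfs, so the file lives in RAM rather than on the test node's slow disks) can be sketched with the stdlib sqlite3 module. The database path and table here are hypothetical illustrations, not Octavia's actual test configuration.

```python
import os
import sqlite3
import tempfile

# /dev/shm is tmpfs on most Linux hosts; fall back to the regular temp
# dir elsewhere so the sketch stays portable.
shm_dir = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()
db_path = os.path.join(shm_dir, "octavia-noop-test.sqlite")  # hypothetical name

# Exercise the RAM-backed database with a toy table (not Octavia's schema).
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS load_balancer (id TEXT PRIMARY KEY)")
conn.execute("INSERT OR REPLACE INTO load_balancer VALUES ('lb-1')")
conn.commit()
rows = conn.execute("SELECT id FROM load_balancer").fetchall()
conn.close()
os.remove(db_path)  # tmpfs space is scarce on CI nodes; clean up promptly
```

As noted in the meeting, the trade-off is RAM pressure: the win only materializes if the database file stays small relative to the node's free memory.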