16:00:41 #startmeeting Octavia
16:00:41 Meeting started Wed Nov 16 16:00:41 2022 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 The meeting name has been set to 'octavia'
16:00:47 o/
16:00:55 o/
16:00:58 o/
16:01:15 o/
16:03:04 #topic Announcements
16:03:12 ** Antelope-1
16:03:16 It is MS1 week
16:03:23 this week is the Antelope-1 milestone
16:03:25 yep
16:03:39 I will be proposing a MS1 tag for octavia-tempest-plugin this week
16:03:47 johnsom: thanks
16:04:26 we have also planned to review/merge some RFEs before this milestone
16:04:30 o/
16:04:37 (sorry, a tad late)
16:04:43 I think that things are moving well for the cpu-pinning RFE from tweining
16:05:20 (and it's not a big issue if it is not merged this week, at least we got interesting feedback on it)
16:05:55 yes, I think I will not change things and wait for reviews now
16:07:17 ack
16:07:33 different topic: I read that the call for papers for the next summit is open
16:07:33 any other announcements that I have missed?
16:08:17 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031201.html
16:08:44 interesting
16:09:19 Note this link for how to set up a forum session:
16:09:22 #link https://cfp.openinfra.dev/app/vancouver-2023/20/
16:10:07 Hmm, I wonder if that is the correct link, but it is what they had in the email
16:11:23 ack
16:11:55 wow, 1000 submissions on average.
16:12:51 If anyone is interested in submitting a session, feel free to ask me questions. I have done a few over the years.
16:14:52 thanks
16:15:01 #topic CI Status
16:15:21 FYI the (periodic) FIPS job is failing with timeouts
16:15:27 https://zuul.openstack.org/builds?job_name=octavia-v2-dsvm-scenario-fips&project=openstack/octavia
16:15:43 I proposed to split it into 2 jobs (one with the traffic tests, one with the non-traffic tests)
16:15:48 https://review.opendev.org/c/openstack/octavia/+/864391
16:16:00 a test showed that it would fix those failures
16:16:04 https://zuul.opendev.org/t/openstack/buildset/d9eaf090ccba4ddc80925eadf69983c2
16:16:42 (note: there's another FIPS job in the check pipeline that uses only the tls_barbican scenario tests)
16:16:52 Nice
16:19:43 #topic Brief progress reports / bugs needing review
16:21:05 Cores: we have 3 pages of open backports: https://review.opendev.org/q/project:openstack/octavia+status:open+branch:%255Estable/.*
16:21:06 I have finished the Barbican secrets consumer patch. It will not pass the Barbican tests until the Barbican team fixes a bug in the client. But I fixed that locally and had successful test runs.
16:21:13 johnsom: rm_work: tweining: gthiemonge: ^
16:21:34 #link https://review.opendev.org/c/openstack/octavia/+/864308
16:21:34 johnsom: great!
16:22:10 I will have a look
16:24:14 I have commented on #link https://review.opendev.org/c/openstack/octavia/+/859387
16:24:23 I proposed a fix for the bug that QG described last week:
16:24:28 https://review.opendev.org/c/openstack/octavia/+/864192
16:24:48 ^ but I don't know if we should translate exception strings, any idea johnsom?
16:25:15 QG: oh thanks, I will take a look
16:26:03 Translation is a good question. I don't think OpenStack is translating error messages anymore. I have only seen it for the Dashboard and release notes
16:26:24 but an exception might be displayed in the dashboard
16:26:50 Yeah, we pass through a lot of exception strings in our dashboard.
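For context, the `_` marker mentioned in this discussion is the conventional translation wrapper around user-facing strings. In Octavia it is provided by oslo.i18n, but the underlying mechanism is Python's stdlib gettext. The sketch below is an illustrative stand-in, not Octavia's actual code: the exception class and message string are simplified hypothetical versions of what lives in octavia/common/exceptions.py.

```python
import gettext

# In Octavia, "_" comes from oslo.i18n's TranslatorFactory; here we fall back
# to plain gettext, which returns the message unchanged when no translation
# catalog is installed.
_ = gettext.NullTranslations().gettext

class ValidationException(Exception):
    """Simplified stand-in for octavia.common.exceptions.ValidationException."""
    # Tagging the template with _() is what makes the string extractable
    # into a translation catalog; formatting happens after lookup.
    message = _("Validation failure: %(detail)s")

    def __init__(self, **kwargs):
        super().__init__(self.message % kwargs)

try:
    raise ValidationException(detail="listener protocol mismatch")
except ValidationException as exc:
    print(exc)  # Validation failure: listener protocol mismatch
```

The key point from the meeting: whether or not catalogs are shipped, tagging the template with `_()` keeps the option open for the dashboard, which displays these strings to end users.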
16:27:21 https://codesearch.opendev.org/?q=ValidationException&i=nope&literal=nope&files=&excludeFiles=&repos=openstack/octavia
16:27:30 #link https://review.opendev.org/c/openstack/octavia/+/415646
16:28:42 About logs, I have a question: what do you think of having the load balancer ID in every log message?
16:28:59 I guess we still have the "_" in common/exceptions, so we probably should tag those. I don't think we have done a good job of that, however. We might want to audit
16:30:14 Well, not all logs are related to a load balancer specifically. I.e. amphora instances not yet associated with an LB.
16:30:41 Generally, you can take the request ID, grep for that and be able to tie it back to the root resource.
16:31:23 amphorae during spawn are not associated with any LB?
16:31:27 ohhh
16:31:29 ok
16:31:50 Well, no, we still have the "spares" code I think. Where we boot instances before they are assigned.
16:32:07 the spares feature was removed
16:32:21 it's more that we want to centralize all the logs from a specific LB
16:32:35 Personally, I think the log lines are already too long. I also think it will be a bit of work to pass the LB ID down all of the flows.
16:34:06 Yeah, I get the idea of tracing. OpenStack has been aligned around the request ID.
16:34:37 We do tag all of the tenant flow logs with the LB that produced them. It's just the control plane that does not.
16:34:57 What do others think on this?
16:35:56 I like the idea (I spent a lot of time looking at the logs), but yeah it might be complicated to implement
16:36:02 I'm fine with having the request ID I think
16:36:17 and yes, log lines are long already
16:36:18 about the request ID, I think that sometimes we are losing this context
16:36:52 We are, we have not fully implemented the request ID
16:36:57 johnsom: would it be possible to re-use the request ID from the API in the controller?
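The tracing approach johnsom describes (grep one request ID to tie an operation back to its root resource) can be sketched with stdlib contextvars plus a logging filter. OpenStack services actually use oslo.context and oslo.log for this, so everything below, names included, is an illustrative stand-in under that assumption, not Octavia's implementation.

```python
import contextvars
import logging
import uuid

# Carries the request ID across function calls without threading it through
# every signature (the pain point raised in the meeting about passing the
# LB ID down all of the flows).
request_id_var = contextvars.ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    """Stamp every log record with the current request ID."""
    def filter(self, record):
        record.request_id = request_id_var.get()
        return True

logging.basicConfig(format="%(levelname)s [req-%(request_id)s] %(message)s")
log = logging.getLogger("octavia.sketch")
log.addFilter(RequestIdFilter())
log.setLevel(logging.INFO)

def api_create_load_balancer():
    # The API layer generates the request ID once per incoming request...
    request_id_var.set(uuid.uuid4().hex)
    log.info("received load balancer create request")
    controller_worker_task()

def controller_worker_task():
    # ...and every downstream log line carries it, so grepping for one
    # req-<id> ties the whole operation together without adding the LB ID
    # to each message.
    log.info("booting amphora")

api_create_load_balancer()
```

The design trade-off discussed above is visible here: the request ID adds one short token per line, while a per-resource ID would have to be plumbed through every flow explicitly.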
16:37:00 ah ok
16:37:07 maybe we could start by fixing it
16:37:09 The API is fully implemented, but the backends are not
16:37:44 Yeah, we should do that
16:39:47 "I think there is an open story for that" (tm)
16:39:51 grin
16:40:49 a "what"?
16:41:33 Maybe in an hour I can find it
16:42:04 #link https://storyboard.openstack.org/#!/story/1694861
16:42:10 Wow, I got lucky
16:42:26 nice
16:42:28 :-)
16:45:32 ok, folks, I think we are already in the "Open Discussion" topic
16:47:31 any other topics?
16:47:58 I will spend more time on reviewing things in the next few days.
16:48:26 same here
16:48:30 we also have a few very tiny changes that can be reviewed very quickly
16:48:40 o/ my latest showing up to upstream meetings
16:48:50 I will review patches as well
16:49:02 better late than never ;)
16:49:24 cool, thank you folks!
16:49:39 #endmeeting