16:00:23 #startmeeting Octavia
16:00:23 Meeting started Wed Jun 22 16:00:23 2022 UTC and is due to finish in 60 minutes. The chair is gthiemonge. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:23 The meeting name has been set to 'octavia'
16:00:29 Hi
16:00:38 o/
16:00:41 o/
16:01:09 o/
16:01:52 #topic Announcements
16:02:09 I don't have any announcements, but maybe I'm missing something
16:02:16 johnsom?
16:02:28 Not that I can think of
16:04:02 ok
16:04:07 #topic CI Status
16:04:21 Great news: the gate blockers are fixed now!
16:04:34 thank you folks for working on it!
16:04:52 +1
16:05:46 we still need to work on the new CentOS 9 Stream jobs, but the latest libvirt packages are broken in c9s (I don't know if they have fixed it yet)
16:06:12 (in case you're using c9s, downgrade the libvirt RPM to 8.3.0)
16:06:58 good to know
16:07:52 #topic Brief progress reports / bugs needing review
16:08:02 hey, I would like to get feedback on 2 patches:
16:08:06 #link https://review.opendev.org/c/openstack/octavia/+/846996
16:08:11 "Restart rsyslog from cloud-init in amphorav1"
16:08:35 I fixed the same bug in amphorav2, but I forgot to patch amphorav1 :/
16:08:53 I forgot that amphorav1 is still the default driver in wallaby
16:08:59 the other patch is
16:09:05 #link https://review.opendev.org/c/openstack/octavia/+/839935
16:09:08 some context:
16:09:35 while I was backporting a fix from master to the stable branches, I got a coverage issue (<92%) on wallaby (and only wallaby)
16:09:59 so I created this new commit; it adds a lot of tests for the controller worker
16:10:10 Probably means there are other missing backports on wallaby
16:10:18 my goal is to squash this commit into the related backports
16:10:26 But, more tests are almost always good.
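[Editor's note: the c9s workaround mentioned at 16:06:12 can be sketched as the commands below. This is a hypothetical host-configuration fragment, not something from the meeting; the exact package version string available in the repositories may differ from `8.3.0`.]

```shell
# Sketch of the suggested workaround: pin libvirt back to the
# known-good 8.3.0 build on CentOS 9 Stream.
sudo dnf downgrade -y libvirt-8.3.0

# Optionally hold it at that version until fixed packages land:
sudo dnf install -y python3-dnf-plugin-versionlock
sudo dnf versionlock add 'libvirt-8.3.0*'
```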
16:10:31 yeah, this is weird
16:10:50 the missing coverage was triggered by the spare pool feature in wallaby (removed in xena)
16:12:02 Still, have to give props to the team. 92% coverage as our minimum is really good for an OpenStack project. Neutron is only 83%, for example.
16:12:03 in the meantime I set W-1 on the backports, I will update them
16:12:44 https://review.opendev.org/c/openstack/octavia/+/843248 would also be good to merge soon because the bug is annoying. it's also only a small change
16:14:09 yeah, I agree; without the patch it takes 5 min to restart the driver-agent (with a recent Python release)
16:15:07 https://review.opendev.org/c/openstack/octavia/+/831051 I just finished testing this new version of the notifications change and I see the same behavior with the default notifications topic as you, gthiemonge. I wonder what's so special about it. I do see that errors get added to notifications.error though.
16:15:09 yeah, I wish neutron would fix their services as well. It takes forever to clean devstack.
16:16:17 johnsom: and you don't need a graceful shutdown when you clean devstack ;-)
16:17:20 Well, yeah, since they get killed, I'm sure all of the neutron stuff gets left in the VM
16:18:10 yeah, I had many stale driver-agent processes, for instance
16:20:06 #topic Open Discussion
16:21:07 any other topics, folks?
16:21:39 well, are there any questions regarding the forum session we had on LBaaS at the OIF summit?
16:22:03 it was raised here last week (and I sadly couldn't make it), but was discussed in this channel afterwards
16:22:19 Mostly sad that no Octavia team members were able to attend to address the misinformation shared there.
16:22:56 fkr: yeah, maybe you can help us understand some of the concerns in the etherpad
16:22:56 It was odd too, as it wasn't listed as an official forum session.
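[Editor's note: the 92% minimum discussed above is enforced by the project's coverage tooling in CI. As a rough standalone illustration only (not Octavia's actual gate script), a coverage floor check boils down to comparing the measured percentage against the threshold; the `ACTUAL` value here is a made-up example.]

```shell
# Illustrative coverage floor check. In real CI the percentage comes
# from the coverage report; here it is hard-coded as an example.
THRESHOLD=92
ACTUAL=93   # e.g. parsed from "coverage report" output

if [ "$ACTUAL" -ge "$THRESHOLD" ]; then
    echo "coverage OK (${ACTUAL}% >= ${THRESHOLD}%)"
else
    echo "coverage ${ACTUAL}% is below the ${THRESHOLD}% minimum" >&2
    exit 1
fi
```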
16:23:04 #link https://etherpad.opendev.org/p/oif-summit-berlin-2022-forum-lbaas
16:23:09 johnsom: it was
16:23:27 i'd like to work on clearing up that picture, since to me it seemed that the operators live in a silo, and I think, as part of my work, I can try to bridge some of those gaps (or at least try)
16:23:51 johnsom: it was on the schedule (if that is what makes it an 'official' forum session)
16:24:13 fkr https://wiki.openstack.org/wiki/Forum/Berlin2022
16:24:22 aaah
16:24:49 This was the wiki for the official forum sessions, which go through a different process than regular sessions/talks.
16:24:58 johnsom: then it was my mistake, since I was not aware that it should have been added there
16:25:24 I'm new to the openinfra/openstack community, so these are things I'll pick up then ;)
16:26:08 Yeah, no worries. We track the forum process, so that was part of why it was a surprise
16:26:34 gthiemonge: to come back to your comment: I think what happened is that various operators (at least in my scope) stumbled across problems with octavia, but some of them were never reported as proper bugs, which of course makes it hard for the octavia devs to address them
16:26:56 (various operators == cloud service providers that run openstack-based clouds)
16:26:56 Also interesting: I have gone to operator sessions at a few previous OpenStack events and got no feedback, so it's interesting that this happened now.
16:28:34 The other thing I noted on the etherpad (thank you BTW for having one) was that it appears people are trying to use the OVN provider, which the neutron team is still developing. Even this week they are working to make the status fields functional.
16:28:44 with one reported bug we noticed that the speed with which the problem was addressed by the octavia devs was blazing fast, which shows that it makes sense to report them in a proper manner. as stated, I'm fairly new to the scene, so I have no idea how close most of the operators are to the devs.
16:29:06 well, I must say not all feedback on the etherpad was constructive, though
16:29:41 We have one operator who is a core reviewer. But otherwise we don't hear much.
16:29:48 tweining: :(
16:30:14 Yeah, the language was... not necessarily constructive.
16:31:02 It blew my mind a bit that "UDP is not supported" when it's been a feature since Rocky...
16:31:18 tweining: I have it on my list to work through the pad myself (together with some fellow devs and operators) in order to turn it into something constructive. However, thanks tweining and johnsom for that feedback, I'll also take it back to the group of people that worked on the pad
16:32:43 fkr: anyway, for any issues, people can reach us on the openstack-discuss mailing list, on this channel on IRC, or on Storyboard
16:32:59 Yeah, we meet here every week as well.
16:33:23 ack. that is why I joined and stuck around
16:33:46 and we would be really happy to help people and to fix newly reported bugs :D
16:34:24 johnsom: regarding the UDP support, I'll try to find out why it is on there and where the misunderstanding is
16:34:30 gthiemonge: :)
16:34:55 so, johnsom has left a lot of comments in the etherpad (in purple), I think that would be a good starting point
16:35:06 +1
16:35:13 ok. thanks!
16:37:00 I'm also adding comments
16:37:06 it would also be interesting to understand your use cases, for instance with k8s.
16:38:25 tweining: are you referring to the cloud&heat remark?
16:39:50 Yeah, k8s usage is interesting. There are many different parts to it. Octavia is great for ingress, but the providers aren't set up for some other use cases. Not to mention Envoy with Istio is already a pretty good option there.
16:40:05 not specifically. In the beginning it says "We want a stable and reliable LBaaS provided by OpenStack that can be consumed by Kubernetes clusters". Kubernetes clusters is a broad term
16:40:41 +1
16:43:49 johnsom: that was also part of a discussion (whether it really needs to be one-size-fits-all, or octavia for certain cases and something deployed on top for others)
16:44:16 fkr I guess in summary: there is a community of folks developing and operating Octavia here. We are happy to discuss any issues/features/bugs etc.
16:45:46 side note, but not unimportant: the intention was not to step on anyone's toes or to "just complain and moan without trying to improve stuff"
16:46:29 in case it was received that way: I'm sorry
16:47:12 We have gotten a lot of heat from what occurred there, I have to admit. It's really unfortunate that no one active in the Octavia community could attend.
16:49:41 ok, so now I think we need to work together to make things better
16:49:47 +1
16:49:59 +1 :)
16:49:59 if we can learn from each other and get operators better connected with the dev team in the future, something positive will have come of it.
16:50:11 and at the next summit, we would have a great forum session
16:50:26 full ack
16:51:09 It was three operators (among others) that started the project. We all worked together to build this design. I hope that can continue.
16:51:46 fkr: don't hesitate to ping us on IRC if you have any questions. I think you're based in Germany; tweining and I are in the same TZ
16:52:08 good to know!
16:53:34 I and some of the other cores are in US timezones.
16:53:46 24/7 ;-)
16:53:52 Pretty much
16:54:18 but we don't have a hotline unless you pay for a Red Hat subscription ;)
16:54:26 lol
16:54:26 hehe
16:54:32 ok folks, I think that's it for today
16:55:09 that was interesting, I hope we will improve the communication between us
16:55:20 thank you folks!
16:55:26 o/
16:55:28 #endmeeting