13:00:40 <mnasiadka> #startmeeting kolla
13:00:40 <opendevmeet> Meeting started Wed Sep 13 13:00:40 2023 UTC and is due to finish in 60 minutes.  The chair is mnasiadka. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:40 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:40 <opendevmeet> The meeting name has been set to 'kolla'
13:00:42 <mnasiadka> #topic rollcall
13:00:43 <mnasiadka> o/
13:00:46 <bbezak> o/
13:00:47 <frickler> \o
13:00:52 <mattcrees> o/
13:00:56 <jangutter> \o
13:01:22 <mmalchuk> o/
13:01:41 <SvenKieske> o/
13:02:15 <jsuazo> o/
13:02:21 <ihalomi> \o
13:02:39 <mnasiadka> #topic agenda
13:02:39 <mnasiadka> * CI status
13:02:39 <mnasiadka> * Release tasks
13:02:39 <mnasiadka> * Regular stable releases (first meeting in a month)
13:02:39 <mnasiadka> * Current cycle planning
13:02:40 <mnasiadka> * Additional agenda (from whiteboard)
13:02:40 <mnasiadka> * Open discussion
13:02:42 <mnasiadka> #topic CI status
13:03:33 <mnasiadka> Overall is green I guess
13:03:48 <mnasiadka> But I don't like that multinode (ceph) jobs are working nicely in master
13:03:55 <mnasiadka> but they are failing in stable branches
13:04:06 <mnasiadka> the same way as they were breaking before we reconfigured RMQ to HA
13:04:28 <mnasiadka> Would we want to enable RMQ HA in tests/templates/globals-defaults for stable branches?
13:04:51 <mmalchuk> no?
13:04:53 <bbezak> in CI only?
13:05:04 <mnasiadka> in CI only
13:05:08 <mnasiadka> mmalchuk: why not?
13:05:16 <mmalchuk> oh on CI is ok
13:05:27 <mnasiadka> frickler: any thoughts?
13:05:29 <SvenKieske> seems ok
13:05:46 <mattcrees> Seems reasonable to me
13:05:46 <frickler> my other idea was whether doing single node rmq only would be better
13:06:08 <frickler> but likely HA is fine, too, gives it some testing
13:06:12 <mmalchuk> frickler we need HA
13:06:28 <mmalchuk> on CI
13:06:33 <mnasiadka> let's try whether switching those jobs to HA solves the issue of randomness
13:06:48 <mnasiadka> anybody willing to change it and do a couple of rechecks?
13:07:25 <bbezak> can take a look probably soonish
13:07:42 <mnasiadka> ok, bbezak wins ;)
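[editor's note: the CI-only change discussed above might look like the following in kolla-ansible's tests/templates/globals-default.j2 — a sketch only; the variable name is taken from kolla-ansible's RabbitMQ HA support, and the exact file and branch handling would need checking against the stable branches:]

```yaml
# tests/templates/globals-default.j2 (CI globals only, stable branches):
# opt the multinode (ceph) jobs into RabbitMQ HA, matching what master runs.
om_enable_rabbitmq_high_availability: "true"
```

[if the flakiness persists with HA enabled, frickler's alternative — single-node rmq in those jobs — would be the next thing to try.]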
13:07:48 <mnasiadka> #topic Release tasks
13:08:06 <mnasiadka> I didn't raise a patch for cycle highlights, will do after this meeting
13:08:12 <mnasiadka> no other release tasks seem to be waiting
13:08:22 <mnasiadka> #topic Current cycle planning
13:08:51 <mnasiadka> I don't think anybody spent time reviewing podman patches
13:09:06 <mnasiadka> and I think kevko did not update the LE Kolla patch according to reviews
13:09:14 <mnasiadka> so those two things need focus :)
13:09:50 <mnasiadka> I also proposed setting OVN jobs as voting, because it's getting more and more deployments, so I guess it would make sense to have it as voting
13:10:03 <mnasiadka> #link https://review.opendev.org/c/openstack/kolla-ansible/+/894914
13:10:33 <mnasiadka> SvenKieske: did you mention that td-agent needs bumping up?
13:10:57 <SvenKieske> yeah, currently figuring that one out, might not be so trivial
13:11:12 <mmalchuk> do we switch to OVN from OVS later?
13:11:15 <mnasiadka> do you have that amount of time, or do you need help?
13:11:34 <mnasiadka> mmalchuk: that's on my backlog, we should have something at the beginning of next cycle
13:11:46 <mmalchuk> cool thanks
13:13:24 <SvenKieske> I should figure it out today or cry for help :D
13:13:29 <mnasiadka> ok :)
13:13:42 <mnasiadka> #topic Additional agenda (from whiteboard)
13:13:55 <mnasiadka> to zun or not to zun (jangutter). TL;DR, zun needs fundamental fixes to work with docker>20, etcd>3.3. Docker is a _host_ level dependency so vendoring options in the container won't work.
13:14:24 <jangutter> Does anyone have contacts with folks on the Zun team?
13:14:44 <mnasiadka> jangutter: I think it does work, but only for Ubuntu and Rocky 9 (since they still have 20.x available)
13:15:06 <mnasiadka> jangutter: there is one person I think, and he already responded on the ticket I created in Zun
13:15:06 <jangutter> yep, unless etcd is updated, of course.
13:15:11 <kevko> mnasiadka: didn't have a time
13:15:13 <kevko> :(
13:15:29 <frickler> there's only hongbin left in zun afaict, who I think has been around doing kolla patches, too
13:15:31 <mnasiadka> That's why I proposed we would copy the current etcd as etcd-zun, and upgrade etcd (that is used for everything else)
13:16:03 <mnasiadka> jangutter: if they don't shape up in C - we deprecate Zun and drop it in D (or earlier if we have to)
13:16:11 <jangutter> ah, right... running on different ports?
13:16:15 <frickler> mnasiadka: do you have a link to that ticket?
13:16:38 <mnasiadka> #link https://bugs.launchpad.net/zun/+bug/2007142
13:16:40 <frickler> maybe deprecate in B still? and undeprecate if things improve?
13:16:48 <mnasiadka> the goal is to fix it in C
13:16:59 <mnasiadka> I'm fine with deprecating in B, gives clear mandate to drop if they don't shape up
13:17:39 <jangutter> i.e. not remove, just don't run CI jobs on it?
13:17:45 <mnasiadka> jangutter: well, then the role would need to support it, right
13:17:57 <mnasiadka> deprecate means we're planning to drop, don't use it ;)
13:19:21 <jangutter> ok, I'll see what I can do with etcd-zun, can't promise too many hours.
13:20:59 <jangutter> will report back if it's feasible (migration path might be "fun")
13:22:31 <mnasiadka> well, seems like too much work
13:23:20 <jangutter> oof, yeah, bootstrap involvement: docker on the host needs access to it.
13:23:55 <mnasiadka> oh boy
13:24:18 <mnasiadka> so what's the alternative? do not update etcd or drop zun?
13:24:26 <mnasiadka> Anybody here that cares for Zun?
13:24:53 <jangutter> or pin your etcd container to a previous version of kolla?
13:25:07 <frickler> I'd favor updating etcd. maybe one can pin etcd for zun deployments?
13:25:20 <frickler> just add a reno for that, right
13:25:28 <SvenKieske> well not upgrading is not really an alternative imho
13:25:56 <SvenKieske> if we could pin for zun, that'd be nice, but I assume it's not that easy? would need a separate deployment somehow?
13:26:02 <SvenKieske> also zun relies on old docker
13:26:40 <mnasiadka> well, so maybe still a separate etcd container image for zun, and if you use zun - it's the one you get for deployment?
13:26:55 <mnasiadka> but then for such people - the upgrade will happen later
13:27:01 <mnasiadka> and they can't skip versions
13:27:02 <frickler> so ... drop zun now, restore when it gets fixed?
13:27:04 <mnasiadka> oh holy crap
13:27:09 <jangutter> that sounds like a plan. etcd and etcd-3.3
13:27:11 <mnasiadka> yes, that's what I vote for
13:27:16 <mnasiadka> drop zun
13:27:23 <frickler> and people really needing zun will need to stay on 2023.1 until then
13:27:23 <mmalchuk> +1
13:27:53 <jangutter> any workaround is just going to postpone the inevitable, if not make it worse :-(
13:27:55 <mnasiadka> yeah, drop now, if they backport support for new docker into 2023.1 - we can revert
13:28:22 <SvenKieske> should we maybe announce that separately via ML?
13:28:43 <SvenKieske> maybe someone who cares is encouraged to fix zun earlier then ;)
13:28:50 <mnasiadka> jangutter: can you please send a mail to openstack-discuss that we are going to drop zun support for that reason, and the plan is to revert the drop once Zun supports new Docker version - because we can't be running this etcd version forever? :)
13:29:13 <mnasiadka> let's wait a week if any massacre happens and start dropping
13:29:17 <SvenKieske> +1
13:29:31 <jangutter> will do, adding that if zun gains support it will be considered for the stable 2023.1 branch?
13:30:22 <mnasiadka> yes, something like that - we should probably drop it in kolla-ansible in a way that the Ansible inventory groups stay there, so it's easy to revert (without breaking anything)
13:30:41 <mnasiadka> and if they don't shape up - we drop the remnants in C
13:30:43 <jangutter> :thumbsup:
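[editor's note: for operators who still need zun on the dropped branch, the pin jangutter floated could be sketched in globals.yml like this — the tag value is illustrative, and `etcd_tag` assumes kolla-ansible's usual per-service `<service>_tag` override:]

```yaml
# /etc/kolla/globals.yml: keep only the etcd image on the last release whose
# etcd still matches zun's requirements; everything else follows the current tag.
etcd_tag: "2023.1"
```

[as noted above, this only postpones the problem — such deployments can't skip versions on upgrade.]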
13:31:34 <mnasiadka> ok, let's hope case solved for now
13:31:53 <mnasiadka> Tap as a Service installation and configuration (jsuazo)
13:32:01 <mnasiadka> Did we get to any conclusion last time? :)
13:32:57 <jsuazo> I have a clearer picture as to why we are touching the upper constraints, which I addressed in the kolla proposal
13:33:27 <mnasiadka> Yeah, I see that
13:33:42 <mnasiadka> Question is if we should be installing it from a tarball, or honor the u-c
13:34:03 <jsuazo> TL;DR the pinned version is not working on X and above (tested versions), we tested the latest release, 11.0.0, which seems to work fine
13:34:28 <mmalchuk> afaik we need to bump the X version in u-c
13:34:53 <mnasiadka> release 11.0.0 is in master u-c
13:35:03 <jsuazo> We managed to add the plugin as a neutron addition from git, the branch of the source could be changed to the correct one, currently we are overriding the source to master on deployments
13:35:09 <mnasiadka> we are not considering to add that to any of the stable branches (no backports)
13:35:19 <mnasiadka> (it's a feature)
13:35:26 <opendevreview> Merged openstack/kolla-ansible stable/zed: CI: add q35 hardware machine type to tests  https://review.opendev.org/c/openstack/kolla-ansible/+/894582
13:35:28 <opendevreview> Merged openstack/kolla-ansible stable/2023.1: CI: add q35 hardware machine type to tests  https://review.opendev.org/c/openstack/kolla-ansible/+/894581
13:35:33 <opendevreview> Merged openstack/kolla-ansible stable/yoga: CI: add q35 hardware machine type to tests  https://review.opendev.org/c/openstack/kolla-ansible/+/894583
13:36:11 <mmalchuk> so honor u-c should be fine
13:37:26 <mnasiadka> frickler: any controversial opinions? ;-) (apart from the ones about who needs TaaS or whether that project is alive)
13:37:46 <mnasiadka> up until now removing packages from u-c was rather exception, not a way of working
13:38:34 <jsuazo> mnasiadka: Should I propose a change to the u-c in X then? (and remove the u-c editing from the proposal)
13:38:45 <mnasiadka> in Xena?
13:39:00 <mmalchuk> jsuazo no since it is feature
13:39:04 <mnasiadka> (changing u-c in stable branches is not something is going to pass from my perspective)
13:39:08 <mmalchuk> will not backport to xena
13:39:36 <jsuazo> ok, got it, will update the proposal then
13:39:55 <mnasiadka> you can backport the taas feature in your downstream branches and override the u-c, but I have a feeling taas is there for some reason
13:40:29 <mnasiadka> unless requirements team is happy with removing tap-as-a-service from u-c, then we can rework the build to use tarballs
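[editor's note: a minimal sketch of what "honoring the u-c" means for a source build — the file path and pinned version here are illustrative stand-ins; the real upper-constraints.txt is published by openstack/requirements:]

```shell
set -eu

# Stand-in for the upper-constraints file published by openstack/requirements;
# every package there is pinned with '==='.
cat > /tmp/upper-constraints.txt <<'EOF'
tap-as-a-service===11.0.0
EOF

# A source build honoring the pin would install roughly like:
#   pip install tap-as-a-service -c /tmp/upper-constraints.txt
# Show the pin that would be applied:
grep '^tap-as-a-service' /tmp/upper-constraints.txt
```

[overriding the source to a git branch, as jsuazo's team does, bypasses this pin — which is why editing u-c in stable branches was rejected above.]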
13:40:49 <SvenKieske> td-agent is meh, the repo installation changed to a good old "curl $internet | bash" -> voila, you get a new repo under /etc/apt/sources.list.d/ :(
13:42:01 <jsuazo> We actually had a tarball installation initially, but my team didn't like it that much :(
13:42:23 <mnasiadka> SvenKieske: oh boy, I'll try to have a look in mythical free time
13:42:25 <SvenKieske> and everything is now called "fluentd*" instead of "td-agent*" at least that can be mass changed I guess
13:42:40 <SvenKieske> https://docs.fluentd.org/installation/install-by-deb
13:42:50 <SvenKieske> https://www.fluentd.org/blog/upgrade-td-agent-v4-to-v5
13:42:59 <mnasiadka> mhm
13:43:07 <mnasiadka> let's discuss that next week when I'll know more about it :)
13:43:22 <mnasiadka> S3 backend support for Glance and Cinder-Backup (jsuazo)
13:43:24 <SvenKieske> I guess it's doable, will try to extract URLs for gpg keys et al. from that shell script :)
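[editor's note: SvenKieske's plan above — extracting repo and GPG-key URLs from the vendor install script instead of piping it to bash — could be sketched like this; the sample script body is a stand-in, not the real fluent-package installer:]

```shell
set -eu

# Stand-in for the downloaded vendor script; the real one comes from the
# fluentd installation docs linked above.
cat > /tmp/install-sample.sh <<'EOF'
curl -fsSL https://example.com/fluent-package.gpg | gpg --dearmor
echo "deb https://example.com/apt jammy contrib" > /etc/apt/sources.list.d/fluent.list
EOF

# List every URL the script touches, so keys and repos can be pinned
# explicitly in the Dockerfiles instead of running the script blindly.
grep -Eho 'https://[^" ]+' /tmp/install-sample.sh | sort -u
```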
13:43:41 <mnasiadka> jsuazo: I think I commented on this patch today, do you need any help?
13:44:01 <jsuazo> i would appreciate if you gave the kolla-ansible TaaS proposal a look
13:44:02 <mmalchuk> me too as always add some comments
13:44:24 <mnasiadka> jsuazo: I'll have a look on the k-a side as well this week
13:44:38 <SvenKieske> have a link? I know I saw it somewhere but it got lost in the torrent of other tabs :D
13:44:42 <jsuazo> mnasiadka No help needed, I'm keeping an eye on comments and trying to address them as I see them
13:45:15 <mnasiadka> kolla-ansible: https://review.opendev.org/c/openstack/kolla-ansible/+/885417
13:45:17 <mnasiadka> SvenKieske: ^^
13:45:22 <mnasiadka> ok then
13:45:35 <mnasiadka> #topic Open discussion
13:45:39 <mnasiadka> anybody anything?
13:45:52 <mmalchuk> orphans-backports
13:45:57 <mmalchuk> https://review.opendev.org/q/I3ca4e10508c26b752412789502ceb917ecb4dbeb
13:46:01 <mmalchuk> https://review.opendev.org/q/I0401bfc03cd31d72c5a2ae0a111889d5c29a8aa2
13:46:05 <mmalchuk> https://review.opendev.org/q/I169e51c0f5c691518ada1837990b5bdd2a3d1481
13:46:10 <mmalchuk> https://review.opendev.org/q/Ief8dca4e27197c9576e215cbd960da75f6fdc20c
13:46:16 <mmalchuk> some merged, some not
13:46:42 <mmalchuk> and kayobe reviews
13:46:45 <mmalchuk> https://review.opendev.org/c/openstack/kayobe/+/861397
13:46:49 <mmalchuk> https://review.opendev.org/c/openstack/kayobe/+/879554
13:46:59 <mmalchuk> thats all from my side
13:47:42 <kevko> guys
13:47:42 <kevko> 1. ovn/ovs redeployed ..not working :( ...
13:47:42 <kevko> 2. Can anyone check podman kolla patch ? locally working ..zuul is failing while building SOME images
13:48:01 <mnasiadka> I should have some more time next week
13:48:08 <mnasiadka> I can have a look around podman kolla patch
13:48:34 <SvenKieske> that would be great, I'm also out of ideas there, and I looked at the logs.. (maybe the wrong way)
13:49:01 <kevko> mnasiadka: there is no reason to not work ..but it's not in zuul ...locally working like a charm
13:50:27 <frickler> if all other debugging fails, we can hold a node and check what is going on there
13:50:37 <mnasiadka> kevko: well, basically it fails on upgrade jobs, so it's checking out previous release code - but what do I know :)
13:51:04 <kevko> but it is failing on kolla build
13:51:12 <kevko> so it is not related to upgrade job
13:51:31 <kevko> because upgrade job is just deploy -> build images -> upgrade
13:51:39 <kevko> and it is failing on build images
13:51:46 <kevko> everything is built OK ..but not glance, horizon, neutron
13:52:38 <mnasiadka> I saw pip is failing or something similar
13:52:51 <mnasiadka> but it's working ok on all other jobs, just not upgrade :)
13:53:34 <mnasiadka> anyway - I unfortunately need to run - feel free to resolve that issue while I'm gone :)
13:53:38 <mnasiadka> Thank you all for coming
13:53:40 <mnasiadka> #endmeet
13:53:43 <mnasiadka> #endmeeting