18:01:26 <gouthamr> #startmeeting tc
18:01:26 <opendevmeet> Meeting started Tue Dec 10 18:01:26 2024 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:26 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:26 <opendevmeet> The meeting name has been set to 'tc'
18:02:06 <gouthamr> Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
18:02:08 <gouthamr> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
18:02:11 <gouthamr> #topic Roll Call
18:02:12 <cardoe> o/
18:02:17 <slaweq> o/
18:02:20 <frickler> \o
18:02:21 <gtema> o/
18:02:22 <noonedeadpunk> O/
18:02:26 <gmann> o/
18:03:52 <frickler> bauzas might still be doing kitchen design ;)
18:03:57 <gouthamr> courtesy-ping: bauzas, spotz[m] probably grabbing coffee
18:04:26 <spotz[m]> Thanks!
18:04:31 <gouthamr> thank you for joining, let's get started..
18:04:41 <opendevreview> Merged openstack/election master: Fix linters error on noble https://review.opendev.org/c/openstack/election/+/937419
18:04:52 <gouthamr> #topic Last Week's AIs
18:05:39 <gouthamr> in the gate health topic, we brought up the noble migration having affected horizon plugin CI jobs
18:06:05 <gouthamr> we didn't take an AI though.. but, was there anything noteworthy that changed the situation here?
18:06:34 <gmann> I am not aware of that bug but we have a separate topic for noble things, maybe we can discuss there.
18:06:41 <gouthamr> ack
18:07:23 <gouthamr> we got an early heads-up regarding CPU requirements for CentOS 10 that aren't available in many providers
18:07:28 <gouthamr> #link https://review.opendev.org/c/openstack/diskimage-builder/+/934045
18:07:49 <frickler> or rather not at all in any of our providers?
18:08:06 <gouthamr> oh, thought we had RAX-FLEX and OpenMetal
18:08:21 <gouthamr> but i see Karolina's post on that review: Available infrastructure has disabled AVX and AVX2 CPU flags (even if it's Haswell), which effectively disables building CentOS 10, which is compiled for x86_64-v3.
18:08:28 <fungi> we think they might be on in rackspace flex, and if they're not in openmetal we're probably able to set that ourselves through kolla
18:08:50 <gmann> time to have a new provider who needs CentOS
18:09:00 <fungi> but neither of those providers currently provide many test nodes for us
18:09:33 <gmann> do we know how many projects test CentOS as a voting job?
18:09:51 <slaweq> in Neutron we have some in the periodic queue only
18:10:04 <slaweq> but I think we had FIPS jobs using CentOS, right?
18:10:12 <gmann> last time when we discussed its stability, most projects moved it to non-voting or periodic
18:10:21 <slaweq> and there were some problems with running those on Ubuntu IIRC
18:10:28 <gmann> slaweq: I think yes, or they are running on ubuntu focla?
18:10:30 <gmann> focal
18:10:30 <fungi> hard to measure that empirically, but yes the fips compliance testing effort was driving some increase/resurgence at one point in recent history
18:10:38 <frickler> also, slightly related: https://www.phoronix.com/news/Torvalds-Mind-Fart-x86_64-Level
18:12:11 <gouthamr> breathe, repeat: "we suck at naming"
18:13:00 <clarkb> my perspective on it is that centos/rhel should be working with cloud providers to uplevel the available cpus
18:13:13 <clarkb> it really isn't openstack's responsibility to be the mediator here
18:13:24 <gouthamr> okay; i don't think we had any AIs here.. and it looks like the RH folks working on this are aware, but i don't see any call to attention from them.. we can revisit this issue if they need us to do anything
18:13:35 <gouthamr> clarkb: ack
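(Aside, not part of the log: Karolina's comment above is about CentOS Stream 10 being compiled for the x86-64-v3 microarchitecture level, whose features our current providers' guest CPUs do not advertise. Below is a minimal sketch of how a held test node could be checked for those features; the flag set is an assumption based on the published x86-64 psABI levels, and /proc/cpuinfo reports LZCNT as "abm".)

```python
"""Rough x86-64-v3 capability check for a Linux test node (illustrative only)."""

# Approximate x86-64-v3 feature set as the flags appear in /proc/cpuinfo.
V3_FLAGS = {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe", "abm", "xsave"}


def cpu_flags(path: str = "/proc/cpuinfo") -> set[str]:
    """Return the feature flags advertised by the first CPU entry."""
    with open(path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()


missing = V3_FLAGS - cpu_flags()
print("x86-64-v3 capable" if not missing else f"missing x86-64-v3 flags: {sorted(missing)}")
```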
18:14:22 <gouthamr> the last AI i was tracking was asking the ML for objections to EOL'ing the Victoria (V), Wallaby (W), and Xena (X) branches
18:14:32 <cardoe> I agree with clarkb here. You might have to end up disabling old Rackspace.
18:14:32 <clarkb> then if they decide, like openshift, to make it impossible to test their platform, we'll just stop testing their platform
18:15:23 <fungi> cardoe: old rackspace and ovh provide the bulk of our test resources today (and neither supports the newer instructions)
18:15:30 <gouthamr> i own that last AI, and haven't finished my ML post.. will catch up on it after this meeting
18:16:00 <frickler> just fyi, elodilles has claimed multiple times to want to keep those branches alive
18:16:04 <gouthamr> that's all the AIs i was tracking, was there anything else you were working on?
18:16:49 <gmann> I think all unmaintained branches except the latest one need to be opted in explicitly, right? otherwise by default they move to EOL
18:16:50 <gouthamr> frickler: ack, and he does a bulk of the job reviewing/maintaining them.. but, like you noted, a ton of projects have not merged even the .gitreview updates
18:17:00 <spotz[m]> After speaking some with thejulia on this, it seems that V3 has been around for a long time and folks just aren't implementing it
18:17:29 <gmann> is there any change in the unmaintained policy from what is written?
18:17:47 <frickler> gmann: yes, but what happens, like in this case, when someone says "I opt in" but they don't fulfill the written requirements?
18:17:52 <gouthamr> okay; let's switch to our real topics so we can have the discussion in the right place
18:18:03 <gouthamr> #topic Maintaining Unmaintained branches ¯\_(ツ)_/¯
18:18:15 * gouthamr continue please
18:18:21 <frickler> :)
18:18:44 <gmann> so if they do not fulfil the requirement, that means we should not approve the opt-in for them
18:18:57 <gouthamr> yes, we had a default/global opt-in at the beginning
18:19:15 <gmann> when anyone wants to opt in, we need to judge their maintainability and whether they fulfill the required things to stay as unmaintained
18:19:28 <frickler> so how do we do that? we don't have a process to handle this opt-in and the alternative EOL'ing
18:20:03 <fungi> a driver for the opn-in caretaking of unmaintained branches was to keep the list of them from growing unbounded, since it puts a strain on opendev to continue, e.g., running periodic jobs for an ever-increasing number of them, especially when most of those jobs have been perpetually failing for years and don't get looked at
18:20:09 <gmann> gouthamr: not at the beginning, but once a branch is in unmaintained and is not the latest unmaintained, then it goes to EOL by default and should be opted in explicitly to stay in the unmaintained state
18:20:16 <fungi> s/opn-in/opt-in/
18:20:32 <gmann> frickler: I think we documented the implementation somewhere
18:21:07 <gmann> unmaintained branches will be moved to EOL and anyone who wants to keep them in unmaintained needs to -1 there
18:21:15 <gouthamr> #link https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html#unmaintained-branches (unmaintained branches resolution)
18:21:21 <gmann> and then we check the requirements to keep it in unmaintained or not
18:21:21 <fungi> gmann: there was a call for volunteers that went out to the openstack-discuss ml a while after the clarifications to unmaintained caretaker groups in gerrit were merged
18:21:32 <gouthamr> and the stable branch policy here:
18:21:41 <gouthamr> #link https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained (Stable Maintenance)
18:21:47 <frickler> so I'm not aware of any tooling that implements what is written in the policy there
18:22:05 <fungi> but i don't think anyone ever got a clear process documented for how people express their desire to volunteer for those, and how the periodic re-opt-in is tracked for them
18:23:03 <fungi> and i've definitely not been seeing calls anywhere for the caretakers to re-opt-in for existing unmaintained branches twice yearly
18:23:17 <frickler> also, this will be a lot of review work for the release team if this is to happen individually for every repo
18:23:23 <gmann> it's there in the p-t-g
18:23:25 <gmann> "By default, only the latest eligible Unmaintained branch is kept open. To prevent an Unmaintained branch from automatically transitioning to End of Life once a newer eligible branch enters the status, the Unmaintained branch liaison must manually opt-in as described below for each branch."
18:23:31 <gmann> here
18:23:33 <gmann> #link https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained
18:23:38 <frickler> that's policy, not implementation
18:23:52 <gmann> "To opt-in to keep the Unmaintained branch open, the PTL or Unmaintained liaison must -1 the appropriate patch in the openstack/releases repo to EOL the branch."
18:23:57 <frickler> the policy is fine, but it needs to actually get applied
18:24:07 <gmann> frickler: yeah, I think we should implement that in this transition
18:24:45 <fungi> who proposes the changes to close the branches?
18:24:54 <frickler> currently nobody
18:25:06 <fungi> i suppose that's supposed to happen immediately after each new coordinated release
18:25:09 <gmann> I think it's the release team's job to move all V-Z branches to EOL, let volunteers -1 those, and we check the requirements
18:25:20 * gouthamr spies a change on release tooling
18:25:22 <gouthamr> #link https://review.opendev.org/c/openstack/releases/+/936637 (Update tooling to delete unmaintained branches)
18:25:47 <frickler> gouthamr: that's only to actually delete branches that were marked eol in the release data
18:26:00 <gmann> but I might be wrong if it's not the release team doing it or wanting to do it. that is not actually discussed previously or in policy
18:26:16 <gmann> that is an open thing, I think: who will do it
18:26:36 <cardoe> It's seemed a bit inconsistent recently
18:26:37 <frickler> release team doesn't have the capacity I'd think
18:26:50 <gouthamr> frickler: ah, ack - although the script you're adding will do the bulk abandons if we could re-use it in this context
18:27:00 <gmann> frickler: yeah i can understand
18:27:13 <frickler> I started to create some eol patches semi-manually for those repos that didn't merge their .gitreview updates
18:27:57 <frickler> that led to https://review.opendev.org/q/topic:%22eom-to-eol%22 where some repos were opted out of eol-ing, so quasi opted-in
18:28:11 <gmann> I will say just force retire/EOL them, as nobody wanted those since the .gitreview change is not merged
18:28:14 <gouthamr> #link https://review.opendev.org/c/openstack/releases/+/935373 (Transition victoria-eom branches to EOL)
18:28:24 <gouthamr> #link https://review.opendev.org/c/openstack/releases/+/935374 (Transition wallaby-eom branches to EOL)
18:28:37 <gouthamr> #link https://review.opendev.org/c/openstack/releases/+/935375 (Transition xena-eom branches to EOL)
18:28:44 <frickler> but those still don't fulfill the "CI in good shape" requirement
18:29:09 <gouthamr> ^ will stop there because, during the last meeting, we did have some buy-in to keep yoga/zed "unmaintained"
18:29:09 <frickler> and then there are more with zuul config errors, which I would like to tackle next
18:29:52 <gmann> the whole point of unmaintained was to shrink and filter out the *not maintained branches*, but if we want to keep unlimited unmaintained branches then we are stuck in the same situation we were in with the EM model
18:30:37 <frickler> so how would we do this? have the TC do "executive overrides" against the -1 on the eol release patches?
18:30:38 <gmann> that is why we should default to EOL and let volunteers come and tell us if they need it and want to maintain it
18:30:53 <slaweq> gmann++
18:31:25 <gmann> I think yes, we ask the volunteer who is -1 to tell us the plan: 1. gate green or not 2. testing requirement 3. maintainers list etc
18:31:25 <frickler> who wants to implement the "default to EOL" tooling? otherwise this isn't happening
18:32:15 <gmann> yeah, so the issue here is we do not have bandwidth to implement the policy, which sets us back to the original problem we faced during EM
18:32:30 <fungi> i guess one change to eol all of e.g. unmaintained/xena branches in every repository in the openstack namespace, and then if someone wants to volunteer to be the caretaker for unmaintained/xena on specific repositories those have to be revised out of the change? i don't see opening 500 separate changes as a reasonable approach anyway
18:33:02 <frickler> doing 300 updates to that single change doesn't sound much more feasible
18:33:08 <fungi> but also repeatedly revising the bulk eol change adds manual effort for someone
18:33:14 <fungi> right, that
18:33:21 <gouthamr> i don't think it'll be 300
18:33:46 <frickler> even 10 is a lot of work, which is kind of what I already had with the subset above
18:33:47 <gouthamr> it'll be more like 2-3.. but we can timebox the reviews
18:34:14 <gouthamr> frickler: ack; i think we could get some more attention on the ML and tack a deadline onto the transition
18:34:20 <gmann> we have 1 month as written in the policy: "The patch to EOL the Unmaintained branch will be merged no earlier than one month after its proposal."
18:34:37 <gouthamr> nice
18:34:38 <gmann> so a volunteer has 1 month to -1 and fulfil the requirements
18:35:15 <frickler> o.k., so we update the patch only if CI is free of errors
18:35:25 <fungi> to be clear, while i appreciate the work frickler has put into identifying which branches have not merged automated changes and which ones have broken zuul configs, i think that's way more effort than should be going into it, and we should be starting with a blanket eol and, i dunno, maybe ask anyone who wants to volunteer to take care of some specific repos to revise that
18:35:27 <fungi> change themselves?
18:35:28 <gmann> maybe we should discuss with the release team whether they can implement it, but as frickler mentioned the team is already occupied, so we should find someone else?
18:35:41 <frickler> I guess I can propose a patch like that for victoria first and see how it goes
18:35:57 <gmann> fungi: ++
18:36:27 <fungi> put as much work on the volunteer caretakers as possible to minimize work we're creating for the release managers and/or tc
18:36:30 <gmann> yeah, do a single/blanket EOL, see who needs which repos to stay, and we can filter those out based on whether the requirements are fulfilled
18:36:49 <frickler> maybe a volunteer from the TC to review -1s on that eol patch and update it if appropriate could be found?
18:36:54 <gmann> and that needs to stay open for a month so volunteers have enough time to respond
18:37:38 <gouthamr> i can be that guinea pig.. i don't think this would be done often after we're done pruning branches until Zed
18:37:49 <gmann> frickler: good point. I can volunteer and bring up/validate the requirements to stay as unmaintained
18:37:50 <gouthamr> crap, missed the opportunity, until "Zed is ded"
18:37:57 <spotz[m]> hehe
18:38:01 <frickler> once per year after that
18:38:05 <fungi> if a volunteer caretaker doesn't have the time to push up a revision to the blanket eol change, then they certainly don't have the time to actually take care of those branches
18:38:10 <gmann> yeah, it's once a year
18:38:31 <gmann> once we get rid of <=zed it will be easier and smoother, as only SLURP releases go to unmaintained
18:39:26 <frickler> ok, sounds like we have a plan
18:39:26 <gmann> fungi: exactly. if anyone volunteers to keep a repo in unmaintained, then we just hand it over to them to take care of everything, including filtering it out from EOL and maintaining CI etc
18:41:53 <gouthamr> ack; just to recap, frickler will attempt a bulk EOL change for V, and gmann and gouthamr will assist with the rest of the branches until Zed.. each change will be open for 1 month, providing liaisons time to review
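(Aside, not part of the log: a rough sketch of how the volunteer validating opt-ins might list the Code-Review votes on a bulk EOL proposal in openstack/releases via the Gerrit REST API. The change number below reuses 935373 from the links above purely as an example; per the policy quoted earlier, a -1 from a PTL or Unmaintained liaison is the opt-in signal to keep a branch open, and the requirement check itself still happens manually.)

```python
import json

import requests

GERRIT = "https://review.opendev.org"
EOL_CHANGE = 935373  # example only: the victoria-eom -> EOL proposal linked above


def code_review_votes(change_number: int):
    """Return (reviewer, vote) pairs for Code-Review votes on a change."""
    resp = requests.get(f"{GERRIT}/changes/{change_number}/detail", timeout=30)
    resp.raise_for_status()
    # Gerrit prefixes its JSON responses with ")]}'" to prevent XSSI; drop that line.
    detail = json.loads(resp.text.split("\n", 1)[1])
    votes = detail.get("labels", {}).get("Code-Review", {}).get("all", [])
    return [(v.get("name", "unknown"), v["value"]) for v in votes if v.get("value")]


for reviewer, vote in code_review_votes(EOL_CHANGE):
    note = "  <- opt-in to keep the branch (requirements to be validated)" if vote < 0 else ""
    print(f"{vote:+d} {reviewer}{note}")
```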
18:42:58 <gmann> I think I volunteer to help on validating the -1s from volunteers on the EOL proposal change :)
18:43:01 <gouthamr> if done right, this should bring us to implement our policy correctly; i.e., keep only one unmaintained branch (Antelope)
18:43:47 <gmann> ++ yeah, that is the end goal: to have one unmaintained branch if no one volunteers to maintain more
18:44:50 <gouthamr> so are we settling on liaisons showing up at the time of the EOL proposal? we don't maintain a list anywhere?
18:45:18 <fungi> also remember that older branches have to go eol if newer ones do, so that needs to be taken into account for future iterations
18:45:25 <gmann> yeah
18:45:36 <fungi> (or older slurp when newer slurp goes eol, eventually)
18:46:05 <fungi> i suppose future blanket eol changes could do multiple branches in one change to facilitate that
18:46:15 <gouthamr> yes
18:48:07 <gouthamr> i don't think that's a problem - but if people will get confused by a patch that says: "EOL for Bobcat and Caracal, but you can only object to Caracal", i don't mind there being two patches
18:48:58 <fungi> not-slurp branches don't transition to unmaintained, by policy, so putting them in their own change is a good idea just for expediency, as there's no process for objecting to them
18:49:32 <fungi> but eol for unmaintained/2024.1 and unmaintained/2025.1 in the same change could make sense
18:49:39 <gmann> ++
18:50:39 <gouthamr> anything else to be agreed upon/debated for $topic?
18:50:55 <gouthamr> i think this was super helpful, even though it feels like we took a bulk of the meeting for this..
18:52:28 <gouthamr> i appreciate frickler raising attention to this, and all the thoughts expressed here; this sort of housekeeping is grunt work.. but it'll pay off to keep our contributors sane, as we've seen over the years
18:52:39 <gouthamr> #topic Status on migrate CI to Ubuntu Noble
18:52:50 <gouthamr> gmann: how goes it?
18:52:58 <gmann> I sent the latest status on the ML yesterday
18:53:00 <gmann> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/U5HIACZZVSGUHCPR47YS7KQWBCA6OKC5/
18:53:19 <gmann> we have 3 projects still failing and the doc job migration still pending
18:53:55 <gmann> out of those 3 projects, we have two project teams looking into the failures, but the 3rd one, skyline, is not responding
18:54:10 <gmann> #link https://review.opendev.org/c/openstack/skyline-apiserver/+/935600
18:54:12 <gmann> #link https://review.opendev.org/c/openstack/skyline-apiserver/+/935604/2
18:54:29 <gmann> not sure what to do on this; I sent it on the ML also and added core members in review
18:55:16 <gmann> other than that, it seems the monasca gate is broken and there is no response from the team
18:55:19 <gmann> #link https://etherpad.opendev.org/p/migrate-to-noble#L165
18:55:50 <gmann> monasca is in the inactive project list with extensions to this cycle, but I am not seeing any progress to make it active
18:55:52 <gmann> #link https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html#current-inactive-projects
18:56:59 <frickler> time for retirement, then? we're close to e-2, too
18:57:11 <gmann> a good thing in this migration is we also checked with all teams whether their gate is broken and whether maintainers are responsive, and all is good except the two projects I mentioned above
18:57:12 <gouthamr> what's worked with wu_wenxiang and hasan.acar in the past has been direct email... they don't pay attention to the ML, and neither do any of the contributors unfortunately
18:57:16 <gmann> frickler: I think yes
18:57:19 <gouthamr> +1
18:57:52 <gmann> gouthamr: on the horizon plugin failure, can you point me to the failure? I tested some plugins and they worked fine, but I have not tested all.
18:58:09 <gmann> horizon is also green on noble
18:58:10 <gmann> #link https://review.opendev.org/c/openstack/horizon/+/935413
18:58:14 <frickler> I think horizon merged some fix last week?
18:58:28 <gmann> django jobs are running on py312 also
18:58:39 <gmann> #link https://review.opendev.org/c/openstack/horizon/+/936801/4
18:59:13 <gmann> after these two it should be green, but feel free to add bugs/failures in the etherpad
18:59:15 <gouthamr> #link https://review.opendev.org/c/openstack/manila-ui/+/936972 (horizon-tox-python3-django42 failure on noble)
18:59:26 <gouthamr> this may be one, i'll look and confirm on the etherpad
18:59:27 <noonedeadpunk> i THINK THIS ONE SHOULD BE FIXED NOW
18:59:28 <gmann> gouthamr: recheck that, it is fixed now
18:59:31 <noonedeadpunk> sory
18:59:36 <gmann> yeah
18:59:40 <gouthamr> beautiful, hey don't shout at me, noonedeadpunk
18:59:46 <gmann> heh
18:59:50 <noonedeadpunk> I didn't mean to, sorry :)
18:59:59 <gouthamr> okay, we're at the top of the hour
19:00:02 <gmann> thanks noonedeadpunk for fixing and helping there
19:00:03 * noonedeadpunk multitasking
19:00:07 <gouthamr> thanks for the updates gmann
19:00:10 <slaweq> :)
19:00:16 <gouthamr> anything else to be added in the minutes here?
19:00:20 <gouthamr> #topic Open Discussion
19:00:59 * gouthamr waits the full minute
19:01:27 <gouthamr> one thing could be that two of our meetings fall on year-end holidays
19:02:00 <slaweq> I will not attend the meetings on the 24th and 31st of December for sure :)
19:02:02 <gouthamr> and, due to a personal life-change, i might need help running the next meeting (Dec 17th)
19:02:29 <gouthamr> we can catch up on these here, async..
19:02:34 <gouthamr> thank you all for attending today
19:02:44 <slaweq> thx, bye
19:02:46 <gmann> gouthamr: I can run it on the 17th if no other volunteer. let me know
19:03:02 <spotz[m]> Thanks! I'll be out for the rest of the year but will probably pop in
19:03:08 <gouthamr> gmann++ that's awesome, thank you very much!! :)
19:03:12 <gouthamr> spotz[m]: ack
19:03:15 <gouthamr> #endmeeting