Tuesday, 2024-02-20

08:31 <opendevreview> Elod Illes proposed openstack/openstack-manuals master: [www] Set Train series state as End of Life  https://review.opendev.org/c/openstack/openstack-manuals/+/909509
12:59 <opendevreview> Elod Illes proposed openstack/openstack-manuals master: [www] Set Train series state as End of Life  https://review.opendev.org/c/openstack/openstack-manuals/+/909509
13:00 <opendevreview> Elod Illes proposed openstack/openstack-manuals master: [www] Set Train series state as End of Life  https://review.opendev.org/c/openstack/openstack-manuals/+/909509
13:02 <elodilles> frickler: i've updated the patch ^^^
13:19 <opendevreview> Merged openstack/openstack-manuals master: [www] Set Stein series state as End of Life  https://review.opendev.org/c/openstack/openstack-manuals/+/904720
13:36 <opendevreview> Riccardo Pittau proposed openstack/election master: Adding Riccardo Pittau candidacy for Ironic PTL  https://review.opendev.org/c/openstack/election/+/909534
13:59 <opendevreview> Takashi Kajinami proposed openstack/governance master: Retire PowerVMStacker SIG  https://review.opendev.org/c/openstack/governance/+/909540
14:16 <opendevreview> Merged openstack/openstack-manuals master: [www] Set Train series state as End of Life  https://review.opendev.org/c/openstack/openstack-manuals/+/909509
14:16 <opendevreview> Takashi Kajinami proposed openstack/governance master: Retire PowerVMStacker SIG  https://review.opendev.org/c/openstack/governance/+/909540
15:09 <fungi> tc-members: ^ not sure what the actual intended implementation is when repos get retired twice (once as part of a project team when moving to a sig, then when the sig is retired)
15:09 <fungi> but also, it reminds me that we probably haven't done a good job of making sure legacy.yaml is updated when deliverables or entire projects get retired, so could be worth a quick audit to make sure we're not missing any
15:11 <fungi> unrelated, rosmaita's recent messaging around unmaintained-core has been a resounding success! https://review.opendev.org/c/openstack/project-config/+/908911 "Neutron team decided not to create now this new group. Anyone is able to join the um general group if needed to merge Neutron related patches."
15:12 <rosmaita> \o/
15:21 <fungi> oh, also, for anyone interested in discussion of ai/machine-generated code contributions and such, in case you're not following the public openinfra-board ml, there's a call coming up 15:00 utc thursday which i'm told is open for participation from anyone who wants to help:
15:21 <fungi> https://lists.openinfra.dev/archives/list/foundation-board@lists.openinfra.dev/message/JJCH2CRGQ5HFWQ46LVE4N6GG6Q7YMF5B/
15:35 <dansmith> fungi: double ugh
17:19 <opendevreview> Elod Illes proposed openstack/openstack-manuals master: [www] Set Ussuri series state as End of Life  https://review.opendev.org/c/openstack/openstack-manuals/+/909602
17:25 <opendevreview> Elod Illes proposed openstack/openstack-manuals master: [www] Set Yoga series state as Unmaintained  https://review.opendev.org/c/openstack/openstack-manuals/+/909603
17:49 <JayF> tc-members: meeting in ~10 minutes
17:53 <fungi> irc this week, not voice? i never can remember, ml announcement didn't say either way
17:54 <JayF> irc
17:54 <dansmith> day(date) < 7 && voice || irc
17:54 <fungi> thanks!
17:54 <JayF> sorry for omitting that, I was late getting the agenda out so I did it from memory instead of my template
17:54 * fungi engraves that formula in his eyeballs
17:57 <clarkb> fungi: that sounds worse than floaters
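
A minimal sketch of the rule of thumb dansmith quips above (voice for the first meeting of each month, IRC otherwise), written out as plain Python; the helper name is made up for illustration and this is not an actual TC tool:

    from datetime import date

    def meeting_format(meeting_date: date) -> str:
        # The first meeting of the month (day-of-month below 7 in the quip
        # above) is held on video; every other week the meeting is on IRC.
        return "voice" if meeting_date.day < 7 else "irc"

    # The meeting logged here: 2024-02-20 falls past day 7, so IRC.
    print(meeting_format(date(2024, 2, 20)))  # -> "irc"
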
18:00 <JayF> .startmeeting tc
18:00 <JayF> #startmeeting tc
18:00 <opendevmeet> Meeting started Tue Feb 20 18:00:44 2024 UTC and is due to finish in 60 minutes.  The chair is JayF. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00 <opendevmeet> The meeting name has been set to 'tc'
18:00 <JayF> Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
18:00 <JayF> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee.
18:01 <JayF> #topic Roll Call
18:01 <JayF> o/
18:01 <dansmith> o/
18:01 <gmann> o/
18:01 <spotz[m]> o/
18:01 <frickler> \0
18:01 <rosmaita> o/
18:01 <knikolla> O/
18:01 <slaweq> o/
18:01 <JayF> #info One indicated absence on the agenda: jamespage
18:01 <JayF> And everyone else is here!
18:02 <JayF> #topic Followup on Action Items
18:02 <JayF> rosmaita appears to have sent the email as requested last meeting, thanks for that
18:02 <rosmaita> ;)
18:02 <JayF> Any comments about that action item to send an email inviting people to global unmaintained group?
18:02 <JayF> moving on
18:02 <JayF> #topic Gate Health Check
18:02 <frickler> the promised followup from tonyb and elodilles is still pending
18:02 <JayF> ack
18:03 <JayF> #action JayF to reach out to Tony and Elod about follow-up to unmaintained group email
18:03 <JayF> I'll reach out to them
18:03 <JayF> Anything on the gate?
18:03 <dansmith> nova just merged a change to our lvm-backed job to bump swap to 8G. I know that's sort of controversial because "we'll just consume the 8G now", but
18:03 <dansmith> we've spent a LOT of time looking at OOM issues
18:03 <dansmith> I think we should consider just bumping the default on all jobs to 8G
18:04 <dansmith> we've also got the zswap bit up and available and I think we should consider testing that in more jobs as well
18:04 <clarkb> note that will make jobs run slower as we can no longer use fallocated sparse files for swap files
18:04 <JayF> I think Ironic is one of the most sensitive projects to I/O performance, so I should see if it impacts our jobs, as a bit of a canary.
18:04 <dansmith> speaking of, JayF I assume ironic is pretty tight on ram?
18:04 <JayF> (Ironic CI, I mean)
18:04 <fungi> the zuul update over the weekend dropped ansible 6 support. i haven't seen anyone mention being impacted (it's not our default ansible version)
18:04 <dansmith> JayF: see how zswap impacts you mean?
18:05 <JayF> more swap usage generally
18:05 <dansmith> okay
18:05 <slaweq> IIRC we tried that some time ago and yes, it helps with OOM, but indeed, as clarkb said, it makes many jobs slower and they time out more often
18:05 <JayF> I suspect zswap would help in all cases
18:05 <rosmaita> dansmith: which jobs are running zswap now?
18:05 <dansmith> rosmaita: I think just nova-next
18:05 <rosmaita> ok
18:05 <dansmith> slaweq: which zswap?
18:06 <dansmith> er, s/which/which,/
18:06 <slaweq> no, with swap bump to 8GB or something like that
18:06 <dansmith> ah, well, a number of our jobs are already running 8G, like the ceph one because there's no way it could run without it
18:06 <JayF> dansmith: https://zuul.opendev.org/t/openstack/build/fec0dc577f204ea09cf3ee37bec9183f is an example Ironic job if you know exactly what to look for w/r/t memory usage; it's a good question I can't quickly answer during meeting
18:07 <dansmith> zswap will not only make more swap available, but it also sort of throttles the IO, which is why I'm interested in ironic's experience with it
18:07 <JayF> zswap will also lessen the actual I/O hitting the disk
18:07 <gmann> I think monitoring nova-next first will be good to see how slow it will be. I am worried about doing it for slow test / multinode test jobs
18:07 <slaweq> with zswap it may be better indeed
18:07 <fungi> at the expense of cpu i guess?
18:07 <JayF> we could also experiment with zram, I suspect
18:07 <dansmith> gmann: doing which, 8G or zswap?
18:07 <JayF> fungi: yep, but I/O is the long pole in every Ironic job I've seen so far
18:07 <gmann> dansmith: 8 gb
18:08 <fungi> makes sense. i agree i/o bandwidth seems to be the scarcest resource in most jobs
18:08 <dansmith> fungi: yes, but we're starting to think that IO bandwidth is the source of a lot of our weird failures that don't seem to make sense
18:08 <dansmith> gmann: okay
18:08 <clarkb> which is made worse by noisy neighbors swapping
18:08 <fungi> which in many cases are also our jobs ;)
18:08 <JayF> So if we can reduce actual disk I/O through things like zswap (I'm really curious about zram, too), it should have an improvement
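
For context on the zswap exchange above: zswap keeps a compressed pool of would-be swap pages in RAM, which is why it both stretches the effective swap space and reduces the I/O actually hitting the disk, at some CPU cost. A minimal sketch for checking whether zswap is active on a test node, assuming a Linux kernel that exposes the usual /sys/module/zswap/parameters knobs (this is not part of any devstack or Zuul tooling):

    from pathlib import Path

    ZSWAP_PARAMS = Path("/sys/module/zswap/parameters")

    def zswap_status() -> dict:
        """Read the zswap module parameters (enabled, compressor, max_pool_percent, ...)."""
        status = {}
        if not ZSWAP_PARAMS.is_dir():
            return status  # kernel built without zswap, or a different sysfs layout
        for param in sorted(ZSWAP_PARAMS.iterdir()):
            try:
                status[param.name] = param.read_text().strip()
            except OSError:
                status[param.name] = "<unreadable>"
        return status

    if __name__ == "__main__":
        for name, value in zswap_status().items():
            print(f"{name} = {value}")
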
18:09 <dansmith> clarkb: I'm totally open to other suggestions, but a lot of us have spent a lot of time debugging these failures lately,
18:09 <dansmith> including guest kernel crashes that seem to never manifest on anything locally, even with memory pressure applied
18:10 <dansmith> honestly, I'm pretty burned out on this stuff
18:10 <clarkb> dansmith: I haven't looked recently but every time I have in the past there is at least one openstack service that is using some unreasonable (per my opinion) amount of memory
18:10 <clarkb> the last one I saw doing that was privsep
18:10 <dansmith> and trying to care when it seems other people don't, or just want to say "no it's not that"
18:10 <clarkb> and ya I totally get the others not caring making it difficult to care yourself
18:11 <clarkb> I think it would be useful for openstack to actually do a deeper analysis of memory utilization (I think bloomberg built a tool for this?) and determine if openstack needs a diet
18:11 <clarkb> if not then proceed with adjusting the test env. But I strongly suspect that we can improve the software
18:11 <clarkb> memray is the bloomberg tool
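
For anyone picking up clarkb's suggestion, a minimal sketch of wrapping memray (the Bloomberg profiler he mentions) around a suspect code path; the workload function below is a made-up stand-in, and the exact invocation should be checked against the memray documentation:

    # pip install memray
    import memray

    def allocation_heavy_workload():
        """Hypothetical stand-in for a suspect code path (e.g. a privsep-heavy call)."""
        return [bytearray(1024) for _ in range(100_000)]

    if __name__ == "__main__":
        # Record every allocation made while the block runs.
        with memray.Tracker("memray-capture.bin"):
            allocation_heavy_workload()
        # Afterwards, outside this script:
        #   memray flamegraph memray-capture.bin
        # renders an HTML flame graph showing which call sites hold the memory.
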
18:11 <JayF> clarkb: I think the overriding question is: who is going to do this work? Where does it fall in priority amongst the dozens of high priority things we're working on as well?
18:12 <clarkb> JayF: I mean priority wise I would say issues affecting most projects in a continuous fashion should go right to the top
18:12 <JayF> The real resources issue isn't in CI; it's not having enough humans with the ability to dig deep and fix CI issues to tag-out those who have been doing that for a long time
18:12 <dansmith> JayF: right, I think the implication is that we haven't tried dieting, and I resent said implication :)
18:12 <clarkb> I don't know how to find volunteers
18:12 <dansmith> JayF: we're all trying to make changes we think will be the best bang for the buck
18:12 <JayF> yeah, exactly
18:12 <dansmith> because those of us that sell this software have a definite interest in it being efficient and high-quality
18:13 <dansmith> so anyone acting like we don't care about those things is a real turn-off
18:13 <JayF> dansmith: Can you take an action to email a summary of the memory pressure stuff to the list, and include a call for help in it? Especially if we have a single project (e.g. privsep?) that we can point a finger at
18:13 <dansmith> JayF: TBH, I don't want to do that because I feel like I'm the one that has _been_ doing that and I'm at 110% burnout level
18:14 <JayF> In the meantime I think it's exceedingly reasonable to adjust the swap size, look into environmental helps such as zswap
18:14 <dansmith> (not necessarily via email lately, it's been a year or so)
18:14 <JayF> Is there any TC member with intimate knowledge of the CI issues willing to send such an email?
18:14 <knikolla> I can take a stab at profiling memory usage. I'm not planning to run for reelection to the TC this round, and doing a deep technical dive on something sounds like something I'm itching for to attempt to cure burnout.
18:14 <clarkb> my main concern with going the swap route is that I'm fairly confident that the more we swap the worse overall reliability gets due to noisy neighbor phenomena. I agree that zswap is worth trying to see if we can mitigate some of that
18:15 <dansmith> and I'll add, because I'm feeling pretty defensive and annoyed right now and so I shouldn't be writing such emails.. sorry, and I hate to not volunteer, but.. I'm just not the right person to do that right this moment.
18:16 <rosmaita> we should have this discussion at the PTG and take time to gather some data ... i have seen a bunch of OOMs, but don't see an obvious culprit
18:16 <JayF> clarkb: the other side of that is also: if we can get the fire out in the short term it should free up folks to do deeper dives. There's a balance there and we don't always find it, I know, but it's clear folks are trying -- at least the folks engaged enough to be at a TC meeting.
18:16 <clarkb> I think we also have the problem of a 1000 cuts rather than one big issue we can address directly
18:16 <clarkb> which makes the coordination and volunteer problem that much more difficult
18:16 <fungi> i also get the concern with increasing available memory/swap. openstack is like a gas that expands to fill whatever container you put it in, so once there's more resources available on a node people will quickly merge changes that require it all, plus a bit more, and we're back to not having enough available resources for everything
18:16 <dansmith> clarkb: that's exactly why this is hard, but you also shouldn't assume that those of us working on this have not been looking for cuts to make
18:16 <clarkb> dansmith: I don't think I am
18:17 <dansmith> if you remember, I had some stats stuff to work on performance.json, but it's shocking how often the numbers from one (identical, supposedly) run to the next vary by a lot
18:17 <slaweq> fungi I think that what You wrote applies to any software, not only OpenStack :)
18:17 <dansmith> fungi: I also think it's worth noting that even non-AIO production deployments have double-digit memory requirements these days
18:18 <gmann> many of us have tried our best on those things, and the reality is that we hardly get enough support from the project side on debugging/solving the issues. very few projects worry about gate stability and most are just happy with rechecksssss
18:18 <clarkb> my suggestion is more that this should be a priority for all of openstack and project teams should look into it. More so than that the few who have been poking at it are not doing anything
18:18 <JayF> In lieu of other volunteers to send the email, I'll send it and ask for help.
18:19 <JayF> #action JayF to email ML with a summary of gate ram issues and do a general call for assistance
18:19 <slaweq> gmann we should introduce some quota of rechecks for teams :)
18:19 <gmann> I remember we have sent a lot of emails on those. a few of those were regular emails with explicit [gate stability] etc but I did not see much outcome of that
18:19 <clarkb> dansmith: related, we've discovered that the memory leaks affect our inmotion/openmetal cloud
18:19 <gmann> slaweq: ++ :)
18:20 <JayF> gmann: I know, and I have very minor faith it will do something, but if I'm able to make a good case while also talking to people privately, like I did for Eventlet, maybe I can shake some new resources free
18:20 <clarkb> we had to remove nova compute services from at least one node in order to ensure things stop trying to schedule there because there is no memory
18:20 <gmann> sure, that will be great if we get help
18:20 <clarkb> and we have like 128GB of memory or something like that
18:21 <slaweq> I can try to find some time and add to my "rechecks" script a way to get the reasons for rechecks and then do some analysis of the reasons - maybe there will be some pattern there
18:21 <slaweq> but I can't say it will be for next week
18:22 <slaweq> I will probably need some time to finish some internal stuff and find cycles for that there
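
A rough sketch of the kind of analysis slaweq describes, splitting bare "recheck" comments from rechecks that state a reason; the sample comments are invented, and fetching real ones from Gerrit (as the actual "rechecks" script does) is out of scope here:

    import re
    from collections import Counter

    # Hypothetical sample of review comments pulled from Gerrit.
    comments = [
        "recheck",
        "recheck timeout in tempest-full-py3",
        "recheck OOM killed mysqld",
        "recheck",
        "recheck volume detach race",
    ]

    def recheck_reason(comment: str) -> str:
        """Return the stated reason for a recheck, or 'bare recheck' if none was given."""
        match = re.match(r"\s*recheck\b\s*(.*)", comment, flags=re.IGNORECASE)
        if match is None:
            return "not a recheck"
        reason = match.group(1).strip()
        return reason if reason else "bare recheck"

    for reason, count in Counter(recheck_reason(c) for c in comments).most_common():
        print(f"{count:3d}  {reason}")
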
18:22 <JayF> I'm going to quickly summarize the discussion here to make sure it's all been marked down:
18:22 <JayF> knikolla offered to do some deep dive memory profiling. Perhaps he'd be a good person for clarkb/infra people to reach out to if/when they find another memory-leaking monster.
18:22 <JayF> slaweq offered to aggregate data on recheck comments and look for patterns
18:23 <JayF> JayF (I) will be sending an email/business case to the list about the help we need getting OpenStack to run in less ram
18:23 <JayF> and meanwhile, nova jobs are going to bump to 8G swap throughout, and begin testing zswap
18:23 <JayF> Did I miss or misrepresent anything in terms of concrete actions?
18:23 <dansmith> that last one is not quite accurate
18:23 <JayF> What's a more accurate way to restate it?
18:23 <dansmith> we probably won't mess with the standard templates, but we might move more of our custom jobs to 8G
18:24 <dansmith> like, the ones we inherit from the standard template we'll leave alone
18:24 <JayF> ack; so nova jobs are going to move to 8G swap as-needed and begin testing zswap
18:24 <dansmith> but, if gmann is up for testing zswap on some standard configs, that'd be another good action
18:24 <JayF> I would be very curious to see how that looks in an Ironic job, if you all get to the point where it's easy to enable/depends-on a change I can vet it from that perspective
18:24 <gmann> sure, we do have heavy jobs in periodic or in tempest gate and we can try that
18:25 <dansmith> JayF: it's available, I'll link you later
18:25 <JayF> ack; thanks
18:25 <JayF> I'm going to move on for now, it sounds like we have some actions to try, one of which will possibly attempt to start a larger effort.
18:25 <JayF> Is there anything additional on the gate health topic?
18:26 <JayF> #topic Implementation of Unmaintained branch statuses
18:26 <JayF> Is there an update on the effort to unmaintain things? :DD
18:27 <rosmaita> don't think so ... fungi mentioned that last week's email about the global group helped
18:27 <frickler> yoga is quite far along, doing another round of branch deletions right now
18:27 <JayF> ack; and I'll reach out to Elod/Tony about them refreshing that
18:27 <knikolla> awesome
18:27 <JayF> frickler: ack, thanks for that
18:28 <JayF> Is there anything the TC needs to address or assist with in this transition that's not already assigned?
18:28 <frickler> some PTLs did not respond at all
18:29 <frickler> we did some overrides for the train + ussuri eols already, may need to do that for yoga, too
18:29 <JayF> if no PTLs respond, that means they get the default behavior: that project is no longer responsible and pushes to global-unmaintained-core (?) group, right?
18:30 <frickler> well currently the automation requires either a ptl response for release patches or an override from the release team
18:30 <gmann> we can go with the policy, keep it open for a month or so then EOL
18:31 <frickler> to eol someone would need to create another patch
18:31 <JayF> gmann: AIUI, we already have volunteers to unmaintain these projects -- someone volunteered at a top level to maintain everything down from Victoria
18:31 <JayF> frickler: ack, do you want me as a TC rep to vote as PTL in those cases? Or is there a technical change we need to make?
18:32 <frickler> JayF: let me discuss this with the release team and get back to you
18:32 <knikolla> thanks for turning unmaintain into a verb JayF
18:32 <JayF> I've been doing it in private messages/irl conversations for months; I'm surprised this is the first leak to an official place :D
18:33 <fungi> i had hoped that the phrasing of the unmaintained branch policy made it clear that the tc didn't have to adjudicate individual project branch choices
18:33 <gmann> JayF: you mean the global group? or specific volunteer for specific projects?
18:34 <JayF> gmann: I recall an email to the mailing list, some group volunteered to maintain the older branches back to victoria, I'm having trouble finding the citation now, a sec
18:34 <frickler> well the gerrit group currently is elodilles + tonyb, I don't remember anyone else
18:35 <rosmaita> elodilles volunteered to maintain back to victoria
18:35 <gmann> yeah, I also do not remember anyone else volunteering / being interested in keeping them alive even with no response from PTL/team
18:36 <JayF> rosmaita: ack; I'll try to find that email specifically
18:36 <gmann> rosmaita: ok, even though there is no response/confirmation from the PTL on whether anyone needs those or not
18:36 <frickler> fungi: this is about moving from stable to unmaintained, so not quite yet covered by unmaintained policy. or only halfway maybe?
18:36 <frickler> might need to be specified clearer and communicated with the release team, too?
18:36 <rosmaita> JayF: not sure there was an email, may be in the openstack-tc or openstack-release logs
18:37 <gmann> anyways, I think actual cases will be known when we go into the explicit OPT-IN time
18:37 <JayF> #action JayF Find something written about who volunteered to keep things in un-maintained status back to Victoria
18:37 <gmann> that time we will see more branches filter out and move to EOL
18:37 <JayF> I'll do it offline, and reach out to elod if it was them (I think it was)
18:39 <JayF> frickler: ack; I think you're right we may have a gap
18:39 <JayF> https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html doesn't mention the normal supported->unmaintained flow
18:40 <JayF> oh, wait, here it is
18:40 <JayF> > By default, only the latest eligible Unmaintained branch is kept. When a new branch is eligible, the Unmaintained branch liaison must opt-in to keep all previous branches active.
18:40 <JayF> So every project gets one unmaintained branch for free, keeping it alive after the 1 year is what has to be opted into
18:41 <gmann> and i think the latest unmaintained branch also needs a +1 from the PTL/liaison, otherwise it will go to EOL after a month or so
18:41 <gmann> we have that also in policy/doc somewhere
18:41 <JayF> gmann: that is opposite of the policy
18:41 <JayF> > The PTL or Unmaintained branch liaison are allowed to delete an Unmaintained branch early, before its scheduled branch deletion.
18:41 <JayF> so you can opt-into getting an early delete
18:41 <JayF> nothing in the policy, as written, permits us to close it early without the PTL or Liaison opt-in
18:41 <gmann> yeah, I am not talking about deletion. I mean those will not always be opted in automatically
18:42 <gmann> let me find where we wrote those case
18:43 <gmann> 'The patch to EOL the Unmaintained branch will be merged no earlier than one month after its proposal.'
18:43 <gmann> ah this one it was, to keep those open for a month
18:43 <gmann> https://docs.openstack.org/project-team-guide/stable-branches.html#unmaintained
18:44 <JayF> that reflects my interpretation of the policy ++
18:44 <gmann> so we can close them after one month and no response
18:44 <JayF> only if it's not the *latest* UM branch though
18:44 <JayF> from the top of that:
18:45 <JayF> > By default, only the latest eligible Unmaintained branch is kept open. To prevent an Unmaintained branch from automatically transitioning to End of Life once a newer eligible branch enters the status, the Unmaintained branch liaison must manually opt-in as described below for each branch.
18:45 <JayF> So I think that's reasonably clear. Is there any question or dispute?
18:46 <gmann> one question
18:47 <gmann> so in the current case, should we apply this policy only for yoga, the latest unmaintained, or for xena down to victoria also? as we agreed to make victoria onwards unmaintained as the initial step
18:47 <rosmaita> I think what applies to pre-yoga is in the resolution, not the project team guide: https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html#transition
18:48 <spotz[m]> Yeah, I would think anything older than the release in question should have already been EOL or be included in this
18:48 <gmann> so all 4 v->y will be kept until they are explicitly proposed as EOL
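
The rule being pieced together from the resolution and the project-team guide above amounts to: the latest eligible Unmaintained branch stays open by default, older Unmaintained branches go to EOL unless the liaison opts in, and an EOL patch merges no earlier than one month after it is proposed. A minimal sketch of that logic (hypothetical helper names, not anything the release tooling actually uses):

    from datetime import date, timedelta

    def unmaintained_disposition(is_latest_eligible: bool, liaison_opted_in: bool) -> str:
        # The latest eligible Unmaintained branch stays open by default; older
        # ones need an explicit liaison opt-in, otherwise they are proposed for
        # EOL. (The one-time transition of the pre-Yoga branches is covered by
        # the resolution's transition section rather than this steady-state rule.)
        if is_latest_eligible or liaison_opted_in:
            return "keep open"
        return "propose EOL"

    def earliest_eol_merge(proposed_on: date) -> date:
        # "The patch to EOL the Unmaintained branch will be merged no earlier
        # than one month after its proposal." (approximating a month as 31 days)
        return proposed_on + timedelta(days=31)

    print(unmaintained_disposition(True, False))   # e.g. yoga -> keep open
    print(unmaintained_disposition(False, False))  # e.g. xena with no opt-in -> propose EOL
    print(earliest_eol_merge(date(2024, 2, 20)))   # 2024-03-22
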
18:48 <frickler> also a note that the policy is nice, but afaict a lot of that still needs implementation / automation
18:49 <gmann> yeah, and that is a lot of work. now i have second thoughts and think we could have kept only stable/yoga moving to unmaintained and EOL'd all the previous ones
18:49 <JayF> Well, changing our course after announcing, updating documentation, and setting user expectations is also pretty expensive, too :/
18:50 <gmann> yeah, I agree not to change now but I feel we added a lot of work for the release team
18:50 <JayF> I'm going to move on, we have a lot of ground to cover and 10m left
18:50 <JayF> frickler: If you can document what work needs to be done (still), it'd be helpful, I know this is a form of offloading work to releases team and I apologize for that :/
18:50 <JayF> #topic Testing Runtime for 2024.2 release
18:51 <JayF> #link https://review.opendev.org/c/openstack/governance/+/908862
18:51 <gmann> I need to check other review comments/discussion in gerrit but we can discuss about py3.12 testing.
18:51 <JayF> This has extremely lively discussion in gerrit; I'd suggest we keep most of the debate in gerrit.
18:51 <gmann> I am not very confident about adding it as voting in the next cycle, considering there might be more breaking changes in py3.12 and we did not have any cycle of testing it as non-voting, so that we know the results before we make it voting.
18:51 <gmann> agree with frickler comment in gerrit on that
18:52 <frickler> yes, another case of "policy needs to be backed by code"
18:52 <JayF> Well, the eventlet blocker that caused this to be completely undoable in most repos is gone now
18:52 <JayF> are there further blockers to turning on that -nv testing?
18:53 <frickler> no distro with py3.12 as default
18:53 <gmann> we never tested it as -nv right? so we do not know what other blockers we have
18:53 <frickler> that at least blocks devstack testing afaict
18:53 <JayF> gmann: that's sorta what I was thinking, too
18:53 <gmann> blocker/how-much-work
18:54 <JayF> frickler: ack
18:54 <frickler> doing tox testing should be relatively easy
18:54 <fungi> tonyb was going to look into the stow support in the ensure-python role as an option for getting 3.12 testing on a distro where it's not part of their usual package set
18:54 <gmann> we usually add nv at least one cycle in advance before we make it voting
18:55 <gmann> I think the right path is 1. add -nv in 2024.2   2. based on results, discuss making it voting in 2025.1
18:55 <JayF> I'm going to move on from this topic and suggest all tc-members review 908862 intensively with "how are we going to do this?" in mind as well. We have other topics to hit and I do not want us to get too deep on a topic where most of the debate is already async.
18:56 <JayF> I'm skipping our usual TC Tracker topic for time.
18:56 <JayF> #topic Open Discussion and Reviews
18:56 <JayF> I wanted to raise https://review.opendev.org/c/openstack/governance/+/908880
18:56 <JayF> and the general question of Murano
18:56 <JayF> IMO, we made a mistake in not marking Murano inactive before C-2.
18:57 <JayF> Now the question is if the better path forward is to release it in a not-great state, or violate our promises to users/release team/etc that we determine what's in the release by C-2.
18:58 <JayF> frickler: I saw your -1 on 908880, I'm curious what you think the best path forward is?
18:58 <gmann> replied in gerrit on release team deadline
18:59 <rosmaita> i suspect from the release team point of view, it's ok to drop, but not add?
18:59 <JayF> I am strongly biased towards whatever action takes the least human effort so we can free up time to work on projects that are active.
18:59 <gmann> releasing broken source code is more dangerous, and marking them inactive without any new release is less so
18:59 <slaweq> From previous experience I can say that if we send an email to the community with info that we are planning not to include it in the release, it is more likely that new volunteers will step up to help with the project
18:59 <rosmaita> slaweq: ++
18:59 <slaweq> so I would say - send an email to warn about it, and do it at any time if the project doesn't meet the criteria to be included in the release
19:00 <gmann> we did that in the murano case and we found a volunteer; let's see how they progress on this
19:00 <slaweq> if nobody raises their hand, then maybe nobody really needs it
19:00 <frickler> if we do that, why do we have the C-2 cutoff point at all?
19:00 <gmann> but this is not just the murano case, it can be any project going inactive after m-2, so we should cover that in our process
19:00 <slaweq> if there is a volunteer I would keep it and give people a chance to fix stuff
19:01 <gmann> frickler: we do mark any inactive one as soon as we detect it before m-2, but here we detected it / it became inactive after m-2
19:01 <slaweq> frickler maybe we should remove that cutoff
19:01 <JayF> frickler: I honestly don't know. That's why I asked for a release team perspective. I suspect it was meant more to be a process of "we mark you inactive in $-1, you have until $-2 to fix it" and instead we are marking them late
19:01 <frickler> so I'd say mark inactive but still keep in the release
19:01 <gmann> our best effort will always be to check before m-2
19:01 <fungi> if a project isn't in good shape before milestone 2, then it should be marked inactive at that time. if it's in good shape through milestone 2 then releasing it in that state seems fairly reasonable
19:02 <gmann> release even with broken gate/code?
19:02 <JayF> We should finish this debate in gerrit, and we're over time.
19:02 <slaweq> frickler what if a project has e.g. broken gates and it happened after m-2? How would we then keep it in the release?
19:02 <JayF> It's a hard decision with downsides in every direction :(
19:02 <gmann> slaweq: exactly
19:02 <slaweq> IMHO we should write there something like each case will be discussed individually by the TC before making final decision
19:03 <slaweq> it's hard to describe all potential cases in the policy
19:03 <JayF> I think that'd be a good revision to 908880
19:03 <JayF> but still doesn't settle the specific case here
19:03 <JayF> but we need to let folks go to their other meetings
19:03 <JayF> #endmeeting
19:03 <opendevmeet> Meeting ended Tue Feb 20 19:03:24 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
19:03 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/tc/2024/tc.2024-02-20-18.00.html
19:03 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/tc/2024/tc.2024-02-20-18.00.txt
19:03 <opendevmeet> Log:            https://meetings.opendev.org/meetings/tc/2024/tc.2024-02-20-18.00.log.html
19:03 <JayF> Sorry, not my best show of topic:time management in that meeting
19:03 <spotz[m]> Thanks all
19:03 <slaweq> thanks all, and good night :)
19:03 <slaweq> o/
