slaweq | hi tc-members, recently xek pointed me to this article https://www.phoronix.com/news/Linux-Compliance-Requirements about "compliance requirements" for being a linux kernel maintainer. IIUC this file contains some kind of "core reviewers" list for parts of the kernel, so I wonder if we (as a community or maybe even the openinfra foundation) should take similar action with our core reviewers (maintainers). Or maybe we don't need anything, which would | 13:34 |
slaweq | be even better. I just wanted to raise this topic to see what others think about it | 13:34 |
TheJulia | Each project has authority delegated to implement their own practices, controls, and ultimately review processes and standards, so if the community wished to do something as such, the staff can surely review and advise, but it likely needs to be something each top level project approaches in its own context. | 13:36 |
slaweq | TheJulia by project do you mean e.g. OpenStack? Or like nova, neutron and other teams each doing it on their own? | 13:37 |
TheJulia | Top level project under the Foundation | 13:38 |
slaweq | TheJulia ok, so that could be some topic for the TC then :) | 13:38 |
TheJulia | Indeed | 13:38 |
slaweq | thx | 13:38 |
TheJulia | np | 13:38 |
TheJulia | That being said... The first sentence of the second paragraph is quite revealing. | 13:39 |
TheJulia | I can discuss with foundation staff | 13:41 |
TheJulia | I have a call with some of them in about 20 minutes | 13:42 |
slaweq | I think if the foundation had any guidelines for us, that would be great | 13:42 |
slaweq | thx TheJulia | 13:42 |
slaweq | I just want us to be on the safe side with this, and apparently there are some reasons for taking action there, so maybe we should also think about it | 13:43 |
TheJulia | Problem ultimately is, in our structure it might require a board resolution to explicitly assert it | 13:44 |
TheJulia | projects are still free to move independently | 13:44 |
frickler | a common approach / guidance in order to help avoid legal risks for any contributor (person or company) in any of the affected jurisdictions would be a good goal for the foundation IMHO | 13:51 |
TheJulia | We're largely looking towards the Open Regulatory Compliance working group output | 13:57 |
TheJulia | Ultimately OFAC enforcement is not about new legislation, but old regulatory controls which now seem to be getting pushed into Open Source collaboration. | 13:58 |
fungi | the foundation did get some guidance from counsel a few years ago about participation from companies on the usa sanctioned entities list, and their opinion was that "Open source software, collaboration on open source code, attending telephonic or in person meetings, participating in training and providing membership or sponsorship funds are all activities which are not subject to the EAR | 14:11 |
fungi | and therefore should have no impact on our communities." but maybe it's time to ask them for a fresh take | 14:11 |
fungi | https://lists.openinfra.dev/archives/list/foundation@lists.openinfra.dev/thread/PRTRFOCMJTQMRCNFMGZKLO6PJPBTEJOY/#PRTRFOCMJTQMRCNFMGZKLO6PJPBTEJOY | 14:11 |
fungi | where this topic has come up recently in other venues, most of the concerns seem to be around the nuance of handling embargoed reports of suspected security vulnerabilities, since that involves dealing with patches in private and so could be seen as a violation of the letter of the regulation (up to the point where said participation switches to public) | 14:13 |
fungi | but i am not a lawyer and this is not (my) legal advice | 14:13 |
TheJulia | So per foundation leadership, the foundation staff are aware of the Linux kernel changes, and are trying to find out more of the back story to better understand if there are changes in the guidance or the way rules are being enforced at the governmental level. | 14:16 |
fungi | yeah, nobody has come out and stated what the specific legal concerns from the linux kernel decision were, insofar as how having a maintainer employed by a sanctioned company might represent a violation of present usa regulations, but my guess is that it has to do with maintainers getting access to private information (public participation has been considered clearly exempt in the past at | 14:19 |
fungi | least) | 14:19 |
TheJulia | Taking off my chair hat, that aligns with my present understanding. That being said, I am, and I'm sure our legal counsel is, painfully aware that at times the government rules process in the US can cause changes to take place without everyone noticing. The why is the detail that's missing. | 14:39 |
fungi | yep | 14:44 |
TheJulia | I am also not a lawyer, nor do I play one on television. I've just worked with them for so many years of my career I tend to think like one. | 14:49 |
TheJulia | *like* being the keyword :) | 14:49 |
spotz[m] | It's hard to act or take a position without knowing what happened or what the concerns were in the first place | 14:49 |
fungi | yes, here's hoping that either 1. more specific details become available, or 2. openinfra counsel has heard additional scuttlebutt as to the underlying concern | 14:57 |
TheJulia | #2 would only come in the form of "hey, go read this regulation update" :) | 15:10 |
fungi | well, more that there are places where ip lawyers focused on open source communities and foundations discuss the ever-changing landscape of things like this, not in the specific context of their particular clients, so what i mean is there may be opinions that have shifted recently | 15:34 |
TheJulia | Indeed and agree completely. | 15:45 |
JayF | I want into the lawyer slack ;) | 16:14 |
JayF | mainly just so I can see "WHEREAS" in a status message | 16:15 |
fungi | the "places" i'm aware of are mostly scholarly law journal articles and related in-person conferences | 16:21 |
fungi | but there are probably also backchannels for the backchannels yes | 16:21 |
gouthamr | tc-members: a gentle reminder that we're meeting here in ~1 hour | 17:00 |
gouthamr | #startmeeting tc | 18:00 |
opendevmeet | Meeting started Tue Oct 29 18:00:47 2024 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot. | 18:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 18:00 |
opendevmeet | The meeting name has been set to 'tc' | 18:00 |
gouthamr | Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct | 18:01 |
gouthamr | Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee | 18:01 |
gouthamr | #topic Roll Call | 18:01 |
noonedeadpunk | o/ | 18:01 |
gmann | o/ | 18:01 |
bauzas | \o | 18:01 |
gtema | O/ | 18:01 |
frickler | \o | 18:01 |
slaweq | o/ | 18:02 |
spotz[m] | o/ | 18:02 |
gouthamr | courtesy-ping: cardoe | 18:02 |
gouthamr | let's get started.. | 18:04 |
gouthamr | #topic Past Weeks' AIs | 18:05 |
cardoe | sorry I'm on. | 18:05 |
gouthamr | hey there, cardoe | 18:05 |
gouthamr | Debug quota test failures in the requirements patch and continue work on the grenade-related issues (gmann): | 18:05 |
gouthamr | gmann: seemed like separate issues that you brought up here in the past weeks | 18:06 |
gmann | the grenade job is fixed now and some projects failing it also have fixes up | 18:06 |
gmann | the quota test failures are in the sdk job, not in requirements. | 18:06 |
gmann | I still did not get time to debug those, I will look into that this week | 18:06 |
gouthamr | #link https://review.opendev.org/q/topic:%22grenade-venv%22 (Grenade job changes) | 18:07 |
gouthamr | gmann: ty, any gerrit links we should track for that second item? | 18:07 |
gmann | yeah | 18:08 |
gmann | #link https://review.opendev.org/c/openstack/python-openstackclient/+/931858 | 18:08 |
gmann | this one ^^ | 18:08 |
gouthamr | thank you! we'll follow up with this next week | 18:08 |
gouthamr | next up, Lead the tracker related to translations (bauzas) | 18:08 |
bauzas | hmmm, sure | 18:09 |
gouthamr | ^ this was sorta actioned during the PTG week | 18:09 |
gouthamr | and probably a work item to track through the release | 18:09 |
bauzas | nothing really important to discuss as an AI | 18:09 |
gouthamr | bauzas: ack; worth tracking with the TC tracker? | 18:09 |
gouthamr | #link https://etherpad.opendev.org/p/tc-2025.1-tracker (Technical Committee activity tracker - 2025.1) | 18:09 |
bauzas | we started to discuss with ian and seongsoo about how to work on the AI translation | 18:09 |
bauzas | https://etherpad.opendev.org/p/oct2024-ptg-i18n#L47 | 18:10 |
frickler | maybe worth noting that translation updates from zanata are completely broken now | 18:10 |
bauzas | there was just a concern about legal issues | 18:11 |
frickler | since reqs dropped py3.8 pins and thus installing projects on bionic no longer works | 18:11 |
fungi | and the (abandoned) zanata cli tools need a jdk old enough that it was last available on bionic | 18:12 |
bauzas | we said that maybe just a top paragraph in the documentation would say "this is automatically translated" | 18:12 |
clarkb | I was thinking about that and maybe a container image for the zanata client would be an "easy" thing to solve that problem | 18:12 |
clarkb | then you can have java 8 + zanata client wherever you are | 18:12 |
cardoe | that would make sense clarkb | 18:12 |
fungi | though also sounds like a good bit of work when people could be spending time finishing the weblate transition instead | 18:13 |
clarkb | ya thats the other thing to keep in mind I guess. zanata is still very dead end and we should move away from it | 18:13 |
gouthamr | is this the job? https://zuul.opendev.org/t/openstack/builds?job_name=propose-translation-update | 18:13 |
bauzas | honestly I don't know | 18:14 |
frickler | gouthamr: yes | 18:14 |
bauzas | that's a different question | 18:14 |
bauzas | (I just discussed with ian the automatic translation of the service docs) | 18:14 |
gouthamr | it is a different question | 18:16 |
gouthamr | bauzas: can we deal with the legal question as a topic? | 18:16 |
bauzas | gouthamr: maybe we should ask the Foundation for that | 18:16 |
gouthamr | regarding the job failure, I think the i18n team needs some infra help - they mentioned that they lacked the zuul expertise to maintain these translation jobs.. if zanata doesn't work on anything other than bionic, i think we've hit the first schism | 18:16 |
fungi | in that case, it would be good to know what questions they have about automation | 18:18 |
gouthamr | "For the issue, I18n SIG is asking help to upgrade propose-translation-update job with ubuntu-bionic to newer version of Python" | 18:18 |
gouthamr | L44 of the etherpad bauzas linked | 18:18 |
frickler | so that would be the container solution that clarkb suggested. but I don't think the infra team has much time to help with that | 18:19 |
fungi | ah, yes, the counter-suggestion is to finish up the weblate migration instead | 18:19 |
bauzas | I wasn't there when they discussed this | 18:19 |
gouthamr | ^ my hope as well | 18:19 |
clarkb | and I don't think it needs much zuul expertise to switch the jobs | 18:19 |
clarkb | its copy paste and change out the tools inside the jobs | 18:19 |
fungi | since zanata is exactly the reason the jobs are breaking when moving to newer python/platform | 18:20 |
clarkb | I tried to stress this in the calls we had before that it really should be straightforward; someone just needs to decide they are doing it and spend a little time understanding the jobs enough to replace the tools in copies of them | 18:20 |
gouthamr | i think, we can live without translations (for hopefully a short time) until the weblate infra is fixed up, and there are new jobs to propose translation updates from weblate | 18:20 |
gmann | yeah, this is not just job migration | 18:20 |
fungi | don't necessarily even need new jobs, just replace the zanata cli commands with weblate cli commands | 18:21 |
gmann | weblate is also licensed; hope we are all good on the subscription payment this year too? Last time it was paid by the foundation, but not sure if that was for more than a year and needs to be renewed or not? | 18:22 |
fungi | i think it was set up to renew annually | 18:22 |
gmann | ok, thanks | 18:22 |
fungi | have the i18n folks indicated that weblate stopped working? | 18:23 |
bauzas | I don't know | 18:23 |
gouthamr | they discussed "Weblate with openinfraid authentication issue" - which they note is now resolved | 18:23 |
gmann | no, I have not heard that but just want to check if that part is ok or not | 18:23 |
gouthamr | don't want to spend the whole hour discussing this; i've alerted ianychoi and seongsoo to peek into this channel.. it's possible they could start a ML discussion as well to follow up.. | 18:24 |
gmann | ML is better due to TZ | 18:25 |
gouthamr | +1 | 18:25 |
bauzas | +1 | 18:25 |
gmann | last time we synced on all these issues on the ML, with brian from the TC leading the discussion | 18:25 |
gmann | that was productive and fast | 18:25 |
fungi | i follow the i18n ml as well as the i18n topic on openstack-discuss and haven't seen these issues brought up | 18:25 |
gouthamr | ++ bauzas volunteered or was voluntold to play the TC liaison role here, i cannot tell which, but it's very much appreciated | 18:26 |
gmann | bauzas: ++ thanks | 18:26 |
gouthamr | lets move on to the next AI: | 18:26 |
gouthamr | Leaderless projects, next steps | 18:26 |
gouthamr | we have a couple of pending changes on the governance repository that still need eyes | 18:27 |
gouthamr | #link https://review.opendev.org/c/openstack/governance/+/927962 (Adding Axel Vanzaghi as PTL for Mistral) | 18:27 |
gouthamr | #link https://review.opendev.org/c/openstack/governance/+/933018 (Add watcher DPL for Epoxy) | 18:27 |
gouthamr | thanks for adding your RC votes and comments, but if you haven't done so yet, please do | 18:28 |
gouthamr | i'd like to land both these changes this week since they have spent sufficient time on review | 18:28 |
gmann | ++, done | 18:29 |
gouthamr | these do complete our "leaderless" tracking post the 2025.1 election | 18:29 |
gouthamr | #link https://etherpad.opendev.org/p/2025.1-leaderless (2025.1 Leaderless projects) | 18:29 |
slaweq | I just added my RC +1 | 18:29 |
gouthamr | thank you | 18:30 |
gouthamr | alright that's all the AIs I was tracking | 18:30 |
gouthamr | did any of you have any other items that you were working on that were actions from past meetings? | 18:30 |
bauzas | I think we can merge those, we have quorum right? | 18:30 |
bauzas | (when looking at r-c label) | 18:31 |
bauzas | (r-*v*) | 18:31 |
gouthamr | bauzas: yes, but we wait until 7 days after the last change.. | 18:31 |
bauzas | ack thanks for explaining | 18:31 |
gouthamr | bauzas: as a minimum :) i've learned to wait longer in case something needs more eyes, as JayF gmann and others used to.. | 18:32 |
gouthamr | we discussed dropping past TC chairs from the tech-committee-chair group (sad panda emoji) | 18:32 |
gmann | I think bauzas should know about our magic script which tells us about the minimum (just minimum) requirements to merge the changes. | 18:33 |
gouthamr | soooo - the side effect is that i can tag the tc in changes that need attention :) | 18:33 |
* gouthamr please don't ignore these mass reviewer tags | 18:33 |
bauzas | gmann: I did read the TC docs but I forgot about when merging :) | 18:33 |
gmann | bauzas: https://github.com/openstack/governance/blob/master/tools/check_review_status.py | 18:33 |
gmann | cool | 18:33 |
gmann | ++ on tech-committee-chair cleanup | 18:34 |
gouthamr | sounds like no further AIs to catch up on.. | 18:34 |
gouthamr | lets move on | 18:34 |
bauzas | gmann: thanks for helping ;) | 18:34 |
gouthamr | #topic Goal Progress: Migration to noble | 18:34 |
gmann | not much update, but I started working on this, created the plan and sent it to the ML | 18:34 |
gmann | #link https://etherpad.opendev.org/p/migrate-to-noble | 18:34 |
gouthamr | i have a minor-ish concern regarding these changes | 18:35 |
gmann | #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/JOMDY26TCW7OX3NXRGOYQCIDXNNJ4E25/ | 18:35 |
gmann | sure | 18:35 |
gouthamr | i really don't like keeping a test job variant on Jammy | 18:35 |
gouthamr | gmann explained the reasoning here: | 18:35 |
gouthamr | #link https://review.opendev.org/c/openstack/manila-tempest-plugin/+/932956 | 18:35 |
gouthamr | but i believe that we'd like to keep some testing on Jammy for trunk/main branches because it would allow python3.9 integration testing to continue | 18:36 |
gmann | just to note, this is a requirement of this section in the testing runtime | 18:36 |
gmann | #link https://governance.openstack.org/tc/reference/runtimes/2025.1.html#additional-testing-for-a-smooth-upgrade | 18:36 |
bauzas | yeah we need to continue to support Jammy until next release at least | 18:37 |
gmann | yeah, at least till next release | 18:37 |
bauzas | because $slurp | 18:37 |
gouthamr | until? | 18:37 |
gmann | what I need to update is to run jammy job with python 3.9 | 18:37 |
bauzas | until when we change the PTI values for F, G or some other cycle :) | 18:38 |
spotz[m] | So upgrade 3.8 to 3.9 but stay on Jammy? | 18:38 |
gmann | gouthamr: in the next cycle, we can drop it. this will help to have a slurp 2024.1 -> 2025.1 migration | 18:38 |
gmann | gouthamr: I updated the nova change so we can run jammy on py3.9, so that we do not need to keep things around for py3.8 | 18:38 |
gouthamr | gmann: wouldn't this mean the py39/jammy job will continue to run on stable/2025.1 until we sunset that branch? | 18:38 |
clarkb | I wrote down the migration path for centos 9 to centos 10 in a mailing list response and I think it is similar for jammy to noble | 18:38 |
gmann | gouthamr: yes, even in stable/2025.1, but in 2025.2 we do not need to run it; we will keep only noble jobs | 18:39 |
gmann | clarkb: yes | 18:39 |
gmann | we did the same in the past when we moved from bionic to jammy | 18:39 |
bauzas | https://devguide.python.org/versions/ says that 3.9 will be EOL next year | 18:39 |
gmann | #link https://governance.openstack.org/tc/reference/runtimes/2023.1.html#additional-testing-for-smooth-upgrade | 18:39 |
gmann | bauzas: yeah, hopefully we will be good to move to py3.10 as min by then | 18:40 |
noonedeadpunk | isn't jammy 3.10 by default? | 18:40 |
bauzas | I just wanted to say we have ~ 1 cycle (ie. G as the last one) for 3.9 | 18:40 |
noonedeadpunk | and it's el9 that's shipped with 3.9 iirc | 18:41 |
bauzas | but if we can drop it for F, that's cool | 18:41 |
gouthamr | #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/FJO4GPM25C4UKHDBOQBEIG7YVWTOWL4H/#FJO4GPM25C4UKHDBOQBEIG7YVWTOWL4H (suggested upgrade path for centos/rhel/rocky/alma by clarkb) | 18:41 |
noonedeadpunk | have someone looked into devstack for el10 already? | 18:41 |
gmann | noonedeadpunk: ah right. it is python 3.10. I was wrong on py3.8 things. | 18:42 |
noonedeadpunk | as I can imagine modular libvirt could be slightly different | 18:42 |
gmann | so we can run jammy on either py3.10 or on py3.9, which is our min python version | 18:42 |
frickler | one can also update and use newer python on el9. kolla does this due to ansible reqs | 18:42 |
frickler | so not 100% necessary to wait for el10 | 18:43 |
noonedeadpunk | well, there's a difference between the ansible host and the rest | 18:43 |
bauzas | I don't know what the el9 plan post 3.9 EOL | 18:43 |
noonedeadpunk | not sure what specifically kolla does; in osa we use python 3.11 for EL only as the "deploy" host version | 18:44 |
frickler | I think we use it for services, too, but I need to doublecheck | 18:44 |
noonedeadpunk | the problem with a non-default version is that there are no python bindings for things like ceph, libvirt, selinux, etc | 18:44 |
noonedeadpunk | so kinda makes things tricky | 18:44 |
gouthamr | i think my initial question was well answered through this discussion, thanks everyone.. i do wish we highlighted our upgrade path somewhere | 18:45 |
frickler | hmm, then maybe I misunderstood | 18:45 |
gmann | gouthamr: is your concern about Jammy job resolved? This is implementation of what we have defined in 2025.1 testing runtime. | 18:45 |
clarkb | gouthamr: yes I called that out with the release team after sending that email I think | 18:45 |
gouthamr | would a "Reasoning" section under the tested run times help? | 18:45 |
clarkb | I also updated a diagram explaining slurp in release docs. If even we are confused about it then we're not documenting it properly and I suspect part of the issue is that text is not sufficient for communicating these ideas clearly | 18:45 |
clarkb | need more illustrations | 18:45 |
gmann | gouthamr: also I tried to describe it in this section https://governance.openstack.org/tc/reference/project-testing-interface.html#upgrade-testing | 18:46 |
gouthamr | #link https://releases.openstack.org/#releases-with-skip-level-upgrade-release-process-slurp (Release upgrade illustration) | 18:46 |
gmann | #link https://governance.openstack.org/tc/reference/project-testing-interface.html#upgrade-testing | 18:46 |
gmann | if anything missing there, feel free to update | 18:46 |
gmann | this subsection | 18:47 |
gmann | #link https://governance.openstack.org/tc/reference/project-testing-interface.html#extending-support-and-testing-for-release-with-the-newer-disto-version | 18:47 |
gouthamr | ++ thank you; i'll read it | 18:47 |
gmann | that is where we started building those docs when the SLURP thing came along and when we bump the distro version | 18:47 |
gmann | for a smooth upgrade, testing the old distro with one job for one more cycle is the migration path we can provide | 18:48 |
clarkb | as a side note the container based deployments actually care about this a lot less | 18:48 |
gouthamr | containers need OS too :D | 18:48 |
clarkb | because you can update 2024.1 to 2025.1 containers running the latest python version for each | 18:48 |
clarkb | gouthamr: yes but you don't need overlap | 18:48 |
clarkb | because you can replace 2024.1 python3.9 with 2025.1 python3.12 in the same step | 18:49 |
gouthamr | yes | 18:49 |
noonedeadpunk | clarkb: you do if you don't use containers | 18:49 |
clarkb | noonedeadpunk: correct I'm explaining the difference | 18:49 |
noonedeadpunk | or we're saying that openstack outside of docker images is unsupported? | 18:49 |
clarkb | you need overlap if not using containers. If using containers then you don't | 18:49 |
clarkb | no one is saying that; I'm trying to respond to the discussion about osa and kolla above | 18:49 |
noonedeadpunk | ++ | 18:50 |
clarkb | they are actually orthogonal since they don't have the problem | 18:50 |
gouthamr | alright; anything else on $topic? we've 10 mins | 18:50 |
gouthamr | there're only regularly scheduled topics next | 18:50 |
noonedeadpunk | I'd say overlap is potentially always good to have | 18:51 |
noonedeadpunk | just to give a breather for those who don't want to risk changing multiple things at the same time | 18:51 |
gmann | ++ | 18:51 |
gouthamr | these next topics (Gate Health and TC Tracker) will probably be expanded if we spend some time next week.. | 18:52 |
gmann | that was one of the key reasons for our testing runtime: to take care of such overlap when needed | 18:52 |
gouthamr | any objections moving to Open Discussion for these ~8 mins? | 18:52 |
gmann | one thing for gate | 18:52 |
gouthamr | #topic Gate Health | 18:52 |
gouthamr | go for it gmann | 18:52 |
gmann | Tempest broke due to the py3.8 drop and it is fixed now | 18:52 |
gmann | #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/FOWV4UQZTH4DPDA67QDEROAESYU5Z3LE/ | 18:52 |
gouthamr | ah \o/ thanks; you're encouraging pins for unmaintained branches | 18:52 |
gmann | but this fix applies where tempest master is used for testing, meaning stable/2023.1 to master | 18:53 |
gmann | yes, for unmaintained branches, maintainers need to pin Tempest, which should have been done as soon as branches moved to unmaintained status | 18:53 |
gmann | like we did for the EM branch status change in the past | 18:53 |
cardoe | I'm ++ on that. | 18:53 |
gouthamr | ^ ah good place for a reminder that release folks will transition stable/2023.1 to unmaintained/2023.1 in the next couple of weeks likely | 18:54 |
gouthamr | (their target was end of Oct) | 18:54 |
frickler | yes, elod is on pto this week, will happen next week | 18:54 |
gmann | ++ | 18:54 |
gmann | that is all on gate thing | 18:54 |
gouthamr | thanks gmann | 18:54 |
gouthamr | #topic Open Discussion | 18:54 |
frickler | still need to decide how to handle existing unmaintained branches | 18:55 |
bauzas | thanks gmann | 18:55 |
gmann | frickler: I will say if no one fixes them then it is a good signal for EOL | 18:55 |
frickler | IMHO they are in a bad state and should be eoled mostly | 18:55 |
gouthamr | #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/UDQAC7SR5JAQJE5WBAG54A2MTBVBTJ44/ ([PTL][release][stable] 2023.1 Antelope transition to Unmaintained) | 18:55 |
gmann | frickler: ++ | 18:55 |
gouthamr | frickler++ yes | 18:55 |
gouthamr | totally in support of pruning the long list we've accrued - and we can't do it selectively no matter what we tell ourselves :( | 18:56 |
gouthamr | selectively = one project at a time, at the request of the project team | 18:56 |
noonedeadpunk | I have 1 thing to raise with releases team about that actually | 18:56 |
fungi | note that unmaintained branches are supposed to be eol'ed by default if nobody volunteers as a caretaker | 18:56 |
noonedeadpunk | As I'd really love to have trailing projects to be also trailing for moving to unmaintained | 18:57 |
gmann | the Tempest pin can be one good check here of whether we have anyone maintaining them or not | 18:57 |
fungi | well, even if tempest jobs are working on them, if nobody volunteers to take care of them they're supposed to go to eol anyway | 18:58 |
noonedeadpunk | There are multiple issues otherwise. First, I anyway need to wait for the rest to move to unmaintained to refer to their tags for the EM tag of osa, for instance | 18:58 |
noonedeadpunk | then, trailing projects have not released 2024.2 yet, and instead of focusing on the release they have to focus on unmaintained | 18:58 |
frickler | fungi: yes, but how is this volunteering to be happening, who tracks it, who proposes the actual eol patches? | 18:58 |
fungi | and the volunteers are supposed to renew their intent every 6 months | 18:58 |
gmann | noonedeadpunk: I see your point here, make sense. | 18:59 |
gmann | noonedeadpunk: just wondering how it was when we had EM thing? | 18:59 |
fungi | "By default, only the latest eligible Unmaintained branch is kept. When a new branch is eligible, the Unmaintained branch liaison must opt-in to keep all previous branches active." | 19:00 |
fungi | #link https://governance.openstack.org/tc/resolutions/20230724-unmaintained-branches.html#unmaintained-branches | 19:00 |
noonedeadpunk | I was always ... annoyed that there was no trailing policy for EM as well :) | 19:00 |
bauzas | time check, no? | 19:00 |
gmann | in that case, we should do the same trailing policy | 19:00 |
frickler | fungi: well technically the first sentence is wrong ... by default, nothing changes | 19:00 |
gouthamr | a few minor updates from me: | 19:01 |
gouthamr | - the TC PTG summary is still being written; so, please, in case you've any more thoughts to share on the discussions, go for it without any further delay :) | 19:01 |
gouthamr | - there's an "unofficial" official OpenStack website page for the TC: https://www.openstack.org/community/tech-committee ; the content seems to be broken, please fix your foundation profiles.. but also, rest assured, this page will die soon.. because the OpenStack website is getting overhauled | 19:01 |
gouthamr | - next week's meeting will be our Video Meeting! VMware Working Group folks will be joining us to present their findings, and seeking opinions for organizing work items wrt the large number of new users coming over from VMware now, and in the near future | 19:01 |
noonedeadpunk | and if you check history - osa was the last for quite a while to produce em/eom tags | 19:01 |
fungi | frickler: so if there is no unmaintained branch liaison, there is nobody to opt into keeping the branches open, and they should go to eol | 19:01 |
gouthamr | that said, we're a minute over.. | 19:01 |
gouthamr | we can keep chatting.. but i'll end this meeting | 19:01 |
gmann | noonedeadpunk: I mean, good to have a trailing policy for unmaintained | 19:01 |
gouthamr | thank you all for attending | 19:01 |
noonedeadpunk | ++ | 19:01 |
gouthamr | #endmeeting | 19:01 |
opendevmeet | Meeting ended Tue Oct 29 19:01:29 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 19:01 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/tc/2024/tc.2024-10-29-18.00.html | 19:01 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/tc/2024/tc.2024-10-29-18.00.txt | 19:01 |
opendevmeet | Log: https://meetings.opendev.org/meetings/tc/2024/tc.2024-10-29-18.00.log.html | 19:01 |
slaweq | o/ | 19:01 |
gmann | noonedeadpunk: but this is a good topic to discuss at the start, as we ran out of time :) | 19:01 |
gouthamr | noonedeadpunk: i can add it to next week's agenda if you'd like? | 19:02 |
noonedeadpunk | gmann: I'd say trailing should be applied not only for releasing, but also for sunsetting, as otherwise the lifetime of a branch is very uneven | 19:02 |
gouthamr | or would you prefer to have a release team discussion first? | 19:02 |
noonedeadpunk | gouthamr: yeah, was thinking if it's up to release team to decide? | 19:02 |
noonedeadpunk | I've raised that a couple of times already but it never got any traction | 19:02 |
gmann | noonedeadpunk: yes, same. for unmaintained or EOL doing it with trailing policy is good. | 19:03 |
clarkb | gmann: fwiw I think what is written for the upgrade process and choosing runtimes is correct. The problem is that it is very abstract and you have to check multiple documents (one for each release) to see where the overlaps exist etc. What I think is specifically missing is an illustration/diagram/graph of what that means concretely for platforms like ubuntu, centos, etc. | 19:03 |
clarkb | Basically take what I wrote for that centos 9 to centos 10 upgrade path in text form and make it visual so that people can understand it more quickly and then keep that diagram up to date over time. The lack of a single concrete set of information leads to confusion like we've seen in that ml thread and this discussion here. | 19:03 |
noonedeadpunk | well we have some table for instance in osa | 19:04 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible/latest/admin/upgrades/compatibility-matrix.html | 19:04 |
noonedeadpunk | not sure it's readable enough though | 19:05 |
gmann | yeah, this is nice. it will be good to have such matrix | 19:05 |
frickler | gouthamr: next week also would be video meeting by default? | 19:05 |
gmann | noonedeadpunk: clarkb: I can try to build something like this in the pti doc and we can keep it up to date | 19:06 |
gmann | I agree that referring to text from multiple docs is not easy | 19:07 |
noonedeadpunk | maintaining that page is also a pita I'd say | 19:07 |
noonedeadpunk | rst tables are not great | 19:07 |
gmann | :) yeah | 19:07 |
clarkb | the graphviz dot diagrams aren't too bad once you start to get some of the expectations of that system | 19:14 |
clarkb | it does feel like a lot of copy and paste but that's probably fine for something like this | 19:15 |
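For illustration, a minimal sketch of the kind of diagram being discussed here, using the Python graphviz bindings; the release/platform pairs and the SLURP edge below are placeholder examples rather than the authoritative testing-runtime data:

```python
# Rough sketch: render the distro/python overlap across releases as a graph,
# so an upgrade path like jammy -> noble is visible at a glance.
# The data below is illustrative only, not the official runtime definitions.
from graphviz import Digraph

runtimes = {
    "2024.1": ["Ubuntu 22.04 / py3.10"],
    "2024.2": ["Ubuntu 22.04 / py3.10", "Ubuntu 24.04 / py3.12"],
    "2025.1": ["Ubuntu 22.04 / py3.10", "Ubuntu 24.04 / py3.12"],  # overlap cycle
    "2025.2": ["Ubuntu 24.04 / py3.12"],
}

g = Digraph("upgrade_path", graph_attr={"rankdir": "LR"})
for release, platforms in runtimes.items():
    # one box per release, listing the platforms it is tested on
    g.node(release, f"{release}: " + ", ".join(platforms), shape="box")

g.edge("2024.1", "2024.2")
g.edge("2024.2", "2025.1")
g.edge("2024.1", "2025.1", label="SLURP", style="bold")
g.edge("2025.1", "2025.2")

g.render("upgrade-path", format="svg", cleanup=True)  # writes upgrade-path.svg
```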
fungi | also you could auto-generate them with a sphinx plugin from some other structured data, probably | 19:15 |
noonedeadpunk | yeah, generating from some structured yaml or json could make more sense for sure | 19:16 |
noonedeadpunk | but as it's 20 mins per year.... hard to justify time investment :( | 19:17 |
clarkb | ya exactly just copy paste :) | 19:17 |
fungi | the openstack/ossa repo has an example of generating rst from yaml with a sphinx plugin if you wanted to make the rst tables that way. but also you could generate graphviz dot data the same way | 19:18 |
* noonedeadpunk needs to look at that example | 19:18 |
fungi | noonedeadpunk: https://opendev.org/openstack/ossa/src/branch/master/doc/source/_exts | 19:19 |
fungi | it uses yaml to fill in a jinja template basically | 19:20 |
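As a concrete (hypothetical) sketch of that approach, the snippet below loads a small YAML structure and fills a jinja template to emit an RST list-table; the data layout and values are made up for illustration, and the real ossa extension wires this kind of rendering into a Sphinx plugin rather than a standalone script:

```python
# Minimal yaml -> jinja -> rst sketch; structure and values are illustrative.
import yaml
from jinja2 import Template

MATRIX_YAML = """
releases:
  - name: "2024.2"
    distros: ["Ubuntu 22.04", "Ubuntu 24.04"]
  - name: "2025.1"
    distros: ["Ubuntu 22.04", "Ubuntu 24.04"]
"""

RST_TEMPLATE = Template("""\
.. list-table:: Tested platforms per release
   :header-rows: 1

   * - Release
     - Platforms
{% for r in releases %}   * - {{ r.name }}
     - {{ r.distros | join(", ") }}
{% endfor %}""")

data = yaml.safe_load(MATRIX_YAML)
print(RST_TEMPLATE.render(releases=data["releases"]))
```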
noonedeadpunk | that looks quite neat. we actually have multiple places right now that maintain supported distros separately. So having one yaml that would be used makes total sense. Will take a look for sure during the next vacation or the christmas holidays :) | 19:23 |
noonedeadpunk | as that is actually very-very good idea | 19:24 |
ianychoi | Hi TC, thank you for supporting I18n SIG. Let me divide into the three action items. | 21:51 |
ianychoi | 1/ To start a 3-month PoC to test AI translation with Nova RST files (ianychoi) | 21:51 |
ianychoi | 2/ Zuul investigation for Weblate migration (owner: Seongsoo Cho ) | 21:51 |
ianychoi | 3/ Horizon Zuul job issue check - yep, mailing list should be the best so I am initiating "[i18n][infra][horizon] Zanata translation job failures after Python 3.8 deprecation" thread. | 21:51 |
fungi | ianychoi: cool, for #3 that problem should vanish if #2 gets done, zanata is the underlying problem for the job being stuck on python 3.8. seems like there would be a fair amount of work wasted to get the zanata jobs running again just to end up throwing it all away once weblate is going | 21:53 |
fungi | i'll follow up on the ml thread though when i get a sec | 21:56 |
ianychoi | I fully agree with ^ and thank you so much all. | 22:05 |