17:00:09 #startmeeting tc
17:00:09 Meeting started Tue Aug 12 17:00:09 2025 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:09 The meeting name has been set to 'tc'
17:00:24 Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
17:00:27 Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
17:00:32 #topic Roll Call
17:00:34 o/
17:00:49 o/
17:01:27 o/
17:02:02 noted absence: b a u z a s, c a r d o e
17:02:08 \o
17:02:54 courtesy-ping: gmaan, mnasiadka
17:03:01 o/
17:03:07 o/
17:03:43 hello everyone o/ thank you for joining.. lets get started
17:03:49 #topic Last Week's AIs
17:04:13 we had lower than usual attendance last week, and some ongoing activity that we need to check on
17:04:43 i took a few AIs that are at various stages of progress
17:05:13 1) connect with stephenfin, and figure out closure for the proposed goal: https://governance.openstack.org/tc/goals/proposed/migrate-from-wsgi-scripts-to-module-paths.html
17:05:39 i don't have an update here.. will get to it today
17:06:12 2) Runtime update for 2026.1
17:06:12 i am drafting this now, will post after this meeting
17:06:14 looking at the gerrit topic, there are a few changes yet to merge
17:06:19 #link https://review.opendev.org/q/topic:%22remove-wsgi_scripts%22+status:open
17:06:26 ty gmaan
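
(aside for the minutes: the open changes on that gerrit topic can also be pulled programmatically; a minimal sketch against Gerrit's standard REST /changes/ endpoint, assuming only the requests library:)

    # List open changes on the remove-wsgi_scripts topic via Gerrit's REST API.
    import json
    import requests

    resp = requests.get(
        "https://review.opendev.org/changes/",
        params={"q": 'topic:"remove-wsgi_scripts" status:open'},
    )
    # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI; strip that line.
    changes = json.loads(resp.text.split("\n", 1)[1])
    for change in changes:
        print(change["_number"], change["project"], change["subject"])
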
17:07:38 3) Retirement of monasca
17:07:53 i started working on these changes this week:
17:07:54 #link https://etherpad.opendev.org/p/monasca-retirement
17:08:30 there were comments on the first patch
17:08:31 thank you for the review on the project-config change.. i'll update it
17:08:32 thanks for working on those. I left a comment to fix the project-config change and the repo content removal
17:08:38 so some work needs to be done there
17:09:07 ack, pbkac on the noop jobs :D
17:09:15 frankly I didn't check it deeper
17:09:31 i did leave a note in the commit message directed at you, probably, noonedeadpunk
17:09:50 i see two related repos under OS Ansible: openstack/openstack-ansible-os_monasca, openstack/openstack-ansible-os_monasca-agent
17:10:01 ah
17:10:02 i think we've deferred retiring these in the past
17:10:32 indeed, this should be pretty much just a revert of the previous un-retirement
17:10:37 i don't know if we should do it with the same change, or in an immediate follow-up so we don't forget, or if we need to wait
17:10:49 as despite folks asking to revive the roles, nobody actually picked up their maintenance
17:10:54 and I clean forgot about them
17:11:09 we did it in a separate proposal to governance as well as other places
17:11:09 I will check what needs to be done there, thanks!
17:11:25 ty, separate changes do make sense
17:11:30 yeah, I'd say in a separate one
17:11:38 we can track them to closure with the etherpad, no problem
17:11:43 ++
17:11:51 it's not like we'll mess up anything, there are only 17 repositories associated with the project team
17:12:10 * gouthamr expected a sensible chuckle
17:12:17 heh, yeah
17:12:31 what are the odds
17:12:31 hehe, and fungi can always recover something
17:13:08 haha, alright.. next AI
17:13:15 was around refstack-server
17:13:15 yeah, worst case i can roll back edits or restore from a database backup
17:13:35 #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/WNI4PE2TZ3G52C3U5FT2YNVRUAJB3CMO/
17:13:59 i deleted the server itself just this morning, after saving a filesystem snapshot image
17:14:03 gmaan responded to this thread indicating that folks that need this functionality can maintain a list of tests and use tempest in a similar fashion
17:14:14 ah ack fungi
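
(a rough sketch of what gmaan's suggestion amounts to, using tempest run's --include-list option; the test IDs below are illustrative placeholders, not an official interop list:)

    # Run a curated, interop-style subset of tempest tests from a plain text list.
    import subprocess

    include_list = "interop-tests.txt"  # hypothetical file name
    with open(include_list, "w") as f:
        # Illustrative test selections; a real list would be agreed on separately.
        f.write("tempest.api.identity.v3.test_tokens\n")
        f.write("tempest.api.compute.servers.test_create_server\n")

    # 'tempest run --include-list FILE' restricts the run to the listed tests.
    subprocess.run(["tempest", "run", "--include-list", include_list], check=True)
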
17:14:48 it's kinda sad we never figured out a reasonable replacement for refstack
17:15:04 as the idea of having providers "certified" is overall not bad
17:15:32 it promotes the project and highlights providers following "best practices" (in a way)
17:15:41 (at least proper interoperability)
17:16:01 totally, but the interop program over the years grew too heavy to maintain.. i do support having a "certification" process that's lighter weight.. it'll take a lot of time/effort to think through one
17:16:20 it's also just less necessary as the total number of configuration options has reduced over time
17:16:33 it'll need inputs from qa-core, but hopefully not eat into their limited bandwidth
17:16:37 Don't want to repeat myself, but in the implemented form those tests prove 0 interoperability
17:16:40 I recall gtema had some ideas years back at some summit as well... but it never flew
17:16:52 well, it is not related to qa at all
17:16:58 (there are still differences but nothing like back when refstack was envisioned)
17:17:08 certification is a completely different thing
17:17:08 that's why I said a reasonable replacement, and not the current form :)
17:17:12 Right, not at all, gmann
17:18:18 but anyway, having a project logo on a provider's website is a beneficial thing for both parties
17:18:22 yeah, in the past the interop focus was more on the problem of providers "differentiating" their service offering by making incompatible downstream changes to services/apis and replacing openstack services with other things
17:19:10 they still have an option to use the trademark logos, the process is merely an administrative/contractual one now which doesn't involve mandatory testing
17:19:17 while it can still be a thing... now it's more about the variety of configurations which may make things non-interoperable
17:19:20 We could chat for years on this topic without any conclusions. I suggest we not continue it here. Maybe again during the summit
17:19:27 ++
17:19:33 ++
17:19:40 if there's a proposal, we could have a discussion at the PTG
17:20:13 a proposal, and I'd say more of 'requirements and scope'
17:20:46 the interop group was dissolved because there was no clear interest or requirement
17:20:51 We could involve SCS in such a discussion on the certification and interoperability side
17:21:03 if we want to discuss any solution, I'd say we collect the interest in and requirements for it
17:21:13 ack, gtema is this something you're motivated to drive?
17:21:33 if they would be open to open discussion rather than just trying to mandate their own thing...
17:21:54 Not really, after 8 years I figured out no CSP is really interested and committed
17:22:07 ah
17:23:08 yeah, commitment too: 'requirement, interest, and commitment'
17:24:39 okay, we'll put this out there to the community, and suggest that there's somewhat of a desire to explore a replacement to certify OpenStack in a common, transparent way.. we lack, however, a person/team to make it happen, and it's outside the purview of the maintainers (QA or otherwise)
17:25:49 i suspect we had references to interop/refstack on https://openinfra.org/legal/trademark-policy - and these aren't there anymore.. so such an effort may be tangential to OIF's "Commercial Use Trademark License" process
17:26:14 or you may find that the foundation folks are interested in collaborating
17:26:26 anything else to add?
17:27:03 those are the trademarks that previously required interop testing
17:27:20 the board approved a change in policy at the end of 2023
17:27:39 Yeah, that was the retirement of interop
17:27:53 ack..
17:28:12 * gouthamr wants to get "OpenStack Expertise" added to his LinkedIn profile
17:28:27 yeah, I think nothing is needed from the TC side here unless anyone needs some tooling, more tests, etc
17:28:38 yes, or opinions
17:29:29 i think we have a loose vision for what would benefit the maintenance of OpenStack itself, or for operators that interface with us..
17:30:02 alright, that's all the AIs i am seeing from the past week, was anyone else working on anything to note?
17:31:28 I keep working on the dashboard for all projects
17:31:35 *for tc projects
17:31:46 I'm not that happy about the result so far
17:31:47 ah, yes! :)
17:32:08 but the intermittent result is like this http://bit.ly/4lq4DQG
17:32:25 *intermediate
17:32:49 nice
17:33:27 but input on what tabs we want to see on the dashboard is super welcome
17:33:43 noonedeadpunk: maybe exclude the 'election' repo as that is maintained by a separate group, because during nominations it can fill up the dashboard with changes TC members do not need to vote on
17:33:45 as we don't really have specs/dashboard work
17:33:52 ++
17:33:59 yeah, was thinking about that
17:34:03 cool
17:34:12 another thing is that the list of projects is defined separately there
17:34:35 as it seems gerrit does not have awareness today about "ownership" of repos to filter on
17:35:13 for osa I did use parentprojects, but I don't think it's applicable to parent all these repos to governance
17:35:19 as it's just not true
17:36:39 yeah, i don't think there's an elegant way to tie the repos together
17:37:15 we could add an empty project like the one we use for the openstack/meta-config acl and inherit it, if that's really desired
17:37:33 we have openstack/openstack ? :D
17:37:51 but probably it's not a good idea, as these repos are kinda not much related
17:38:00 that's something else, used to aggregate subrepos as a superrepo
17:38:21 i suppose doing something like that is also an option in theory
17:38:27 except for governance-website/governance-sigs/governance
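
(for context on the above: Gerrit's custom dashboards are driven entirely by URL parameters, which is why the project list has to be spelled out by hand; a minimal sketch that builds such a URL, with an illustrative repo list and section query rather than the actual dashboard behind the bit.ly link:)

    # Build a Gerrit custom-dashboard URL for a hand-maintained list of repos.
    from urllib.parse import urlencode

    repos = [
        "openstack/governance",
        "openstack/governance-sigs",
        "openstack/governance-website",
    ]
    # 'foreach' is ANDed with every section query below.
    foreach = "status:open (" + " OR ".join(f"project:{r}" for r in repos) + ")"
    params = [
        ("title", "TC repos"),
        ("foreach", foreach),
        ("Needs review", "NOT label:Code-Review>=1,self"),
    ]
    print("https://review.opendev.org/dashboard/?" + urlencode(params))
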
17:38:35 sorry to side track, but lets get through the agenda and discuss this in Open Discussion perhaps?
17:38:40 ++
17:38:44 #topic A check on gate health
17:38:45 yeah, or after the meeting
17:38:57 any gate concerns/updates to note this week?
17:39:12 #link https://www.debian.org/releases/trixie/
17:39:27 ^ think fungi and frickler were chatting about this here a few days ago
17:39:47 yeah, i linked the announcement in here on saturday
17:40:23 we brought the openmetal provider back online in zuul a few minutes ago since their data center relocation maintenance ended yesterday, but i guess keep an eye out for issues there
17:41:27 ack, noted
17:42:27 will the infra team be working on trixie mirrors?
17:43:13 or is that something that each zuul/cloud provider needs to set up and configure?
17:43:25 i think clarkb had mentioned freeing up space first by dropping xenial and some other similarly old content
17:44:11 yes, I think we should clear out the existing content that we don't need, then add in the new stuff
17:44:28 also, we still need to figure out what to do longer term about the upstream disappearance of bullseye-backports, which we've been mirroring
17:45:33 ack
17:45:47 at the moment the plan is, i think, to make local changes to our base job to stop enabling backports by default on debian nodes (which is the default in zuul/zuul-jobs at the moment)
17:46:08 then we can delete the content for backports that have been deleted upstream
17:46:10 otherwise, if we delete the mirror of that, a lot of bullseye-based jobs are going to start hard failing
17:47:42 we might not notice unless we're monitoring Caracal jobs
17:48:24 not even sure you'd be using bullseye for that, bookworm was current for the caracal cycle, wasn't it?
17:48:25 thanks for the call out
17:49:06 you're right
17:49:09 Kolla also only builds Bookworm images (we used Bullseye in 2023.2 and earlier)
17:49:12 we had bullseye jobs on caracal just in case
17:49:22 for upgrade testing..
17:49:23 we do test bullseye also in caracal for upgrade things
17:49:26 yeah
17:49:39 bullseye and bookworm both in caracal
17:49:44 as SLURP should work from 2023.1->2024.1
17:49:48 ++
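
(a rough illustration of the first step in fungi's backports plan above; this is not the actual base-job change, just a sketch of what disabling a bullseye-backports apt entry amounts to, assuming a stock single-file sources.list:)

    # Comment out bullseye-backports entries so apt stops consuming the mirror
    # content that is due to be deleted. The path assumes a stock Debian
    # /etc/apt/sources.list; the real change would live in the zuul base job.
    from pathlib import Path

    sources = Path("/etc/apt/sources.list")
    patched = [
        f"# {line}" if "bullseye-backports" in line and not line.startswith("#")
        else line
        for line in sources.read_text().splitlines()
    ]
    sources.write_text("\n".join(patched) + "\n")
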
17:50:52 alright, we do have one more topic to get through - i see content for open discussion
17:51:06 lets switch to that, since we can update the tracker offline
17:51:11 #topic Open Discussion
17:51:20 Discuss runtimes for 2026.1 development cycle
17:51:41 ^ frickler, i was working on this and the answer to both your questions is yes
17:51:45 Add Debian Trixie and python3.13?
17:51:45 Drop python3.10 and Ubuntu 22.04?
17:51:56 we already dropped Ubuntu 22.04 in this cycle, right?
17:52:10 we only test Ubuntu 24.04 in the current cycle
17:52:10 I thought that was the case as well
17:52:24 why do we have py3.10 in the mix, then?
17:52:31 yes, no need to keep it for upgrades either ... we expect folks to upgrade to Ubuntu 24.04 before upgrading to Gazpacho
17:52:34 I'm not sure if we ever added CentOS 10 Stream as "complementary" though?
17:52:50 ++
17:52:56 for python3.10, I would not be very aggressive about dropping it. Seeing it is EOLing in Oct 2026, we can still support it as the min version
17:53:11 i am adding CS10/Rocky10, although i'm not sure about the state of the providers and their testing in Zuul
17:53:31 C10S is kinda working. not sure about capacity
17:53:44 Rocky10 - there are images and it's possible to test
17:53:48 yeah, this is the devstack change and the job is running fine
17:53:52 we have both now. They are limited to about 50% of our quota (that number varies depending on which clouds are available, see earlier note about turning off openmetal for a couple weeks)
17:53:52 #link https://review.opendev.org/c/openstack/devstack/+/937251
17:53:54 If you need anything for CS10 let me know, I can possibly pull in folks to help
17:53:57 some things are missing there in general though
17:54:13 as there's no compatibility for many things between C10S and Rocky 10
17:54:21 oh
17:54:31 noonedeadpunk: let me guess, you also have COPR repos with rebuilds of packages that are not yet in EPEL? ;-)
17:55:04 I also have EPEL contacts :)
17:55:06 well. We have only 1 package coming from copr, as maintainers forgot about it in EPEL
17:55:12 the main concern i'd have with not dropping python 3.10 is if we're stuck maintaining platforms that provide it until stable/gazpacho reaches end of maintenance, or even into the unmaintained state
17:55:17 but then no systemd-networkd or ceph
17:55:44 (and then yeah - copr)
17:55:55 In Kolla I think we have two - mod_auth_mellon for SAML and glusterfs-fuse for Manila
17:56:03 gouthamr: we have had c10 and rocky 10 since this cycle, anything to change there?
17:56:06 #link https://governance.openstack.org/tc/reference/runtimes/2025.2.html#advance-unstable-testing
17:56:16 nope..
17:56:17 oh, yes, the whole gluster is from copr indeed
17:56:22 though we did not find help to set up rocky 10
17:56:23 but it's the same for C10S
17:56:27 we have called it out as "Advance/Unstable"
17:57:01 i was thinking to leave it at that, but these links and discussions are making me think about promoting them back to the tested linux distributions section
17:57:08 spotz[m]: https://bugzilla.redhat.com/show_bug.cgi?id=2326534 - would be happy if anybody can have a look, there's even a volunteer - but no traction at all
17:57:22 I think the key change for the next runtime is whether we want to bump the python max version to python 3.13 or keep it at py3.12
17:57:53 which depends on the eventlet things
17:59:04 I don't think we can drop 3.12
17:59:09 until we drop 24.04
17:59:20 Debian OpenStack is going to release Flamingo DEBs only for Trixie, and given Bookworm EOL (June 2026?) - we sort of should.
17:59:25 gmaan is talking about the max version
17:59:27 I think gmaan was mentioning the max version
17:59:30 yeah, for the min version we can keep py3.10 itself
17:59:39 we'd bump it imo, because there have been bugfixes in eventlet concerning 3.13
17:59:54 and there's none that's a blocker atm, please correct me if you know any
18:00:07 yeah, the current max version is python 3.12; changing the max version does not mean we need to drop py3.12
18:00:13 yeah. we need 3.13 for trixie
18:00:13 functional testing will probably mostly use Ubuntu 24.04, and hence python3.12
18:00:37 but trixie will be tested by project teams increasingly, given the eventlet work
18:00:38 do we have testing passing on py3.13?
18:01:10 and while debian is gonna release packages against 3.13, it's not a given they will work as intended if our code is buggy and was never tested against 3.13
18:01:11 * gouthamr timecheck
18:01:42 (even as unit tests)
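
(the "do we have testing passing on py3.13?" question can be answered from Zuul's public builds API, the same data behind the builds page linked a few lines below; a minimal sketch assuming only the requests library:)

    # Check recent results of the openstack-tox-py313 job across projects.
    import requests

    resp = requests.get(
        "https://zuul.opendev.org/api/tenant/openstack/builds",
        params={"job_name": "openstack-tox-py313", "limit": 20},
    )
    for build in resp.json():
        print(build["result"], build["project"], build["log_url"])
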
18:01:58 lets continue this discussion on the gerrit change, i'll have it up in a little bit
18:02:02 https://review.opendev.org/c/openstack/devstack/+/954653 passes
18:02:07 anything else to note in the minutes today?
18:02:11 these are the unit test job results
18:02:14 #link https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-py313&skip=0
18:02:32 one thing from me
18:02:45 go for it, gmaan
18:02:52 I will not be running for the next TC term
18:03:08 just that update ^^
18:03:09 :(
18:03:26 ;(
18:03:39 I will be around if any input is needed, but not as an official TC member
18:03:58 it's been a long run with you on the TC, so this would be a huge change
18:04:36 thank you for all the hard work, you can bet we'll still tag you with things
18:04:39 and the value you have brought can't be overstated
18:04:55 thanks, sure I will be happy to help where I can
18:05:37 End of an era
18:06:13 speaking of which, we'll need at least 4 tc candidates to fill the open seats in the next election, and at least 5 if there's going to be a poll
18:06:39 ++, hope to see more candidates.
18:06:57 alright, we're well over our time slot.. lets wrap up this meeting
18:07:01 thank you all for joining
18:07:03 #endmeeting