18:01:04 <gouthamr> #startmeeting tc
18:01:04 <opendevmeet> Meeting started Tue Nov 26 18:01:04 2024 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:04 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:04 <opendevmeet> The meeting name has been set to 'tc'
18:01:18 <gouthamr> Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
18:01:21 <gouthamr> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
18:01:24 <gouthamr> #topic Roll Call
18:01:25 <noonedeadpunk> o/
18:01:28 <frickler> \o
18:01:29 <slaweq> o/
18:01:31 <gtema> o/
18:01:34 <gmann> o/
18:01:44 <gouthamr> noted absence: c a r d o e, b a u z a s
18:03:42 <gouthamr> courtesy ping: spotz[m]
18:03:58 <gouthamr> oh, spotz[m] may be away
18:04:04 <gouthamr> let's get started
18:04:09 <gouthamr> #topic Last week's AIs
18:05:04 <gouthamr> there was one that we took note of: resolving the ownership of "watcher-drivers" on Launchpad..
18:05:10 <gouthamr> this has been addressed
18:05:18 <gmann> yeah
18:05:48 <gouthamr> thanks to LP admins, Billy Olsen.. and openstack-admins seems to have cleaned up everything for the team
18:06:28 <gouthamr> i was tracking no other meeting AIs, did anyone else have any?
18:06:30 <gmann> The next step is on the watcher team's side to add more members if needed; nothing specific for the TC on this.
18:06:37 <gouthamr> gmann: ++
18:06:51 <slaweq> gmann ++
18:07:54 <gouthamr> we have a relatively short agenda today.. i.e., no new topics.. so would you folks want to round up on PTG notes? or would you prefer to do that next week over Zoom/IRC?
18:08:40 * frickler would not prefer zoom
18:08:59 <gouthamr> ^ yes; let's start today and see if we have a spillover
18:09:17 <gouthamr> but, in the order of the topics there, I had:
18:09:20 <gouthamr> #topic A check on gate health
18:09:51 <gouthamr> we were talking about a number of UC bumps
18:09:59 <gmann> I did not see any blocker or frequent failure this week.
18:10:19 <gouthamr> it looks like we landed a lot more over this week
18:10:21 <gouthamr> #link https://review.opendev.org/q/project:openstack/requirements+status:merged
18:10:29 <gmann> I think those have settled down as projects are fixing things. The Pillow bump is one I remember was fixed in multiple places
18:10:37 <frickler> new keystoneauth still failing for horizon, something system_scope related? https://zuul.opendev.org/t/openstack/build/fabcf39cc611462488546bb9517b6266
18:10:49 <frickler> but a lot of other reqs bumps merged, yes
18:11:13 <gmann> system scope should be disabled in horizon, my patch to enable it is still WIP
18:11:17 <gmann> I will check it
18:11:24 <frickler> https://etherpad.opendev.org/p/requirements-blockers is mostly up to date again
18:12:03 <gouthamr> #link https://etherpad.opendev.org/p/requirements-blockers (OpenStack Global Requirements tracker)
18:12:13 * gouthamr wasn't aware of this tracker
18:12:16 <frickler> other than that, lots of jobs still have issues with the new dockerhub rate limits
18:12:17 <gouthamr> thanks for sharing frickler
18:12:33 <gouthamr> newer new rate limits? :|
18:12:51 <frickler> new as of like 2 weeks ago?
18:13:31 <clarkb> ya sometime mid november they seem to have changed
18:13:46 <clarkb> they have a blog post about it but in my testing with their token system I wasn't able to get what their blog said in my token
18:14:07 <clarkb> so it is very confusing. However jobs are definitely less happy about using docker hub, and not using docker hub is probably a good recommendation at this point
18:15:18 <frickler> sadly they seem to be the only source for plain debian/ubuntu images
18:15:38 <frickler> (used as source for kolla image builds)
18:16:02 <gouthamr> clarkb: is this the post? https://www.docker.com/blog/checking-your-current-docker-pull-rate-limits-and-status/
18:16:17 <clarkb> there is work in zuul/zuul-jobs to add generic tooling to copy images from one registry to another and we could set up jobs to copy commonly used base images (like debian/ubuntu/mariadb/etc)
18:16:41 <clarkb> gouthamr: that's the process for checking your rate limit according to the token values
18:17:15 <frickler> clarkb: I'm not sure whether there could be legal issues with that? iiuc docker considers those their intellectual property
18:17:59 <clarkb> https://www.docker.com/blog/november-2024-updated-plans-announcement/ is where they announce different lower rate limits that don't seem to be reflected in the tokens
18:18:12 * gouthamr doesn't know if we're opening another can of worms, but,
18:18:26 <gouthamr> is the recommendation to move to something like quay.io?
18:18:26 <clarkb> frickler: maybe, we'd have to check for any images we do
18:18:46 <clarkb> gouthamr: I think my recommendation at this point is to avoid resources on dockerhub if you can. quay.io is one such alternative
18:18:59 <clarkb> but github, google, etc have alternatives too that may or may not work
18:19:09 <gouthamr> ack, and this is something that project teams have to individually opt in to do?
18:19:30 <fungi> well, they've individually opted into using dockerhub, right?
18:19:42 <clarkb> it's usually explicit because projects indicate which image to pull, and that includes a registry name (or, if no registry name is included, the default is docker hub)
18:19:53 <clarkb> fungi: yes it is explicit. Any opt out would be explicit
18:20:10 <fungi> or merely un-opting-in
18:20:26 <fungi> (i.e. choosing to use something else, rather than choosing to use dockerhub)
18:21:38 <fungi> the challenge is that docker/moby built an entire ecosystem around assumptions that people could freely and conveniently download whatever they need from a central authority whenever they want, and now they've decided to upend that and invalidate those prior widespread assumptions throughout the container ecosystem
18:22:04 <clarkb> https://github.com/docker-library/official-images/blob/master/LICENSE the source code for the official docker hub images is apache 2 licensed at least. Not sure what they will argue about the binary images themselves
18:22:10 <fungi> so any solutions will necessarily involve some (or perhaps rather a lot of) pain to achieve
18:24:13 <clarkb> "As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within." from https://hub.docker.com/_/debian
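
(The token-based check gouthamr and clarkb discuss above looks roughly like the following; it is a minimal sketch against Docker's ratelimitpreview/test repository described in the linked blog post, and as clarkb notes the values it reports may not match the limits announced in the November 2024 post.)

    # requires curl and jq
    # fetch an anonymous pull token scoped to Docker's rate-limit test repository
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # read the ratelimit-* headers returned for a manifest request
    curl -sI -H "Authorization: Bearer $TOKEN" \
        https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
        | grep -i ratelimit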
18:24:46 <frickler> clarkb: yes, some time ago I looked into building images for our downstream ourselves, but it wasn't easy
18:24:58 <clarkb> in any case we've attempted to mitigate the problem by disabling our caching proxy by default
18:25:13 <frickler> "Use of Docker Official Images is subject to Docker's Terms of Service", on https://docs.docker.com/trusted-content/official-images/
18:25:14 <clarkb> since the caching proxy was a single IP it was far more likely to get rate limited vs spreading requests across many IPs in the CI system
18:25:40 <clarkb> the rate limit errors still occur but at a lower rate I think since that change
18:26:00 <clarkb> and every change to not use docker hub is at least one less request to docker hub and will further improve the situation
18:26:15 <frickler> anyway I don't think we can solve this here and now, but the TC should know the status
18:26:22 <fungi> (single ip per test node region anyway)
18:27:37 <gouthamr> ++ if you folks have a recommendation to share, please do.. i can call this out as a concern so project teams that are explicitly opting into pulling from/uploading to dockerhub can notice and work on the recommended alternatives
18:28:09 <gouthamr> #link https://docs.opendev.org/opendev/base-jobs/latest/docker-image.html
18:29:04 <gouthamr> ^ this page for instance doesn't show a preference.. and i understand not writing one into the docs.. but, it's possible some people would think we have a preference based on existing CI jobs/playbooks/roles that are easy to integrate with
18:30:32 <clarkb> ya we're not prescribing where you host your images
18:30:45 <clarkb> you can authenticate with docker hub and get better rate limits if you want
18:30:51 <clarkb> you can use quay or github or whatever
18:31:10 <fungi> though that document, for historical reasons, does mention docker hub rather a lot
18:31:12 <clarkb> we just want people to be aware that the rate limits are not really under our control and you may or may not need to do something about it
18:31:43 <clarkb> well it talks about docker images a lot, not necessarily docker hub
18:31:56 <clarkb> but if people want to make that less confusing via overloading of terms, that's fine
18:32:28 <gouthamr> we should strive to keep something like that from becoming a problem for testing/shipping your code..
18:33:04 <gouthamr> clarkb: with my employer-colored-hat on, i'd call them "container images" :D
18:33:41 <clarkb> sure but for better or worse they've been called docker images for like a decade
18:33:41 <fungi> (it does say quite a bit about docker hub in the publishing section)
18:33:57 <gouthamr> true
18:34:06 <clarkb> my priority right now isn't sanitizing that document to make certain businesses happy about terms
18:34:20 <clarkb> I've been focused on understanding the underlying issues and adjusting job configs to alleviate the problem
18:34:36 <gouthamr> ack, shouldn't be your concern..
18:34:42 <clarkb> then when things are better understood we can adjust the documentation if necessary
18:34:44 <gouthamr> okay; thank you for raising the concern here, frickler, and for seeding this discussion..
18:35:11 <gouthamr> any other gate concerns to discuss here?
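
(As clarkb explains above, opting in or out of Docker Hub is explicit because image references name a registry, and unqualified references default to docker.io. A minimal sketch of what that looks like in practice; the quay.io path below is purely illustrative, not an actual mirror or recommendation.)

    # an unqualified reference resolves to Docker Hub:
    docker pull mariadb                                  # == docker.io/library/mariadb:latest
    # a fully qualified reference pulls from the named registry instead:
    docker pull quay.io/example-namespace/mariadb:latest # illustrative path only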
18:36:54 <gouthamr> #topic PTG AIs and the TC Tracker
18:36:59 <gouthamr> #link https://etherpad.opendev.org/p/tc-2025.1-tracker (Technical Committee activity tracker - 2025.1)
18:37:20 <gouthamr> some things need an update on that tracker
18:37:40 <gouthamr> #link https://etherpad.opendev.org/p/oct2024-ptg-os-tc-summary (OS TC Epoxy PTG Summary)
18:37:57 <gmann> on the Noble migration, this is the current status
18:37:59 <gmann> #link https://etherpad.opendev.org/p/migrate-to-noble#L38
18:38:06 <gmann> Projects Green on Noble (already passing or changes mentioned below need to be merged): 21
18:38:15 <gmann> Projects Failing on Noble: 18
18:38:40 <gmann> I sent it on the ML but there are still many projects who have not ack'd or started working on fixing the issues
18:39:01 <gmann> I will ping them on IRC/add it to their meeting agenda today if that can help
18:39:18 <gouthamr> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/JOMDY26TCW7OX3NXRGOYQCIDXNNJ4E25/ ([all][tc][ptl][qa] Migrating upstream CI/CD jobs to Ubuntu Noble (24.04))
18:39:53 <gouthamr> thanks gmann.. we're at the original deadline in 3 days
18:40:10 <gmann> yeah
18:40:27 <gmann> that's all from me on this
18:41:29 <gouthamr> tkajinam brought up an eventlet related issue on the thread, and is discussing it further in #openstack-eventlet-removal
18:42:33 <gouthamr> with this many projects having issues, i guess we can't meet that original deadline, gmann
18:42:45 <gouthamr> not without overrides at least
18:43:12 <gmann> yeah, which is ok. otherwise these failures will stay until the final release and that will be a bigger problem
18:43:26 <gouthamr> ack; would you be proposing these overrides?
18:43:37 <gmann> switching it on Nov 29 gives a good pre-holidays heads up to projects
18:44:04 <gmann> gouthamr: no, I will leave them failing and let projects decide whether they want to defer the migration for their jobs or fix them
18:44:34 <gmann> overriding the nodeset has the drawback that people might never remove it, so a failing CI is a good way
18:44:42 <gouthamr> yes
18:44:47 <frickler> I agree, if projects are not even able to pin their nodeset, we should reconsider their active status
18:45:03 <gmann> and if a project needs more time then they can take it, but they should be aware of what they are explicitly doing
18:45:10 <gmann> frickler: ++
18:48:19 <gmann> gouthamr: I think we can move to the next item
18:48:23 <gouthamr> thank you
18:48:32 <gouthamr> i was browsing through the PTG summary for topics that we don't have explicit AI owners for
18:49:42 <gouthamr> i need to work on a couple of AIs
18:49:59 <gouthamr> but there's one on the postgres discussion:
18:50:01 <gouthamr> "Reviving postgres support will need volunteers. The TC last published a resolution in 2017 [3] explaining the state of support for postgresql. There must be a follow up to state that non-MySQL backends are not tested within the community."
18:50:29 <gouthamr> i see that neutron dropped postgresql testing
18:51:12 <gouthamr> other project teams may follow suit.. but this is a TC/doc issue; does anyone want to take a stab at this?
18:51:53 <gmann> what is needed here, 'a new resolution to remove the support' or documentation somewhere else?
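
(On the nodeset overrides gmann and frickler mention above: a project that explicitly chooses to defer the Noble switch can pin a job back to Jammy in its own Zuul config, roughly as sketched below. The job name is only an example, and, as gmann cautions, such a pin is easy to forget to remove.)

    # .zuul.yaml in the project repo (illustrative sketch)
    - project:
        check:
          jobs:
            - tempest-full-py3:
                nodeset: ubuntu-jammy   # keep this job on Ubuntu 22.04 for now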
18:52:24 <gouthamr> #link https://governance.openstack.org/tc/resolutions/20170613-postgresql-status.html (TC Resolution on the state of testing of database systems)
18:52:34 <gouthamr> this was done as a resolution
18:53:04 <spotz[m]> Sorry, on PTO and got distracted
18:53:59 <frickler> iiuc neutron was the last team to drop psql testing, at least from the core services
18:54:17 <frickler> so I'm not sure what documentation you'd want to get updated?
18:54:23 <gouthamr> np spotz[m].. hope the horse is well,
18:54:40 <gouthamr> frickler: what's a "core" service?
18:54:41 <gmann> yeah, I think we are all good here and neutron did the same as other services already did
18:55:11 <spotz[m]> gouthamr: Doing much better thanks
18:55:26 <frickler> gouthamr: essentially those installed by devstack itself without the need of a plugin?
18:55:29 <gmann> gouthamr: that was the terminology used in the past; we do not need to clarify it in the docs wherever it was used
18:55:52 <frickler> but yes, the term is not exactly well defined
18:56:29 <gmann> I think the resolution is still ok, and whether a service tests other DBs or not is up to them
18:56:32 <gouthamr> okay; i can call this out to the ML again and round up any projects still running postgres jobs
18:56:51 <gmann> if they do and want to, it is still ok, right?
18:57:12 <gmann> I mean we can state the minimum things to test/support, but the maximum is something we should not limit
18:57:20 <frickler> another can of worms will be opened when we see that we need to distinguish between mysql and mariadb. I think we mostly only test the latter these days
18:58:15 <gouthamr> gmann: yes, i guess so.. iirc this discussion came about because we had that stance.. and project teams thought they had to test it and were surprised when things started breaking because some other project teams weren't testing with postgresql
18:58:53 <slaweq> frickler IIRC in neutron we are mostly testing with mysql and we have one periodic job with MariaDB
18:59:07 <gouthamr> so we were reminding people of that old resolution, and someone suggested we need to reiterate it.. and state that we're NOT testing this (i don't know where we'd post this either)
18:59:44 <gouthamr> oslo.db for instance supports it with no disclaimers
19:00:22 <gouthamr> the security guide suggests it as an equal alternative: https://docs.openstack.org/security-guide/databases/database-backend-considerations.html
19:00:50 <gmann> if they support and test it well, then they do not need any disclaimers, right?
19:01:06 <gouthamr> our install guide: https://docs.openstack.org/install-guide/environment-sql-database.html
19:01:08 <gouthamr> "OpenStack services also support other SQL databases including PostgreSQL."
19:01:23 <gouthamr> sorry; we're over time and i just noticed
19:01:48 <gmann> I mean the current resolution statements are still valid and there is no guarantee for non-MySQL support/testing, but if anyone tests it, that is ok
19:01:58 <gmann> MariaDB is a good example, as frickler mentioned
19:02:30 <gouthamr> but in essence, there's a bunch of documentation suggesting that postgres is well supported.. but project teams aren't testing it anymore, so i think the AI was to help operators know that
19:02:56 <gouthamr> i'll endmeeting and let you folks disperse to other places :)
19:03:01 <gouthamr> thank you all for attending
19:03:06 <slaweq> o/
19:03:07 <gouthamr> we'll pick up on this next week
19:03:28 <gouthamr> hope to see you then
19:03:31 <gouthamr> #endmeeting