18:00:37 <gouthamr> #startmeeting tc
18:00:37 <opendevmeet> Meeting started Tue Jun 18 18:00:37 2024 UTC and is due to finish in 60 minutes.  The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:37 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:37 <opendevmeet> The meeting name has been set to 'tc'
18:00:58 <gouthamr> Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
18:01:03 <gouthamr> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee.
18:01:08 <gouthamr> #topic Roll Call
18:01:12 <gmann> o/
18:01:14 <gtema> o/
18:01:22 <slaweq> o/
18:01:27 <dansmith> o/
18:01:31 <noonedeadpunk> o/
18:02:18 <JayF> o/
18:02:24 <frickler> \o
18:02:28 <gouthamr> #chair frickler
18:02:28 <opendevmeet> Current chairs: frickler gouthamr
18:03:03 <gouthamr> courtesy ping: spotz
18:04:37 <gouthamr> we have the requisite quorum; let's get started..
18:04:45 <gouthamr> #topic AIs from last week
18:05:36 <gouthamr> frickler: any further update on the PyPI cleanup from you?
18:06:00 <frickler> no, didn't get to check further stuff yet
18:06:01 * gouthamr just noticed the question you left for me on the etherpad
18:07:29 <gouthamr> frickler: ack; ty.. np. the next steps here were to go through the lists again with a fine-toothed comb to catch any packages we've missed, and address the ones we need openstackci to be an owner on
18:08:02 <gouthamr> i didn't get a chance to do any of this in the past week; it's been a frenzy downstream.. but i'll commit some time this week
18:08:20 <frickler> there's also the open question of whether to add a second account for redundancy
18:08:42 <frickler> maybe this can be deferred until using an organization account is possible
18:08:48 <gouthamr> ah yes; i think we were in agreement here.. is this something the TaCT SIG can do for us?
18:09:17 <gouthamr> oh; #TIL
18:09:24 <gouthamr> #link https://docs.pypi.org/organization-accounts/ (PyPI Organization Accounts)
18:09:43 <noonedeadpunk> they're still not there though?
18:09:59 <frickler> that feature has been in beta testing for two years now iiuc; seems the foundation is searching for someone to work on the implementation
18:10:02 <noonedeadpunk> but that would make sooo much sense
18:11:11 <frickler> adding another account now would have to be done manually again for all repos, so I wouldn't mind if we could wait on that for maybe another year or so
18:11:52 <frickler> of course there might be some larger issue if we ever got locked out of the current account, so this is not completely without risk
18:12:41 <gouthamr> :( yes; and the wait for this feature doesn't look definitive
18:13:30 <gouthamr> i am in favor of adding a second maintainer account and then transitioning these accounts to organization accounts if necessary; maybe when this does happen, there'll be a sane warehouse API too
18:14:24 <JayF> Honestly, I would not count on those things happening at all unless someone here is going to volunteer to do them. As mentioned by frickler, pypi-related features are promised more frequently than delivered in general.
18:14:39 <JayF> So I would advise us to decide based on what exists today, and not wait for something that may not come.
18:14:57 <noonedeadpunk> ++
18:15:13 <frickler> fungi: ^^ any opinion from you as tact head? ;)
18:16:29 <fungi> i'd be inclined to wait, but willing to go along with the tc's preference
18:16:45 <fungi> working upstream with the warehouse maintainers to solve the missing collaborator api would be another option
18:16:57 <opendevreview> Goutham Pacha Ravi proposed openstack/governance-sigs master: Link SIG repos from index  https://review.opendev.org/c/openstack/governance-sigs/+/922162
18:17:31 * JayF is OK with going along with fungi's suggestion
18:17:35 <fungi> if warehouse grew an api for those tasks, we could script something akin to the accessbot we use for managing irc channel access lists
18:17:45 <gouthamr> that'd ease a world of pain
18:18:05 <fungi> because keep in mind that this isn't a one-time task
18:18:27 <fungi> the second collaborator would need to be manually added every time we publish the initial release of any new deliverable in the future too
18:19:16 <fungi> and remember that pypi projects aren't created ahead of time any more, now they're auto-created by the first uploaded release
18:19:31 <fungi> so just adding it to the project creation steps isn't going to work
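For illustration, a minimal sketch of the accessbot-style audit fungi describes, assuming a hypothetical warehouse collaborator endpoint; no such API exists today, and that gap is exactly what the issue linked below tracks:

```python
# Hypothetical sketch only: warehouse has no collaborator API yet
# (https://github.com/pypi/warehouse/issues/13409), so the endpoint
# below is invented purely to illustrate the accessbot-style idea.
import requests

REQUIRED_OWNER = "openstackci"
PROJECTS = ["nova", "neutron", "manila"]  # would come from governance/release data

def owners_of(project: str) -> set[str]:
    # Invented endpoint; nothing like this exists on pypi.org today.
    resp = requests.get(f"https://pypi.org/_api/hypothetical/{project}/collaborators")
    resp.raise_for_status()
    return {c["username"] for c in resp.json() if c["role"] == "owner"}

for name in PROJECTS:
    if REQUIRED_OWNER not in owners_of(name):
        print(f"{name}: {REQUIRED_OWNER} is not an owner")
```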
18:20:20 <gmann> I think we should leave it to tact SIG as it is handled there and the way they would like to proceed. if any decision/policy is needed from TC we can check/help.
18:20:22 <frickler> fungi: do you have any contacts among the warehouse maintainers with whom more details could be discussed?
18:20:38 <frickler> but yeah, we can discuss that off-meeting
18:20:39 <fungi> frickler: the issue that's filed in github is my contact
18:20:58 <opendevreview> Merged openstack/governance-sigs master: Add PyYAML to doc/requirements.txt  https://review.opendev.org/c/openstack/governance-sigs/+/922205
18:21:06 <fungi> i mean, i do know some of the maintainers, but they prefer such things to go through issues and pull requests anyway
18:21:13 <frickler> that's not too much of a contact in my world, but ok
18:21:16 <gouthamr> #link https://github.com/pypi/warehouse/issues/13409 ([META] Determine API design for performing maintainer actions)
18:21:33 <frickler> I'll read up on that tomorrow, thx
18:22:15 <gouthamr> okay - good discussion
18:22:50 <gmann> one suggestion: it seems we are spending a lot of time on this in every meeting; can we let the TaCT SIG handle it and join the discussion in the #openstack-infra channel?
18:22:53 <spotz[m]> o/ sorry, had the meeting time as later due to time zone
18:23:03 <gouthamr> #agreed we'll not implement the second maintainer on PyPI; we'll instead wait on organization accounts or a warehouse API
18:23:31 <JayF> gmann: I like that we handle it in meetings; supply-chain security is a top-line topic for technical governance in a project like ours, IMO.
18:24:16 <gouthamr> the other AIs from last week pertain to zuul config errors
18:24:17 <gmann> JayF: but the TaCT SIG is handling it carefully on behalf of the TC; I do not see any issue with that
18:25:18 <gouthamr> let's switch to the gate health topic so we can review those
18:25:22 <gouthamr> #topic Gate Health
18:25:52 <gouthamr> i sent this link with our weekly summary:
18:25:56 <dansmith> I haven't had a lot of time this last week, but a bunch of nova patches have been on permafail status
18:26:01 <gouthamr> #link https://zuul.opendev.org/t/openstack/config-errors (Zuul config errors)
18:26:17 * gouthamr sorry; interrupted dan there
18:26:17 <gmann> from the tempest jobs I observed last week, I did not see any frequent failures
18:26:50 <frickler> also, config errors are a different topic than gate health?
18:27:07 <gmann> there were some timeouts in the integrated compute job, and the suspect was the http_client timeout increase
18:27:15 <dansmith> gmann: up to 12 rechecks: https://review.opendev.org/c/openstack/nova/+/912094/3?tab=change-view-tab-header-zuul-results-summary
18:27:16 <gouthamr> i think neutron-dynamic-routing and neutron-fwaas (perhaps others) started having issues with some recent wsgi refactoring in neutron
18:27:25 <JayF> Ironic had an issue related to pysnmp getting bumped in requirements (at our request), we missed that virtualbmc needed migration as well.
18:27:30 <gmann> but we will be observing that this week or next
18:27:39 <gmann> gouthamr: that is fixed right?
18:27:44 <frickler> vpnaas, yes; that should be fixed in neutron soon, though
18:28:03 <gouthamr> gmann: is it? we skipped installing neutron-dynamic-routing in manila jobs as a workaround
18:28:20 <frickler> #link https://review.opendev.org/c/openstack/neutron/+/922185
18:28:35 <slaweq> gouthamr: yes, there is/was some issue with those but I haven't had a lot of time this week and I don't know details
18:28:39 <gmann> gouthamr: oh ok, it is not merged yet.
18:28:42 <gouthamr> ah tyty  (cc: carloss ^)
18:28:55 <frickler> approved, but not merged yet, and then needs a new neutron release
18:29:37 <frickler> for the ironic issue the reqs bump has been reverted
18:29:50 <frickler> #link https://review.opendev.org/c/openstack/requirements/+/922186
18:30:02 <gouthamr> +1
18:30:16 <JayF> Thanks for that, I'm going to be working on the virtualbmc issue so we can get it rolled forward again. pysnmp did an asyncore->asyncio migration which is causing pain with how it's structured
18:30:16 <fungi> projects are installing neutron releases in integration tests?
18:30:24 <gtema> the neutron one got w+1 some minutes ago
18:30:37 <JayF> s/virtualbmc/virtualpdu/
18:30:45 <fungi> thought they all installed neutron branch tips, so just merging should be enough?
18:31:03 <gmann> I think it is from master with required-projects, so a release might not be needed
18:31:12 <gouthamr> (^ that's true of manila jobs.. they use neutron-dynamic-routing from git)
18:31:20 <gtema> fungi - the issue is when neutron comes from branch tips but vpnaas does not
18:31:30 <gtema> but tomorrow it should be all fine
18:31:53 <fungi> right, that's what i thought. okay thanks
18:32:40 <gouthamr> we might have to log some bugs for integrated jobs that time out sporadically
18:33:41 <frickler> there was a suspicion that it might be related to specific providers, but sean mooney gathered some statistics which looked inconclusive to me
18:33:42 <gouthamr> or is there a hunch that these have a pattern? i.e., when using this feature on slow test nodes on xyzzy provider, things time out
18:33:56 <frickler> though that was only for the devstack part so far
18:35:06 <frickler> see the log of the -qa channel for more details (last 3 days or so)
18:35:19 <gouthamr> ack ty frickler
18:36:06 <gouthamr> "devstack-plugin-ceph-tempest-py3" is a base job for integrated jobs elsewhere and it timed out 25% of the time: https://zuul.opendev.org/t/openstack/builds?job_name=devstack-plugin-ceph-tempest-py3&project=openstack/devstack-plugin-ceph
18:37:12 <gmann> I think this job is still not so stable, and in many places we have it non-voting
18:37:41 <frickler> a lot of the successful runs are also very close to 2h, so either reduce the test coverage or increase the timeout a bit?
18:38:37 <fungi> or find a way to make tests more efficient/faster
18:39:12 <dansmith> we made a lot of tests slower recently because it makes them more repeatable
18:39:18 <dansmith> so you know, it's a tradeoff :)
18:39:33 <dansmith> "fast and racy" vs "slow and predictable"
18:39:34 <gmann> frickler: yeah, maybe. I think it also includes the slow tests which we excluded in the tempest full job: https://github.com/openstack/devstack-plugin-ceph/blob/13f94aaaf2b4a59581b1be5979600ca18a8df5c3/.zuul.yaml#L36
18:39:40 <gmann> and with 2 hrs timeout
18:41:34 <gmann> we should split this job to run slow-marked tests and non-slow tests separately
18:41:59 <dansmith> don't we already do that for integrated-storage?
18:42:07 <frickler> was just writing this: splitting into two jobs might also be an option
18:42:19 <gmann> yes, we do that for other integrated jobs
18:42:30 <dansmith> and I think those time out too, is my point
18:42:41 <dansmith> but sure, obviously can't be worse
18:43:21 <slaweq> +1
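For context, a minimal illustration of the split being proposed, assuming the conventional tempest tagging where slow tests carry a "slow" tag inside the bracketed part of their IDs and jobs include or exclude them with a regex (the test IDs here are made-up examples):

```python
# Minimal illustration of separating slow-tagged tempest tests from the
# rest; tempest jobs do this with a regex over test IDs, whose bracketed
# suffix lists tags such as "slow". Test IDs below are made-up examples.
import re

SLOW = re.compile(r"\[.*\bslow\b.*\]")

tests = [
    "tempest.api.compute.test_servers.ServersTest.test_list[id-1,compute]",
    "tempest.scenario.test_volume_boot.VolumeBoot.test_boot[compute,slow,volume]",
]

slow_job = [t for t in tests if SLOW.search(t)]
fast_job = [t for t in tests if not SLOW.search(t)]
print("slow job would run:", slow_job)      # the [.., slow, ..] test
print("non-slow job would run:", fast_job)  # everything else
```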
18:43:43 <gouthamr> okay looks like we've identified a strategy to try.. do we have a volunteer to test this strategy? :)
18:44:18 <gmann> i can propose
18:44:31 <gouthamr> gmann++ ty
18:45:01 <gouthamr> we'll circle back on this next week; are there any other gate concerns to discuss?
18:47:01 <gouthamr> #topic 2024.2 TC Tracker
18:47:05 <gouthamr> #link https://etherpad.opendev.org/p/tc-2024.2-tracker (Technical Committee activity tracker)
18:48:55 <frickler> gmann: are you still working on the affiliation diversity topic? the next election is coming up faster than we might think
18:50:03 <gmann> frickler: yeah, I have the election dates in mind; I will most probably propose something this week
18:50:08 <gouthamr> amidst all the things he's up to :)
18:50:27 <JayF> Hopefully we won't need it, but it'll be nice to know we won't deadlock should it occur again.
18:50:32 <gouthamr> gmann: if you intend to pawn it off, i can help draft something that we can poke holes in
18:50:43 <gmann> considering it might take time to merge, but let's see how it goes
18:50:56 <gmann> gouthamr: sure, will let you know
18:50:59 <gouthamr> ^ yes; we'll hopefully time-box the reviews on this
18:51:21 <gmann> ++
18:51:41 <gouthamr> #link https://review.opendev.org/c/openstack/governance/+/902585 (Remove Eventlet From Openstack)
18:51:45 <gouthamr> ^ this has been refreshed
18:52:30 <gouthamr> JayF: i don't know if this now addresses/subsumes your WIP proposal: https://review.opendev.org/c/openstack/governance/+/916546
18:52:49 <JayF> TBH I thought I had abandoned that; I will now
18:52:56 <gouthamr> ++
18:53:39 <gouthamr> gmann: spotz[m]: i think i want to freeze https://review.opendev.org/c/openstack/governance/+/918488 if you can take another look
18:53:59 <gmann> gouthamr: yes, I opened it this morning, will review today
18:54:13 <gouthamr> spotz[m]: i tested and used the suggested-fix feature on gerrit on your review: https://review.opendev.org/c/openstack/governance/+/915021
18:54:25 <gouthamr> (clarkb ^ this has been working like a charm for me all over)
18:55:12 <gouthamr> spotz[m]: maybe you can help test the "accept suggestion" workflow :D - super neat improvement on gerrit
18:56:29 <gouthamr> nothing else pops out to me as urgent on the governance repo.. please correct me if you disagree
18:57:47 <gmann> this one needs one more review https://review.opendev.org/c/openstack/governance/+/920848
18:58:15 <gmann> and this one is eligible to merge, gouthamr, whenever you run the script to check mergeable ones https://review.opendev.org/c/openstack/governance/+/917516
18:58:39 <gouthamr> ack will do gmann
18:58:48 <gmann> thanks
18:59:03 <spotz[m]> I’ll take a look
18:59:35 <gouthamr> alright we're at the hour; so unfortunately, we'll skip open discussion
19:00:02 <gouthamr> thank you all for the great discussion; if there's anything urgent to chat about, feel free to ping right after on this channel
19:00:20 <gouthamr> see you all in-sync here again next week!
19:00:24 <gouthamr> #endmeeting