18:01:09 <gouthamr> #startmeeting tc
18:01:09 <opendevmeet> Meeting started Tue Nov 19 18:01:09 2024 UTC and is due to finish in 60 minutes.  The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:09 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:09 <opendevmeet> The meeting name has been set to 'tc'
18:01:28 <gouthamr> Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct.
18:01:36 <gouthamr> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
18:01:41 <gouthamr> #topic Roll Call
18:01:44 <noonedeadpunk> o/
18:01:46 <gmann> o/
18:01:51 <gtema> o/
18:02:14 <bauzas> \o
18:02:39 <gouthamr> noted absence: c a r d o e, s l a w e q
18:02:49 <frickler> o/
18:03:23 <gouthamr> courtesy-ping: spotz [m]
18:03:45 <gouthamr> not in the channel, likely somewhere in the conference circuit :)
18:04:14 <gouthamr> lets get started..
18:04:15 <frickler> may be an issue with the matrix bridge hiding ppl
18:04:25 <gouthamr> ack
18:04:50 <gouthamr> #topic Last Week's AIs
18:06:01 <gouthamr> we took an AI to see if we can revive the erstwhile Third-party CI SIG through the help of project teams that have a requirement for third party CI
18:06:13 <spotz[m]> o/
18:07:06 <gouthamr> we had a pretty long discussion following our meeting, and jbernard brought it up at the cinder weekly meeting.. i don't suppose we have updates already.. we did suggest that project teams must re-evaluate the requirement if we're unable to support contributors putting together their CI
18:07:32 <gmann> I think this is settled now? I saw it was discussed in the cinder meeting, and some operators having 3rd party CI mentioned they would help/share docs etc with the cinder team
18:07:53 <jbernard> i can give a quick update from cinder if there is interest
18:08:02 <gouthamr> please do, jbernard
18:08:05 <jbernard> sure,
18:08:56 <gouthamr> #link https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.log.html#l-23 (cinder meeting discussion of third party CI)
18:08:59 <jbernard> i reached out to Inori directly to understand the cinder-specific issues, i think progress has been made on that front, my plan is to incorporate that feedback into the doc we create
18:09:20 <jbernard> as a team, we agreed that we need to do better in documenting our ci requirements and setup procedure
18:09:35 <jbernard> a few volunteered to help with a doc
18:10:15 <jbernard> i'm in the process of collecting all that we have currently, with the plan of consolidating it into a single working document for our ci reference, both for newcomers and for existing vendors
18:10:39 <gouthamr> ++ that's great
18:10:46 <clarkb> would also be good to share with other projects that require/use third party ci so they can see if it can be adapted to their needs too
18:11:00 <gmann> jbernard: ++, thanks. really appreciate that.
18:11:10 <clarkb> I don't think cinder necessarily has to solve any manila or ironic or nova problems, but at least make them aware that the documentation is there and can be reconsumed
18:11:11 <jbernard> clarkb: certainly, once we have something that's usable, ill send a mail to the list
18:11:18 <clarkb> ++ thanks
18:11:22 <gouthamr> thanks jbernard
18:11:24 <gmann> clarkb: ++
18:11:42 <gmann> maybe we can add something in p-t-g; even if it is a cinder example, it will be a good ref
18:11:42 <gouthamr> that's all the AIs we took last week.. was anyone tracking anything else?
18:11:50 <jbernard> ideally we could have a working poc project to use as a starting point, but i'm not yet sure how feasible that is or how much time it will take
18:12:07 <jbernard> but if it looks like it could be in reach, we'll try to go for it
18:12:23 <jbernard> that's all on ci that I have
18:12:28 <gouthamr> yeah, if the docs are frustrating to follow, someone will step in and automate it :)
18:13:40 <gouthamr> moving on..
18:13:44 <gouthamr> #topic Status of migration to Ubuntu Noble (gmann)
18:13:51 <gouthamr> gmann: floor is yours
18:14:13 <gmann> I sent the status mail yesterday night
18:14:15 <gmann> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/thread/JOMDY26TCW7OX3NXRGOYQCIDXNNJ4E25/
18:14:30 <gmann> or you can see the current status in the etherpad, which will be more live
18:14:38 <gmann> #link https://etherpad.opendev.org/p/migrate-to-noble#L37
18:14:59 <bauzas> thanks gmann for the hard work
18:15:20 <gmann> 14 projects are all green on Noble (for many, fixes/changes are up to merge once base devstack/tox jobs are ready to migrate)
18:15:40 <gmann> other projects are failing, but there are cases which might be non-Noble-related issues
18:15:50 <gtema> for me it is honestly a pain - openstackdocstheme is broken due to pbr, and fixing that one is also not trivial
18:16:11 <gmann> I am going through those and trying to fix them / filter out the Noble-specific issues
18:16:13 <gouthamr> i'd throw in one more: manila.. we identified a couple of issues and i've pinned jammy to debug them
18:16:19 <clarkb> gtema: fwiw I've seen that pbr change but pbr works fine with python3.12 in zuul. It still isn't clear to me why that change is necessary
18:16:28 <gtema> I also wonder about some statuses in the etherpad, since e.g. the first change to keystone today failed on the docs job, where we now need to drop sphinxcontrib-*diag
18:16:32 <gmann> gtema: yes. doc job is not yet ready due to openstackdocstheme
18:16:42 <bauzas> gtema: because of ceph, heh ? :)
18:16:45 <gmann> #link https://bugs.launchpad.net/pbr/+bug/2088360
18:16:46 <bauzas> whoops
18:16:53 <bauzas> s/gtema/gouthamr
18:16:53 <clarkb> but also I think that change is close? it's the logging changes that are holding it up now
18:17:10 <gtema> clarkb - locally it passes
18:17:23 <gmann> gtema: clarkb:  can you add the link?
18:17:23 <gtema> with py36 I mean (no test for py27)
18:17:42 <gtema> #link https://review.opendev.org/c/openstack/pbr/+/924216
18:17:51 <gouthamr> bauzas: actually, i'm scratching my head about ceph jobs.. i've to test locally to reproduce issues you and rdhasman reported.. but, gmann's changes to move ceph jobs to noble pass in the CI..
18:17:57 <gtema> it is referred in the launchpad issue
18:18:11 <bauzas> gouthamr: :cross_fingers_emoji_ish:
18:18:20 <gmann> gtema: ++
18:18:44 <gouthamr> bauzas: here's a no-op against devstack-plugin-ceph: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/935398 ,.. the failure is in the docs job
18:18:46 <clarkb> gtema: if I had to guess setuptools in other platforms doesn't configure logging at that "low" (info) level or similar
18:19:04 <clarkb> and going through the old distutils mechanism did?
18:19:16 <gtema> and literally - without doing anything, the docs jobs in keystone are broken today (due to blockdiag), and it is also a pain to replace them - there are a lot of those
18:19:17 <gmann> gtema: I logged it for sdk as openstackdocstheme uses StoryBoard and it is very hard to log and track in two places
18:19:21 <clarkb> but again it isn't clear to me why this is needed
18:19:29 <gmann> but thanks for checking and adding detail there
18:19:30 <clarkb> pbr works under python3.12 if you install setuptools
18:19:51 <gtema> clarkb - openstackdocstheme does import from pbr, which is not working
18:20:04 <clarkb> got it this is runtime use of pbr not install time
18:20:06 <clarkb> that is helpful
18:20:09 <gtema> clarkb: soo https://bugs.launchpad.net/pbr/+bug/2088360/comments/3
18:20:15 <fungi> is that with setuptools already installed?
18:20:28 <clarkb> fungi: based on that traceback I'm thinking no
18:20:30 <gtema> some bits in pbr still import distutils
18:20:36 <frickler> can we maybe discuss the technical details after the meeting? this is getting a bit out of scope for me
18:20:43 <clarkb> frickler: sure
18:20:48 <gmann> yeah, we can discuss after meeting
18:21:21 <fungi> yeah, newer python dropped distutils, if you need distutils on newer python you can get it by installing setuptools, which now provides it
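(A minimal sketch, not pbr's actual code, of the distutils point above: on Python 3.12 the standard-library distutils module is gone, and installing setuptools is what makes an "import distutils" work again, because setuptools re-provides it.)

    # Illustrative only -- not taken from pbr. Checks whether distutils can be
    # imported on this interpreter; on Python 3.12+ this succeeds only when
    # setuptools is installed, since setuptools re-provides distutils.
    import importlib.util
    import sys

    if importlib.util.find_spec("distutils") is None:
        print("Python %s: distutils missing; install setuptools to restore it"
              % sys.version.split()[0])
    else:
        import distutils
        print("distutils importable from", distutils.__file__)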
18:21:32 <gmann> so my main purpose in adding this topic here is that the deadline for migrating the base jobs is Nov 29 and there are still many projects that are not green yet. I would like to know if we want to extend the deadline?
18:21:57 <gtema> gmann - looking at the amount of issues, I do not believe we would make it
18:21:58 <gouthamr> awesome, thanks for sharing the status of the goal here and for all the hard work around this, gmann ..
18:22:05 <gouthamr> where does that deadline come from?
18:22:06 <gmann> personally I do not want to extend, and if any project stays in a broken state then we have the option to pin the nodeset to jammy and they can continue fixing it
18:22:13 <gtema> especially since some changes have been open for months and nobody reviewed them
18:22:14 <frickler> IMO nothing's going to happen in December either, so I don't think that would help much
18:22:21 <gouthamr> ^ +1
18:22:51 <clarkb> frickler: the inverse may be true though, right? it's a low-impact time to make changes like that and let them get fixed more slowly?
18:22:57 <gmann> well, we will be migrating the base jobs early in the cycle, and moving the deadline makes the situation worse
18:23:06 <clarkb> vs trying to do it closer to the release when everyone is cramming in features last second
18:23:24 <noonedeadpunk> I agree that it's better to do it at the beginning
18:23:32 <frickler> yes, so keep the deadline, switch, put more pressure on projects to fix things
18:23:36 * gouthamr isn't clear how we picked the deadline..
18:23:38 <gmann> gtema: true, but at some point we have to break things to fix them, and if any project is not actively checking their failures then they can pin the nodeset
18:24:09 <gtema> hmm, that is so untrue - pbr is broken and it affects everybody
18:24:16 <frickler> gouthamr: iiuc it is already delayed from "early in the cycle"
18:24:16 <gtema> and the change there is open since July
18:24:19 <gmann> and if there is some complex issue and a project needs time, we also have a workaround instead of holding up the complete CI
18:24:52 <noonedeadpunk> wait, so you're saying that pbr is broken with py3.12?
18:25:09 <gtema> yes, pbr is not working with 312 as of now
18:25:17 <bauzas> why exactly ?
18:25:19 <fungi> installing pbr without setuptools is broken
18:25:21 <gtema> it has imports of distutils which are dropped
18:25:22 <bauzas> I got no issues with noble
18:25:27 <bauzas> ah that
18:25:33 <gmann> distutils is not there
18:25:37 <fungi> distutils moved from the python stdlib into setuptools
18:25:39 <noonedeadpunk> but wasn't 3.12 in pti for 2024.2?
18:25:45 <clarkb> this is fud
18:25:50 <gmann> noonedeadpunk: it was
18:25:53 <clarkb> pbr works fine it requires setuptools you must install setuptools
18:26:10 <gmann> where there is no setuptools, it is an issue
18:26:12 <gtema> noonedeadpunk - it makes so much difference whether you do tox -e py312 or you use py312 as a base for all jobs
18:26:20 <clarkb> it could be improved but it is not required to work with python3.12
18:26:22 <noonedeadpunk> Yeah, I think it's true what clarkb is saying
18:27:05 <noonedeadpunk> gtema: well, I'm running 24.04 for almost half a year, and never saw tox failing due to pbr so far
18:27:31 <noonedeadpunk> maybe it's not always used....
18:27:31 <gtema> right, so now pls take e.g. the keystone project and try to build docs with py312
18:27:34 <gmann> let's review it on gerrit but I do not think that should be a blocker for the migration, as things work with setuptools
18:27:39 <noonedeadpunk> sec
18:27:54 <gouthamr> okay; couple of you have expressed that this is ETOOMUCHDETAIL for this meeting; lets discuss implementation concerns right after we end..
18:27:59 <gmann> we still have 10 days and we can see how many get fixed. I am concentrating only on this migration and have requested the project teams to do the same
18:28:07 <gouthamr> gmann: it looks like there's no need to delay this
18:28:23 <gmann> so I think we are good to stay on the Nov 29 deadline and not extend it, right?
18:28:37 <gmann> gouthamr: yeah, holiday time is also a good time to fix things
18:28:39 <frickler> switching docs builds to noble/3.12 doesn't need to be coupled with that
18:28:48 <gmann> frickler: exactly
18:28:50 <gtema> -1 for blindly sticking
18:28:53 <frickler> so just switch unit tests + devstack
18:28:58 <gmann> we do not need to hold everything up for a small portion of projects
18:29:29 <bauzas> how big is that portion ?
18:29:42 <gmann> gtema: the doc job will not be migrated as part of this
18:29:49 <bauzas> I tend to agree with you but I want to be sure that's a minority
18:29:51 <gmann> that is already separated out
18:29:53 <gouthamr> docs jobs, failing project jobs noted in the etherpad..
18:29:55 <gmann> #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/935459/1
18:30:04 <gtema> I hate this mix - it is too dirty and not helpful
18:30:31 <gmann> well, holding the integration testing for this is not good idea
18:30:47 <gouthamr> gtema: i think we'd want to move incrementally..
18:30:48 <gmann> and delaying the migration makes it worse, having to do it during release time
18:31:08 <noonedeadpunk> gtema: it failed but with completely different issue: https://paste.openstack.org/show/bKzQbfNEiaAoG7QQbrz4/
18:31:20 <gmann> we have faced that in the past, when I personally wanted to have EVERYTHING GREEN before any job migration
18:31:30 <gouthamr> gtema: they're really not testing the same things, so we're batching things that are alike already.. no need to club all OpenStack CI Ubuntu jobs under one bucket, imo
18:32:04 <gtema> noonedeadpunk - that is what I mean, we just switch to a different py version and all of a sudden there are plenty of issues not related to whether the project itself supports this version or not
18:32:24 <noonedeadpunk> I don't think it's the py version, I would put it on the recent u-c update
18:32:47 <gmann> it passed here
18:32:49 <gmann> #link https://review.opendev.org/c/openstack/keystone/+/932934
18:33:20 <gtema> I am frustrated - for me nearly everything is broken now; for a week already I have just been fixing (working around) things everywhere in and out of my reach
18:33:50 <gtema> I can't work on any features, only repairing what was working before
18:33:56 <noonedeadpunk> I have exact same issue with python 3.11 fwiw
18:34:17 <fungi> gmann: 932934 ran openstack-tox-docs on jammy, fwiw
18:34:39 <gmann> fungi: yeah just checked https://zuul.opendev.org/t/openstack/build/a3d99e308ad74ee0ad7919f7b07490fd/log/job-output.txt#49
18:34:54 <gmann> it was after I separated out the doc job migration
18:34:55 <noonedeadpunk> could it be that? https://opendev.org/openstack/requirements/commit/376cf882ece239abee0e2c2ffb83f3bbefc1819c
18:35:16 <noonedeadpunk> well, plenty of u-c were bumped today
18:36:00 <gmann> anyways, I am keeping the deadline the same and we will see how things go in those 10 days. doc job migration can be done later once we are ready
18:36:08 <noonedeadpunk> let's try recheck your patch and see if it still works :)
18:36:18 <gtema> noonedeadpunk - everybody knows that those blockdiag extensions are not really expected to work and the majority of projects replaced them with graphviz, so it is not a question of what broke it. We need to replace it. I am just tired of only fixing things left and right
18:36:36 <noonedeadpunk> I know, I know...
18:37:04 <noonedeadpunk> I'm not arguing that plenty of things are broken and bumping u-c results in failures like that quite often...
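(On the blockdiag point gtema raises above: a minimal conf.py sketch of that kind of swap, replacing sphinxcontrib-blockdiag with the graphviz extension that ships with Sphinx; the diagram below is a made-up example, not keystone's actual docs.)

    # conf.py: drop the unmaintained blockdiag extension in favour of the
    # graphviz extension bundled with Sphinx itself.
    extensions = [
        # 'sphinxcontrib.blockdiag',   # removed: no longer maintained
        'sphinx.ext.graphviz',
    ]

    # Existing ".. blockdiag::" directives then need rewriting by hand, e.g.:
    #
    # .. graphviz::
    #
    #    digraph auth_flow {
    #       client -> keystone -> backend;
    #    }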
18:37:28 <gouthamr> okay, lets wrap up this discussion, and move to the next topic
18:37:32 <gouthamr> #topic Watcher's launchpad tracker recovery
18:37:37 <gouthamr> #link https://answers.launchpad.net/launchpad/+question/819336
18:38:00 <gmann> gtema: feel free to add bugs you are facing in etherpad that will be helpful to know where we are
18:38:02 <bauzas> no answers unfortunately from my side :(
18:38:11 <gouthamr> ^ so launchpad admins provided the projects (watcher, watcher-dashboard and python-watcherclient) to openstack-admins
18:38:25 <gmann> gouthamr: this is still not openstack-admin
18:38:26 <gouthamr> can you folks kick out the watcher-drivers here?
18:38:29 <gmann> #link https://launchpad.net/~watcher-drivers
18:38:33 <bauzas> I invited him thru LinkedIn but no response so far and I can't InMail
18:38:45 <gmann> or are you expecting to create a new drivers team?
18:38:49 <gouthamr> yes
18:39:05 <gmann> I was expecting to have the ownership of watcher-drivers changed
18:39:10 <gouthamr> i'm asking openstack-admins to take over now, and allow sean-k-mooney and folks to create a different team
18:39:12 <gouthamr> gmann: me too
18:39:18 <spotz[m]> Yeah I don't see any of the admins there even individually
18:39:19 <gouthamr> but, is this sufficient?
18:39:49 <gmann> hmm, this is a workaround we can do, but did they deny changing the ownership of the drivers team?
18:40:18 <gouthamr> bauzas: david's been reading my linkedin messages, but, like i mentioned earlier.. i don't think he can do anything, his launchpad account is associated with his former employer.. he may have no way to log in
18:40:18 <gmann> gouthamr: maybe we can try that also, if you can ask them to do this change also?
18:40:34 <gouthamr> gmann: because we want consistency?
18:40:45 <fungi> being in the openstack-admins group doesn't allow/prevent creating new groups in lp; just make sure to add openstack-admins as the owner when creating
18:40:49 <bauzas> gouthamr: ack
18:41:03 <gmann> gouthamr: yeah, otherwise we have to create another drivers team with a different name
18:41:15 <sean-k-mooney> i think normally there are 3 teams
18:41:25 <gouthamr> gmann: yeah, my problem was the delay we've had to get LP folks to act on this
18:41:27 <gmann> gouthamr: I mean that is doable but let's try if they can hand over the existing driver team
18:41:27 <sean-k-mooney> drivers, an open bug team, and a coresec team
18:41:36 <gouthamr> i must send Billy Olsen a thank you message :D
18:41:45 <frickler> seems they added openstack admins as maintainers of the 3 projects, so we should be able to go on from that
18:41:49 <gouthamr> sean-k-mooney: if that's what the team wants
18:41:53 <gmann> as you asked "We'd like help to reassign the "watcher-drivers" team to "openstack-admins".. Or a way to change the ownership of the three projects to "openstack-admins"."
18:42:05 <gouthamr> i know, but they ignored that :D
18:42:10 <fungi> projects can get by without a bug supervisor group if they allow open bug assigning and targeting
18:42:12 <gmann> and they did the 2nd portion. I think they can do the 1st also?
18:42:38 <sean-k-mooney> gouthamr: well we don't necessarily need 3, but i would like to have at least 2. but we can figure out the details later once openstack-admins is an owner of the current one
18:42:47 <gmann> gouthamr: can you post that explicitly, and if they do not do it within this week then we can go with another drivers team
18:43:02 <gouthamr> gmann: okay can do..
18:43:06 <gmann> gouthamr: thanks
18:43:29 <gmann> as we are also waiting for sean-k-mooney's proposal for the core team things this week, we can wait a week for LP as well
18:43:45 <sean-k-mooney> yep
18:43:49 <gouthamr> sean-k-mooney: ack; do let openstack-admins know what you'd like once that's done..
18:44:11 <gouthamr> this topic is not entirely about watcher
18:44:24 <sean-k-mooney> we are hoping to restart the watcher irc meetings so we can add it as an agenda item and report back
18:44:28 <gmann> I can handle that from the admin side once it is added to the drivers team or a new one is created. will work with sean-k-mooney on that
18:44:33 <gmann> sean-k-mooney: ++
18:44:49 <gouthamr> if there are any other launchpad trackers that are out of "openstack-admins" purview, we'd need to get them fixed.. please alert if you know any
18:45:06 <gouthamr> lets move on..
18:45:09 <gouthamr> #topic A check on gate health
18:45:23 <frickler> flaky I'd say
18:45:24 <gouthamr> any gate health concerns, not pertaining to the noble effort?
18:46:03 <frickler> lots of gate failures I saw during the release and reqs rush today
18:46:17 <clarkb> docker hub may or may not be enforcing more restrictive rate limits
18:46:20 <gmann> I have not checked much this week, but I saw that tempest jobs are not so bad and things are merging in Tempest/nova
18:46:26 <frickler> nothing standing out in particular, the usual volume stuff
18:46:34 <clarkb> that probably doesn't affect most openstack projects though. The deployment projects are affected
18:46:47 <gouthamr> there were 10 requirements bumps merged in the past few hours :O
18:46:50 <gtema> partially: https://review.opendev.org/c/openstack/releases/+/934416 - the (os-api-ref) release is necessary since some api-ref jobs are broken without people knowing it
18:47:00 <gmann> gouthamr: :) good timing :P
18:47:16 <gtema> nova api-ref is broken and the fix has been struggling to land for 2 weeks
18:47:22 <frickler> gouthamr: and more pending
18:47:30 <gouthamr> \o/
18:47:56 <bauzas> gtema: yup, indeed
18:48:14 <bauzas> I can say that nova api-ref changes are broken now
18:49:41 <frickler> I'll ping the other release team members about that patch, missed it due to the e-1 rush
18:50:04 <gouthamr> thanks gtema frickler
18:50:27 <gouthamr> so gate flakiness should be expected as we start rolling out these u-c bumps
18:50:52 <gouthamr> we'll check back next week and hopefully there isn't a need for any reverts...
18:51:00 <gouthamr> anything else about $topic
18:51:09 <frickler> IMO it is unrelated, but we'll see, ack
18:51:20 <bauzas> yet again a question of how we can ensure we don't trample projects every time we bump u-c
18:51:46 <bauzas> :)
18:51:49 <fungi> running a lot more test jobs is the only solution i'm aware of
18:51:52 <gouthamr> we have "cross" jobs.. but these are unit test jobs and maybe the coverage is insufficient - this is something project teams can assess and help with
18:52:01 <frickler> bauzas: have stable (non-flaky) integration jobs that we can afford to run on the reqs repo
18:52:14 <frickler> and make them faster than 2h if possible
18:52:18 <gmann> we have many projects testing on reqs but not all
18:52:38 <bauzas> yeah, I'd appreciate if we could run tempest scenarios on a u-c change
18:52:51 <gmann> we do right?
18:53:13 <gmann> tempest-full-py3 is there
18:53:14 <frickler> we run tempest-full-py3 and one sdk integration job
18:53:14 <bauzas> I dunno, b/c we recently hit hard problems to solve in a short timeframe
18:53:44 <bauzas> hmmm
18:53:54 <frickler> feel free to add more jobs, but also be prepared to have them removed again if they turn out to be failing too often
18:53:59 <bauzas> I then need to recollect *why* we missed that last time
18:54:19 * gouthamr recalls the issue being with openstacksdk changes
18:54:20 <bauzas> my brain is fried tonight so I miss context
18:54:26 <frickler> and also make sure to react to issues coming up. we have ~ 10 bumps blocked because projects need to fix stuff
18:54:27 <gouthamr> sry, openstackclient i think
18:54:32 <gmann> yeah, if jobs are stable those can be added, and they should not take more time than tempest-full-py3
18:54:58 <frickler> and I'm currently the only one that actually looks at those failures and pings projects, help could be really well spent there
18:55:02 <bauzas> gouthamr: correct, it was a major OSC change that broke our world, so we needed to revert to an old OSC version
18:55:11 <gmann> bauzas: ah, i remember now.
18:55:18 <gmann> we talked about adding a grenade job there
18:55:27 <bauzas> so I wonder why we didn't capture that in CI
18:55:44 <frickler> bauzas: because nova doesn't run an sdk/osc-tips job?
18:55:50 <gouthamr> ~~ time check; we've 5 mins ~~
18:56:11 <gtema> bauzas - because, like at the beginning of the meeting: a project may run well with an updated dep, but that doesn't mean that nobody else is using it in a totally different way
18:56:47 <bauzas> anyway, I agree with the timecheck, we shouldn't solve that problem now
18:56:57 <bauzas> I'll just pay attention to it
18:57:14 <gmann> I was adding more cross-service jobs on the sdks but those are not green yet
18:57:30 <gouthamr> +1 it's in the TC tracker
18:57:37 <gouthamr> but, lets move the "TC Tracker" topic to next week; i do want to find owners for some PTG takeaways too.. we've three minutes left, so let's head into
18:57:41 <gouthamr> #topic Open Discussion
18:58:18 <gouthamr> a few new changes have showed up here:
18:58:19 <gmann> frickler: bauzas this one
18:58:20 <gmann> #link https://review.opendev.org/c/openstack/python-openstackclient/+/931858
18:58:26 <gouthamr> #link https://review.opendev.org/q/project:openstack/governance+status:open (Open Governance changes)
18:58:49 <gouthamr> can i have some eyes on https://review.opendev.org/c/openstack/governance/+/931254
19:00:02 <gouthamr> this moves a lot of the eventlet removal content out of the goal doc and into a wiki because (a) it is background information and (b) alternatives will continue to be fleshed out; this wasn't meant to be prescriptive of specific directions
19:00:12 <bauzas> I wanted to review the eventlet goal but had no time yet :(
19:00:29 <gouthamr> ack; please do take a look soon ..
19:00:37 <gouthamr> that said, we've reached the hour..
19:00:40 <gouthamr> thank you all for attending
19:00:58 <gouthamr> we can continue discussing things here after i endmeeting...
19:01:07 <gouthamr> but see you here next week if you have to run..
19:01:10 <gouthamr> #endmeeting