18:00:12 <knikolla> #startmeeting tc
18:00:12 <opendevmeet> Meeting started Tue Aug 29 18:00:12 2023 UTC and is due to finish in 60 minutes.  The chair is knikolla. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:12 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:12 <opendevmeet> The meeting name has been set to 'tc'
18:00:16 <knikolla> #topic Roll Call
18:00:17 <JayF> o/
18:00:19 <dansmith> o/
18:00:20 <knikolla> o/
18:00:22 <knikolla> Hi all, welcome to the weekly meeting of the OpenStack Technical Committee
18:00:23 <spotz[m]> o/
18:00:23 <gmann> o/
18:00:24 <rosmaita> o/
18:00:25 <knikolla> A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct
18:00:30 <knikolla> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
18:00:40 <knikolla> We have one noted absence from jamespage
18:00:48 <noonedeadpunk> o/
18:00:48 <slaweq> o/
18:01:04 <knikolla> Otherwise we’re all here :)
18:01:11 <knikolla> #topic Follow up on past action items
18:01:15 <knikolla> No action items noted to follow up on.
18:01:34 <knikolla> #topic Gate health check
18:01:46 <knikolla> Any updates on the state of the gate?
18:01:56 <gmann> I have not observed any frequent failure this week. it is much better
18:02:05 <slaweq> I think it is much better recently
18:02:09 <knikolla> Great to hear that!
18:02:09 <dansmith> things are better, I no longer fear for the rc phase
18:02:20 <gmann> yeah
18:02:25 <dansmith> however, all the same volume fails are occurring
18:02:33 <dansmith> and I hope that work can continue on getting those to improve
18:02:45 <dansmith> I'd say we're back to Dec 2022 levels of fail ;)
18:03:04 <knikolla> #success Improvement on the state of the gate and no more fear for RC phase.
18:03:07 <opendevstatus> knikolla: Added success to Success page (https://wiki.openstack.org/wiki/Successes)
18:03:45 <fungi> we had a stale centos 7/8 mirror for most of a week owing to the file count in some directories exceeding afs's protocol limits. we started filtering out i686 rpms on our mirrors which worked around it, but just a heads up that if jobs need multi-arch packages for i686 stuff on x86_64, there could be new errors due to missing packages
18:04:33 <fungi> the odds of that seemed low enough we risked it
18:04:49 <fungi> also all zuul tenants except openstack have switched to ansible 8 for running zuul jobs by default
18:04:50 <clarkb> it was also broken anyway
18:05:03 <clarkb> (so risk was low we'd break the mirrors and the centos node further)
18:05:11 <fungi> no reports of problems with ansible 8 as of yet
18:05:33 <dansmith> I'd like to report a problem with ansible 8
18:05:38 <fungi> and i think clarkb did a test of devstack/tempest with it
18:06:02 * fungi directs dansmith to the suggestion box
18:06:07 <dansmith> okay, I have no actual problems to report, I'd just *like* it if I had one to report
18:06:08 <clarkb> yes I pushed a change to tempest to run devstack + tempest jobs under ansible 8 and none of the jobs failed due to ansible problems. A couple of non-voting jobs failed in the test suite
18:06:36 <clarkb> Due to the lack of trouble so far we're planning to swap openstack over on Monday (it's a holiday for some so should be slightly quieter) and see if we can just roll forward from there
18:07:13 <clarkb> if we run into major problems we can revert, but you can also test things ahead of time just like I did with tempest if you are worried about something in particular. Finally this was all sent to the service-announce@lists.opendev.org list which you'll want to subscribe to if you aren't already
18:07:16 <fungi> there's a security fix being backported to some em branches of oslo.messaging which is meeting with issues in some jobs, prognosis unclear yet but the solution may end up being disabling some jobs/tests there
18:07:54 <JayF> fungi: some weird docs failure that doesn't reproduce locally, at least in stable/wallaby
18:08:02 <clarkb> https://review.opendev.org/c/openstack/tempest/+/892981 <- testing tempest with ansible 8
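    For reference, a minimal sketch of how a project could test its own jobs under Ansible 8 ahead of the tenant-wide switch, along the lines of the tempest change above. Zuul job definitions accept an ansible-version attribute; the job and parent names here are placeholders, not an existing change:
        # .zuul.yaml (hypothetical project): run one check job under Ansible 8
        - job:
            name: my-project-devstack-ansible8
            parent: devstack-tempest
            ansible-version: "8"
        - project:
            check:
              jobs:
                - my-project-devstack-ansible8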
18:08:46 <fungi> JayF: yeah, we can definitely still dig deeper, but it may end up being an "opportunity" to exercise the option of disabling some jobs if the solution is nontrivial
18:09:05 <JayF> ++
18:09:08 <dansmith> if we can't build docs though...
18:09:31 <JayF> Wallaby is going to be retired as soon as we take action on the unmaintained resolution, yeah?
18:09:45 <gmann> yeah
18:09:54 <JayF> So it seems very nonimpactful to break the docs build when no docs are getting updated (I doubt we'll have a doc update in the next ~month)
18:10:00 <dansmith> yeah, fine with just dumping wallaby, I meant probably not good to disable the docs job and merge to wallaby anyway
18:10:09 <dansmith> since it won't actually get a doc update saying it has the fix
18:10:16 <JayF> That's releasenotes, not docs
18:10:17 <fungi> odds are it's related to something like tox v4 that just never got fixes backported and nobody's noticed until now
18:10:22 <JayF> yeah that's my hunch too fungi
18:10:34 <gmann> it will not be opt-in by default, but if anyone opts in manually then it will stay and not be retired
18:11:36 <gmann> but docs on stable branches do not run with tox 4. I think we pinned that
18:12:24 <noonedeadpunk> there was recent issue with constraints for docs, but no idea if that's the case
18:12:28 <fungi> yeah, so likely something else or the pin didn't get applied
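    As a rough sketch of the kind of pin being discussed (not necessarily the exact one applied on the stable branches), the zuul-jobs ensure-tox role accepts an ensure_tox_version variable, so a docs job variant could constrain tox below 4 roughly like this; the job names are hypothetical:
        # hypothetical stable-branch docs job pinning tox below 4
        - job:
            name: openstack-tox-docs-tox3
            parent: openstack-tox-docs
            vars:
              ensure_tox_version: '<4'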
18:12:58 <JayF> I appreciate the dogpile of assistance, I'll paste the patch here post-meeting if folks want to offer suggestions
18:13:08 <JayF> probably not best to fix it in meeting :D
18:13:13 <gmann> ++
18:13:33 <knikolla> ++
18:13:56 <fungi> right, i mainly wanted to point out that we do see some bitrot on em branches which complicate merging security fixes (not that we require them anyway)
18:14:20 <knikolla> With Wallaby going away soon, we shouldn’t spend too much time on it
18:14:41 * JayF needs to get the patch working downstream in Victoria; it's just a question of whether it's upstream too
18:15:14 <knikolla> Anything else on the topic of the gate?
18:15:52 <knikolla> #topic 2022 user survey analysis
18:16:05 <knikolla> Thanks slaweq for getting that done :)
18:16:16 <knikolla> #link https://review.opendev.org/c/openstack/governance/+/892670/
18:16:18 <slaweq> it's not done yet definitely :)
18:16:29 <knikolla> started :)
18:16:35 <slaweq> thx gmann for first review, I will address those comments tomorrow
18:16:56 <gmann> slaweq: thanks for working on that.
18:17:02 <knikolla> I will review that later today as well. Anything that you feel worth highlighting slaweq?
18:17:10 <slaweq> and JFYI: it's not ready for sure but numbers are correct in each table there
18:17:32 <slaweq> generally numbers are very similar to last year
18:18:33 <slaweq> one thing which I didn't see in previous years but did see in the numbers this year is projects which have users running them in production but no contributors
18:19:02 <gmann> that is an important question
18:19:09 <slaweq> this is something we should maybe check more closely, and perhaps reach out to those users to make them aware of that potential problem
18:19:28 <gmann> ++
18:19:45 <knikolla> seems more relevant now than ever with the number of projects that have no PTL candidacies with ~1 day to the end of the nomination period
18:19:54 <fungi> you mean like the software is in use but the software is no longer an official part of openstack/retired, or still official projects with no development activity upstream at all?
18:20:17 <gmann> I think it is users of OpenStack projects not contributing in upstream at all
18:20:41 <slaweq> fungi some of those projects are still official projects now and there are 0 declared contributors
18:20:58 <slaweq> of course, it may be that contributors to those projects didn't take the survey
18:21:12 <fungi> as in contributors declared within the survey (there may still be contributors not connected to any survey responses)?
18:21:14 <slaweq> I'm just saying about what I saw in the survey results for now
18:21:23 <gmann> or there might be cases where they are contributing to a few of them but not to all the ones they are using
18:22:06 <knikolla> but it points to an opportunity to make them aware of the lack of contributors in the survey.
18:22:07 <slaweq> so I'm not saying it's any "red flag", just something potentially to check closer :)
18:22:38 <gmann> I think these details, and the reasons for not contributing, are a little hard to get from the survey, but the survey results can be a good first step and, as slaweq mentioned, we can plan the outreach to them to learn the reasons and help them start contributing
18:23:14 <knikolla> Perhaps the under-NDA version of the survey has more useful information that we can use for outreach?
18:23:20 <knikolla> survey results*
18:23:29 <slaweq> knikolla I have no idea :)
18:24:16 <slaweq> the other interesting thing is the response about "difficult process" to the question "What prevents you or your organization from contributing more maintenance resources" - it was in the top 5 answers to that question
18:24:45 <slaweq> I will try to find more precise answers from that "category" and put them in the analysis
18:24:47 <gmann> humm
18:25:20 <slaweq> because it is an open question and for now I just tried to aggregate the answers into some common "types of answers"
18:25:25 <gmann> and it will be nice to give more thoughts/discussion on those in video call or PTG
18:25:30 <spotz[m]> The under-NDA version may or may not have enough information to link who responded. That version is also from before the wording was cleaned up, so while one person says community engagement, the next says engagement, and a third says community, and we try to see if they're saying the same thing
18:26:00 <knikolla> thanks spotz.
18:26:51 <knikolla> By process do they mean Gerrit/Devstack/Zuul, or internal organization processes?
18:26:53 <knikolla> We’ve spent a significant chunk of time in past PTGs talking about Devstack
18:26:53 <slaweq> thx spotz
18:27:26 <slaweq> knikolla I will need to get back to it and I will write a bit more details about that in the analysis
18:27:33 <slaweq> for now I don't remember really
18:28:14 <slaweq> that's basically all from me about this analysis for today
18:28:25 <knikolla> Thanks slaweq!
18:28:31 <gmann> well, devstack/testing is never going to be a very easy thing, especially given gate stability, so it might not give new contributors a very easy welcome, but I think that is the same in any OSS project
18:29:07 <knikolla> especially of our scale.
18:29:14 <JayF> I kinda disagree? I've personally spoken with several openstack operators who indicated they didn't want to learn a new process (versus the 'typical' github+pr development process) to push a small patch
18:29:17 <gmann> yeah. writing code is easy, testing it is hard
18:29:53 <gmann> I am just talking about testing things. The other processes are good to discuss, along with how we can simplify them
18:30:08 <fungi> most people who don't contribute to open source projects would cite difficult process as being the reason, whether that's because the projects require code review, detailed commit messages, test-driven development, whatever
18:30:23 <dansmith> yeah, nothing new in that regard
18:30:46 <gmann> commit messages are a good example also. we need detailed and correct commit msgs and not everyone likes to do that :)
18:30:54 <dansmith> I personally place a very high value on tested code, especially for situations we can't otherwise replicate, which is the majority of the drive-by can't-write-a-test submissions I've seen
18:31:07 <gmann> agree
18:31:36 <knikolla> perhaps being on Gerrit will save us from the incoming ChatGPT powered drive-by happening on GitHub.
18:31:53 <spotz[m]> hehe
18:31:58 <slaweq> ++
18:32:10 <knikolla> :)
18:32:13 <JayF> knikolla: that blade does cut in both directions, you're right, but at this point I'd rather have low-quality submissions that we can curate/mentor into good submissions than very little external contribution at all :/
18:32:22 <dansmith> I wouldn't.
18:32:59 <gmann> IMO, that is dangerous and introduces more instability in openstack
18:33:01 <fungi> it's being used to generate reputations for dummy accounts used to inflate "star" counts on projects (there are paid services behind most of the gold rush)
18:33:45 <knikolla> Yeah, I’m torn on the subject. Most of my open source work is on GitHub with the exception of OpenStack, so I live in both worlds.
18:33:46 <gmann> if anyone is using openstack and has a customer base, I do not think it is hard to learn the process and write tests for the software used in their production
18:34:13 <knikolla> We do really need to work on mentoring new contributors, but we’ve had retention problems for the ones we have.
18:34:37 <spotz[m]> I have had contributions rejected in a PR because I had too many commits, since Gerrit allows me to patch on top of my patches. I love gerrit :)
18:34:54 <gmann> retention problem due to process? or change in their role in company ?
18:35:07 <JayF> I'm not even trying to say we should change; I'm just saying we need to acknowledge we're paying a steep price to have a unique process.
18:35:11 <knikolla> change in role, mostly, I think.
18:35:36 <fungi> quite a lot of the latter, helping find new employment opportunities for established contributors who want to keep working on openstack would go a long way
18:35:38 <spotz[m]> Well we used to do OUI and the Git and Gerrit lunch and learn to help onboard people at events
18:36:02 <noonedeadpunk> For real - I don't think that gerrit vs github will really prevent contributions. If a person cannot read one page of docs on how to contribute because reading is too hard - we hardly need such contributors
18:36:12 <gmann> yeah, that is what I observed; otherwise, anyone who has learned the process and started contributing is not leaving because they hate the openstack process.
18:36:18 <noonedeadpunk> As mentoring them is like tilting at windmills
18:36:31 <spotz[m]> It's more the config noonedeadpunk
18:36:31 <gmann> noonedeadpunk: exactly
18:37:00 <fungi> i contribute to projects on at least half a dozen different code review platforms, and i'm not even a full-time developer
18:37:03 <JayF> noonedeadpunk: the people I interact with who have trouble are usually people who are operating openstack and want to contribute functional fixes up; gerrit requires at a base level a stronger understanding of git than github does
18:37:09 <noonedeadpunk> though missing the ability to make a chain of patches would affect things dramatically
18:37:29 <JayF> noonedeadpunk: I've been proxying some of those changes up personally when I understand enough to
18:37:31 <noonedeadpunk> Not sure about you, but I do create a couple of chains weekly
18:37:45 <gmann> I think it is all about whether a company wants to spend time/money on upstream activity vs the real reason being a difficult process
18:38:05 <noonedeadpunk> JayF: I disagree here completely that github requires less knowledge of git
18:38:20 <noonedeadpunk> For me it's way-way-way higher bar of knowing git
18:38:32 <JayF> noonedeadpunk: We shouldn't have that argument here, I'll sidebar with you if you don't mind
18:38:45 <noonedeadpunk> As you must know how to sync your fork with upstream, so you must configure an upstream remote at the very least
18:39:11 <knikolla> Deep down I think it’s more about visibility than the actual tool used. GitHub can drive further job opportunities in a way that developing in a siloed code review tool can’t.
18:39:37 <spotz[m]> I'm with noonedeadpunk on this but let's get back to the main topic
18:39:46 <knikolla> I’ll make a note to amend the contributors guide to explain how to make openstack commits appear in people’s GitHub profile.
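    A rough sketch of what that guide entry could cover (assuming GitHub's usual commit-attribution rules and the github.com/openstack mirrors): GitHub attributes commits by the author email, so the commit email needs to be verified on your GitHub account, and starring or forking the mirrored repo makes the mirrored commits count toward your contribution graph. The email address below is a placeholder:
        # commit with an address that is also verified on your GitHub account
        git config --global user.email "you@example.com"
        # the same address should be registered in your Gerrit settings on
        # review.opendev.org so pushed patches carry it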
18:40:35 <knikolla> With that, let’s move on to the next topic.
18:40:46 <knikolla> #topic Documenting implementation processes for Unmaintained
18:41:02 <knikolla> A quick update from me. I started amending the project team guide for the new process
18:41:07 <knikolla> specifically the stable branches page.
18:41:34 <knikolla> I’m detailing the process of opt-in to Unmaintained, which we briefly touched upon last week
18:41:58 <noonedeadpunk> knikolla: and for visibility we have bitergia now :)
18:42:05 <knikolla> That will be a -1 to the releases patch to EOL the branch from the PTL or release liaison.
18:42:51 <knikolla> If no -1 is posted within a 4 week period, the branch for the project goes EOL and not into unmaintained.
18:43:31 <knikolla> This seems to align with the tooling and the way we EOL things now, so it will not take significant resources to implement.
18:43:33 <JayF> ++
18:44:16 <knikolla> I hope to get the wording done by the end of this week.
18:44:24 <gmann> you mean this process applies after the first automatic opt-in, right? i.e. when an unmaintained branch moves to the next term of being unmaintained
18:44:34 <knikolla> yes.
18:44:39 <gmann> ++
18:45:01 <gmann> 4 weeks seems a reasonable time considering anyone on PTO/holiday etc
18:45:08 <knikolla> this also allows for opt-out to happen with the same mechanism of a patch to releases.
18:45:26 <knikolla> and can be done at any point.
18:46:29 <knikolla> that’s all from me on the topic.
18:46:38 <JayF> this all sounds reasonable to me +1 thank you
18:47:25 <knikolla> #topic Open Discussion and Reviews
18:47:34 <knikolla> It’s time to register for the vPTG
18:47:35 <knikolla> #link https://openinfra.dev/ptg/
18:47:51 <clarkb> sean just pointed out in #openstack-qa that tox seems to be exploding on something. Possibly pep517 and/or pbr related
18:47:54 <slaweq> I just did :) thx for reminder
18:48:38 <clarkb> not sure what the actual issue is or how widespread it is. Will likely need someone to debug (it is likely reproducible locally given it is failing in a tox step)
18:49:05 <spotz[m]> done!
18:49:29 <spotz[m]> Did you get the TC added to the teams list?:)
18:49:38 <knikolla> yes
18:50:03 <knikolla> thanks spotz!
18:51:09 <spotz[m]> Welcome:)
18:51:14 <knikolla> Alright, thanks all!
18:51:17 <knikolla> #endmeeting