15:00:14 <gmann> #startmeeting tc
15:00:14 <opendevmeet> Meeting started Thu Oct 13 15:00:14 2022 UTC and is due to finish in 60 minutes.  The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:14 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:14 <opendevmeet> The meeting name has been set to 'tc'
15:00:19 <JayF> o/
15:00:21 <gmann> #topic Roll call
15:00:23 <gmann> o/
15:00:25 <noonedeadpunk> o/
15:00:38 <knikolla> o/
15:00:41 <slaweq> o/
15:00:56 <dansmith> o/
15:01:34 <gmann> in the Absence section today:
15:01:36 <gmann> arne_wiebalck will miss the meeting on Oct 13
15:01:36 <gmann> rosmaita will miss Oct 13
15:02:13 <gmann> let's start
15:02:15 <gmann> #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
15:02:20 <gmann> today's agenda ^^
15:02:23 <spotz> o/
15:02:34 <gmann> #topic Follow up on past action items
15:02:40 <gmann> gmann to update the wording of the EM branch status and reality of maintenance policy/expectation
15:03:05 <gmann> I proposed a patch yesterday #link https://review.opendev.org/c/openstack/project-team-guide/+/861141
15:03:09 <gmann> please review ^^
15:03:36 <gmann> #topic Gate health check
15:03:43 <gmann> any news on gate?
15:03:46 <dansmith> so, I have seen a bunch of POST_FAILUREs lately
15:03:54 <gmann> ohk
15:04:07 <dansmith> I don't have specific pointers, I might be able to find some, but they also seem undebuggable since there are no logs
15:04:17 <dansmith> I dunno if anyone else has been seeing that, or what might be causing it
15:04:35 <slaweq> nope, I didn't, at least not in neutron related projects
15:05:15 <dansmith> here's an example: https://zuul.opendev.org/t/openstack/build/3c40559f664543359c0109b28dc07656
15:05:55 <dansmith> anyway, I'll start collecting links for next week if I keep seeing them
15:06:16 <gmann> seems nothing is showing in the console either
15:06:16 <dansmith> there was one run where there were like six jobs that all POST_FAILUREd, so it seemed systemic
15:06:30 <noonedeadpunk> it's actually interesting, as logs are present
15:07:00 <noonedeadpunk> and there's not much left to be executed afterwards
15:07:12 <dansmith> yeah, but I didn't see any reasons for the failure.. some of the other ones were just no logs, which are hard to debug
15:07:24 <slaweq> maybe it's timing out on sending logs to swift?
15:07:45 <slaweq> I think we had that kind of issue a few weeks ago in the Neutron functional tests job
15:07:45 <noonedeadpunk> Well, when there are no logs it's highly likely related to swift.
15:07:52 <dansmith> in the no logs case, perhaps
15:08:00 <dansmith> anyway, maybe just something to keep an eye out for
15:08:01 <noonedeadpunk> We had issues when uploading logs to swift took more than 30 mins
15:08:15 <noonedeadpunk> And it was some specific provider just being slow
15:08:20 <gmann> yeah
15:08:29 <fungi> failures during log upload can be challenging to expose, since it's a chicken-and-egg problem (zuul relies on being able to serve the logs to provide results as it doesn't store that data elsewhere)
15:08:44 <dansmith> yup
15:08:48 <noonedeadpunk> But even with logs present it can be the same - the timeout for post jobs is 30 mins
15:09:13 <fungi> but if someone has a recent list of ones that are suspect, i can try to find causes (likely tracebacks from zuul) in the executor service logs
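For context on the 30-minute limit mentioned above: Zuul bounds the post-run phase (which includes the log upload to swift) with a per-job post-timeout, separate from the main run timeout. A minimal sketch of a job definition raising it, assuming standard Zuul v3 YAML; the job name and values are illustrative, not taken from the discussion:

```yaml
# Illustrative Zuul job definition (hypothetical job name).
# post-timeout bounds the post-run playbooks, which include the
# upload of logs to swift, separately from the run playbook.
- job:
    name: example-devstack-job
    timeout: 7200        # run playbook limit, in seconds
    post-timeout: 3600   # allow post-run (log upload) up to 60 minutes
```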
15:09:18 <gmann> in one I can see the 'process-stackviz: pip' role taking 10 min
15:09:24 <gmann> but not sure if that is causing timeout
15:10:34 <fungi> there was a recent complaint about aodhclient hitting timeouts from pip's dep solver taking too long because they don't use constraints. maybe the problem is more widespread (i don't know if the stackviz installation is constrained)
15:10:50 <noonedeadpunk> I actually don't think it's a timeout for the mentioned example - it ended at 00:16:19 and the post jobs started 3 mins earlier
15:11:15 <gmann> usually it takes 20-30 sec
15:11:50 <gmann> fungi: that is not constrained I think, it is the latest published one we use?
15:12:24 <fungi> gmann: it has dependencies though, right?
15:12:31 <gmann> yeah
15:12:46 <fungi> those are what i'm talking about possibly taking time to dep solve if they aren't constrained
15:13:09 <gmann> I checked some other results and they were ok
15:13:13 <fungi> anyway, expect that there may be multiple causes for the observed failures, but we can try to classify them and divide/conquer
15:13:23 <gmann> yeah, let's check and debug later if it is occurring more
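As background to fungi's constraints point: installing against OpenStack's published upper-constraints file pins each transitive dependency to a single version, so pip's backtracking resolver has nothing to explore. A minimal sketch; the constraints URL is the one OpenStack publishes for master, and stackviz stands in for any unconstrained install:

```shell
# Unconstrained: the resolver may backtrack through many candidate
# versions of each dependency, which can take minutes.
pip install stackviz

# Constrained: every dependency is pinned to one version.
pip install -c https://releases.openstack.org/constraints/upper/master stackviz
```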
15:13:40 <gmann> any other failure observed in gate?
15:14:38 <gmann> Bare 'recheck' state
15:14:46 <gmann> #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:14:49 <gmann> slaweq: please go ahead
15:14:51 <slaweq> things are good
15:15:16 <slaweq> what is IMO worth mentioning is the fact that we have fewer and fewer teams with 100% bare rechecks in the last 7 days
15:15:24 <gmann> nice
15:15:25 <slaweq> so it is improving slowly :)
15:15:30 <gmann> great
15:16:16 <gmann> Zuul config error
15:16:21 <gmann> #link https://etherpad.opendev.org/p/zuul-config-error-openstack
15:16:39 <JayF> Do we have any documentation on not doing bare rechecks, that can be sent to contributors who do bare rechecks?
15:16:40 <gmann> frickler added the affected projects for zuul config errors in the etherpad
15:16:58 <JayF> I continue to work, with other Ironic contributors, to get these config errors wrapped up in Ironic.
15:17:10 <gmann> JayF: yes, this one #link https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
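For anyone forwarding that guide: its convention is to state the observed failure instead of leaving a bare 'recheck'. An illustrative Gerrit comment (the failure details are hypothetical):

```
recheck
tempest-full hit POST_FAILURE with no logs uploaded; appears to be the
swift log-upload issue rather than anything in this change
```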
15:17:28 <gmann> knikolla: please go ahead if you have any updates
15:17:55 <knikolla> i'm updating the etherpad as i go. pushed patches for zaqar and senlin today
15:17:57 <fungi> be aware the configuration may be on any branch, that list in the pad isn't differentiating between branches but the details in the zuul webui will tell you the project+branch+file
15:18:02 <knikolla> starting with projects who have errors in the master branch
15:18:13 <gmann> +1, nice
15:18:24 <knikolla> adjutant's gate had been broken for a year, so pushed a fix for that as well.
15:18:35 <fungi> broken for... a year?!?
15:18:47 <knikolla> yes. the requirements.txt specified django <2.3
15:18:48 <fungi> doesn't that mean the project is just de facto retired then?
15:18:49 <gmann> knikolla: thanks
15:19:01 <knikolla> the upper constraint specified == 3.2
15:19:17 <gmann> there is only one maintainer (PTL) there, you can add him to the review to get it merged
15:19:20 <noonedeadpunk> django version?
15:19:39 <noonedeadpunk> ah yes, sorry, missed
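To make the conflict concrete, a sketch of the clashing pins described above (specifiers paraphrased from the discussion, not copied from the repos); pip cannot satisfy both, so every constrained install fails:

```
# adjutant's requirements.txt capped Django below 2.3:
Django>=2.2,<2.3

# while the global upper-constraints.txt pinned Django at 3.2:
Django===3.2
```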
15:20:14 <knikolla> will do. i'm using this as an exercise in getting to know the PTLs of the smaller teams
15:20:26 <gmann> perfect
15:20:28 <fungi> not so gentle reminder, projects with development blocked by broken job configuration which is unaddressed for a very long time should be a strong signal that it can just be retired
15:20:28 <gmann> let's continue that, and everyone can pick up a few projects to fix in their available time
15:21:02 <gmann> we have a new volunteer who took over this project, let's see how it goes now.
15:21:07 <knikolla> fungi: totally agree
15:21:12 <fungi> keeping track of those situations would be a good idea, even if someone steps in to fix the testing for them
15:21:23 <gmann> we had a long time with no maintainer there, so a broken gate is very much possible
15:21:31 <gmann> sure
15:21:47 <gmann> anything else on zuul config error or gate health ?
15:22:00 <gmann> thanks knikolla for helping here
15:22:08 <noonedeadpunk> yeah, but adjutant was working nicely with u-c for django - we were testing it in osa
15:22:38 <gmann> ack
15:22:46 <fungi> it just had no development for a year straight, and no maintainers to address any bugs in it
15:23:21 <noonedeadpunk> we will go back to zun now then
15:23:30 <gmann> true, but as we have a new maintainer let's wait this cycle and see how it goes
15:23:35 <noonedeadpunk> +1
15:23:59 <gmann> #topic 2023.1 cycle PTG Planning
15:24:05 <gmann> #link https://etherpad.opendev.org/p/tc-leaders-interaction-2023-1
15:24:13 <gmann> #link https://etherpad.opendev.org/p/tc-2023-1-ptg
15:24:53 <gmann> please add the topics to the etherpad; by Friday I might try to schedule the topics that are present
15:25:32 <gmann> one news item: I talked to Kubernetes steering committee members about attending the TC PTG session like we have had in the past couple of PTGs
15:26:15 <gmann> and two members, Tim and Christoph, agreed to attend the session on Friday 21 Oct 16:00 - 17:00 UTC
15:26:23 <knikolla> awesome!
15:26:45 <gmann> feel free to add the topic you would like to discuss with them - #link https://etherpad.opendev.org/p/tc-2023-1-ptg#L72
15:26:59 <JayF> I might add a topic w/r/t fungi's point from earlier
15:27:12 <JayF> about more proactively identifying projects that are unmaintained or insufficiently maintained
15:27:26 <slaweq> ++
15:27:30 <gmann> sure, we can discuss that
15:27:51 <spotz> I suspect some of the others might already be headed to Detroit that Friday
15:28:06 <gmann> As discussed in the last PTG, one thing we did for that is the emerging technology and inactive projects framework
15:28:27 <gmann> JayF: this one #link https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html
15:28:35 <gmann> but it will be good to discuss it further
15:29:10 <gmann> we have enough slots for TC, at least as per the current topics present in the etherpad, so feel free to add more
15:29:28 <gmann> but try to add them before Friday 4-5 PM central time
15:30:12 <gmann> Schedule 'operator hours'
15:30:26 <spotz> I can speak a little on this
15:30:29 <gmann> we have ~12 projects signed up for operator hours, which is good
15:30:38 <gmann> I sent an ML reminder also
15:30:44 <gmann> spotz: please go ahead
15:31:23 <gmann> #link https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030790.html
15:31:52 <spotz> So we've put a link to all the operator hours in the main ops etherpad. Kendall put together a blog which has been mailed out to Large Scale and Public Cloud SIG members as well as attendees from past Ops Meetups. We've also tweeted and retweeted
15:32:10 <gmann> yeah, +1
15:32:37 <gmann> many are tweeting it, I did this week. Spreading the information will be very helpful
15:32:45 <spotz> They've been invited to attend the enviro-sus meeting Monday morning for an intro-to-the-PTG type session, and on Thursday we hope to get feedback for the future, though there are more sessions on Friday
15:33:04 <gmann> #link https://etherpad.opendev.org/p/oct2022-ptg-openstack-ops
15:33:11 <gmann> central etherpad ^^
15:33:59 <gmann> spotz: thanks, please ask them to join the IRC channel also in case they have any difficulties joining the PTG or switching to projects' operator-hours
15:34:17 <fungi> i've also been reaching out where it makes sense. brought all those sessions up during the scs-tech meeting earlier today, for example
15:34:29 <gmann> +1
15:34:37 <gmann> fungi: thanks
15:34:42 <fungi> there seemed to be interest in the large-scale and public cloud sig meetings too
15:35:11 <gmann> especially if they join the #openinfra-events channel, where we can help them with joining issues and such
15:35:17 <gmann> great
15:35:42 <spotz> gmann I'll mention that to Kendall for the Monday morning session
15:35:50 <spotz> I'll see her tonight
15:35:53 <gmann> spotz: cool, thanks
15:36:13 <gmann> anything else on PTG topic?
15:36:45 <gmann> #topic 2023.1 cycle Technical Election & Leaderless projects
15:37:02 <gmann> one thing is left in this: the appointment of the Zun project PTL.
15:37:03 <gmann> #link https://review.opendev.org/c/openstack/governance/+/860759
15:37:41 <gmann> hongbin volunteered to serve as PTL and the patch has a good amount of the required votes. it needs to wait until 15 Oct and I will merge it if there is no negative feedback
15:38:07 <gmann> #topic Open Reviews
15:38:11 <gmann> #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:38:42 <gmann> we need reviews on this #link https://review.opendev.org/c/openstack/governance/+/860599
15:39:43 <noonedeadpunk> actually, ^ that is a good topic
15:39:54 <noonedeadpunk> or well, there's quite a valid comment on it
15:40:05 <gmann> discuss it in PTG?
15:40:14 <noonedeadpunk> on top of that, we should actually decide what OS the grenade job for N+2 should run on
15:40:37 <gmann> sure, we can discuss it there next week. I will add it. thanks
15:40:52 <noonedeadpunk> as it's either leaving py3.8 on 2023.1 or backporting py3.10 to Y
15:41:19 <noonedeadpunk> and focal vs yammy
15:41:23 <noonedeadpunk> *jammy
15:41:29 <dansmith> mmm, yammy
15:41:32 <fungi> yummy yams
15:41:35 <gmann> :)
15:41:37 <dansmith> I think we have to run it on focal right?
15:41:43 <dansmith> much easier to do that than anything else I think
15:41:44 <noonedeadpunk> For me - yes
15:41:46 <gmann> yammy could be good name though
15:41:51 <dansmith> but we should discuss next week
15:42:06 <gmann> yeah, let's discuss and clarify things accordingly in the doc
15:42:11 <gmann> all other open reviews are in good shape
15:42:12 <noonedeadpunk> yeah, that's why I always mix up the first letter, as yammy is really a way better name :D
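A minimal sketch of what "run it on focal" could look like, assuming the usual devstack/grenade Zuul setup; the job name is hypothetical and openstack-single-node-focal follows the style of nodeset names devstack defines:

```yaml
# Illustrative: pin an N+2 (skip-level) grenade job to focal, since
# the older branch (Yoga) targets py3.8, which jammy does not ship.
- job:
    name: grenade-skip-level-example
    parent: grenade
    nodeset: openstack-single-node-focal
```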
15:42:24 <slaweq> actually I have a question about https://review.opendev.org/c/openstack/governance/+/836888
15:42:24 <gmann> :)
15:42:36 <slaweq> should we maybe find a new volunteer who would work on this?
15:42:36 <spotz> I can't read Jammy without thinking of the dog I had with that name:(
15:42:41 <slaweq> or abandon it maybe?
15:42:44 <gmann> next week we will be at the PTG, so I will send the meeting cancellation to the ML
15:43:16 <gmann> slaweq: yeah, good point
15:43:38 <gmann> jungleboyj: not sure if you will be able to update it or work on it? if not, then one of us can pick it up
15:43:59 <jungleboyj> Sorry, got pulled into another meeting.
15:45:25 <jungleboyj> Assume this is related to the review of the User Survey stuff?
15:45:32 <gmann> jungleboyj: yes
15:46:01 <jungleboyj> Ok.  I was planning to look at that this week before the PTG.  Other fires started.
15:46:19 <gmann> jungleboyj: thanks. ping us if you need help
15:46:22 <jungleboyj> I am going to try to look at it tomorrow or during the PTG next week so I can wrap it up.
15:46:28 <jungleboyj> gmann:  I will.  Apologies.
15:46:28 <gmann> cool
15:46:29 <slaweq> thx jungleboyj++
15:46:33 <gmann> np!
15:46:47 <gmann> ok, with next week's meeting cancelled, our next weekly meeting will be on Oct 27.
15:46:57 <gmann> with that, that is all for today's meeting
15:47:03 <opendevreview> Merged openstack/governance master: Add project for managing zuul jobs for charms  https://review.opendev.org/c/openstack/governance/+/861044
15:47:09 <jungleboyj> ++
15:47:10 <slaweq> o/
15:47:20 <gmann> if nothing else to discuss, then let's close a little early (13 min before time)
15:47:46 <spotz> woohoo
15:48:03 <gmann> thanks everyone for joining.
15:48:08 <gmann> #endmeeting