15:00:14 #startmeeting tc
15:00:14 Meeting started Thu Oct 13 15:00:14 2022 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:14 The meeting name has been set to 'tc'
15:00:19 o/
15:00:21 #topic Roll call
15:00:23 o/
15:00:25 o/
15:00:38 o/
15:00:41 o/
15:00:56 o/
15:01:34 in the Absence section today:
15:01:36 arne_wiebalck will miss the meeting on Oct 13
15:01:36 rosmaita will miss Oct 13
15:02:13 let's start
15:02:15 #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
15:02:20 today's agenda ^^
15:02:23 o/
15:02:34 #topic Follow up on past action items
15:02:40 gmann to update the wording on the EM branch status and the reality of the maintenance policy/expectation
15:03:05 I proposed a patch yesterday #link https://review.opendev.org/c/openstack/project-team-guide/+/861141
15:03:09 please review ^^
15:03:36 #topic Gate health check
15:03:43 any news on the gate?
15:03:46 so, I have seen a bunch of POST_FAILUREs lately
15:03:54 ohk
15:04:07 I don't have specific pointers, I might be able to find some, but they also seem undebuggable since there are no logs
15:04:17 I dunno if anyone else has been seeing that, or what might be causing it
15:04:35 nope, I didn't, at least not in neutron-related projects
15:05:15 here's an example: https://zuul.opendev.org/t/openstack/build/3c40559f664543359c0109b28dc07656
15:05:55 anyway, I'll start collecting links for next week if I keep seeing them
15:06:16 seems nothing is showing in the console either
15:06:16 there was one run where there were like six jobs that all POST_FAILUREd, so it seemed systemic
15:06:30 it's actually interesting, as logs are present
15:07:00 and there's not much left to be executed afterwards
15:07:12 yeah, but I didn't see any reason for the failure.. some of the other ones were just no logs, which are hard to debug
15:07:24 maybe it's timing out on sending logs to swift?
15:07:45 I think we had that kind of issue a few weeks ago in the Neutron functional tests job
15:07:45 Well, when there are no logs it's highly likely related to swift.
15:07:52 in the no-logs case, perhaps
15:08:00 anyway, maybe just something to keep an eye out for
15:08:01 We had issues where uploading logs to swift took more than 30 mins
15:08:15 And it was some specific provider just being slow
15:08:20 yeah
15:08:29 failures during log upload can be challenging to expose, since it's a chicken-and-egg problem (zuul relies on being able to serve the logs to provide results, as it doesn't store that data elsewhere)
15:08:44 yup
15:08:48 But eventually even with logs present it can be the same - the timeout for post jobs is 30 mins
15:09:13 but if someone has a recent list of ones that are suspect, i can try to find causes (likely tracebacks from zuul) in the executor service logs
15:09:18 one thing I can see is the 'process-stackviz: pip' role taking 10 min
15:09:24 but not sure if that is causing the timeout
15:10:34 there was a recent complaint about aodhclient hitting timeouts from pip's dep solver taking too long because they don't use constraints. maybe the problem is more widespread (i don't know if the stackviz installation is constrained)
15:10:50 I actually don't think it's a timeout for the mentioned example - it ended at 00:16:19 and the post jobs started 3 mins earlier
15:11:15 usually it takes 20-30 sec
15:11:50 fungi: that is not constrained i think, it is the latest published one we use?
15:12:24 gmann: it has dependencies though, right?
15:12:31 yeah
15:12:46 those are what i'm talking about possibly taking time to dep-solve if they aren't constrained
15:13:09 checked some other results and they were ok
15:13:13 anyway, expect that there may be multiple causes for the observed failures, but we can try to classify them and divide/conquer
15:13:23 yeah, let's check and debug later if it keeps occurring
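For the constraints point above, a minimal sketch of the difference, assuming the job installs clients with plain pip (the aodhclient package name and the published master upper-constraints URL are used here purely for illustration):

    # Unconstrained: pip's resolver is free to backtrack through old
    # releases of every transitive dependency, which can take minutes.
    pip install aodhclient

    # Constrained: the upper-constraints file pins each dependency to a
    # single version, so resolution is effectively a direct lookup.
    pip install -c https://releases.openstack.org/constraints/upper/master aodhclient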
15:13:40 any other failures observed in the gate?
15:14:38 Bare 'recheck' state
15:14:46 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:14:49 slaweq: please go ahead
15:14:51 things are good
15:15:16 what is IMO worth mentioning is the fact that we have fewer and fewer teams with 100% bare rechecks in the last 7 days
15:15:24 nice
15:15:25 so it is improving slowly :)
15:15:30 great
15:16:16 Zuul config error
15:16:21 #link https://etherpad.opendev.org/p/zuul-config-error-openstack
15:16:39 Do we have any documentation on not doing bare rechecks that can be sent to contributors who do them?
15:16:40 frickler added the affected projects for zuul config errors in the etherpad
15:16:58 I continue to work, with other Ironic contributors, to get these config errors wrapped up in Ironic.
15:17:10 JayF: yes, this one #link https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures
15:17:28 knikolla: please go ahead if any updates
15:17:55 i'm updating the etherpad as i go. pushed patches for zaqar and senlin today
15:17:57 be aware the configuration may be on any branch; that list in the pad isn't differentiating between branches, but the details in the zuul webui will tell you the project+branch+file
15:18:02 starting with projects that have errors in the master branch
15:18:13 +1, nice
15:18:24 adjutant's gate had been broken for a year, so I pushed a fix for that as well.
15:18:35 broken for... a year?!?
15:18:47 yes. the requirements.txt specified django <2.3
15:18:48 doesn't that mean the project is just de facto retired then?
15:18:49 knikolla: thanks
15:19:01 the upper constraint specified == 3.2
15:19:17 there is only one maintainer (the PTL) there, you can add him to the review to get it merged
15:19:20 django version?
15:19:39 ah yes, sorry, missed that
15:20:14 will do. i'm using this as an exercise in getting to know the PTLs of the smaller teams
15:20:26 perfect
15:20:28 not-so-gentle reminder: projects whose development is blocked by broken job configuration that goes unaddressed for a very long time should be a strong signal that they can just be retired
15:20:28 let's continue that, and everyone can pick up a few projects to fix in their available time
15:21:02 we have a new volunteer who took over this project, let's see how it goes now.
15:21:07 fungi: totally agree
15:21:12 keeping track of those situations would be a good idea, even if someone steps in to fix the testing for them
15:21:23 we had a long period with no maintainer, so a broken gate is very much possible
15:21:31 sure
15:21:47 anything else on zuul config errors or gate health?
15:22:00 thanks knikolla for helping here
15:22:08 yeah, but adjutant was working nicely given the u-c for django - we were testing it in osa
15:22:38 ack
15:22:46 it just had no development for a year straight, and no maintainers to address any bugs in it
15:23:21 we will come back to zun then
15:23:30 true, but as we have a new maintainer, let's wait this cycle and see how it goes
15:23:35 +1
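A hedged reconstruction of the adjutant breakage discussed above, with the exact pins and branch illustrative (only "django <2.3" in requirements.txt and an upper constraint of 3.2 are stated in the log): no Django version satisfies both, so every constrained install, and with it the gate, fails.

    # requirements.txt (as described): django>=2.2,<2.3
    # upper-constraints (as described): django===3.2.x
    pip install 'django>=2.2,<2.3' -c https://releases.openstack.org/constraints/upper/yoga
    # pip exits with a dependency-resolution error: the requirement and the
    # constraint cannot both hold, so there is nothing it can install.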
15:23:59 #topic 2023.1 cycle PTG Planning
15:24:05 #link https://etherpad.opendev.org/p/tc-leaders-interaction-2023-1
15:24:13 #link https://etherpad.opendev.org/p/tc-2023-1-ptg
15:24:53 please add topics to the etherpad; by Friday I'll try to schedule the topics already present
15:25:32 one piece of news: I talked to kubernetes steering committee members about attending the TC PTG session, as we have had in the past couple of PTGs
15:26:15 and two members, Tim and Christoph, agreed to attend the session on Friday 21 Oct 16:00 - 17:00 UTC
15:26:23 awesome!
15:26:45 feel free to add the topics you would like to discuss with them - #link https://etherpad.opendev.org/p/tc-2023-1-ptg#L72
15:26:59 I might add a topic w/r/t fungi's point from earlier
15:27:12 about more proactively identifying projects that are unmaintained or insufficiently maintained
15:27:26 ++
15:27:30 sure, we can discuss that
15:27:51 I suspect some of the others might already be headed to Detroit that Friday
15:28:06 As discussed at the last PTG, one thing we did for that is the emerging technology and inactive projects framework
15:28:27 JayF: this one #link https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html
15:28:35 but it will be good to discuss it further
15:29:10 we have enough slots for the TC, at least per the current topics in the etherpad, so feel free to add more
15:29:28 but try to add them before Friday 4-5 PM central time
15:30:12 Schedule 'operator hours'
15:30:26 I can speak a little on this
15:30:29 we have ~12 projects signed up for operator hours, which is good
15:30:38 i sent an ML reminder also
15:30:44 spotz: please go ahead
15:31:23 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030790.html
15:31:52 So we've put a link to all the operator hours in the main ops etherpad. Kendall put together a blog which has been mailed out to Large Scale and Public Cloud SIG members as well as attendees of past Ops Meetups. We've also tweeted and retweeted
15:32:10 yeah, +1
15:32:37 many are tweeting it. I did this week. spreading the information will be very helpful
15:32:45 They've been invited to attend the enviro-sus meeting Monday morning for an intro-to-the-PTG type session, and on Thursday we hope to get feedback for the future, though there are more sessions on Friday
15:33:04 #link https://etherpad.opendev.org/p/oct2022-ptg-openstack-ops
15:33:11 central etherpad ^^
15:33:59 spotz: thanks, please ask them to join the IRC channel also, in case they have any difficulties joining the PTG or switching to projects' operator-hours sessions
15:34:17 i've also been reaching out where it makes sense. brought all those sessions up during the scs-tech meeting earlier today, for example
15:34:29 +1
15:34:37 fungi: thanks
15:34:42 there seemed to be interest in the large-scale and public cloud sig meetings too
15:35:11 especially if they join the #openinfra-events channel, we can help them there with joining issues and such
15:35:17 great
15:35:42 gmann: I'll mention that to Kendall for the Monday morning session
15:35:50 I'll see her tonight
15:35:53 spotz: cool, thanks
15:36:13 anything else on the PTG topic?
15:36:45 #topic 2023.1 cycle Technical Election & Leaderless projects
15:37:02 one thing left in this: the appointment of the Zun project PTL.
15:37:03 #link https://review.opendev.org/c/openstack/governance/+/860759
15:37:41 hongbin volunteered to serve as PTL and the patch has a good number of the required votes. it needs to wait until 15 Oct, and I will merge it if there is no negative feedback
15:38:07 #topic Open Reviews
15:38:11 #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:38:42 we need reviews on this one #link https://review.opendev.org/c/openstack/governance/+/860599
15:39:43 actually, ^ that is a good topic
15:39:54 or well, quite a valid comment on it
15:40:05 discuss it at the PTG?
15:40:14 on top of that, we should actually decide what OS the N+2 grenade job should run on
15:40:37 sure, we can discuss it there next week. I will add it. thanks
15:40:52 as it's either leaving py3.8 on 2023.1 or backporting py3.10 to Y
15:41:19 and focal vs yammy
15:41:23 *jammy
15:41:29 mmm, yammy
15:41:32 yummy yams
15:41:35 :)
15:41:37 I think we have to run it on focal, right?
15:41:43 much easier to do that than anything else I think
15:41:44 For me - yes
15:41:46 yammy could be a good name though
15:41:51 but we should discuss next week
15:42:06 yeah, let's discuss and clarify things accordingly in the doc
15:42:11 all other open reviews are in good shape
15:42:12 yeah, that's why I always mix up the first letter, as yammy is really a way better name :D
15:42:24 actually I have a question about https://review.opendev.org/c/openstack/governance/+/836888
15:42:24 :)
15:42:36 should we maybe find a new volunteer to work on this?
15:42:36 I can't read Jammy without thinking of the dog I had with that name :(
15:42:41 or abandon it maybe?
15:42:44 next week we will be at the PTG, so I will send the meeting cancellation to the ML
15:43:16 slaweq: yeah, good point
15:43:38 jungleboyj: not sure if you will be able to update it or work on it? if not, then one of us can pick it up
15:43:59 Sorry, got pulled into another meeting.
15:45:25 Assume this is related to the review of the User Survey stuff?
15:45:32 jungleboyj: yes
15:46:01 Ok. I was planning to look at that this week before the PTG. Other fires started.
15:46:19 jungleboyj: thanks. ping us if you need help
15:46:22 I am going to try to look at it tomorrow or during the PTG next week so I can wrap it up.
15:46:28 gmann: I will. Apologies.
15:46:28 cool
15:46:29 thx jungleboyj++
15:46:33 np!
15:46:47 ok, with next week's meeting cancelled, our next weekly meeting will be on Oct 27.
15:46:57 with that, that is all for today's meeting
15:47:03 Merged openstack/governance master: Add project for managing zuul jobs for charms https://review.opendev.org/c/openstack/governance/+/861044
15:47:09 ++
15:47:10 o/
15:47:20 if there's nothing else to discuss, then let's close a little early (13 min before time)
15:47:46 woohoo
15:48:03 thanks everyone for joining.
15:48:08 #endmeeting