15:00:04 #startmeeting tc
15:00:04 Meeting started Thu Oct 27 15:00:04 2022 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:04 The meeting name has been set to 'tc'
15:00:07 #topic Roll call
15:00:10 o/
15:00:11 tc-members: meeting time
15:00:13 o/
15:00:17 o/
15:00:19 o/
15:00:28 o/
15:00:28 o/
15:02:10 from the absence section: arne_wiebalck will miss the meeting on Oct 27 (PTO)
15:02:16 o/
15:02:20 #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
15:02:27 ^^ today's agenda. let's start
15:02:35 #topic Follow up on past action items
15:02:57 none from the previous meeting
15:03:07 #topic Gate health check
15:03:18 any news on the gate?
15:03:48 I have not observed any frequent failures.
15:04:03 nothing from me
15:04:49 ok
15:04:55 this is not quite related, but any word about the ceph job failures on Jammy?
15:05:27 I did not follow up on that discussion. noonedeadpunk, did you get a chance to talk to the ceph plugin team within the QA team?
15:05:50 gouthamr ^^
15:07:21 no big failures on my radar either
15:07:34 I think gouthamr had some discussion last week; anyway, will check with them later
15:07:36 k
15:07:59 up, no, I have not I guess
15:08:15 In osa we have just overridden a couple of variables for ceph-ansible
15:08:59 Also, the infra mirrors for EPEL are broken now
15:09:16 So projects relying on centos/rocky jobs are likely affected
15:09:25 At the very least, kolla and osa are affected
15:10:11 I'm trying to reach the fedora people who are listed as the point of contact
15:10:23 ok
15:10:32 If not, a patch is proposed to switch the mirror
15:11:50 noonedeadpunk: it would be good to send it to the ML as well, in case any other projects running centos in the gate are affected
15:11:59 +1
15:12:01 will do
15:12:06 thanks
15:12:31 next is the Bare 'recheck' state
15:12:32 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:12:36 slaweq: please go ahead
15:12:41 all is good there
15:13:03 I will contact a few teams this week or next, but overall it's good
15:13:12 nothing more to say really
15:13:34 cool
15:13:59 next is Zuul config errors
15:14:02 Is the bare recheck still enough of an ongoing issue to be worth a dedicated TC agenda item at this point?
15:14:02 #link https://etherpad.opendev.org/p/zuul-config-error-openstack
15:14:30 JayF: we would like to spread the message and keep monitoring to help gate health
15:14:47 slaweq is doing a good job keeping stats every week and reaching out to project teams
15:15:05 yeah, I think these recurring items have been successful in keeping tabs on stuff like this
15:15:08 I think checking the stats and status in our weekly report does not take much time
15:15:26 yeah
15:15:30 Fair enough; if I'm the minority opinion, that's that :)
15:15:37 used to be more issues with the gate, but lately it's mostly "everything is cool" .. but if we stop monitoring, it slips easily
15:15:49 agree
15:15:56 +1
15:16:02 let's just keep an eye on it
15:16:16 and hopefully all will still be like "all is good" :)
15:17:06 we could perhaps move it to the end of the meeting though?
15:17:39 doing it under Gate health as it is related
15:17:49 or do you mean to move gate health to the end of the meeting?
15:17:54 I think they mean move the recurring things to the end
15:18:03 correct dansmith
15:18:04 oh, per the opendev meeting this week, we are losing the iweb cloud at the end of the year. it currently accounts for ~20-25% of our total quota for job resources
15:18:26 that's bad
15:18:38 tbh, I think the recurring stuff is some of the more useful things we've done lately, and since we can get caught up in other discussions and run out of time, I'd prefer to keep it up front
15:18:43 (we were originally losing it at the beginning of this year, but they generously pulled a lot of strings to keep it running for us)
15:19:01 we can try to hit it quicker and move on though, but.. fungi's thing just now is a good example of it being important :)
15:19:28 knikolla: dansmith: that is the reason I keep them at the start, so that we finish them quickly and spend more time on other things
15:19:42 ++
15:20:02 since we don't seem to spend a lot of time at max capacity these days, openstack probably won't feel that capacity reduction too badly, but it's worth keeping in the back of your mind
15:20:17 also, as always, we're in talks with other openstack providers about possibly becoming resource donors
15:20:18 fungi: seems like 20-25% of quota can impact the testing, right?
15:20:20 fungi: iweb is internap?
15:20:46 gmann: it may mean waiting longer for test results when there's a lot of concurrent activity, yes
15:20:57 yeah
15:21:01 so it's the second provider we are losing this year, do I remember correctly?
15:21:02 I agree that it is useful, it's just a huge chunk of not-usually-actionable back and forth. When there is a pressing issue that needs to be presented, we can add it to the front and discuss it.
15:21:05 but as i said, we don't run maxed out nearly as much as we used to, so it may go unnoticed much of the time
15:21:21 knikolla: that's essentially why I asked if we should keep doing it; because it's been non-actionable the last several times
15:21:27 slaweq: it's the same one, they just extended the time we could keep using it
15:21:49 by a lot
15:21:52 ahh, ok, then it's not that bad :)
15:21:53 thx fungi
15:21:54 I feel the flow of the meeting should be sorted by importance.
15:22:05 and actionability (or necessity for a discussion).
15:22:13 fungi: ok, let's keep eyes on it, and thanks for the update on checking with another provider
15:22:58 knikolla: IMHO, keeping the developer resources healthy is one of the more important things we discuss.. even when they're not immediately actionable, because sometimes it's a "we need to thin jobs" or "we need to drive some different behavior"
15:23:18 longer-term actions, not immediate "take action in this meeting", but... it's important to me
15:23:39 Gate health checks are an important activity the TC is doing, even if it is just monitoring and keeping eyes on it
15:24:30 I can move recheck and zuul config errors to the end, but let's keep gate health at the start
15:24:40 fair point dansmith.
15:24:42 knikolla: JayF ^^ ok for you?
15:24:58 works for me.
15:25:06 The status quo is OK for me too; I asked a question and didn't expect to spawn this much discussion. That seems like a sane middle ground though.
15:25:31 ok, let's move to zuul config errors
15:25:46 knikolla: anything from your side to bring up on zuul errors?
15:26:12 nothing new from what i reported last week at the PTG.
15:26:26 ok
15:26:38 #topic 2023.1 cycle PTG
15:26:41 first is the discussion summary sent on the ML:
15:26:51 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030954.html
15:26:57 #link https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030953.html
15:27:07 I sent the summary of the PTG discussion over email
15:27:19 2023.1 TC tracker
15:27:21 #link https://etherpad.opendev.org/p/tc-2023.1-tracker
15:27:49 ^^ as discussed in the PTG, this is our tracker for this cycle. feel free to add the items you are working on or would like to work on this cycle
15:28:21 and there are a few items that need an assignee; please write your name in the etherpad if you would like to help
15:29:11 that is all on the post-PTG things. from the next meeting we can discuss more on the tracker side if needed
15:29:15 #topic TC questions for the 2023 user survey
15:29:23 #link https://etherpad.opendev.org/p/tc-2023-user-survey-questions
15:29:38 I added the questions we want to remove/add in this etherpad
15:29:49 hope everyone got a chance to review it
15:30:30 If not, please do. I will send these to aprice after the meeting
15:31:15 I will update here if we hit the limit on the number of questions per group
15:31:16 btw I'm not sure how correct the question `How users are consuming OpenStack` is
15:31:25 Because it's not users, but operators I guess?
15:31:33 or "how are you consuming openstack?"
15:32:13 sounds good. even for the next question also?
15:32:15 Though I can imagine operators do not always know how their deployment tool does the install
15:32:30 Well, the next one depends
15:32:39 As I think that the next one is indeed about users
15:33:20 sure, done
15:33:37 if any more updates, feel free to edit in the etherpad
15:34:43 yeah, was just not sure how to phrase it better at first :D
15:34:59 +1, thanks
15:35:07 #topic TC chair election process
15:35:31 seems we did not have a conclusion in the PTG, so I proposed both the options we discussed in the PTG
15:35:42 #link https://review.opendev.org/c/openstack/governance/+/862772
15:35:44 #link https://review.opendev.org/c/openstack/governance/+/862774
15:36:07 please vote on those, and at the end we will merge the one with the majority of positive votes
15:36:34 or feel free to add comments/improvements to the current proposals if needed
15:37:30 any discussion needed on this, or do we just review in gerrit?
15:37:48 I will review in gerrit, but tomorrow morning
15:38:05 i haven't had a chance to look at them yet. Discussion on Gerrit sounds good to me.
15:38:20 cool
15:38:23 moving next
15:38:26 #topic TC weekly meeting time
15:38:34 #link https://framadate.org/xR6HoeDpdXXfiueb
15:38:53 as discussed in the PTG, I started the poll to select the new time for our weekly meeting
15:39:15 seems arne_wiebalck and spotz have not yet voted, and both are not here
15:39:58 I will ping them so they can vote before next week, and we can hold the next meeting at the new time
15:40:58 anything else to discuss for this?
15:41:39 #topic Open Reviews
15:41:42 #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:41:50 current open reviews ^^
15:42:12 the 2021 user survey is ready to vote on #link https://review.opendev.org/c/openstack/governance/+/836888
15:42:57 the upgrade testing things I need to update as per the discussion that happened in the PTG
15:43:36 I am updating a few things in CHAIR.rst, so this is also good to vote on - #link https://review.opendev.org/c/openstack/governance/+/862637
15:43:55 and the rest we already discussed in the meeting
15:44:27 that is all from today's meeting agenda. we have ~15 min left if anyone would like to discuss anything else
15:45:37 Fallout from the jammy default base node switch seems to be minimal (as expected, openstack is pretty good about being specific about needed nodes)
15:45:51 yeah
15:46:04 Zuul will drop ansible 5 support in opendev this weekend when our automated upgrade process runs
15:46:33 we've been defaulting to ansible 6 for a bit now, and I don't think we had ansible 5 long enough for anyone to override to it, but heads up anyway
15:47:25 ack, thanks for the updates
15:48:26 ok, if nothing else..
15:48:45 our meeting next week will be on Nov 3 and will be a video call
15:49:04 let's close today's meeting. thanks everyone for joining
15:49:06 #endmeeting