14:00:15 #startmeeting releaseteam
14:00:16 Meeting started Fri May 21 14:00:15 2021 UTC and is due to finish in 60 minutes. The chair is hberaud. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:19 Ping list: elod armstrong
14:00:20 The meeting name has been set to 'releaseteam'
14:00:24 re o/
14:00:25 #link https://etherpad.opendev.org/p/xena-relmgt-tracking Agenda
14:00:29 We're way down on line 115 now
14:00:33 ttx: re
14:00:59 Will just wait a couple minutes for folks.
14:01:05 o/
14:01:54 not sure if you were waiting for me, but if you were i'm around
14:02:06 thanks fungi
14:02:50 o/
14:03:08 ok let's go
14:03:11 #topic Review task completion
14:03:19 Review cycle-trailing projects to check which haven’t released yet. => Done
14:03:23 Here is the ML thread => http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022518.html
14:03:53 And that was all for this week
14:04:06 #topic Assign R-19 tasks
14:04:24 Ensure that all trailing projects have been branched for the previous series.
14:04:42 I take it
14:04:56 thanks ttx
14:04:58 I'll do the aclcheck
14:05:04 as usual :)
14:05:36 Ok next topic
14:05:41 #topic Review countdown email contents
14:05:48 https://etherpad.opendev.org/p/relmgmt-weekly-emails
14:07:24 checking
14:08:03 LGTM, ship it
14:08:09 ok thanks
14:08:19 I'll send it after your meeting
14:08:33 #topic ocata-eol status
14:08:48 elod: the floor is yours
14:09:10 first of all, I plan to continue with deleting the already *-eol tagged branches
14:09:25 ok
14:09:34 so far the ocata-eol and pike-eol branches have been deleted
14:09:40 ++
14:09:55 queens, rocky and stein will follow today
14:10:09 cool
14:10:11 + the ones that had patches on top,
14:10:22 but that we agreed to delete anyway
14:10:34 do you need help somewhere?
14:10:53 hberaud: no, i think i can manage that with the script :)
14:11:01 but thanks :)
14:11:21 and the next is horizon's ocata-eol
14:11:23 I followed the ML thread and everything seems smooth
14:11:40 yes, fortunately
14:12:18 ( the horizon patch: https://review.opendev.org/c/openstack/releases/+/791702 )
14:12:25 yes, I planned to discuss the horizon topic a bit
14:12:27 https://review.opendev.org/c/openstack/releases/+/791702
14:13:04 Do we want to wait for Akihiro?
14:13:30 I think yes
14:13:57 maybe i can ping him, but Ivan and Radomir have +1'd
14:14:10 I'll ping him
14:14:21 As you want, but yes, it can't hurt to ping him
14:14:37 thank you
14:14:41 np
14:14:46 Anything else for ocata?
14:15:07 maybe,
14:15:18 is there a feeling for when we'd want to consider integrating the manual script into release jobs?
14:15:37 like, maybe in roughly a cycle? two?
14:15:50 the mechanism seems to be working out well, at least
14:16:12 fungi: good question.
the script currently requires entering a password, so it needs some refactoring :)
14:16:31 sure, presumably we'd authenticate it the same way we do branch creation
14:16:50 AFAIK we hadn't considered integrating this script into a job, but why not
14:17:10 i just hate to think of release managers constantly manually running the script in coming years
14:17:23 true
14:17:33 I'll add that to my todo list :)
14:17:35 it's not urgent, just something to keep in mind as you're grooming the rest of the release automation over time
14:18:12 it could be a hook triggered by the eol tag or something
14:18:27 in our machinery
14:18:30 that should do the trick
14:18:38 also are externally added periodic jobs being cleaned up once the eol branches get deleted?
14:18:57 i do still see over a hundred job failure notifications every day to the stable list
14:19:24 i've discovered some failing periodic jobs,
14:19:26 if there are project-config changes which need reviewing to remove some, please give me a heads up and i'm happy to take a look
14:19:35 for projects that wanted to use neutron's stable/ocata,
14:19:46 which was deleted ~ a week ago
14:19:57 I've sent a mail to the team
14:20:41 fungi: thanks, I will remember that if we need such changes
14:21:00 cool
14:21:04 where are periodic jobs defined? in project-config too?
14:21:14 many are
14:21:34 if the jobs are defined in the repositories themselves then they disappear when the branches do, of course
14:21:41 yes
14:21:43 defined or added to pipelines
14:22:25 ttx I would like to assist with your task for this week, if you don't mind
14:22:27 (i've added a reminder to look into project-config's periodic job definitions :))
14:22:54 armstrong: ok let me see when i could do it
14:23:25 armstrong: would Wednesday 13:00 UTC or 13:30 UTC work for you?
14:23:48 Ok sounds good
14:23:59 which one? 13?
14:24:11 I'm trying to block the time to be sure
14:24:13 13
14:24:21 ok noted, I'll ping you here
14:24:29 Ok
14:25:05 and about the general/"mass" ocata-eol - I think I will get there next week to check the activity in the projects, and will propose ocata-eol patches (i guess multiple ones?) + a mail to the ML
14:25:17 if this is OK for you ^^^^
14:25:38 WFM
14:26:10 i think that's it for ocata-eol/*-eol
14:26:17 do you plan to propose a patch per team?
14:26:30 (multiple ones)
14:26:40 hberaud: i think that would be best
14:26:45 WFM
14:27:05 ++
14:28:23 fungi: I just have a question concerning the periodic jobs, can we identify whether they are bound to a series?
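For illustration, a minimal sketch of what "a hook triggered by the eol tag" could look like if the manual deletion script were ever folded into the release machinery. The job, playbook, and secret names below are invented for the example and are not existing openstack/project-config entries; the idea is only that a job in the tag pipeline, holding Gerrit credentials as a Zuul secret rather than an interactively typed password, would run the cleanup when a *-eol tag is pushed.

    # Hypothetical sketch only -- names are made up, not current project-config content.
    - job:
        name: delete-eol-branches
        description: |
          Delete the corresponding stable branch once a <series>-eol tag
          has been pushed, instead of a release manager running the
          deletion script by hand.
        run: playbooks/eol/delete-branches.yaml
        secrets:
          # Gerrit HTTP credentials provided as a Zuul secret, replacing
          # the password the script currently asks for interactively.
          - name: gerrit_account
            secret: release-gerrit-credentials

    - project:
        tag:
          jobs:
            - delete-eol-branches

The playbook itself would still need to check that the pushed tag (exposed to the job as zuul.tag) actually matches *-eol before deleting anything, since the tag pipeline fires for every tag; the deletion would presumably keep using Gerrit's branch-deletion REST endpoint, just authenticated with the job's secret.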
14:28:51 i'd have to look at some example failures
14:29:05 probably easiest to approach from concrete examples
14:29:11 there is a periodic template, with the branches listed in it
14:29:22 (actually multiple templates, but anyway)
14:29:26 I suppose that depends on the job; some could be for all series, some could be specific
14:29:28 ok
14:29:55 thanks
14:30:13 until ocata is fully EOL we should not touch that, except for jobs that would run only against branches that are already eol'd
14:30:23 yeah, part of the challenge with project-templates is that if you remove a branch from a multi-branch template then you stop running it for all projects rather than just those which have eol'd those branches, and if you remove the template from the project then you likely stop running the jobs for active branches too
14:30:44 I see
14:30:50 elod: yes
14:30:51 an alternative would be to make branch-scoped project-templates and add or remove them separately
14:31:10 similar to how we do with the pti jobs
14:31:30 (victoria template, wallaby template, and so on)
14:31:35 I see
14:32:05 branch-scoped could be a good thing
14:32:13 i think ocata can be removed from the branches list later on, that should not disturb other branches. but I might be missing something
14:32:28 (from the periodic templates)
14:32:28 so depending on where/how the jobs are currently getting added, it might not be a simple matter of just deleting some lines, we may need to refactor how that's being done instead
14:32:43 which is why i say specific examples matter
14:32:59 ok
14:33:47 thanks for these details
14:33:58 also the job failures are obviously not just noise on a mailing list, they represent a lot of wasted ci resources
14:34:13 which is a big part of why i keep checking up on the situation
14:34:23 that's true
14:34:25 indeed
14:34:53 actually right now most of the failures are the non-SNI client related ones
14:34:59 if we expect the jobs to start working again at some point then it's probably okay to keep running them for now, but if they're never going to work again we should stop running them
14:35:30 hmmm. yes.
14:35:53 is someone working on getting a workaround figured out for the sni support issue on those?
14:35:55 there are projects that are quite abandoned, so maybe we can remove the periodic jobs for those
14:36:23 (like *-powervm)
14:36:48 that could be a way to start reducing the resource usage
14:36:55 fungi: It's on my todo as well o:)
14:37:14 fungi: and I did some fixes already, but for the grenade jobs
14:37:55 thanks!
14:38:24 anyway, I'll propose some periodic job removal patches as well for inactive projects (hope someone will review + approve them)
14:38:45 i'm happy to review any for project-config and openstack-zuul-jobs
14:38:59 just let me know when you push them so i can prioritize
14:38:59 +1
14:39:12 fungi: nice, thanks!
14:39:31 fungi: some are, i guess, in the project's repository
14:39:36 but we will see
14:40:07 yeah, if you can put together a list of the ones which are inside abandoned projects, i can also come up with a strategy there
14:40:44 fungi: ok, thanks!
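As a rough illustration of the branch-scoped project-template idea discussed above (one template per series, like the existing per-series python3 job templates), a sketch along these lines. The template name is an assumption and the job is just an arbitrary example; only the pattern matters.

    # Illustrative only -- the template name is invented, modeled on the
    # existing per-series templates; the job is just an example.
    - project-template:
        name: periodic-stable-ocata-jobs
        periodic-stable:
          jobs:
            - openstack-tox-py27:
                branches: stable/ocata

Projects would then add or drop periodic-stable-ocata-jobs (and the equivalent pike/queens/... variants) individually, so EOL'ing ocata for one project only touches that project's own template list instead of a shared multi-branch template.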
14:40:45 the opendev sysadmins are free to exercise control over the repository hosting to remove job configuration when it's problematic
14:41:36 sure, then it won't be a problem :)
14:42:04 maybe i can use my stable-maint-core power as well, but we will see
14:43:19 I think we can continue to the next topic
14:43:25 #topic train-em status
14:43:36 https://review.opendev.org/q/topic:%2522train-em%2522+status:open
14:43:46 it's less than a page \o/
14:43:47 :]
14:43:54 :)
14:44:17 some have a -1 saying that we should wait for the teams
14:44:23 I don't expect PTL responses for a couple of them
14:44:51 yes, some don't seem to get responses
14:45:04 are those situations we need to relay to the tc?
14:45:20 let's wait one more week for those without a response
14:45:58 projects not at least acknowledging release changes and blocking series transitions due to inactivity are probably a sign the project is mostly defunct
14:46:03 hm... usually, for the current series, we force patches without a response (it depends on the topic of these patches)
14:46:04 i don't know whether we should relay to the tc or simply just force the train-em transition there
14:46:16 oh, i see
14:46:30 I think that in this case we should force
14:47:04 what fungi says is right, though
14:47:19 these projects should follow the life cycle of the series
14:47:23 yes
14:47:25 the question is whether the projects just missed the release patch,
14:47:26 yeah, not saying to ask the tc for permission, just letting them know so they can check on whether the project is still active
14:47:37 or inactive...
14:47:50 need to run, ping me if you need me
14:48:05 i think it's worth pinging the teams on the ML / IRC first
14:48:06 if projects are really dead the tc can take over and retire them, which means less work for the release team
14:48:18 (in the long run anyway)
14:48:32 but yes, I think we mostly need to inform the TC so they can decide the status of these projects for the current or the next series
14:48:56 ttx: ack, thanks
14:49:01 (projects like keystone, swift, monasca, ironic, etc)
14:49:31 the plan could be 1) ping the team 2) inform the TC 3) force the patches
14:49:50 let's start with the 1st option :)
14:49:58 also remember keystone doesn't have a ptl, they're supposed to have an active release liaison though under the dpl model
14:50:10 right
14:50:24 same as oslo
14:50:27 yes
14:50:29 I'll ping the teams
14:50:44 thanks elod
14:51:00 no problem :)
14:51:18 Anything else for train-em?
14:51:35 and we will see whether we need to inform the TC regarding any of the projects
14:51:55 hberaud: nothing from my side
14:52:01 I already discussed some of them with the TC at the end of wallaby
14:52:21 oh, good
14:52:32 at the end of each cycle we check the project/release activity with the TC
14:52:54 but yeah, continuing to communicate missed deadlines will help them know if the situation changes
14:53:00 they are already aware of some of them
14:53:08 yes
14:54:07 ok thanks
14:54:15 #topic Open Floor
14:54:27 Anything else to discuss today?
14:54:47 nothing else from me :X
14:55:00 +1
14:57:00 OK, thanks everyone. Let's wrap up.
14:57:02 #endmeeting