16:00:01 #startmeeting releaseteam
16:00:02 Meeting started Thu Jul 23 16:00:01 2020 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'releaseteam'
16:00:09 Ping list: ttx armstrong
16:00:13 o/
16:00:16 #link https://etherpad.opendev.org/p/victoria-relmgt-tracking Agenda
16:00:19 o/
16:00:30 Line 178-ish
16:00:41 203 for the meeting agenda, if you want to be technical.
16:01:08 #topic Review tasks completion
16:01:22 ttx: Want to cover the governance checks?
16:01:55 sure
16:02:49 So that check basically verifies that the deliverable files match the current state of governance
16:03:10 There were a few extras, which I removed at https://review.opendev.org/742200
16:03:22 There are also a few missing, which we need to review
16:03:44 Deliverable files are missing when new deliverables get added and no deliverable file is created
16:04:01 Like for example oslo.metrics, added May 6
16:04:21 For those, the question is, are they going to be ready for a victoria release
16:04:26 Is that process documented somewhere so that we can call out the step of adding the new deliverable file? I can't recall at the moment.
16:04:47 It does not make sense until the deliverable is ready to release
16:04:54 which can take a short or long time
16:05:03 better to review it as a consistency check
16:05:06 Two were added since this May. Those I could see not being ready, but the others listed have had some time.
16:05:11 Yeah, true.
16:05:47 The ones added in May or later I would not even look into. If ready they will let us know, otherwise we'll pick them up next time
16:06:01 that leaves us with:
16:06:21 monasca-ceilometer and monasca-log-api ... were released in train, but not in ussuri
16:06:31 We need to ask the monasca folks what their plans are
16:06:33 Repos are not retired.
16:06:43 are they abandoned? If not, why did we skip them in ussuri?
16:07:03 and should we track them for victoria ?
16:07:04 Ah, they were marked as deprecated: https://opendev.org/openstack/monasca-ceilometer/commit/875bc660ee68664d0ab4a21442c69ffd164d2ddf
16:07:23 hmm, not in governance :)
16:07:28 And https://opendev.org/openstack/monasca-log-api/commit/4eccad156f282f2eb300be7a306703c90dcba996
16:07:46 So for at least those two, I think we should remove the files. They should follow up with governance updates.
16:07:51 so maybe the fix here is to mark them as deprecated in governance
16:08:04 I think so.
16:08:41 barbican-ui (Added Oct 2019) -- never released yet
16:09:03 would be good to ask for their plans
16:09:11 js-openstack-lib (Added January 9) -- never released yet
16:09:48 Maybe mordred would know?
16:09:51 the bunch of xstatic things... I don't see what their point is if they don't get released
16:10:01 e0ne: ^
16:10:13 yes, all of those are tasks where we need to follow up with people
16:10:21 i thought they did get released, but they needed "special" version numbering?
16:10:30 fungi: yes. My point is...
16:10:36 Definitely some of the xstatic ones have been released.
16:10:49 Their only point is to be released
16:10:55 true
16:11:04 there is no "work" in them, just a packaging shell for a PyPI release
16:11:08 i see, some were added and never released
16:11:17 so I'm surprised that they would be created but not released yet
16:11:34 finally openstack-tempest-skiplist (Added Mar 20)
16:11:41 no idea if the plan was to release that
16:12:07 The last two I would ignore for now as too young
16:12:42 Who can do the followup? I'm off next week so I would rather not take it
16:13:16 I can call them out in the countdown email at least. I didn't do all of them in this week's (which I just finally sent out yesterday).
16:13:38 Or maybe it would be better if I do it as its own message.
16:13:51 That way it might get more visibility and I can tag the affected projects.
16:14:01 I would try to ping people in IRC, but your call :)
16:14:22 If someone can do that, it would be best. I'm not sure if I will have time to, but I can try.
16:14:37 not super urgent
16:14:46 We can pick it up at the next meeting if we prefer
16:15:02 Let's see how far we can get.
16:15:50 The other task is the countdown, and I have written down a big reminder to make sure I don't get too busy and forget to send it tomorrow.
16:16:00 #topic Octavia EOL releases
16:16:08 #link https://review.opendev.org/#/c/741272/
16:16:17 #link https://review.opendev.org/#/c/719099/
16:16:22 Yeah, I think those are ready.
16:16:27 There are a couple of Cinder ones now too.
16:16:45 We would just need to follow up by removing the branches.
16:16:51 I was unclear if they were ok to +2a
16:16:58 will do now
16:17:04 I forget, did we figure out whether the release managers have the necessary permissions to delete those branches?
16:17:27 I know we talked about it, I just can't remember what we determined.
16:17:34 we can't delete branches
16:17:39 only create them.
16:17:54 OK. At least we can bug fungi :)
16:18:20 yep, also i can temporarily grant that permission
16:18:40 it's just that under the version of gerrit we're still on, that permission comes lumped in with a bunch of much more dangerous ones
16:18:59 so even my admin account doesn't have that granted to it normally
16:19:08 I'd rather not have the rights to be more dangerous. ;)
16:19:35 We can follow up on those afterwards.
16:19:36 smcginnis 007, license to delete
16:19:37 hi, sorry, just one question: is there an easy way / common place to check which repositories are EOL'd on a certain branch?
16:20:01 They should have a $series-eol tag.
16:20:09 fungi: re your earlier question about job fails, about 15% of the jobs had an AFS failure this morning
16:20:15 But there's not a great visible way like the table we had in releases.o.o.
16:20:15 the others worked ok
16:20:41 #topic Review email content
16:20:48 #link https://etherpad.opendev.org/p/relmgmt-weekly-emails
16:21:08 smcginnis: ok, thanks
16:21:08 Nothing too exciting. Just reminders of the upcoming deadlines.
16:21:09 hhhm
16:21:10 ttx: thanks, that does lead me to believe it could either be a temporary connectivity problem or an issue with afs writes from a subset of our executors
16:21:23 that sounds off
16:21:23 i'm hoping to get back to running those errors down the rest of the way shortly
16:21:30 ttx: Yeah, too soon.
16:21:31 smcginnis: Victoria-2 is next week.
16:21:36 Is this a skip week?
16:21:40 * smcginnis looks again
16:21:40 so the email you sent this week is the right one
16:21:53 just a bit early, rather than a bit late
16:22:10 fungi: do the permissions on repos come from the Infra team?
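
A quick aside on the $series-eol question above (16:19:37): since EOL'd branches end up replaced by a <series>-eol tag, one way to check is to query a repository's tags directly. A minimal sketch, assuming Python 3 with git available; the repository list and series name below are illustrative only, not taken from the meeting:

    #!/usr/bin/env python3
    """Report which repositories carry a <series>-eol tag.
    A minimal sketch; the repositories and series below are examples only."""
    import subprocess

    SERIES = "queens"
    REPOS = ["openstack/octavia", "openstack/monasca-ceilometer"]

    for repo in REPOS:
        url = f"https://opendev.org/{repo}"
        # git ls-remote prints nothing when the tag does not exist on the remote.
        out = subprocess.run(
            ["git", "ls-remote", "--tags", url, f"refs/tags/{SERIES}-eol"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(f"{repo}: {'EOL tagged' if out else 'no ' + SERIES + '-eol tag'}")

An empty result just means no <series>-eol tag has been pushed for that series, not that the branch definitely still exists.
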
16:22:13 Honestly, I'm really not liking having the emails in the process docs. It's a bit confusing.
16:22:30 It should really not be confusing. It just tells you what to send every week :)
16:22:46 It shouldn't be, but it has been.
16:23:06 Should probably say "at the end of the week, send
16:23:38 I was thinking about some sort of script to use schedule.yaml and jinja templates, but that's probably overkill. :)
16:23:55 OK, so the email for this week was sent already
16:24:12 ☑️
16:24:13 armstrong: there are access controls which we manage in a git repository, but those also inherit from a shared access configuration where we centrally grant some permissions to specific gerrit groups. one group called "project bootstrappers" is used by our project creation automation and has basically full access to delete things from a repository, so one of our admins generally adds themselves or
16:24:15 some delegate to that group temporarily to do things like branch deletion
16:24:19 (there was none to send last week)
16:24:47 Let's move on then.
16:24:49 #topic AFS-related job failures
16:24:57 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016064.html
16:25:00 yeah, i'm looking into these
16:25:12 Potentially something to do with nodes restarting?
16:25:14 Those will likely require manual work to fix
16:25:27 so far it appears that the two tarball write issues happened from ze11, and the rather opaque docs upload error came from a build on ze10
16:26:00 So the important ones are probably oslo.messaging and designate.
16:26:04 and i noticed that both of those executors spontaneously rebooted (perhaps the provider was doing a reboot migration to another host) in the past day, though still hours before the failed builds
16:26:14 They were tagged but no tarballs were uploaded?
16:26:25 the pypi uploads worked, so we need to find a way to upload the corresponding tarballs
16:26:37 yeah, i can do that part manually
16:26:51 + Missing constraint updates
16:26:53 and also the signatures, now that we actually spit out copies of them in the job logs
16:26:59 missing release announcements we can probably survive
16:27:26 i need to test afs writes from ze11 and also check the executor debug log from ze10 to see what specifically the docs error was
16:27:38 These were stable releases, so the nightly constraints update won't pick up oslo.messaging.
16:27:43 I can propose that one.
16:27:47 also all three failures occurred within an hour of each other, so it's possible this was a short-lived network connectivity issue
16:27:51 AFS appears to be exceptionally brittle, or at least it doesn't like our setup :)
16:28:45 well, if it was a connectivity issue, scp and rsync would have broken similarly
16:29:12 Ah, the oslo.messaging one was victoria, so that actually will be picked up by the nightly updates.
16:29:17 So we just need the tarballs.
16:29:44 fungi: Seems like scp and rsync have more retrying built in though.
16:30:56 fungi: I've put a note in the etherpad as a reminder that we will need you to upload the tarballs. Is that good?
16:30:58 afs actually does too
16:31:04 yep, that's good
16:31:11 Thanks!
16:31:21 #topic Assign tasks for R-11 week
16:31:40 ttx is out all week, so unfortunately we can't assign them all to him.
16:31:50 (it's the command line tools which aren't retrying because they treat afs like a local filesystem; afs itself continually rechecks for connectivity to be reestablished)
16:31:57 I mean, you /can/
16:32:10 :)
16:32:42 Maybe hberaud would be willing to pick those up.
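
On the schedule.yaml/Jinja idea floated above (16:23:38), here is a rough sketch of what such a countdown-email helper could look like. The schedule layout (a top-level 'cycle' list with 'start' and 'name' keys) and the template path are assumptions for illustration, not the actual openstack/releases format:

    #!/usr/bin/env python3
    """Render a weekly countdown email from schedule data and a Jinja template.
    A rough sketch; the schedule keys and template path are assumed, not the
    real openstack/releases layout."""
    import datetime

    import jinja2
    import yaml

    with open("schedule.yaml") as f:
        schedule = yaml.safe_load(f)

    today = datetime.date.today()
    # Keep only the weeks that have not started yet.
    upcoming = [
        week for week in schedule.get("cycle", [])
        if datetime.date.fromisoformat(str(week["start"])) >= today
    ]

    env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"))
    print(env.get_template("countdown.txt.j2").render(weeks=upcoming, today=today))

Generating the text from the published schedule might also help with the confusion about keeping the email text in the process docs, mentioned at 16:22:13.
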
16:32:58 I will leave it unassigned for now and do them if no one else can.
16:33:15 Mostly just running some scripts and then seeing if there is anything to do based on that.
16:33:42 ++
16:33:57 #topic AOB
16:34:02 Anything else?
16:34:31 Merged openstack/releases master: Octavia: EOL Rocky https://review.opendev.org/741272
16:34:32 Merged openstack/releases master: Octavia: EOL Queens branch https://review.opendev.org/719099
16:34:48 OK, we can end early then. \o/
16:34:56 Thanks everyone.
16:34:57 o/
16:34:59 Thanks
16:35:07 #endmeeting