16:01:16 <smcginnis> #startmeeting releaseteam
16:01:17 <openstack> Meeting started Thu Feb 27 16:01:16 2020 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:20 <openstack> The meeting name has been set to 'releaseteam'
16:01:22 <smcginnis> Courtesy ping: ttx armstrong diablo_rojo, diablo_rojo_phon
16:01:23 <diablo_rojo> o/
16:01:26 <ttx> o/
16:01:27 <hberaud> o/
16:01:27 <smcginnis> #link https://etherpad.openstack.org/p/ussuri-relmgt-tracking Agenda
16:01:29 <elod> o/
16:01:45 <fungi> i wondered why i was getting a highlight in the oslo channel ;)
16:02:00 <evrardjp> can't attend today, sorry :/
16:02:09 <smcginnis> evrardjp: No worries, thanks.
16:02:15 <smcginnis> We'll make sure to assign all tasks to you.
16:02:27 <ttx> It's christmas all over again
16:02:33 <smcginnis> To be fair, not the worst IRC screw up I've done. :)
16:02:55 <hberaud> ~ line 380
16:02:55 <smcginnis> #topic Release-post job issues from last night
16:03:05 <armstrong> o/
16:03:14 <smcginnis> #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2020-02-26.log.html#t2020-02-26T22:06:58
16:03:17 <smcginnis> Some context
16:03:32 <smcginnis> Not sure if fungi can or wants to add anything.
16:03:32 <ttx> cool.. what about the ones in the last three hours
16:03:38 <ttx> +1
16:03:41 <smcginnis> But root cause was a config change that has been fixed.
16:03:51 <smcginnis> We did have a couple in the last few hours.
16:04:04 <smcginnis> Looked like one of the intermittent ssh connection errors?
16:04:15 <fungi> i can, but we're about to restart the scheduler
16:04:21 <smcginnis> fungi: No problem.
16:04:43 <ttx> the ceilometer tags seem to be something else?
16:04:46 <smcginnis> So the good part at least was these were all on docs jobs, so they will correct themselves with the next merge.
16:04:55 <ttx> intermittent fail ok
16:04:57 <fungi> memory pressure is resulting in zookeeper connection flapping, and jobs are getting retried until they occasionally hit the retry limit
16:04:57 * smcginnis looks
16:05:13 <fungi> seems like it's been severe for maybe the past 12 hours
16:05:25 <fungi> or increasingly severe starting 12 hours ago
16:05:58 <smcginnis> #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001278.html python-octaviaclient failure on announce.
16:05:59 <fungi> smcginnis: oh, the git redirect problem got fixed
16:06:07 <smcginnis> fungi: Excellent, thanks!
16:06:21 <ttx> announce fail we can probably survive
16:06:24 <fungi> one rewrite rule was missing a leading /
16:06:50 <fungi> due to context change migrating the redirects from a .htaccess file to an apache vhost config
16:06:51 <ttx> more concerned about the node fail at 15:58 UTC
16:06:55 <smcginnis> And actually, it does look like the announcement did get sent out despite the announce job failure.
16:07:03 <ttx> and the tag release fail at 16:04 UTC
16:07:17 <ttx> as those might need to be retried
16:07:34 <smcginnis> ssh: connect to host 192.237.172.45 port 22: Connection timed out\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.1]
16:07:39 <ttx> node fail has no indication of what it was though
16:08:00 <smcginnis> #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001275.html Ceilometer failure
16:08:08 <smcginnis> ttx: Which one are you looking at?
16:08:15 <ttx> ALL OF THEM
16:08:23 <ttx> I should focus on one
16:08:32 <smcginnis> ceilometer appears to be the same, ssh timed out.
16:08:39 <ttx> Currently on the last one 16:04 UTC
16:08:45 <smcginnis> #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001276.html Ceilometer 11.1.0
16:09:07 <smcginnis> Looks like all three of those are ssh timeouts after the fact.
16:09:08 <ttx> That last one is successful. Failed at collecting logs
16:09:18 <ttx> and then skipped docs release
16:09:19 <smcginnis> On log collection.
16:09:47 <ttx> that leaves the NODE_FAILURE at 15:58
16:10:04 <ttx> hard to know what it was attached to
16:10:17 <fungi> yeah, i suspect those are all boiling down to the zookeeper connection going up and down because the scheduler's out of memory
16:10:40 <smcginnis> ttx: Where is that node failure? I've only seen the ssh timeouts.
16:11:23 <smcginnis> Oh, this one? http://lists.openstack.org/pipermail/release-job-failures/2020-February/001279.html
16:11:43 <smcginnis> Now I see two recent failures came in.
16:11:53 <smcginnis> Does seem likely it's down to the zookeeper thing.
16:11:58 <smcginnis> Let the animals out of the cages.
16:12:40 <smcginnis> Looks like the tagging actually happened, just failed again on log collection.
16:12:46 <smcginnis> So just missing docs again.
16:12:51 <ttx> got ya
16:13:02 <ttx> monasca-ui 1.14.1
16:13:20 <ttx> Merged at https://opendev.org/openstack/releases/commit/2897a098897231f86b4410c66d10d6b8f8945046
16:13:26 <ttx> Did not result in a tag
16:13:36 <ttx> That's probably our ghost
16:13:54 <smcginnis> OK, finally looking at the NODE_FAILURE one.
16:13:57 <smcginnis> #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001277.html
16:14:07 <smcginnis> Doesn't link anywhere.
16:14:10 <ttx> that's the one I just mentioned
16:14:24 <smcginnis> So yeah, if the tagging never happened, then at least we can re-enqueue that one.
16:14:25 <ttx> monasca-ui 1.14.1
16:14:53 <ttx> everything else is accounted for
16:14:54 <smcginnis> fungi: Is that something you can help us with once the restart is done and things look calmer?
16:15:24 <fungi> smcginnis: absolutely
16:15:45 <smcginnis> Thanks!
16:16:04 <ttx> doublechecking
16:16:09 <smcginnis> I made a note in the tasks for the week.
16:17:37 <smcginnis> That seems to be it. Nothing new has come through in the ML that I've seen.
16:17:43 <ttx> ok confirmed monasca-ui is the only one missing in the last hours
16:17:53 <ttx> we can move on
16:17:57 <smcginnis> #topic Review task status
16:18:13 <smcginnis> Switching single release c-w-i to c-w-rc.
16:18:43 <smcginnis> So the idea here is if someone is using cycle-with-intermediary, the expectation is that they need to do multiple releases over the course of the cycle.
16:18:43 <ttx> in the proposed weekly-email I said if no answer by end of next week
16:18:55 <smcginnis> That makes sense.
16:19:11 <smcginnis> Here are the outstanding patches:
16:19:14 <smcginnis> #link https://review.opendev.org/#/q/status:open+project:openstack/releases+branch:master+topic:ussuri-cwi
16:19:19 <ttx> it's an easily reverted change anyway
16:19:45 <smcginnis> Some good responses so far. A few have said to go ahead. A few others have said they will get releases out and want to stay with intermediary.
16:19:55 <smcginnis> So I think we're good on that one.
16:19:58 <ttx> but yeah, if you have trouble making more than one per cycle, with-rc is probably a good bet for you
16:20:06 <smcginnis> Next, update on rocky-em status.
16:20:15 <smcginnis> #link https://review.opendev.org/#/q/status:open+project:openstack/releases+branch:master+topic:rocky-em
16:20:23 <smcginnis> Quite a few patches out there yet.
16:20:34 <smcginnis> But it's everything, so it's actually not that bad.
16:20:39 <smcginnis> We've had responses on those too.
16:20:47 <smcginnis> Some have said to go ahead and I've been approving them.
16:20:53 <smcginnis> Others have said they need a little more time.
16:21:27 <smcginnis> Only real issue has been some questionable monasca backports in some of their repos.
16:21:35 <smcginnis> Thanks hberaud for calling those out!
16:21:42 <hberaud> the majority that I've already checked look fine, I'll continue my journey through the rest
16:21:50 <hberaud> you are welcome
16:21:59 <smcginnis> Just waiting on PTL acks on many of them.
16:22:19 <smcginnis> I think we can probably approve next week if there's no response from the team?
16:23:44 <smcginnis> We could check if there are any outstanding unreleased commits, but I don't think this team should be driving that. Nor has the bandwidth to do so.
16:24:18 <ttx> smcginnis: maybe that could be added to the email
16:24:33 <hberaud> good idea
16:24:36 <elod> i've commented on 4-5 patches where I saw unreleased changes that would be good to release
16:24:39 <smcginnis> Yeah, we should add that to make sure there's a chance they are all aware.
16:24:44 <elod> just for the record :]
16:24:51 <smcginnis> Thanks for checking on those elod
16:25:10 <smcginnis> Hopefully the teams notice that and respond.
16:26:16 <smcginnis> OK, only other task was the countdown email, but we'll cover that shortly.
16:26:23 <smcginnis> #topic Questions on xstatic
16:26:31 <smcginnis> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012878.html
16:26:48 <smcginnis> I'll be honest - I haven't had a chance to follow this.
16:26:58 <smcginnis> ttx: Do you have a summary of the situation?
16:26:59 <ttx> I did
16:27:12 <ttx> Yeah, so...
16:27:46 <ttx> IIUC xstatic things are Javascript thingies that are packaged as PyPI modules
16:28:00 <ttx> Not updated very often
16:28:15 <ttx> So a bunch of them used to be published before we drove releases
16:28:39 <ttx> At one point there was a cleanup, as some xstatic repos were never released/used
16:28:57 <ttx> That included xstatic-angular-*, which were part of a transition that never happened
16:29:21 <ttx> But it seems we caught some in the cleanup that should not have been caught
16:29:59 <ttx> So we have a bunch of xstatic-?? releases on PyPI for things Horizon depends on... that do not have deliverable files to match
16:30:33 <ttx> The way we fixed that precise situation in the past (for other xstatic things) was to do a new release and start fresh
16:30:49 <ttx> so that PypI situation matches openstack/releases latest
16:30:52 <smcginnis> Were these cycle based but they should have been independent?
16:31:07 <ttx> no they always were independent I think
16:31:19 <ttx> The issue is that those were manually uploaded
16:31:35 <ttx> you have to understand this is just a thin layer around a JavaScript module
16:31:41 <smcginnis> Ah, so they just were released so infrequently that they never made it into our managed process?
16:32:08 <ttx> so the temptation to take xstatic-foobar 1.2.3 and push it to PyPI as 1.2.3.0 is high
16:32:12 <ttx> yes
16:32:25 <ttx> but they also never used tags
16:32:53 <ttx> which is why I missed them last time I looked and assumed they were never released
16:33:00 <smcginnis> Oh?! So it's not an issue of them having too many rights with the current ACLs. They just manually threw it out there?
16:33:05 <ttx> yes
16:33:29 <openstackgerrit> Witold Bedyk proposed openstack/releases master: Switch monasca-* to cycle-with-rc  https://review.opendev.org/709848
16:33:30 <ttx> It's more an issue of too many rights on PyPI really :)
16:33:46 <ttx> but then it was 6-8 years ago
16:33:50 <smcginnis> Sounds like the next steps then would be to 1) get deliverable files added, 2) get releases done of current repos.
16:34:03 <smcginnis> And 3) slap some wrists and tell them not to do that.
16:34:04 <smcginnis> :)
16:34:12 <ttx> 1-2 can be done at the same time, since you can't import history
16:34:19 <smcginnis> Yeah
16:34:33 <ttx> 3 would be to remove the "deprecated" tags from governance
16:34:46 <smcginnis> Can/should we get the pypi permissions updated so only openstackci can publish new releases there?
16:34:53 <smcginnis> Oh right, that too.
16:35:09 <ttx> amotoki wanted to do it cleanly and recreate the missing tags, but that's likely to be complicated
16:35:40 <ttx> but that would result in having something in tarballs.o.o that does not match what's already in PyPI
16:35:46 <ttx> so more confusing than helping
16:35:56 <openstackgerrit> Witold Bedyk proposed openstack/releases master: Switch monasca-* to cycle-with-rc  https://review.opendev.org/709848
16:36:37 <ttx> so yes, push a new x.y.z.a+1 release by creating a matching deliverable file
16:36:57 <ttx> IIRC that also involves updating a metadata file in the repo to be released.
16:37:12 <openstackgerrit> Michael Johnson proposed openstack/releases master: [octavia] Transition Rocky to EM  https://review.opendev.org/709903
16:37:15 <smcginnis> I suppose we could lockstep: delete release from pypi (gasp), merge equivalent release in releases repo, let automation get things back to right place.
16:37:45 <ttx> yeah https://opendev.org/openstack/xstatic-hogan/src/branch/master/xstatic/pkg/hogan/__init__.py#L16
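(For context on the metadata file ttx links above: xstatic packages conventionally keep their version info in an `__init__.py`, combining the upstream JavaScript library version with a packaging BUILD number into the PyPI version. The following is a rough sketch of that pattern, not the literal contents of the xstatic-hogan file; the values and field names shown are illustrative assumptions.)

```python
# Illustrative sketch of the version metadata convention used by XStatic
# packages (assumed layout, not the literal xstatic-hogan file).

NAME = 'hogan'                    # name of the bundled JavaScript library
PACKAGE_NAME = 'XStatic-Hogan'    # name published on PyPI

VERSION = '3.0.2'                 # upstream JavaScript library version (assumed value)
BUILD = '2'                       # packaging build number; bumping this is the "a+1"
PACKAGE_VERSION = VERSION + '.' + BUILD   # version that gets tagged/uploaded, e.g. "3.0.2.2"
```

Bumping BUILD here and tagging the resulting PACKAGE_VERSION through openstack/releases is the "x.y.z.a+1" resync ttx describes above.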
16:37:53 <smcginnis> Basically rebuild history.
16:37:58 <smcginnis> I don't really like that though.
16:38:03 <smcginnis> I'd rather move forward.
16:38:04 <ttx> smcginnis: you cannot do that
16:38:18 <ttx> getting two different artifacts with the same release number is a no-no
16:38:33 <ttx> them not being available at the same time is not enough
16:38:43 <smcginnis> Yeah, bad idea.
16:38:53 <ttx> a+1 is the only way to resync
16:39:04 <smcginnis> Was that suggested on the ML?
16:39:34 <ttx> I'll clarify
16:40:04 <openstackgerrit> Witold Bedyk proposed openstack/releases master: Do not release monasca-ceilometer for Ussuri  https://review.opendev.org/710312
16:40:32 <smcginnis> OK, thanks.
16:40:36 <smcginnis> Sounds like we have a plan then.
16:40:38 <smcginnis> Anything else?
16:43:12 <ttx> nope
16:43:15 <smcginnis> #topic Validate countdown email
16:43:19 <smcginnis> #link https://etherpad.openstack.org/p/relmgmt-weekly-emails
16:43:31 <smcginnis> Look for "Milestone 2 week +2"
16:44:03 <openstackgerrit> Witold Bedyk proposed openstack/releases master: Do not release monasca-log-api for Ussuri  https://review.opendev.org/710313
16:44:20 <diablo_rojo> Should we add cycle highlight mentions now that I sent that kickoff email?
16:45:20 <ttx> diablo_rojo: I was thinking of mentioning it in the next one
16:45:27 <ttx> as a reminder
16:45:33 <diablo_rojo> ttx, that works too
16:45:37 <ttx> this week sounds a bit early for a reminder
16:45:49 <diablo_rojo> fair :)
16:46:13 <ttx> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012892.html <- solution for xstatic, detailed
16:46:46 <smcginnis> Thanks!
16:47:07 <smcginnis> I added a section on the rocky-em patches to the countdown. Please take a look and let me know if it looks ok.
16:47:20 <smcginnis> Or feel free to tweak. I will send this out tomorrow morning my time.
16:47:39 <ttx> +1
16:48:07 <smcginnis> ttx: That description for xstatic looks good.
16:48:26 <smcginnis> #topic AOB
16:48:38 <smcginnis> Any other topics to cover this week?
16:48:39 <ttx> I'll be off starting tomorrow, back on March 9
16:48:54 <smcginnis> No meetings the next two weeks.
16:49:24 <rosmaita> (i have something for open discussion)
16:49:38 <smcginnis> I think I should be able to be here for the R-8 meeting, but if not I may need to ask someone to cover for me or we can skip.
16:49:50 <smcginnis> rosmaita: The floor is yours!
16:49:55 <ttx> i should be back
16:50:07 <rosmaita> i'm seeing a weird validation error on https://review.opendev.org/#/c/709294/
16:50:17 <smcginnis> Thanks, I was going to raise that.
16:50:19 <rosmaita> for cinder.yaml i think
16:50:24 <rosmaita> oh, ok
16:50:26 <rosmaita> all yours!
16:50:36 <smcginnis> Haha, no, too late. :)
16:50:49 <smcginnis> I took a quick look yesterday, but I couldn't tell what was happening.
16:51:12 <openstackgerrit> Witold Bedyk proposed openstack/releases master: Switch monasca-kibana-plugin to independent  https://review.opendev.org/710316
16:51:14 <smcginnis> We already merged the patch to tell reno to ignore the older branches, but it still looks like it is choking on trying to parse the older releases.
16:51:48 <smcginnis> So it's something with reno.
16:52:12 <smcginnis> We probably need to check out stable/rocky cinder and try building release notes to see if we can repro it locally.
16:53:04 <rosmaita> ok, i can do that now
16:54:13 <smcginnis> Kilo was the last 2015.x versioned release, so ignoring kilo and older *should* work.
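(A minimal sketch of the local reproduction smcginnis suggests, assuming the standard OpenDev clone URL and reno's `report` command; the exact invocation the validation job uses may differ.)

```python
# Rough local repro of the cinder release-notes build, to see whether reno
# still chokes on the pre-Liberty 2015.x tags on stable/rocky.
# Assumes git and reno are installed; the invocation is an assumption.
import subprocess

subprocess.run(
    ["git", "clone", "https://opendev.org/openstack/cinder", "cinder"],
    check=True,
)
subprocess.run(
    ["reno", "report", ".", "--branch", "stable/rocky"],
    cwd="cinder",
    check=True,
)
```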
16:54:34 <openstackgerrit> Merged openstack/releases master: Release monasca-persister 1.12.1  https://review.opendev.org/710224
16:54:37 <smcginnis> I'll try to take a look too later today.
16:54:45 <smcginnis> Anything else for the meeting?
16:54:54 <rosmaita> ok, me too, we can talk offline
16:55:14 <diablo_rojo> none from me
16:55:42 <smcginnis> OK, thanks everyone!
16:55:49 <smcginnis> #endmeeting