fungi | and it merged | 00:01 |
---|---|---|
fungi | so we should be all set | 00:01 |
ttx | Happy release day! | 06:29 |
ttx | I have a doctor's appointment this morning but should be back around 10:00 UTC | 06:29 |
frickler | so, what are we going to do with docs? I've approved https://review.opendev.org/c/openstack/openstack-manuals/+/914602 now but the actual release patch isn't mergeable in its current state. sadly next to no feedback from tc-members either | 06:38 |
hberaud | o/ | 07:07 |
elodilles | ~o~ | 07:51 |
frickler | so I updated https://review.opendev.org/c/openstack/openstack-manuals/+/914603 now to add dalmatian | 07:52 |
elodilles | frickler: thanks. let's see how it renders and then i'm OK with it | 07:57 |
elodilles | the semaphore patch has merged, thanks o/ | 07:58 |
hberaud | everything looks operational on the python infra https://status.python.org/ | 07:59 |
elodilles | release-team: don't forget to not approve any stable release patch today. we have to avoid any interference | 08:00 |
hberaud | ack | 08:00 |
opendevreview | Elod Illes proposed openstack/releases master: Add release note links for 2024.1 Caracal #2 https://review.opendev.org/c/openstack/releases/+/914942 | 08:15 |
elodilles | please review & merge this when the jobs have finished ^^^ | 08:17 |
elodilles | hberaud frickler : jobs have finished, please review: https://review.opendev.org/c/openstack/releases/+/914942 | 08:47 |
hberaud | ack | 08:47 |
frickler | +4 | 09:06 |
elodilles | \o/ | 09:06 |
elodilles | thanks | 09:06 |
elodilles | i'll disappear now to grab some food, but be back before 10:00 UTC | 09:07 |
opendevreview | Merged openstack/releases master: Add release note links for 2024.1 Caracal #2 https://review.opendev.org/c/openstack/releases/+/914942 | 09:15 |
frickler | hmm, seems the foundation has already published the release, we are done? see https://www.openstack.org/software/openstack-caracal and also the landing page | 09:30 |
frickler | maybe someone should update the map to drop things that no longer exist or aren't part of the release https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-assets-prod/openstack-map-v20210201-01-1.pdf | 09:35 |
ttx | hmm | 09:36 |
ttx | That page should not have been linked to until release time | 09:36 |
ttx | If you click on latest release on https://www.openstack.org/software/ it still (correctly) links to Bobcat | 09:37 |
ttx | But the openstack.org main page seems to have jumped the gun | 09:37 |
ttx | I'll try to get it fixed | 09:39 |
ttx | frickler: the map is driven from changes proposed to https://opendev.org/openinfra/openstack-map | 09:40 |
ttx | 2023.05.01 version is the current release | 09:41 |
ttx | Version linked from https://www.openstack.org/software/ is current | 09:41 |
ttx | frickler: Where did you get the link to that old version? | 09:42 |
frickler | ttx: directly on openstack.org, very big with video link and all | 09:43 |
frickler | oh, you mean the map? | 09:44 |
elodilles | yeah, on the main page it says: Latest Release: OpenStack Caracal --> https://www.openstack.org/ if you scroll down a bit | 09:44 |
frickler | ttx: that's on the caracal page | 09:44 |
ttx | OK I found it | 09:44 |
ttx | Should be fixed to link to latest too | 09:45 |
frickler | and the 2023.05 map is still outdated, ec2-api, tripleo, chef, all gone. some more inactive and not released | 09:45 |
ttx | frickler: yeah nobody proposed those updates to openstack-map apparently | 09:46 |
ttx | maybe we should have a release process step to check for that after the milestone-2 membershipfreeze | 09:47 |
ttx | If someone can propose changes I can review them and get staff to update the PDF accordingly | 09:49 |
* hberaud school run, bbiab | 09:51 |
frickler | yeah, I'll look at the map repo | 09:55 |
frickler | seems some things are already in there, like tripleo removal | 09:56 |
fungi | frickler: https://www.openstack.org/software/openstack-caracal was published early for double-checking (like every year), just not linked from other pages yet | 09:59 |
ttx | Alright, openstack.org is back to Bobcat | 10:00 |
fungi | it's a questionable workflow choice, but that's up to the folks making that content i suppose | 10:00 |
ttx | looks like there was an error staging the page earlier | 10:00 |
ttx | frickler: thanks for the flag! | 10:00 |
elodilles | meanwhile, i think that things look fine to proceed with the Final release patch. https://status.python.org/ looks green, zuul is OK too, right fungi? | 10:01 |
fungi | oh, i see, the problem is the main page of the site in that case | 10:01 |
ttx | yeah, that specific bit should definitely not have been enabled | 10:01 |
fungi | elodilles: checking | 10:02 |
elodilles | ~o~ | 10:02 |
ttx | yeah map 2023.05.01 already has tripleo removed normally | 10:03 |
ttx | Also feels like Skyline could be added? Not sure what status it has these days | 10:04 |
fungi | zuul status graphs look okay | 10:05 |
elodilles | fungi: ACK, thanks o/ | 10:05 |
frickler | ttx: skyline is still listed as "emerging". but I've been thinking about that too, to fill the space emptied by EC2API. not sure how to lay that out, though | 10:05 |
fungi | ttx: skyline is still an "emerging technology" according to the tc | 10:05 |
elodilles | release-team: let's review the release patch: https://review.opendev.org/c/openstack/releases/+/914764 | 10:05 |
frickler | that doesn't mean it can't be listed, though, IMO | 10:05 |
ttx | frickler: our designer will adjust width if needed | 10:06 |
fungi | "emerging" status was a reason not to merge sunbeam's release highlights, but i suppose that was more because they're not part of the release officially? | 10:07 |
ttx | yeah... skyline deliverables are part of release so I think it would make sense to add them | 10:08 |
fungi | oh, nevermind, sunbeam isn't emerging, just not released with the other deliverables | 10:09 |
frickler | https://review.opendev.org/c/openinfra/openstack-map/+/914948 not sure if it will work that way though, maybe better just mark inactive projects as "don't publish on map"? | 10:13 |
frickler | also we should probably add that update to the TC inactive workflow | 10:13 |
frickler | also interesting question whether sunbeam should still be on that map then @jamespage? | 10:15 |
elodilles | gentle reminder that we are 15 minutes behind schedule, this should be reviewed as soon as possible: https://review.opendev.org/c/openstack/releases/+/914764 | 10:15 |
fungi | i guess the map is official openstack projects grouped by function, not necessarily limited to release-managed projects | 10:15 |
frickler | are the navigator pages also fed from that repo? https://www.openstack.org/software/project-navigator/deployment-tools | 10:16 |
frickler | elodilles: ok, if nobody else will go ahead, I'll just approve it, then? | 10:17 |
elodilles | frickler: +1 | 10:18 |
jamespage | frickler: sunbeam should be in that map - we're still emerging and likely to be somewhat release independent in terms of what and when we release - which made it a bit awkward to fit into the cycle highlights | 10:18 |
jamespage | but we'll figure that out | 10:18 |
frickler | oh, the CI for the map doesn't actually build one, /me sad | 10:21 |
elodilles | :/ | 10:21 |
elodilles | (btw, now that we are talking about https://www.openstack.org/ , there is a teeny-tiny issue i'd like to mention: under the 'OpenInfra Foundation Member Spotlight', when Ericsson is randomly shown, its link is broken (404) as it points to 'https://www.openstack.org/www.ericsson.com', presumably because the href is missing its scheme and so resolves relative to the site. does anyone know where this can be fixed?) | 10:30 |
fungi | elodilles: i can bring it up with the folks who manage the site in a few hours once they're awake | 10:32 |
elodilles | fungi: ACK, thanks in advance! | 10:33 |
fungi | and yeah, the map isn't auto-generated because it requires some manual layout work to produce, from what i understand | 10:33 |
frickler | gate almost done | 10:49 |
hberaud | \o/ | 10:49 |
opendevreview | Merged openstack/releases master: Caracal Final https://review.opendev.org/c/openstack/releases/+/914764 | 10:49 |
elodilles | there it is ^^^ \o/ | 10:49 |
elodilles | now comes the post-release jobs | 10:50 |
hberaud | IIRC we should now wait for the end of the post-release jobs before launching the check for missing tarballs | 10:50 |
frickler | everybody says post-release, why is the pipeline named release-post? that got me confused multiple times already | 10:51 |
hberaud | I don't have the historical naming decision | 10:52 |
fungi | it's the release-specific version of the "post" pipeline (so its priority can be raised above other pipelines) | 10:53 |
* hberaud note this definition | 10:53 |
fungi | it runs after the gate pipeline (so post-gate), not really after a release (which post-release would imply) | 10:54 |
fungi | but the rationalization there is weak, pipeline names are arbitrary strings anyway | 10:55 |
elodilles | frickler: you are right, the queue name is release-post o:) i guess we just write it 'post-release' since that's how it would read grammatically. though it's better to emphasize that it is a 'release' job, so i'm OK with the naming o:) | 10:56 |
elodilles | fungi: +1 | 10:56 |
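For readers unfamiliar with Zuul, the priority mechanism fungi describes looks roughly like this in a pipeline definition. A minimal sketch, assuming a supercedent manager and a branch-update trigger; OpenDev's actual project-config may differ:

```yaml
# Sketch of a release-post-style pipeline (settings here are assumptions,
# not OpenDev's real configuration).
- pipeline:
    name: release-post
    description: Jobs run after merges to release-related repositories.
    manager: supercedent
    # The reason for a dedicated pipeline: its precedence can be raised
    # above the ordinary "post" pipeline so release work is not starved.
    precedence: high
    trigger:
      gerrit:
        - event: ref-updated
          ref: ^refs/heads/.*$
```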
frickler | looks like tarballs should be ready? 64 refs now in release pipeline | 11:30 |
frickler | zuul seems to get stuck when trying to show builds or buildsets though :-/ | 11:31 |
hberaud | IIRC we have to wait for end of these jobs | 11:32 |
hberaud | not sure all tarballs are already there at this point | 11:32 |
hberaud | and AFAICS a couple jobs are just queued and not yet started | 11:33 |
elodilles | hmmm, zuul shows that 'openstack-upload-github-mirror' jobs are running, but if i look into the jobs it shows e.g. "Build ID 900ba831db5d455d8b0b47a430fb168c not found --- END OF STREAM ---" | 11:35 |
elodilles | and the rest of the jobs haven't even started. strange. | 11:35 |
fungi | seems like the pipeline status update is waiting for the management events to process | 11:35 |
fungi | completed builds open fine, but the in-progress builds aren't connecting to a log stream. yeah, probably means they either haven't really started to run yet or have already finished but the result hasn't been popped off the queue yet | 11:37 |
fungi | if i fabricate a build result url for one of them, it only has the start data recorded so far | 11:39 |
hberaud | I haven't seen significant updates for ~20 min | 11:40 |
elodilles | where the 'release-openstack-python' job started, the situation is the same: opening the job says the same "Build ID 9ad86b1f9c654d198d609fd7b2df1a34 not found --- END OF STREAM ---" | 11:40 |
elodilles | yes it seems it's not updating | 11:41 |
fungi | "5 management events" | 11:41 |
fungi | i don't think it updates the queue state until it processes those | 11:41 |
fungi | node requests and in-use nodes definitely picked up at the same time the tags got pushed | 11:43 |
frickler | let's hope that the sandbox bug reproducer doesn't have any global effect | 11:45 |
fungi | 2024-04-03 11:07:43,133 DEBUG zuul.Pipeline.openstack.release: [e: 561ba4a3f77049a1b4b12b98d8292489] Build <Build 9ad86b1f9c654d198d609fd7b2df1a34 of release-openstack-python voting:True> started | 11:46 |
fungi | 2024-04-03 11:07:53,278 DEBUG zuul.Scheduler: [e: 561ba4a3f77049a1b4b12b98d8292489] [build: 9ad86b1f9c654d198d609fd7b2df1a34] Processing result event <BuildStatusEvent build=9ad86b1f9c654d198d609fd7b2df1a34 job=c2702684fb5c4737b3ca095b62159032> | 11:46 |
fungi | that's the adjutant release-openstack-python build that's still showing as in-progress | 11:47 |
fungi | just the first example i searched the logs for | 11:48 |
fungi | so yeah, it seems that result events are just queuing up, but also that build only ran 10 seconds | 11:49 |
* hberaud short dad taxi | 11:50 | |
fungi | i think that project might have a problem, but perhaps unrelated to the overall situation | 11:50 |
fungi | https://pypi.org/project/barbican/ shows barbican did get released, but its result also hasn't been processed | 11:50 |
hberaud | zaqar just showed some updates on my side | 11:51 |
fungi | oh, it's updating again | 11:51 |
fungi | the management events count fell to 0 | 11:52 |
fungi | so yeah, seems we were waiting for a pipeline reconfiguration, possibly due to something merging that changed job configs? | 11:52 |
fungi | as the results count falls, more of the statuses should get corrected | 11:53 |
fungi | false alarm on adjutant, it does seem to have completed successfully | 11:54 |
fungi | i forgot its name on pypi is different | 11:54 |
fungi | not sure why the started and result processing events were so close together in the debug log, the build itself took 2 minutes to complete | 11:55 |
fungi | results queue is caught back up now | 11:59 |
elodilles | yepp, now the queue is shrinking | 12:00 |
elodilles | so, fingers crossed | 12:00 |
* hberaud back | 12:08 | |
hberaud | will have to do another dad taxi run in ~1h15 | 12:08 |
elodilles | hberaud: ACK | 12:08 |
hberaud | Wednesday is kids' day... | 12:09 |
elodilles | :) | 12:09 |
elodilles | almost there: queue length is 4 :-o | 12:13 |
frickler | nova upload seems to have failed https://zuul.opendev.org/t/openstack/build/567d434246344041a1a8a4f7a19fce06 | 12:13 |
elodilles | :S | 12:13 |
fungi | yeah, looks like pypi barfed | 12:14 |
hberaud | /o\ | 12:14 |
fungi | i think we can reenqueue the tag, double-checking | 12:14 |
frickler | same for glance and possibly some others | 12:14 |
elodilles | and some other, too: trove trove-dashboard skyline-console.. | 12:14 |
elodilles | etc :/ | 12:14 |
fungi | argh | 12:14 |
fungi | terrible time for a pypi outage | 12:15 |
hberaud | yes | 12:15 |
fungi | i guess we need to collect a complete list of those and then i can batch re-enqueue the corresponding tags | 12:15 |
elodilles | 12 recent ones: https://zuul.opendev.org/t/openstack/builds?job_name=release-openstack-python&result=POST_FAILURE&skip=0 | 12:15 |
elodilles | interestingly the times match the time when the queue became unblocked | 12:17 |
elodilles | (~11:50) | 12:17 |
hberaud | we surely hit a timeout... | 12:17 |
hberaud | or something like that | 12:18 |
fungi | those are start times | 12:18 |
hberaud | indeed | 12:18 |
fungi | that's when zuul's queue processing resumed and it started a lot of builds | 12:18 |
fungi | the durations are all short enough that they shouldn't have encountered timeouts | 12:19 |
fungi | nor do i see any evidence of job timeouts in the logs | 12:19 |
hberaud | ack | 12:19 |
frickler | so do we need to protect pypi uploads with a semaphore? maybe like max 5 or 10 in parallel? | 12:19 |
fungi | i would be surprised if we broke pypi | 12:19 |
hberaud | AFAIK that's the first time we've faced such a situation | 12:20 |
hberaud | so IMO I don't think we need a semaphore | 12:20 |
fungi | https://status.python.org/ isn't indicating any problems | 12:20 |
fungi | but it may only update when the admins find out something broke | 12:21 |
hberaud | and I agree with fungi, I don't think we broke pypi, their infra seems robust | 12:21 |
fungi | likely we just got unlucky with something else they've got going on | 12:21 |
fungi | i'll try to reenqueue one tag and see if it works a second time | 12:21 |
hberaud | wfm | 12:22 |
fungi | wait, no i won't | 12:22 |
fungi | https://pypi.org/project/nova/ | 12:22 |
fungi | we're not going to be able to rerun these | 12:22 |
fungi | the upload worked, but returned an error | 12:22 |
fungi | oh, actually it only uploaded the wheel and not the sdist | 12:23 |
fungi | https://pypi.org/project/nova/#files | 12:23 |
fungi | this is going to be a real problem. pypi won't allow us to reupload any file with the same filename | 12:23 |
hberaud | woot... | 12:23 |
elodilles | /o\ | 12:24 |
fungi | and we discarded the sdist and signatures when the build aborted due to the error | 12:24 |
hberaud | can't we remove existing artifacts manually and then reenqueue? | 12:24 |
fungi | we can remove them, but we can't reupload them. pypi blocks upload of any filename which ever previously existed, as a security measure | 12:25 |
hberaud | I see | 12:26 |
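A console sketch of the failure mode described above; the response text is approximate, but the 400 on filename reuse is PyPI's documented behavior (https://pypi.org/help/#file-name-reuse):

```console
$ twine upload dist/nova-29.0.0.tar.gz
Uploading nova-29.0.0.tar.gz
HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/
           File already exists. See https://pypi.org/help/#file-name-reuse
```

Deleting the file on PyPI does not help: the filename stays burned, so the only way forward is a new version string.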
frickler | so we need new tags for these? | 12:26 |
fungi | https://pypi.org/project/trove/#files is in the same situation (wheel but no sdist) | 12:26 |
fungi | new tags would be the fastest solution, yes, bump them all by a patchset of .1 | 12:26 |
hberaud | wfm | 12:27 |
fungi | other options include hacking up the job to only upload an sdist, or building the missing artifacts and signatures and uploading them by hand, all of which would take more time than we have | 12:27 |
hberaud | all jobs are now finished so I think 12 is the final number for the failing series | 12:28 |
fungi | also the .0 tags are going to get forever marked as erroneous in our consistency checker | 12:28 |
fungi | ttx: ^ heads up in case you're not following closely | 12:28 |
elodilles | so a PATCH bump for the 12 failed deliverables, using the same hash as was in the Final patch, right? | 12:28 |
hberaud | as this is an error I think the best way is to bump the patchset | 12:28 |
fungi | elodilles: correct | 12:29 |
hberaud | patchset would remove ambiguity | 12:29 |
fungi | so basically release nova 29.0.1 with the same commit that 29.0.0 pointed at | 12:30 |
frickler | patchset = patch version or am I misunderstanding something? | 12:30 |
hberaud | bugfix version | 12:30 |
fungi | yes, sorry, patch level component of the version string | 12:30 |
fungi | my fingers like to type patchset for obvious reasons | 12:30 |
hberaud | lol | 12:31 |
frickler | elodilles: do you want to make that patch? or shall I? | 12:32 |
elodilles | i'm doing it right now | 12:32 |
ttx | I'm here, just having a hard time to follow | 12:33 |
fungi | ttx: 12 of the deliverables had successful wheel uploads to pypi but then it errored back during sdist uploading | 12:34 |
ttx | so we have partial uploads to PyPI and are forced to change the version number to do a new one | 12:34 |
fungi | since pypi won't allow re-uploading of files that already exist, we can't reenqueue those tags | 12:34 |
fungi | so, yes, fastest solution is x.y.1 versions of the 12 deliverables that were impacted | 12:34 |
ttx | yeah that makes sense | 12:34 |
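In openstack/releases terms, the fix is a new version entry in each affected deliverable file that reuses the existing commit hash. A sketch for nova with a placeholder hash (the real file carries the actual SHA):

```yaml
# deliverables/caracal/nova.yaml (excerpt; <sha> is a placeholder)
releases:
  - version: 29.0.0
    projects:
      - repo: openstack/nova
        hash: <sha>
  - version: 29.0.1
    projects:
      - repo: openstack/nova
        # same commit as 29.0.0: only the tag and version change, so the
        # sdist/wheel get brand-new filenames that PyPI will accept
        hash: <sha>
```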
opendevreview | Elod Illes proposed openstack/releases master: Re-release final RC for 12 failing deliverables https://review.opendev.org/c/openstack/releases/+/914958 | 12:35 |
elodilles | please review carefully ^^^ | 12:35 |
fungi | and live with the fact that the x.y.0 versions have missing artifacts | 12:35 |
fungi | we can also yank them on pypi if we want and/or remove them from the releases version list i guess, but that's less urgent | 12:36 |
fungi | mainly just hiding them so they cause less confusion | 12:36 |
ttx | elodilles: patch looks good, happy to push W+1 if everyone is ok | 12:38 |
elodilles | ACK | 12:38 |
ttx | are we ready to approve that one? previous jobs completed ? | 12:38 |
hberaud | lgtm | 12:38 |
hberaud | I think that yes we are ready | 12:39 |
fungi | yes, release pipeline is empty | 12:39 |
hberaud | previous jobs completed | 12:39 |
ttx | Alright here it comes | 12:39 |
hberaud | thanks elodilles | 12:40 |
elodilles | np | 12:40 |
frickler | fungi: seems the check pipeline has quite some backlog, do we want to move ^^ directly into gate for timing reasons? | 12:41 |
fungi | yes, we can enqueue it to the gate | 12:41 |
fungi | do you want to do that, or shall i? | 12:42 |
frickler | fungi: if you have the command handy please go ahead | 12:42 |
fungi | done | 12:44 |
frickler | what about the manuals patch, do we want to proceed with that, too, or does that have to wait until the end really? | 12:44 |
fungi | i'll also dequeue it from check in order to avoid confusion | 12:44 |
fungi | 9 minutes eta until it merges | 12:45 |
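The promote fungi performs amounts to something like the following zuul-client invocations (the patchset suffix and exact flags are assumptions; syntax varies slightly between zuul-client releases):

```console
# enqueue the change straight into gate, skipping the check backlog
$ zuul-client enqueue --tenant openstack --pipeline gate \
      --project openstack/releases --change 914958,1
# then drop it from check to avoid confusing duplicate results
$ zuul-client dequeue --tenant openstack --pipeline check \
      --project openstack/releases --change 914958,1
```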
fungi | frickler: i think the main reason the docs update is held to the end is to avoid implying it's released too far in advance, and in case the release is aborted for some reason | 12:47 |
elodilles | yepp. though it took some time on the last occasion, so we should start merging it before 13:00 UTC. maybe the re-release patch will be merged by then :/ | 12:49 |
elodilles | though if we want to wait for pypi ACK, then we can wait a bit more until every package appears well on pypi | 12:50 |
fungi | or at least be sure one of them uploads and that we don't have some persistent problem affecting those 12 sdists | 12:51 |
elodilles | fungi: +1 | 12:51 |
elodilles | anyway, please review the patch in advance, so that we just need a +W to proceed if everything turns out to be fine | 12:52 |
elodilles | btw, the "openstack-tox-docs" job has the same situation when looking at the job logs: 'Build ID 189ef8a138e6499aa147081dd0c03d06 not found' :S | 12:54 |
elodilles | (0 management events) | 12:54 |
fungi | you caught it between when the build ended and when zuul reflected its status as completed, i suspect | 12:57 |
elodilles | hmmm, validation runs longer than i thought :S | 13:03 |
hberaud | skyline-console... | 13:03 |
hberaud | 12 deliverables to check... | 13:04 |
hberaud | trove-dashboard | 13:04 |
elodilles | compared to 66 in the original patch | 13:04 |
elodilles | though nova might be the biggest deliverable / repo | 13:04 |
hberaud | almost there... | 13:05 |
hberaud | trove | 13:05 |
hberaud | done | 13:07 |
opendevreview | Merged openstack/releases master: Re-release final RC for 12 failing deliverables https://review.opendev.org/c/openstack/releases/+/914958 | 13:08 |
elodilles | finally \o/ | 13:08 |
elodilles | let's see the release-post queue o:) | 13:08 |
* hberaud grabs popcorn | 13:08 |
fungi | seems there's a management event being processed for the release-post pipeline, and the trigger event to enqueue the tagging job is waiting behind that | 13:09 |
hberaud | nothing appear queued on my side | 13:14 |
fungi | yeah, it's that "3 trigger events, 1 management events, ..." under the heading of the release-post pipeline | 13:15 |
hberaud | ok | 13:15 |
fungi | seems there's another reconfiguration underway affecting that pipeline (the "management event") which has to complete before the trigger from the change-merged event (one of the "trigger events") gets processed | 13:16 |
fungi | unfortunately, reconfigurations seem to be taking a very long time. not sure if it's that we have too many projects with too many branches and zuul is struggling to recheck all the configuration in them, or if we have some other condition making it take so long | 13:17 |
fungi | it's something to look into after the release is done, for sure | 13:18 |
fungi | anyway, it's in there now | 13:19 |
fungi | and tag-releases is running | 13:20 |
ttx | should we run missing-releases? | 13:20 |
hberaud | I think we should wait for these late deliverables | 13:21 |
hberaud | (IMO) | 13:21 |
ttx | oh right, misunderstood current state | 13:21 |
fungi | yeah, we're almost at the point where we should see the 12 tags start appearing in the release pipeline | 13:22 |
fungi | there they are | 13:25 |
hberaud | \o/ | 13:28 |
hberaud | I need to grab my son from his ping pong lesson | 13:30 |
* hberaud dad taxi part 3! | 13:30 | |
elodilles | ACK | 13:30 |
elodilles | :) | 13:30 |
hberaud | back in minutes | 13:30 |
elodilles | (management events continue turning up from time to time :S) | 13:31 |
fungi | at least they seem to be running finally | 13:35 |
fungi | waiting for one of the release-openstack-python builds to kick off | 13:37 |
* hberaud back | 13:45 | |
ttx | that tag pipeline is not moving fast | 13:50 |
ttx | ah, some jobs queued | 13:51 |
fungi | finally | 13:51 |
fungi | watching glance to see if it uploads this time | 13:51 |
elodilles | fingers crossed :X | 13:51 |
hberaud | oars crossed even | 13:52 |
elodilles | :D | 13:52 |
hberaud | too much suspense today :) | 13:53 |
frickler | from the console log the glance pypi upload succeeded | 13:54 |
elodilles | glance seems to be there: https://pypi.org/project/glance/#files | 13:54 |
fungi | uploaded! https://pypi.org/project/glance/#files | 13:54 |
fungi | yep | 13:54 |
hberaud | same thing for manila | 13:55 |
hberaud | https://pypi.org/project/manila/#files | 13:55 |
elodilles | nova, too \o/ | 13:55 |
hberaud | networking-bagpipe too | 13:56 |
elodilles | it's nice that jobs are not finishing according to zuul, all having 'Build ID not found' :P but at least the files are there | 13:57 |
frickler | yes, zuul is taking its time to process results again | 13:57 |
fungi | so the good news is that whatever pypi's problem was has cleared up, but these stalls for build processing are pretty crippling | 13:57 |
elodilles | yepp :/ | 13:58 |
ttx | running missing-releases to see how far we are | 13:58 |
elodilles | ttx: ++ | 13:58 |
hberaud | ok thanks ttx | 13:59 |
fungi | also we could probably consider approving the docs change at this point, since the pypi issue seems to not be persistent | 13:59 |
ttx | yeah... just a sec as I complete the missing-releases | 13:59 |
elodilles | (8 out of 12 release-openstack-python jobs have officially finished) | 14:00 |
hberaud | \o/ | 14:00 |
ttx | so far so good, standing by to approve doc change | 14:03 |
hberaud | nice | 14:03 |
elodilles | (all 12 release-openstack-python jobs succeeded) | 14:04 |
hberaud | awesome | 14:04 |
ttx | I feel confident we can approve the docs patch at this point | 14:05 |
ttx | We can fix missing tarballs, if any, while the patch is processed | 14:05 |
hberaud | ttx: can we strike out the missing-release task? | 14:05 |
ttx | almost,... in progress | 14:06 |
hberaud | ack | 14:06 |
ttx | I can W+1 the docs change once you pile up your approvals | 14:06 |
elodilles | here is the patch: https://review.opendev.org/c/openstack/openstack-manuals/+/914603 | 14:07 |
ttx | https://review.opendev.org/c/openstack/openstack-manuals/+/914603 | 14:07 |
frickler | I won't +2 since I submitted the patch, feel free to go ahead though | 14:08 |
ttx | alright w+1 | 14:08 |
elodilles | ~o~ | 14:09 |
ttx | missing-releases completed successfully | 14:10 |
elodilles | \o/ | 14:10 |
ttx | https://review.opendev.org/c/openstack/releases/+/914858 is up next | 14:10 |
fungi | so no new surprises from missing-releases? | 14:10 |
ttx | it did not report any issue | 14:11 |
fungi | not even the broken partial/missing x.y.0 artifacts? | 14:11 |
ttx | no... | 14:11 |
ttx | but then it only checks latest | 14:12 |
fungi | oh, got it | 14:13 |
fungi | that makes sense then | 14:13 |
frickler | I guess one could run it on HEAD^1 to cross-check that | 14:14 |
* frickler goes to test that | 14:14 | |
fungi | i thought there was a script that checked all the historical links, but maybe that's a different step | 14:15 |
elodilles | (yepp, it reported the missing x.y.0 to me ~2 hrs ago) | 14:15 |
elodilles | and we didn't even get false alarms like in the past cycles (due to py2py3 universal wheels, or something like that) | 14:16 |
frickler | elodilles: like this? did not find python 3 wheel https://tarballs.openstack.org/ansible-role-atos-hsm/ansible_role_atos_hsm-7.0.0-py3-none-any.whl | 14:17 |
hberaud | absence of evidence is not evidence of absence | 14:17 |
elodilles | frickler: nope, it was some error that was reported at the end of the run | 14:18 |
hberaud | indeed we've seen these errors for a couple of series now | 14:18 |
hberaud | but at the same time it seems to me that several things have been done around these universal wheels | 14:19 |
hberaud | on our side, on things like distutils/setuptools/pbr and on pypi | 14:20 |
frickler | seems there's only this single repo remaining with that situation. it does have a good py2/3 wheel though | 14:23 |
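The false alarms came from checking for a literal '-py3-none-any.whl' filename, which a universal py2.py3 wheel never matches even though it supports Python 3. A minimal Python sketch of a tag-aware check (hypothetical helper, not the actual missing-releases code):

```python
from packaging.tags import parse_tag

def wheel_supports_py3(filename: str) -> bool:
    """True if any compressed tag in a wheel filename targets py3."""
    # wheel filenames end in {python tag}-{abi tag}-{platform tag}.whl
    interp, abi, plat = filename[: -len(".whl")].split("-")[-3:]
    return any(tag.interpreter == "py3"
               for tag in parse_tag(f"{interp}-{abi}-{plat}"))

# a universal wheel passes even though "py3-none-any" never appears
# verbatim in its filename
assert wheel_supports_py3("ansible_role_atos_hsm-7.0.0-py2.py3-none-any.whl")
```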
Clark[m] | I don't know if it matters or is too late, but an x.y.0.post0-type version may be more accurate. However, figuring out the right format is probably more trouble than it is worth; .1 releases are cheap and easy | 14:25 |
elodilles | frickler: my bad, those were exactly the errors (e.g.: https://paste.opendev.org/show/bAagRq3rkx39T10YwG1v/ ) | 14:25 |
elodilles | so who wants to push the button here? https://review.opendev.org/c/openstack/releases/+/914858 o:) | 14:26 |
* frickler likes pushing buttons | 14:27 | |
hberaud | AFAICS I'd argue that yes | 14:27 |
elodilles | there it goes, thanks frickler \o/ | 14:28 |
frickler | I even have a big red button lying around here somewhere. always wanted to insert some circuitry to make it a USB keyboard with just the enter key ;) exactly for this use case | 14:29 |
elodilles | :D | 14:29 |
hberaud | :) | 14:29 |
elodilles | maybe next time ;) | 14:29 |
hberaud | you have 6 months | 14:29 |
elodilles | :] | 14:29 |
frickler | that's some motivation indeed :D | 14:30 |
elodilles | :)) | 14:30 |
elodilles | btw, i've prepared my part of the release announce mail, feel free to review: https://etherpad.opendev.org/p/relmgmt-weekly-emails | 14:30 |
fungi | Clark[m]: also i think we've never used post releases, so didn't seem like the time to experiment | 14:30 |
Clark[m] | ++ | 14:31 |
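For context on that suggestion: PEP 440 orders a post-release after its base version but before the next patch release, so x.y.0.post0 would have signaled "re-upload of the same code" more precisely than x.y.1. A quick check with the packaging library:

```python
from packaging.version import Version

# a post-release sorts between the base version and the next patch level
assert Version("29.0.0") < Version("29.0.0.post0") < Version("29.0.1")
```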
Clark[m] | Do we know why pypi failed in the first place? I wonder if they were throttling us | 14:31 |
elodilles | maybe we can test it with release-test repo, so that we can use next time (i really hope we'd never need it) | 14:32 |
hberaud | and I just pasted my part at the bottom | 14:32 |
frickler | do we want to mention the .1 releases in the mail? | 14:32 |
hberaud | maybe in the openstack-discuss part | 14:32 |
elodilles | hberaud: LGTM! | 14:32 |
fungi | Clark[m]: i suppose it could have been an account-level throttle | 14:32 |
hberaud | at the bottom | 14:32 |
fungi | but seems odd it would have been applied at the end of the upload rather than the start | 14:33 |
hberaud | frickler: or maybe rather in a separate thread directly targeted at the right team | 14:33 |
hberaud | opinion? | 14:34 |
elodilles | frickler: IMO no need to mention in the 'announce' list, as hberaud says, maybe on openstack-discuss | 14:34 |
fungi | i'm standing by to approve the openstack-announce post when the time comes | 14:34 |
elodilles | fungi: thx | 14:34 |
fungi | but after that i need to take a break so i can get the shower i meant to take at 10:00z | 14:34 |
hberaud | I think adding it to the official announce emails would dilute the broken release topic | 14:35 |
elodilles | fungi: do you suggest then to send it now so that you can shower? :) or should we wait a bit to be closer to 15:00 UTC? | 14:35 |
frickler | ok, I'm also fine with not mentioning, was just an idea of mine | 14:36 |
fungi | we should send it before 15:00 since that's when all the press releases go out | 14:36 |
hberaud | sure np | 14:36 |
fungi | also it could be mentioned in a follow-up reply on openstack-discuss rather than in the announcement message | 14:36 |
elodilles | btw, we usually wait until docs.o.o updates, right? is everything up to date there? | 14:37 |
hberaud | will send my part once openstack-announce shows the first one | 14:37 |
frickler | zuul says the docs publishing will finish in about 18 mins | 14:38 |
elodilles | 14:56 UTC | 14:38 |
fungi | and then there's a 0-5 minute wait for the cron that releases the afs volumes | 14:38 |
elodilles | if that holds | 14:38 |
elodilles | hmmm | 14:39 |
fungi | it runs every 5 minutes to sync changes from the writeable volume to the read-only replicas which the site serves | 14:39 |
fungi | basically if the "Synchronize files to AFS" task completes between 14:55 and 15:00 then the content of the site will update at 15:00 | 14:44 |
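That periodic sync is conceptually a cron job releasing the read-write AFS volume to the read-only replicas the site serves from. A sketch with an invented volume name (OpenDev's actual job and volume names differ):

```cron
# every 5 minutes, push the read-write volume's contents out to the
# read-only replicas that back docs.openstack.org
*/5 * * * * root vos release docs.root -localauth
```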
opendevreview | Merged openstack/releases master: Mark 2024.1 Caracal as released https://review.opendev.org/c/openstack/releases/+/914858 | 14:45 |
elodilles | fungi: i've sent the announcement, feel free to approve any time | 14:46 |
frickler | sadly there's no indication of progress at all on the job console | 14:48 |
fungi | cool, i'll give it a few minutes so hopefully the releases site update and docs update are close to being reflected publicly | 14:49 |
elodilles | +1 | 14:50 |
frickler | finished | 14:52 |
fungi | https://docs.openstack.org/ redirects to https://docs.openstack.org/2024.1/ | 14:53 |
elodilles | \o/ | 14:53 |
fungi | but some files may not be fully updated for another 1.5 minutes | 14:53 |
fungi | also the publish-tox-doc-releases build waiting in release-post hasn't started yet | 14:53 |
fungi | approved the announcement now | 14:55 |
fungi | having the releases site trail the announcement by a few minutes won't hurt | 14:55 |
fungi | also mailman takes at least that long to send copies to all the subscribers | 14:56 |
ttx | should we flip the switch on the openstack.org website? | 14:56 |
fungi | i suppose that can happen at any time too now | 14:56 |
hberaud | updated the template with the link from openstack-announce, please double check https://etherpad.opendev.org/p/relmgmt-weekly-emails | 14:57 |
ttx | ok will ask | 14:57 |
elodilles | hberaud: link is working, mail LGTM! | 14:58 |
hberaud | thx | 14:58 |
frickler | hberaud: +1 | 15:00 |
hberaud | sent | 15:00 |
frickler | \o/ party time | 15:00 |
elodilles | ~o~ | 15:01 |
fungi | builds are finally starting for the releases site update | 15:01 |
ttx | Zuul certainly feels lazy today | 15:05 |
hberaud | I think we can break out the champagne | 15:08 |
elodilles | 🍾 | 15:09 |
fungi | discussion on #opendev has turned up a possibility, we're getting a lot of web queries that might be overloading the relational database where it records build results and such | 15:09 |
fungi | if db writes are taking a long time (or timing out) then zuul might be blocking in places where we just don't expect the db to behave pathologically | 15:12 |
fungi | or this could be a recent regression in zuul leading to some sort of cascade effect | 15:14 |
elodilles | :S | 15:26 |
fungi | the publish build is finally starting | 15:28 |
ttx | It's up now | 15:37 |
elodilles | \o/ | 15:37 |
fungi | confirmed, https://releases.openstack.org/ looks correct | 15:38 |
elodilles | yepp | 15:38 |
elodilles | thanks everyone for your work \o/ | 15:39 |
hberaud | \o/ | 15:58 |
clarkb | thinking out loud here: if there is a throttle (or even an unintentional one due to how our release process works) we may want to reach out to pypi and ask if there is anything we should do differently to avoid issues in the future | 16:11 |
clarkb | I wouldn't care so much except they don't let you easily modify existing content which puts us in a weird spot | 16:11 |
fungi | i suppose it's possible that whatever's going on with zuul caused those jobs to be more bursty than usual and we tripped some limit | 16:29 |
fungi | lots of articles are out, for people who enjoy seeing the press around releases: | 16:46 |
fungi | https://www.itprotoday.com/iaas-and-paas/openstack-caracal-release-focuses-ai-performance-security | 16:46 |
fungi | https://www.techzine.eu/news/infrastructure/118369/caracal-release-of-openstack-bets-on-ai-workloads-and-vmware-refugees/ | 16:46 |
fungi | https://www.computerweekly.com/blog/Open-Source-Insider/OpenStack-Caracal-improves-agility-delivers-bite-as-VMware-alternative | 16:46 |