Wednesday, 2024-09-18

*** bauzas- is now known as bauzas07:22
zigoWhen installing Dalmatian, I have (yet) another regression with openstackclient:08:17
zigohttps://paste.opendev.org/show/bSKLTvF36ChOGBfUgVhw/08:17
zigoSomehow, it fails loading the metric module with stevedore.08:17
zigoReverting to openstackclient 6.6.0 fixed everything.08:17
ttxstephenfin: see above ^08:46
elodillesbauzas also said on nova channel: 10:14 < bauzas> so, I'd prefer to rollback the u-c by OSC6 now08:54
zigoThat's what I did in my CI, though it'd be nicer to understand what's going on.08:59
fricklerzigo: can you check whether the "metric delete" command actually is present for you in 6.6.0? I think what's new in 7.x is only the logging for the import failure. otherwise that's a bug in https://opendev.org/openstack/python-observabilityclient/src/branch/master/observabilityclient/v1/cli.py08:59
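
A quick way to reproduce what stevedore is complaining about is to walk the plugin entry points directly and try to import each one. A minimal sketch, assuming the plugins are registered under the 'openstack.cli.extension' entry-point group (an assumption worth verifying against the installed client):

    # check_osc_plugins.py - hypothetical helper, not part of any repo
    from importlib.metadata import entry_points

    # 'openstack.cli.extension' is assumed to be the OSC plugin namespace
    for ep in entry_points(group='openstack.cli.extension'):
        try:
            ep.load()
            print(f"{ep.name}: OK")
        except Exception as exc:  # the import failure OSC 7.x now logs
            print(f"{ep.name}: FAILED ({exc})")
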
zigoWhat is observabilityclient btw? I packaged it, but I don't get it.09:00
fricklersomething invented by the telemetry project afaict, I don't remember the details09:01
zigofrickler: Sorry, can't really check easily, my CI seems broken somehow, I have to repair it.09:01
zigoOh, just got what's happening ... :P09:02
zigoOne VM stayed up after clean-up, looks like.09:02
zigoI'll be able to tell soon.09:02
zigooh, silly me, it's easy to check,09:03
zigoCaracal is what I have in production, hang on! :)09:03
zigoAh no, Caracal is openstackclient 5.4.0 ...09:04
zigoFAIL again ... :P09:04
zigofrickler: 09:04
zigo# openstack metric delete 09:04
zigousage: openstack metric delete [-h] [--resource-id RESOURCE_ID] metric [metric ...]09:04
zigoopenstack metric delete: error: the following arguments are required: metric09:04
zigoThat's with python3-openstackclient 6.6.009:05
zigopython3-gnocchiclient being installed.09:05
frickleroh, wait, that's with gnocchiclient, not observabilityclient? the former isn't even openstack09:06
fricklerI only checked with codesearch, but couldn't find the "HOME" reference09:06
*** bauzas_ is now known as bauzas09:06
zigofrickler: gnocchiclient is indeed what brings the "metric" sub-command.09:07
zigoThat's indeed from telemetry, and not from OpenStack, though there's no other way, is there?09:07
zigoThe fact that gnocchi and gnocchiclient are not maintained within OpenStack is IMO a **very bad** thing.09:08
zigoThe only reason Red Hat doesn't care is because Red Hat (wrongly) believes it's not needed and that everything should go to prometheus (which cannot handle the load of a moderately large public cloud...).09:09
fricklerwell iirc that was the decision from the gnocchi team, not from openstack09:11
frickleralso while we're getting ever more offtopic: what backend do you use with gnocchi for scale? I remember using ceph a long time ago and that didn't scale well either09:13
zigoWe use Ceph indeed.09:13
zigoWe have about 20 / 25 metrics per VM, and like 6k VMs, and it's ok-ish ...09:13
zigoWe have bottlenecks elsewhere.09:14
zigoLike so many gnocchi-api calls ...09:14
zigoInfluxdb is nice and fast (500k timeseries points per second), but its license doesn't allow clustering unless you pay, so we don't use it.09:15
*** bauzas_ is now known as bauzas09:16
fricklerok, interesting, thx for the data point09:22
gibiare we going to merge https://review.opendev.org/c/openstack/requirements/+/929552 now as osc 7.1.1 is still broken, or does the release team still prefer to roll forward to 7.1.2 with https://review.opendev.org/c/openstack/python-openstackclient/+/929726 ?09:39
gibito be fair I preferred rolling forward to 7.1.1, but now that it turned out to be broken too, I have to say that dansmith was right and we should not rush these things09:41
sean-k-mooneywe can do both09:47
sean-k-mooneyif we merge my patch first it will unblock nova and we can try and test 7.1.2 with a dnm from nova to the requirements patch i think09:48
sean-k-mooneyi need to confirm the requirements repo is in required-projects09:48
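
For reference, a cross-repo test like the DNM patch described above is normally wired up with a Zuul Depends-On footer in the commit message, along the lines of (a sketch, not the actual change):

    DNM: test nova against the osc 6.6.0 upper-constraints pin

    Depends-On: https://review.opendev.org/c/openstack/requirements/+/929552

Depends-On only checks out the unmerged requirements change in the job if openstack/requirements is listed in the job's required-projects, which is what the confirmation above is about.
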
ttxI'd like to make the u-c pin decision by end of week, for sure. In the meantime we should definitely roll forward09:52
ttxbut agree things are trending in the pin direction right now09:52
sean-k-mooneywe really do not have that kind of time09:52
sean-k-mooneyi mean the final rc is meant to be in 8 days09:53
bauzasfwiw we also have another bug report with evacuate once we use my OSC change for providing the right parameter https://bugs.launchpad.net/nova/+bug/208102309:54
gibiyeah we have RC bugs waiting to land before we can cut RC09:54
ttxyeah09:54
bauzasthat's why I'd prefer to pin up to OSC609:54
bauzasat least for Dalmatian09:54
bauzasand then once we have nova RC1, we could just accept OSC7.1.2 (once it's created)09:54
ttxI'd like to hear from stephenfin first, see if he agrees at this stage it looks like the best compromise... but yes ideally we'd make the call today09:57
sean-k-mooneyi can try and create some patches to devstack and nova to try and pull in the different things in flight for 7.1.2 but i would prefer to proceed with https://review.opendev.org/c/openstack/requirements/+/929552 and see if we can land the rc regression fixes in nova in the next hour or so09:57
ttxIdeally we'd also have the TC weigh in on this because a dalmatian pin that excludes all dalmatian releases is going to look bad... but I recognize it might be hard to get09:58
sean-k-mooneywe also need them to weigh in on possibly delaying the integrated release09:59
sean-k-mooneybecause we are getting close to needing to consider that09:59
sean-k-mooneyonce the osc issue is resolved im expecting it to take a day or 2 for our patches to actually land before nova can cut rc110:00
zigoI'm all for 6.6.0, though I'd prefer if openstackclient 7.x was released with reverted patches instead, so we can continue in a monotonic increase.10:00
zigoJust like you wrote ttx ...10:01
sean-k-mooneyzigo: we dont really know what other bugs there are 10:01
sean-k-mooneyso we dont really know what the revert would be10:01
zigogit diff -u 6.6.0 7.1.1 > 1.patch ; patch -p1 -R < 1.patch10:01
zigo:)10:02
sean-k-mooneyreverting everything and asking the sdk team to do all that work again is not really an option unless you just meant on stable not master10:02
sean-k-mooneyat which point it's better to just use 6.6.010:03
zigoYeah, just on stable ... so we can have a 7.2 that works.10:03
sean-k-mooneywell we are not really meant to bump uc on stable once a release happens10:04
zigoI only uploaded 7.x to Experimental, so I don't really care, leaving it bitrot there until there's something better is ok for me.10:04
sean-k-mooneyso a 7.2 would not be pulled into stable10:04
sean-k-mooneyso im not seeing the point of a revert of all the code just on stable10:04
zigoThe point was only for having an always increasing version, if someone downstream already released 7.x.10:04
zigo(not my case, as I just wrote)10:05
bauzasthat's why I'd prefer to pin to 6.x 10:14
bauzasI know this is hard for the OSC team but they can document that people can use OSC7 if they wish, only if they don't need to evacuate10:14
bauzasthat's what releasenotes are made for10:14
bauzasand distros can choose to ship OSC7 with Dalmatian once 7.1.2 is there10:15
bauzaswe're just talking of the integrated release10:15
zigoRight, though as I wrote, that's not the only bug. I have this issue where "user show" didn't work. Hard to tell in which case, though I'm having this with my keystone_user provider resource ...10:18
zigoI'll try and reproduce, but I'm not sure how (yet).10:18
sean-k-mooneyi have pushed 3 patches to test things10:31
sean-k-mooneyi have a dnm to nova to pull in the pin to 6.6.010:31
sean-k-mooneyand then a pair of patches to nova/devstack to use osc from master10:31
gibieven though we don't have an agreement yet, should we also prepare the 7.1.2 osc release so that if the agreement is to roll forward then we have the release to roll to?11:02
sean-k-mooneyi would say prepare but maybe dont merge until we get feedback from the job using master11:04
sean-k-mooneyhttps://zuul.opendev.org/t/openstack/buildset/8f1b2e082a1a4a3e85334ca0dfa5a96f11:04
sean-k-mooneyhttps://zuul.opendev.org/t/openstack/build/3a9bc76be9d6488cadf5b5a14c3e0fae11:06
sean-k-mooneynova live migration passed11:06
sean-k-mooneyso the evacuate issue really does seem to be fixed now11:06
opendevreviewRiccardo Pittau proposed openstack/releases master: Release ironic-inspector 12.3.0 for dalmatian  https://review.opendev.org/c/openstack/releases/+/92976411:28
opendevreviewBalazs Gibizer proposed openstack/releases master: python-openstackclient 7.1.2  https://review.opendev.org/c/openstack/releases/+/92976511:28
gibiproposed 7.1.2 ^^11:29
opendevreviewElod Illes proposed openstack/releases master: Release cyborg RC1 for 2024.2 Dalmatian  https://review.opendev.org/c/openstack/releases/+/92852311:48
elodillesthanks gibi o/11:55
*** mnasiadka1 is now known as mnasiadka12:28
opendevreviewRiccardo Pittau proposed openstack/releases master: Release ironic-inspector 12.3.0 for dalmatian  https://review.opendev.org/c/openstack/releases/+/92976413:04
dansmithmy preference is to either pin 6.x or mega-revert 7.x. However, I'd say pinning 6 makes more sense to me.13:29
dansmithpinning 6 but releasing 7 tells people that we have only confirmed 6 but they could install 7 on a workstation to get new features if they work and/or 7 after enough backports are made and we lift a pin or something13:29
dansmithbut I understand that's complicated for distros13:29
bauzasyup, as I said earlier, I'm leaning in the same direction13:30
bauzaswe should pin dalmatian u-c to 6 so we could eventually release nova RC113:30
bauzasand once that's done, we can bump u-c, I'm fine13:31
bauzasbut for the moment, we're holding two regression fixes to be merged plus the procedural patches for paperwork13:31
bauzassorry, but while I understand that OSC folks prefer to have the latest in Dalmatian due to their efforts, the nova folks are themselves impacted even though they also worked hard to fix those regressions quickly13:32
elodillesrelease-team: this is waiting for a 2nd core review: python-openstackclient 7.1.2  https://review.opendev.org/c/openstack/releases/+/92976513:52
elodillesdansmith bauzas : i'm not happy about it, but if no other possibility exists to be able to release nova rc1 soon, then capping osc <7.0 is the option we should choose. (which we tried to avoid as much as possible in the past)13:55
dansmithelodilles: https://review.opendev.org/c/openstack/requirements/+/92955213:55
dansmithelodilles: yep, it's unfortunate for sure, but I think we've shown (faster than normal, and luckily before the users did) that this was not a good idea13:56
bauzaselodilles: I'm also a bit concerned by the behavioural change in OSC that defaults to the latest microversion13:56
dansmith("this" being a rushed roll forward)13:56
dansmithyeah, I haven't confirmed, but if that's the case, the risk is *way* higher than I even thought13:56
bauzasand I want us to buy time by only supporting 7.x in Epoxy, so we could fix bugs in the service projects13:56
bauzasif we allow 7.x to ship with Dalmatian, it's doable to fix bugs by backporting them, but that's an unnecessary hurry13:57
bauzasas I already said, this doesn't prevent users and distros from using 7.x with Dalmatian, provided OSC mentions the known bugs in their relnotes13:58
bauzas(which I never saw)13:58
bauzasbtw. I don't see any bug report for the evacuate parameter issue, I just uploaded a fix without a tracker and I dislike that13:59
dansmithalso reminder that nova is waiting to merge a pretty substantial bug fix that *has* to go in this release, and the longer we wait the less soak-time that gets as well,13:59
dansmithso we're really compounding the bad decisions here :/14:00
ttxalright, we are running out of time, and need to make a call15:04
ttxthere is no good solution but the least bad at this stage seems to be to pin to <715:04
dansmiththanks ttx15:09
dansmithelodilles: can you +W or are you waiting for more comments?15:10
ttxfungi elodilles frickler hberaud last call for objections15:10
elodillesttx: i don't object 15:12
elodillesi don't like it, but probably this is the easiest way-forward15:12
fungii suppose the main weirdness with that choice is that the version listed as part of the coordinated release isn't actually used in testing that release15:12
elodillesfungi: yepp15:12
ttxfungi: yes15:13
ttxnote that the pin is on specific versions, so a 7.1.2 would bypass it, we need to be careful with that15:13
ttxespecially with https://review.opendev.org/c/openstack/releases/+/929765 being proposed15:13
dansmithdoesn't the u-c prevent that though?15:14
ttxwe might want to turn it into a proper <7, but that can wait15:14
dansmithif not then yes let's just convert the g-r to <7 just to be safe and tweak later15:15
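
To illustrate the two options being weighed here, a rough sketch only (the exact entries are whatever lands in the linked requirements change):

    # upper-constraints.txt: roll the tested version back to the last known good release
    python-openstackclient===6.6.0

    # global-requirements, option 1 - exclusions: only skip the known-bad releases,
    # so a future 7.1.2 can be picked up without lifting anything
    python-openstackclient!=7.1.0,!=7.1.1

    # global-requirements, option 2 - hard cap: simpler, but has to be explicitly
    # removed before any 7.x can be used
    python-openstackclient<7.0.0
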
fungiso how does this factor into stable branch matching? there's a stable/2024.2 branch of python-openstackclient which was created from and includes the 7.1.1 tag15:16
fungii guess any fix for the existing problems would need to be backported to stable/2024.2 then?15:16
ttxgood question... will stable/2024.2 testing be able to find 6.6.1 while it's only on stable/2024.1 ?15:18
fungier, i guess it was branched from 7.1.0 technically, and then 7.1.1 was tagged only on stable/2024.215:18
ttxoh well we can merge that and find out, i guess15:19
ttxit's not as if the requirements change could not be reverted15:19
fungii suppose the state of stable/2024.2 could be bulk reverted back to what was in 6.6.1 and then tagged, but should probably only be tagged there as a patch version increase like 7.1.2 unless we can guarantee that any subsequent releases from master will still be higher15:20
elodilles(sorry, i need to commute home now (office day) but will look back later)15:20
fungialso release notes always get weird when trying to roll back versions like that, so has its own confusion to contend with15:21
ttxelodilles: would be good to approve it in the coming hours to unblock the nova RC115:21
fungii wonder if it would make sense to plan to backport fixes into stable/2024.2 asap after release and then try increasing the reqs pin for that branch to bring it into consistency with the coordinated release itself?15:23
ttxfungi: that would be tricky to reconcile with deliverable files... branches are cut from one of the release in the series15:23
fungiyeah, i don't mean changing the branch state, i mean merging a huge revert commit in stable/2024.2 as one of the options if we weren't going with the conflicting reqs pin15:24
fungier, didn't mean recreating the branch from a different tag15:25
fungithough i'm talking about too many possible alternative approaches in parallel right now15:25
ttxeh yes15:25
fungii think where my thoughts are headed at this point is: 1. go ahead with the proposed requirements pin for !7.1.0,!7.1.1 (or <7, whatever); 2. after the dust from the coordinated release settles, the maintainers merge regression fix backports into stable/2024.2 and request a 7.1.2 tag there, 3. remove the pin in requirements for stable/2024.2 and reconcile the constraints list to use15:29
fungithe newer point release so that what's tested is consistent with what the release page says is included15:29
fungiit's an exception to our processes, but no matter what we choose it will still be an exception and at least this one leads to a state of eventual consistency and won't be too much of a surprise if it's stated as a clear plan15:30
fungias it stands right now, the distros will probably end up shipping the newer osc version because that's what we say is part of 2024.2, but at least the window of time where that's not what we're actually using upstream would be fairly short15:32
fungithe latter part is also a plan we can check with the tc on, since it's not urgent to resolve15:36
clarkbseems like we should be able to communicate one way or another to distros that they should avoid those releases15:37
dansmithisn't that kinda the point of u-c?15:37
dansmithI mean, it's weird for us to exclude our own thing, but that's basically what we're trying to do here :)15:38
dansmithmaybe just some global reno to explain the situation?15:38
fungiwell, upper-constraints is for us to freeze the versions of our external dependencies we test with, not to serve as a recommendation for what versions of our dependencies should be packaged by distributions (because they often need to make compromises around shared dependencies with other software in their distribution)15:41
sean-k-mooneyfungi: it's kind of historically been both15:42
fungithat we also end up pinning our own internally-developed dependencies in constraints is more of a side effect, and one we work around by manually increasing them when we tag new versions15:42
sean-k-mooneyit's the known working baseline for a distribution to pull from15:42
fungiit's "this is what we tested with, if you use something else then we can't make any promises it works" yes15:42
fungion the other hand, the releases.openstack.org site lists what versions of the software our community has developed for each coordinated release15:43
fungiand for logistical reasons, it only rolls forward, we can't really rewind versions there15:43
fungimy point was, the versions of software developed by our community, if appearing in upper-constraints.txt, are an implementation detail and not a description of our coordinated release15:45
fungii don't expect distribution package maintainers to take what version of python-openstackclient appears in the stable/2024.2 branch upper-constraints.txt as an indication of which version they should package for openstack 2024.2/dalmatian, i expect them to go by what's in our release announcements and listed on the releases.openstack.org site15:46
sean-k-mooneywell i kind of would15:47
fungia conflict between the two is likely to cause confusion, but it's confusion we could plan now to resolve shortly after the coordinated release15:47
sean-k-mooneyit always annoys me when our downstream doesn't :)15:47
dansmithit sounds like we've got at least a high level approval.. can we please approve *some* pin so nova can land and cut rc1?16:02
* bauzas makes cat eyes16:06
bauzasfwiw, I don't want to hold OSC 7.1.216:08
bauzasif packagers want to ship 7.1.2 with Dalmatian, that will be their decision, not ours16:09
bauzasbut for the sake of the integrated gate, the pin is unfortunately the right way forward16:09
dansmithfungi: thanks16:11
fungisean-k-mooney: it's all well and good until there's a security vulnerability found affecting python-openstackclient v6 and we have no choice but to figure out how to patch that in stable/2024.2 without bringing back the regressions from v716:19
fungiwhich is why i think we need to reconcile it in stable soon after the release rather than just blithely keep it pinned and hope everything will be okay16:19
sean-k-mooneyi explicitly did not pin it to <7 to allow going to a future 7.x.y16:20
fungiyep, though if we did pin to <7 that can also be undone fairly trivially. it's more about making sure we're okay with rolling forward from 6.6.1 to something >7.1.1 in stable at some point16:21
sean-k-mooneyya so within the limits of our current testing both could be ok because the specific issues we hit are addressed on master16:22
fungiwe're not resolving the problem, we're just choosing to postpone the fix in stable a bit so we can get through the release push16:22
sean-k-mooneybut we know there are some open bugs on master not hit in the gate jobs16:22
sean-k-mooneywe are fully green with 6.6.0 https://review.opendev.org/c/openstack/nova/+/929758?tab=change-view-tab-header-zuul-results-summary and have one unrelated test failure with master osc https://review.opendev.org/c/openstack/nova/+/92975716:25
sean-k-mooneyif we can get RC1 of nova out this week we could go to 7.2.0 or whatever for osc next week and see if it's stable enough16:25
sean-k-mooneygoing forward we may want to consider actually merging https://review.opendev.org/c/openstack/devstack/+/929754 so that we test from master osc by default to avoid this, or have a tips job that runs in the weekly job16:27
sean-k-mooneyif we are going to treat osc like a service project we should be using it from master not from a release anyway16:28
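
For context, the usual devstack switch for that (a minimal sketch; the actual proposal is in the linked devstack review and may differ) is to add the client to LIBS_FROM_GIT so it is installed from its git checkout instead of the released, constrained version:

    [[local|localrc]]
    # install python-openstackclient from source (master by default)
    # rather than the release pinned in upper-constraints
    LIBS_FROM_GIT+=,python-openstackclient
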
clarkbsean-k-mooney: fwiw I don't think testing with master by default is the answer either16:29
clarkbbecause then you get into the trap of things only working with unreleased code. Ideally we check both things that development tip works as well as the latest release16:30
sean-k-mooneyya so part of the issue is the devstack functional job in osc16:30
sean-k-mooneyis not covering all the edgecases16:30
clarkbonce upon a time everything was tested from git master together and then we found out that nothing worked with the various lib releases and switched to the current default but the expectation was that testing against master would also continue16:30
sean-k-mooneywe could improve that and then have a single tips job in consuming projects16:31
sean-k-mooneyfor example nova-next could use it from master while the rest use it from main16:31
sean-k-mooney*from release, not main16:31
sean-k-mooneyclarkb: the question is whether the client is a lib or a standalone deliverable16:32
sean-k-mooneyif it's a lib then it was subject to the final release for client libs, which was m316:33
clarkbsean-k-mooney: right, that exact setup is what we expected people to do when we moved to pulling lib releases by default. And it happened for a while, but I think over time people forgot, it became less obvious, and now we have the old problem in reverse16:33
clarkbsean-k-mooney: functionally nova appears to be using it as a library16:33
sean-k-mooneyno, we are just using it as a client via an ansible hook16:34
clarkbsean-k-mooney: wait16:34
clarkbits just a test thing with an osc install in an ansible playbook?16:34
fungithere are some blurred lines between openstackclient and openstacksdk at this point16:34
clarkbor is it something that nova installs as part of a nova installation to make stuff work?16:35
sean-k-mooneyosc is installed by devstack and we have a post run playbook that test evacuate using the client16:35
clarkbbecause if it is ^ then it is functionally a lib16:35
fungiso in this case, a test dependency16:35
sean-k-mooneycorrect16:35
sean-k-mooneynova does not use osc at all16:35
sean-k-mooneyin the nova code16:35
clarkband we can't update the code in testing to do the right thing for new osc because new osc doesn't support the required functionality? Basically we can't fix the test because the required tooling was removed?16:36
sean-k-mooneyclarkb: the code in the test was correct16:37
sean-k-mooneyhttps://github.com/openstack/nova/blob/master/roles/run-evacuate-hook/files/test_evacuate.sh#L2816:37
fungiosc switched its nova interaction backend from novaclient to the combined sdk16:37
sean-k-mooneyopenstack client was broken16:37
fungiand there were regressions as a result16:37
clarkbya just double checking this wasn't an expected behavior change and thus the major version bump. But sounds like no, it is a proper bug so osc needs fixing16:38
fungiwhich went undiscovered in master osc because it doesn't exercise every function it supports16:38
fungiits testing doesn't exercise every function i mean16:38
sean-k-mooneyyep and it was, but we fixed it incorrectly because of reasons16:38
sean-k-mooneyso it's fixed again, correctly this time, but we have not released it yet16:38
sean-k-mooneyhowever in the last week there have been at least 4 different osc regressions reported16:39
sean-k-mooneyfungi: ya so one of the things we could do is make the devstack-functional job multinode and add in this hook or similar16:39
sean-k-mooneyor have some other jobs that will trigger on some changes.16:40
clarkbseems like we should focus on osc testing to improve coverage of use cases to avoid regressions when changing implementations. Then also in the future try to make those changes early in cycles and release them early so that we can ensure things work well before the big integrated release16:40
sean-k-mooneyi.e. if you change compute/*, run this extra job16:40
clarkbnot sure we necessarily need to stop releasing tools like osc after m3 but try to get big updates in before then16:40
sean-k-mooneythat's pretty heavyweight however because we dont really use the client much16:40
sean-k-mooneyclarkb: well we are not meant to, based on the release schedule16:40
sean-k-mooneywell i guess that depends16:41
sean-k-mooneyhttps://releases.openstack.org/dalmatian/schedule.html#final-release-for-client-libraries16:41
clarkbsean-k-mooney: sure. Neither is opendev but I haven't changed the default ansible versions to version 9 for openstack yet because I'm trying to avoid unexpected impacts to the release16:41
sean-k-mooneythat's why i asked whether it's a lib or a standalone deliverable16:41
clarkbsean-k-mooney: just because we aren't explicitly beholden to a schedule doesn't mean we can't make good decisions to try and make potentially disruptive updates as painless as possible16:42
sean-k-mooneyif it's a standalone deliverable (which i think it is) then that freeze does not apply16:42
clarkbright, but I think that is the wrong way of looking at it16:42
clarkbyou should still try to coordinate and work with the larger picture16:42
clarkbinfra/opendev have long been "slushy" around the openstack release16:43
clarkbwe don't actually freeze, but we ask people to take more care16:43
sean-k-mooneyright and we could also improve that by improving testing in the requirements repo16:43
sean-k-mooneyyou realise that we are trying to take the wider view, we tried to fix the issue proactively and get the release out to unblock things and roll forward to 7.x16:45
clarkbI'm not talking about any specific release. I'm merely referring to your question on whether or not you must freeze at m316:45
clarkbI'm suggesting that no a complete freeze is probably not necessary. But if we're planning big updates like replacing the nova backend for osc ideally we would do those before m3 that is all16:46
clarkbsimilar to how opendev does not make big changes to zuul job runtimes or gerrit releases around the openstack release16:46
clarkbwe make lots of other changes (we just upgraded etherpad for example)16:46
clarkbits all about risk and potential for fallout16:46
stephenfinclarkb: the issue is that we did. I'm writing a mail about this now, but OSC 7.0.0 was released at the beginning of August16:46
stephenfinbut the upper-constraint bump failed CI and never merged, and no one told us it didn't merge so we couldn't even prioritise a 7.0.116:47
clarkbgotcha16:47
sean-k-mooneyclarkb: it was released in august but not promoted until the 11th of september, 1 week ago16:47
sean-k-mooneyhttps://releases.openstack.org/dalmatian/schedule.html#final-release-for-client-libraries16:47
sean-k-mooneysorry wrong link16:48
sean-k-mooneyhttps://github.com/openstack/requirements/commit/59ddc5f862e46c065337467bad673f3d716dc77a16:48
stephenfinsean-k-mooney: worse: it never got promoted. 7.1.0 got promoted16:48
sean-k-mooneystephenfin: well yes but that was for other reasons16:48
clarkbbut also I'm not part of the release team. I'm just trying to offer my perspective as someone who does try to assess risk and apply updatse carefuly around the openstack release16:48
dansmithyeah I initially thought we needed a process change from this, but I think it's more just that multiple cascading fails caused us to get into a bad situation16:49
stephenfinclarkb: not to ruin the premise of my email too much, but this is *exactly* what happened with oslo.db and castellan in previous releases 16:49
dansmithI think the bigger potential process improvement we need is some gospel over how to handle a post-rc-week breaking change like this (i.e. what we're doing right now)16:49
clarkband maybe an explicit step in the release process that ensures we're up to date with all of our own deliverables before rc time?16:50
stephenfindansmith: Yes. But also, a failing u-c bump for one of our own dependencies should send alarm bells ringing16:50
fungii concur16:50
dansmithstephenfin: yes, fair point of improving whatever happened there as well16:50
sean-k-mooneyya but im not sure why that didn't16:50
sean-k-mooneywould that not have come up in the sdk/client team meeting16:51
stephenfinWe could have fixed this weeks ago and been sitting drinking (virgin) piña coladas right now, but alas 😅16:51
fungimaybe we can find a better way to plumb notice of such failures to the maintainers of whatever the dependency was that failed16:51
clarkbstephenfin: soju16:51
stephenfinNo emails, no IRC pings, nothing. I can't speak for gtema but I only became aware of the issue yesterday16:51
fungii'll take a virgin piña colada and a double rum, please16:52
stephenfinclarkb: touché :)16:52
fricklerstephenfin: the current issues were only discovered very recently, when 7.1.0 was released and could be bumped in u-c. the issues with 7.0.0 were discovered early in august and I discussed them with you and you fixed those, like the "-c ID" vs. "-c id" issue.17:35
stephenfinfrickler: Yes, I knew there were issues and we worked to fix those quickly (I'm pretty sure I fixed it the same day). What I did not know was that that issue was preventing a u-c bump17:36
stephenfinfrickler: tbc, I'm not blaming you or anyone else. The failure lies with our processes, not our people17:37
fricklerstephenfin: sure. I just checked logs and I linked to https://zuul.opendev.org/t/openstack/build/ce0ea76a7c394b35aaeec5ee02d9914d back then, but I wasn't explicit about being a reqs blocker. I'm also not sure what processes could get improved. what I do think is both the sdks and the reqs teams could use more help and this is what shows here17:40
stephenfinAgreed. I would really like it if the Nova team was as involved with SDK/OSC as many other services are. We frequently get both patches and reviews from Manila, Cinder, Neutron etc. However, in this instance, something as simple as e.g. an IRC bot alerting us to these failures would likely have avoided much of this17:45
stephenfinAlso, saying the reqs team could do with more help is a definite understatement. But hopefully at least I personally have pulled my weight (and continue to pull my weight) there17:47
*** bauzas_ is now known as bauzas19:25
sean-k-mooneystephenfin: im not going to revert your patch but i really think it was incorrect to go with <7 and inconsistent with our previous stance of putting hard caps in only as a last resort23:14
sean-k-mooneyim also sad that that patch was merged basically with no discussion while the patch that actually unblocked the gate took a week to merge. im not casting blame here, just expressing my frustration as im trying to manage my personal burnout and this really has not helped23:17
*** bauzas_ is now known as bauzas23:52

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!