Friday, 2024-07-05

*** bauzas_ is now known as bauzas03:23
*** bauzas_ is now known as bauzas04:37
fricklernova-next failed with two unrelated issues in tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps for https://review.opendev.org/c/openstack/nova/+/923284 , I'm now going to resubmit that stack into gate directly06:54
fricklerbauzas: sean-k-mooney: there are some failures on the 2023.1 stack, also these still need reviews so they can be gated once the 2023.2 ones go in (in 2+ hours if no more gate issues happen) https://review.opendev.org/q/project:openstack/nova+branch:stable/2023.1+status:open+topic:%22bug/2059809%2208:05
fricklerand nova-grenade-multinode failing on https://review.opendev.org/c/openstack/nova/+/923286 :-(08:29
fricklerbauzas: sean-k-mooney: gibi: please check ^^ and sorry to keep nagging you, but I think it would be really great to get the remainder of the stack merged before the weekend09:24
opendevreviewMerged openstack/nova stable/2023.2: Reject qcow files with data-file attributes  https://review.opendev.org/c/openstack/nova/+/92328409:24
opendevreviewMerged openstack/nova stable/2023.2: Check images with format_inspector for safety  https://review.opendev.org/c/openstack/nova/+/92328509:24
fricklerI've resubmitted the two remaining 2023.2 patches into gate now, but please check those errors later, just in case09:34
fricklerdansmith: ^^ fyi09:34
sean-k-mooneyfrickler: yep this is still our priority today09:53
sean-k-mooneyim just going through my morning email routine but ill take a look at the backport before i go back to the iso regression09:55
sean-k-mooneyi asked gibi to respin the backport yesterday for the ami/ari/aki format regression so it looks like that has been done back to 2023.1 correctly09:56
fricklersean-k-mooney: yes, the backports look fine, but 2023.1 had some CI failures and is also missing approvals09:59
stephenfinsean-k-mooney: Seeing as it's the last quiet day before ze Americans return, any chance of some further reviews on the OpenAPI series so that can keep ticking forward? https://review.opendev.org/c/openstack/nova/+/915738/10:13
sean-k-mooneyill see how things go with the cve stuff but i may have a chance to look at some of them later today. if not then ill look at them on monday10:15
stephenfinack, ty!10:16
sean-k-mooneyok downstream email and pings sorted10:24
sean-k-mooneyfrickler: anywhere i should start with the backports10:25
sean-k-mooneyor will i just work my way from top to bottom10:25
fricklersean-k-mooney: well bottom to top I'd say in how I look at the stack (and gerrit UI seems to agree)10:27
sean-k-mooney i meant from newest to oldest branch10:27
sean-k-mooneybut yes bottom to top within the chain10:27
sean-k-mooneyi was just checking where we are in the different branches https://review.opendev.org/q/topic:%22bug/2059809%22+project:openstack/nova10:27
sean-k-mooneylooks like 2023.2 is almost done10:28
sean-k-mooneyso 2023.1 is next10:28
frickleryes, only 2023.1 open, so starting at https://review.opendev.org/c/openstack/nova/+/923288/2 should be fine10:28
fricklerI'll ignore unmaintained branches fwiw10:29
sean-k-mooneymnaser: may have a vested interest :) they have the zed backports up10:30
sean-k-mooneybut i cant approve them there anyway so ill stick to stable10:31
sean-k-mooneyon a lighter note this is the before https://fileshare.seanmooney.info/average_tempest_runtime_for_tempest-integrated-compute_by_provider.png and after https://fileshare.seanmooney.info/average_tempest_runtime_for_tempest-integrated-compute_by_provider_2024-07-03.png for turning on the devstack optimisations. both graphs are built from 600-700 ish successful job runs over10:37
the preceding 2 weeks10:37
sean-k-mooneynew runs are faster but still not quite enough to account for the job to job variance sometimes causing timeouts on slow nodes but we went from about a 10% timeout rate to around 5% and hopefully that is still dropping10:38
fricklersean-k-mooney: do you also want to approve the stack or do you think you need to wait for another review? just for comparison melwitt single-approved the 2023.2 ones10:40
sean-k-mooneyfrickler: ill approve it once i get to the top. im going to +w all of the rest and then ill +w the bottom patch10:40
sean-k-mooneyim hoping the 2 that are pending in 2023.2 will merge before then so that the backport check does not kick the final patches out of gate10:41
fricklerdoes that check work on the speculative state in zuul or on the actual repos? the former should be fine as long as the other patches are ahead in the queue10:42
sean-k-mooneyactual10:42
sean-k-mooneyalthough if you have stable-only in the commit i think it skips it10:43
sean-k-mooneythats how we generally work around it when we need to fast merge10:43
sean-k-mooneywe mark it stable-only even when its not10:43
fricklerhmm, ok, then let's wait for the 2023.2 patches, then I can simply promote the whole 2023.1 stack once it is approved and don't need to do more reshuffling10:44
sean-k-mooneyok in that case ill leave +w off the base until those merge then we can send the full set10:45
fricklerack10:45
sean-k-mooneylooks like there is about 30 left on the 2023.2 gate jobs so ill keep an eye on them10:59
sean-k-mooneyok coffee then back to the iso regression11:00
sean-k-mooneypart of me does not like releasing the cve fixes without that as its a pretty common use case but im hoping we can have that fixed properly today or very early next week11:00
fricklerI wouldn't worry about a release too much at this stage, as most people are running with the provisional patches currently anyway and might switch to head of master after today, but getting the iso issue fixed soon certainly will be a good idea11:15
fricklerhumm, nova-multi-cell failing on https://review.opendev.org/c/openstack/nova/+/92328711:16
sean-k-mooneypersonally im not sure i would deploy this in production without the iso fixes but ya11:16
sean-k-mooneyannoying but lets see why11:17
sean-k-mooneyits one of the volume tests but not one of the normal kernel panics or similar11:17
frickleryou would rather run your cloud vulnerable?11:18
fricklersecurity group cleanup failure https://zuul.opendev.org/t/openstack/build/1f6d9308d0bb46999682ee43e856ee1911:18
sean-k-mooneythan break all usage of iso in nova (and ami images if you dont have the stable version of the patches)11:19
sean-k-mooneyit would depend on what my customers are using11:19
sean-k-mooneyand if its a public or private cloud11:19
sean-k-mooneyfor a private cloud with semi trusted workloads or images like our internal ci cloud i would wait11:20
sean-k-mooneyif i was vexhost or ovh i probably would not11:20
sean-k-mooneyfrickler: anyway looking at the failure i dont think its related so we can recheck when the job reports back11:21
fricklerI wouldn't even recheck but promote into gate again immediately, just behind the 2023.1 stack11:23
sean-k-mooneyfair, that works for me too11:24
opendevreviewMerged openstack/nova stable/2023.2: Additional qemu safety checking on base images  https://review.opendev.org/c/openstack/nova/+/92328611:35
fricklersean-k-mooney: ready to approve 2023.1 ^^11:36
sean-k-mooneydont we still need to wait for https://review.opendev.org/c/openstack/nova/+/923287/311:36
fricklersean-k-mooney: no, that is the one that failed, I'm re-enqueueing it now11:37
sean-k-mooneyi can approve and most of them will merge but i think its important to cover the vmdk part too11:37
sean-k-mooneyoh ok11:37
sean-k-mooneyill send them but we might need to handle that on 2023.1 separately11:39
sean-k-mooneywe will see what the timing looks like11:39
fricklersean-k-mooney: ah, right, maybe drop the W+1 from 923291 for now, then11:40
sean-k-mooneysure done11:41
fricklerthx, promoted all the other changes now, we'll see more in 2h hopefully11:44
sean-k-mooneylet me know if i can help with anything else, ill check on them perodicly11:45
fricklerack11:50
fungilooks like https://review.opendev.org/c/openstack/nova/+/923291 got unapproved at some point, are we ready to reapprove it?12:45
fungioh, though it's not explicitly listed in the ossa so i guess it's not as urgent12:47
sean-k-mooneyi unapproved it because the 2023.2 version failed and it would fail if it went to the gate on the backport check12:47
sean-k-mooneyfungi: so ill reapprove it when https://review.opendev.org/c/openstack/nova/+/923287 is merged12:48
fungihopefully the backport check wouldn't fail when the 2023.2 was ahead of it in the gate unless that test isn't properly speculative, but it seems there's no harm in waiting on that one either12:48
sean-k-mooneyit will12:49
sean-k-mooneyits intentionally not speculative12:49
fungiwhat's the reasoning for that?12:49
sean-k-mooneythe reason it exists is to make sure we never merge something in an older branch if its not merged in the newer branch12:49
sean-k-mooneywe can override it by marking the change stable-only12:50
sean-k-mooneywhich is what we normally do for CVEs but didnt this time12:50
fungiyeah, but making it check zuul's speculative state would still accomplish that, because it would get retested if the newer branch's version gets kicked out for any reason and then would fail12:50
sean-k-mooneyfungi: if you know how to update the script to do that feel free to propose it :)12:51
fungii don't even know what job you're talking about, but if you have a pointer to the script i can take a look12:51
sean-k-mooneyhttps://github.com/openstack/nova/blob/master/tools/check-cherry-picks.sh12:51
fungiit's likely just a matter of making the script do less than it does now12:51
sean-k-mooneythis also needs to work locally12:52
sean-k-mooneythe end goal of the script is to make sure we dont regress going from 2023.1 to 2023.2 etc12:54
fungii guess i'll need to look at the context where the script is run, because i don't see it doing anything that should inherently break speculation, but if some earlier step is checking out from the origin remote instead of the current branch state that would break it12:54
sean-k-mooneyit runs in check as non-voting and in gate as voting12:55
fungii mean the job that runs that script12:56
sean-k-mooneyfungi: from my perspective not doing speculative execution is safer and more correct12:56
sean-k-mooneyso i dont consider this to be a bug12:56
fungii don't see it, but not my circus, not my monkeys ;)12:56
sean-k-mooneyits called nova-tox-validate-backport12:57
sean-k-mooneyhttps://zuul.opendev.org/t/openstack/build/a6be643734b44f139bca7b4eb896381412:57
sean-k-mooneyCherry pick hash 11301e7e3f0d81a3368632f90608e30d9c647111 not on any master or stable branches12:57
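For readers without the script open: tools/check-cherry-picks.sh roughly walks the "(cherry picked from commit ...)" trailers of the commit under test and fails if any referenced hash is not reachable from master or a stable branch, which is where the error above comes from. The Python below is only an illustrative rendering of that check (not the actual script, and details may differ); the point relevant to the discussion is that it consults nothing but the locally prepared repository, so whatever state zuul merged into those branches is what it would see.

    # Illustrative rendering of the backport check; the real logic lives in
    # tools/check-cherry-picks.sh and may differ in detail.
    import re
    import subprocess

    def cherry_pick_hashes(ref="HEAD"):
        # collect "(cherry picked from commit <sha>)" trailers from the commit
        msg = subprocess.check_output(
            ["git", "show", "--no-patch", "--format=%B", ref], text=True)
        return re.findall(r"cherry picked from commit ([0-9a-f]{40})", msg)

    def on_master_or_stable(sha):
        # only local refs are consulted, so a zuul-prepared repo carrying a
        # speculative future state satisfies this just like a real merge would
        try:
            out = subprocess.check_output(
                ["git", "branch", "-a", "--contains", sha], text=True)
        except subprocess.CalledProcessError:
            return False  # hash not present in the repo at all
        branches = [b.strip("* ").replace("remotes/origin/", "")
                    for b in out.splitlines()]
        return any(b == "master" or b.startswith("stable/") for b in branches)

    missing = [h for h in cherry_pick_hashes() if not on_master_or_stable(h)]
    for sha in missing:
        print(f"Cherry pick hash {sha} not on any master or stable branches")
    raise SystemExit(1 if missing else 0)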
fungiif you want backports to be delayed from being enqueued until other changes merge, and slow down throughput as a result even though zuul can ensure the correctness of the thing you actually want without that additional inefficiency, i'm not going to argue12:57
sean-k-mooneyfungi: can you explain how zuul can ensure that a patch is merged on all newer stable branches before it will allow it to merge on the older ones12:58
fungiyes, speculative testing12:58
sean-k-mooneyfungi: because as far as im aware it cant. it would be incorrect to use depends-on12:58
sean-k-mooneyand it does not have awareness of the dependency in any other way12:59
fungizuul assumes the newer branch backport ahead of it will merge, but if it doesn't then testing gets rerun without that change present in the queue and the job will fail12:59
sean-k-mooneythats only true if they are both in the queue at the same time12:59
sean-k-mooneythat is not normally the case12:59
fungiyes, if they're not in the queue at the same time then the job will just fail when that change isn't already merged, or succeed if it is13:00
sean-k-mooneywhy would it fail13:00
fungizuul is presenting all the branches to the job with a speculative future state, but if that speculative state changes then the presumed future is invalidated and a new possible future is constructed and tests are rerun13:01
sean-k-mooneyfungi: what you're stating does not fit with my mental model of how zuul works and im pretty sure its not correct13:01
sean-k-mooneyfungi: how would zuul know about the precondition13:01
sean-k-mooneyits not expressed in the normal jobs or via depends-on13:02
fungizuul assumes that changes in the same queue share a possible future13:02
sean-k-mooneywe wrote this script explicitly to cover the fact that zuul did not block this13:02
sean-k-mooneyright, again we cant rely on that for this case13:02
sean-k-mooneybackporting the same patch across branches and ensuring it merges in reverse chronological order13:02
fungiyes, i'm not saying the job is unnecessary, i'm saying the job should be able to use the speculative future zuul has constructed instead of blocking on the actual present state of the repositories13:02
fungibecause if that future changes the job will be re-run13:03
sean-k-mooneyit would work if zuul speculatively merged it into the relevant stable branch in the git repo it prepared13:03
fungiit does13:03
sean-k-mooneywell the job still fails so apparently not. or there is a bug in our script13:04
fungiit prepares all branches of any required-project in the job (implicitly including the project that triggered the job)13:04
fungiif the script is using the locally prepared repository state without resetting any branches and without querying remote git states then it should see the prepared speculative future13:05
sean-k-mooneywe are not downloading a separate repo we are running the script on the repo prepared by zuul13:05
sean-k-mooneyit is https://github.com/openstack/nova/blob/master/tools/check-cherry-picks.sh13:05
sean-k-mooneywe are doing read only operations on the repo in the script13:05
fungiyeah, i agree i'm not immediately seeing why it wouldn't work13:06
fungii'll enqueue it and test the assumption that it can't pass13:06
fungioh, never mind, i can't enqueue it if it's not approved13:07
sean-k-mooneyok i can re add +w13:07
sean-k-mooneyit failed in check which is why i expect it to fail in gate13:07
sean-k-mooneybut maybe that would pass in this specific case13:07
fungiwell, it failed in check because the check pipeline lacks the speculative state of the gate pipeline13:07
sean-k-mooneybecause of the queueing and the fact gate is a dependent pipeline13:08
fungiif you did a depends-on in check it would likely pass there13:08
fungiby creating the same speculative future13:08
sean-k-mooneyit would but the other jobs would not be happy with zuul doing a merge of two branches of the same repo13:08
fungiwhy? it still merges them to different branches13:09
fungiwe do this all the time to test upgrade jobs like grenade13:09
sean-k-mooneywell its the same commit so would that not be a merge conflict 13:09
sean-k-mooneyi know you can do depends-on in the same repo13:09
fungichanges on different branches aren't going to merge conflict with one another13:09
sean-k-mooneyjust never had it work in the same repo for the same commit from two branches13:10
fungithe dependency will be merged to the stable/2024.1 branch, the current change will be merged to the stable/2023.2 branch13:10
sean-k-mooneydid that change at some point13:10
fungithey don't get combined into the same branch together, so no merge conflict13:10
sean-k-mooneygood to know but i thought they would both be merged into the branch under test13:11
fungizuul has supported cross-branch dependencies this way since ~201413:11
sean-k-mooneyfungi: i reapproved https://review.opendev.org/c/openstack/nova/+/923291 so you can try and promote it if you want13:11
fungisupporting dependencies across branches was a major design requirement because of grenade13:11
fungiit'll get enqueued automatically in the current state anyway13:12
fungiwhich it just did13:12
sean-k-mooneyok im pretty sure ive hit edge cases with that but it may have been related to something other than the merger13:12
funginow to see if nova-tox-validate-backport passes13:13
sean-k-mooneyif it does that will be good to know13:13
fungiyeah, if it does pass then that means in theory it should also be able to pass in check with a depends-on to the version on the newer branch13:15
fungiif it fails then i should be able to trace through the steps in the job to see where assumptions about the constructed future state on the other branch are getting invalidated13:16
fungimulti-branch speculative states are something we've had for ages because it's necessary for making sure that grenade jobs run in the gate pipeline reflect the end state when the commits on different branches later merge, so we don't land changes on two different branches that then break upgrade tests when used together13:20
fungivalidate-backport: OK13:23
fungilooks like it's going to succeed13:23
fungiyep, just went green13:23
fungiso the job is actually correctly designed to support this, there's not an actual need to wait for the other change to merge before you approve13:24
fungiand using depends-on in cases like this would have two benefits: 1. you'll get an actual passing result (you don't even necessarily have to keep the job non-voting in check if you don't want to), and 2. zuul will ensure that the changes can't get enqueued into the gate pipeline in the wrong order so you can just approve them in any order you like13:25
fungithe only caveat i'm aware of is that you need to be careful you don't inadvertently create dependency loops when you do that, but the same can be said for any use of depends-on13:27
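Concretely, the cross-branch dependency fungi is describing is just a Depends-On footer in the Gerrit commit message of the older-branch backport, pointing at the review URL of its counterpart on the newer branch. Using change numbers from the stack in this log (the subject/sha below are placeholders, not copied from the actual change), the stable/2023.1 backport would carry something like:

    Fix vmdk_allowed_types checking

    ...

    (cherry picked from commit <sha of the newer-branch change>)
    Depends-On: https://review.opendev.org/c/openstack/nova/+/923287

With that footer in place, nova-tox-validate-backport can build the right speculative state in check as well, and per fungi's point zuul will not let the changes be enqueued into gate in the wrong order.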
fricklerok, looks like I could reenqueue the deferred patches, then13:40
dansmithfrickler: sean-k-mooney: FWIW, I'd much rather have broken iso support and angry users than have this exposure.. I expect images to start showing up on download sites that look to contain something awesome, but are just the exploit13:44
fungiin cases like this where there are series of changes for each branch, i'd probably conservatively just set depends-on in the commit message of the first change in the series pointing to the change url for the last change in the series on the newer branch13:44
dansmithsomething a user of a private cloud might download and then upload into the internal glance and then ..boom13:44
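For anyone reading along without the patches in front of them: the attack surface here is image metadata that points at other files on the host, for example a qcow2 header carrying an external data-file reference, which is what the just-merged "Reject qcow files with data-file attributes" change refuses. The sketch below is only a spec-level illustration of how such header flags can be spotted; it is not nova's format_inspector code, and the offsets come from the published qcow2 header layout.

    # Spec-level illustration only; nova's real checks live in its format inspector.
    import struct

    QCOW2_MAGIC = b"QFI\xfb"
    EXTERNAL_DATA_FILE = 1 << 2   # qcow2 v3 incompatible-features bit 2

    def qcow2_red_flags(path):
        flags = []
        with open(path, "rb") as f:
            header = f.read(104)            # enough for the v3 fixed header
        if len(header) < 72 or header[:4] != QCOW2_MAGIC:
            return flags                    # not a plausible qcow2 file
        version = struct.unpack(">I", header[4:8])[0]
        backing_file_offset = struct.unpack(">Q", header[8:16])[0]
        if backing_file_offset:
            flags.append("backing file reference")
        if version >= 3 and len(header) >= 80:
            incompatible = struct.unpack(">Q", header[72:80])[0]
            if incompatible & EXTERNAL_DATA_FILE:
                flags.append("external data-file reference")
        return flags

    # e.g. an upload path could refuse any image reporting red flags:
    # if qcow2_red_flags(uploaded_path): reject(uploaded_path)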
fricklerhumm, third nova-multi-cell failure on https://review.opendev.org/c/openstack/nova/+/923287 in a row, looks like something more serious13:44
dansmithfrickler: why serious? one fail..13:46
dansmithlooks like failure to talk to metadata server,13:46
dansmithwhich could be the same sort of thing as prevents us from SSHing to the guest13:47
dansmithoh actually,13:47
dansmithwe're doing it from inside the guest, so "ssh guest curl metadata"13:47
fungiso could be exactly the same problem in that case13:48
opendevreviewMerged openstack/nova stable/2023.1: Reject qcow files with data-file attributes  https://review.opendev.org/c/openstack/nova/+/92328813:48
fricklerok, I miscounted, twice in a row still https://zuul.opendev.org/t/openstack/build/51577af322ee4809889698bde352efac13:48
frickleractually https://review.opendev.org/923289 is going to fail, too, stuck in devstack not finishing, waiting for the timeout to strike13:49
fricklergoing to do some gate reshuffling to override13:50
fungiseems like a good opportunity13:52
opendevreviewsean mooney proposed openstack/nova master: [WIP] add iso file format inspector  https://review.opendev.org/c/openstack/nova/+/92357313:55
sean-k-mooneyoh i messed up that commit message13:57
sean-k-mooneylet me abandon the new review13:57
opendevreviewsean mooney proposed openstack/nova master: [WIP] add iso file format inspector  https://review.opendev.org/c/openstack/nova/+/92353313:57
*** bauzas_ is now known as bauzas14:30
fricklerand another gate failure in the cve stack https://zuul.opendev.org/t/openstack/build/90c78eec9270430a8fcaad8652348d0c14:54
dansmithnova.exception.InternalError: Unexpected vif_type=binding_failed14:57
dansmithI assume that means some neutron fail14:57
dansmithsean-k-mooney: ^14:57
dansmith(frickler that was from the gate fail you quoted, in case it's not clear)14:58
frickleryes, it is well possible that it is all unrelated and just unstable tests, I just wanted someone to check it15:01
dansmithyep, definitely doesn't look related15:03
*** bauzas_ is now known as bauzas15:17
fungidid 923288 get reenqueued into the gate accidentally? it's already merged15:19
fungiif we're reshuffling for the current failures anyway, should probably eject that one at the same time15:20
dansmithfungi: maybe someone wanted to be really really sure it landed?15:21
fungidouble-merged!15:21
fungia change so nice we'll merge it twice15:21
dansmiththere were definitely things in check and forced into gate at the same time yesterday, perhaps a zuul bug or race?15:21
fricklerhmm, according to my bash history, I didn't reenqueue 923288, but zuul has been doing some additional requeues of patches on top of others15:24
fricklermaybe a race condition when I did 923289 and zuul wasn't aware yet of 923288 having been merged15:25
funginormally only if they're not yet merged and meet criteria to be enqueued and depend on a change you've enqueued (either explicitly via depends-on or implicitly through git commit parentage)15:25
fungibut yeah, can't rule out bugs either15:26
fricklerbut likely if that buildset passes, it will result in a merge failure and cause a gate reset anyway, so we can as well abandon it now IMO15:26
fungii agree we should kick it out, i'd give 50/50 odds on whether it would actually cause a gate reset or just no-op (not sure what gerrit's behavior is if you call the submit api on an already-merged change)15:27
fricklerand I'm reaching EOD quickly (time to watch some soccer), so if you could take over, fungi, that'd be nice15:27
fungiglad to!15:27
fungienjoy the fußball!15:28
fricklerthere's also a new set of deferred patches in https://etherpad.opendev.org/p/cve-gate-reshuffling in case the gate gets less crowded, else I'll handle these tomorrow15:28
fungithanks, i saw that too when i was looking back at the pad to see if i had maybe been the one to accidentally reenqueue the already-merged change15:29
opendevreviewsean mooney proposed openstack/nova master: add iso file format inspector  https://review.opendev.org/c/openstack/nova/+/92353315:29
sean-k-mooneyfrickler: is that the same one i looked at earlier15:30
sean-k-mooneyi commented on the binding failed failure and rechecked one of the jobs for it when i was reviewing 2023.115:31
sean-k-mooneyhttps://review.opendev.org/c/openstack/nova/+/923289/2#message-348bc1fded02d0ec6cd19807bcc2aec0245f606f15:31
fungiokay, so going to reshuffle the gate putting these first: 923287,3 923289,2 923290,2 923291,3 (and drop 923288,2 since it's already merged)15:32
fungiany other requests while i'm causing a full gate reset anyway?15:33
fungitaking your silence as a resounding "no"15:35
*** bauzas_ is now known as bauzas15:38
mnaserhas anyone pushed/proposed the idea of rack-level failure domains in openstack?16:51
mnaserI know a cloud is a cloud, but VMware people really feel like they see the need to get there16:51
sean-k-mooneyfailure domains are really not a thing in openstack16:52
JayFFYI; I know it's pretty low on you all's list right now, but the fix for the un-upgraded rlocks in eventlet for py3.12 is in the next release16:52
sean-k-mooneythe only way to really do that is using separate keystone regions and installs of all the services per rack16:53
sean-k-mooneymnaser: looking only at nova, neither cells nor AZs map to failure domains16:53
sean-k-mooneyi.e. if the host running your scheduler or api dies that affects all hosts in that region16:54
sean-k-mooneyneutron also spans all cells/AZs16:54
fungipretty sure the same can be said for the host running your vcenter api in a vmware environment16:54
sean-k-mooneyyep so it depends on your definition of failure domains and if you're including the control plane in that16:55
sean-k-mooneyi.e. workload failure domains vs network vs storage vs control plane16:55
sean-k-mooneycinder backends are storage level failure domains16:55
sean-k-mooneyof course that is only true if you access them over separate storage networks16:56
fungior have some sort of high availability for the storage network16:57
sean-k-mooneyyep16:57
mnasersean-k-mooney: workload failure domains16:57
sean-k-mooneymlag or similar with two different top of rack switches etc16:57
mnaserim thinking a scheduler hint maybe of some sorts + a filter?16:57
fungilike cross-chassis lag16:57
sean-k-mooneymnaser: we briefly discussed having something like cephs cluster map16:58
mnaseryeah16:58
sean-k-mooneybasically building a tree of host aggregates and grouping things in a hierarchy then allowing affinity/anti-affinity etc16:59
mnaserexactly16:59
mnaserim thinking server group + an extra scheduler hint other than `group` to get that16:59
mnaserbut im wondering if im walking a path someone tried before and I can pick up work from or16:59
sean-k-mooneywe said it was probably too complex and out of scope of nova but that was also like 6 years ago so things have changed somewhat16:59
sean-k-mooneythe last time we really talked about this properly was pre/start of the pandemic17:00
sean-k-mooneymnaser: there is a request to do az anti-affinity this/last cycle17:00
sean-k-mooneyin general i would not be against this in principle. it just would not be high on my list of things to do next17:01
mnaserworking in this case, it feels odd to have 1 az per rack17:01
mnaserlike that feels like its brute forcing the az feature if you're on rack level17:01
sean-k-mooneyaz's are just named aggregates17:01
mnaseryeah but I feel like in the perspective of a consumer of the cloud they might be like "oh isn't this like a azure/aws/etc az?"17:02
sean-k-mooneyand the answer to that is hell no17:03
sean-k-mooneybut i get that perspective17:03
sean-k-mooneyso what i would say is we could and might want to make nova and other services better able to support that use case17:03
mnasercause like maybe it would be nice to be like.. this is an az .. but also you can have different fault domains17:03
sean-k-mooneybut i think we would need some feature in placement to do it efficiently17:04
sean-k-mooneydoing it in a filter or weigher would likely be slow and maybe buggy17:04
sean-k-mooneyim not saying we could not do it in nova alone but we would need to be careful with how we design it17:05
fungiin retrospect it was probably a mistake to call them availability zones when the implementation wasn't equivalent to aws's az concept, because we've had to fight that confusion ever since, but it'd be hard to change now (i still refer to projects as tenants, after all)17:05
sean-k-mooneynova still has instance_type and flavor depending on what your looking at17:06
fungiindeed17:06
sean-k-mooneyso i think an aggregate groups feature is definitely doable and i like cephs model17:07
fungiwe just missed rounding out the suite of renames with "color" and "texture" and maybe "odor"17:07
sean-k-mooneyi just feel like cramming that into the existing filters/weighers might be suboptimal17:07
sean-k-mooneyim sure if we renamed flavors to Terroir it would confuse no one17:09
mnasersean-k-mooney: I think the path of least resistance is a filter that checks if there is a server group assigned, and a scheduler hint called `different_failure_domain` .. and that goes over host aggregates which have a `failure_domain` attribute and ensures that the hosts in the server group dont end up in that same host aggregate17:23
sean-k-mooneymnaser: i really dont like scheduler hints17:24
sean-k-mooneybut thats an option17:24
mnasersean-k-mooney: could also use `rules` for server groups to have `different_failure_domain` too17:25
opendevreviewDan Smith proposed openstack/nova master: Reproduce iso regression with deep format inspection  https://review.opendev.org/c/openstack/nova/+/92350717:25
opendevreviewDan Smith proposed openstack/nova master: Add iso file format inspector  https://review.opendev.org/c/openstack/nova/+/92353317:25
mnaserbut that would mean an api change 17:25
sean-k-mooneyserver group/instance aware filters are some of the most ram/compute intensive you can create17:25
mnaseryeah17:27
mnaserthat is for sure going to be a busy one17:27
mnaserim not sure if there's another approach17:28
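As a concrete rendering of the approach mnaser outlines above, here is a standalone sketch of the placement decision, deliberately kept outside nova's real scheduler-filter plumbing: a candidate host passes only if no host already in the server group sits in an aggregate with the same failure_domain value. All names, the failure_domain aggregate key, and the data shapes are illustrative assumptions, not an existing nova API.

    # Standalone sketch; not wired into nova's scheduler, all names illustrative.

    def failure_domain(host, host_aggregates):
        # failure_domain metadata of the first aggregate on the host that sets it
        for agg in host_aggregates.get(host, []):
            fd = agg.get("metadata", {}).get("failure_domain")
            if fd:
                return fd
        return None

    def host_passes(candidate, group_hosts, host_aggregates):
        # hosts with no failure_domain tag always pass in this toy version
        used = {failure_domain(h, host_aggregates) for h in group_hosts}
        used.discard(None)
        return failure_domain(candidate, host_aggregates) not in used

    # two racks modelled as host aggregates carrying a failure_domain attribute
    aggs = {
        "compute-1": [{"name": "rack-a", "metadata": {"failure_domain": "rack-a"}}],
        "compute-2": [{"name": "rack-a", "metadata": {"failure_domain": "rack-a"}}],
        "compute-3": [{"name": "rack-b", "metadata": {"failure_domain": "rack-b"}}],
    }
    group = ["compute-1"]                         # already placed in rack-a
    print(host_passes("compute-2", group, aggs))  # False: same rack
    print(host_passes("compute-3", group, aggs))  # True: different rack

The awkward part in a real filter is that it only sees one candidate host at a time, so the group's existing members would still have to be mapped to their aggregates on every request, which lines up with sean-k-mooney's point about such filters being expensive and placement being the better home for it.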
*** bauzas_ is now known as bauzas17:39
fungi923287 failed out on nova-next and nova-multi-cell: https://zuul.opendev.org/t/openstack/buildset/0a50e0745e3e4759bf743b39208ed1dc17:43
fungii've reenqueued it and repromoted17:44
opendevreviewDan Smith proposed openstack/nova master: Add iso file format inspector  https://review.opendev.org/c/openstack/nova/+/92353318:02
dansmithfungi: confirmed at least one of those was a guest kernel panic so obviously unrelated18:27
fungiexciting!18:28
fungiand good to know, thanks for taking a look18:28
dansmithnot really, we see guest kernel panics way too often :(18:28
fungiyeah, my excitement was understated sarcasm really, sorry18:28
dansmithheh, okay18:28
fungii've about worn out all other tools at my disposal, so sarcasm is what's left at this point18:29
dansmithyeah, you don't have to apologize to me for that.. I'm beyond exhausted.. I dunno what's after that before death, maybe "acceptance" ?18:30
opendevreviewPavlo Shchelokovskyy proposed openstack/nova master: Fix device type when booting from ISO image  https://review.opendev.org/c/openstack/nova/+/90961118:51
opendevreviewDan Smith proposed openstack/nova master: Add iso file format inspector  https://review.opendev.org/c/openstack/nova/+/92353319:33
opendevreviewMerged openstack/nova stable/2023.2: Fix vmdk_allowed_types checking  https://review.opendev.org/c/openstack/nova/+/92328719:54
opendevreviewMerged openstack/nova stable/2023.1: Check images with format_inspector for safety  https://review.opendev.org/c/openstack/nova/+/92328919:55
fungi923290 hit a package install error, and that knocked out 923291 as a child commit. i'll get them both back into play but they're the only changes left from the ossa20:11
fungiwaiting to promote them to the front since there's a stack of 5 succeeding changes that are due to merge in the next 10 minutes20:13
fungibut once those land i'll shuffle them up20:14
fungiand done20:27
*** bauzas_ is now known as bauzas20:44
opendevreviewMerged openstack/nova stable/2023.1: Additional qemu safety checking on base images  https://review.opendev.org/c/openstack/nova/+/92329022:31
opendevreviewMerged openstack/nova stable/2023.1: Fix vmdk_allowed_types checking  https://review.opendev.org/c/openstack/nova/+/92329122:32
fungiyay!!!22:32
dansmithfungi: thanks for all your work, I didn't appreciate how much that would involve you and your clicky finger22:44
fungino worries, it was thoroughly calloused already22:45
fungi(no actual clicking was done, unless you count the clickity-clack of my mechanical keyboard)22:46
fungilike all real software, zuul has a proper cli22:46
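For the curious, the enqueue/promote shuffling described throughout this log maps onto zuul-client invocations roughly like the ones below, using the change numbers from earlier in the day. The options are reproduced from memory and require tenant-admin credentials, so treat them as an approximation and check zuul-client --help before relying on them.

    # re-add a change that fell out of gate (options approximate)
    zuul-client --zuul-url https://zuul.opendev.org \
        enqueue --tenant openstack --pipeline gate \
        --project openstack/nova --change 923287,3

    # move a set of changes to the head of the shared gate queue
    zuul-client --zuul-url https://zuul.opendev.org \
        promote --tenant openstack --pipeline gate \
        --changes 923287,3 923289,2 923290,2 923291,3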
