| priteau | Hello Nova team. I have just hit the `mkfs.vfat: Label can be no longer than 11 characters` issue while live migrating an instance. There is a fix in Nova but backports are waiting for review. Could you please merge? https://review.opendev.org/c/openstack/nova/+/952003 | 08:47 |
|---|---|---|
| priteau | Uggla: Would it be possible to merge these backports? | 10:04 |
| Uggla | priteau, I think so. I'll try to have them merged. | 10:09 |
| priteau | Thank you! | 10:09 |
| gibi | priteau: approved the backport now | 10:16 |
| opendevreview | Doug Szumski proposed openstack/nova master: Stop corrupting ephemeral volumes during cold migration https://review.opendev.org/c/openstack/nova/+/940900 | 10:36 |
| priteau | Thanks gibi. I might ping you for the next ones! | 10:46 |
| gibi | priteau: sure | 10:47 |
| nathharp | hey knowledgeable nova people. I'm having problems with driving Ironic via nova-compute: when building multiple instances, I see errors like: "status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update". Is this likely to be an issue with nova, or placement? This is a 2025.1 cloud | 10:58 |
| nicolairuckel | https://specs.openstack.org/openstack/nova-specs/specs/2025.1/approved/vtpm-live-migration.html Is it correct that there is nothing to be implemented for the user mode, since in that case live migration is not possible? | 11:23 |
| opendevreview | Merged openstack/nova stable/2025.1: libvirt: Use common naming convention for ephemeral disk labels https://review.opendev.org/c/openstack/nova/+/952003 | 12:50 |
| priteau | gibi: 2025.1 backport has merged, thanks! 2024.2 backport is ready too: https://review.opendev.org/c/openstack/nova/+/952004 | 12:56 |
| *** bauzas3 is now known as bauzas | 13:07 | |
| opendevreview | Balazs Gibizer proposed openstack/nova master: Default native threading for conductor and compute https://review.opendev.org/c/openstack/nova/+/982237 | 13:29 |
| opendevreview | Kamil Sambor proposed openstack/nova master: Replace eventlet with standard threading in novncproxy https://review.opendev.org/c/openstack/nova/+/976089 | 14:32 |
| opendevreview | Merged openstack/nova stable/2024.2: libvirt: Use common naming convention for ephemeral disk labels https://review.opendev.org/c/openstack/nova/+/952004 | 15:09 |
| opendevreview | Kamil Sambor proposed openstack/nova master: Replace eventlet with standard threading in novncproxy https://review.opendev.org/c/openstack/nova/+/976089 | 16:06 |
| clif | sean-k-mooney: could you give me some context on ProviderTree? does the tree represent a hierarchical relationship between nodes? I also see ProviderTree calls out the ironic driver specifically for having 'large number of roots' | 17:11 |
| sean-k-mooney | clif: placement is a forest data structure: we have a forest of providers that are themselves arranged in a tree structure, with root RPs representing a compute node and potentially nested resource providers as children representing gpus or pci devices on that compute host | 17:15 |
| sean-k-mooney | clif: in the context of ironic, each ironic node is created in placement as a separate resource provider with an inventory of 1 of a resource class like CUSTOM_BAREMETAL_SOMETHING | 17:15 |
| sean-k-mooney | so a single nova-compute agent with the ironic driver manages a set of trees of depth 1 | 17:16 |
| clif | ah, ok | 17:16 |
| sean-k-mooney | the uuid of the placement resource provider is the ironic node uuid, which is also the nova compute node uuid | 17:16 |
| sean-k-mooney | so it's a 1:1:1 mapping between compute_node in the nova db, resource provider in the placement db, and node in the ironic db | 17:17 |
| clif | so each tree is a single ironic node and each tree's children is information about the ironic node? | 17:17 |
| sean-k-mooney | clif: for almost every other driver, each nova-compute agent manages exactly 1 compute node and 1 root resource provider | 17:18 |
| sean-k-mooney | clif: for the ironic driver we don't create children since you can't subdivide ironic hosts | 17:18 |
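A toy model of the structure described above (hypothetical names, not Nova's actual ProviderTree API): a forest of root resource providers, where a libvirt compute node is one root with nested children, while an ironic-driven agent manages many depth-1 roots.

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    # Simplified stand-in for a placement resource provider.
    uuid: str
    inventory: dict
    children: list = field(default_factory=list)

def depth(p: Provider) -> int:
    # A lone root has depth 1; children add levels below it.
    return 1 + max((depth(c) for c in p.children), default=0)

# Ironic case: one compute agent manages many roots, each a whole
# bare-metal node exposed as inventory of 1 of a custom resource class.
ironic_forest = [
    Provider(uuid=f"node-{i}", inventory={"CUSTOM_BAREMETAL_GOLD": 1})
    for i in range(3)
]

# Libvirt case: a single root (the compute node) with nested children
# such as PCI devices or GPUs.
libvirt_root = Provider(
    uuid="compute-1",
    inventory={"VCPU": 64, "MEMORY_MB": 262144},
    children=[Provider(uuid="compute-1-gpu-0", inventory={"VGPU": 4})],
)

print(len(ironic_forest), [depth(p) for p in ironic_forest])  # 3 [1, 1, 1]
print(depth(libvirt_root))  # 2
```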
| JayF | (this is, I presume, the root cause for how so much perf was left on the table in the Ironic case?) | 17:18 |
| sean-k-mooney | we could in the future but we didn't have a use case today | 17:18 |
| sean-k-mooney | JayF: yes more or less | 17:19 |
| sean-k-mooney | JayF: the first bottleneck (the deep copy of the providers managed by this compute agent) | 17:20 |
| sean-k-mooney | is cheap if the number of providers is 1-8 max, as in 99% of cases | 17:20 |
| sean-k-mooney | it's really expensive if you have hundreds or thousands, especially if you're doing it in a loop | 17:20 |
| clif | could we create a separate provider implementation for ironic driver use? | 17:21 |
| sean-k-mooney | JayF: so i think that is 1 of the hopefully simple things to fix | 17:21 |
| sean-k-mooney | clif: we try not to have any virt driver awareness in the compute manager or resource tracker | 17:21 |
| sean-k-mooney | and that is where the provider stuff lives | 17:21 |
| sean-k-mooney | but short answer is yes i think we can optimize it | 17:22 |
| clif | alright | 17:22 |
| sean-k-mooney | just without the need for an "if ironic" check | 17:22 |
| sean-k-mooney | clif: that is what https://review.opendev.org/c/openstack/nova/+/980676 is meant to do | 17:22 |
| sean-k-mooney | specifically https://review.opendev.org/c/openstack/nova/+/980676/1/nova/compute/resource_tracker.py#1314 | 17:23 |
| clif | right, I'm trying to load enough nova context into my noggin to get to the point where I can understand what's going on with that patch, the perf sim, etc | 17:23 |
| sean-k-mooney | when we get the provider tree for a specific node, that should copy only that node's RP instead of every provider in our provider cache https://review.opendev.org/c/openstack/nova/+/980676/1/nova/scheduler/client/report.py#935 | 17:24 |
| clif | it seems reasonable on the surface | 17:25 |
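The shape of that optimization can be sketched in plain Python (hypothetical cache layout, not the real report client): deep-copying the whole provider cache costs work proportional to the total number of providers, while copying just the requested root costs only that one tree.

```python
import copy

# Hypothetical provider cache: root RP uuid -> provider data, as an
# ironic-driver agent with many depth-1 trees might hold it.
cache = {f"node-{i}": {"uuid": f"node-{i}",
                       "inventory": {"CUSTOM_BAREMETAL_GOLD": 1}}
         for i in range(500)}

def copy_all(cache):
    # Old behaviour: copy every provider even when only one is needed.
    return copy.deepcopy(cache)

def copy_one(cache, uuid):
    # Optimized behaviour: copy only the requested node's tree.
    return copy.deepcopy(cache[uuid])

tree = copy_one(cache, "node-42")
tree["inventory"]["CUSTOM_BAREMETAL_GOLD"] = 0
# The cache itself is untouched, which is the point of copying at all.
print(cache["node-42"]["inventory"]["CUSTOM_BAREMETAL_GOLD"])  # 1
```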
| sean-k-mooney | clif: ya, that's a non-zero amount of context to load | 17:25 |
| sean-k-mooney | the other optimisation that i think has promise is this patch https://review.opendev.org/c/openstack/nova/+/980679/1 | 17:26 |
| sean-k-mooney | what that one does is very simple | 17:26 |
| sean-k-mooney | today we have a for loop over every ironic node managed by this agent (limited via shard key or peer_list) | 17:26 |
| sean-k-mooney | the patch just kicks the processing of each node out into a thread pool | 17:27 |
| sean-k-mooney | so you can actually do them in parallel and tune it via a config option | 17:27 |
| sean-k-mooney | defaulting to 1 for backward compatibility | 17:27 |
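The per-node fan-out described above might look roughly like this (function names and the max_workers knob are illustrative, not the actual patch, with 1 as the backward-compatible serial default):

```python
from concurrent.futures import ThreadPoolExecutor

def update_node(node):
    # Stand-in for the per-ironic-node resource tracker work.
    return f"{node}: updated"

def update_available_resources(nodes, max_workers=1):
    # max_workers=1 keeps today's serial behaviour; raising it lets the
    # loop over ironic nodes run in parallel.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order in its results.
        return list(pool.map(update_node, nodes))

results = update_available_resources([f"node-{i}" for i in range(4)],
                                     max_workers=4)
print(results[0])  # node-0: updated
```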
| JayF | And I hope this is all building up, once the identified low hanging fruit are picked, to a deeper look at if there's things we can do to improve it further. I am skeptical that this will be enough to bring us within an acceptable startup time :) | 17:29 |
| clif | I will continue to delve | 17:29 |
| clif | thanks for the context sean | 17:29 |
| sean-k-mooney | JayF: these were the hotspots in the current profiling, but if we can get a better reproducer with the ironic fake driver or something else we can see what the next bottleneck is | 17:30 |
| JayF | ironic devstack plugin support for fake driver nodes is now #2 on my todo list (I got the PRs up removing 15k worth of retired driver code lines from Ironic) | 17:30 |
| sean-k-mooney | JayF: we could, as you suggested, skip the placement management entirely in nova and delegate that to ironic, but i don't know if that will really help | 17:31 |
| sean-k-mooney | we may need to consider other changes like batch updating placement, which may need an api change | 17:31 |
| JayF | sean-k-mooney: basically in the places we poll Ironic currently, any way to switch that to something more evented | 17:31 |
| JayF | sean-k-mooney: or batching into placement, yep | 17:31 |
| sean-k-mooney | well yes, so we could extend the nova external event api for ironic | 17:31 |
| JayF | sean-k-mooney: either way once we pluck the low hanging fruit I want us to take a look at doing the bigger stuff to help; myself and clif can be the hands making the action work | 17:32 |
| sean-k-mooney | that would allow us to get rid of polling for power state, for example | 17:32 |
| JayF | I just don't always know which way is up in Nova :) | 17:32 |
| sean-k-mooney | JayF: in case you are not familiar with it: https://docs.openstack.org/api-ref/compute/#create-external-events-os-server-external-events | 17:32 |
| sean-k-mooney | that is how cinder and neutron tell nova that the state of something has changed or a workflow has completed in an event-based manner | 17:33 |
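The request body for that API is a list of events; a sketch of what a power-update event (the one discussed just below, added in compute API microversion 2.76) might look like, building the payload only rather than a working client:

```python
import json

def power_update_event(server_uuid, power_state):
    # Body for POST /os-server-external-events.
    # power_state ("POWER_ON" or "POWER_OFF") travels in the tag field.
    return {"events": [{"name": "power-update",
                        "server_uuid": server_uuid,
                        "tag": power_state}]}

body = power_update_event("11111111-2222-3333-4444-555555555555",
                          "POWER_OFF")
print(json.dumps(body))
```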
| JayF | Interesting | 17:33 |
| sean-k-mooney | power-update might actually be used by ironic already | 17:33 |
| sean-k-mooney | but there is no reason that we could not add other events if we had a reason to | 17:34 |
| JayF | I am 99.999999% sure it's not | 17:34 |
| * JayF is checking | 17:34 | |
| sean-k-mooney | im just trying to think what else is power related that it could be | 17:34 |
| JayF | yep we 100% do | 17:34 |
| JayF | TIL | 17:35 |
| sean-k-mooney | https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id69 | 17:35 |
| JayF | https://opendev.org/openstack/ironic/src/branch/master/ironic/common/nova.py#L91 | 17:35 |
| sean-k-mooney | yep, so i think this was added so we can turn off the power monitoring periodic task in ironic deployments | 17:35 |
| JayF | if we could just trigger update of placement relevant info from Ironic e.g. on certain state changes | 17:35 |
| JayF | I don't care if any of that comm goes Ironic<>Nova<>Placement or Ironic<>Placement | 17:36 |
| sean-k-mooney | well, ironic can go talk to placement and do some limited things today | 17:36 |
| JayF | especially if we were able to somehow get those calls to impact the node cache inside the virt driver itself | 17:36 |
| sean-k-mooney | JayF: so if you tell me what you want to update there, i can say if it's doable today or not without nova changes | 17:37 |
| JayF | because then we can do the "reconciliation job on a timer" change like you did that I already -1'd | 17:37 |
| JayF | sean-k-mooney: I don't know that answer (yet); but whatever happens when a node goes unavailable (e.g. cleaning or move out of AVAILABLE state) and whatever happens when a node goes available (anything->AVAILABLE state) | 17:37 |
| JayF | Plus maybe something that would trigger on a node trait add to expose that trait to placement | 17:38 |
| sean-k-mooney | so the way to model that in placement today is to set reserved=total on the rp | 17:38 |
| sean-k-mooney | that is something that ironic can do directly | 17:38 |
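The reserved=total trick mentioned above can be shown as a pure payload transformation (a sketch of the inventory record shape only, not a placement client): setting reserved equal to total leaves the inventory defined but makes the node unschedulable.

```python
def reserve_all(inventories):
    # For each resource class, set reserved == total so the scheduler
    # sees no free capacity; the inventory itself stays in place.
    return {rc: dict(record, reserved=record["total"])
            for rc, record in inventories.items()}

inv = {"CUSTOM_BAREMETAL_GOLD": {"total": 1, "reserved": 0}}
reserved = reserve_all(inv)
print(reserved["CUSTOM_BAREMETAL_GOLD"])  # {'total': 1, 'reserved': 1}
```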
| JayF | Right now my goal is to get all these written down so we can evaluate them after the lower hanging fruit have been plucked | 17:38 |
| JayF | because hopefully by then we'll have devstack w/fake driver nodes | 17:39 |
| sean-k-mooney | ack, sounds good | 17:39 |
| JayF | and can have real data for what is helpful vs what just seems helpful at a surface read | 17:39 |
| * JayF going to go toss it in an etherpad for the shared ptg sesh | 17:39 | |
| sean-k-mooney | JayF: you know, if we do deploy with the fake driver with 1000s of nodes initially, devstack is definitely going to fail because it will time out waiting for nova-compute to start :P | 17:43 |
| sean-k-mooney | so maybe i'll try with 10 and then add them once it's up | 17:44 |
| JayF | sean-k-mooney: Ironic has a long, long, long history of knowing where all devstack timeouts are so we can set them high enough to make an infra team member weep | 17:44 |
| JayF | lol | 17:44 |
| JayF | sean-k-mooney: maybe add 1000 fake nodes, only put 3 of them in a shard managed by nova-compute by default or something :D either way I'll figure it out | 17:45 |
| JayF | sean-k-mooney: clif: I put some notes in here https://etherpad.opendev.org/p/ironic-nova-compute-startup-2026.2 | 17:47 |
| sean-k-mooney | clif: if you want to rebase or rearrange those wip patches, feel free | 17:47 |
| sean-k-mooney | i wont get back to them for a while but if they are helpful to you go for it | 17:48 |
| clif | thank you, will do | 17:51 |
| melwitt | gmaan: hey, I have a hopefully easy patch to fix openstacksdk usage for using endpoint discovery for unified limits if you might be able to look at your convenience https://review.opendev.org/c/openstack/nova/+/975106 | 18:51 |
| erlon | sean-k-mooney++ melwitt++ thanks for the thorough review on the RBD XML update functionality for the migration process patch | 19:04 |
| sean-k-mooney | erlon: im not sure if keeping the separate small function for nova-provisioned rbd storage is better or not, for what it's worth | 19:05 |
| sean-k-mooney | i could be persuaded either way | 19:05 |
| sean-k-mooney | but i would really prefer to see a functional reproducer for that | 19:05 |
| erlon | melwitt: did you try the XML "<ip family="ipv6" address="[2001:db8:ca2:2::1]" prefix="64"/>" with an ipv6 addr without the []? did it pass the checks? | 19:07 |
| erlon | sean-k-mooney: you mean to have that code inside another function like _update_volume_xml? | 19:10 |
| erlon | I believe that having the code in a separate function is better for readability and testing | 19:11 |
| erlon | Another thing I was considering: when I brought it up in the meeting, we discussed wanting another nova-manage command to perform the bulk update. I haven't started to think about how I'll do that implementation, but having things modularized usually helps to reuse code. | 19:14 |
| sean-k-mooney | erlon: my concern is drift between the nova and cinder volume management | 19:15 |
| sean-k-mooney | erlon: like currently you are just updating the xml but not the bdms, right | 19:16 |
| melwitt | erlon: yes I did I pasted the output in the code block of my comment, I ran the virt-xml-validate CLI without brackets (passed) and with brackets (failed) | 19:16 |
| sean-k-mooney | i dont recall if we store rbd info in the bdms for nova-provisioned storage | 19:16 |
| sean-k-mooney | melwitt: it's possible that's libvirt version dependent | 19:17 |
| melwitt | no we don't, BDMs with rbd info is for cinder volume only | 19:17 |
| sean-k-mooney | ok, then that's not an issue with the current patch | 19:17 |
| melwitt | sean-k-mooney: yes true. it would be ideal if we could test it for real somehow but regardless I think it's safest to do it without brackets bc that's how XML is currently generated for rbd | 19:18 |
| sean-k-mooney | we have a tempest-ipv6 job in our gate | 19:18 |
| sean-k-mooney | it just does not have ceph in it | 19:18 |
| sean-k-mooney | but i think we can look at that in a followup | 19:18 |
| melwitt | I posted a comment on the review but I am wondering if the patch is actually needed right? bc AFAICT all cases should be covered already for nova local/ephemeral rbd | 19:18 |
| melwitt | I wonder if people just want the patch to force an update to XML right away instead of waiting for a lifecycle event. just not sure I get the use case | 19:19 |
| sean-k-mooney | so it's not an issue if you reboot the vm | 19:20 |
| sean-k-mooney | but if we try to live migrate with the old mons i think the dest vm will fail to connect to the storage | 19:20 |
| melwitt | ceph handles all of mon IP update transparently while the guest is running and it's not clear from the bug report if the lack of XML update there results in a failure or error of some kind | 19:20 |
| sean-k-mooney | and the migration will fail as a result if those mons are not available | 19:20 |
| melwitt | right and if you reboot the VM the existing code already grabs fresh mon IPs from ceph | 19:20 |
| melwitt | ok, that is what I wanted to know. I wish they had pasted the error in the bug | 19:21 |
| sean-k-mooney | ya, so unless libvirt is proxying that info without telling us, i suspect that is the failure mode | 19:21 |
| melwitt | ok, if it's actually a failure and not just a "I will feel better if the XML is updated now vs later" then I think the patch is useful | 19:21 |
| erlon | melwitt, what would be the lifecicle events that would update that? | 19:21 |
| melwitt | erlon: basically anything that calls _hard_reboot() internally so a stop/start, a hard reboot, an unshelve, I'm not sure what else off the top of my head. the main point is as long as the guest is running, the xml does not need an update, that's for later when it's starting up from being shutdown | 19:23 |
| erlon | afic, ephemeral rbds are not updated unless the instance is sheved/unshelved | 19:23 |
| sean-k-mooney | it should be updated any time the domain is redefined | 19:23 |
| melwitt | erlon: however, if live migration would in fact _fail_ without the xml updated, then that is the problem that makes the patch needed. it was just not clear to me from the bug report whether this is a real failure or if someone did dumpxml and saw old mon IPs and that made them believe there is a problem | 19:24 |
| erlon | yes, but the use case is to keep the instance running. | 19:24 |
| sean-k-mooney | so cold migrate, shelve, hard reboot, evacuate | 19:24 |
| melwitt | yup every time it's redefined. for local/ephemeral RBD to be clear. Cinder volume is NOT that way | 19:24 |
| melwitt | for Cinder volume, currently, you have to live migrate to make it consume new mon IPs for its next xml redefinition. and I wonder if that might be the confusion | 19:25 |
| erlon | melwitt: it does fail since, when RBD is trying to connect to one of the monitors, if the monitor times out, rbd will not retry | 19:25 |
| melwitt | ack ok | 19:26 |
| erlon | melwitt: no, the patch clearly handles that difference between Cinder-backed volumes and RBD ephemerals and config drives | 19:27 |
| sean-k-mooney | erlon: sure, but it's adding support for something we do not support | 19:28 |
| sean-k-mooney | changing the mon ips for a ceph cluster attached to a nova instance | 19:28 |
| sean-k-mooney | we do not actually support that even if it works | 19:28 |
| sean-k-mooney | we can make it better, but it is still operator error to ever do that | 19:28 |
| melwitt | to be clear, if live migration is failing with local RBD disk when new mon IP happens, then I am personally ok with the patch. and I have a draft comment written up about that if that is the case | 19:29 |
| sean-k-mooney | im ok with the patch to make things better as long as we are clear this is not a supported operation that operators should do freely | 19:29 |
| melwitt | I had thought that as long as the guest remains running, that ceph handles the change transparently and we should not need to update any XML but I was not 100% and realized I could be wrong | 19:29 |
| melwitt | *not 100% sure | 19:30 |
| sean-k-mooney | in the cinder case it causes db corruption on our side at least, as the connection info will be out of sync | 19:30 |
| melwitt | well it's weird actually bc for live migration specifically, we _are_ refreshing connection_info. so ironically cinder volume works fine in this case already | 19:30 |
| sean-k-mooney | melwitt: i believe that is only true if 1 of the osds remains active, at least that was a limitation of krbd in the past | 19:30 |
| erlon | melwitt, yes, it does fail in that case | 19:30 |
| melwitt | sean-k-mooney: I see | 19:31 |
| sean-k-mooney | melwitt: actually ya, you're right, we get an updated attachment | 19:31 |
| sean-k-mooney | but only if we live migrate | 19:31 |
| melwitt | ok, since failure is confirmed, I will post my draft comments that explain why I think the patch is ok | 19:31 |
| sean-k-mooney | if we just hard reboot, the vm won't come back up if none of the old mon ips are reachable | 19:32 |
| erlon | If you hard reboot, it will get the correct addresses from the monitor | 19:33 |
| melwitt | sean-k-mooney: right. what we are missing for cinder volume is the domain redefine case. bc we didn't want to be pulling connection_info needlessly for 99.999% cases. but I actually had an idea about that, how we could do a cheaper check by calling ceph and comparing to xml, and only if it's different call cinder for fresh connection_info | 19:33 |
| sean-k-mooney | we could, i just don't know if we ever want to really support this | 19:33 |
| melwitt | sorry not compare to xml but compare to bdm | 19:34 |
| sean-k-mooney | well, either would work i guess, but i don't like depending on the xml | 19:34 |
| sean-k-mooney | if we can avoid it | 19:34 |
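melwitt's cheaper-check idea could be sketched as a pure comparison step (hypothetical helper name; fetching live mons from ceph and stored hosts from the BDM are stubbed out as plain lists): only when the sets differ would nova pay for a call to cinder for fresh connection_info.

```python
def mons_differ(bdm_hosts, live_hosts):
    # Compare as sets: ordering and duplicates don't matter for
    # deciding whether the stored connection_info has gone stale.
    return set(bdm_hosts) != set(live_hosts)

# Stale BDM still pointing at an old monitor vs the live mon list.
bdm_hosts = ["10.0.0.1:6789", "10.0.0.2:6789"]
live_hosts = ["10.0.0.2:6789", "10.0.0.3:6789"]

if mons_differ(bdm_hosts, live_hosts):
    # Only now would we go back to cinder to refresh connection_info
    # before redefining the domain.
    print("refresh connection_info")
```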
| melwitt | yeah, I am not looking at it so much as supporting a thing we said is unsupported but rather making everything consistent | 19:34 |
| sean-k-mooney | my concern with this in general is it opens us up to a bunch of bugs related to snapshot or other operations with stale info | 19:34 |
| erlon | sean-k-mooney: this is something that has been floating around for a long time, and many people have shown interest in having this fixed. The bug is the 2nd in heat/affected: https://bugs.launchpad.net/nova/+bug/1452641 | 19:35 |
| sean-k-mooney | we could totally burn all of those down, but im not sure what we are opening ourselves up to if we try | 19:35 |
| melwitt | because today, we have a mon IP update for ephemeral RBD hard reboot, we have a mon update for cinder volume RBD live migration, but the vice versa for each are missing. and that to me seems ... not great | 19:35 |
| gmaan | melwitt: checking | 19:35 |
| sean-k-mooney | erlon: yep, it's not a new complaint | 19:36 |
| sean-k-mooney | i would question the validity of the bug, since we intentionally didn't support this in the past | 19:36 |
| sean-k-mooney | erlon: what i really wish we could do is just use fqdns for the mons instead of ips | 19:37 |
| sean-k-mooney | or multicast/vip ips or similar | 19:37 |
| erlon | That would not come without problems as well, since you would have to rely on DNS | 19:38 |
| erlon | We have discussed this possibility | 19:38 |
| sean-k-mooney | anyways erlon, if we build out the functional tests to demonstrate and validate the fix, im ok with improving it as long as we don't document that this is something operators should expect to work in general | 19:38 |
| melwitt | yeah to be clear, for me it is the inconsistent behavior. but, I do also wonder if it could open up any problems we don't expect. I can't think of anything but of course I guess anything is possible. I am not that much of a RBD expert | 19:38 |
| erlon | sean-k-mooney, so with that I would assume that a nova-manage command would be off the table | 19:39 |
| sean-k-mooney | we do not support krbd, but when we had it as a workaround, changing the mon ip could cause the host kernel to soft lock | 19:39 |
| sean-k-mooney | similar to how io writes to an iscsi block device that is disconnected would hang forever | 19:39 |
| sean-k-mooney | although that was a long time ago, so maybe that's less terrible now | 19:39 |
| sean-k-mooney | erlon: nova-manage won't help here | 19:40 |
| melwitt | gmaan: thanks :) also your ack would be appreciated on this backport proposed that fixes a http 500 => http 400 and if you think that is ok for stable https://review.opendev.org/c/openstack/nova/+/980540 | 19:40 |
| sean-k-mooney | erlon: it would for cinder | 19:40 |
| sean-k-mooney | but not for nova-provisioned volumes | 19:40 |
| sean-k-mooney | we dont use nova-manage to update xmls | 19:40 |
| erlon | The idea would be that once a ceph mon list has changed, it would be possible to fix all vms without having to migrate or shelve them | 19:41 |
| melwitt | we don't but we have for cinder volume nova-manage for refreshing bdm connection_info which would in turn update XML | 19:41 |
| sean-k-mooney | right, but we do it via the controller nodes by updating the db | 19:41 |
| erlon | but I believe that would be something we could live without | 19:41 |
| melwitt | ok I think you just said that | 19:41 |
| melwitt | yeah | 19:41 |
| sean-k-mooney | i dont think we should ever add a nova-manage command that talks to libvirt and modifies the running domain | 19:42 |
| melwitt | yeah so for rbd local/ephemeral if you hard rebooted all the VMs then you would get all the new mon IPs today. but I get that a lot of operators don't want to be rebooting all the guests | 19:42 |
| sean-k-mooney | ya, so "healing it" on live migration is ok | 19:42 |
| sean-k-mooney | healing it in place, to me, voids your warranty, even with something like nova-manage | 19:43 |
| sean-k-mooney | we have always said out-of-band updates to the domain (any write to libvirt not from nova-compute) void the warranty on the domain | 19:43 |
| sean-k-mooney | it would be very easy to create security issues by fat fingering the cli | 19:44 |
| sean-k-mooney | and it would be inherently racy with api actions too | 19:44 |
| opendevreview | sean mooney proposed openstack/nova master: [WIP] how borken is ipv6 only ceph. https://review.opendev.org/c/openstack/nova/+/982302 | 19:53 |
| opendevreview | sean mooney proposed openstack/nova master: [WIP] how borken is ipv6 only ceph. https://review.opendev.org/c/openstack/nova/+/982302 | 19:54 |
| erlon | sean-k-mooney: regarding the direct change/replace all monitors in the function, I understand that comparing that with the old values give us the opportunity to issue a warning, but it should not cause any impact on running instances, since we are just manipulating an in memory XML object, that will be sent to the destination host | 19:55 |
| sean-k-mooney | right, but if that warning ever fires it means that an operator did something wrong | 19:56 |
| sean-k-mooney | so it warns them that there may be problems that need to be fixed | 19:56 |
| gmaan | melwitt: done, backport of 500->400 lgtm too | 20:04 |
| melwitt | thank you gmaan | 20:04 |
| sean-k-mooney | looking at zuul, there is currently no job with ipv6 and (ceph or cinder) in the name, so im not going to hold my breath on ^ working | 20:06 |
| sean-k-mooney | it should be doable, we would just need to tweak the devstack plugin to resolve whatever it finds | 20:07 |
| melwitt | dunno if this is a known issue but I noticed grenade-skip-level-always (non-voting) is failing on stable/2025.1 with "AttributeError: module 'setuptools.build_meta' has no attribute 'prepare_metadata_for_build_editable'. Did you mean: 'prepare_metadata_for_build_wheel'?" | 21:32 |
| melwitt | https://zuul.opendev.org/t/openstack/builds?job_name=grenade-skip-level-always&project=openstack%2Fnova&branch=stable%2F2025.1&skip=0 | 21:32 |
| sean-k-mooney | melwitt: that might be related to jammy and setuptools/pbr versions when upgrading from 2024.1 | 21:45 |
| sean-k-mooney | 2024.2 has not got the pkg_resources fixes | 21:45 |
| sean-k-mooney | at least not in the requirements repo, and has not got the newer pbr release as a result | 21:45 |
| sean-k-mooney | https://review.opendev.org/c/openstack/requirements/+/976602 needs https://review.opendev.org/c/openstack/requirements/+/976903 to be cherry picked first | 21:47 |
| sean-k-mooney | well, it might not be that exactly, but it's related to setuptools and pbr, and i think those might help | 21:49 |
| sean-k-mooney | melwitt https://github.com/openstack/nova/commit/42de440471abf5d1953eacd21f191c1f0ff28ef7 | 21:51 |
| sean-k-mooney | so ya, it's to do with the 2024.1 install | 21:51 |
| sean-k-mooney | 2024.1 is not actually supported for upgrades anymore | 21:51 |
| sean-k-mooney | so we probably should just turn that job off permanently | 21:52 |
| opendevreview | Merged openstack/nova master: unified limits: Fix openstacksdk usage for endpoint discovery https://review.opendev.org/c/openstack/nova/+/975106 | 22:15 |
| opendevreview | Merged openstack/nova stable/2025.2: api: Handle empty imageRef alongside null for local BDM check https://review.opendev.org/c/openstack/nova/+/980540 | 22:27 |
| opendevreview | Merged openstack/nova stable/2025.1: Add functional reproducer for bug 2125030 https://review.opendev.org/c/openstack/nova/+/979864 | 22:27 |
| opendevreview | Merged openstack/nova stable/2025.1: Move cleanup of vTPM secret from driver to compute https://review.opendev.org/c/openstack/nova/+/979865 | 22:35 |
| opendevreview | Merged openstack/nova stable/2025.1: Preserve vTPM state between power off and power on https://review.opendev.org/c/openstack/nova/+/979866 | 22:36 |
| opendevreview | Merged openstack/nova stable/2025.2: Preserve UEFI NVRAM variable store https://review.opendev.org/c/openstack/nova/+/978766 | 22:55 |
| opendevreview | Merged openstack/nova stable/2025.1: Fix string format specifier https://review.opendev.org/c/openstack/nova/+/961337 | 22:56 |
Generated by irclog2html.py 4.1.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!