Friday, 2025-08-29

01:33 <opendevreview> cid proposed openstack/ironic-python-agent master: Type annotations and checking  https://review.opendev.org/c/openstack/ironic-python-agent/+/958333
06:04 <opendevreview> Pierre Riteau proposed openstack/tenks master: Keep support for Python 3.9  https://review.opendev.org/c/openstack/tenks/+/958827
07:41 <opendevreview> Pierre Riteau proposed openstack/tenks master: Keep support for Python 3.9  https://review.opendev.org/c/openstack/tenks/+/958827
07:45 <opendevreview> Pierre Riteau proposed openstack/tenks master: Keep support for Python 3.9  https://review.opendev.org/c/openstack/tenks/+/958827
07:55 <abongale> good morning ironic o/
10:10 <hberaud[m]> Hey folks, following my previous patch to add hacking rules to ban eventlet in ironic (https://review.opendev.org/c/openstack/ironic/+/958738) I looked through all the ironic projects (https://governance.openstack.org... (full message at <https://matrix.org/oftc/media/v1/media/download/AQYZY1mt_dTSrGUWnAu81fi6SZQACvWZIkPmk0TduKMhF41CcRc6rxuZeMGm2-aK8oU3_RDfdynrfuLR4U8I761CeZPWfDtQAG1hdHJpeC5vcmcvRXlRQUhLaVB5SFJyWkxRc0dkdml6T1pL>)
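[Editor's note: a hacking rule like the one referenced above is just a function that flake8 runs over each logical line. The sketch below is illustrative only; the check name and "N999" error code are assumptions, not what the actual review uses.]

```python
import re

# Minimal sketch of a hacking-style local check that bans eventlet
# imports, in the spirit of the patch linked above.
EVENTLET_IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+eventlet\b")


def check_no_eventlet_import(logical_line):
    """Yield an (offset, message) pair for any direct eventlet import."""
    if EVENTLET_IMPORT_RE.match(logical_line):
        yield 0, "N999: eventlet is banned; use native threading instead"
```

Such checks are typically registered via a `flake8.extension` entry point in the project's `checks` module so the gate enforces them automatically.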
10:29 <dtantsur> Does anyone know if bifrost-integration-redfish-vmedia-uefi-centos-9 is expected to pass still?
10:29 <dtantsur> We're running it on Ironic, and I think it's permafailing
10:30 <dtantsur> If it's hopeless because of CS9, I'd rather replace it with a working job
10:30 <dtantsur> TheJulia: https://review.opendev.org/c/openstack/ironic/+/958776 passes our CI, and passes various flavours of Metal3 CI.
10:31 <dtantsur> I'm not sure why it did not occur to me back then that we don't really need the local RPC... probably because I was in terrible need of a vacation :)
10:49 <priteau> dtantsur: probably some conflict between the newest requirements in master and the need for py39? I guess Bifrost master could switch to c10s?
10:49 <priteau> On a somewhat related note, I am looking for reviews on https://review.opendev.org/c/openstack/tenks/+/958827
11:03 <dtantsur> I remember rpittau doing something to keep bifrost working.. but I don't recall what it was
11:04 <dtantsur> mm, we don't have CS10 support, I guess there have been roadblocks. Maybe because some of the OpenStack testing nodes don't support it.
11:05 <dtantsur> yeah, 3.9 is gone in Ironic, so Bifrost cannot be used
11:06 <priteau> CS10/RL10 support is going to be required for deploying Bifrost with Kolla
11:08 <dtantsur> I imagine. Let me see what can be done..
11:12 <opendevreview> Pierre Riteau proposed openstack/bifrost master: [WIP] Test CentOS 10 support  https://review.opendev.org/c/openstack/bifrost/+/958853
11:13 <priteau> I just did a find and replace on the jobs for now, to check the status
11:13 <dtantsur> I'll take over rpittau's CI patch to unblock us
11:17 <opendevreview> Pierre Riteau proposed openstack/bifrost master: [WIP] Test CentOS 10 support  https://review.opendev.org/c/openstack/bifrost/+/958853
11:18 <opendevreview> Pierre Riteau proposed openstack/bifrost master: [WIP] Test CentOS 10 support  https://review.opendev.org/c/openstack/bifrost/+/958853
11:22 <opendevreview> Dmitry Tantsur proposed openstack/bifrost master: Recover the CI  https://review.opendev.org/c/openstack/bifrost/+/957847
11:23 <dtantsur> This ^^ is a bit nuclear, but nobody has time to figure out non-default Python on these versions
11:23 <dtantsur> rpittau tried and got stuck in a rabbit hole
11:45 <opendevreview> Pierre Riteau proposed openstack/bifrost master: [WIP] Test CentOS 10 support  https://review.opendev.org/c/openstack/bifrost/+/958853
11:47 <priteau> dtantsur: I think it's fine.
11:48 <opendevreview> Pierre Riteau proposed openstack/bifrost master: [WIP] Test CentOS 10 support  https://review.opendev.org/c/openstack/bifrost/+/958853
12:33 <opendevreview> Dmitry Tantsur proposed openstack/bifrost master: Fix the dnsmasq.leases location on Debian systems  https://review.opendev.org/c/openstack/bifrost/+/958873
12:55 <opendevreview> Pierre Riteau proposed openstack/bifrost master: [WIP] Test CentOS 10 support  https://review.opendev.org/c/openstack/bifrost/+/958853
13:13 <dtantsur> Also, folks, the slurp-upgrade job is permafailing on bifrost: https://zuul.opendev.org/t/openstack/builds?job_name=bifrost-slurp-upgrade-ubuntu-jammy&project=openstack/bifrost
13:14 <dtantsur> if you care about this feature, please fix it
13:15 <dtantsur> Meanwhile, https://review.opendev.org/c/openstack/bifrost/+/957847 brings the bifrost voting jobs to a coherent state; I'd appreciate an urgent review
13:15 <dtantsur> iurygregory, TheJulia, JayF, cardoe ^^
13:18 <dtantsur> The pxe-uefi job does not actually work either, despite https://review.opendev.org/c/openstack/bifrost/+/958873/
13:20 <dtantsur> priteau: your cs10 patch is looking promising, but it won't pass without my changes; maybe rebase?
13:23 <TheJulia> Cores: we likely need to weigh some release process handling, specifically because https://review.opendev.org/c/openstack/ironic/+/958776 seeks to revert something we released in 31.0.0.
13:23 * dtantsur parse error
13:24 <dtantsur> What exactly are you suggesting?
13:24 <TheJulia> I'm on the fence, FWIW, but if we can reach consensus quickly I think that is for the best
13:24 <TheJulia> we released the change you're seeking to revert
13:24 <dtantsur> Yep, that's unfortunate, but.. I still don't see a problem? It's not that we're removing an API?
13:25 <TheJulia> True, and it was a super short-lived configuration
13:25 <TheJulia> so, overall, maybe it's all okay
13:25 <TheJulia> as an intermediate step
13:25 <TheJulia> I just want to make sure we're all on the same page
13:34 <iurygregory> I think it would be ok
13:43 <opendevreview> Merged openstack/ironic master: docs: trivial: clarify pull secrets for OCI image access  https://review.opendev.org/c/openstack/ironic/+/958765
13:46 <priteau> dtantsur: sure, I will rebase onto yours
13:49 <TheJulia> JayF or cardoe, any opinion?
13:51 <Sandzwerg[m]> Ohai wonderful people of ironic o/
13:51 <TheJulia> O'Hai!
13:52 <cardoe> opinions on coffee? yes please.
13:52 <dtantsur> ++
13:52 <TheJulia> mmmm coffeeeeeee
13:54 <Sandzwerg[m]> I have a question regarding cleaning. In the config we configure a cleaning network. If a node goes through an automatic cleaning it will use that network. But if I change the status of a node so it goes through cleaning, for example "openstack baremetal node provide" on a node in manageable with automated cleaning active, it will create the cleaning ports in the context of the project where I execute the command instead of in the
13:54 <Sandzwerg[m]> dedicated cleaning network. For a create it doesn't matter in which project I create a server/instance. The ports for the deployment will always be in the deployment network. Is this intended? Can it be changed?
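[Editor's note: the behaviour asked about above comes down to which context is used when creating the Neutron ports. A hypothetical sketch, with illustrative names that are not Ironic's actual internals, of the fix discussed later in the log, using the service context for cleaning-related port creation instead of the inbound request context:]

```python
from dataclasses import dataclass


@dataclass
class Context:
    """Simplified stand-in for a request context carrying a project."""
    project: str


# The service context that owns the dedicated cleaning network.
SERVICE_CONTEXT = Context(project="service")


def context_for_port_creation(request_ctx, action):
    """Pick which context Neutron ports should be created under."""
    if action in ("cleaning", "provide"):
        # Admin-ish lifecycle actions should ignore the embedded
        # request credentials, so cleaning ports always land in the
        # cleaning network's project, not the caller's.
        return SERVICE_CONTEXT
    return request_ctx
```

Without the special-casing, a `provide` issued from `cloud_admin/master` would create ports in that project, which is exactly the reported symptom.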
13:59 * TheJulia gets more coffee before re-reading that to make sure she is reading it properly
14:01 <TheJulia> okay, so hmm
14:02 <priteau> dtantsur: bifrost-integration-tinyipa-centos-10 passed :o
14:03 <JayF> TheJulia: meh. I don't love that we're theoretically breaking an API, but I also don't really think anyone is going to care in any meaningful way except for theory or philosophy
14:03 <TheJulia> This sounds like a bug, but it sounds like the admin context you're managing nodes with is not in the same network, which the operator requesting a manual clean is generally expected to be
14:04 <TheJulia> Sandzwerg[m]: ^^^
14:04 <TheJulia> Sandzwerg[m]: Specifically, sounds like you're using the neutron or a similar network interface on the backend, not flat?
14:04 <TheJulia> JayF: I can agree with that
14:06 <TheJulia> your request has inbound context, which is geared for end users to trigger certain state out on the request; we likely need to detect/signal it's an admin-y action and explicitly ignore the embedded credentials for the action, which creates the case you're reporting
14:06 <TheJulia> Sandzwerg[m]: can you please create a bug in launchpad?
14:07 <cardoe> So we're effectively trying to say whoopsie... 31.0.0, just kidding... or if we merge that, do we want to be noisy and say 31.0.0 is "yanked" (to use a pip term) and it should have never happened... there be breakage?
14:07 <TheJulia> only breakage is if someone explicitly tried to use "none" as local_rpc
14:08 <TheJulia> which was the first path we ended up taking to navigate eventlet removal
14:08 <TheJulia> the change signals that, fwiw
14:08 <TheJulia> well, the revert signals that.
14:09 <cardoe> I'm just thinking if we merge it we've gotta have a clear reason why we released and why we're reverting, other than it's no longer relevant. Sort of like a lesson learned to avoid that mistake in the future?
14:11 <TheJulia> Yeah, I know someone who was already burned by the change in some of their own development originally; maybe this will make it easier as well, but yeah
14:11 <TheJulia> There is a cost to pay with any effort, and sometimes that cost is also learning we made a mistake or found a better way
14:12 <dtantsur> TheJulia, cardoe, to be clear: I don't anticipate any breakage in any case
14:13 <TheJulia> for a deployed state, yeah
14:13 <cardoe> okay, so let's do it but have a very clear note.
14:13 <TheJulia> it is sort of an edge case outside of Metal3
14:13 <dtantsur> (burned is putting it mildly btw: our port 8089 conflicts with kubernetes-nmstate)
14:13 <TheJulia> DOH
14:13 <TheJulia> that is super ouchy
14:13 <dtantsur> yeaaaaah.. since OCP 4.20 is based on 31.0, we're moving the RPC to a different port
14:14 <TheJulia> joy
14:14 <TheJulia> okay
14:14 <TheJulia> cardoe: did you look at the release note dtantsur put in?
14:15 * TheJulia begins wondering if this morning's IRC Channel Coffee was secretly made with decaf....
14:15 <cardoe> yes. It's fine. It's matter of fact.
14:15 <TheJulia> heh, okay
14:15 <TheJulia> wait, was that about the coffee!?! I'm sooo confused now
14:15 <TheJulia> (this is why Julia should not get up at 4:30 AM local)
14:16 <cardoe> I haven't sipped my coffee yet. I came into my office and forgot to make a cup. ADHD powers.
14:16 <dtantsur> ouch
14:16 <cardoe> I don't want to be pedantic, but we could be more explicit in the lesson learned.
14:16 <cardoe> But as you said, it's likely not going to mess anyone up and it was in one recent release.
14:16 <Sandzwerg[m]> TheJulia: yeah, we use neutron. Our deployment network is in domain/project Default/Service, but only technical users have access there. But when I trigger a provide -> cleaning I do it in cloud_admin/master because there I have cloud_baremetal rights and we have port quota available. I can create a bug. What kind of information or log do you need? (We later have an issue because of this, but that is on us and our self-written
14:16 <Sandzwerg[m]> neutron driver; that is unrelated to ironic)
14:17 <TheJulia> cardoe: is there someone we should call to ensure coffee appears? ;)
14:17 <TheJulia> Sandzwerg[m]: cloud_baremetal, I take it, is some sort of custom role or right?
14:17 <dtantsur> Well, I could say "we should not have merged any eventlet migration until we were sure it was perfect for all use cases".. but I don't know if it would be a constructive lesson
14:17 <TheJulia> and perfection is also the enemy of good too....
14:18 <TheJulia> so...
14:18 <TheJulia> we had to sort of find a line there someplace
14:18 <Sandzwerg[m]> TheJulia: yeah, that's the policy "has all ironic related rights"
14:18 <TheJulia> This is why we have system scope support :)
14:19 <Sandzwerg[m]> Our policy is still using the outdated format and might be a bit messed up; it's high on my to-do list to fix that
14:20 <Sandzwerg[m]> Would that solve this here?
14:22 <TheJulia> dtantsur: approved your revert with a longish note in gerrit backing up the discussion
14:22 <TheJulia> Sandzwerg[m]: likely not
14:22 <dtantsur> thx!
14:22 <TheJulia> Sandzwerg[m]: we *might* have fixed this though, if you're still defaulting to a much older policy model
14:23 <TheJulia> If you want to privately message me a version, I can do the legwork to double-check the logic before my next call
14:23 <opendevreview> Merged openstack/ironic master: Drop wsgi script, docs around mod_wsgi  https://review.opendev.org/c/openstack/ironic/+/951055
14:23 <TheJulia> FWIW, it seems super familiar
14:23 <JayF> TheJulia: I suggest as soon as that lands we get a release request out and bump the major version
14:24 <JayF> We also need releases of some stuff according to the mailing list
14:24 <TheJulia> JayF: we already need to bump the major since we removed eventlet
14:24 <dtantsur> we're like 2 weeks from the release anyway, right
14:24 <dtantsur> ?
14:24 <TheJulia> so, our next release will be 32.0.0
14:24 <dtantsur> a nice round number!
14:25 <TheJulia> Wait until we get to 42
14:25 <TheJulia> I suggest we decide to skip semver and stay on 42 for a long time ;)
14:25 <TheJulia> </joking... mostly)
14:26 <Sandzwerg[m]> TheJulia: it's even open source in all its outdatedness :D https://github.com/sapcc/helm-charts/blob/master/openstack/ironic/templates/etc/_policy.json.tpl
14:26 * TheJulia tears up
14:27 <dtantsur> TheJulia: we could do 42.X.Y where X increases for feature releases, Y for bug fixes :D
14:27 <dtantsur> (and just assume that every X release is breaking lol)
14:27 <TheJulia> dtantsur: +2
14:27 <Sandzwerg[m]> We are currently on antelope but I haven't updated the policy yet; it's mostly unchanged since, puh, 8 years or so. Not sure which OpenStack that must have been. Train or even something older?
14:28 <TheJulia> Sandzwerg[m]: fyi, policy has many more rules now :) https://github.com/openstack/ironic/blob/master/ironic/common/policy.py
14:30 <TheJulia> Sandzwerg[m]: uhhhhhh Kilo, Liberty, or maybe Mitaka?
14:32 <TheJulia> Newton at the latest
14:32 <JayF> I'm mainly just appreciating that we're likely going to have a 32.1.0 😂
14:34 <TheJulia> I think what you're encountering may still be a bug
14:41 <dtantsur> An absolutely random Friday afternoon thought: I want a tattoo with that ancient on-metal's bear playing guitar
14:43 <JayF> dtantsur: I think I still have the original somewhere on a file system
14:43 <JayF> dtantsur: I mailed the poster with it to Rackspace though; cardoe hopefully has it now
14:44 <cardoe> hrm... when?
14:44 <dtantsur> I guess copyright can be an issue :D (okay, my laziness is probably a much bigger issue though)
14:46 <Sandzwerg[m]> Hmm I think it might have been Mitaka 😵‍💫
14:47 <Sandzwerg[m]> TheJulia: ok, anything besides the info I mentioned here I should add to a bug report?
14:47 <JayF> cardoe: I think I mailed it to James Denton? Is that the right name?
14:51 <cardoe> Yep
14:51 <cardoe> I'll ask him if he got it.
14:54 <TheJulia> Sandzwerg[m]: nah, I think that should be good; I'm on my call now but I'll look more when I'm off the call
14:55 <Sandzwerg[m]> Alright, thanks
15:15 <TheJulia> Question of the day: Why does the kitten want to eat the 40G DAC cable next to my desk?!?
15:19 <dtantsur> TheJulia: lack of copper in the nutrition? :D
15:20 <TheJulia> I was more thinking she wants to become hackerkittah!
15:26 <dtantsur> Yep, that's a better idea :)
15:27 <TheJulia> Now she is cuddled up next to the cable
15:27 <dtantsur> Photos requested :)
15:27 <TheJulia> She is also next to the corgi; I don't think this situation will last
15:28 <opendevreview> Merged openstack/ironic master: Revert "Switch from local RPC to automated JSON RPC on localhost"  https://review.opendev.org/c/openstack/ironic/+/958776
15:29 <JayF> I know the question was probably non-serious, but apparently small animals are attracted to low-level electric fields. It's the reason why so many coax cables on poles get destroyed by squirrels.
15:32 <dtantsur> TIL!
15:36 <TheJulia> definitely non-serious, and definitely not powered :)
15:42 <priteau> Could an Ironic core please W+1 this fix? https://review.opendev.org/c/openstack/tenks/+/958827
15:43 <priteau> It is fixing Kayobe stable CI
15:43 <priteau> Thanks TheJulia
16:03 <opendevreview> Merged openstack/tenks master: Keep support for Python 3.9  https://review.opendev.org/c/openstack/tenks/+/958827
16:40 <Sandzwerg[m]> Here is the new bug. While writing it up I noticed that I mixed up some details: https://bugs.launchpad.net/ironic/+bug/2121702
16:42 <TheJulia> ack, thanks
17:04 <JayF> ...are we OK with tenks being stuck on an earlier version of python?
17:05 <JayF> that PR looks unpleasant to have in an active openstack project, even with the caveats we were talking about yesterday :)
17:31 <TheJulia> We could always kick it out of the apartment
17:31 <TheJulia> if that is such a strong concern. It's a testing tool, not an end-user tool.
17:35 <JayF> I just feel a little like a hypocrite given how much I, at an openstack level, rail against unmaintained projects when there's one sleeping on the couch in Ironic :)
17:36 <TheJulia> fair tough
17:36 <TheJulia> err
17:36 <TheJulia> fair enoguh
17:36 <TheJulia> enough
17:36 * TheJulia can't spell
18:35 <TheJulia> I'm wondering if we should just cut a release for n-g-s at this point and wrap up the development cycle for ironic. n-g-s appears to be broken testing-wise due to unhappiness with nova
18:35 <TheJulia> I need to sit down and look at it next week I guess
18:41 <JayF> clif: didn't you have some working devstack config locally with ngs?
18:41 <JayF> clif: I wonder if you might be able to help troubleshoot networking-generic-switch CI
18:42 <JayF> (he should see that on Tues)
18:43 <clif> I have something that "works" for some definition of working
18:43 <clif> I'm willing to try to help
18:44 <clif> but I haven't done anything with it besides get it running to the point where ironic and its three test nodes seem to come up properly
18:44 <clif> I haven't gotten to the point where I need to directly mess with networking yet
18:45 <clif> I also have IRC wired to ping me whenever I'm mentioned, so please use judiciously ;)
18:47 <JayF> Might not be a bad thing to point your brain at, given that network architecture is something you should have a chance to go deeper in
19:30 *** gmaan is now known as gmaan_afk
19:38 <cardoe> it's a trap! don't do it!
19:39 <cardoe> So random... we're talking about dynamic portgroups... what about dynamic volume connectors?
19:47 * JayF doesn't use any of ironic's volume magic
19:51 <cardoe> That's not water you're standing in, JayF.... those are my tears.
19:51 <JayF> I mean, I'm always happy to review changes made by others
19:51 <JayF> but that's not on my list anywhere and I'd be unlikely to help beyond normal pushing patches around stuff
19:52 <TheJulia> cardoe: I'm not opposed to such; I'm just getting worried about people failing to read between the lines or even read any of the documentation when it comes to such
19:52 <cardoe> I haven't gotten any of it to work yet for my use case.
19:53 <cardoe> But apparently promises were made.
19:57 <TheJulia> wut?!
19:57 <TheJulia> who? what? how many jelly beans?
19:58 <cardoe> like twelve old licorice jellybeans from somebody's pocket... they're kinda melty with lint stuck in them
20:00 <TheJulia> A reno revision that gets you
20:01 <TheJulia> I need the caps to build my post-apocalyptic vault kit ;)
20:06 <TheJulia> Serious question when Julia is burnt out and like.. needing comedy: Do we need an "I want ironic to support all the magical VMey features" sig?
20:12 <cardoe> We can for sure.
20:14 <cardoe> So the dynamic connector comment, for example, just stems from seeing the dynamic portgroup work and thinking about some of that.
20:14 <cardoe> So today on the KVM side, there's os-brick in nova-compute
20:14 <JayF> Oh, you mean you wanna adopt the out-of-tree libvirt management driver for Ironic?
20:15 <cardoe> Well no.
20:15 <cardoe> So I haven't gotten this all to work yet.
20:15 <JayF> https://tinyurl.com/ironic-libvirt am I the only person who knows this exists!?
20:16 <cardoe> Well, so the concrete example for me is not attaching a boot volume off a NetApp. But attaching other volumes.
20:16 <cardoe> So the OS is still on disk.
20:17 <TheJulia> JayF: can you have an AI make a my little pony version of that, please?
20:17 <JayF> cardoe: what's the value that ironic provides doing that vs post-boot automation of sorts?
20:18 <JayF> ♫ We're no stranger to rainbows ♫
20:20 <cardoe> https://infohub.delltechnologies.com/sv-se/l/nvme-tcp-boot-from-san-with-dell-powerstore-storage-arrays/enable-nvme-of/
20:21 <JayF> so attaching via DRAC gives you offloading?
20:25 <cardoe> Let's just assume your " key didn't work in the above statement
20:26 <JayF> assume I haven't touched a unique piece of hardware personally in 5 years
20:26 <JayF> which means I don't know if airquotes are needed
20:28 <cardoe> So I think "it's still evolving" is the best answer.
20:28 <cardoe> But stepping back from that specific answer.
20:29 <cardoe> Today the nova libvirt path uses os-brick to get the connector info. In the Ironic world, that must be created first.
20:31 <cardoe> With the introduction of extra metadata on ports, I could put something on those interfaces that says this is my storage path interface.
20:31 <cardoe> If we're booting a systemd-based Linux distro or Windows on the bare metal, we can know what the IQN or NQN is.
20:33 <cardoe> For other OSes, one could fall back to the systemd method, which means you'll have all the data you need in Ironic already.
20:33 <JayF> ironic can't run os-brick in the OS though
20:34 <JayF> because it requires secrets you can't trust for a member in most use cases
20:34 <cardoe> It doesn't need to, is what I'm saying.
20:36 <cardoe> So today nova libvirt calls out to os-brick for get_volume_connector(). And nova ironic calls out to Ironic's volume connector list for get_volume_connector().
20:37 <cardoe> So as a hack for experimentation, we've removed the call to Ironic there and instead create it from a pattern.
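[Editor's note: the "create it from a pattern" experiment described above can be sketched as deriving the connector dict (roughly the shape nova's get_volume_connector() returns) from the node itself rather than querying Ironic's volume connector list. The IQN prefix and exact fields below are illustrative assumptions, not the actual downstream hack.]

```python
def connector_from_pattern(node_uuid, host_ip):
    """Build an os-brick-style connector dict from a node UUID.

    The iSCSI initiator name is derived deterministically from the
    node, so no per-node volume connector record is needed in Ironic.
    """
    return {
        "initiator": "iqn.2004-10.org.openstack:%s" % node_uuid,
        "ip": host_ip,
        "host": node_uuid,
        "multipath": False,
    }
```

The trade-off is that the deployed OS must be configured to use the same derived initiator name, which is what the systemd-based naming discussion above is getting at.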
20:44 <TheJulia> so, there is nothing saying we can't post through an update to volume connectors which could trigger an async task
20:44 <TheJulia> which under the hood talks to a bmc....
20:45 * TheJulia whistles innocently
20:47 <cardoe> yeah. there could be certain cases where it can all work.
20:48 <TheJulia> so, one of the things I inherently expected was resyncing with cinder on just the base volume id
20:48 <TheJulia> so entirely doing the whole dance with cinder
20:48 <TheJulia> to resync target/connector data as applicable
21:11 <rm_work[m]> Does ironic not actually have a native concept of Availability Zones? Is that a nova-only thing? Trying to figure out how they interact, if we have racks of baremetal nodes grouped into pods and we want each set of racks to be a different AZ
21:11 <rm_work[m]> Looking at resource classes, traits, etc. but failing to picture what actually ties it together; is there a doc with this somewhere?
21:14 <JayF> Yeah, I don't have much familiarity with availability zones. I suspect you would have to use shards or conductor groups to segregate machines into separate Nova compute instances which lived in different AZs
21:14 <JayF> As far as ironic directly, we have no first-class API around availability zones
21:15 <JayF> But essentially, you can map a conductor group or shard to another compute, map those Nova computes into separate availability zones just like you would map conductors into separate availability zones, and I believe it should work
21:15 <JayF> ***hypervisors into AZs
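[Editor's note: the mapping described above can be sketched as a simple partition: Ironic has no first-class AZ API, so nodes are segregated by conductor_group (or shard), each group is served by its own nova-compute, and those computes are placed into Nova AZs via host aggregates. The group-to-AZ table below is an illustrative operator choice, not anything Ironic computes itself.]

```python
from collections import defaultdict


def az_plan(nodes, group_to_az):
    """Group node UUIDs by the Nova AZ their conductor_group maps to."""
    plan = defaultdict(list)
    for node in nodes:
        # Each conductor_group is assumed to map 1:1 to a nova-compute
        # host, which the operator then places into an AZ aggregate.
        plan[group_to_az[node["conductor_group"]]].append(node["uuid"])
    return dict(plan)
```

So "one rack pod = one conductor group = one nova-compute = one AZ" is the working pattern; resource classes and traits handle scheduling within an AZ, not the AZ boundary itself.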
21:37 <TheJulia> Yeah, that's really the best way
22:05 <TheJulia> Part of the underlying challenge is the scaling model is a bit different than other services, and often you're not building a bunch of distinct bmc management networks. Besides, losing bmc access is not the worst thing.
22:10 <TheJulia> So an idea might be to toggle ports to FibreChannel and use them as dedicated interfaces, i.e. not try and cross over the function to ask the OS if it wants the packet for network processing before presenting it over storage
22:11 <TheJulia> Then it would *really* just be a matter of the host rescanning for a new LUN, just like FC
22:17 <cardoe> Yeah
22:17 <TheJulia> which ain't horrible... really. :\
22:17 <cardoe> And that's really how my hardware for this is set up anyway.
22:25 <TheJulia> The total question is... can a machine see the card when it's in that mode, or what does it see it as with an OS running
22:25 <TheJulia> Because then that exposes you to the inner workings under the hood
22:26 <TheJulia> In theory, we could also make it easy to configure a card
22:26 <TheJulia> We could also just expose it on the port and map it over
22:26 <TheJulia> but we would need to disable it from being used for normal IP traffic, which is fine
23:20 *** gmaan_afk is now known as gmaan
23:21 * TheJulia scribbles new topics into the etherpad for the PTG.

Generated by irclog2html.py 4.0.0 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!