*** ChanServ changes topic to "Bare Metal Provisioning | Status: http://bit.ly/ironic-whiteboard | Docs: https://docs.openstack.org/ironic/ | Bugs: https://bugs.launchpad.net/ironic | Contributors are generally present between 6 AM and 12 AM UTC. If we do not answer, please feel free to pose questions to openstack-discuss mailing list." | 04:51 | |
rpittau | good morning ironic! o/ | 06:52 |
sylvr | Good morning TheJulia ! I added logs and configuration files to the bug report : https://bugs.launchpad.net/ironic/+bug/2072550 :) | 08:51 |
opendevreview | Merged openstack/ironic master: Imported Translations from Zanata https://review.opendev.org/c/openstack/ironic/+/923809 | 09:14 |
TheJulia | good morning everyone | 13:10 |
opendevreview | Jacob Anders proposed openstack/ironic master: Add Targets to firmware.update on multi system BMCs https://review.opendev.org/c/openstack/ironic/+/922438 | 13:12 |
sylvr | TheJulia: o/ | 13:28 |
TheJulia | sylvr: were there any logs in /var/log/ironic-inspector/ramdisk ? | 13:47 |
TheJulia | nvmd, it is getting an address | 13:50 |
TheJulia | sylvr: so it seems like an order of operations issue | 13:56 |
TheJulia | basically, it looks like you're triggering introspection of a known node | 13:57 |
TheJulia | which means the "hardware discovery" plugin which enrolls an entire node (and sets the ipmi_address field) is never triggered | 13:57 |
TheJulia | the only way to generally trigger that phase is to power on a node on the attached introspection network *without* ironic knowing of it | 13:58 |
sylvr | TheJulia: so, the nodes are trying to contact an older version of the Ironic instance ? | 13:58 |
TheJulia | Ironic baremetal nodes live forever and are not independent based upon version | 14:00 |
TheJulia | If you're done with the record of a node, it needs to be deleted | 14:00 |
TheJulia | Otherwise, the state/node needs to be actuated as you normally would to achieve the given task | 14:00 |
sylvr | yeah, I thought I got that right... | 14:01 |
TheJulia | for example, discovery is entirely modeled around someone pushing a power button in the data center | 14:01 |
TheJulia | if you're doing anything at an API level and the node already exists, it will just re-introspect, not discover. | 14:01 |
sylvr | so, if I'm getting this right, I need to make ironic re-discover my nodes? | 14:02 |
TheJulia | ... or just set the value if you already know it, yes | 14:02 |
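The "just set the value" option above can be sketched as follows. This assumes the standalone `baremetal` CLI from python-ironicclient; the node name and address are placeholders, not values from the conversation:

```shell
# Sketch: set the BMC address on an already-enrolled node instead of
# relying on discovery to fill it in. Node name and address below are
# hypothetical placeholders.
set_bmc_address() {
  node="node-0"          # placeholder node name or UUID
  baremetal node set "$node" \
    --driver-info ipmi_address=192.0.2.10
}
```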
sylvr | well, I need to make sure the discovery is automated using Bifrost and working on the current nodes... I'm doing PoC | 14:03 |
TheJulia | but it sounds like you *already* know about the hardware? | 14:05 |
sylvr | when I want to redeploy Bifrost, I usually delete the nodes, power them off using ipmi-over-lan, redeploy Bifrost, and then power them on again. This should be the complete cycle and re-discover the nodes, right? | 14:06 |
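The cycle described above, as a rough sketch. The node name, BMC address, and credentials are placeholders, and this assumes the `baremetal` CLI and ipmitool are available; the key ordering point from the discussion is that the record must be gone before the node powers on, or re-inspection runs instead of discovery:

```shell
# Sketch of the delete -> power off -> redeploy -> power on cycle.
# All names, addresses, and credentials are hypothetical placeholders.
redeploy_cycle() {
  node="node-0"        # placeholder
  bmc="192.0.2.10"     # placeholder BMC address
  baremetal node delete "$node"    # forget the record so discovery can run
  ipmitool -I lanplus -H "$bmc" -U admin -P secret chassis power off
  # ... redeploy Bifrost here ...
  ipmitool -I lanplus -H "$bmc" -U admin -P secret chassis power on
}
```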
TheJulia | that should work if they no longer exist in "baremetal node list" output | 14:06 |
sylvr | ooh okay | 14:06 |
sylvr | then I'm pretty sure there's a bug then | 14:07 |
TheJulia | or bifrost's config files know about the nodes and it is re-creating them | 14:07 |
TheJulia | I'd check to ensure the node doesn't exist with baremetal node list after deploying bifrost before you power on the node again | 14:08 |
sylvr | oh yeah I'm sure of that too | 14:08 |
sylvr | I'll leave the nodes off 'til I can run watch baremetal node list so I can see them appear one at a time | 14:08 |
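The check being proposed can be sketched like this, assuming a configured `baremetal` CLI (the interval is arbitrary):

```shell
# Sketch: confirm the node records are really gone before powering the
# hardware back on, then watch discovery enroll them one at a time.
verify_discovery() {
  baremetal node list              # should be empty after the redeploy
  watch -n 10 baremetal node list  # nodes should appear as discovery runs
}
```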
TheJulia | okay, the logs from inspector are pretty clearly re-inspection; I can see inspector being notified with a POST request as well | 14:09 |
TheJulia | ... you might need to consider clearing the inspector cache; it has been ages since I've thought about records of nodes in inspector | 14:10 |
TheJulia | JayF: when you're awake and have your coffee, ping me | 14:10 |
sylvr | well, I'm going to prune the docker volume and hope that does the trick | 14:10 |
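What "prune the docker volume" amounts to, as a sketch. The container name is a hypothetical placeholder, and note the caveat that surfaces later in the discussion: if the database lives on that volume, this wipes the node records too, not just inspector's cache:

```shell
# Sketch: destroy the bifrost container and its volumes to reset state.
# Container name below is a placeholder; "volume prune" removes ALL
# unused volumes, including any holding the ironic database.
reset_bifrost_state() {
  docker ps -a               # find the bifrost/inspector container
  docker rm -f bifrost       # hypothetical container name
  docker volume prune -f
}
```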
TheJulia | hehe | 14:10 |
TheJulia | okay | 14:11 |
sylvr | thank you very much ! | 14:11 |
TheJulia | good luck, and let us know how it goes | 14:13 |
sylvr | will do! | 14:15 |
JayF | TheJulia: meeting until 8:00 | 14:23 |
TheJulia | cool cool | 14:26 |
sylvr | TheJulia: is it okay if I post updates directly on the bug report? we're not in the same timezone, and I don't want to spam the IRC more than I already did ^^" | 14:27 |
TheJulia | sylvr: sure, just ping me in the morning :) | 14:28 |
TheJulia | well, my morning, your afternoon | 14:28 |
sylvr | yes okay! well the discovery should get going as we speak | 14:29 |
anshul | Hi! I am trying to install devstack with ironic enabled and I am running into this error when I run ./stack.sh about ~1 hour into the process: "[ERROR] /opt/stack/devstack/lib/glance:634 g-api did not start" multiple times with the script breaking. Any suggestions? | 14:30 |
TheJulia | anshul: I'd look at the glance logs, to be honest | 14:31 |
anshul | how would I do that? | 14:31 |
TheJulia | well, there should be a g-api.log, I think in ? /opt/stack/data/logs ? | 14:32 |
TheJulia | if you cd /opt/stack and run a command like "find ./|grep g-api" | 14:32 |
TheJulia | it should help you find it | 14:32 |
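The log hunt suggested above, sketched out. The paths are the usual devstack locations but may differ per setup; the apache directories cover the wsgi case mentioned just below:

```shell
# Sketch: locate the glance API logs on a devstack host. Paths are the
# conventional devstack/apache locations and may vary by configuration.
find_glance_logs() {
  cd /opt/stack && find . | grep g-api        # devstack-managed logs
  ls /opt/stack/logs /opt/stack/data/logs 2>/dev/null
  # If glance is launched as a wsgi app, its output lands in apache logs:
  ls /var/log/apache2 /var/log/httpd 2>/dev/null
}
```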
anshul_ | It's showing this error in the error.log file: [ERROR] /opt/stack/devstack/lib/glance:634 g-api did not start | 14:35 |
TheJulia | depending on your configuration, it could be launching glance as a wsgi app | 14:36 |
TheJulia | It likely is, so you'll need to find your apache logs most likely | 14:36 |
anshul_ | what should i do to correct that? | 14:37 |
sylvr | TheJulia: destroying the volume and container didn't fix it; the nodes are still missing all the driver_info (not just the address). I'm going to retry something really quick (this time all ports are saved and kept) | 14:41 |
TheJulia | sylvr: was the database on that volume? | 14:42 |
anshul_ | TheJulia: how would I change it to not launch as a wsgi app / what should it ideally launch as? | 14:42 |
TheJulia | anshul_: I don't know, perhaps ask folks in #openstack-glance to see if they have any ideas? Ultimately you're using development tooling and we don't know what config you've used. | 14:43 |
anshul_ | got it, thanks! | 14:43 |
sylvr | TheJulia: well, I think so, I'll need to check to be sure | 14:57 |
sylvr | but I think it was. As I told you before, I didn't have this issue previously, and I used to destroy the volume between re-deployments to be sure | 14:57 |
sylvr | TheJulia: leaving for the day, if you want me to run some other test you can send me an email or on the bug report page :) thanks again and have a nice day ! | 15:13 |
*** bodgix2 is now known as bodgix | 15:40 | |
rpittau | good night! o/ | 16:11 |
opendevreview | Jay Faulkner proposed openstack/ironic stable/2024.1: noop change; testing ci https://review.opendev.org/c/openstack/ironic/+/924254 | 17:03 |
opendevreview | Jay Faulkner proposed openstack/ironic stable/2023.2: noop change; testing ci https://review.opendev.org/c/openstack/ironic/+/924255 | 17:05 |
opendevreview | Jay Faulkner proposed openstack/ironic stable/2023.1: noop change; testing ci https://review.opendev.org/c/openstack/ironic/+/924256 | 17:06 |
opendevreview | Jay Faulkner proposed openstack/ironic-lib master: noop change; testing ci https://review.opendev.org/c/openstack/ironic-lib/+/924264 | 17:15 |
opendevreview | Jay Faulkner proposed openstack/ironic-lib stable/2024.1: noop change; testing ci https://review.opendev.org/c/openstack/ironic-lib/+/924265 | 17:16 |
opendevreview | Jay Faulkner proposed openstack/ironic-lib stable/2023.2: noop change; testing ci https://review.opendev.org/c/openstack/ironic-lib/+/924266 | 17:16 |
opendevreview | Jay Faulkner proposed openstack/ironic-lib stable/2023.1: noop change; testing ci https://review.opendev.org/c/openstack/ironic-lib/+/924267 | 17:16 |
JayF | I'm obviously currently going on a CI rampage. Going to be doing whatever is necessary to get these branches green, including disabling jobs if they are not broken in a semi-obvious way. Tired of our stable branches being red :D | 17:16 |
opendevreview | Jay Faulkner proposed openstack/ironic-python-agent master: noop change; testing ci https://review.opendev.org/c/openstack/ironic-python-agent/+/924271 | 17:21 |
opendevreview | Jay Faulkner proposed openstack/ironic-python-agent stable/2024.1: noop change; testing ci https://review.opendev.org/c/openstack/ironic-python-agent/+/924272 | 17:22 |
opendevreview | Jay Faulkner proposed openstack/ironic-python-agent stable/2023.2: noop change; testing ci https://review.opendev.org/c/openstack/ironic-python-agent/+/924273 | 17:22 |
opendevreview | Jay Faulkner proposed openstack/ironic-python-agent stable/2023.1: noop change; testing ci https://review.opendev.org/c/openstack/ironic-python-agent/+/924274 | 17:22 |
JayF | Just noting, for folks who care about unmaintained, looks like we never got gitreview merged for yoga/zed ironic-lib and xena/wallaby/victoria IPA, I'm getting fresh logs and will work to repair | 17:25 |
JayF | commits to unmaintained branches don't come in here fwiw | 17:25 |
JayF | other than those, rest of my CI science is at https://review.opendev.org/q/topic:%22jayf-ci-test%22 | 17:26 |
frickler | JayF: you'd have to add unmaintained here if you want gerritbot notices about those https://opendev.org/openstack/project-config/src/branch/master/gerritbot/channels.yaml#L694-L697 | 17:59 |
JayF | I don't know if we want them or not tbh, just making the comment | 17:59 |
frickler | same ;) | 18:25 |
cid | o/ | 18:40 |
JayF | \o | 18:42 |
JayF | https://etherpad.opendev.org/p/ironic-ci-audit-july-2024 still waiting for IPA jobs to check in, but this seems to be the current state of things. | 21:35 |
JayF | I'm going to start with getting unmaintained branches created, then go on to the failures from most recent -> oldest | 21:36 |
JayF | https://review.opendev.org/c/openstack/ironic-lib/+/918257 is failing because the XFS version in our test image is too new for our test job to mount, seemingly | 21:40 |
JayF | I am highly perplexed how this worked on uefi-ipmi-src ... probably tinyipa image there? | 21:41 |
JayF | bingo | 21:41 |
JayF | Unless someone has a good reason otherwise, I'm going to edit the jobs to require tinyipa | 21:42 |
TheJulia | JayF: which job are you seeing fail that way? | 21:46 |
JayF | the linked patch | 21:46 |
JayF | 918257 | 21:46 |
JayF | one of the jobs scheduled to a cloud that required tinyipa; it passed | 21:46 |
JayF | the other failed with unable to mount the xfs filesystem because it had a too-new-feature | 21:47 |
JayF | the logs are in comments there | 21:47 |
TheJulia | oh, okay | 21:47 |
TheJulia | oh right, to extract the contents of the image | 21:48 |
TheJulia | yeah, that's fine imho | 21:48 |
JayF | if it was IPA or IPAB I'd have a little pause | 21:49 |
JayF | but for ironic-lib, it's there *to test ironic-lib* | 21:49 |
JayF | now, lets just ignore for a sec that IPA is almost certainly identically broken :P | 21:49 |
JayF | more things passing CI > less things passing CI in my book ;) | 21:49 |
TheJulia | joy | 21:50 |
JayF | real joy is realizing that yoga CI uses centos 8 | 21:50 |
JayF | which is dead | 21:50 |
TheJulia | kill the jobs with fire then that use it | 21:52 |
JayF | I think that's all the integration jobs on ir-lib | 21:53 |
TheJulia | can't they just be changed to tinycore? | 21:53 |
JayF | that's what I'm heading down | 21:53 |
JayF | https://opendev.org/openstack/ironic/src/branch/unmaintained/victoria/zuul.d/ironic-jobs.yaml#L653 it's in ironic-proper | 21:53 |
TheJulia | lolsob | 21:54 |
JayF | I'm fixin it though | 21:54 |
JayF | paint everything tinyipa color | 21:54 |
JayF | my bikeshed is painted the color of minimal, insecure linux distros /s | 21:54 |
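The "require tinyipa" edit being discussed boils down to a devstack setting in the job definitions. The variable name below is the one the ironic devstack plugin has historically used, but treat it as an assumption and check it against the actual zuul job vars (e.g. the ironic-jobs.yaml linked above) before relying on it:

```shell
# Sketch: pin the jobs to the tinyipa ramdisk via devstack_localrc.
# Variable name is an assumption; verify against the job definitions.
IRONIC_RAMDISK_TYPE=tinyipa
```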
JayF | Hmm. I think I was wrong re: centos8; I think it may only be unsupported from an opendev perspective. I've seen some centos8 builds succeed | 22:34 |
JayF | but a handful of them all getting DNS failures, so I think we might have gotten network-blipped | 22:35 |
JayF | I have some rechecks in and will see tomorrow | 22:35 |