janders | ajya dtantsur looking at scrollback and your SecureBoot conversation from yesterday - wanted to confirm whether it has any impact on https://review.opendev.org/c/openstack/sushy/+/856597, or are you looking only at what happens after this code has done its part? My understanding is that your discussion is out of scope for this patch, but just making sure. TY! | 00:14 |
---|---|---|
ajya | janders: not related | 06:52 |
janders | ajya thank you! :) | 07:00 |
janders | If you have time, I'd appreciate re-review of https://review.opendev.org/c/openstack/sushy/+/856597 - once we are happy with it I will re-test on the hardware it's supposed to fix. Users are chasing me a bit about it. | 07:01 |
ajya | janders: will do, I was testing it with my hardware, need to have another test for Legacy/UEFI field. | 07:35 |
janders | ajya thank you, I really appreciate this! | 07:57 |
janders | let me know if any further code/unittest changes are needed and I will try to sort them out ASAP | 07:58 |
kubajj | Good morning Ironic! | 09:07 |
kubajj | What are these Dell EMC Ironic CI running on changes to Ironic? | 09:47 |
ajya | kubajj: 3rd party CI; there are some from other vendors too. | 09:48 |
kubajj | ajya: I see. Thanks. Is there any summary other than the job-output text files that I could use to figure out what went wrong? | 09:51 |
ajya | kubajj: not at the moment, though one day maybe we will expose the Zuul web UI, same as the upstream CI. Still, job-output.txt should contain the info, just with a different UI, or the lack of one. | 09:55 |
ajya | kubajj: can you share the patch with an error? | 09:55 |
rpittau | good morning ironic! o/ | 10:03 |
opendevreview | Riccardo Pittau proposed openstack/sushy stable/zed: Increase server side retries https://review.opendev.org/c/openstack/sushy/+/864037 | 10:07 |
opendevreview | Riccardo Pittau proposed openstack/sushy stable/yoga: Increase server side retries https://review.opendev.org/c/openstack/sushy/+/864038 | 10:07 |
opendevreview | Riccardo Pittau proposed openstack/sushy stable/wallaby: Increase server side retries https://review.opendev.org/c/openstack/sushy/+/864039 | 10:07 |
kubajj | ajya: this is the change https://review.opendev.org/c/openstack/ironic/+/862569 | 10:07 |
opendevreview | Riccardo Pittau proposed openstack/sushy stable/xena: Increase server side retries https://review.opendev.org/c/openstack/sushy/+/864040 | 10:07 |
ajya | kubajj: the failures are not related to your patch; there are some intermittent issues with the environment that we are aware of and working on. I don't think this patch can break anything on Dell hardware if other tests are passing. | 10:16 |
ajya | dtantsur: I wasn't able to confirm that setting secure boot clears the boot device settings, so I still can't reproduce it. Any chance of getting Ironic logs for the failures? | 10:17 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: [WIP] [PoC] A metal3 CI job https://review.opendev.org/c/openstack/ironic/+/863873 | 10:46 |
rmart04_ | Morning All. Does anyone know if it's possible/safe to update the nova_host_id in order to move provisioned Ironic machines to a new nova-compute-ironic instance? | 10:48 |
opendevreview | Riccardo Pittau proposed openstack/sushy master: [WIP] Make server connection retries configurable https://review.opendev.org/c/openstack/sushy/+/864102 | 10:56 |
rpittau | ^ ajya janders dtantsur JayF when you have a moment: trying to make the server retries configurable in sushy :) | 10:58 |
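The idea behind the sushy patch above is to turn a hard-coded retry count for server-side errors into a configurable knob. A minimal stdlib sketch of that mechanism, with illustrative parameter names (not sushy's actual configuration options):

```python
import time


def call_with_retries(func, max_retries=5, backoff=0.0,
                      retry_on=(ConnectionError,)):
    """Retry func() on the given exceptions up to max_retries extra times.

    Parameter names are illustrative only; sushy wraps similar logic
    around its Redfish HTTP requests when the BMC replies with
    server-side errors.
    """
    attempt = 0
    while True:
        try:
            return func()
        except retry_on:
            attempt += 1
            if attempt > max_retries:
                raise
            # Linear back-off between attempts; real clients often
            # prefer exponential back-off.
            time.sleep(backoff * attempt)
```

Making `max_retries` a caller-supplied argument is the whole point of the change: operators with slow or flaky BMCs can raise it without patching the library.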
dtantsur | rmart04_: I quite doubt that, arne_wiebalck may know for sure | 11:00 |
opendevreview | Aija Jauntēva proposed openstack/ironic master: Create 'redfish' driver Redfish Interop Profile https://review.opendev.org/c/openstack/ironic/+/754061 | 11:17 |
TheJulia | Aww rmart04_ left :( | 11:28 |
iurygregory | good morning Ironic | 11:30 |
janders | hey iurygregory o/ | 11:38 |
janders | ajya w/r/t https://review.opendev.org/c/openstack/sushy/+/856597/comments/1a667696_e60a2a0c would you be happy with https://paste.openstack.org/show/817502/ (added lines 10-14)? | 11:39 |
janders | I suppose it may be a good idea to remove all the quotes from the sample error and wrap the entirety in double ticks (``) | 11:40 |
kubajj | dtantsur: TheJulia: or anybody who has a moment, https://review.opendev.org/c/openstack/ironic/+/862569 should be ready for review 🥲 | 11:40 |
dtantsur | w00t! putting it on my queue for today | 11:40 |
opendevreview | Jacob Anders proposed openstack/sushy master: Fix setting boot related attributes https://review.opendev.org/c/openstack/sushy/+/856597 | 11:45 |
opendevreview | Jacob Anders proposed openstack/sushy master: Fix setting boot related attributes https://review.opendev.org/c/openstack/sushy/+/856597 | 11:50 |
janders | ^ bloody spaces & tabs - couldn't see the mixup in my vim. Should be good now :) | 11:51 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: [WIP] [PoC] A metal3 CI job https://review.opendev.org/c/openstack/ironic/+/863873 | 12:05 |
dtantsur | kubajj: reviewed, a couple of issues found, but otherwise great job! | 12:20 |
kubajj | dtantsur: thanks, I'll have a look at them in the afternoon | 12:23 |
opendevreview | Dmitry Tantsur proposed openstack/ironic bugfix/21.0: Fix the invalid glance client test https://review.opendev.org/c/openstack/ironic/+/864042 | 12:26 |
opendevreview | Dmitry Tantsur proposed openstack/ironic stable/zed: Fix the invalid glance client test https://review.opendev.org/c/openstack/ironic/+/864043 | 12:26 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: [WIP] [PoC] A metal3 CI job https://review.opendev.org/c/openstack/ironic/+/863873 | 13:46 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: [WIP] [PoC] A metal3 CI job https://review.opendev.org/c/openstack/ironic/+/863873 | 13:50 |
opendevreview | Merged openstack/ironic bugfix/21.0: Fix the invalid glance client test https://review.opendev.org/c/openstack/ironic/+/864042 | 14:04 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: [WIP] [PoC] A metal3 CI job https://review.opendev.org/c/openstack/ironic/+/863873 | 14:36 |
JayF | I'm going to be out today on PTO; if you need me urgently send a DM or something and I'll check it periodically. | 14:53 |
kubajj | dtantsur: about the last comment you left - I looked at deployment.py and it implements get_by_node_uuid in exactly the same way. Is that one wrong as well? (I am not really sure how the objects work yet, so I just used deployment for inspiration) | 15:06 |
dtantsur | kubajj: it can easily be wrong: deployments are not currently used anywhere | 15:09 |
kubajj | dtantsur: would anything similar to this make sense instead of what I have there now? https://paste.opendev.org/show/bCeFPoiwdS9RD5TgadbL/ | 15:20 |
dtantsur | kubajj: not sure. I mean, this is correct code, but chances are very high the calling layer (the API) will already have a node object loaded. | 15:21 |
dtantsur | maybe we should limit this change to only getting by node_id for now. | 15:21 |
kubajj | dtantsur: ok, I'll remove this | 15:22 |
opendevreview | Merged openstack/ironic stable/zed: Fix the invalid glance client test https://review.opendev.org/c/openstack/ironic/+/864043 | 15:28 |
opendevreview | Jakub Jelinek proposed openstack/ironic master: Implements node inventory: database https://review.opendev.org/c/openstack/ironic/+/862569 | 15:31 |
arne_wiebalck | rmart04_: not sure I fully get your question, but even moving uninstantiated nodes between nova computes across conductor groups caused issues for us | 16:00 |
TheJulia | arne_wiebalck: I think they are thinking of editing ComputeNode.host and Instance.host… | 16:01 |
arne_wiebalck | TheJulia: thanks ... I cannot answer with certainty, but would with certainty advise being careful :) | 16:14 |
arne_wiebalck | TheJulia: how would that affect alloactions/placement? | 16:15 |
arne_wiebalck | *allocations | 16:15 |
rpittau | good night! o/ | 17:06 |
arne_wiebalck | bye everyone o/ | 17:46 |
TheJulia | arne_wiebalck: I don’t think it really *would* since it is by computenode | 17:47 |
TheJulia | As long as things match, for the next loop…. Should be okay. But… nova-compute cannot be running | 17:48 |
*** dking is now known as Guest978 | 20:18 |
Guest978 | I'm having some trouble cleaning a node that has software RAID configured. It looks like the only way to remove an existing software RAID is via the 'raid' interface, and not during the normal 'deploy' clean steps? | 20:27 |
Guest978 | What is the proper way to allow software RAID to be removed as part of a normal clean step, or perhaps what could I check to see what I'm doing wrong? | 20:55 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!