janders | TheJulia yes | 02:39 |
janders | (sorry for slow response) | 02:40 |
TheJulia | janders: just wondering if can sync up/get an update on virtual media. Looks like hpe may have put in a url length limit | 03:59 |
TheJulia | at least, one that is pretty aggressively short | 03:59 |
janders | oh dear that doesn't sound like good news | 04:00 |
janders | I'm totally bogged down with an urgent lifecycle controller related issue, hopefully will get on top of that in a day or two | 04:01 |
janders | unfortunately I don't have much news on push-based vMedia - I haven't been able to find time to push it further, transition back from parental leave hasn't been exactly easy | 04:02 |
janders | will try to plan better for this one this quarter | 04:03 |
TheJulia | janders: ack, okay. No worries. I suspect we likely need to point out to vendors, at least regarding the URL, that there are standards defining a reasonable maximum URL length, and reasons for longer URLs in anything dealing with automation. | 04:06 |
TheJulia | Anyway, I’m going to likely try and get some sleep. | 04:06 |
janders | good night, sleep well | 04:06 |
janders | we'll chat more soon | 04:06 |
dtantsur | TheJulia: mmm, I see. but we still need to make it random, so that it cannot be guessed? (although, it can be opt-in for now to avoid backport issues) | 07:20 |
rpittau | good morning ironic! o/ | 07:26 |
rpittau | TheJulia: I've added the secure boot topic to Thursday since we still have some time | 07:31 |
drannou | Hello Ironic! I'm playing with soft raid in Ironic, and I'm surprised about something: during a recycling process, IPA does a shred, which costs a lot. In my use case, I'm recycling a host that has 2 NVMe and 1 SSD. They all support instant erase (so it should take a couple of seconds), yet in my case it took 1 hour for shred to run. Would it be better to just | 09:14 |
drannou | "delete the raid", "clean all devices" (with instant erase if the device supports it), and "re-create the raid"? | 09:14 |
iurygregory | good morning Ironic | 10:48 |
*** sfinucan is now known as stephenfin | 12:25 | |
dtantsur | drannou: yeah, it's a known omission.. | 12:34 |
dtantsur | drannou: you can enable delete_configuration in automated cleaning, then build the configuration at deploy time (via deploy templates / steps). I haven't tried it. | 12:35 |
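For readers, a minimal sketch of the first half of that suggestion, assuming the `[deploy]` priority overrides for the software RAID clean steps; the exact option names and values should be checked against your Ironic release:

```
# ironic.conf -- run the RAID deletion step during automated cleaning,
# but leave array creation to deploy time (deploy templates / steps).
[deploy]
delete_configuration_priority = 20
create_configuration_priority = 0
```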
dtantsur | CERN folks have experience with sw RAID, but they're hard to catch nowadays. | 12:35 |
drannou | dtantsur: Yeah, as an operator, it's pretty weird to have to first do a manual clean to activate MDADM, and then call provide for an automated clean to get the host into the available state | 12:36 |
drannou | As an operator, calling provide on a host should put the host in the required configuration | 12:37 |
dtantsur | Clean steps were invented long before deploy steps, so we still tend to lean towards the former where we should recommend the latter | 12:37 |
drannou | Right now I have to: activate the raid (node set --target-raid-config), do a manual clean to activate the raid, AND ask for a global clean (via "provide"); that's very weird. Moreover, if I don't do the manual clean, the provide will go on without error | 12:40 |
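A rough sketch of the workflow being described here, with an illustrative node name and RAID spec (exact client syntax may vary by release):

```
# 1. Declare the desired software RAID layout
baremetal node set <node> --target-raid-config \
  '{"logical_disks": [{"size_gb": "MAX", "raid_level": "1", "controller": "software"}]}'

# 2. Manual cleaning actually (re)builds the array
baremetal node manage <node>
baremetal node clean <node> --clean-steps \
  '[{"interface": "raid", "step": "delete_configuration"},
    {"interface": "raid", "step": "create_configuration"}]'

# 3. Automated cleaning, then back to "available"
baremetal node provide <node>
```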
drannou | it also means that if my final customer completely destroys the RAID, when they delete the instance, will it recreate the RAID? | 12:41 |
TheJulia | dtantsur: the value in the publisher being randomized and matched to the kernel command line, totally | 13:06 |
dtantsur | drannou: if you go down the deploy steps path, you can make sure the RAID is rebuilt every time. Otherwise, only if you do the manual cleaning dance each time. | 13:09 |
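A sketch of the deploy-steps path mentioned here, assuming the raid interface's apply_configuration deploy step and an illustrative trait name (CUSTOM_SW_RAID1); the priority value is arbitrary:

```
# Deploy template: rebuild the software RAID1 on every deployment
baremetal deploy template create CUSTOM_SW_RAID1 --steps \
  '[{"interface": "raid", "step": "apply_configuration", "priority": 90,
     "args": {"raid_config": {"logical_disks": [
       {"size_gb": "MAX", "raid_level": "1", "controller": "software"}]}}}]'

# Tag the node so the template can be matched
baremetal node add trait <node> CUSTOM_SW_RAID1
```

With something like this in place, the array is recreated as part of every deployment rather than only when the manual cleaning dance is repeated.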
dtantsur | TheJulia: nice ++ | 13:09 |
TheJulia | I really wish it were just the uuid field, but that seems to be hard mapped out to the creation time | 13:10 |
* TheJulia sips coffee | 13:10 | |
drannou | dtantsur: ok, can you confirm that the mindset is: clean is there to REMOVE things, and deploy will configure (so create things that are not there)? That's an important point for my ongoing SED development | 13:11 |
dtantsur | drannou: "cleaning" is a bad name in retrospective (better than the initially proposed "zapping" but still). It's more of what we used to call "ready state" at some point at RH. | 13:12 |
dtantsur | Deploy steps were meant to allow instance-specific tailoring. But the actual difference is vague. | 13:12 |
drannou | dtantsur: yeah, even more so with raid: why raid1 and not raid10? is it up to the admin to define that? | 13:15 |
drannou | but if it's the instance customer, how should they configure that with nova? | 13:15 |
drannou | no easy answer | 13:15 |
dtantsur | drannou: linking traits with deploy templates | 13:15 |
dtantsur | may or may not be easy depending on how you look | 13:16 |
drannou | for stock management, that's pretty hard | 13:16 |
TheJulia | drannou: could you elaborate on what you mean by "that" ? I ask because we're discussing a whole next evolution on deploy templates | 13:17 |
dtantsur | I don't disagree with that, but I believe we've exhausted all other options with Nova | 13:17 |
* dtantsur lets the actually knowledgeable person aka TheJulia drive the conversation further :) | 13:17 | |
TheJulia | oh, yeah, *that* is.... difficult since they want everything directly value discoverable | 13:17 |
* TheJulia will fondly remember the "we will never agree on any naming" forum session | 13:19 | |
drannou | dtantsur: that's hard: if, as a public cloud provider, you want to offer "some" raid management but have to "hard fix" the type of each host, how do you know whether your customers will need 10 raid1, 2 raid0, 5 raid10? | 13:20 |
TheJulia | oh, it doesn't have to be fixed hard per host | 13:20 |
TheJulia | But you have to establish some meaning behind flavors | 13:21 |
drannou | if you let your customer be able to say that during the spawn, that's easier :) | 13:21 |
TheJulia | you publicize/detail those out, and then you map those flavors to have a trait which matches a deployment template pre-configured in ironic. | 13:21 |
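A sketch of the flavor-to-trait mapping described here, reusing the illustrative CUSTOM_SW_RAID1 trait/deploy template from above; the flavor and resource class names are placeholders:

```
# Baremetal flavor that requires the trait; scheduling lands the instance on a
# tagged node and the matching deploy template applies the RAID config.
openstack flavor create baremetal-raid1 --ram 1 --disk 1 --vcpus 1
openstack flavor set baremetal-raid1 \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0 \
  --property resources:CUSTOM_BAREMETAL_GENERAL=1 \
  --property trait:CUSTOM_SW_RAID1=required
```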
drannou | but well we are too far, for the moment I need to be sure that if I tag a host as raid1, the host is in raid1 during the deploy :) | 13:22 |
TheJulia | yes, except scheduling needs to be acutely aware of the fine details behind the ask, and what that translates out to, to see if it is possible. | 13:22 |
TheJulia | And you *can* kind of do that with traits today, but you have to articulate the meaning, if that makes sense | 13:23 |
TheJulia | There is a whole aspect of quantitative versus qualitative traits to the backstory though. Hardware folks are generally seeking quantitative matching because we're generally in worlds of absolutes, which doesn't entirely align with a virtual machine in a hypervisor because, while you are functionally restricted by the limits of the hardware, that is your upper bound and you're not given the whole machine outright. | 13:47 |
opendevreview | Mohammed Boukhalfa proposed openstack/sushy-tools master: Add fake_ipa inspection, lookup and heartbeater to fake system https://review.opendev.org/c/openstack/sushy-tools/+/875366 | 14:02 |
Continuity | drannou: we decided we would force our customers to have a RAID1 on the first 2 disks of a certain size, for that very reason. It was hard to allow the customer to choose, or to provide loads of flavours with it preloaded. | 14:27 |
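For reference, a policy along those lines can be approximated with physical_disks hints in the target RAID config; a sketch with illustrative device names (size-based hints are also possible):

```
# Pin the software RAID1 to two specific devices
baremetal node set <node> --target-raid-config \
  '{"logical_disks": [{"size_gb": "MAX", "raid_level": "1", "controller": "software",
     "physical_disks": [{"name": "/dev/nvme0n1"}, {"name": "/dev/nvme1n1"}]}]}'
```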
TheJulia | Continuity: I've heard similar from folks | 14:27 |
Continuity | To be honest, with NVMe and the longevity of the drives, I'm starting to wonder if SW Raid is even *needed*. Not having that resilience makes the operational part of my mind wince.... | 14:28 |
TheJulia | heh | 14:29 |
Continuity | But it's a toss-up between ease of management and protection of the customers' service. | 14:29 |
TheJulia | Yeah | 14:29 |
TheJulia | Disks failing, ages ago, was largely the spinny parts | 14:29 |
Continuity | As long as we don't shred the drive every time it cleans :D | 14:29 |
TheJulia | "whiirrrrrrrrrrl.... knock knock knock..... whiirrrrrrrrrrlllll" | 14:29 |
Continuity | Love that noise... | 14:30 |
TheJulia | Yeah | 14:30 |
TheJulia | Continuity: I ran one of the largest image farms of its time of *just* scanned documents in litigation back in the 2000s, and sooo many platters that came out of our data center stripped of the magnetic coating from head crashes got hung on the walls | 14:32 |
Continuity | Computing has lost something since the removal of spinning parts.... | 14:33 |
Continuity | It's not as visceral | 14:33 |
Continuity | Now its just noisy fans | 14:33 |
TheJulia | largely, yes | 14:33 |
iurygregory | is it just me, or is what we recommend in https://opendev.org/openstack/bifrost/src/branch/master/doc/source/user/troubleshooting.rst#obtaining-ipa-logs-via-the-console not supported? I've found https://opendev.org/openstack/ironic/src/branch/master/releasenotes/notes/remove-DEPRECATED-options-from-%5Bagent%5D-7b6cce21b5f52022.yaml | 14:33 |
TheJulia | oh, the docs need to be fixed, kernel_append_params ? | 14:34 |
iurygregory | I think it would be the case | 14:47 |
iurygregory | the person who found it will submit a patch =) | 14:47 |
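For reference, the non-deprecated route to IPA logs on the console goes through the kernel_append_params options; a sketch of what the fixed docs would presumably point at (the config section can differ by boot interface, and the console device is illustrative):

```
# ironic.conf -- append console/debug parameters to the deploy ramdisk kernel
[pxe]
kernel_append_params = nofb vga=normal console=tty0 console=ttyS0,115200n8 ipa-debug=1
```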
dtantsur | TheJulia: you may want to jump on https://review.opendev.org/c/openstack/ironic-specs/+/912050 real quick if you want to prevent `[driver]verify_ca` in favour of `[conductor]bmc_verify_ca` | 14:50 |
dtantsur | I'm fine either way tbh | 14:50 |
opendevreview | Mohammed Boukhalfa proposed openstack/sushy-tools master: Add fake_ipa inspection, lookup and heartbeater to fake system https://review.opendev.org/c/openstack/sushy-tools/+/875366 | 15:17 |
dtantsur | meanwhile, looking for a 2nd review https://review.opendev.org/c/openstack/ironic/+/914972 | 15:20 |
opendevreview | Riccardo Pittau proposed openstack/ironic master: Fix redifsh detach generic vmedia device method https://review.opendev.org/c/openstack/ironic/+/914978 | 15:25 |
opendevreview | Riccardo Pittau proposed openstack/ironic master: Fix redifsh detach generic vmedia device method https://review.opendev.org/c/openstack/ironic/+/914978 | 15:26 |
TheJulia | dtantsur: the proposal seems to work for me | 15:36 |
dtantsur | ack, thanks for checking | 15:38 |
rpittau | good night! o/ | 16:06 |
opendevreview | Julia Kreger proposed openstack/ironic-tempest-plugin master: Unprovision iso ramdisk boot from test https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/914980 | 18:16 |
opendevreview | Steve Baker proposed openstack/ironic-python-agent master: Step to clean UEFI NVRAM entries https://review.opendev.org/c/openstack/ironic-python-agent/+/914563 | 21:24 |
opendevreview | Julia Kreger proposed openstack/ironic master: Inject a randomized publisher id https://review.opendev.org/c/openstack/ironic/+/915022 | 21:31 |
TheJulia | dtantsur: Take a look at ^ and let me know. It excludes the irmc driver's own direct calls (why?!) to the cd image generation, which I'm sort of split on | 21:34 |
opendevreview | Jacob Anders proposed openstack/sushy-oem-idrac master: [DNM] Wait for BIOS configuration job to complete https://review.opendev.org/c/openstack/sushy-oem-idrac/+/915092 | 22:31 |
janders | ( /me hacking on sushy-oem trying to get around the "one LC job at a time" issue that breaks adjusting BIOS settings) | 22:32 |
opendevreview | Jacob Anders proposed openstack/sushy-oem-idrac master: [DNM] Wait for BIOS configuration job to complete https://review.opendev.org/c/openstack/sushy-oem-idrac/+/915092 | 22:33 |