TheJulia | NobodyCam: if memory serves we send a power off command | 00:35 |
---|---|---|
TheJulia | in the standby handler | 00:35 |
TheJulia | err | 00:35 |
TheJulia | extension | 00:35 |
NobodyCam | ++++ | 00:35 |
NobodyCam | thank you@ | 00:35 |
NobodyCam | thank you! | 00:35 |
holtgrewe | What's the best way of getting a separate partition for /tmp on an ironic bare metal server managed via nova? | 07:03 |
arne_wiebalck | Good morning, Ironic! | 07:29 |
arne_wiebalck | holtgrewe: A single disk on which you have a partition for /tmp? | 07:30 |
arne_wiebalck | holtgrewe: Or a second disk which should become /tmp? | 07:30 |
holtgrewe | arne_wiebalck: I have a single hardware SSD (actually hardware RAID1) that needs a separate XFS partition for /tmp. I already figured out how to add an EFI partition to the stock CentOS 7.9 qcow2. | 07:31 |
holtgrewe | If I understand correctly, I could "somehow" add a partition to the qcow2 and then "somehow" make it grow together with the root partition. | 07:32 |
holtgrewe | Or I could "somehow" use cloud-init to create the partition. I'm looking for some guidance on best practice and pointers to manuals on where to look to support this. | 07:32 |
arne_wiebalck | holtgrewe: if it is a single disk, making it part of the image is maybe easiest ... do you build the image yourself ? | 07:36 |
holtgrewe | arne_wiebalck: I've taken the centos.org provided qcow2 image of centos7.9 and subjected it to "disk-image-create -o centos7-uefi block-device-efi dhcp-all-interfaces centos7" | 07:37 |
rpittau | good morning ironic! o/ | 07:42 |
dtantsur | morning ironic | 07:43 |
dtantsur | holtgrewe: we recommend people to use cloud-init indeed | 07:43 |
dtantsur | with DIB you can specify the exact partition layout. if you leave the root partition to go the last, cloud-init will be able to grow it to occupy the whole disk | 07:44 |
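For reference, a minimal sketch of a custom DIB block-device layout along those lines, passed to disk-image-create via the `DIB_BLOCK_DEVICE_CONFIG` environment variable. Partition names and sizes here are illustrative, and the syntax is reconstructed from the diskimage-builder block-device docs; the root partition is deliberately last so cloud-init/growpart can stretch it:

```yaml
- local_loop:
    name: image0
- partitioning:
    base: image0
    label: gpt
    partitions:
      # EFI system partition, as the block-device-efi element would create
      - name: ESP
        type: 'EF00'
        size: 550MiB
        mkfs:
          type: vfat
          mount:
            mount_point: /boot/efi
            fstab:
              options: "defaults"
              fsck-passno: 2
      # dedicated XFS /tmp, per holtgrewe's use case (size illustrative)
      - name: tmp
        type: '8300'
        size: 2GiB
        mkfs:
          type: xfs
          mount:
            mount_point: /tmp
            fstab:
              options: "defaults"
              fsck-passno: 2
      # root goes last so cloud-init can grow it to fill the disk
      - name: root
        type: '8300'
        size: 100%
        mkfs:
          type: xfs
          mount:
            mount_point: /
            fstab:
              options: "defaults"
              fsck-passno: 1
```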
holtgrewe | dtantsur: OK, thanks | 07:45 |
arne_wiebalck | dtantsur: something to add to our docs, like "common use cases" ? ^^ | 07:50 |
dtantsur | arne_wiebalck: there should definitely be some words on partitioning and cloud-init somewhere.. | 07:50 |
arne_wiebalck | dtantsur: cloud-init is mentioned in some places (e.g for portgroups), but I couldn't find "how to create and mount an additional partition" | 07:54 |
dtantsur | if you have experience with it, would be awesome to update the docs | 07:55 |
arne_wiebalck | dtantsur: we do the partitioning via the image and use cloud-init to do some extra things, e.g. mounting an md device | 08:00 |
arne_wiebalck | holtgrewe: would you be willing to summarise what needed to be done once you're finished? | 08:00 |
holtgrewe | arne_wiebalck: I'll try. | 08:04 |
arne_wiebalck | holtgrewe: cool, ty! | 08:04 |
holtgrewe | I just discovered that I will need mdadm in one way or another as my machines don't have hardware RAID controllers... | 08:05 |
*** amoralej|off is now known as amoralej | 08:16 | |
opendevreview | Riccardo Pittau proposed openstack/bifrost master: Install pip package in dib based images https://review.opendev.org/c/openstack/bifrost/+/821223 | 08:17 |
arne_wiebalck | holtgrewe: for s/w RAID, we should have docs :) | 08:38 |
holtgrewe | arne_wiebalck: yes, I found them. I'm just exploring how to use them for my purpose. | 08:39 |
janders | good morning holtgrewe arne_wiebalck dtantsur and Ironic o/ | 09:12 |
arne_wiebalck | hey janders, good morning! o/ | 09:12 |
holtgrewe | Good morning. New error. "Failure prepping block device" / "ironicclient.common.apiclient.exceptions.BadRequest: Unable to attach VIF <UUID>, not enough free physical ports." Now how did I manage to cause that? | 09:20 |
opendevreview | Arne Wiebalck proposed openstack/ironic-python-agent master: Burn-in: Dynamic network pairing https://review.opendev.org/c/openstack/ironic-python-agent/+/821244 | 09:26 |
arne_wiebalck | dtantsur: rpittau: "make tooz optional" ^^: I was probably overthinking things, sorry :) What you suggested is to have an [extras] section in setup.cfg to have tooz installed only when it is requested, e.g. in IPAB. Is that correct? The change now has tooz as an optional package, and in test-requirements. | 09:29 |
rpittau | arne_wiebalck: that should be fine, that makes the entire burnin optional, is that intended? | 09:40 |
arne_wiebalck | rpittau: hmm ... no :) | 09:42 |
arne_wiebalck | rpittau: does the group name carry any meaning? I thought it was a free name to pick? | 09:43 |
arne_wiebalck | rpittau: what I would like to do is make tooz optional, so it can be pulled in if needed | 09:44 |
* arne_wiebalck seems like I do not understand how it works :) | 09:44 | |
rpittau | arne_wiebalck: apologies, I expressed myself poorly, I mean that burnin is not a mandatory feature now, not that it's not usable by default | 09:45 |
rpittau | with that change tooz will be installed if required | 09:45 |
rpittau | but still needs to be installed for any use of burnin | 09:46 |
arne_wiebalck | since burnin imports tooz | 09:46 |
rpittau | yep | 09:47 |
arne_wiebalck | is that what we want or do we want sth else? | 09:48 |
rpittau | if we want burnin to always work (and that's my understanding reading the story) then I think we should actually keep tooz in the requirements and just leave its drivers as optional (kazoo, etcd, etc...) | 09:56 |
arne_wiebalck | oh, ok! | 09:57 |
rpittau | it's one more req for a feature, seems fair, but let's see what others think, maybe there's an alternative | 09:57 |
arne_wiebalck | I guess the group should also be more specific then | 09:58 |
arne_wiebalck | the group name | 09:58 |
arne_wiebalck | how would we add another driver like etcd | 09:59 |
arne_wiebalck | ? | 09:59 |
arne_wiebalck | different group within the extra section? | 10:00 |
rpittau | I'm talking about tooz drivers, that depends on how tooz is installed https://docs.openstack.org/tooz/latest/user/drivers.html | 10:01 |
rpittau | that's why I think it should be addressed in the ipa-builder | 10:01 |
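The tooz backends are themselves packaged as optional extras of tooz, so a specific driver can be pulled in at image-build time; something like the following, with extra names per the tooz docs linked above:

```bash
pip install tooz[zookeeper]   # or tooz[etcd3], tooz[memcached], ...
```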
arne_wiebalck | so, when you say "optional" you mean nothing in the IPA itself for the drivers? | 10:04 |
arne_wiebalck | I was thinking having the drivers in a extras section would ease the installation, no? | 10:04 |
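As context for the discussion, a sketch of what such an [extras] section in IPA's setup.cfg could look like; the group name "burnin-network" and the version pin are hypothetical, not what was eventually merged:

```ini
[extras]
# hypothetical extras group; pbr reads this section and exposes it
# as pip install ironic-python-agent[burnin-network]
burnin-network =
    tooz>=2.7.0
```

Consumers such as ironic-python-agent-builder would then opt in with `pip install ironic-python-agent[burnin-network]`.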
opendevreview | Arne Wiebalck proposed openstack/ironic-python-agent master: Burn-in: Dynamic network pairing https://review.opendev.org/c/openstack/ironic-python-agent/+/821244 | 10:14 |
arne_wiebalck | rpittau: ^^ latest suggestion :) | 10:15 |
holtgrewe | I'm getting a weird error about raid configuration. The physical disk parameter is supposed to be a list of device hints? https://paste.openstack.org/show/811718/ | 10:19 |
holtgrewe | ah, I probably need to specify the controller... | 10:19 |
*** sshnaidm|afk is now known as sshnaidm | 10:54 | |
opendevreview | wangjing proposed openstack/ironic master: add Debian for doc https://review.opendev.org/c/openstack/ironic/+/821958 | 11:02 |
*** amoralej is now known as amoralej|lunch | 11:59 | |
janders | see you on Monday Ironic o/ (I'm off tomorrow) | 12:42 |
janders | Have a great weekend everyone | 12:42 |
rpittau | bye janders :) | 12:59 |
holtgrewe | arne_wiebalck, dtantsur: I have to correct myself. I actually need to write to software RAID-1. When using the following raid configuration, I get /dev/sd{a,b}1 with linux_raid_member configuration. However, if I understand correctly it would be simpler to have an mdadm-based device and write the image to that with cloud-init. Or is this unsupported? How do I proceed here in general? Is | 13:06 |
holtgrewe | there some documentation on that? | 13:06 |
*** amoralej|lunch is now known as amoralej | 13:12 | |
arne_wiebalck | holtgrewe: AFAIU, cloud-init is used to configure the instance upon its first boot (not to write the image to a device in the first place). | 13:24 |
arne_wiebalck | holtgrewe: so, the sequence (we use anyway) is: configure the RAID with Ironic, create an instance, have cloud-init finish up the instance | 13:25 |
arne_wiebalck | holtgrewe: this is still about having a separate /tmp (now on top of a s/w RAID)? | 13:26 |
arne_wiebalck | holtgrewe: if so, I would configure the s/w RAID in Ironic with two md devices, have the first one for the system, and prepare/mount the second one via cloud-init | 13:27 |
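A sketch of the target_raid_config arne_wiebalck describes, following the Ironic software RAID docs (sizes illustrative): two RAID-1 md devices on the same disks, the first for the system image, the second taking the remaining space for cloud-init to format and mount:

```json
{
  "logical_disks": [
    {
      "size_gb": 100,
      "raid_level": "1",
      "controller": "software"
    },
    {
      "size_gb": "MAX",
      "raid_level": "1",
      "controller": "software"
    }
  ]
}
```

This is set with `openstack baremetal node set <node> --target-raid-config <file>` and applied during cleaning; the image lands on the first device, and cloud-init can then mkfs/mount the second (its runtime name may vary, e.g. /dev/md1 vs /dev/md126, depending on the distro).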
opendevreview | Merged x/sushy-oem-idrac master: Re-add python 3.6/3.7 in classifier https://review.opendev.org/c/x/sushy-oem-idrac/+/821672 | 13:44 |
holtgrewe | arne_wiebalck: OK, I think I now understand. The RAID configuration created /dev/sd{a,b}1 that I can `mdadm --assemble` into a /dev/md0. | 14:04 |
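For reference, the manual assembly step being described (device names illustrative); on a properly configured image, udev/mdadm normally does this automatically, as noted in the next message:

```bash
mdadm --assemble --scan                          # use the RAID metadata on the members
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1    # or name everything explicitly
```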
arne_wiebalck | holtgrewe: that assembly should happen automatically (if the image is configured correspondingly) | 14:06 |
holtgrewe | arne_wiebalck: thanks, is there anything to look out for when using UEFI? The documentation talks about an EFI partition on the "holder devices". | 14:06 |
arne_wiebalck | holtgrewe: depending on the boot mode in Ironic for this node, Ironic will prepare things as needed | 14:07 |
arne_wiebalck | holtgrewe: the main thing to look out for is that the image you install is UEFI capable | 14:07 |
arne_wiebalck | holtgrewe: as I think dtantsur pointed out earlier this is not the case for all | 14:08 |
arne_wiebalck | holtgrewe: this may help https://techblog.web.cern.ch/techblog/post/bios_uefi_cloud_image/ | 14:09 |
holtgrewe | arne_wiebalck: thanks, I believe I now have appropriate images built... I'll just give it a try. For all of these things related to hardware, there are such big delays on reboot and machine startup (at least with Dell servers) that sometimes I'm over-reading the documentation rather than just trying things out... | 14:11 |
holtgrewe | Anyway, I really appreciate your support and all of the heroically hard work that the ironic community has put into this. | 14:11 |
arne_wiebalck | holtgrewe: :) | 14:31 |
opendevreview | Dmitry Tantsur proposed openstack/ironic master: Derive FakeHardware from GenericHardware https://review.opendev.org/c/openstack/ironic/+/821984 | 14:32 |
holtgrewe | arne_wiebalck: So now I end up with a node that has /dev/sd{a,b} configured properly for mdadm and the mdadm RAID1 contains the EFI partition, /boot and then / in three partitions. | 15:25 |
holtgrewe | arne_wiebalck: it turns out that Dell UEFI at least is not able to boot from that | 15:25 |
holtgrewe | arne_wiebalck: I'll look into your blog now, probably my image is not good enough | 15:26 |
arne_wiebalck | holtgrewe: yeah, you need an UEFI capable image | 15:26 |
TheJulia | This morning I got to wake up to a message with someone pointing me to https://www.youtube.com/watch?v=L0JqO35EepM | 15:33 |
TheJulia | Also, good morning everyone | 15:33 |
rpittau | good morning TheJulia :) | 15:33 |
holtgrewe | arne_wiebalck: d'oh ... and here I was thinking that I had such an image... I see that you at CERN have a fancy custom image builder. It's a bit harder to follow when the best tool in your box is disk-image-create/DIB. Does CERN publish their qcow2 images anywhere? | 15:43 |
arne_wiebalck | holtgrewe: the Linux team creates images via kickstart and snapshots, true | 15:45 |
arne_wiebalck | holtgrewe: *the Linux team here | 15:45 |
arne_wiebalck | holtgrewe: I tried to sell them disk image builder, but that did not work :) | 15:45 |
arne_wiebalck | holtgrewe: I don't think we publish our images | 15:46 |
holtgrewe | arne_wiebalck: well, they know what they are doing. I'm my own Linux team and I'm probably hopeless. :-D | 15:46 |
arne_wiebalck | holtgrewe: check the ubuntu cloud images, they are usually UEFI aware | 15:46 |
arne_wiebalck | holtgrewe: (I think) | 15:46 |
holtgrewe | OK, will do... tomorrow. Time to call it a day. It's really ... interesting ... how much time one can spend waiting for machines to reboot. | 15:47 |
arne_wiebalck | heh, yeah, see you tomorrow then! | 15:47 |
holtgrewe | anyway, thanks for all your support and input, I'll get my machines to submit to my will in the end | 15:48 |
* TheJulia is curious what OS was being tried | 15:48 | |
holtgrewe | CentOS 7.9 | 15:48 |
TheJulia | uhh yeah | 15:48 |
TheJulia | the stock centos images don't have the uefi bits installed | 15:48 |
TheJulia | you need to use diskimage-builder and basically tell it to make you a UEFI image | 15:48 |
holtgrewe | Although it looks like I would be able to move to RockyLinux 8.5 by Monday as I finally have the upgrade plans for the not-loved GPFS based storage system here. | 15:48 |
* TheJulia has not heard anyone mention GPFS in *ages* | 15:49 | |
holtgrewe | But I think RockyLinux qcow2 images don't have UEFI either. | 15:49 |
holtgrewe | It's called spectrumscale now :-D | 15:49 |
TheJulia | holtgrewe: ahh | 15:49 |
holtgrewe | I inherited a system from a company and it's nothing but trouble. | 15:49 |
holtgrewe | "from a company with 3 letters starting with DDN" | 15:50 |
TheJulia | holtgrewe: their uefi stuff is likely not signed, but it would be fairly easy to make it uefi-worthy by copying /boot/efi from a centos machine *whistles innocently* | 15:50 |
TheJulia | I looked at it to store like a billion single page tiffs like 15 years ago | 15:50 |
TheJulia | and it failed horribly | 15:50 |
holtgrewe | I tried `disk-image-create -o centos-7.9-uefi block-device-efi dhcp-all-interfaces centos7` but that image is not uefi bootable apparently. | 15:51 |
TheJulia | holtgrewe: try adding vm and bootloader elements to that | 15:51 |
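Putting TheJulia's suggestion together, the build command would look something like this (an untested sketch; the vm element pulls in the bootloader element as a dependency):

```bash
disk-image-create -o centos-7.9-uefi \
    vm block-device-efi dhcp-all-interfaces centos7
```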
holtgrewe | TheJulia: yeah... well I already have the replacement machines mounted, all filled up with NVMe drives and ready for a Ceph installation. It will take work tuning that but at least an upgrade does not require interacting with 5 people in 3 time zones through a salesforce support forum they brand community.company.com. | 15:52 |
TheJulia | *sigh* | 15:52 |
holtgrewe | TheJulia: Will try tomorrow! Thanks everyone, you're great. | 15:52 |
TheJulia | have a good evening holtgrewe | 15:52 |
opendevreview | Merged openstack/ironic master: Update RAID docs https://review.opendev.org/c/openstack/ironic/+/821782 | 15:58 |
jingvar | \o | 16:00 |
jingvar | Is user-data not supported by Bifrost? | 16:01 |
jingvar | ~/bifrost$ grep -r user-data gives an empty response | 16:04 |
dtantsur | good morning TheJulia | 16:20 |
dtantsur | jingvar: seems like no (surprised myself) | 16:21 |
dtantsur | I think bifrost allows you to pre-generate a configdrive | 16:21 |
dtantsur | but generally this code could use updating | 16:21 |
dtantsur | TheJulia: I think https://ironicbaremetal.org/ needs updating now | 16:33 |
jingvar | yep, and no LVM support or partitioning, no linux-firmware in the cloud image, a 2GB docker image - it's like it contains the whole world, yet the simple things are missing :) | 16:34 |
TheJulia | diskimage-builder can be your friend for making these things easier | 16:34 |
dtantsur | yeah, I'm also annoyed that linux-firmware is often removed from cloud images | 16:34 |
TheJulia | dtantsur: in terms of just reviewing/approving the proposed pull requests? | 16:34 |
jingvar | it is a very old tool | 16:34 |
dtantsur | we have to explicitly re-add it in our IPA builds | 16:34 |
dtantsur | TheJulia: dunno, I just noticed the old version :) | 16:35 |
TheJulia | ahh, riccardo asked me to review some prs yesterday | 16:35 |
jingvar | there are only 2 or 3 supported elements | 16:35 |
TheJulia | just, too many directions | 16:35 |
TheJulia | well, too many things in too many directions | 16:35 |
dtantsur | jingvar: bare metal provisioning is always about too many options... | 16:36 |
dtantsur | that's largely why ironic exists :) | 16:36 |
jingvar | simple question - is Ironic helpful for bare metal? If yes, why not go through a usual deployment and capture the basic steps? | 16:37 |
jingvar | we use the ansible driver for raid support, but it is ugly IMHO | 16:38 |
jingvar | As I understand, Ironic was built around cloud-init | 16:39 |
TheJulia | jingvar: incorrect really | 16:40 |
TheJulia | the core of ironic and the design is to write a disk or partition image to a machine | 16:40 |
TheJulia | *and* a config drive which is attached. You don't have to use just cloud-init to read/use that partition image | 16:40 |
JayF | Ironic existed long before configdrive (and cloud-init) support was added :) | 16:40 |
TheJulia | For example, some people use ignition | 16:40 |
jingvar | hmm | 16:41 |
JayF | Incubated into openstack in Icehouse, and we didn't have patches for cloud-init to read configdrive off disk until at least Juno timeframe | 16:41 |
TheJulia | jingvar: people use ironic to do massive repeatable deployments of known good/tested images/configurations. Imagine trying to manually install two thousand machines in a lab one machine at a time | 16:41 |
JayF | In fact, cloud-init isn't even related to Ironic as a project -- it just knows how to consume the standard metadata in a configdrive/metadata service and apply it :) | 16:41 |
TheJulia | and then imagine, just hitting enter on your keyboard | 16:41 |
TheJulia | JayF++ | 16:42 |
JayF | Rackspace used Ironic to provide bare metal machines across a cluster of thousands to customers over an API in <1m | 16:42 |
JayF | Ironic is a powerful tool which I think is part of why it's sometimes more difficult to use | 16:42 |
TheJulia | This is true | 16:42 |
TheJulia | *and* there are so many different options when it comes to how one may want a physical machine to look or behave | 16:43 |
jingvar | what about LVM? | 16:43 |
JayF | Ironic supports putting down whole disk images onto a node, which can have anything inside you want | 16:44 |
TheJulia | you can bake it into a whole disk image | 16:44 |
JayF | LVM, any filesystem, or really even any OS whatsoever | 16:44 |
TheJulia | and diskimage-builder *does* support articulating an LVM setup | 16:44 |
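A condensed sketch of the DIB lvm block-device config being referred to; names, sizes, and filesystems are illustrative, and the structure is reconstructed from the diskimage-builder block-device docs:

```yaml
- local_loop:
    name: image0
- partitioning:
    base: image0
    label: mbr
    partitions:
      - name: root
        flags: [ boot, primary ]
        size: 100%
- lvm:
    name: lvm
    base: [ root ]
    pvs:
      - name: pv
        base: root
        options: [ "--force" ]
    vgs:
      - name: vg
        base: [ "pv" ]
        options: [ "--force" ]
    lvs:
      - name: lv_root
        base: vg
        extents: 40%VG
      - name: lv_var
        base: vg
        extents: 60%VG
- mkfs:
    name: fs_root
    base: lv_root
    type: xfs
    mount:
      mount_point: /
      fstab:
        options: "defaults"
        fsck-passno: 1
- mkfs:
    name: fs_var
    base: lv_var
    type: xfs
    mount:
      mount_point: /var
      fstab:
        options: "defaults"
        fsck-passno: 2
```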
TheJulia | JayF: I saw a video this morning where someone deployed ESXi *blink* *blink* | 16:45 |
TheJulia | via ironic | 16:45 |
dtantsur | supporting all possible partitioning options is a thankless job, to be honest. even Linux installers limit what they support out-of-box. | 16:45 |
JayF | Yep, we had that POC'd at Rackspace. | 16:45 |
JayF | You can put *whatever you want* in the image. | 16:45 |
dtantsur | this ^^^ | 16:45 |
TheJulia | ++ | 16:46 |
JayF | It's basically a choice... | 16:46 |
dtantsur | meanwhile, kubecon.eu talk submitted! | 16:46 |
JayF | - partition images: give a better way to change how the nodes' disk config looks, but gives you less control over bootloader, FS layout, etc | 16:46 |
jingvar | I mean, I can build an image with LVM, RAID, etc.? It will be copied onto the disk, partitions grown, etc. | 16:46 |
jingvar | ? | 16:46 |
TheJulia | you know, this entire thread would be a good topic for a OIF summit talk | 16:47 |
JayF | jingvar: Ironic will write the bits from your image onto the disk. It's up to automation inside the image to do things like e.g. expand filesystems. | 16:47 |
JayF | jingvar: but cloud-init, as you noted, has support for doing almost all that you'd need in that case if properly configured | 16:47 |
dtantsur | I think cloud-init can expand the last partition (so does ignition) | 16:47 |
TheJulia | jingvar: ask stevebaker[m] abotu growing lvm volumes | 16:47 |
JayF | at a previous job, we even used like a <100 line powershell script to do expanding of partitions/filesystems and setup for network from configdrive | 16:47 |
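A minimal cloud-config sketch of the expansion behavior discussed above (module names per the cloud-init docs; exact behavior depends on how the image was built):

```yaml
#cloud-config
growpart:
  mode: auto
  devices: ["/"]      # grow the partition backing the root filesystem
resize_rootfs: true   # then grow the root filesystem into the new space
```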
jingvar | cloud-init can't work with the root partition as I remember | 16:48 |
TheJulia | JayF: Nice | 16:48 |
JayF | cloud-init, like Ironic, is a powerful tool with lots of configuration knobs. It will even run arbitrary scripts. I suspect if there's something you're trying to do, you'd be able to do it. | 16:48 |
JayF | I can't tell you exactly how to do it, but there are great docs on how to write a cloud-config file | 16:49 |
jingvar | JayF: I have the same, but I don't like a lot of bash | 16:49 |
TheJulia | jingvar: you can't avoid lower level aspects at times when you want things a particular way | 16:49 |
JayF | Not all scripts have to be in bash :)... but realistically, in order to get a functional Ironic going, you're going to have to ^^^ what TheJulia said | 16:49 |
jingvar | I understand | 16:49 |
TheJulia | jingvar: mainly because you're charting new territory with what you're doing. You're doing something nobody else has done before, or at least nobody quite the way you need or want to. | 16:50 |
jingvar | cloud-init is pain :) | 16:50 |
TheJulia | indeed | 16:50 |
dtantsur | I cannot disagree with that | 16:50 |
dtantsur | (so is Ignition, in case anyone is curious) | 16:50 |
* TheJulia prefers glean | 16:50 | |
JayF | I mean, lets be honest. TheJulia, dtantsur and I have dedicated close to a decade of our life to Ironic/cloud/etc | 16:50 |
JayF | none of it is easy, it's all kind of a pain | 16:51 |
JayF | but orders of magnitude less pain than doing it without automation like Ironic and cloud-init/ignition | 16:51 |
dtantsur | If my talk gets accepted, I absolutely will dedicate 5 minutes of it talking about how much bare metal provisioning is not the same as requesting a VM on AWS :) | 16:51 |
TheJulia | I literally had someone almost start crying when they told me the story of how ironic and managing their lab environment went from weeks to redeploy to a couple hours and saved their relationship with their girlfriend | 16:52 |
jingvar | I've worked in Metal3 and Airship :) | 16:52 |
dtantsur | jingvar: you'd be surprised how much code had to be written for cloud-init to even work with bare metal at all :) | 16:52 |
JayF | dtantsur: about two lines, iirc, right? | 16:52 |
jingvar | yep | 16:52 |
dtantsur | virtual solutions often rely on some sort of metadata service | 16:52 |
JayF | dtantsur: just making cloud-init look at partitions instead of only virtual disks | 16:52 |
dtantsur | JayF: if you ignore the whole code around configdrive - yes | 16:52 |
JayF | I mean, configdrive code existed before Ironic was using it. | 16:53 |
dtantsur | I mean configdrive support in Ironic is not exactly a trivial thing | 16:53 |
JayF | In order to utilize it, we (Rackspace at the time) just had to change the logic to find the configdrive on a physical disk instead of only looking at virtuals | 16:53 |
JayF | oooh, I thought you were talking **scoped to cloud-init** | 16:53 |
dtantsur | nope, I'm adding one more point to "why ironic" | 16:53 |
dtantsur | TheJulia: oh, Ironic saving a relationship? I love that! | 16:54 |
jingvar | Question is, what partitioning do you use on a controller? | 16:54 |
dtantsur | jingvar: I'm curious why you decided to move from Metal3 to Bifrost | 16:54 |
jingvar | New company | 16:55 |
dtantsur | ahhh | 16:55 |
dtantsur | happens | 16:55 |
dtantsur | Re what partitioning to use: there is no single answer to that. Get a room full of operators, each will have their own ideas. | 16:55 |
TheJulia | dtantsur: It was one of those met someone randomly and started talking thing at a conference and I will never forget that conversation | 16:55 |
dtantsur | I can imagine | 16:55 |
TheJulia | we quite literally got operators to agree that they could never agree in Sydney | 16:55 |
dtantsur | Some will insist on using hardware RAID, some - only software. Some LVM, some won't care at all. | 16:56 |
TheJulia | #success | 16:56 |
jingvar | maybe /var, /tmp, /log on different partitions? | 16:56 |
opendevstatus | TheJulia: Added success to Success page (https://wiki.openstack.org/wiki/Successes) | 16:56 |
TheJulia | omg I forgot about that | 16:56 |
JayF | lol I hope that's like "TheJulia success: " | 16:56 |
dtantsur | jingvar: splitting /var is a reasonable idea for a server. /tmp is usually mounted from RAM | 16:56 |
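That is, typically a line like this in /etc/fstab (size illustrative):

```
tmpfs  /tmp  tmpfs  defaults,nosuid,nodev,size=2G  0  0
```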
TheJulia | #success Remembering the time we quite literally got operators to agree that they could never agree in Sydney | 16:56 |
opendevstatus | TheJulia: Added success to Success page (https://wiki.openstack.org/wiki/Successes) | 16:57 |
TheJulia | there we go! | 16:57 |
dtantsur | usually you get /boot/efi, /boot, / and /var | 16:57 |
dtantsur | the latter is what gets stretched to the whole disk | 16:57 |
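Illustratively, that layout as an sgdisk invocation; sizes are arbitrary and /var (partition 4) is left to take the rest of the disk:

```bash
sgdisk /dev/sda \
  -n 1:0:+550M -t 1:ef00 -c 1:ESP \
  -n 2:0:+1G   -t 2:8300 -c 2:boot \
  -n 3:0:+20G  -t 3:8300 -c 3:root \
  -n 4:0:0     -t 4:8300 -c 4:var
```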
jingvar | there are best practices... | 16:57 |
dtantsur | but again, your mileage may vary A LOT | 16:57 |
JayF | so many folks like to think that with tech, that it's engineering and science and there's a right answer ... but the reality is that it's all just a giant set of tradeoffs, and you gotta pick what you want | 16:57 |
TheJulia | There are, and some of those best practices also make huge assumptions, or are counter to security best practices | 16:57 |
TheJulia | It is definitely a choose your own adventure story | 16:58 |
dtantsur | ... especially since we don't even know your operational requirements | 16:58 |
jingvar | /boot on RAID-1, LVM with /, /opt, /home, /var | 17:00 |
jingvar | nothing special | 17:00 |
jingvar | I can do this via MAAS easily | 17:01 |
dtantsur | a good start probably (not sure why you need /home and /opt, but as I said, your mileage will differ) | 17:01 |
dtantsur | at some point CoreOS only supported splitting away /var :) they have developed a more sophisticated solution since then | 17:02 |
jingvar | I've had one root partition, and ended up with no free space | 17:02 |
dtantsur | we *could* give an option to do the /var split as well.. dunno how many people would actually use it vs just creating a whole disk image. | 17:03 |
jingvar | I'm talking about several partitions, not about mount points | 17:03 |
jingvar | a 4TB disk filled more than half, and then a kernel update :) | 17:04 |
jingvar | ohhh, grub can't reach more than 2T ... | 17:05 |
jingvar | I have stories about wrong partitioning | 17:05 |
dtantsur | grub can, but not on MBR | 17:06 |
TheJulia | jingvar: BIOS can boot it, but MBR tables are the limitation - 32-bit LBAs with 512-byte sectors cap addressing at 2TiB | 17:06 |
TheJulia | obscure historical computing knowledge for the win | 17:06 |
TheJulia | I've literally had to point out the addressing limitation with bios mbr booting to someone like 6 times in the past week | 17:06 |
jingvar | and that's why /boot should be on a different partition | 17:07 |
TheJulia | yes | 17:07 |
TheJulia | to keep it in that range *and* limit the risk of the intermediate and second stage loader files from moving | 17:07 |
TheJulia | I'm super glad uefi only chips are finally starting to ship | 17:08 |
dtantsur | and aarch64 is UEFI only from the beginning, I think | 17:09 |
jingvar | Can you give me a way to realize my idea? (LVM, RAID, etc.) | 17:09 |
TheJulia | dtantsur: indeed | 17:09 |
dtantsur | jingvar: without going into many details, build a whole-disk image with LVM and put it on software RAID built by Ironic | 17:09 |
dtantsur | it may be what arne_wiebalck is doing, but he might have left already | 17:09 |
TheJulia | I'm fairly sure his images now use LVM | 17:10 |
TheJulia | he sent me some layout data to refresh my memory so I could update the RAID docs | 17:10 |
arne_wiebalck | no LVM :) | 17:10 |
TheJulia | no?! | 17:10 |
TheJulia | wow, awesome partitioning job then! | 17:10 |
arne_wiebalck | heh, kudos to our linux team :) | 17:11 |
dtantsur | okay, somebody definitely uses LVM on software RAID. but it may be a Russian operator I know (who is not here normally) | 17:12 |
TheJulia | dtantsur: we got a patch for it at one point, but my doc updates say we don't officially support it | 17:13 |
arne_wiebalck | mnasiadka was working on LVM support as well, he put some patches up | 17:13 |
TheJulia | since we don't have a CI job | 17:13 |
mnasiadka | yes, I use that - I need to update the patch with unit tests | 17:13 |
* TheJulia does not remember this | 17:13 |
mnasiadka | and I was thinking if there's a CI that is testing that | 17:14 |
TheJulia | mnasiadka: okay! | 17:14 |
dtantsur | mnasiadka: mind briefly sharing your experience? | 17:14 |
mnasiadka | because if not - it's probably easy to break | 17:14 |
TheJulia | mnasiadka: yeah :( | 17:14 |
jingvar | sounds like a plan | 17:14 |
*** amoralej is now known as amoralej|off | 17:15 | |
mnasiadka | dtantsur: my experience is that after my patch it works ok - sometimes there are problems with setting up MD raid in cleaning steps (but that's not related to my patch) - e.g. only one out of two disks ends up in MD array | 17:15 |
mnasiadka | dtantsur: but if there would be a CI, maybe it would be easier to catch those issues, I didn't really have time to dig into that - re-run the cleaning and MD got assembled properly. | 17:15 |
dtantsur | mnasiadka: I thought arne_wiebalck has fixed the assembly issue | 17:15 |
dtantsur | fwiw we do have a software RAID job | 17:15 |
dtantsur | but we cannot properly test the deployed instance because of how limited cirros is | 17:16 |
mnasiadka | dtantsur: Maybe my code didn't have the fix - if you have a link to the patch - I can check | 17:16 |
dtantsur | it's not exactly new, lemme try to find | 17:16 |
TheJulia | we *can* permit a special job with special images or even a custom tempest scenario test which is disabled by default | 17:16 |
TheJulia | many different options exist | 17:16 |
dtantsur | mnasiadka: commit 253b4887d597faf368fc9317399fd616db81989c | 17:17 |
mnasiadka | that's october 2020, for sure that didn't fix all the issues - because I had those like two weeks back on stable/wallaby | 17:18 |
dtantsur | then, I'm afraid, it's on you to debug and fix it | 17:18 |
dtantsur | occasional errors are basically impossible to triage without a lot of practical experience | 17:19 |
mnasiadka | might be, I'll have a new set of nodes for that customer beginning next year, so I'm pretty sure I'll hit something similar ;-) | 17:20 |
arne_wiebalck | we just re-created some 2500 or so nodes with s/w RAID and for sure there are still races which require fixing the md device | 17:22 |
dtantsur | sigh | 17:24 |
arne_wiebalck | yeah | 17:24 |
dtantsur | computers were a mistake | 17:24 |
arne_wiebalck | heh | 17:24 |
arne_wiebalck | a fun mistake, though | 17:24 |
dtantsur | a lot of fun :) | 17:24 |
jingvar | In my mind it should be like a netplan but for fstab | 17:25 |
TheJulia | jingvar: someone proposed something like that once... | 17:26 |
TheJulia | I don't remember why it didn't go anywhere | 17:26 |
jingvar | ok, have a good evening | 17:28 |
dtantsur | going as well and will be out tomorrow. see you on Monday o/ | 17:30 |
TheJulia | o/ dtantsur | 17:31 |
TheJulia | rpittau: maybe we just need to nuke the tox with driver lib from bugfix/18.1 | 17:45 |
opendevreview | Julia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/819165 | 17:50 |
rpittau | TheJulia: heh I was wondering what's wrong with that | 17:57 |
rpittau | see you tomorrow! o/ | 17:57 |
TheJulia | goodnight | 17:58 |
TheJulia | NobodyCam: good morning! If you have not already pulled in the ignore plug_vifs patch, melwitt backported them on nova https://review.opendev.org/q/Iba87cef50238c5b02ab313f2311b826081d5b4ab | 17:58 |
NobodyCam | :dance | 17:59 |
NobodyCam | Good Morning Ironic'ers | 17:59 |
NobodyCam | very neat | 18:01 |
*** zbitter is now known as zaneb | 18:14 | |
* arne_wiebalck is watching his upgraded-to-Wallaby qa setup creating its first instances | 18:35 | |
stevebaker[m] | jingvar: we have a growvols script which will grow lvm volumes to the available space on the same disk https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/growvols | 18:39 |
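A hypothetical invocation of that script, sketched from memory; check the element's README at the link above for the actual interface:

```bash
# grow the logical volume mounted on /var into all remaining disk space
growvols --yes /var=100%
```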
opendevreview | Julia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/819165 | 18:41 |
arne_wiebalck | bye everyone, see you tomorrow o/ | 18:48 |
jingvar | stevebaker[m]: thanks | 18:57 |
opendevreview | Julia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/819165 | 19:14 |
TheJulia | You'd think I'd remember how the plugin actually handles creating credential'ed clients again | 19:17 |
opendevreview | Julia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/819165 | 19:41 |
opendevreview | Arne Wiebalck proposed openstack/ironic-python-agent master: [trivial] Fix typo in __init__.py https://review.opendev.org/c/openstack/ironic-python-agent/+/822049 | 21:06 |
opendevreview | Julia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing https://review.opendev.org/c/openstack/ironic-tempest-plugin/+/819165 | 21:37 |