Thursday, 2021-12-16

TheJuliaNobodyCam: if memory serves we send a power off command00:35
TheJuliain the standby handler00:35
NobodyCamthank you@00:35
NobodyCamthank you!00:35
holtgreweWhat's the best way of getting a separate partition for /tmp on an ironic bare metal server managed via nova?07:03
arne_wiebalckGood morning, Ironic!07:29
arne_wiebalckholtgrewe: A single disk on which you have a partition for /tmp?07:30
arne_wiebalckholtgrewe: Or a second disk which should become /tmp?07:30
holtgrewearne_wiebalck: I have a single hardware SSD (actually hardware RAID1) that needs a separate XFS partition for /tmp. I already figured out how to add an EFI partition to the stock CentOS 7.9 qcow2.07:31
holtgreweIf I understand correctly, I could "somehow" add a partition to the qcow2 and then "somehow" make it grow together with the root partition.07:32
holtgreweOr I could "somehow" use cloud-init to create the partition. I'm looking for some guidance on best practice and pointers towards manuals on where to look to support this.07:32
arne_wiebalckholtgrewe: if it is a single disk, making it part of the image is maybe easiest ... do you build the image yourself ?07:36
holtgrewearne_wiebalck: I've taken the provided qcow2 image of centos7.9 and subjected it to "disk-image-create -o centos7-uefi block-device-efi dhcp-all-interfaces centos7"07:37
rpittaugood morning ironic! o/07:42
dtantsurmorning ironic07:43
dtantsurholtgrewe: we recommend people to use cloud-init indeed07:43
dtantsurwith DIB you can specify the exact partition layout. if you leave the root partition to go the last, cloud-init will be able to grow it to occupy the whole disk07:44
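dtantsur's point about cloud-init growing the last partition can be sketched as a minimal cloud-config (module names are standard cloud-init; writing it to a local `user-data` file here is just for illustration):

```shell
# Minimal cloud-config sketch: with the root partition placed last on the
# disk, growpart + resize_rootfs let cloud-init stretch it over the whole
# device on first boot.
cat > user-data <<'EOF'
#cloud-config
growpart:
  mode: auto
  devices: ['/']
resize_rootfs: true
EOF
```

This fragment would normally be passed to the instance as user data (e.g. via the configdrive), not written locally.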
holtgrewedtantsur: OK, thanks07:45
arne_wiebalckdtantsur: something to add to our docs, like "common use cases" ? ^^07:50
dtantsurarne_wiebalck: there should definitely be some words on partitioning and cloud-init somewhere..07:50
arne_wiebalckdtantsur: cloud-init is mentioned in some places (e.g for portgroups), but I couldn't find "how to create and mount an additional partition"07:54
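For the "create and mount an additional partition" case, a hedged sketch using cloud-init's `disk_setup`/`fs_setup`/`mounts` modules — the second disk `/dev/sdb`, the xfs filesystem and the mount options are assumptions for illustration:

```shell
# Sketch: have cloud-init partition a second disk, create an XFS
# filesystem and mount it as /tmp. Device names are placeholders.
cat > tmp-partition.cfg <<'EOF'
#cloud-config
disk_setup:
  /dev/sdb:
    table_type: gpt
    layout: true
    overwrite: false
fs_setup:
  - device: /dev/sdb1
    filesystem: xfs
    label: tmpdata
mounts:
  - [ /dev/sdb1, /tmp, xfs, "defaults,nofail", "0", "2" ]
EOF
```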
dtantsurif you have experience with it, would be awesome to update the docs07:55
arne_wiebalckdtantsur: we do the partitioning via the image and use cloud-init to do some extra things, e.g. mounting an md device08:00
arne_wiebalckholtgrewe: would you be willing to summarise what needed to be done once you're finished?08:00
holtgrewearne_wiebalck: I'll try.08:04
arne_wiebalckholtgrewe: cool, ty!08:04
holtgreweI just discovered that I will need mdadm in one way or another as my machines don't have hardware RAID controllers...08:05
*** amoralej|off is now known as amoralej08:16
opendevreviewRiccardo Pittau proposed openstack/bifrost master: Install pip package in dib based images
arne_wiebalckholtgrewe: for s/w RAID, we should have docs :)08:38
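As a rough sketch of what those software RAID docs describe: declare a target RAID config on the node, then run the RAID cleaning step so Ironic builds the md device. The node name "node-0" is a placeholder, and the `baremetal` commands are commented out since they need a live Ironic API:

```shell
# Software RAID sketch: one RAID-1 md device spanning the whole disks.
cat > raid.json <<'EOF'
{
  "logical_disks": [
    {"size_gb": "MAX", "raid_level": "1", "controller": "software"}
  ]
}
EOF
# baremetal node set node-0 --target-raid-config raid.json
# baremetal node clean node-0 \
#     --clean-steps '[{"interface": "raid", "step": "create_configuration"}]'
python3 -m json.tool raid.json >/dev/null && echo "raid.json is valid JSON"
```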
holtgrewearne_wiebalck: yes, I found them. I'm just exploring how to use them for my purpose.08:39
jandersgood morning holtgrewe arne_wiebalck dtantsur and Ironic o/09:12
arne_wiebalckhey janders, good morning! o/09:12
holtgreweGood morning. New error. "Failure prepping block device" / "ironicclient.common.apiclient.exceptions.BadRequest: Unable to attach VIF <UUID>, not enough free physical ports." now how did I manage to make this?09:20
opendevreviewArne Wiebalck proposed openstack/ironic-python-agent master: Burn-in: Dynamic network pairing
arne_wiebalckdtantsur: rpittau: "make tooz optional" ^^: I was probably overthinking things, sorry :) What you suggested is to have an [extra] section in setup.cfg to have tooz installed only when it is requested, e.g. in IPAB. Is that correct? The change now has tooz as an optional package, and in test-requirements.09:29
rpittauarne_wiebalck: that should be fine, that makes the entire burnin optional, is that intended?09:40
arne_wiebalckrpittau: hmm ... no :)09:42
arne_wiebalckrpittau: does the group name carry any meaning? I thought this is a free name to pick?09:43
arne_wiebalckrpittau: what I would like to do is make tooz optional, so it can be pulled in if needed09:44
* arne_wiebalck seems like I do not understand how it works :)09:44
rpittauarne_wiebalck: apologies, I expressed myself poorly, I mean that burnin is not a mandatory feature now, not that it's not usable by default09:45
rpittauwith that change tooz will be installed if required09:45
rpittaubut still needs to be installed for any use of burnin09:46
arne_wiebalcksince burnin imports tooz09:46
arne_wiebalckis that what we want or do we want sth else?09:48
rpittauif we want burnin to always work (and that's my understanding reading the story) then I think we should actually keep tooz in the requirements and just leave its drivers as optional (kazoo, etcd, etc...)09:56
arne_wiebalckoh, ok!09:57
rpittauit's one more req for a feature, seems fair, but let's see what others think, maybe there's an alternative09:57
arne_wiebalckI guess the group should also be more specific then09:58
arne_wiebalckthe group name09:58
arne_wiebalckhow would we add another driver like etcd09:59
arne_wiebalckdifferent group within the extra section?10:00
rpittauI'm talking about tooz drivers, that depends on how tooz is installed
rpittauthat's why I think it should be addressed in the ipa-builder10:01
arne_wiebalckso, when you say "optional" you mean nothing in the IPA itself for the drivers?10:04
arne_wiebalckI was thinking having the drivers in an extras section would ease the installation, no?10:04
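The outcome being discussed — tooz itself stays in requirements.txt so burn-in always imports, and only the coordination backends become pip extras — could look roughly like this in setup.cfg (the extra name "burnin-network" and the chosen backend packages are placeholders):

```shell
# Sketch of an [extras] section exposing optional tooz backend drivers.
cat > setup.cfg.snippet <<'EOF'
[extras]
burnin-network =
    etcd3gw
    kazoo
EOF
# pip install ironic-python-agent[burnin-network]  # would pull the drivers
```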
opendevreviewArne Wiebalck proposed openstack/ironic-python-agent master: Burn-in: Dynamic network pairing
arne_wiebalckrpittau: ^^ latest suggestion :)10:15
holtgreweI'm getting a weird error about raid configuration. The physical disk parameter is supposed to be a list of device hints?
holtgreweah, I probably need to specify the controller...10:19
*** sshnaidm|afk is now known as sshnaidm10:54
opendevreviewwangjing proposed openstack/ironic master: add Debian for doc
*** amoralej is now known as amoralej|lunch11:59
janderssee you on Monday Ironic o/ (I'm off tomorrow)12:42
jandersHave a great weekend everyone12:42
rpittaubye janders :)12:59
holtgrewearne_wiebalck, dtantsur: I have to correct myself. I actually need to write to software RAID-1. When using the following raid configuration, I get /dev/sd{a,b}1 with linux_raid_member configuration. However, if I understand correctly it would be simpler to have an mdadm-based device and I write the image to that with cloud-init. Or is this unsupported? How do I proceed here in general? Is 13:06
holtgrewethere some documentation on that?13:06
*** amoralej|lunch is now known as amoralej13:12
arne_wiebalckholtgrewe: AFAIU, cloud-init is used to configure the instance upon its first boot (not to write the image to a device in the first place).13:24
arne_wiebalckholtgrewe: so, the sequence (we use anyway) is: configure the RAID with Ironic, create an instance, have cloud-init finish up the instance13:25
arne_wiebalckholtgrewe: this is still about having a separate /tmp (now on top of a s/w RAID)?13:26
arne_wiebalckholtgrewe: if so, I would configure the s/w in Ironic with two md devices, have the first one for the system, and prepare/mount the second one via cloud-init13:27
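The two-md-device layout suggested above, as a target RAID config sketch: a fixed-size first device for the system and the remaining space as a second device for /tmp (the 100 GB size is a placeholder):

```shell
# Two software RAID-1 logical disks: md0 for the system, md1 for /tmp.
cat > raid-two-devices.json <<'EOF'
{
  "logical_disks": [
    {"size_gb": 100,   "raid_level": "1", "controller": "software"},
    {"size_gb": "MAX", "raid_level": "1", "controller": "software"}
  ]
}
EOF
python3 -m json.tool raid-two-devices.json >/dev/null  # sanity-check
```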
opendevreviewMerged x/sushy-oem-idrac master: Re-add python 3.6/3.7 in classifier
holtgrewearne_wiebalck: OK, I think I now understand. The RAID configuration created /dev/sd{a,b}1 that I can `mdadm --assemble` into a /dev/md0.14:04
arne_wiebalckholtgrewe: that assembly should happen automatically (if the image is configured correspondingly)14:06
holtgrewearne_wiebalck: thanks, is there anything to look out for when using UEFI? The documentation talks about an EFI partition on the "holder devices".14:06
arne_wiebalckholtgrewe: depending on the boot mode in Ironic for this node, Ironic will prepare things as needed14:07
arne_wiebalckholtgrewe: the main thing to look out for is that the image you install is UEFI capable14:07
arne_wiebalckholtgrewe: as I think dtantsur pointed out earlier this is not the case for all14:08
arne_wiebalckholtgrewe: this may help
holtgrewearne_wiebalck: thanks, I believe I now have appropriate images built... I'll just give it a try. For all of these things related to hardware, there are such big delays on reboot and machine startup (at least with dell servers) that sometimes I'm over-reading the documentation rather than just trying out things...14:11
holtgreweAnyway, I really appreciate your support and all of the heroically hard work that the ironic community has put into this.14:11
arne_wiebalckholtgrewe: :)14:31
opendevreviewDmitry Tantsur proposed openstack/ironic master: Derive FakeHardware from GenericHardware
holtgrewearne_wiebalck: So now I end up with a node that has /dev/sd{a,b} configured properly for mdadm and the mdadm RAID1 contains the EFI partition, /boot and then / in three partitions.15:25
holtgrewearne_wiebalck: it turns out that Dell UEFI at least is not able to boot from that15:25
holtgrewearne_wiebalck: I'll look into your blog now, probably my image is not good enough15:26
arne_wiebalckholtgrewe: yeah, you need an UEFI capable image15:26
TheJuliaThis morning I got to wake up to a message with someone pointing me to
TheJuliaAlso, good morning everyone15:33
rpittaugood morning TheJulia :)15:33
holtgrewearne_wiebalck: d'oh ... and here I was thinking that I had such a drive... I see that you at Cern have a fancy custom image builder. It's a bit harder to follow when the best tool in your box is disk-image-create/DIB. Does Cern publish their qcow2 images anywhere?15:43
arne_wiebalckholtgrewe: the Linux team creates images via kickstart and snapshots, true15:45
arne_wiebalckholtgrewe: *the Linux team here15:45
arne_wiebalckholtgrewe: I tried to sell them disk image builder, but that did not work :)15:45
arne_wiebalckholtgrewe: I don't think we publish our images15:46
holtgrewearne_wiebalck: well, they know what they are doing. I'm my own Linux team and I'm probably hopeless. :-D15:46
arne_wiebalckholtgrewe: check the ubuntu cloud images they are usually UEFI aware15:46
arne_wiebalckholtgrewe: (I think)15:46
holtgreweOK, will do... tomorrow. Time to call it a day. It's really ... interesting ... how much time one can spend waiting for machines to reboot.15:47
arne_wiebalckheh, yeah, see you tomorrow then!15:47
holtgreweanyway, thanks for all your support and input, I'll get my machines to submit to my will in the end15:48
* TheJulia is curious what OS was being tried15:48
holtgreweCentOS 7.915:48
TheJuliauhh yeah15:48
TheJuliathe stock centos images don't have the uefi bits installed15:48
TheJuliayou need to use diskimage-builder and basically tell it to make you a UEFI image15:48
holtgreweAlthough it looks like I would be able to move to RockyLinux 8.5 by monday as I finally have the upgrade plans for the not-loved GPFS based storage system here.15:48
* TheJulia has not heard anyone mention GPFS in *ages*15:49
holtgreweBut I think RockyLinux qcow2 images don't have UEFI either.15:49
holtgreweIt's called spectrumscale now :-D15:49
TheJuliaholtgrewe: ahh15:49
holtgreweI inherited a system from a company and it's nothing but trouble.15:49
holtgrewe"from a company with 3 letters starting with DDN"15:50
TheJuliaholtgrewe: their uefi stuff is likely not signed, but it would be fairly easy to make it uefi-worthy by copying /boot/efi from a centos machine *whistles innocently*15:50
TheJuliaI looked at it to store like a billion single page tiffs like 15 years ago15:50
TheJuliaand it failed horribly15:50
holtgreweI tried `disk-image-create -o centos-7.9-uefi block-device-efi dhcp-all-interfaces centos7` but that image is not uefi bootable apparently.15:51
TheJuliaholtgrewe: try adding vm and bootloader elements to that15:51
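TheJulia's suggestion spelled out as a command line — the `vm` element pulls in the `bootloader` element so the image gets partitioned and made bootable (needs diskimage-builder installed and is shown here only as a sketch):

```shell
# DIB invocation with the vm element added, per the suggestion above.
disk-image-create -o centos-7.9-uefi \
    vm block-device-efi dhcp-all-interfaces centos7
```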
holtgreweTheJulia: yeah... well I already have the replacement machines mounted all filled up with NVME drives and ready for a ceph installation. It will be work tuning that but at least an upgrade does not require interacting with 5 people in 3 time zones through a salesforce support forum they brand
holtgreweTheJulia: Will try tomorrow! Thanks everyone, you're great.15:52
TheJuliahave a good evening holtgrewe 15:52
opendevreviewMerged openstack/ironic master: Update RAID docs
jingvarIs user-data  not supported by Bifrost?16:01
jingvar~/bifrost$ grep -r user-data  gives an empty response16:04
dtantsurgood morning TheJulia 16:20
dtantsurjingvar: seems like no (surprised myself)16:21
dtantsurI think bifrost allows you to pre-generate a configdrive16:21
dtantsurbut generally this code could use updating16:21
dtantsurTheJulia: I think needs updating now16:33
jingvaryep, and no lvm support and partitioning, no linux-firmware in the cloud image, a 2GB docker image - like the whole world is in there, but the simple things are missing :)16:34
TheJuliadiskimage-builder can be your friend for making these things easier16:34
dtantsuryeah, I'm also annoyed that linux-firmware is often removed from cloud images16:34
TheJuliadtantsur: in terms of just reviewing/approving the proposed pull requests?16:34
jingvarit is a very old tool16:34
dtantsurwe have to explicitly re-add it in our IPA builds16:34
dtantsurTheJulia: dunno, I just noticed the old version :)16:35
TheJuliaahh, riccardo asked me to review some prs yesterday16:35
jingvaronly 2 or 3 elements are supported16:35
TheJuliajust, too many directions16:35
TheJuliawell, too many things in too many directions16:35
dtantsurjingvar: bare metal provisioning is always about too many options...16:36
dtantsurthat's largely why ironic exists :)16:36
jingvarsimple question - is Ironic helpful for baremetal? If yes, why not go through the usual deployment and catch the basic steps?16:37
jingvarwe use the ansible driver for raid support, but it is ugly IMHO16:38
jingvarAs I understand it, Ironic was built around cloud-init16:39
TheJuliajingvar: incorrect really16:40
TheJuliathe core of ironic and the design is to write a disk or partition image to a machine16:40
TheJulia*and* a config drive which is attached. You don't have to use just cloud-init to read/use that partition image16:40
JayFIronic existed long before configdrive (and cloud-init) support was added :) 16:40
TheJuliaFor example, some people use ignition16:40
JayFIncubated into openstack in Icehouse, and we didn't have patches for cloud-init to read configdrive off disk until at least Juno timeframe16:41
TheJuliajingvar: people use ironic to do massive repeatable deployments of known good/tested images/configurations. Imagine trying to manually install two thousand machines in a lab one machine at a time16:41
JayFIn fact, cloud-init isn't even related to Ironic as a project -- it just knows how to consume the standard metadata in a configdrive/metadata service and apply it :)16:41
TheJuliaand then imagine, just hitting enter on your keyboard16:41
JayFRackspace used Ironic to provide bare metal machines across a cluster of thousands to customers over an API in <1m16:42
JayFIronic is a powerful tool which I think is part of why it's sometimes more difficult to use16:42
TheJuliaThis is true16:42
TheJulia*and* there are so many different options when it comes to how one may want a physical machine to look or behave16:43
jingvarwhat about LVM?16:43
JayFIronic supports putting down whole disk images onto a node, which can have anything inside you want16:44
TheJuliayou can bake it into a whole disk image16:44
JayFLVM, any filesystem, or really even any OS whatsoever16:44
TheJuliaand diskimage-builder *does* support articulating an LVM setup16:44
TheJuliaJayF: I saw a video this morning where someone deployed ESXi *blink* *blink*16:45
TheJuliavia ironic16:45
dtantsursupporting all possible partitioning options is a thankless job, to be honest. even Linux installers limit what they support out-of-box.16:45
JayFYep, we had that POC'd at Rackspace.16:45
JayFYou can put *whatever you want* in the image. 16:45
dtantsurthis ^^^16:45
JayFIt's basically a choice...16:46
dtantsurmeanwhile, talk submitted!16:46
JayF- partition images: give a better way to change how the nodes' disk config looks, but gives you less control over bootloader, FS layout, etc16:46
jingvarI mean, I can build an image with lvm, raid etc? It will be copied onto the disk, grow partitions etc16:46
TheJuliayou know, this entire thread would be a good topic for a OIF summit talk16:47
JayFjingvar: Ironic will write the bits from your image onto the disk. It's up to automation inside the image to do things like e.g. expand filesystems.16:47
JayFjingvar: but cloud-init, as you noted, has support for doing almost all that you'd need in that case if properly configured16:47
dtantsurI think cloud-init can expand the last partition (so does ignition)16:47
TheJuliajingvar: ask stevebaker[m] abotu growing lvm volumes16:47
JayFat a previous job, we even used like a <100 line powershell script to do expanding of partitions/filesystems and setup for network from configdrive16:47
jingvarcloud-init can't work with root as I remember16:48
TheJuliaJayF: Nice16:48
JayFcloud-init, like Ironic, is a powerful tool with lots of configuration knobs. It will even run arbitrary scripts. I suspect if there's something you're trying to do, you'd be able to do it.16:48
JayFI can't tell you exactly how to do it, but there are great docs on how to write a cloud-config file16:49
jingvarJayF: I have the same, but I don't like a lot of bash16:49
TheJuliajingvar: you can't avoid lower level aspects at times when you want things a particular way16:49
JayFNot all scripts have to be in bash :)... but realistically, in order to get a functional Ironic going, you're going to have to ^^^ what TheJulia said16:49
jingvarI understand16:49
TheJuliajingvar: mainly because you're charting new territory with what you're doing. You're doing something nobody else has done before, or nobody quite like the way you need or want to.16:50
jingvarcloud-init is pain :)16:50
dtantsurI cannot disagree with that16:50
dtantsur(so is Ignition, in case anyone is curious)16:50
* TheJulia prefers glean16:50
JayFI mean, lets be honest. TheJulia, dtantsur and I have dedicated close to a decade of our life to Ironic/cloud/etc16:50
JayFnone of it is easy, it's all kind of a pain16:51
JayFbut orders of magnitude less pain than doing it without automation like Ironic and cloud-init/ignition16:51
dtantsurIf my talk gets accepted, I absolutely will dedicate 5 minutes of it talking about how much bare metal provisioning is not the same as requesting a VM on AWS :)16:51
TheJuliaI literally had someone almost start crying when they told me the story of how ironic and managing their lab environment went from weeks to redeploy to a couple hours and saved their relationship with their girlfriend16:52
jingvarI've worked in Metal3 and Airship :)16:52
dtantsurjingvar: I'll be surprised if you figure out how much code had to be written for cloud-init to even work with bare metal at all :)16:52
JayFdtantsur: about two lines, iirc, right?16:52
dtantsurvirtual solutions often rely on some sort of metadata service16:52
JayFdtantsur: just making cloud-init look at partitions instead of only virual disks16:52
dtantsurJayF: if you ignore the whole code around configdrive - yes16:52
JayFI mean, configdrive code existed before Ironic was using it.16:53
dtantsurI mean configdrive support in Ironic is not exactly a trivial thing16:53
JayFIn order to utilize it, we (Rackspace at the time) just had to change the logic to find the configdrive on a physical disk instead of only looking at virtuals16:53
JayFoooh, I thought you were talking **scoped to cloud-init**16:53
dtantsurnope, I'm adding one more point to "why ironic"16:53
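A rough sketch of what that configdrive lookup amounts to on a deployed node: the configdrive is a small partition labelled "config-2" at the end of the disk, and cloud-init (or anything else) can mount it and read the standard metadata (needs root on a real node; shown for illustration only):

```shell
# Mount the configdrive partition by its well-known label and read the
# standard OpenStack metadata file.
dev=/dev/disk/by-label/config-2
mkdir -p /mnt/config
mount -o ro "$dev" /mnt/config
cat /mnt/config/openstack/latest/meta_data.json
```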
dtantsurTheJulia: oh, Ironic saving a relationship? I love that!16:54
jingvarQuestion is what partitioning do you use on the Controller?16:54
dtantsurjingvar: I'm curious why you decided to move from Metal3 to Bifrost16:54
jingvarNew company16:55
dtantsurRe what partitioning to use: there is no single answer to that. Get a room full of operators, each will have their own ideas.16:55
TheJuliadtantsur: It was one of those met someone randomly and started talking thing at a conference and I will never forget that conversation16:55
dtantsurI can imagine16:55
TheJuliawe quite literally got operators to agree that they could never agree in Sydney16:55
dtantsurSome will insist on using hardware RAID, some - only software. Some LVM, some won't care at all.16:56
jingvarmaybe /var /tmp /log on different partitions?16:56
opendevstatusTheJulia: Added success to Success page (
TheJuliaomg I forgot about that16:56
JayFlol I hope that's like "TheJulia success: "16:56
dtantsurjingvar: splitting /var is a reasonable idea for a server. /tmp is usually mounted from RAM16:56
TheJulia#success Remembering the time we quite literally got operators to agree that they could never agree in Sydney16:56
opendevstatusTheJulia: Added success to Success page (
TheJuliathere we go!16:57
dtantsurusually you get /boot/efi, /boot, / and /var16:57
dtantsurthe latter is what gets stretched to the whole disk16:57
jingvarthere are best practices...16:57
dtantsurbut again, your mileage may vary A LOT16:57
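The /boot/efi, /boot, /, /var layout dtantsur describes can be expressed in diskimage-builder's block-device syntax. This is a hedged sketch: sizes are placeholders and the exact keys should be checked against the DIB block-device documentation. The last partition is the one cloud-init can stretch:

```shell
# Sketch of a DIB block-device config for an EFI + /boot + / + /var layout.
cat > block-device.yaml <<'EOF'
- local_loop:
    name: image0
- partitioning:
    base: image0
    label: gpt
    partitions:
      - name: ESP
        type: 'EF00'
        size: 200MiB
        mkfs:
          type: vfat
          mount:
            mount_point: /boot/efi
      - name: BOOT
        size: 500MiB
        mkfs:
          type: ext4
          mount:
            mount_point: /boot
      - name: ROOT
        size: 10GiB
        mkfs:
          type: xfs
          mount:
            mount_point: /
      - name: VAR
        size: 100%
        mkfs:
          type: xfs
          mount:
            mount_point: /var
EOF
export DIB_BLOCK_DEVICE_CONFIG="$(cat block-device.yaml)"
```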
JayFso many folks like to think that with tech, that it's engineering and science and there's a right answer ... but the reality is that it's all just a giant set of tradeoffs, and you gotta pick what you want16:57
TheJuliaThere are, and some of those best practices also make huge assumptions, or are counter to security best practices16:57
TheJuliaIt is definitely a choose your own adventure story16:58
dtantsur... especially since we don't even know your operation requirements16:58
jingvarboot/raid 1-lvm/root/, /opt/, /home ,  /var17:00
jingvarnothin special17:00
jingvarI can do this via Maas easily17:01
dtantsura good start probably (not sure why you need /home and /opt, but as I said, your mileage will differ)17:01
dtantsurat some point CoreOS only supported splitting away /var :) they have developed a more sophisticated solution since then17:02
jingvarI've had one root partition, and it happened that there was no free space17:02
dtantsurwe *could* give an option to do the /var split as well.. dunno how many people would actually use it vs just creating a whole disk image.17:03
jingvarI'm talking about several partitions, not about mount points17:03
jingvar4TB+ filled more than half and a kernel update :)17:04
jingvarohhh, grub can't reach more than 2T ...17:05
jingvarI have stories about wrong partitioning17:05
dtantsurgrub can, but not on MBR17:06
TheJuliajingvar: bios boot 1T, but MBR tables are the limitation17:06
TheJuliaobscure historical computing knowledge for the win17:06
TheJuliaI've literally had to point out the addressing limitation with bios mbr booting to someone like 6 times in the past week17:06
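The arithmetic behind that limitation: MBR partition tables store LBAs as 32-bit values, so with 512-byte sectors the addressable range tops out at 2 TiB.

```shell
# 2^32 sectors (4294967296) * 512 bytes per sector = the MBR ceiling.
echo $(( 4294967296 * 512 ))                  # 2199023255552 bytes
echo $(( 4294967296 * 512 / 1099511627776 ))  # = 2 (TiB)
```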
jingvarand that's why /boot should be on a different partition17:07
TheJuliato keep it in that range *and* limit the risk of the intermediate and second stage loader files from moving17:07
TheJuliaI'm super glad uefi only chips are finally starting to ship17:08
dtantsurand aarch64 is UEFI only from the beginning, I think17:09
jingvarCan you give me a way to  resolve my idea? (lvm, raid etc)17:09
TheJuliadtantsur: indeed17:09
dtantsurjingvar: without going into many details, build a whole-disk image with LVM and put it on software RAID built by Ironic17:09
dtantsurit may be what arne_wiebalck is doing, but he might have left already17:09
TheJuliaI'm fairly sure his images now use LVM17:10
TheJuliahe sent me some layout data to refresh my memory so I could update the RAID docs17:10
arne_wiebalckno LVM :)17:10
TheJuliawow, awesome partitioning job then!17:10
arne_wiebalckheh, kudos to our linux team :)17:11
dtantsurokay, somebody definitely uses LVM on software RAID. but it may be a Russian operator I know (who is not here normally)17:12
TheJuliadtantsur: we got a patch for it at one point, but my doc updates say we don't officially support it17:13
arne_wiebalckmnasiadka was working on LVM support as well, he put some patches up17:13
TheJuliasince we don't have a CI job17:13
mnasiadkayes, I use that - I need to update the patch with unit tests17:13
* TheJulia does not remember this17:13
mnasiadkaand I was thinking if there's a CI that is testing that17:14
TheJuliamnasiadka: okay!17:14
dtantsurmnasiadka: mind briefly sharing your experience?17:14
mnasiadkabecause if not - it's probably easy to break17:14
TheJuliamnasiadka: yeah :(17:14
jingvarsounds like a plan17:14
*** amoralej is now known as amoralej|off17:15
mnasiadkadtantsur: my experience is that after my patch it works ok - sometimes there are problems with setting up MD raid in cleaning steps (but that's not related to my patch) - e.g. only one out of two disks ends up in MD array17:15
mnasiadkadtantsur: but if there would be a CI, maybe it would be easier to catch those issues, I didn't really have time to dig into that - re-run the cleaning and MD got assembled properly.17:15
dtantsurmnasiadka: I thought arne_wiebalck has fixed the assembly issue17:15
dtantsurfwiw we do have a software RAID job17:15
dtantsurbut we cannot properly test the deployed instance because of how limited cirros is17:16
mnasiadkadtantsur: Maybe my code didn't have the fix - if you have a link to the patch - I can check17:16
dtantsurit's not exactly new, lemme try to find17:16
TheJuliawe *can* permit a special job with special images or even a custom tempest scenario test which is disabled by default17:16
TheJuliamany different options exist17:16
dtantsurmnasiadka: commit 253b4887d597faf368fc9317399fd616db81989c17:17
mnasiadkathat's october 2020, for sure that didn't fix all the issues - because I had those like two weeks back on stable/wallaby17:18
dtantsurthen, I'm afraid, it's on you to debug and fix it17:18
dtantsuroccasional errors are basically impossible to triage without a lot of practical experience17:19
mnasiadkamight be, I'll have a new set of nodes for that customer beginning next year, so I'm pretty sure will hit something similar ;-)17:20
arne_wiebalckwe just re-created some 2500 or so nodes with s/w RAID and for sure there are still races which require to fix the md device17:22
dtantsurcomputers were a mistake17:24
arne_wiebalcka fun mistake, though17:24
dtantsura lot of fun :)17:24
jingvarIn my mind it should be like a netplan but for fstab17:25
TheJuliajingvar: someone proposed something like that once...17:26
TheJuliaI don't remember why it didn't go anywhere17:26
jingvarok, have a good evening17:28
dtantsurgoing as well and will be out tomorrow. see you on Monday o/17:30
TheJuliao/ dtantsur 17:31
TheJuliarpittau: maybe we just need to nuke the tox with driver lib from bugfix/18.117:45
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing
rpittauTheJulia: heh I was wondering what's wrong with that17:57
rpittausee you tomorrow! o/17:57
TheJuliaNobodyCam: good morning! If you have not already pulled in the ignore plug_vifs patch, melwitt backported them on nova
NobodyCamGood Morning Ironic'ers17:59
NobodyCamvery neat18:01
*** zbitter is now known as zaneb18:14
* arne_wiebalck is watching his upgraded-to-Wallaby qa setup creating its first instances 18:35
stevebaker[m]jingvar: we have a growvols script which will grow lvm volumes to the available space on the same disk
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing
arne_wiebalckbye everyone, see you tomorrow o/18:48
jingvarstevebaker[m]: thanks18:57
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing
TheJuliaYou'd think I'd remember how the plugin actually handles creating credential'ed clients again19:17
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing
opendevreviewArne Wiebalck proposed openstack/ironic-python-agent master: [trivial] Fix typo in
opendevreviewJulia Kreger proposed openstack/ironic-tempest-plugin master: WIP: An idea for rbac positive/negative testing

Generated by 2.17.3 by Marius Gedminas - find it at!