Thursday, 2024-09-26

06:30 <noonedeadpunk> well, it's us who disable StrictHostKeyChecking for * for Nova user...
06:30 <noonedeadpunk> but generally - yeah, probably not
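For readers following along, a minimal sketch of the per-user SSH client configuration being discussed. The chat only confirms that StrictHostKeyChecking is disabled for all hosts for the nova user; the file location and layout below are assumptions for illustration, not necessarily what the openstack-ansible roles deploy.

    # Illustrative ~/.ssh/config for the nova service user on a compute host
    # (path is an assumption; only the option itself is confirmed above)
    Host *
        # Skip host key verification for every destination, so resize/cold
        # migration between compute nodes never stalls on an unknown host key
        StrictHostKeyChecking no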
08:01 <noonedeadpunk> jrosser: can I ask you to drop WIP from https://review.opendev.org/c/openstack/openstack-ansible/+/916291 ?
08:02 <noonedeadpunk> ah, no IC
08:03 <noonedeadpunk> *CI
08:21 <jrosser> i've taken it off but yeah there is a bunch of work still to do on these branches /o\
08:23 <noonedeadpunk> I just spotted https://bugs.launchpad.net/openstack-ansible/+bug/2066932 so wanted to close it :D
08:28 <noonedeadpunk> oh, btw. seems that keystone is borked in terms of DB migrations
08:28 <noonedeadpunk> https://bugs.launchpad.net/keystone/+bug/2080542
08:28 <noonedeadpunk> And I can confirm that the DB wasn't updated with the upgrade to 2024.1
08:28 <noonedeadpunk> this affects deployments with federation more than others
08:29 <noonedeadpunk> (as there were related changes in schema)
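As a hedged sketch of how one might verify the keystone schema state by hand, assuming keystone-manage is available in the keystone venv or container (not a prescribed openstack-ansible procedure):

    # Report whether migrations are up to date; the exit code indicates
    # which phase (expand/contract) is still pending
    keystone-manage db_sync --check

    # Apply pending schema changes phase by phase
    keystone-manage db_sync --expand
    keystone-manage db_sync --contract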
08:43 <opendevreview> Merged openstack/openstack-ansible unmaintained/wallaby: Update ansible role and openstack services for unmaintained branch names  https://review.opendev.org/c/openstack/openstack-ansible/+/916291
10:15 <noonedeadpunk> jrosser: have you seen that? I'm very o_O right now https://www.phoronix.com/news/NVIDIA-Open-GPU-Virtualization
10:23 <noonedeadpunk> but it could be only HV drivers, not actually client drivers...
10:28 <jrosser> 8-O
10:29 <noonedeadpunk> but only works with Ada, not for Ampere suckers....
10:29 <jrosser> well i have complained at every opportunity about this mess to nvidia for a while now
10:29 <noonedeadpunk> so keep up :D
10:30 <jrosser> so for ampere, we have found that the AI models / workloads are getting bigger memory requirements, so moving to PCI passthrough for those with the free driver has been no problem
10:31 <jrosser> we already got rid of some smaller flavors, then converted just to whole gpu passthrough when the grid driver dropped support for the ampere cards
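For orientation, a rough sketch of the nova pieces involved in whole-GPU PCI passthrough; the vendor/product IDs, alias name, and flavor below are placeholders rather than anyone's actual deployment, and the [pci] alias also needs to be present on the API hosts.

    # nova.conf on the compute node (placeholder IDs)
    [pci]
    device_spec = { "vendor_id": "10de", "product_id": "20b5" }
    alias = { "vendor_id": "10de", "product_id": "20b5", "device_type": "type-PCI", "name": "gpu" }

    # Flavor requesting one whole GPU via the alias above:
    # openstack flavor set gpu.large --property pci_passthrough:alias=gpu:1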
10:32 <noonedeadpunk> I'm super scared of passthrough when you're providing GPUs to the public...
10:33 <noonedeadpunk> totally fine for internal clouds though
10:34 <noonedeadpunk> as for public - I'd even do 1gpu=1vgpu, but not passthrough
10:38 <WireLost> I use qemu vms with VirGL backed by hardware without GPU passthrough, but not within OpenStack.
10:39 <WireLost> It's really nice! But it seems that the OpenStack Nova libvirt/QEMU code focuses only on "instances for servers", not on desktops (like VDI), which is a shame
10:40 <WireLost> The only VMs I have outside of OpenStack are the ones where I need VirGL.
10:40 <noonedeadpunk> well, with vGPU you can have VDI on openstack as well
10:41 <noonedeadpunk> but I'd agree that virgl support might be very nice to have in nova...
10:42 <noonedeadpunk> I'm not sure if there are any GPUs with MIG support that also support 3D graphics or not.. but they'd also be supported
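And a comparable sketch for the vGPU route; the mdev type name and PCI address below are placeholders that depend entirely on the card and driver in use.

    # nova.conf on the compute node (placeholder mdev type and address)
    [devices]
    enabled_mdev_types = nvidia-233

    [mdev_nvidia-233]
    device_addresses = 0000:41:00.0

    # Flavor requesting a single vGPU:
    # openstack flavor set vgpu.small --property resources:VGPU=1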
10:44 <WireLost> cool! I didn't know that. I hope it isn't an NVIDIA-only thing! lol
10:46 <WireLost> The only downside I see with the VirGL deployments is that it cannot bind to a network, only to a local socket, I think. So this would be tricky for the spice-proxy, right?
10:46 <WireLost> for me to access the remote desktop, I use nomachine inside of the VM
10:48 <WireLost> ...so nomachine exports the VirGL-based GPU and it's really fast! Both glxgears and glmark2 work great for testing, as does youtube or odysee
11:05 <noonedeadpunk> it kinda is Nvidia-only. Though AMD also has quite neat SR-IOV for GPUs, which does not have any licensing bullshit, but I never heard of anyone using it
11:06 <noonedeadpunk> iirc nomachine was never free either
11:06 <noonedeadpunk> or is it?
11:07 <noonedeadpunk> as it used to be great indeed, showing very low latency and high resolution, where VNC totally sucked
11:15 <jrosser> B40 will be the first graphics capable model with MIG
11:16 <jrosser> 4 partitions iirc
11:50 <harun> Hello everyone, I am trying magnum cluster api but I am getting an error. Can I use magnum cluster api without manila installed? Since the manila service is not installed I am getting an error saying "There is no sharev2 service enabled in the cloud".
11:59 <jrosser> are you sure that it is an error?
12:02 <harun> sorry, this is not an error message. The test cluster I am trying to create remains in 'CREATE_IN_PROGRESS'. I thought it was caused by this.
12:06 <harun> here is the error i am getting: https://paste.openstack.org/show/bcFMfgtGLrK9pnKTQp2m/
12:07 <harun> magnum version: 17.1.0.dev75, magnum cluster api version: 0.21.0, oslo.db version: 14.1.0
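A few commands that could help narrow this down, assuming a standard openstack CLI with the magnum plugin installed; the cluster name is a placeholder:

    # Check whether a sharev2 (manila) endpoint is registered at all
    openstack catalog list
    openstack service list --long

    # Status and reason for the stuck cluster
    openstack coe cluster show my-test-cluster -c status -c status_reason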
13:41 <jrosser> harun: it might be best to ask in the magnum-cluster-api slack channel
14:00 <noonedeadpunk> or summon mnaser here :D
14:00 <noonedeadpunk> (but likely slack is better)
14:02 <mnaser> harun: we already fixed that issue a while back methinks, but let me share this
14:02 <mnaser> https://vexxhost.github.io/magnum-cluster-api/admin/troubleshooting/
14:02 <mnaser> I'd follow the "stuck in CREATE_IN_PROGRESS" steps
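Beyond the linked guide, a hedged sketch of the usual first look on the Cluster API management cluster; the cluster name and namespace are placeholders and vary per deployment.

    # On the CAPI management cluster: state of the workload cluster objects
    kubectl get clusters -A
    kubectl get machines -A

    # Drill into a specific cluster's conditions and events
    kubectl describe cluster my-test-cluster -n magnum-system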
14:31 <opendevreview> Merged openstack/ansible-hardening master: Apply architecture specific audit rules  https://review.opendev.org/c/openstack/ansible-hardening/+/930451
14:41 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Remove db_sync --migrate command  https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/930591
14:47 <WireLost> nomachine is not open source but there's a free version. Maybe it's worth checking FreeNX, x2go, RustDesk or a similar alternative but, well, nomachine works great
14:58 <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-hardening stable/2024.1: Apply architecture specific audit rules  https://review.opendev.org/c/openstack/ansible-hardening/+/930592
14:59 <opendevreview> Dmitriy Rabotyagov proposed openstack/ansible-hardening stable/2023.2: Apply architecture specific audit rules  https://review.opendev.org/c/openstack/ansible-hardening/+/930593
15:47 <opendevreview> Merged openstack/openstack-ansible-rabbitmq_server master: Bump rabbitmq version to 3.13.7  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/930446
