Friday, 2024-04-12

fricklerthat's the cold start issue I mentioned, yes06:22
admin1what permission is needed to create a LB ? Getting "Forbidden. Insufficient permissions of the requested operation"08:34
admin1octavia lb 08:34
frickleradmin1: octavia uses some special roles in keystone for that, should be mentioned in the upstream octavia docs08:40
admin1yeah .. the docs at https://docs.openstack.org/octavia/latest/configuration/policy.html say load-balancer_member .. but that also gave insufficient permissions08:43
opendevreviewMerged openstack/kayobe stable/2023.1: Revert "Mark kayobe-tox-ansible job as non-voting"  https://review.opendev.org/c/openstack/kayobe/+/91535608:44
frickleradmin1: did you set the role on the correct project? otherwise this might be a bug, but possibly in octavia rather than in kolla08:49
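For context, granting the Octavia role is typically a one-liner against keystone; a minimal sketch, where the project and user names are placeholders and the role name comes from the octavia policy docs linked above:

    # give a user the role octavia's default policy checks for
    openstack role add --project demo-project --user demo-user load-balancer_member
    # confirm it landed on the right project
    openstack role assignment list --user demo-user --project demo-project --names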
opendevreviewMerged openstack/kayobe stable/2023.2: Revert "Mark kayobe-tox-ansible job as non-voting"  https://review.opendev.org/c/openstack/kayobe/+/91535508:49
opendevreviewMark Goddard proposed openstack/ansible-collection-kolla master: Add stats callback plugin  https://review.opendev.org/c/openstack/ansible-collection-kolla/+/91034709:06
opendevreviewMark Goddard proposed openstack/kayobe master: Fix Dell OS6 and Dell OS9 switch configuration  https://review.opendev.org/c/openstack/kayobe/+/91555409:18
opendevreviewMark Goddard proposed openstack/kayobe master: Introduce max fail percentage to playbooks  https://review.opendev.org/c/openstack/kayobe/+/81828809:39
opendevreviewMaksim Malchuk proposed openstack/kolla-ansible master: Fix the issue in nova-libvirt when hostname != fqdn  https://review.opendev.org/c/openstack/kolla-ansible/+/91556010:09
opendevreviewMatúš Jenča proposed openstack/kolla-ansible master: Add backend TLS between MariaDB and ProxySQL  https://review.opendev.org/c/openstack/kolla-ansible/+/90991211:03
PrzemekKHi. How can we use different cinder backends from ceph ? Do the volumes need to be created manually in each AZ, or is there some parameter or metadata for the command "openstack server create --boot-from-volume" ? Currently volumes are created in the "nova" AZ, but we changed the name https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html12:44
SvenKieskePrzemekK: did you read the part about nova? https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html#nova there you can specify a different ceph backend for nova-compute right at the end of that config section. HTH?13:34
PrzemekKYes, but that is for the vms pool. We are creating servers with the command: openstack server create --flavor xxx --image xxx --nic net-id=xxx --boot-from-volume 100 test . It tries to create the volume in the nova AZ, but we don't have that AZ. We also changed the [DEFAULT] section in nova/cinder conf, but volumes are still created in the nova AZ: default_availability_zone = W1-az, default_schedule_zone = W1-az, storage_availability_zone = W1-az13:43
PrzemekKwe don't want to create root disks in the vms pool, but in the volumes pool13:43
opendevreviewSven Kieske proposed openstack/kolla master: CI/Master only: pin opensearch{-dashboards}  https://review.opendev.org/c/openstack/kolla/+/91532213:51
PrzemekKThe question is: if we use openstack server create --availability-zone W2-az, will it automatically use the pool available in that AZ ?13:51
opendevreviewAlex Welsh proposed openstack/kolla-ansible master: Automate prometheus blackbox configuration  https://review.opendev.org/c/openstack/kolla-ansible/+/91242013:55
SvenKieskeI _believe_ that's not really guaranteed, afaik there was a question relating to this on the ML recently. the thing is, AZs are a different thing in nova and in cinder.13:57
SvenKieskeI'm not sure that's possible this way. What you can of course do is create the volume in the volumes pool first, and only then start the server with it.14:03
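A minimal sketch of that two-step flow, reusing the AZ name from this discussion; the image, flavor, and network IDs are placeholders:

    # 1. create the root volume explicitly in the desired cinder AZ
    openstack volume create --size 100 --image <image-id> --availability-zone W2-az test-root
    # 2. boot the server from the pre-created volume
    openstack server create --flavor <flavor> --nic net-id=<net-id> \
      --volume test-root --availability-zone W2-az test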
SvenKieskething is: the vms pool is supposed to be ephemeral, the volumes pool is persistent (so to say). why would you mix that up?14:03
opendevreviewMichal Arbet proposed openstack/kolla master: Fix handling configs in base image  https://review.opendev.org/c/openstack/kolla/+/91544014:08
opendevreviewMichal Arbet proposed openstack/kolla master: Fix handling configs in base image  https://review.opendev.org/c/openstack/kolla/+/91544014:28
opendevreviewMichal Arbet proposed openstack/kolla master: Fix handling configs in base image  https://review.opendev.org/c/openstack/kolla/+/91544014:30
opendevreviewMichal Arbet proposed openstack/kolla master: Fix handling configs in base image  https://review.opendev.org/c/openstack/kolla/+/91544014:40
kevkoSvenKieske:  mnasiadka: better ? 14:40
kevko^14:40
opendevreviewMichal Arbet proposed openstack/kolla master: Fix handling configs in base image  https://review.opendev.org/c/openstack/kolla/+/91544014:54
SvenKieskeI agree with "please be specific if you criticise something" :) I think I didn't put a -1 there just yet. :)14:54
kevkoSvenKieske: now i think it is straightforward code .. a simple implementation14:55
kevkoPrzemekK: can u repeat your problem ? i think i can help you 14:57
kevkoPrzemekK: you need to set up cinder to have storage in az1, az2, az314:58
kevkoPrzemekK: then you also need to set up the default az ...14:59
kevkoPrzemekK: when nova wants to schedule an instance to az1 ... it will create the volume in az1 ...15:00
kevkoPrzemekK: the last thing: you need to ask yourself whether you will support cross-az-attach or not15:00
kevkoPrzemekK: for example, i've already set up three cephs ... every ceph in a different az .. and a group of hypervisors in all three azs ...15:01
kevkoPrzemekK: of course you need to set up aggregate groups in nova ... azs .. etc ..15:02
kevkothis is an architecture discussion .. but if you know how to set it up ... you are able to configure it in kolla .. (a few user config overrides needed ... i would say)15:03
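As a sketch of one such override, the cross-az-attach decision kevko mentions maps to a nova option that kolla-ansible merges from a user config file; whether to disable it is a deployment choice, not a given:

    # /etc/kolla/config/nova.conf (merged into nova.conf by kolla-ansible)
    [cinder]
    # false = volumes must live in the same AZ as the instance
    cross_az_attach = false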
kevkoPrzemekK: there is also some tuning patch from me https://review.opendev.org/c/openstack/kolla-ansible/+/907166 15:03
PrzemekKI will look at it. Basically, for now we are at the stage where we configured the defaults in cinder/nova as in https://www.ibm.com/support/pages/how-change-default-availability-zone-name-nova . We changed globals and the cinder template file from kolla15:09
PrzemekKcinder_ceph_backends:
  - name: "rbd-1"
    cluster: "ceph"
    pool: "VolumesStandardW1"
    availability_zone: "W1-az"
    enabled: "{{ cinder_backend_ceph | bool }}"
  - name: "rbd-2"
    cluster: "rbd2"
    pool: "VolumesStandardW2"
    availability_zone: "W2-az"
    enabled: "{{ cinder_backend_ceph | bool }}"15:09
PrzemekKvi /usr/local/share/kolla-ansible/ansible/roles/cinder/templates/cinder.conf.j215:10
PrzemekK{% if cinder_backend_ceph | bool %}
{% for backend in cinder_ceph_backends %}
[{{ backend.name }}]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = {{ backend.name }}
#rbd_pool = {{ ceph_cinder_pool_name }}
rbd_pool = {{ backend.pool }}15:10
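For reference, each loop iteration renders a backend section like the one below; the backend_availability_zone line is an option cinder supports upstream, shown here as a suggestion rather than something already in PrzemekK's template:

    [rbd-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-1
    rbd_pool = VolumesStandardW1
    # optional per-backend AZ, overrides storage_availability_zone for this backend
    backend_availability_zone = W1-az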
opendevreviewMichal Arbet proposed openstack/kolla-ansible master: Refactor external ceph  https://review.opendev.org/c/openstack/kolla-ansible/+/90716615:11
PrzemekKwe changed the vms pool for nova in each AZ as in https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html#nova15:12
kevkoPrzemekK: do you have multiple ceph clusters or just one cluster with multiple pools ? 15:12
PrzemekKOne cluster15:12
PrzemekKmultiple pools15:12
PrzemekKdifferent names of pool per az15:12
kevkoPrzemekK: okay, so why do you have cluster: ceph and cluster: rbd2 ?15:13
kevkoPrzemekK: you should have ceph and ceph15:13
PrzemekKrbd2 is pool used in second datacenter15:14
PrzemekKas primary15:14
kevko - name: "rbd-2"     cluster: "rbd2"     pool: "VolumesStandardW2"     availability_zone: "W2-az"     enabled: "{{ cinder_backend_ceph | bool }}"15:14
kevko^^ you've sent this 15:14
kevkoso VolumesStandardW1 VolumesStandardW2 pools are on same ceph cluster right ? 15:15
PrzemekKyes15:15
PrzemekKgood question15:15
kevkoi know because i wrote a patch to support multiple clusters at once :D15:16
kevkoPrzemekK: it's also because the ceph python library by default chooses a config parsed from the keyring15:17
PrzemekKWhat issues are there if rbd_cluster_name in the cinder config is different but there is one cluster15:17
kevkohttps://docs.ceph.com/en/latest/rados/operations/user-management/#keyring-management <<< 15:18
kevkoPrzemekK: i am not sure ... it's been some time now ...15:18
kevkoPrzemekK: normally probably nothing ... but it's a mess in the config15:18
kevko(as all files will point to the same cluster)15:19
kevkoPrzemekK: normally it works as follows: cluster: ceph and some keyrings, right ... so kolla-ansible will try to find ceph.conf and ceph.auth.nova.keyring ... etc ...15:19
PrzemekKfor now the real question is whether we should first create volumes from images and then create servers via nova boot - that command will be outdated soon15:19
kevkoPrzemekK: in your case it will try to find rbd2.conf and rbd2.auth.nova.keyring15:20
kevkoPrzemekK: no it's not needed15:20
kevkoPrzemekK: it will work together without it 15:20
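In other words, the cluster name only selects which files are looked up under /etc/ceph; with a single real cluster the two sets are duplicates pointing at the same monitors. A sketch of the expected layout (the keyring names follow the usual kolla naming and are an assumption here):

    /etc/ceph/ceph.conf                    # used by backends with cluster: ceph
    /etc/ceph/ceph.client.cinder.keyring
    /etc/ceph/rbd2.conf                    # used by backends with cluster: rbd2
    /etc/ceph/rbd2.client.cinder.keyring   # same content, different name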
PrzemekKin this case we still have the volume after instance delete - for example if i want to boot from cdrom etc15:20
PrzemekKor should we go with ephemeral disks for the OS and configure a pool in each AZ on the nova side15:21
kevkothere is no reason why you shouldn't boot instances from volume 15:22
kevkowe have a customer who has it exactly as you said 15:22
kevkovolume -> boot 15:22
kevkothree azs15:22
kevkothree clusters in three racks 15:23
kevko3 groups of nova hypervisors 15:23
PrzemekKhow do i boot from cdrom to rescue an instance if i use the vms pool for the OS ?15:23
kevkousing only volumes ..not vms 15:23
kevkoPrzemekK: how from cdrom ? 15:24
PrzemekKif an instance failed we deleted the vm - the volume stays on ceph15:24
PrzemekKand then we create a new vm to boot from cdrom: nova boot --flavor Windows2022 --block-device id=16670b0c-14fd-4725-a4b9-c31dd13f2f6a,source=volume,dest=volume,type=cdrom,bootindex=1 --block-device id=ad3d8964-1ec7-4a9f-b470-a493bdcc01e4,source=volume,dest=volume,type=cdrom,bootindex=2 --block-device id=4f05eef0-e452-4e89-a93d-696019db9d63,source=volume,dest=volume,type=disk,bootindex=0 --nic net-id=32e4640b-6904-4f5a-8d99-c58889c590a8 Windows2022VM15:25
PrzemekKand customer can rescue it15:25
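For reference, a rough equivalent of that nova boot call with the unified openstack client, assuming a recent python-openstackclient whose server create supports --block-device (the UUIDs are the ones from the command above):

    openstack server create --flavor Windows2022 \
      --block-device uuid=4f05eef0-e452-4e89-a93d-696019db9d63,source_type=volume,destination_type=volume,boot_index=0 \
      --block-device uuid=16670b0c-14fd-4725-a4b9-c31dd13f2f6a,source_type=volume,destination_type=volume,device_type=cdrom,boot_index=1 \
      --block-device uuid=ad3d8964-1ec7-4a9f-b470-a493bdcc01e4,source_type=volume,destination_type=volume,device_type=cdrom,boot_index=2 \
      --nic net-id=32e4640b-6904-4f5a-8d99-c58889c590a8 Windows2022VM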
PrzemekKhow would we do that with OS data in the vms pool15:26
kevkoto be honest i never tried cdrom :D 15:26
kevkoit's normal rbd device 15:26
kevkosorry disk15:26
kevkoi think you can normally create a volume and boot from it no ? 15:27
PrzemekKso is it better to go with only disks as volumes, or is the vms pool for OS data ok15:28
kevkoi am still not sure what rescue means ...15:28
kevkofs repair after hard shutdown ? 15:28
kevkoor what ? 15:28
kevkobecause anything from ceph can be mapped to an nbd device via rbd-nbd15:29
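A minimal sketch of that rbd-nbd path, using the pool and volume UUID seen earlier in this discussion; the client id, partition, and mount point are assumptions:

    # map the cinder volume as a local block device (needs ceph.conf + keyring)
    rbd-nbd map VolumesStandardW1/volume-4f05eef0-e452-4e89-a93d-696019db9d63 --id cinder
    # mount the guest filesystem and edit it, e.g. /mnt/tmp/etc/shadow
    mount /dev/nbd0p1 /mnt/tmp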
PrzemekKYou need to run an external iso like gparted to reset the windows password etc15:30
kevkonot working with windows :D 15:31
kevkoif i want to reset a password ... i will mount the disk from ceph to /mnt/tmp ... change the pass in /etc/shadow .... :D15:31
PrzemekKNormally we run: openstack server create --boot-from-volume --availability-zone W2-az, but it always creates the volume in the default AZ for cinder15:32
kevkoPrzemekK: because you don't have it configured in the right way :)15:32
PrzemekKis it about cluster: "rbd2", or what's wrong15:33
kevkoi would suggest you fix this also ...15:33
kevkobut it's about #storage_availability_zone = nova15:33
kevkoif you have 3 cinder-volume services and every cinder-volume has 3 rbd backends inside ... you should have cluster enabled in cinder ... and just set storage_availability_zone = AZx for every node ....15:35
kevko(different of course)15:35
kevkofrom this point your instances in az1 will create volumes in az1 15:35
kevkothen you need to think about cross-az-attach 15:36
kevkobut yeah ... this will fix your issues with different az in cinder and a different one in nova, i would say15:36
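Put together, each cinder-volume node would render roughly the following [DEFAULT] block; a sketch of kevko's layout, where the cluster name is a placeholder and only storage_availability_zone differs per node:

    [DEFAULT]
    # same value on every cinder-volume node enables active-active clustering
    cluster = cinder-cluster
    # per-node: W1-az on the first controller, W2-az on the second, etc.
    storage_availability_zone = W1-az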
PrzemekKOk let's try to fix the cluster name15:36
kevkobtw15:37
kevkoPrzemekK: check your /etc/ceph in the cinder-volume container15:37
kevkoPrzemekK: you will see ceph.conf and rbd2.conf i would say 15:37
kevkoand they will be the same :D 15:38
PrzemekKIn Antelope+  https://that.guru/blog/availability-zones-in-openstack-and-openshift-part-1/15:38
PrzemekKyes same15:38
kevkoPrzemekK: trust me .. i configured this :D ... and i am also the author of the refactor .. and also using it on one deployment :D15:40
kevkoi know what i am talking about ... (i hope :) )15:41
SvenKieske+1 on trusting kevko in this case; I haven't had this use case myself so far, so I guess the patch author knows best ;)15:42
kevko:D15:43
kevkoSvenKieske: the sad thing is that i have another ceph refactor up for review for months .. and nothing :(15:43
PrzemekKit's not designed for one ceph cluster and different pool names per cluster ^^15:43
kevkoPrzemekK: what ? 15:43
kevkoPrzemekK: it is designed for one ..and also for multiple 15:44
kevkoPrzemekK: use paste.openstack.org and place the config here for cinder volume 15:44
kevkoPrzemekK: be careful and remove sensitive data15:44
kevkoSvenKieske: did you check this ? https://review.opendev.org/c/openstack/kolla/+/91544015:44
SvenKieskekevko: yes but I'm still stuck in PTG stuff and will look at it next week I guess :)15:46
kevkoSvenKieske: 60 lines ? :D 15:46
PrzemekKwe needed to change https://github.com/openstack/kolla-ansible/blob/stable/2023.2/ansible/roles/cinder/templates/cinder.conf.j2 rbd_pool = {{ ceph_cinder_pool_name }} to rbd_pool = {{ backend.pool }} and in globals set:
cinder_ceph_backends:
  - name: "rbd-1"
    cluster: "ceph"
    pool: "VolumesStandardW1"15:47
SvenKieskeI wanted to quit working for today like 45 minutes ago :P so yes, next week, I will look at it without hurrying. if I hurry reviews I regularly miss stuff :) is it urgent?15:47
PrzemekKso it can work on 1 cluster15:48
kevkowell, this will not work out-of-the-box without my latest patch, which is not merged yet :D ... as all the reviewers don't have time or something :D15:50
kevkohttps://review.opendev.org/c/openstack/kolla-ansible/+/907166 << as you can see ... it's defined here ...15:50
kevkobut ... you can define it into globals directly 15:50
kevkoso...15:50
kevkolet me provide globals 15:50
mnasiadkakevko, frickler: https://review.opendev.org/c/openstack/kolla-ansible/+/914107 - can we get that merged?15:52
kevkoPrzemekK: can u send me cinder-volume config current ?  ...just remove sensitive data 15:53
kevkoPrzemekK: paste.openstack.org15:53
kevkomnasiadka: done 15:55
mnasiadkakevko: gracias15:55
kevkomnasiadka: can u please re-review https://review.opendev.org/c/openstack/kolla-ansible/+/907166 << as you can see here ^^ it would be helpful for PrzemekK :D15:56
kevkoPrzemekK: what you need is to use a config override with your current version15:56
mnasiadkakevko: I'll try to have a look, but it might be Monday15:56
kevkomnasiadka: thanks, currently i have 12 merge requests there :( 15:57
kevkomnasiadka: most of them are tested and working in prod ..15:57
mnasiadkaonly 12?15:57
mnasiadkagive RP+1 to everything you feel is important to get in Caracal15:57
mnasiadkaI'll go through the priority list as soon as I can15:58
kevkomnasiadka: thanks, 12 ? is that a few ? :D15:58
mnasiadkabut now i'm in a workshop with a customer until the end of the day15:58
kevkomnasiadka: I mean .. they are not one-liners :D15:58
kevko:D 15:58
kevkomnasiadka: yeah ..no problem 15:58
PrzemekKIt's https://paste.openstack.org/show/bahsIM3F3OlRKbadhgpk/15:59
kevkoPrzemekK: firstly remove default_availability_zone16:00
kevkodefault_schedule_zone < doesn't exist .. so you can remove it16:00
kevkoand on the second host of cinder-volume set storage_availability_zone to another one 16:03
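In kolla-ansible that per-host difference can be expressed with host-scoped override files; a hedged sketch, where ctrl01 appears in the service list pasted below and ctrl02 is assumed:

    # /etc/kolla/config/cinder/ctrl01/cinder.conf
    [DEFAULT]
    storage_availability_zone = W1-az

    # /etc/kolla/config/cinder/ctrl02/cinder.conf
    [DEFAULT]
    storage_availability_zone = W2-az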
kevkoPrzemekK: also, your ceph deployment is not clustered ! 16:03
PrzemekKSo on one openstack controller set storage_availability_zone to DC1 and on a different controller DC2 ?16:06
PrzemekKthat's where cinder-volume is running16:06
kevkoPrzemekK: yep 16:06
kevkoPrzemekK: so both controllers will handle both storage availability zones ..16:07
PrzemekKright now it is like this: cinder-volume / ctrl01@rbd-1 / W1-az and cinder-volume / ctrl01@rbd-2 / W2-az (Name, Host, AZ)16:08
kevkoPrzemekK: yes, it's not good 16:11
kevkoPrzemekK: check my config https://paste.openstack.org/show/bLOyV82gCT444wUE3yv1/16:14
kevkoPrzemekK: ignore that they are down ... it's my testing openstack :D :D ... but what's visible is that i have all azs, and the backends in those azs, available on controller0 (just turned it on :D :D )16:15
PrzemekKbetter https://postimg.cc/kV7hkrzk16:22
PrzemekKit was all nova before16:22
PrzemekKright now it's creating the volume based on which controller the request goes to. I run server create --availability-zone W2-az and sometimes it creates the volume in W1, sometimes W216:31
kevkoPrzemekK: check your aggregates ..16:36
kevkoopenstack aggregate list and show them 16:37
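For completeness, wiring an aggregate to an AZ and checking it usually looks like this; the compute host name is a placeholder:

    openstack aggregate create --zone W1-az W1
    openstack aggregate add host W1 <compute-host>
    openstack aggregate show W1   # availability_zone should read W1-az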
PrzemekKhttps://postimg.cc/PNFzmLHL16:38
PrzemekKhttps://paste.openstack.org/show/bSZTGcOYbgRM9HV4vBHf/16:39
kevkoPrzemekK: w8 a minute16:39
PrzemekKsure16:39
kevkoPrzemekK: it's an old test cluster ... i need to maintain it a little bit :D16:44
kevkoPrzemekK: but i think you are missing the az in the aggregate16:49
PrzemekKIt's set: openstack aggregate set --zone W1-az W117:01
kevkoI'm not quite at home here ... but let me recover my cluster and i will check everything i can17:02
PrzemekKanyway thanks a lot for your time. It gives me more information on what to look for17:05
kevkoPrzemekK: I remember that I also had a hard time configuring this :D17:08
*** atmark_ is now known as atmark17:16
opendevreviewMerged openstack/kolla-ansible master: ironic: disable heartbeat_in_pthreads  https://review.opendev.org/c/openstack/kolla-ansible/+/91410717:23
kevkoPrzemekK: did you set this ? openstack aggregate set --property availability_zone=az1 az1 ? 17:24
kevkoSorry, need to go :( 17:29
opendevreviewMichal Arbet proposed openstack/kolla-ansible master: Refactor external ceph  https://review.opendev.org/c/openstack/kolla-ansible/+/90716617:31
opendevreviewMichal Arbet proposed openstack/kolla-ansible master: Copy all keyrings and configs to cinder-backup  https://review.opendev.org/c/openstack/kolla-ansible/+/90716717:31
kevkoPrzemekK: try to experiment also with cross-az-attach17:34
opendevreviewMichal Nasiadka proposed openstack/kolla-ansible stable/2023.2: ironic: disable heartbeat_in_pthreads  https://review.opendev.org/c/openstack/kolla-ansible/+/91553218:51
opendevreviewMichal Nasiadka proposed openstack/kolla-ansible stable/2023.1: ironic: disable heartbeat_in_pthreads  https://review.opendev.org/c/openstack/kolla-ansible/+/91553318:52
opendevreviewMichal Nasiadka proposed openstack/kolla-ansible stable/zed: ironic: disable heartbeat_in_pthreads  https://review.opendev.org/c/openstack/kolla-ansible/+/91553418:52
