frickler | that's the cold start issue I mentioned, yes | 06:22 |
---|---|---|
admin1 | what permission is needed to create an lb ? Getting: Forbidden. Insufficient permissions of the requested operation | 08:34 |
admin1 | octavia lb | 08:34 |
frickler | admin1: octavia uses some special roles in keystone for that, should be mentioned in the upstream octavia docs | 08:40 |
admin1 | yeah .. docs said https://docs.openstack.org/octavia/latest/configuration/policy.html load-balancer_member .. but that also gave insufficient permissions | 08:43 |
opendevreview | Merged openstack/kayobe stable/2023.1: Revert "Mark kayobe-tox-ansible job as non-voting" https://review.opendev.org/c/openstack/kayobe/+/915356 | 08:44 |
frickler | admin1: did you set the role on the correct project? otherwise this might be a bug, but possibly in octavia rather than in kolla | 08:49 |
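For reference, a minimal sketch of what frickler and the Octavia policy docs point at: the load-balancer_member role has to be assigned on the project the user actually creates the load balancer in. The user and project names below are placeholders:

```shell
# placeholders: replace demo-user / demo-project with the real names
openstack role add --user demo-user --project demo-project load-balancer_member

# verify the assignment landed on the right project
openstack role assignment list --user demo-user --project demo-project --names
```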
opendevreview | Merged openstack/kayobe stable/2023.2: Revert "Mark kayobe-tox-ansible job as non-voting" https://review.opendev.org/c/openstack/kayobe/+/915355 | 08:49 |
opendevreview | Mark Goddard proposed openstack/ansible-collection-kolla master: Add stats callback plugin https://review.opendev.org/c/openstack/ansible-collection-kolla/+/910347 | 09:06 |
opendevreview | Mark Goddard proposed openstack/kayobe master: Fix Dell OS6 and Dell OS9 switch configuration https://review.opendev.org/c/openstack/kayobe/+/915554 | 09:18 |
opendevreview | Mark Goddard proposed openstack/kayobe master: Introduce max fail percentage to playbooks https://review.opendev.org/c/openstack/kayobe/+/818288 | 09:39 |
opendevreview | Maksim Malchuk proposed openstack/kolla-ansible master: Fix the issue in nova-libvirt when hostname != fqdn https://review.opendev.org/c/openstack/kolla-ansible/+/915560 | 10:09 |
opendevreview | Matúš Jenča proposed openstack/kolla-ansible master: Add backend TLS between MariaDB and ProxySQL https://review.opendev.org/c/openstack/kolla-ansible/+/909912 | 11:03 |
PrzemekK | Hi. How can we use different cinder backends with ceph? Do the volumes need to be created manually in each AZ, or is there some parameter or metadata for "openstack server create --boot-from-volume"? Currently volumes try to get created in the nova AZ, but we changed the name https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html | 12:44 |
SvenKieske | PrzemekK: did you read the part about nova? https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html#nova - there you can specify a different ceph backend for nova-compute, right at the end of that config section. HTH? | 13:34 |
PrzemekK | Yes, but that is for the vms pool. We are creating servers with the command openstack server create --flavor xxx --image xxx --nic net-id=xxx --boot-from-volume 100 test . It tries to create the volume in the nova AZ, but we don't have it. We also changed the [DEFAULT] section in the nova/cinder conf, but volumes are still created in the nova AZ: default_availability_zone = W1-az default_schedule_zone = W1-az storage_availability_zone = W1-az | 13:43 |
PrzemekK | we don't want to create root disks in the vms pool but in the volumes pool | 13:43 |
opendevreview | Sven Kieske proposed openstack/kolla master: CI/Master only: pin opensearch{-dashboards} https://review.opendev.org/c/openstack/kolla/+/915322 | 13:51 |
PrzemekK | The question is whether, if we use openstack server create --availability-zone W2-az, it will automatically use the pool available in that AZ | 13:51 |
opendevreview | Alex Welsh proposed openstack/kolla-ansible master: Automate prometheus blackbox configuration https://review.opendev.org/c/openstack/kolla-ansible/+/912420 | 13:55 |
SvenKieske | I _believe_ that's not really guaranteed, afaik there was a question relating to this on the ML recently. the thing is, AZs are a different thing in nova and in cinder. | 13:57 |
SvenKieske | I'm not sure that's possible this way. What you can of course do is create the volume in the volumes pool first, and then boot the server from it. | 14:03 |
SvenKieske | thing is: the vms pool is supposed to be ephemeral, the volumes pool persistent (so to say). why would you mix that up? | 14:03 |
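A hedged sketch of the volume-first workflow SvenKieske describes, using the W2-az name from the discussion; the image, flavor and network IDs are placeholders:

```shell
# create a bootable root volume from an image in the desired cinder AZ
openstack volume create --image <image-id> --size 100 \
    --availability-zone W2-az root-vol-test

# boot the server from that existing volume instead of using --boot-from-volume
openstack server create --flavor <flavor> --volume root-vol-test \
    --nic net-id=<net-id> --availability-zone W2-az test
```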
opendevreview | Michal Arbet proposed openstack/kolla master: Fix handling configs in base image https://review.opendev.org/c/openstack/kolla/+/915440 | 14:08 |
opendevreview | Michal Arbet proposed openstack/kolla master: Fix handling configs in base image https://review.opendev.org/c/openstack/kolla/+/915440 | 14:28 |
opendevreview | Michal Arbet proposed openstack/kolla master: Fix handling configs in base image https://review.opendev.org/c/openstack/kolla/+/915440 | 14:30 |
opendevreview | Michal Arbet proposed openstack/kolla master: Fix handling configs in base image https://review.opendev.org/c/openstack/kolla/+/915440 | 14:40 |
kevko | SvenKieske: mnasiadka: better ? | 14:40 |
kevko | ^ | 14:40 |
opendevreview | Michal Arbet proposed openstack/kolla master: Fix handling configs in base image https://review.opendev.org/c/openstack/kolla/+/915440 | 14:54 |
SvenKieske | I agree with "please be specific if you criticise something" :) I think I didn't put a -1 there just yet. :) | 14:54 |
kevko | SvenKieske: now i think it is straight code ..simple implementation | 14:55 |
kevko | PrzemekK: can u repeat your problem ? i think i can help you | 14:57 |
kevko | PrzemekK: you need to set up cinder to have storage in az1, az2, az3 | 14:58 |
kevko | PrzemekK: then you also need to set up a default az ... | 14:59 |
kevko | PrzemekK: when nova wants to schedule an instance to az1 ... it will create the volume in az1 ... | 15:00 |
kevko | PrzemekK: the last thing you need to ask yourself is whether you will support cross-az-attach or not | 15:00 |
kevko | PrzemekK: for example, i've already set up three cephs ... every ceph in a different az .. and a group of hypervisors in all three azs ... | 15:01 |
kevko | PrzemekK: of course you need to set up aggregation groups in nova ... azs .. etc .. | 15:02 |
kevko | this is an architecture discussion .. but if you know how to set it up ... you are able to configure it in kolla .. (a few user config overrides needed ... i would say) | 15:03 |
kevko | PrzemekK: there is also some tuning patch from me https://review.opendev.org/c/openstack/kolla-ansible/+/907166 | 15:03 |
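A small sketch of the cross-az-attach knob kevko mentions (and comes back to later), assuming kolla-ansible's usual /etc/kolla/config override location; with it disabled, nova should create the boot volume in the instance's AZ rather than cinder's default zone, provided the cinder and nova AZ names match:

```ini
# /etc/kolla/config/nova.conf  (kolla-ansible global nova override)
[cinder]
# require volumes and instances to live in the same AZ; nova then creates
# boot volumes in the instance's AZ instead of cinder's default zone
cross_az_attach = False
```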
PrzemekK | I will look at it. Basically for now we are at the stage where we configured the defaults in cinder/nova as in https://www.ibm.com/support/pages/how-change-default-availability-zone-name-nova . We changed globals and the cinder template file from kolla | 15:09 |
PrzemekK | cinder_ceph_backends: - name: "rbd-1" cluster: "ceph" pool: "VolumesStandardW1" availability_zone: "W1-az" enabled: "{{ cinder_backend_ceph | bool }}" - name: "rbd-2" cluster: "rbd2" pool: "VolumesStandardW2" availability_zone: "W2-az" enabled: "{{ cinder_backend_ceph | bool }}" | 15:09 |
PrzemekK | vi /usr/local/share/kolla-ansible/ansible/roles/cinder/templates/cinder.conf.j2 | 15:10 |
PrzemekK | {% if cinder_backend_ceph | bool %} {% for backend in cinder_ceph_backends %} [{{ backend.name }}] volume_driver = cinder.volume.drivers.rbd.RBDDriver volume_backend_name = {{ backend.name }} #rbd_pool = {{ ceph_cinder_pool_name }} rbd_pool = {{ backend.pool }} | 15:10 |
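For readability, the flattened globals and template override pasted above appear to be roughly the following (reconstructed from the chat lines, not verified against the actual files):

```yaml
# globals.yml (reconstructed from the paste above)
cinder_ceph_backends:
  - name: "rbd-1"
    cluster: "ceph"
    pool: "VolumesStandardW1"
    availability_zone: "W1-az"
    enabled: "{{ cinder_backend_ceph | bool }}"
  - name: "rbd-2"
    cluster: "rbd2"
    pool: "VolumesStandardW2"
    availability_zone: "W2-az"
    enabled: "{{ cinder_backend_ceph | bool }}"
```

and the edited cinder.conf.j2 hunk:

```jinja
{% if cinder_backend_ceph | bool %}
{% for backend in cinder_ceph_backends %}
[{{ backend.name }}]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = {{ backend.name }}
{# original line replaced: rbd_pool = {{ ceph_cinder_pool_name }} #}
rbd_pool = {{ backend.pool }}
```

(the paste cuts off here)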
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Refactor external ceph https://review.opendev.org/c/openstack/kolla-ansible/+/907166 | 15:11 |
PrzemekK | we changed vms for nova in each AZ as in https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html#nova | 15:12 |
kevko | PrzemekK: do you have multiple ceph clusters or just one cluster with multiple pools ? | 15:12 |
PrzemekK | One cluster | 15:12 |
PrzemekK | multiple pools | 15:12 |
PrzemekK | different names of pool per az | 15:12 |
kevko | PrzemekK: okay, so why do you have cluster: ceph and cluster: rbd2 ? | 15:13 |
kevko | PrzemekK: you should have ceph and ceph | 15:13 |
PrzemekK | rbd2 is pool used in second datacenter | 15:14 |
PrzemekK | as primary | 15:14 |
kevko | - name: "rbd-2" cluster: "rbd2" pool: "VolumesStandardW2" availability_zone: "W2-az" enabled: "{{ cinder_backend_ceph | bool }}" | 15:14 |
kevko | ^^ you've sent this | 15:14 |
kevko | so the VolumesStandardW1 and VolumesStandardW2 pools are on the same ceph cluster, right ? | 15:15 |
PrzemekK | yes | 15:15 |
PrzemekK | good question | 15:15 |
kevko | i know it because i wrote a patch to support more clusters at once :D | 15:16 |
kevko | PrzemekK: it's also because the ceph python library by default chooses a config parsed from the keyring | 15:17 |
PrzemekK | What issues are there if rbd_cluster_name in the cinder config is different and there is only one cluster | 15:17 |
kevko | https://docs.ceph.com/en/latest/rados/operations/user-management/#keyring-management <<< | 15:18 |
kevko | PrzemekK: i am not sure ... it's been some time now ... | 15:18 |
kevko | PrzemekK: normally probably nothing ... but it's a mess in the config | 15:18 |
kevko | (as all files will point into same cluster) | 15:19 |
kevko | PrzemekK: normally it works as follows: cluster: ceph and some keyrings, right .... so kolla-ansible will try to find ceph.conf and ceph.auth.nova.keyring ... etc ... | 15:19 |
PrzemekK | for now the real question is whether we should first create volumes from images and then create servers via the nova boot command - that command will be outdated soon | 15:19 |
kevko | PrzemekK: in your case it will try to find rbd2.conf and rbd2.auth.nova.keyring | 15:20 |
kevko | PrzemekK: no it's not needed | 15:20 |
kevko | PrzemekK: it will work fine without it | 15:20 |
PrzemekK | in this case we still have the volume after instance delete - for example if i want to boot from cdrom etc | 15:20 |
PrzemekK | or should we go with ephemeral disks for the OS and configure the pool in each AZ on the nova side | 15:21 |
kevko | there is no reason why you shouldn't boot instances from volume | 15:22 |
kevko | we have a customer who has it exactly as you said | 15:22 |
kevko | volume -> boot | 15:22 |
kevko | three azs | 15:22 |
kevko | three clusters in three racks | 15:23 |
kevko | 3 groups of nova hypervisors | 15:23 |
PrzemekK | how do I boot from cdrom to rescue an instance if I use the vms pool for the OS ? | 15:23 |
kevko | using only volumes ..not vms | 15:23 |
kevko | PrzemekK: how from cdrom ? | 15:24 |
PrzemekK | if an instance fails we delete the vm - the volume stays on ceph | 15:24 |
PrzemekK | and then we create new vm to boot from cdrom nova boot --flavor Windows2022 --block-device id=16670b0c-14fd-4725-a4b9-c31dd13f2f6a,source=volume,dest=volume,type=cdrom,bootindex=1 --block-device id=ad3d8964-1ec7-4a9f-b470-a493bdcc01e4,source=volume,dest=volume,type=cdrom,bootindex=2 --block-device id=4f05eef0-e452-4e89-a93d-696019db9d63,source=volume,dest=volume,type=disk,bootindex=0 --nic net-id=32e4640b-6904-4f5a-8d99-c58889c590a8 Windows2022VM | 15:25 |
PrzemekK | and customer can rescue it | 15:25 |
PrzemekK | how to do that with the OS data in the vms pool | 15:26 |
kevko | to be honest i never tried cdrom :D | 15:26 |
kevko | it's normal rbd device | 15:26 |
kevko | sorry disk | 15:26 |
kevko | i think you can normally create a volume and boot from it no ? | 15:27 |
PrzemekK | so is it better to go with only disks as volumes, or is vms for the OS data ok | 15:28 |
kevko | i am still not sure what rescue means ... | 15:28 |
kevko | fs repair after hard shutdown ? | 15:28 |
kevko | or what ? | 15:28 |
kevko | because anything from ceph can be mapped to an nbd device via rbd-nbd | 15:29 |
PrzemekK | You need to run an external iso like gparted to reset a windows password etc | 15:30 |
kevko | not working with windows :D | 15:31 |
kevko | if i want reset password ... i will mount disk from ceph to /mnt/tmp ... change the pass in /etc/shadow .... :D | 15:31 |
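A rough sketch of the rbd-nbd approach kevko mentions, run on a host with ceph client access. The pool name and volume UUID are placeholders; volume-&lt;uuid&gt; is just cinder's default RBD image naming, and this only works for Linux guests whose filesystems you can mount directly:

```shell
# map the cinder volume's RBD image to a local block device
rbd-nbd map VolumesStandardW1/volume-<uuid>   # e.g. returns /dev/nbd0
mount /dev/nbd0p1 /mnt/tmp                    # mount the guest's root partition
# edit /mnt/tmp/etc/shadow, then clean up:
umount /mnt/tmp
rbd-nbd unmap /dev/nbd0
```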
PrzemekK | Normally we run the command openstack server create --boot-from-volume --availability-zone W2-az but it always creates the volume in the default cinder AZ | 15:32 |
kevko | PrzemekK: because you don't have it configured in the right way :) | 15:32 |
PrzemekK | is it about cluster: "rbd2", or what's wrong | 15:33 |
kevko | i would suggest to you to fix this also ... | 15:33 |
kevko | but it's about #storage_availability_zone = nova | 15:33 |
kevko | if you have 3 cinder-volumes and every cinder-volume has 3 rbd backends inside ... you should have cluster enabled in cinder ... and just set storage_availability_zone = AZx for every node .... | 15:35 |
kevko | (different of course) | 15:35 |
kevko | from this point your instances in az1 will create volumes in az1 | 15:35 |
kevko | then you need to think about cross-az-attach | 15:36 |
kevko | but yeah ... this will fix your issues with a different az in cinder and a different one in nova, i would say | 15:36 |
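A sketch of how that per-node storage_availability_zone split could be expressed with kolla-ansible's per-host config overrides. The hostnames are placeholders, and the /etc/kolla/config/cinder/&lt;hostname&gt;/cinder.conf layout is the documented per-host override convention; double-check it is supported by your release:

```ini
# /etc/kolla/config/cinder/ctrl01/cinder.conf  (hypothetical first controller)
[DEFAULT]
storage_availability_zone = W1-az
```

```ini
# /etc/kolla/config/cinder/ctrl02/cinder.conf  (hypothetical second controller)
[DEFAULT]
storage_availability_zone = W2-az
```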
PrzemekK | Ok lets try fix cluster name | 15:36 |
kevko | btw | 15:37 |
kevko | PrzemekK: check your /etc/ceph in the cinder-volume container | 15:37 |
kevko | PrzemekK: you will see ceph.conf and rbd2.conf i would say | 15:37 |
kevko | and they will be the same :D | 15:38 |
PrzemekK | In Antelope+ https://that.guru/blog/availability-zones-in-openstack-and-openshift-part-1/ | 15:38 |
PrzemekK | yes same | 15:38 |
kevko | PrzemekK: trust me .. i configured this :D ... and i am also the author of the refactor .. and i'm also using it on one deployment :D | 15:40 |
kevko | i know what i am talking about ... (i hope :) ) | 15:41 |
SvenKieske | +1 on trusting kevko in this case; I haven't had this use case myself so far, so I guess the patch author knows best ;) | 15:42 |
kevko | :D | 15:43 |
kevko | SvenKieske: the sad thing is that i have another ceph refactor in review for months .. and nothing :( | 15:43 |
PrzemekK | it's not designed for one ceph cluster and different pool names per cluster ^^ | 15:43 |
kevko | PrzemekK: what ? | 15:43 |
kevko | PrzemekK: it is designed for one ..and also for multiple | 15:44 |
kevko | PrzemekK: use paste.openstack.org and put the cinder-volume config there | 15:44 |
kevko | PrzemekK: be careful and remove sensitive data | 15:44 |
kevko | SvenKieske: did you check this ? https://review.opendev.org/c/openstack/kolla/+/915440 | 15:44 |
SvenKieske | kevko: yes but I'm still stuck in PTG stuff and will look at it next week I guess :) | 15:46 |
kevko | SvenKieske: 60 lines ? :D | 15:46 |
PrzemekK | we needed to change https://github.com/openstack/kolla-ansible/blob/stable/2023.2/ansible/roles/cinder/templates/cinder.conf.j2 rbd_pool = {{ ceph_cinder_pool_name }} to rbd_pool = {{ backend.pool }} and in globals set cinder_ceph_backends: - name: "rbd-1" cluster: "ceph" pool: "VolumesStandardW1" | 15:47 |
SvenKieske | I wanted to quit working for today like 45 minutes ago :P so yes, next week, I will look at it without hurrying. if I hurry reviews I regularly miss stuff :) is it urgent? | 15:47 |
PrzemekK | so it can work on 1 cluster | 15:48 |
kevko | well, this will not work out-of-the-box without my latest patch, which is not merged yet :D ... as all reviewers don't have time or whatever :D | 15:50 |
kevko | https://review.opendev.org/c/openstack/kolla-ansible/+/907166 << as you can see ...here it's defined ... | 15:50 |
kevko | but ... you can define it into globals directly | 15:50 |
kevko | so... | 15:50 |
kevko | let me provide globals | 15:50 |
mnasiadka | kevko, frickler: https://review.opendev.org/c/openstack/kolla-ansible/+/914107 - can we get that merged? | 15:52 |
kevko | PrzemekK: can u send me the current cinder-volume config ? ... just remove sensitive data | 15:53 |
kevko | PrzemekK: paste.openstack.org | 15:53 |
kevko | mnasiadka: done | 15:55 |
mnasiadka | kevko: gracias | 15:55 |
kevko | mnasiadka: can u please re-review https://review.opendev.org/c/openstack/kolla-ansible/+/907166 << as you can see here ^^ it would be helpful for PrzemekK :D | 15:56 |
kevko | PrzemekK: what you need is to use config override with your current version | 15:56 |
mnasiadka | kevko: I'll try to have a look, but it might be Monday | 15:56 |
kevko | mnasiadka: thanks, currently i have 12 merge requests there :( | 15:57 |
kevko | mnasiadka: most of them are tested and working in prod .. | 15:57 |
mnasiadka | only 12? | 15:57 |
mnasiadka | give RP+1 to everything you feel is important to get in Caracal | 15:57 |
mnasiadka | I'll go through the priority list as soon as I can | 15:58 |
kevko | mnasiadka: thanks, 12 ? is it few ? :D | 15:58 |
mnasiadka | but now I'm in a workshop with a customer until the end of the day | 15:58 |
kevko | mnasiadka: like I mean .. they are not oneliners :D | 15:58 |
kevko | :D | 15:58 |
kevko | mnasiadka: yeah ..no problem | 15:58 |
PrzemekK | Its https://paste.openstack.org/show/bahsIM3F3OlRKbadhgpk/ | 15:59 |
kevko | PrzemekK: firstly remove default_availability_zone | 16:00 |
kevko | default_schedule_zone < doesn't exist .. so you can remove it | 16:00 |
kevko | and on the second host of cinder-volume set storage_availability_zone to another one | 16:03 |
kevko | PrzemekK: also, your ceph deployment is not clustered ! | 16:03 |
PrzemekK | So on one openstack controller set storage_availability_zone to DC1 and on a different controller to DC2 ? | 16:06 |
PrzemekK | that's where cinder-volume is running | 16:06 |
kevko | PrzemekK: yep | 16:06 |
kevko | PrzemekK: so both controllers will handle both storage availability zones .. | 16:07 |
PrzemekK | right now it is like this (Name, Host, AZ): cinder-volume ctrl01@rbd-1 W1-az , cinder-volume ctrl01@rbd-2 W2-az | 16:08 |
kevko | PrzemekK: yes, it's not good | 16:11 |
kevko | PrzemekK: check my config https://paste.openstack.org/show/bLOyV82gCT444wUE3yv1/ | 16:14 |
kevko | PrzemekK: ignore that they are down ... it's my testing openstack :D :D ... but what is visible is that i have all azs, and the backends in those azs, available on controller0 (it was just turned on :D :D ) | 16:15 |
PrzemekK | better https://postimg.cc/kV7hkrzk | 16:22 |
PrzemekK | everything was in the nova AZ before | 16:22 |
PrzemekK | right now it's creating the volume based on which controller the request goes to. I run server create --availability-zone W2-az and sometimes it creates the volume in W1, sometimes in W2 | 16:31 |
kevko | PrzemekK: check your aggregates .. | 16:36 |
kevko | openstack aggregate list and show them | 16:37 |
PrzemekK | https://postimg.cc/PNFzmLHL | 16:38 |
PrzemekK | https://paste.openstack.org/show/bSZTGcOYbgRM9HV4vBHf/ | 16:39 |
kevko | PrzemekK: w8 a minute | 16:39 |
PrzemekK | sure | 16:39 |
kevko | PrzemekK: it's an old test cluster ... i need to do a little maintenance :D | 16:44 |
kevko | PrzemekK: but i think you are missing the az on the aggregate | 16:49 |
PrzemekK | It's set: openstack aggregate set --zone W1-az W1 | 17:01 |
kevko | I'm not quite at home here ... but let me recover my cluster and i will check everything i can | 17:02 |
PrzemekK | anyway thanks a lot for your time. It gives me more information on what to look for | 17:05 |
kevko | PrzemekK: I remember that I also had a hard time configuring this :D | 17:08 |
*** atmark_ is now known as atmark | 17:16 | |
opendevreview | Merged openstack/kolla-ansible master: ironic: disable heartbeat_in_pthreads https://review.opendev.org/c/openstack/kolla-ansible/+/914107 | 17:23 |
kevko | PrzemekK: did you set this ? openstack aggregate set --property availability_zone=az1 az1 ? | 17:24 |
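For completeness, a hedged sketch of the aggregate/AZ wiring being discussed; the hypervisor names are placeholders, and --zone is just shorthand for the availability_zone property kevko asks about:

```shell
openstack aggregate create --zone W1-az W1
openstack aggregate add host W1 compute-w1-01   # hypothetical hypervisor name
openstack aggregate create --zone W2-az W2
openstack aggregate add host W2 compute-w2-01
# equivalent to: openstack aggregate set --property availability_zone=W1-az W1
openstack aggregate show W1
```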
kevko | Sorry, need to go :( | 17:29 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Refactor external ceph https://review.opendev.org/c/openstack/kolla-ansible/+/907166 | 17:31 |
opendevreview | Michal Arbet proposed openstack/kolla-ansible master: Copy all keyrings and configs to cinder-backup https://review.opendev.org/c/openstack/kolla-ansible/+/907167 | 17:31 |
kevko | PrzemekK: try to experiment also with cross-az-attach | 17:34 |
opendevreview | Michal Nasiadka proposed openstack/kolla-ansible stable/2023.2: ironic: disable heartbeat_in_pthreads https://review.opendev.org/c/openstack/kolla-ansible/+/915532 | 18:51 |
opendevreview | Michal Nasiadka proposed openstack/kolla-ansible stable/2023.1: ironic: disable heartbeat_in_pthreads https://review.opendev.org/c/openstack/kolla-ansible/+/915533 | 18:52 |
opendevreview | Michal Nasiadka proposed openstack/kolla-ansible stable/zed: ironic: disable heartbeat_in_pthreads https://review.opendev.org/c/openstack/kolla-ansible/+/915534 | 18:52 |