hamidlotfi_ | Good morning. | 06:33 |
hamidlotfi_ | I want to remove a controller node from the OpenStack cluster, how do I do that? | 06:33 |
hamidlotfi_ | What are the steps to do it? I didn't see anything in the documents. | 06:33 |
hamidlotfi_ | I am using the zed/26.1 version. | 06:33 |
hamidlotfi_ | @noonedeadpunk @jrosser | 06:33 |
noonedeadpunk | hamidlotfi_: hey | 08:48 |
noonedeadpunk | yeah, we indeed don't have documentation for that and honestly I kinda never did that on my own in production. | 08:49 |
noonedeadpunk | But I think, generically, you need to: | 08:50 |
noonedeadpunk | 1. Manually stop the haproxy and keepalived services on the controller you want to remove, if these are not on standalone hosts | 08:51 |
noonedeadpunk | 2. Remove the controller from the inventory, i.e. ./scripts/inventory-manage.py -r control04 | 08:51 |
noonedeadpunk | 3. Remove the reference to the controller from openstack_user_config (this could be step 2 actually) | 08:52 |
noonedeadpunk | and then kinda... run setup-infrastructure and setup-openstack.... | 08:52 |
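A minimal sketch of the removal flow described above, assuming the deploy host keeps its checkout in /opt/openstack-ansible and the node is named control04 (both placeholders):

```bash
# 1. On the controller being removed: stop the LB services so the VIP
#    fails over (skip if haproxy/keepalived live on standalone hosts).
systemctl stop haproxy keepalived

# 2. On the deploy host: drop the node from the generated inventory.
cd /opt/openstack-ansible
./scripts/inventory-manage.py -r control04

# 3. Remove the control04 entries from
#    /etc/openstack_deploy/openstack_user_config.yml by hand, then re-run
#    the playbooks so galera, rabbitmq and haproxy are reconfigured
#    without the old cluster member.
openstack-ansible playbooks/setup-infrastructure.yml
openstack-ansible playbooks/setup-openstack.yml
```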
hamidlotfi_ | why run setup-infra and setup-openstack ? | 08:53 |
noonedeadpunk | as you need to reconfigure the galera and rabbitmq clusters, and then change references in each service's configs to remove old cluster members | 08:53 |
noonedeadpunk | and re-configure haproxy to remove backends | 08:54 |
hamidlotfi_ | I want to replace a controller node with an old one. | 08:54 |
hamidlotfi_ | Do you have a suggestion for this? | 08:54 |
noonedeadpunk | that is completely different than removing a control node :) | 08:54 |
noonedeadpunk | or well, depending if new one can have the same hostname as the old one or not | 08:55 |
hamidlotfi_ | I want to set up a new controller node, join it to the openstack cluster, and remove the old one. | 08:55 |
noonedeadpunk | so if you're fine with them sharing the same hostname - the process would be pretty easy I guess | 08:56 |
hamidlotfi_ | I used this method to add a new compute node but I got a lot of errors. | 08:58 |
noonedeadpunk | Huh? We do that quite a lot today | 08:59 |
noonedeadpunk | So, first you would need to disable haproxy backends on control you wanna replace (to make it gracefully) | 08:59 |
noonedeadpunk | We use playbook like this for such thing: https://paste.openstack.org/show/bYOYrgIJYZETJdMXOPpm/ | 09:01 |
noonedeadpunk | then I guess you can just shut down the controller, provision a new one, boot it and kinda run setup-hosts --limit control01,control01-containers (or smth like that...) | 09:03 |
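The pasted playbook isn't reproduced here, but a hand-rolled sketch of the same drain-then-rebuild idea, assuming the haproxy admin socket lives at /var/run/haproxy.stat and backend servers are named after the host (both assumptions worth verifying with `show stat`):

```bash
# On each haproxy node: pull control01's backends out of rotation so
# in-flight connections drain before the node goes away.
for backend in $(echo "show stat" | socat stdio /var/run/haproxy.stat \
                 | awk -F, '$2 == "control01" {print $1}' | sort -u); do
    echo "disable server ${backend}/control01" | socat stdio /var/run/haproxy.stat
done

# Then shut the controller down, reprovision it with the same hostname,
# boot it, and re-run the host setup as suggested above:
openstack-ansible playbooks/setup-hosts.yml --limit control01,control01-containers
```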
jrosser | hamidlotfi_: replacing a node (or reinstalling the OS, which is basically the same) does not need a specific removal step if it comes back as the same thing in the ansible inventory | 09:19 |
jrosser | adding a new controller then removing an old one is a really complicated way to do that which involves much extra work | 09:20 |
hamidlotfi_ | noonedeadpunk: Thanks for your help. | 09:27 |
hamidlotfi_ | jrosser: I know this is a complicated process. | 09:27 |
jrosser | hamidlotfi_: but it doesn't have to be? | 09:27 |
jrosser | replacing things "in place" is much easier, and it gives you practice for when you want to do an operating system upgrade across your whole deployment | 09:28 |
hamidlotfi_ | jrosser: Can you explain more about what you mean by replacement? | 09:31 |
jrosser | you've added a controller and removed a controller | 09:31 |
jrosser | that's "replacing a controller"? | 09:31 |
opendevreview | Andrew Bonney proposed openstack/openstack-ansible-galera_server master: Fix ignored database directories configuration https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/900860 | 09:32 |
hamidlotfi_ | jrosser: | 09:36 |
hamidlotfi_ | I know what I want to do, I mean the steps to do it, how should it be done? | 09:36 |
jrosser | there are some instructions for an OS upgrade here https://docs.openstack.org/openstack-ansible/latest/admin/upgrades/distribution-upgrades.html | 09:37 |
jrosser | which is very similar | 09:37 |
opendevreview | Merged openstack/openstack-ansible-os_swift master: Fix example playbook linters https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/900789 | 11:19 |
opendevreview | Merged openstack/openstack-ansible-os_keystone master: Add quorum queues support for service https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/900631 | 11:24 |
nixbuilder | I have Antelope installed and everything seems to work (so far) *EXCEPT* for adding images... I have the system configured to use cinder for image storage... however whenever I try to add an image I get a "HttpException: 500: Server Error for url: http://<hidden>:9292/v2/images/51db54d6-3d6a-49e5-9045-08c281cd52b1/file, Internal Server Error". Here is the full log entry... https://paste.opendev.org | 12:20 |
opendevreview | Merged openstack/openstack-ansible stable/zed: Apply rate limit for journald in AIO builds https://review.opendev.org/c/openstack/openstack-ansible/+/900670 | 12:22 |
nixbuilder | Oh and BTW... adding volumes works fine. | 12:24 |
noonedeadpunk | nixbuilder: paste link is just a link to a new one fwiw | 12:33 |
nixbuilder | noonedeadpunk: When I click on https://paste.opendev.org/show/bSIIleOANxJADPUnjTWk/ I get the entire log I posted | 12:37 |
noonedeadpunk | nixbuilder: question: do you have /etc/glance/rootwrap.d/glance_cinder_store.filters ? | 12:42 |
nixbuilder | noonedeadpunk: Yes. | 12:43 |
nixbuilder | noonedeadpunk: https://paste.opendev.org/show/b14nu4kdYKlvtO9fI34s/ | 12:44 |
noonedeadpunk | nixbuilder: can you also provide output for `/openstack/venvs/glance-27.2.0/bin/pip list` ? | 12:46 |
nixbuilder | noonedeadpunk: Here it is: https://paste.opendev.org/show/bUYETZXqtgNsVuOlyr5Y/ | 12:48 |
noonedeadpunk | huh, ok | 12:49 |
noonedeadpunk | I was hoping we're missing smth obvious, rather than oslo.rootwrap somehow not being executed from the venv or not respecting it... | 12:50 |
noonedeadpunk | And I'm kinda sure that thing used to work not that long ago | 12:50 |
nixbuilder | noonedeadpunk: I'm just a newbie when it comes to oslo and all of that... but one thing I noticed was that the Antelope system did not take any parameters after the /usr/bin/rootwrap in the cinder or glance sudoers file. Could that be the reason? | 12:53 |
nixbuilder | noonedeadpunk: Our production Pike system has "glance ALL = (root) NOPASSWD: /usr/bin/glance-rootwrap /etc/glance/rootwrap.conf *" | 12:54 |
nixbuilder | noonedeadpunk: Similar syntax with cinder as well. | 12:55 |
noonedeadpunk | projects moved to privsep since then | 12:56 |
noonedeadpunk | https://governance.openstack.org/tc/goals/selected/migrate-to-privsep.html | 12:56 |
noonedeadpunk | or well.... | 12:56 |
noonedeadpunk | I'd need to get some sandbox actually to check on that | 12:57 |
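A quick, hedged way to check which mechanism a given service actually uses on a node — grep is safe here even if the exact option names differ per project:

```bash
# rootwrap-era services carry a sudoers grant on a *-rootwrap binary;
# privsep-era ones end up invoking privsep-helper (often still via sudo
# or through rootwrap itself).
grep -r "rootwrap\|privsep" /etc/sudoers.d/

# glance_store's cinder backend references its helper in the service config:
grep -rn "rootwrap\|helper" /etc/glance/
```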
nixbuilder | noonedeadpunk: I guess I will build an AIO and see if I can duplicate the problem there. | 13:00 |
noonedeadpunk | nixbuilder: if you can give me the overrides applied - I can spawn it as well | 13:04 |
nixbuilder | noonedeadpunk: You mean you want my openstack_user_config and user_variables file? | 13:06 |
noonedeadpunk | nah, just how you set glance_* variables for user_variables | 13:08 |
nixbuilder | noonedeadpunk: Oh... ok | 13:09 |
nixbuilder | noonedeadpunk: https://paste.opendev.org/show/bikK43TFHQ4ZAXJjkNHk/ | 13:11 |
mgariepy | anyone seen this ? https://bugzilla.redhat.com/show_bug.cgi?id=1846844 | 13:43 |
mgariepy | TL;DR: placement seems to have some race conditions or similar when selecting hosts for vms. | 13:44 |
jrosser | is that the conclusion? last comment is about reserving more host memory | 13:47 |
mgariepy | in my case it scheduled 3 240GB VMs on a 512GB host. | 13:48 |
jrosser | ewww | 13:48 |
mgariepy | not just a small bit. | 13:48 |
jrosser | sort of related we see very large memory requirements when starting vgpu instances | 13:48 |
mgariepy | for vgpu it does pre-allocate memory (at least on passthrough) | 13:49 |
jrosser | way in excess of the allocation | 13:49 |
jrosser | swap is totally needed, even if there's no overcommitment | 13:49 |
mgariepy | huh | 13:49 |
jrosser | otherwise OOM killer | 13:49 |
noonedeadpunk | wow | 13:49 |
jrosser | but the steady state once stuff is running is not using swap at all | 13:50 |
mgariepy | in my case it was also causing oom kill :) | 13:50 |
noonedeadpunk | I haven't seen that, but I haven't checked for it either | 13:50 |
jrosser | we found the hard way with 2 v.large GPU instances per server that one would regularly get OOM when the other booted | 13:50 |
mgariepy | i wasn't looking at it until a customer reported that his vm was being killed .. :/ | 13:51 |
mgariepy | how much do you leave for the host ? | 13:51 |
noonedeadpunk | we do smth like 16gb | 13:52 |
jrosser | it ended up being really quite a lot, 50G maybe | 13:52 |
jrosser | and that wasn't enough | 13:52 |
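For reference, the host reservation both figures refer to is nova's `reserved_host_memory_mb`; a hedged OSA-style override follows — the `nova_nova_conf_overrides` variable name follows the usual role convention, and 16384 matches the 16gb figure above:

```bash
# Appending to user_variables.yml on the deploy host; placement subtracts
# this from each compute's reported memory, keeping headroom for the
# hypervisor itself.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
nova_nova_conf_overrides:
  DEFAULT:
    reserved_host_memory_mb: 16384
EOF
```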
mgariepy | wow | 13:52 |
jrosser | but that's the weird thing, it was just transient at VM startup | 13:52 |
jrosser | so eventually we made enough swap space on another disk (grrr zfs root) | 13:53 |
jrosser | swap on ZFS will deadlock, don't do it :) | 13:53 |
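A minimal sketch of the workaround described above — putting swap on a plain block device instead of a ZFS zvol (/dev/sdb1 is a placeholder for a spare partition):

```bash
# Swap on a zvol can deadlock: swapping pages out requires ZFS allocations,
# which themselves need memory. A raw partition avoids the cycle.
mkswap /dev/sdb1
swapon /dev/sdb1
echo '/dev/sdb1 none swap sw 0 0' >> /etc/fstab
```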
mgariepy | isn't zfs consuming a lot of memory too ? | 13:54 |
mgariepy | i do use zfs on my controllers but not on the computes nodes haha | 13:54 |
jrosser | yeah exactly, and it's a catch-22 when it tries to swap on a zfs volume, since zfs needs a bunch of memory to do that | 13:54 |
andrewbonney | There were some nasty libvirt memory leaks too which needed recent versions to patch | 13:55 |
jrosser | we did some experiments with deliberately overcommitting and could lock the computes up reliably like that | 13:55 |
noonedeadpunk | nixbuilder: I think I have built an AIO for testing. What commands you were using to create an image? | 14:01 |
*** starkis is now known as Guest6972 | 14:02 | |
nixbuilder | noonedeadpunk: openstack image create cirros-0.6.1 --file cirros-0.6.1-x86_64-disk.img | 14:14 |
nixbuilder | noonedeadpunk: This is on a brand new install of Antelope. | 14:14 |
noonedeadpunk | nixbuilder: are you sure you can do like this? | 14:42 |
noonedeadpunk | As I thought that cinder storage means you can not provide a `--file` argument to image creation | 14:42 |
noonedeadpunk | as path to the file would be a cinder volume | 14:42 |
noonedeadpunk | https://docs.openstack.org/cinder/latest/admin/volume-backed-image.html | 14:42 |
noonedeadpunk | So I'm not sure if `--file` is applicable in this case | 14:43 |
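Per the linked doc, the volume-backed flow goes through cinder rather than `--file`; a hedged sketch (`openstack image create --volume` wraps cinder's upload-to-image):

```bash
# Create and populate a volume first (size in GB), then publish it as an
# image; glance records the cinder volume as the image's backing store.
openstack volume create --size 8 src-volume
openstack image create --volume src-volume cirros-0.6.1
```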
jrosser | isn't that why privsep was trying to do a bunch of iscsi stuff | 14:44 |
jrosser | to arrange a volume to be there to put the image into | 14:44 |
noonedeadpunk | could be actually... | 14:45 |
jrosser | afaik this lets you have images in some cinder store and then (if you configure it right) boot an instance from a snapshot, much like we can do with ceph | 14:47 |
nixbuilder | This works in our Production Pike cloud.... https://paste.opendev.org/show/bPL57mzdOT1RBrwIahRv/ | 14:48 |
jrosser | google says https://ilearnedhowto.wordpress.com/2020/05/06/how-to-install-cinder-in-openstack-rocky-and-make-it-work-with-glance/ | 14:48 |
noonedeadpunk | yeah, so things like `cinder_store_auth_address, cinder_store_user_name, cinder_store_password` should be added as an override as of today | 14:49 |
noonedeadpunk | I don't see logic handling that in our glance role | 14:49 |
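Until the role templates these natively, an override along these lines should land them in glance-api.conf — `glance_glance_api_conf_overrides` is the usual OSA override convention; the `[cinder]` section name depends on how stores are enumerated, and the values are placeholders to adapt:

```bash
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
glance_glance_api_conf_overrides:
  cinder:
    cinder_store_auth_address: "https://keystone.example.com:5000/v3"
    cinder_store_user_name: "glance"
    cinder_store_password: "SECRET"
    cinder_store_project_name: "service"
EOF
```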
jrosser | it's a shame there was not slightly more debug from privsep about exactly what command it tried | 14:50 |
jrosser | similarly we miss barbican<>glance integration currently as well | 14:51 |
noonedeadpunk | yeah | 14:51 |
nixbuilder | noonedeadpunk: jrosser: Reading the documentation that you sent over. | 14:52 |
noonedeadpunk | While we actually have cinder in additional stores by default. huh | 14:53 |
jrosser | nixbuilder: i think what we need to do is check that everything is set up in the service config files as they should be | 14:54 |
jrosser | it's quite possible that few people are using this integration and there is a gap in the way we set it up | 14:54 |
noonedeadpunk | oh well. | 14:58 |
noonedeadpunk | appears that our rootwrap filters are outdated | 14:58 |
noonedeadpunk | at least this https://opendev.org/openstack/glance_store/src/branch/master/etc/glance/rootwrap.d/glance_cinder_store.filters looks completely different from https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/files/rootwrap.d/glance_cinder_store.filters | 15:00 |
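One way to confirm the drift on an affected node — oslo.rootwrap filter files are plain ini, and any command absent from the installed copy is silently rejected:

```bash
# Compare the deployed filters against current upstream glance_store.
diff /etc/glance/rootwrap.d/glance_cinder_store.filters \
     <(curl -s "https://opendev.org/openstack/glance_store/raw/branch/master/etc/glance/rootwrap.d/glance_cinder_store.filters")
```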
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:00 |
opendevmeet | Meeting started Tue Nov 14 15:00:55 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:00 |
noonedeadpunk | #topic rollcall | 15:01 |
noonedeadpunk | o/ | 15:01 |
noonedeadpunk | I hope I didn't mess up with timezones | 15:01 |
NeilHanlon | o/ | 15:02 |
NeilHanlon | seems OK to me | 15:02 |
NeilHanlon | but I'm in a different one than usual 😂 | 15:02 |
noonedeadpunk | #topic office hours | 15:07 |
noonedeadpunk | we had really good progress landing things during previous week | 15:07 |
noonedeadpunk | 2023.1 is still struggling with full_disk on upgrade jobs though | 15:08 |
noonedeadpunk | I do hope that this backport could help with that | 15:08 |
noonedeadpunk | #link https://review.opendev.org/c/openstack/openstack-ansible/+/900670 | 15:08 |
opendevreview | Merged openstack/openstack-ansible-galera_server master: Fix ignored database directories configuration https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/900860 | 15:09 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Ensure tempest include and exclude lists all use unique names https://review.opendev.org/c/openstack/openstack-ansible/+/893968 | 15:11 |
jrosser | o/ | 15:11 |
noonedeadpunk | our jobs now seem to be quite a bit faster after all | 15:12 |
noonedeadpunk | looks like we're back to 1:10 for metal jobs | 15:12 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server stable/2023.1: Fix ignored database directories configuration https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/900888 | 15:14 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server stable/zed: Fix ignored database directories configuration https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/900889 | 15:14 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server stable/yoga: Fix ignored database directories configuration https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/900890 | 15:15 |
noonedeadpunk | So releasing | 15:15 |
noonedeadpunk | I was waiting for https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/899045 to land to propose new releases for stable branches | 15:15 |
noonedeadpunk | as you might have seen - Yoga is going to be dropped and re-created as "unmaintained/yoga" | 15:16 |
noonedeadpunk | which means we need to wrap up all patches we want to land there | 15:16 |
noonedeadpunk | #link https://review.opendev.org/q/parentproject:openstack/openstack-ansible+branch:%255Estable/yoga+status:open+ | 15:16 |
noonedeadpunk | do we wanna put in some work and try to fix these or should we just abandon them? | 15:17 |
noonedeadpunk | Next, is what we have left for Bobcat branching | 15:21 |
jrosser | hmm i have not managed to find time to work on stable branches | 15:21 |
jrosser | tends to involve a lot of making AIO and investigating | 15:21 |
noonedeadpunk | Yeah, while in general I like to do things like that, right now there's no time for it :( | 15:22 |
noonedeadpunk | So I guess I will go on and abandon backports that are not trivially broken | 15:22 |
noonedeadpunk | So, 2023.2. I think biggest thing, that was on plate requiring changes everywhere was osa/quorum_queues. | 15:23 |
noonedeadpunk | It's mostly landed now, except Zun, which is just broken for 2023.2 | 15:23 |
noonedeadpunk | While I've managed to fix it for master, I dunno what to do with the backport: https://review.opendev.org/c/openstack/zun/+/900785 | 15:23 |
noonedeadpunk | Another thing is openstack_resources role | 15:25 |
noonedeadpunk | I've started making some adjustments and adding magnum resources management. Hopefully I will push some results soonish | 15:25 |
jrosser | ah i can test that in my cluster-api patch | 15:26 |
noonedeadpunk | ok, awesome | 15:26 |
jrosser | when i can find some time i'm turning that into a collection | 15:26 |
noonedeadpunk | I guess we can branch once these land | 15:26 |
noonedeadpunk | About skyline - haven't worked on that yet... | 15:27 |
noonedeadpunk | And seems we have quite valid bug with cinder as a glance storage | 15:31 |
noonedeadpunk | that nixbuilder raised today | 15:31 |
jrosser | i was wondering if that was a candidate to add to the os_cinder/os_glance CI jobs | 15:32 |
noonedeadpunk | yeah, makes sense | 15:32 |
jrosser | actually yes we run an nfs scenario there | 15:33 |
jrosser | so a cinder one would be a good addition | 15:33 |
noonedeadpunk | we just need to fix that first somehow :) | 15:35 |
jrosser | it's sometimes quite tricky to come up with the cross-role variables to control this stuff | 15:37 |
jrosser | or to say that it's actually a documentation issue and a specific set of overrides is required by the operator | 15:37 |
noonedeadpunk | And I guess I know the reason | 15:37 |
jrosser | there is already a place to document this https://docs.openstack.org/openstack-ansible-os_glance/latest/configure-glance.html#configuring-default-and-additional-stores | 15:38 |
opendevreview | Merged openstack/openstack-ansible-os_aodh master: Add quorum support for service https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/895690 | 15:39 |
noonedeadpunk | yeah, so the issue is the "wrong" rootwrap, which doesn't include the bindir from the venv | 15:39 |
noonedeadpunk | which was removed back then: https://opendev.org/openstack/openstack-ansible-os_glance/commit/9748e6b1543d225b19afe8fe2f93f5a7d66a69e4#diff-c99b9efb7da8932e4002fa34d4f7e5cf5d69d351 | 15:41 |
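For context, rootwrap resolves bare command names against `exec_dirs` in rootwrap.conf, so a venv-installed helper is invisible unless the venv's bin directory is listed; a hedged illustration of the shape of the fix (paths are examples):

```bash
# What a corrected rootwrap.conf would contain: the venv bin dir listed
# ahead of the system paths so venv-installed helpers resolve first.
cat <<'EOF'
[DEFAULT]
filters_path=/etc/glance/rootwrap.d
exec_dirs=/openstack/venvs/glance-27.2.0/bin,/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin
EOF
```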
noonedeadpunk | anyway, let's investigate that after the meeting... | 15:42 |
opendevreview | Merged openstack/openstack-ansible-os_swift master: Add quorum queues support for service https://review.opendev.org/c/openstack/openstack-ansible-os_swift/+/900632 | 15:42 |
noonedeadpunk | If that's it | 15:46 |
noonedeadpunk | #endmeeting | 15:46 |
opendevmeet | Meeting ended Tue Nov 14 15:46:29 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:46 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-11-14-15.00.html | 15:46 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-11-14-15.00.txt | 15:46 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-11-14-15.00.log.html | 15:46 |
jrosser | so - quorum queues job needed too perhaps? | 15:46 |
noonedeadpunk | Oh, that is good that you've mentioned that | 15:47 |
noonedeadpunk | I clean forgot that we should disable them by default | 15:48 |
jrosser | could combine that with an _infra_ job on the rabbitmq role | 15:48 |
jrosser | then we get a cluster + quorum at the same time | 15:48 |
noonedeadpunk | well, infra job only runs keystone | 15:48 |
noonedeadpunk | which kinda... doesn't care much about rabbit | 15:48 |
noonedeadpunk | I guess | 15:49 |
jrosser | well - i guess we can make anything we want if we add _quorum_ into the scenario parsing | 15:49 |
noonedeadpunk | yeah, true | 15:49 |
jrosser | which is some sort of step toward making it default | 15:49 |
noonedeadpunk | We have to make it default before rabbitmq 4 releases | 15:50 |
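For anyone following along, the per-service switch lives in oslo.messaging's rabbit options; a hedged example of what such a scenario would toggle (`rabbit_quorum_queue` exists in recent oslo.messaging releases, while how OSA exposes it per service is the open design question):

```bash
cat <<'EOF'
[oslo_messaging_rabbit]
rabbit_quorum_queue = True
EOF
```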
jrosser | also seems to be some progress here https://review.opendev.org/q/topic:bug-2031497 | 15:50 |
noonedeadpunk | oh, new reviews are in | 16:01 |
noonedeadpunk | not for most crucial parts though | 16:36 |
noonedeadpunk | and that's still for 2024.1 | 16:36 |
jrosser | yes - so from OSA perspective we should chase those patches or we are delayed by another cycle | 16:39 |
noonedeadpunk | nixbuilder: can I kindly ask you to submit a bug report? | 16:49 |
noonedeadpunk | I'm working now on proposing a fix | 16:49 |
nixbuilder | noonedeadpunk: Thank you... sure. Where do I submit the bug report? | 17:22 |
jrosser | nixbuilder: here https://launchpad.net/openstack-ansible | 17:23 |
nixbuilder | jrosser: Thank you! | 17:24 |
nixbuilder | Noonedeadpunk: jrosser: Bug report here...https://bugs.launchpad.net/openstack-ansible/+bug/2043503 | 17:45 |
noonedeadpunk | awesome | 17:45 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: Add glance_bin to rootwrap defenition https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/900930 | 17:50 |
noonedeadpunk | nixbuilder: can you check if that patch solves your issue? ^ | 17:50 |
nixbuilder | noonedeadpunk: I will start with a fresh install, patch the files and then let you know. | 17:55 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_glance master: Remove glance_cinder_store filters override https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/900931 | 17:55 |
noonedeadpunk | In my AIO it kinda worked. But I guess that's not all that's needed to make it fully functional... | 17:56 |
nixbuilder | noonedeadpunk: OK... it will take a bit to verify. I'll let you know ASAP. | 17:56 |
noonedeadpunk | we _really_ need to give some love to the glance role, as looking at it a bit I'm wondering how it worked at all... | 18:17 |
noonedeadpunk | and I'm not sure how to fix it in a good way.... | 18:18 |
noonedeadpunk | Looking now at `glance_available_stores`, which should eventually be a list of mappings, but is in fact smth half-baked that would require quite some overrides if one dares to make it a mapping | 18:20 |
nixbuilder | noonedeadpunk: Oh wow... didn't mean to create a huge rabbit hole :-) | 18:29 |
noonedeadpunk | nixbuilder: nah, I think it's a good thing actually... | 18:37 |
spatel | folks, did you ever upload a 700GB image to glance? | 18:45 |
mgariepy | wow. | 18:45 |
spatel | I am randomly getting a timeout when the upload hits 50% | 18:46 |
spatel | I am using command line | 18:46 |
mgariepy | no never did that. | 18:46 |
mgariepy | how comes it's so big ? | 18:46 |
spatel | This is someone's VM running on VMware and they want to migrate to openstack.. | 18:47 |
spatel | I converted the image from vmdk to raw and now I'm trying to upload it | 18:47 |
mgariepy | i'm currently decommissioning an old cloud and downloading via glance is so buggy. i export data via rbd directly. | 18:48 |
spatel | can I import image directly in ceph? | 18:49 |
mgariepy | that's a good question | 18:49 |
spatel | Feels like haproxy is timing out | 18:52 |
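If haproxy really is the culprit, the relevant knobs are the client/server inactivity timeouts; raw haproxy.cfg directives shown here — in an OSA deployment they'd be set via the haproxy role's variables rather than edited in place (exact variable names not checked):

```bash
# A 700GB upload holds one HTTP request open for hours; default inactivity
# timeouts (often under a couple of minutes) can kill it if the stream stalls.
cat <<'EOF'
defaults
    timeout client  4h
    timeout server  4h
EOF
```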
mgariepy | does glance store-and-forward or stream the data to ceph? | 18:52 |
spatel | I think glance is backed by ceph so it should stream to ceph directly | 18:53 |
spatel | I wish we could upload a glance image to ceph and tell glance to point there | 18:54 |
spatel | is this right doc for me? - https://xahteiwi.eu/resources/hints-and-kinks/importing-rbd-into-glance/ | 18:55 |
spatel | looks good to me.. | 18:56 |
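A condensed, hedged sketch of the linked article's approach — import straight into the glance pool over RBD and register the location afterwards; assumes the pool is named `images`, the ceph cluster fsid is in `$FSID`, and glance-api allows location manipulation (show_multiple_locations):

```bash
IMAGE_ID=$(uuidgen)

# Push the raw disk into ceph directly, bypassing the glance API data path,
# then create the protected snapshot glance expects for RBD-backed images.
rbd import --dest-pool images disk.raw "${IMAGE_ID}"
rbd snap create "images/${IMAGE_ID}@snap"
rbd snap protect "images/${IMAGE_ID}@snap"

# Create the image record with the same UUID and point it at the RBD data.
glance image-create --id "${IMAGE_ID}" --name migrated-vm \
       --disk-format raw --container-format bare
glance location-add --url "rbd://${FSID}/images/${IMAGE_ID}/snap" "${IMAGE_ID}"
```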
noonedeadpunk | spatel: You can always trust Florian :D | 19:06 |
mgariepy | yeah looks nice :D | 19:07 |
mgariepy | spatel, if you run from a volume you might want to convert it to raw if it's in vmdk format first. | 19:08 |
mgariepy | otherwise you will have the same issue. | 19:08 |
mgariepy | just in a later stage. | 19:08 |
spatel | noonedeadpunk lol | 19:18 |
spatel | image is in raw format | 19:18 |
opendevreview | Merged openstack/openstack-ansible-os_octavia stable/2023.1: Add security rule for octavia healthmanager https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/899045 | 19:45 |
nixbuilder | noonedeadpunk: OK... got the servers reloaded, source code files patched and now on to installation. | 20:08 |
opendevreview | Merged openstack/openstack-ansible-os_ceilometer master: Add quorum support for service https://review.opendev.org/c/openstack/openstack-ansible-os_ceilometer/+/895696 | 20:30 |
opendevreview | Christian Rohmann proposed openstack/openstack-ansible-rabbitmq_server stable/zed: Add ability to add custom configuration for RabbitMQ https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/900941 | 20:34 |
logan- | 👋 | 20:34 |
jrosser | o/ hello | 20:45 |
logan- | y'all up and switched irc networks on me! | 21:12 |
logan- | hope everyone is well, glad to make my way back here and see some familiar names :) | 21:13 |
dmsimard[m] | you did not witness the implosion of freenode? :D | 22:12 |
mgariepy | hey welcome back logan- | 22:29 |
opendevreview | Jimmy McCrory proposed openstack/openstack-ansible-galera_server master: Include CA cert in client my.cnf https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/900266 | 22:35 |