| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/929690 | 08:26 |
|---|---|---|
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/929695 | 08:26 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/929691 | 08:27 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/929697 | 08:27 |
| kleini | I just stumbled again in my staging deployment across changed/removed inventory data. What is the best approach to remove data from the inventory that is no longer generated? In this case hostvars/<host>/container_address was not there anymore and my pools.yml generation for designate failed. I need to migrate to hostvars/<host>/container_networks/container_address/address. | 08:58 |
| jrosser | kleini: do you have "management_address" ? | 09:07 |
| kleini | yes, I see management_address. So container_address was just renamed? | 10:02 |
| noonedeadpunk | yup | 10:04 |
| kleini | okay, thanks | 10:04 |
| noonedeadpunk | container_address was left in place for old deployments, but on new ones it's not created anymore | 10:04 |
| noonedeadpunk | there should be a release note about that as well | 10:04 |
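
In Ansible/Jinja terms, the rename described above amounts to swapping the lookup, roughly like this (a sketch only; the `old_ip`/`new_ip`/`net_ip` names are hypothetical, while the hostvars keys are taken from the messages above):

```yaml
# Hypothetical vars illustrating the rename; only the hostvars keys are
# real (per the chat), the left-hand names are made up for illustration.
old_ip: "{{ hostvars[host]['container_address'] }}"   # no longer generated on new deployments
new_ip: "{{ hostvars[host]['management_address'] }}"  # the renamed equivalent
# or via the per-network structure kleini mentions:
net_ip: "{{ hostvars[host]['container_networks']['container_address']['address'] }}"
```
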
| kleini | then back to my question: how can I remove old data from inventory? | 10:06 |
| noonedeadpunk | well, there's really no trivial way of doing so :( | 10:15 |
| kleini | I redeploy my staging from scratch often, so I get fresh inventories. My only way so far was to compare entries in the staging inventory with the production inventory, which then lets me remove this or that value. But that's a lot of work and red eyes afterwards. | 10:27 |
| jrosser | can you use jq to delete specific keys from the inventory json? | 10:32 |
| kleini | yeah, something like that. Will have a look into jq | 10:34 |
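
A minimal sketch of the jq cleanup jrosser suggests, assuming the default generated-inventory path `/etc/openstack_deploy/openstack_inventory.json` and the standard `_meta.hostvars` dynamic-inventory layout (both are assumptions; back the file up first):

```shell
# Back up the inventory, then strip the stale container_address key from
# every host's hostvars. Path and JSON layout are assumptions based on a
# default openstack-ansible deployment.
cp /etc/openstack_deploy/openstack_inventory.json /etc/openstack_deploy/openstack_inventory.json.bak
jq 'del(._meta.hostvars[].container_address)' \
    /etc/openstack_deploy/openstack_inventory.json.bak \
    > /etc/openstack_deploy/openstack_inventory.json
```
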
| noonedeadpunk | I think we should be able to add some kind of flag/feature to do such cleanups | 11:10 |
| noonedeadpunk | but it would be tough to prioritize this | 11:11 |
| *** | mnasiadka1 is now known as mnasiadka | 12:28 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-repo_server master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-repo_server/+/929690 | 13:28 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/929691 | 13:29 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/929697 | 13:30 |
| opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Ensure that selected Apache MPM is enforced https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/929695 | 13:30 |
| opendevreview | Gaudenz Steinlin proposed openstack/openstack-ansible master: Disable console SSL termination for user certs https://review.opendev.org/c/openstack/openstack-ansible/+/929775 | 13:38 |
| opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Update skyline http check to modern format https://review.opendev.org/c/openstack/openstack-ansible/+/929812 | 15:00 |
| sykebenX | Hello, I've noticed that a number of options pertaining to ceph storage backends which can be applied to `/etc/cinder/cinder.conf` are missing. These options (namely the ability to specify a backup_ceph_conf path) cannot be passed in through openstack-ansible's user_variables.yml file. Is this intentionally left out of the os_cinder role and present somewhere else perhaps? | 18:38 |
| noonedeadpunk | sykebenX: hey | 18:51 |
| noonedeadpunk | yeah, some options are left out intentionally, as there are too many options in each service to pass through | 18:52 |
| noonedeadpunk | though we do have a config_template module, which is capable/designed to apply user-provided configs to templates | 18:52 |
| noonedeadpunk | so you can both change defaults in templates as well as add arbitrary content to them | 18:53 |
| noonedeadpunk | so you can do it like that specifically with the option you've mentioned: https://paste.openstack.org/show/b92HKFkiUXKXdYgAiuum/ | 18:54 |
| noonedeadpunk | and define that in user_variables | 18:54 |
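
The paste itself is not reproduced here, but a hedged sketch of the pattern, assuming the os_cinder role's `cinder_cinder_conf_overrides` variable and a purely illustrative path value, would be:

```yaml
# user_variables.yml -- config_template override sketch. The variable name
# follows openstack-ansible's <service>_<file>_overrides convention; the
# path below is illustrative, not a recommendation.
cinder_cinder_conf_overrides:
  DEFAULT:
    backup_ceph_conf: /etc/ceph/backup-cluster.conf
```

config_template deep-merges this mapping into the rendered cinder.conf, which is why both changing templated defaults and adding arbitrary new options work through the same variable.
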
| noonedeadpunk | sykebenX: hope this helps :) | 18:55 |
| sykebenX | Oh that's fantastic! Thanks very much noonedeadpunk :) | 19:04 |
| noonedeadpunk | we have such an overrides variable for pretty much any template we have | 19:08 |
| noonedeadpunk | so you're really flexible with values | 19:08 |
| noonedeadpunk | the module can also remove options that are already defined; for that, in the overrides var you define them as an empty mapping, i.e. `backup_ceph_conf: {}` | 19:09 |
| noonedeadpunk | (under the section they need to be in) | 19:09 |
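
Putting those two messages together, a removal sketch with the same (assumed) overrides variable would look like:

```yaml
# An empty mapping under the correct section tells config_template to drop
# an option that the template would otherwise render.
cinder_cinder_conf_overrides:
  DEFAULT:
    backup_ceph_conf: {}
```
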
| sykebenX | Excellent! Thanks again! I appreciate the help | 19:13 |
| sykebenX | Oh and one more thing. Does cinder have a way to specify which ceph keyring file to use? | 19:25 |
| sykebenX | Right now it seems to want to use /etc/ceph/ceph.client.cinder.keyring - but for my purposes, I'd like it to use /etc/ceph/ceph.keyring | 19:25 |
| jrosser | sykebenX: is it some ceph cluster that is external to your openstack-ansible deployment? | 19:26 |
| sykebenX | Yes sir | 19:30 |
| noonedeadpunk | you can actually store a keyring inside the openstack_deploy folder and supply it to cinder | 19:30 |
| noonedeadpunk | and define `ceph_keyrings_dir` | 19:31 |
| jrosser | there is a good bunch of documentation for integrating external ceph here https://docs.openstack.org/openstack-ansible-ceph_client/latest/ | 19:31 |
| noonedeadpunk | https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/master/releasenotes/notes/ceph_keyrings_in_files-7d6a01e64861f8c6.yaml#L4 | 19:32 |
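
Per the release note linked above, a user_variables sketch for that approach could look like the following (the directory path is illustrative and the keyring file naming inside it is an assumption; check the ceph_client docs jrosser linked):

```yaml
# user_variables.yml -- point the ceph_client role at keyrings stored as
# files on the deploy host instead of fetching them from a ceph monitor.
ceph_keyrings_dir: /etc/openstack_deploy/ceph-keyrings
```
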
| sykebenX | I see. Currently, I'm not using the ceph_client role. I was having some issues with the ceph.conf and keyrings for some reason (need to look further into it), but for now since I'm not using that role, would the `ceph_keyrings_dir` do anything? | 19:32 |
| jrosser | it would be better that we fix your issue :) | 19:33 |
| sykebenX | haha I figured that too! Let me gather my notes and I'll see about providing the specifics here or in an issue ticket (whichever is preferred) | 19:34 |
| noonedeadpunk | sykebenX: well, you kind of "have to" use it | 19:34 |
| noonedeadpunk | as otherwise nova won't be happy | 19:34 |
| jrosser | but to answer your question ceph_keyrings_dir is used inside the ceph_client role | 19:34 |
| noonedeadpunk | the role does apply required symlinks into the nova venv to make it properly load ceph modules | 19:35 |
| jrosser | oh yes this is suuuper important | 19:35 |
| sykebenX | Right now I have a post-openstack hook I created which configures libvirt and sets up all the clients and keyrings for me, but it was intentionally a workaround until I figure out what I'm doing wrong with the ceph_client role | 19:39 |
| sykebenX | A lot of it is sadly redundant when I look at the ceph_client role... again, just a workaround though for now lol | 19:40 |
| noonedeadpunk | it's getting slightly late here, so I will sign out soon, but feel free to fire failures into some paste and we will try to help you sort them out | 19:41 |
| sykebenX | Appreciate that jrosser and noonedeadpunk! Thanks for all your help so far. Have a good evening | 19:42 |
| noonedeadpunk | sure, take care! | 19:42 |
| jrosser | no problem - there's most activity here during EU timezone / working hours fwiw | 19:52 |