15:00:16 #startmeeting openstack_ansible_meeting
15:00:16 Meeting started Tue Jan 23 15:00:16 2024 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:16 The meeting name has been set to 'openstack_ansible_meeting'
15:00:19 #topic rollcall
15:00:20 o/
15:00:25 o/ hello
15:02:09 o/ hiya folks
15:02:27 hey
15:05:03 #topic office hours
15:05:08 Soooooo
15:05:55 This week I had a peek into the cinder_backends stuff https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/tasks/cinder_backends.yml
15:06:09 and realized that there's no way to actually skip this thing
15:06:55 And there are a couple of smallish issues, like if you want to set public: False - this will appear in cinder.conf as well
15:07:36 given what a mess it is, I thought it's worth extending openstack_resources, since some modules appeared there to cover volume types, but these modules are just borked....
15:08:03 they're not idempotent, require weird input, etc
15:08:35 But now I'm not sure if we should keep messing with what we have or try to fix the modules...
15:09:36 even this comment is wrong https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/tasks/cinder_backends.yml#L28
15:09:41 As at the very least, I guess we should include the task conditionally based on a variable
15:10:47 there could be a pretty huge cleanup of that code without changing how it works
15:10:49 huh... it's not wrong...
15:11:02 As it looks like we're still deploying openrc for cinder_backends....
15:11:13 or well, cinder_api
15:11:14 there is already delegation, and there is no need to run the openrc role i think
15:11:59 What I don't like very much is how it's tied to the backends definition
15:12:34 So I was kinda thinking of introducing a new variable, and then doing some kind of default wiring to it
15:12:52 which basically led me to the variable structure and the openstack_resources role....
15:13:20 until I realized it's a dead end as the modules just don't work :(
15:13:24 i guess the 'right thing' to do is to fix the ansible modules :/
15:13:26 but well
15:13:42 but that is likely work++
15:13:49 potentially we can use the "same" structure
15:14:00 What I'm afraid of most is when it will land
15:14:06 so we can use it
15:14:12 right
15:14:26 ultimately the modules follow the CLI pretty closely
15:14:52 so perhaps we can make a new data structure that serves either, as a temporary measure
15:15:17 volume_type just failed on me with `Volume Type lvm already exists.` when I ran it a second time
15:15:27 yeah, true
15:15:51 How are things with capi integration?
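A minimal sketch of the conditional include discussed above, assuming a hypothetical gating variable (cinder_manage_backends is an illustrative name, not something os_cinder currently defines):

    # Hypothetical gate for tasks/cinder_backends.yml in the os_cinder role.
    # The variable name cinder_manage_backends is an assumption for illustration.
    - name: Manage cinder volume types and backends
      ansible.builtin.include_tasks: cinder_backends.yml
      when: cinder_manage_backends | default(true) | bool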
15:15:58 so i think i would look at cleaning up what we have there in the ansible code as it is a huge mess
15:16:48 like all the path stuff is bogus, the cli is in /usr/local/bin
15:17:00 insecure is dealt with in openrc anyway
15:18:12 and `environment: { OS_CLOUD: default }` is sufficient to get the command or shell module to pick up and use the right bits of openrc
15:18:22 I was actually thinking of providing --os-cloud default rather than sourcing openrc
15:18:37 or that, yes
15:18:59 to not repeat myself for each command, yes
15:19:13 i think that's what i mean about cleanup
15:19:19 yeah, fair, I will look into cleanup of all that then
15:19:21 we could put all that in a block: and delete tons of nonsense
15:19:30 Was hoping a bit just to drop all that instead :D
15:19:52 :P
15:20:40 ok so on capi integration
15:20:45 i have some pretty interesting patches
15:21:11 this one first https://review.opendev.org/c/openstack/openstack-ansible/+/906255
15:21:18 what I'm not sure about - if it's safe to drop `public` here https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/templates/cinder.conf.j2#L107
15:21:48 general purpose extension hook points to add new playbooks
15:22:20 I do like this one
15:22:37 this looks pretty cool as you can put in user variables `pre_setup_hosts_hook: my.collection.playbook`
15:23:18 and i did wonder if we want to add similar hook points to all the things in openstack-ansible/playbooks
15:23:23 but i made this patch first as an example
15:24:02 I guess it depends on how much sense it makes to add this everywhere
15:24:31 then the second thing is i have got a structure for a CI job that includes setup from an external collection
15:24:32 https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905199/17/zuul.d/playbooks/bootstrap-mcapi-vexxhost.yml
15:24:33 As the service playbooks are usually small enough, so it would be close to the same?
15:24:55 and we don't have much in pre-post tasks...
15:25:02 well it depends if you want to drop something in after services, but before tempest for example
15:25:10 but dunno. Can't think of a good use case for having the same in service playbooks yet
15:25:11 but maybe we wait for an actual need for that
15:25:26 eventually....
15:25:47 there was some old patch of mine that was just separating tempest/rally into their own thing
15:25:50 or smth like that
15:26:03 i've got a way to have a `bootstrap-aio` playbook in an external collection that gets called in the pre job zuul playbook
15:26:34 like https://review.opendev.org/c/openstack/openstack-ansible/+/840685
15:26:37 this keeps some things for /etc/openstack_deploy/... in the collection and drops them there before the main aio is bootstrapped
15:27:31 currently this is a bit ugly, as there is a native way for zuul to understand that some of your repos contain roles, and to call them from your pre playbook
15:27:46 you would do this with `job.roles` and then use the roles as you need them
15:28:06 unfortunately we cannot do the same with a collection, so there is a hacky call to the playbook path on disk instead currently
15:28:41 ideally i would make an `osa_ops.mcapi_vexxhost.bootstrap_aio` role that we could call directly from the pre playbook
15:29:39 and for that bootstrap-ansible should already be done
15:29:48 no, this is before
15:30:06 um, ok. but what will execute the pre-playbook then?
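A minimal sketch of the block-plus-OS_CLOUD cleanup discussed above; the task layout, the "lvm" volume type and the delegation variable are illustrative assumptions rather than os_cinder's current code:

    # Sketch only: wrap the CLI calls in one block, delegate once, and let
    # OS_CLOUD select the clouds.yaml entry instead of sourcing openrc.
    - name: Manage cinder volume types
      delegate_to: "{{ cinder_service_setup_host | default('localhost') }}"
      environment:
        OS_CLOUD: default
      block:
        - name: Ensure the volume type exists
          ansible.builtin.command: >-
            /usr/local/bin/openstack volume type create --public lvm
          register: _volume_type_create
          changed_when: _volume_type_create.rc == 0
          failed_when:
            - _volume_type_create.rc != 0
            - "'already exists' not in _volume_type_create.stderr"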
15:30:17 in the case of a CI job, zuul
15:30:38 like, given this pre-playbook is a hook to setup-everything (or smth like that)
15:30:49 (if that was the idea)
15:30:56 this is separate from the hooks thing
15:31:01 aha, ok.
15:31:15 (though it could be related :D)
15:31:20 the hooks allow you to use an external collection to deploy whatever, like cluster_api or ELK
15:31:43 like provision extra stuff for openstack_deploy.....
15:31:45 anyway
15:31:59 yeah, probably you can't do that
15:32:10 but you need to get in *before* bootstrap-ansible to be able to use things like `user-ansible-venv-requirements.yml`
15:32:10 as you'd need to reload the inventory
15:32:20 yeah, true
15:32:32 fair
15:32:40 so anyway, i think there is a route forward here for things like capi and ELK
15:32:53 that sounds really nice
15:32:58 which will be much more maintainable than what we have right now in the ops repo
15:35:12 yeah, that looks really nice
15:35:22 i have everything pretty much done to deploy magnum + the out-of-tree capi driver, and then run a sonobuoy sanity check against a deployed cluster
15:35:39 this works nicely locally in an 8 core 32G vm
15:37:13 this job pulls the whole lot together https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905199
15:37:25 and i am seeing now if there is any chance for it to run in 3h
15:37:41 fingers crossed
15:37:52 we can trim out heat
15:38:01 maybe worth disabling tempest somehow
15:38:08 but currently there's no way to do that as the auto-scenario stuff pulls it in with magnum
15:38:27 maybe we need a NO_SCENARIO as well :)
15:38:40 yes tempest is pretty spurious for this
15:38:40 lol
15:45:36 I don't have bright ideas on how to drop heat from there
15:45:48 other than defining all the jobs for magnum independently
15:46:32 `tempest_install: false` looks like it will just drop tempest entirely
15:46:39 yeah i will think about that
15:47:29 yeah, tempest should be easier to drop.
15:47:38 ofc depending on where we keep it currently
15:48:18 https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables.aio.yml.j2#L268-L270
15:48:23 It's not _that_ easy
15:48:52 user_variables_zzzz.yml will win :)
15:49:02 or well, it's a matter of a condition like {% if 'capi' not in bootstrap_host_scenarios %}
15:49:17 i was trying not to mess with the openstack-ansible repo at all
15:49:43 because i'd like to make this all general enough that it will also be suitable for ELK or whatever else
15:49:52 so avoiding special casing things would be good
15:50:03 yeah. true
15:50:42 hah and of course just now tempest did fail on my capi job
15:51:04 heh, yeah
15:51:24 it's doomed to fail more or less
15:51:40 or well. I think it's failing now just for regular magnum I assume
15:51:59 right, because tempest runs against an unconfigured magnum with no driver installed
15:52:12 the templates and images and everything are missing at that point
15:54:31 ouch
15:54:52 ok, well, it's expected I guess though
15:55:01 indeed, i'll just disable it
15:55:10 or is all of this still needed for the capi driver?
15:55:17 like cluster templates?
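A minimal sketch of the tempest override discussed above, dropped in as a later-sorting user variables file so it wins over the AIO-generated one (the file name is an illustrative assumption; only tempest_install comes from the discussion):

    # /etc/openstack_deploy/user_variables_zzz_capi.yml  (hypothetical name;
    # a later-sorting user_variables file overrides the AIO-generated defaults)
    tempest_install: false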
15:55:35 yes, you still need it all
15:55:48 but there is a chicken/egg problem
15:56:12 the control plane cluster_api k8s needs to be deployed before you make the templates
15:56:21 aha, gotcha
15:56:40 And I guess here's where my debt to introduce a playbook for creating openstack_resources comes in
15:56:50 so os_magnum runs and installs the out-of-tree driver
15:56:53 as that can be added as a post-openstack hook to create resources
15:56:54 or smth
15:57:03 then along comes the ops repo collection and puts in the rest in the post_hook
15:57:50 i already used openstack_resources :) https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906361/2/mcapi_vexxhost/playbooks/functional_test.yml
16:09:06 #endmeeting
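A minimal sketch of the post-deployment resource creation discussed above (the cluster template that has to wait until os_magnum and the capi driver are in place); every name and value here is an illustrative assumption, not what the ops-repo collection actually ships:

    # Illustrative only: create a cluster template once the capi driver exists.
    # Template name, image, flavors and network are assumptions for this sketch.
    - name: Create a Magnum cluster template for the capi driver
      hosts: localhost
      connection: local
      gather_facts: false
      tasks:
        - name: Ensure the kubernetes cluster template exists
          openstack.cloud.coe_cluster_template:
            cloud: default
            state: present
            name: k8s-capi-example
            coe: kubernetes
            image_id: ubuntu-2204-kube
            external_network_id: public
            master_flavor_id: m1.medium
            flavor_id: m1.medium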