15:00:16 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:00:16 <opendevmeet> Meeting started Tue Jan 23 15:00:16 2024 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:16 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:16 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:00:19 <noonedeadpunk> #topic rollcall
15:00:20 <noonedeadpunk> o/
15:00:25 <jrosser> o/ hello
15:02:09 <NeilHanlon> o/ hiya folks
15:02:27 <mgariepy> hey
15:05:03 <noonedeadpunk> #topic office hours
15:05:08 <noonedeadpunk> Soooooo
15:05:55 <noonedeadpunk> This week I had a peek into cinder_backends stuff https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/tasks/cinder_backends.yml
15:06:09 <noonedeadpunk> and realized that there's no way to actually skip this thing
15:06:55 <noonedeadpunk> And there are a couple of smallish issues, like if you want to set public: False - this will appear in cinder.conf as well
15:07:36 <noonedeadpunk> given what a mess it is, I thought it's worth extending openstack_resources, since some modules appeared there to cover volume types, but these modules are just borked....
15:08:03 <noonedeadpunk> they're not idempotent, require weird input, etc
15:08:35 <noonedeadpunk> But now not sure if we should keep messing with what we have or try to fix modules...
15:09:36 <jrosser> even this comment is wrong https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/tasks/cinder_backends.yml#L28
15:09:41 <noonedeadpunk> At the very least, I guess we should include the task conditionally based on a variable
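A minimal sketch of that conditional include, assuming a new toggle variable (the name `cinder_manage_backends` is purely illustrative, not an existing role variable):

```yaml
# Hypothetical sketch for os_cinder's main task file; the toggle name is an
# assumption, so deployers could opt out of backend/volume-type management.
- name: Manage cinder backends and volume types
  ansible.builtin.include_tasks: cinder_backends.yml
  when: cinder_manage_backends | default(true) | bool
```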
15:10:47 <jrosser> there could be a pretty huge cleanup of that code without changing how it works
15:10:49 <noonedeadpunk> huh... it's not wrong...
15:11:02 <noonedeadpunk> As it looks like we're still deploying openrc to cinder_backends....
15:11:13 <noonedeadpunk> or well, cinder_api
15:11:14 <jrosser> there is already delegation, and there is no need to run the openrc role i think
15:11:59 <noonedeadpunk> What I don't like very much is how tightly it's tied to the backends definition
15:12:34 <noonedeadpunk> So I was kinda thinking of introducing the new variable, and then do some kind of default wiring to it
15:12:52 <noonedeadpunk> which basically led me to the variable structure and openstack_resources role....
15:13:20 <noonedeadpunk> until I realized it's a dead end as modules just don't work :(
15:13:24 <jrosser> i guess the 'right thing' to do is to fix the ansible modules :/
15:13:26 <noonedeadpunk> but well
15:13:42 <jrosser> but that is likely work++
15:13:49 <noonedeadpunk> potentially we can use "same" structure
15:14:00 <noonedeadpunk> What I'm afraid of most is when it will land
15:14:06 <noonedeadpunk> so we can use it
15:14:12 <jrosser> right
15:14:26 <jrosser> ultimately the modules follow pretty closely the CLI
15:14:52 <jrosser> so perhaps we can make a new datastructure that serves either, as temporary measure
15:15:17 <noonedeadpunk> volume_type just failed on me with `Volume Type lvm already exists.` when I ran it a second time
15:15:27 <noonedeadpunk> yeah, true
15:15:51 <noonedeadpunk> How are things with capi integration?
15:15:58 <jrosser> so i think i would look at cleaning up what we have there in the ansible code as it is a huge mess
15:16:48 <jrosser> like all the path stuff is bogus, cli is in /usr/local/bin
15:17:00 <jrosser> insecure is dealt with in openrc anyway
15:18:12 <jrosser> and `environment: { OS_CLOUD: default }` is sufficient to get command or shell module to pick up and use the right bits of openrc
15:18:22 <noonedeadpunk> I was actually thinking of providing --os-cloud default rather than sourcing openrc
15:18:37 <noonedeadpunk> or that, yes
15:18:59 <noonedeadpunk> to not repeat myself for each command, yes
15:19:13 <jrosser> i think thats what i mean about cleanup
15:19:19 <noonedeadpunk> yeah, fair, I will look into cleanup of all that then
15:19:21 <jrosser> we could put all that in a block: and delete tons of nonsense
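A rough sketch of that block-level cleanup, assuming the role's existing delegation target; the variable name and inner task body are placeholders for the existing logic:

```yaml
# Sketch only: set OS_CLOUD once on a block instead of sourcing openrc in
# every shell task; the inner task stands in for the real backend handling.
- delegate_to: "{{ cinder_service_setup_host | default('localhost') }}"
  environment:
    OS_CLOUD: default
  block:
    - name: Create a volume type with the plain CLI
      ansible.builtin.command: >-
        openstack volume type create {{ backend.key }}
```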
15:19:30 <noonedeadpunk> Was hoping a bit just to drop all that instead :D
15:19:52 <NeilHanlon> :P
15:20:40 <jrosser> ok so on capi integration
15:20:45 <jrosser> i have some pretty interesting patches
15:21:11 <jrosser> this one first https://review.opendev.org/c/openstack/openstack-ansible/+/906255
15:21:18 <noonedeadpunk> what I'm not sure about - if it's safe to drop `public` here https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/templates/cinder.conf.j2#L107
15:21:48 <jrosser> general purpose extension hook points to add new playbooks
15:22:20 <noonedeadpunk> I do like this one
15:22:37 <jrosser> this looks pretty cool as you can put in user variables `pre_setup_hosts_hook: my.collection.playbook`
15:23:18 <jrosser> and i did wonder if we want to add similar hook points to all the things in openstack-ansible/playbooks
15:23:23 <jrosser> but i made this patch first as an example
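A hedged sketch of what such a hook point might look like in a playbook like setup-hosts.yml; see review 906255 for the actual implementation, and note the noop default here is an assumption:

```yaml
# Illustrative only. import_playbook resolves its argument at parse time, so
# the hook variable has to come from user variables / extra-vars.
- name: Run user-supplied pre setup-hosts playbook
  ansible.builtin.import_playbook: "{{ pre_setup_hosts_hook | default('hooks/noop.yml') }}"
```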
15:24:02 <noonedeadpunk> I guess it depends on how much sense it makes to add this everywhere
15:24:31 <jrosser> then second thing is i have got a structure for a CI job that includes setup from an external collection
15:24:32 <jrosser> https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905199/17/zuul.d/playbooks/bootstrap-mcapi-vexxhost.yml
15:24:33 <noonedeadpunk> As service playbooks are usually small enough, so it would be close to the same?
15:24:55 <noonedeadpunk> and we don't have much in pre-post tasks...
15:25:02 <jrosser> well it depends if you want to drop something in after services, but before tempest for example
15:25:10 <noonedeadpunk> but dunno. Can't think of good usecase for having same in service playbooks yet
15:25:11 <jrosser> but maybe we wait for an actual need for that
15:25:26 <noonedeadpunk> eventually....
15:25:47 <noonedeadpunk> there was some old patch of mine, that was just separating tempest/rally to their own thing
15:25:50 <noonedeadpunk> or smth like that
15:26:03 <jrosser> i've got a way to have a `bootstrap-aio` playbook in an external collection that gets called in the pre job zuul playbook
15:26:34 <noonedeadpunk> like https://review.opendev.org/c/openstack/openstack-ansible/+/840685
15:26:37 <jrosser> this keeps some things for /etc/openstack_deploy/... in the collection and drops them there before the main aio is bootstrapped
15:27:31 <jrosser> currently this is a bit ugly, as there is a native way for zuul to understand that some of your repos contain roles, and to call them from your pre playbook
15:27:46 <jrosser> you would do this with `job.roles` and then use the roles as you need them
15:28:06 <jrosser> unfortunately we cannot do the same with a collection, so there is a hacky call to the playbook path on the disk instead currently
15:28:41 <jrosser> ideally i would make a `osa_ops.mcapi_vexxhost.bootstrap_aio` role that we could call directly from the pre playbook
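The current hacky variant might look roughly like this in the zuul pre playbook; the on-disk path and playbook name are assumptions based on the linked change, not the actual job definition:

```yaml
# Rough sketch: call the collection playbook by its on-disk path, since
# zuul's job.roles mechanism exposes role repos but not collections.
- hosts: all
  tasks:
    - name: Bootstrap AIO pieces shipped in the ops collection
      ansible.builtin.command: >-
        ansible-playbook
        {{ ansible_user_dir }}/src/opendev.org/openstack/openstack-ansible-ops/mcapi_vexxhost/playbooks/bootstrap-mcapi-vexxhost.yml
```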
15:29:39 <noonedeadpunk> and for that bootstrap-ansible should already be done
15:29:48 <jrosser> no, this is before
15:30:06 <noonedeadpunk> um, ok. but what will execute pre-playbook then?
15:30:17 <jrosser> in the case of a CI job, zuul
15:30:38 <noonedeadpunk> like, given this pre-playbook is a hook to setup-everything (or smth like that)
15:30:49 <noonedeadpunk> (if that's was an idea)
15:30:56 <jrosser> this is separate to the hooks thing
15:31:01 <noonedeadpunk> aha, ok.
15:31:15 <noonedeadpunk> (though it could be related :D)
15:31:20 <jrosser> the hooks allow you to use an external collection to deploy whatever, like cluster_api or ELK
15:31:43 <noonedeadpunk> like provision extra stuff for openstack_deploy.....
15:31:45 <noonedeadpunk> anyway
15:31:59 <noonedeadpunk> yeah, probably you can't do that
15:32:10 <jrosser> but you need to get in *before* bootstrap-ansible to be able to use things like `user-ansible-venv-requirements.yml`
15:32:10 <noonedeadpunk> as you'd need to reload inventory
15:32:20 <noonedeadpunk> yeah, true
15:32:32 <noonedeadpunk> fair
15:32:40 <jrosser> so anyway, i think there is a route forward here for things like capi and ELK
15:32:53 <noonedeadpunk> that sounds really nice
15:32:58 <jrosser> which will be much more maintainable than what we have right now in the ops repo
15:35:12 <noonedeadpunk> yeah, that looks really nice
15:35:22 <jrosser> i have everything pretty much done to deploy magnum + out of tree capi driver, and then run sonobuoy sanity check against a deployed cluster
15:35:39 <jrosser> this works nicely locally in 8 core 32G vm
15:37:13 <jrosser> this job pulls the whole lot together https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/905199
15:37:25 <jrosser> and i am seeing now if there is any chance for it to run in 3h
15:37:41 <noonedeadpunk> fingers crossed
15:37:52 <jrosser> we can trim out heat
15:38:01 <noonedeadpunk> maybe worth disabling tempest somehow
15:38:08 <jrosser> but currently no way to do that as the auto-scenario stuff pulls it in with magnum
15:38:27 <jrosser> maybe we need a NO_SCENARIO as well :)
15:38:40 <jrosser> yes tempest is pretty spurious for this
15:38:40 <noonedeadpunk> lol
15:45:36 <noonedeadpunk> I don't have bright ideas on how to drop heat from there
15:45:48 <noonedeadpunk> other than defining all jobs from magnum independently
15:46:32 <jrosser> `tempest_install: false` looks like it will just drop tempest entirely
15:46:39 <jrosser> yeah i will think about that
15:47:29 <noonedeadpunk> yeah, tempest should be easier to drop.
15:47:38 <noonedeadpunk> ofc depending on where we keep it currently
15:48:18 <noonedeadpunk> https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables.aio.yml.j2#L268-L270
15:48:23 <noonedeadpunk> It's not _that_ easy
15:48:52 <jrosser> user_variables_zzzz.yml will win :)
15:49:02 <noonedeadpunk> or well, matter of condition {% if 'capi' not in bootstrap_host_scenarios %}
15:49:17 <jrosser> i was trying not to mess with the openstack-ansible repo at all
15:49:43 <jrosser> becasue i'd like to make this all general enough that it will also be suitable for ELK or whatever else
15:49:52 <jrosser> so avoiding special casing things would be good
15:50:03 <noonedeadpunk> yeah. true
15:50:42 <jrosser> hah and of course just now tempest did fail on my capi job
15:51:04 <noonedeadpunk> heh, yeah
15:51:24 <noonedeadpunk> it's doomed to fail more or less
15:51:40 <noonedeadpunk> or well, I think it's failing now just for regular magnum I assume
15:51:59 <jrosser> right, because tempest runs in an unconfigured magnum with no driver installed
15:52:12 <jrosser> the templates and images and everything are missing at that point
15:54:31 <noonedeadpunk> ouch
15:54:52 <noonedeadpunk> ok, well, it's expected I guess though
15:55:01 <jrosser> indeed, i'll just disable it
15:55:10 <noonedeadpunk> or is all this still needed for the capi driver?
15:55:17 <noonedeadpunk> like cluster templates?
15:55:35 <jrosser> yes, you still need it all
15:55:48 <jrosser> but there is chicken/egg
15:56:12 <jrosser> the control plane cluster_api k8s needs to be deployed before you make the templates
15:56:21 <noonedeadpunk> aha, gotcha
15:56:40 <noonedeadpunk> And I guess here's where my debt comes in, to introduce a playbook for creating openstack_resources
15:56:50 <jrosser> so os_magnum runs and installs the out-of-tree driver
15:56:53 <noonedeadpunk> as that can be added as post-openstack hook to create resources
15:56:54 <noonedeadpunk> or smth
15:57:03 <jrosser> then along comes the ops repo collection and puts in the rest in the post_hook
15:57:50 <jrosser> i already used openstack_resources :) https://review.opendev.org/c/openstack/openstack-ansible-ops/+/906361/2/mcapi_vexxhost/playbooks/functional_test.yml
16:09:06 <noonedeadpunk> #endmeeting