15:03:34 <noonedeadpunk> #startmeeting openstack_ansible_meeting
15:03:34 <opendevmeet> Meeting started Tue Feb 14 15:03:34 2023 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:34 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:34 <opendevmeet> The meeting name has been set to 'openstack_ansible_meeting'
15:03:39 <noonedeadpunk> #topic office hours
15:03:56 <noonedeadpunk> #topic rollcall
15:04:10 <noonedeadpunk> o/
15:04:16 <noonedeadpunk> sorry for using the wrong topic at first
15:05:11 <damiandabrowski> hi
15:07:24 <jrosser> o/ hello
15:08:29 <noonedeadpunk> #topic bug triage
15:08:45 <noonedeadpunk> We have a couple of new bug reports, and one that I find very scary/confusing
15:09:02 <noonedeadpunk> #link https://bugs.launchpad.net/openstack-ansible/+bug/2007044
15:09:31 <noonedeadpunk> I've tried to inspect the code, at the very least for neutron, and haven't found any possible opportunity for such a thing happening
15:10:49 <noonedeadpunk> I was thinking of maybe adding extra conditions here https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_install.yml#L43-L48 to check for the common path we use for distro installs
15:11:24 <noonedeadpunk> As I can recall patching some role to prevent running python_venv_build for the distro path
15:12:05 <noonedeadpunk> As then venv_install_destination_path will be passed as <service>_bin, and <service>_bin for sure comes from distro_install.yml for the distro path
15:12:22 <noonedeadpunk> But the bug looks a bit messy overall
15:12:42 <jrosser> we could have a default that says `/openstack` in that role
15:12:52 <jrosser> and if it doesn't match that at the start, then `fail:`
15:13:20 <noonedeadpunk> Well. I do use this role outside of openstack as well...
15:13:56 <jrosser> right - so some more generic way of defining a "safe path"
15:15:15 <noonedeadpunk> I just can't think of a good way of doing that, to be frank
15:15:34 <noonedeadpunk> We use `/usr/bin` mainly for distro path
15:16:43 <noonedeadpunk> But basically - ppl are free to set venv_install_destination_path to any crazy thing...
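A minimal sketch of the "safe path" guard being discussed, as it might look in python_venv_install.yml; the variable name `venv_build_safe_paths` and its defaults are hypothetical, not something the python_venv_build role has today:

```yaml
- name: Check that the venv destination is under a safe prefix
  vars:
    # Hypothetical variable; a role default would let deployers extend the list.
    venv_build_safe_paths:
      - /openstack
      - /usr/local
  ansible.builtin.assert:
    that:
      # For the distro path the destination may arrive as /usr/bin via
      # <service>_bin, which would fail this check and abort early.
      - venv_install_destination_path is match('^(' ~ (venv_build_safe_paths | join('|')) ~ ')(/|$)')
    fail_msg: >-
      Refusing to manage a venv in {{ venv_install_destination_path }};
      allowed prefixes are {{ venv_build_safe_paths | join(', ') }}
```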
15:18:09 <noonedeadpunk> I was going to check some more roles to see if we might somehow run the role for the distro path...
15:19:18 <jrosser> i think we need to ask for a reproduction and log in the bug report
15:19:25 <jrosser> as i've never seen anything like that before
15:20:56 <noonedeadpunk> Another thing from the same person that you've never seen...
15:21:13 <noonedeadpunk> #link https://bugs.launchpad.net/openstack-ansible/+bug/2006986
15:22:53 <noonedeadpunk> I was going to create a sandbox, but haven't managed to
15:23:45 <noonedeadpunk> But since I know you're using dns-01 and have some envs on zed - I'm not really sure I will be able to reproduce that either
15:24:38 <damiandabrowski> "Haproxy canno't using fqdn for binding and wait for an IP."
15:24:41 <damiandabrowski> is that really true?
15:25:13 <noonedeadpunk> Well, as I wrote there - we have haproxy bound to an fqdn everywhere...
15:25:39 <noonedeadpunk> I can assume that it might not be true with newer haproxy versions, or with DNS RR, or when DNS fails to resolve....
15:29:36 <noonedeadpunk> But I don't see any reference to binding on an FQDN in the haproxy docs https://www.haproxy.com/documentation/hapee/latest/configuration/binds/syntax/
15:30:03 <noonedeadpunk> I kind of wonder if debian or smth ships a newer haproxy where binding on an fqdn is no longer possible
15:30:51 <noonedeadpunk> `The bind directive accepts IPv4 and IPv6 IP addresses.`
15:31:39 <noonedeadpunk> Actually, I'm wondering if it's not time to try to rename internal_lb_vip_address
15:31:45 <noonedeadpunk> It's hugely confusing
15:31:53 <damiandabrowski> works fine at least on HA-Proxy version 2.0.29-0ubuntu1.1 2023/01/19
15:32:54 <noonedeadpunk> Well. That could be some undocumented behaviour we've taken for granted....
15:33:19 <jrosser> comment #9 suggests it is working now?
15:33:33 <jrosser> i'm pretty unclear what is going on in the earlier comments
15:33:50 <noonedeadpunk> yeah...
15:34:32 <jrosser> oh right but `haproxy_keepalived_external_vip_cidr` will stop the fqdn being in the config file?
15:34:49 <noonedeadpunk> in keepalived file
15:34:50 <jrosser> well not sure actually
15:35:25 <noonedeadpunk> for haproxy you'd need haproxy_bind_internal_lb_vip_address
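For reference, the split being discussed - FQDNs for the endpoint names, plain IPs for what haproxy/keepalived actually bind to - would look roughly like this in user_variables.yml (the addresses and hostnames below are placeholders):

```yaml
internal_lb_vip_address: internal.example.com
external_lb_vip_address: openstack.example.com
# Pin the actual binds/VIPs to addresses rather than the FQDNs:
haproxy_bind_internal_lb_vip_address: 172.29.236.9
haproxy_bind_external_lb_vip_address: 203.0.113.9
haproxy_keepalived_internal_vip_cidr: 172.29.236.9/32
haproxy_keepalived_external_vip_cidr: 203.0.113.9/32
```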
15:36:31 <noonedeadpunk> I think we should get rid of internal/external_lb_vip_address by using something with more obvious naming
15:36:54 <noonedeadpunk> As basically what we want this variable to do is represent the public/internal endpoints in keystone, right?
15:37:19 <noonedeadpunk> And serve as a default for keepalived/haproxy whenever possible
15:39:34 <noonedeadpunk> so maybe we can introduce smth like openstack_internal/external_endpoint, set its default to internal/external_lb_vip_address, and replace _lb_vip_address everywhere in docs/code with these new vars?
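A minimal sketch of that proposal; the `openstack_*_endpoint` names are only the suggestion above, nothing that exists today:

```yaml
# Hypothetical new variables, defaulting to the old ones for compatibility:
openstack_internal_endpoint: "{{ internal_lb_vip_address }}"
openstack_external_endpoint: "{{ external_lb_vip_address }}"
# Docs/code would then reference the new names, e.g. when registering
# keystone endpoints or templating haproxy/keepalived defaults.
```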
15:40:01 <jrosser> having it actually describe what it is would be good
15:40:26 <jrosser> though taking into account doing dashboard.example.com and compute.example.com rather than port numbers would be good too
15:41:01 <jrosser> there is perhaps a larger piece of work to understand how to make that tidy as well
15:41:06 <noonedeadpunk> what confuses me a lot is saying that an address can be an fqdn...
15:41:38 <noonedeadpunk> yeah, I assume that would need quite a few ACLs, right?
15:41:51 <jrosser> yeah but perhaps that makes it clearer what we need
15:42:23 <jrosser> as the thing that haproxy binds to is either some IP or a fqdn
15:42:51 <noonedeadpunk> I'm not sure now if it should bind to fqdn.... or if it does in 2.6 for example...
15:43:10 <jrosser> and we completely don't handle dual stack nicely either
15:43:33 <jrosser> feels like we're getting into PTG topic territory with this tbh
15:43:55 <noonedeadpunk> yeah, totally... Better let me write it down on the etherpad :D
15:44:23 <jrosser> dual stack is possible - we have it, but it takes really quite a lot of overrides
15:45:58 <noonedeadpunk> I'd say one of the problems as of today is that <service>.example.com is part of the role
15:46:09 <noonedeadpunk> service role I mean
15:47:18 <noonedeadpunk> As I guess we should join nova_service_type with internal_lb_vip_address by default for that
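Purely as an illustration of that idea (the variable name is hypothetical), a service role could derive a default per-service hostname like:

```yaml
# nova_service_type is 'compute' in os_nova, so this would yield e.g.
# compute.<internal_lb_vip_address> as the default backend hostname.
nova_service_hostname: "{{ nova_service_type }}.{{ internal_lb_vip_address }}"
```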
15:47:38 <noonedeadpunk> So this leads us to a more relevant topic
15:48:23 <noonedeadpunk> #topic office hours
15:48:44 <noonedeadpunk> The current work happening on haproxy with regard to internal TLS
15:49:07 <damiandabrowski> today I'm working on:
15:49:13 <damiandabrowski> - removing haproxy_preconfigured_services and sticking only with haproxy_services
15:49:15 <damiandabrowski> - adding support for haproxy_*_service_overrides variables
15:49:18 <damiandabrowski> - evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo
15:49:28 <damiandabrowski> i'll push changes today/tomorrow
15:49:38 <damiandabrowski> I also pushed PKI/TLS support for glance and neutron (however I need to push some patches to dependent roles to get them working):
15:49:40 <damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/821011
15:49:42 <damiandabrowski> https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/873654
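The override idea would presumably let a deployer amend a single service definition without redefining the whole haproxy_services list; a hypothetical sketch (the keys shown are existing haproxy service-config options, while the override variable itself is the work in progress):

```yaml
# Merged on top of the default glance entry rather than replacing the list:
haproxy_glance_api_service_overrides:
  haproxy_ssl: true
  haproxy_balance_type: http
```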
15:50:21 <noonedeadpunk> damiandabrowski: I have a question - was there a reason why we don't want to include the haproxy role from inside the service roles?
15:51:15 <noonedeadpunk> As it feels right now that implementing these named endpoints will be way easier, since we will have access to vars that are defined inside the roles
15:51:41 <noonedeadpunk> Or it was somehow more tricky with delegation?
15:51:43 <jrosser> what would we do for galera role there?
15:51:48 <noonedeadpunk> And handlers?
15:52:12 <jrosser> do we want to couple the galera role with the haproxy one like that, when they are currently independent
15:52:34 <noonedeadpunk> jrosser: to be frank, I should return to my work on proxysql that I put on hold a year ago...
15:53:06 <jrosser> i am also using galera_server outside OSA
15:53:08 <damiandabrowski> hmm, i'm not sure if i understand you correctly, can you provide some example?
15:53:31 <damiandabrowski> why do you think it would be better to patch each role?
15:53:38 <noonedeadpunk> Well. It doesn't make haproxy a really good option....
15:53:46 <noonedeadpunk> for galera balancing
15:55:42 <jrosser> anyway fundamental question seems to be if we should call haproxy role from inside things like os_glance
15:55:52 <jrosser> or if it should be done somehow in the playbook
15:55:56 <noonedeadpunk> yes ^
15:56:23 <jrosser> and then also i am not totally following "damiandabrowski> - evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo"
15:56:37 <jrosser> ^ is this about how the code is now, or modifying the new patches
15:56:39 <noonedeadpunk> jrosser: for me personally, it doesn't make much sense to make galera dependent on haproxy
15:56:49 <noonedeadpunk> I'm not sure though if you wanted to do that or not
15:57:07 <jrosser> i think we should keep those decoupled, and also rabbitmq
15:57:14 <noonedeadpunk> But I'd rather not, and would leave galera in default_services or whatever the var will be
15:57:21 <noonedeadpunk> Yes
15:57:45 <noonedeadpunk> but for os_<service> I think it does make sense to call the haproxy role from them
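A rough sketch of what calling the haproxy role from os_glance could look like; the tasks_from entry point and the glance_haproxy_services variable are assumptions, and the delegation/handler behaviour is exactly the open question raised above:

```yaml
- name: Configure haproxy frontend/backend for the glance API
  ansible.builtin.include_role:
    name: haproxy_server
    tasks_from: haproxy_service_config
    apply:
      # Run the included tasks on each haproxy node, not the glance host.
      delegate_to: "{{ item }}"
  vars:
    haproxy_service_configs: "{{ glance_haproxy_services }}"
  loop: "{{ groups['haproxy'] | default([]) }}"
```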
15:58:09 <damiandabrowski> "^ is this about how the code is now, or modifiying the new patches" - modifying patches, that was your suggestion, right?
15:58:30 <jrosser> yes, that's right
15:59:13 <jrosser> is it possible to make nearly no changes to the haproxy role?
16:00:49 <damiandabrowski> i don't think so...
16:01:32 <damiandabrowski> but i can at least try to make as few changes as possible
16:02:27 <damiandabrowski> i still have no idea how we can avoid having one haproxy_service_config.yml for "preconfigured" services and another one for services configured by the service playbooks
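One possible shape (just a sketch; haproxy_extra_services is hypothetical) is a single merged list, so the same haproxy_service_config.yml code path serves both cases:

```yaml
# group_vars sketch: one list feeding one code path in the haproxy role.
haproxy_services: "{{ haproxy_default_services + (haproxy_extra_services | default([])) }}"
```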
16:03:05 <jrosser> we can talk that through if you like
16:03:55 <noonedeadpunk> We can even have a call if needed
16:05:03 <damiandabrowski> yeah, sure
16:05:25 <noonedeadpunk> #endmeeting