15:00:54 #startmeeting openstack_ansible_meeting
15:00:54 Meeting started Tue Jun 27 15:00:54 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:54 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:54 The meeting name has been set to 'openstack_ansible_meeting'
15:00:58 #topic rollcall
15:01:00 o/
15:01:56 hi!
15:03:24 o/ hello
15:03:45 o/
15:03:57 sorta around. doing some errands
15:04:48 #topic office hours
15:05:32 o/
15:06:08 I don't have a big agenda for today. I guess mainly we should land some backports to 2023.1 and make a new bugfix release https://review.opendev.org/q/parentproject:openstack/openstack-ansible+branch:%255Estable/2023.1+status:open+
15:06:28 The nastiest thing is that I forgot to update the openstack-ansible-plugins version in a-c-r
15:06:35 so heat is going to fail
15:06:50 also gnocchi is known to be broken, but I have no idea what we can do about that
15:07:14 as constraints are not respected when a project has pyproject.toml
15:09:15 o/
15:09:17 Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder stable/2023.1: Use v3 service type in keystone_authtoken config https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/887057
15:09:41 Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder stable/zed: Use v3 service type in keystone_authtoken config https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/887058
15:09:49 Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder stable/yoga: Use v3 service type in keystone_authtoken config https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/887059
15:10:02 we need to clean up the cinder role
15:10:14 lots of v1/v2 stuff in there
15:11:44 Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.1: Ensure management_address is used instead of ansible_host https://review.opendev.org/c/openstack/openstack-ansible/+/887060
15:12:08 yup - that's a really good call
15:14:05 and I guess we kinda need to review the patches for making tls/internal tls the default
15:14:25 I'm personally reluctant to vote on that, because I don't really have any strict opinion on it
15:14:50 I'm not sure if it's a good default or not
15:15:00 #link https://etherpad.opendev.org/p/openstack-ansible-tls-performance-impact
15:15:16 and this is actually good work and smth to think about
15:15:40 after my benchmarks, i also don't have a strong opinion
15:15:44 I will add the topic for the next TC meeting (not the one that will be in 2 hours, but next week)
15:16:17 To see what they think about http/2 and if it's time for openstack to adopt it
15:17:02 but I see a tremendous amount of work that would be required, which is probably the main blocker
15:18:01 and yeah, not having TLS on the internal VIP makes quite a big difference compared to having TLS enabled on it
15:18:42 and like almost 30% difference between the current default and the suggested one, if I'm right?
15:19:02 60s vs 88s
15:19:34 idk what the other tools do for this
15:19:46 if we are different by having tls or by not having it
15:20:21 noonedeadpunk: yeah, but I can't explain why enabling TLS on the backend doesn't make any difference while for haproxy it does
15:22:11 jrosser: not sure I got your point? as I guess as long as we test both we should be good?
15:23:53 Folks! I am trying to run the OSA stack inside an LXD container for lab/stage/testing but it looks like it's not supported; I hit this error when running setup-hosts.yml - https://paste.opendev.org/show/bsBRasNMflOnflDb68bm/
15:24:03 any workaround?
15:25:03 i mean if the default for the other tools is to do TLS then that says that the lower performance might be seen as acceptable already
15:25:59 does kolla enforce internal tls?
15:26:13 (I don't know to be frank)
15:26:36 me neither - that's why it would be interesting to see what the other perspectives are
15:27:10 spatel: do you know how things are with tls in the kolla world?:)
15:28:26 spatel: in an LXD container you can't do anything with the kernel really, so you need to disable those tasks, look at the code and the vars to make some overrides
15:28:35 regarding your question - this specific issue can be overcome by defining `openstack_host_specific_kernel_modules: []`, but I think you will fail in soooo many places that I don't find it feasible to run inside a container
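
A minimal sketch of that suggested workaround, assuming a standard /opt/openstack-ansible checkout and /etc/openstack_deploy config directory; the override variable comes from the message above, the rest is illustrative and untested inside LXD:

    # Skip host kernel-module handling, since an LXD container cannot modprobe
    echo 'openstack_host_specific_kernel_modules: []' >> /etc/openstack_deploy/user_variables.yml
    # then re-run the failing playbook
    openstack-ansible /opt/openstack-ansible/playbooks/setup-hosts.yml
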
15:28:54 I mostly keep TLS disabled but it does have support for encrypting all traffic using haproxy - https://docs.openstack.org/kolla-ansible/latest/admin/tls.html
15:29:37 jrosser: the default is `no` https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/group_vars/all.yml#L834-L840
15:29:51 jrosser just disabled that task and re-running it.. Hope we can make it a variable to make it workable on an LXD playground
15:30:11 spatel, lxc --vm ?
15:30:26 Yes, running the whole stack inside LXD to mimic production
15:30:30 yeah, lxd can manage LVM
15:30:36 brrrrrrr
15:30:41 vm.. lvm meh
15:30:42 *KVM
15:30:54 It's quick to spin up and test
15:31:20 spatel: yeah, but it can be a proper VM rather than an lxc container
15:31:20 LVM for cinder, correct, but we can use a physical host for LVM support - we don't need that inside LXD
15:32:54 currently my dev/stage environment is running inside VMware VMs which are very hard to set up and destroy.. I want something quick and automated, and LXD is very quick and easy
15:33:49 the problem with lxc containers is that you can't manage a lot of things, including time, kernel modules, firewall?, devices
15:34:09 (probably you can have a firewall if proper modules are loaded though)
15:34:52 spatel: https://ubuntu.com/blog/lxd-virtual-machines-an-overview
15:35:14 so spawning a proper KVM VM is just as trivial as an lxc container IMO
15:35:44 Hmmm!
15:36:35 maybe we just found a volunteer who can work on https://github.com/openstack/openstack-ansible-ops/tree/master/multi-node-aio ? :D
15:36:37 returning to tls - I would leave the default as is, but improve testing whenever possible
15:36:42 hehe
15:36:44 you only need to add --vm to your lxc launch command
15:36:54 exactly ^
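
For reference, a hypothetical launch command along those lines; the image alias, instance name and resource limits are placeholders, not something agreed in the meeting:

    # launch an LXD virtual machine instead of a system container
    lxc launch ubuntu:22.04 osa-lab --vm -c limits.cpu=8 -c limits.memory=16GiB
    lxc list osa-lab
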
15:37:40 lol
15:37:52 okay, so keep tls disabled for now but implement the 'tls-transition' scenario anyway, right?
15:38:07 mgariepy let me try.. --vm
15:39:47 yeah, we must test it anyway imo
15:40:39 maybe also document better how to enable/switch to TLS and the possible performance degradation?
15:40:49 i think i will be switching to tls
15:41:09 i will too.
15:41:10 we will switch to tls as well (at least in some regions)
15:41:22 it's just on * everywhere here so my openstack is a pretty big outlier
15:41:30 but i'm pretty low on api calls so i don't expect it to cause much issue
15:42:29 but I kinda feel this adds extra complexity as a default, especially for beginners or those who don't care a lot since the network is internal
15:42:43 so it pretty much depends on use cases and regulations
15:42:48 but if we see ~30% degradation on rally, maybe it's indeed better to keep it disabled by default
15:42:54 (and the existence of quantum computers)
15:43:01 mgariepy that works!! --vm
15:45:44 I don't think our implementation has so much extra complexity that we wouldn't want to carry it for some period of time
15:47:22 since now we just rely on haproxy configuration at playbook runtime, this extra complexity for tcp is not gigantic anymore
16:03:30 but this '--vm' parameter is interesting (didn't know about it before)
16:03:41 do I understand correctly that if we implement LXD support at some point, it will be much easier to spin up multi-node-aio?
16:06:21 as we can skip all the virsh/pxe tasks then
16:14:20 damiandabrowski let me spin up my lab and i will give you feedback on how it goes, but agreed with you, LXD is much faster and easier if it works with OSA
16:16:13 Merged openstack/openstack-ansible stable/2023.1: Remove other releases from 2023.1 index page https://review.opendev.org/c/openstack/openstack-ansible/+/884921
16:17:49 i'm not sure if it's faster, but for ex. it has proper tooling for image management. But I think the requirement to install LXD from snap has successfully prevented us from switching to it so far
16:18:02 noonedeadpunk: endmeeting? ;)
16:18:34 #endmeeting
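
For deployers who, like several participants above, plan to switch to TLS on the internal VIP before any default changes, a rough, untested sketch of the kind of user_variables.yml overrides involved; the variable names below are assumptions and were not confirmed in this meeting, so check the haproxy_server role defaults for your release:

    # opt-in sketch only; variable names are assumptions, not the reviewed patches
    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    haproxy_ssl_all_vips: true                   # terminate TLS on the internal VIP as well
    openstack_service_internaluri_proto: https   # register https internal endpoints
    EOF
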