Monday, 2025-04-07

06:51 <f0o> noonedeadpunk: can I run ovn-bgp _only_ on compute nodes with osa? or is that not a supported pick&mix?
07:23 <noonedeadpunk> f0o: you need to run it where the gateway is
07:23 <f0o> huh?
07:24 <noonedeadpunk> afaik
07:24 <f0o> so I can't mix it with distributed_fips to get bgp towards the compute nodes?
07:24 <noonedeadpunk> or well... maybe not with distributed fips
07:25 <noonedeadpunk> eventually you set the group of hosts where to run the agent through the neutron_ovn_bgp_agent_group variable
07:25 <noonedeadpunk> https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/defaults/main.yml#L543
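[Editor's note: the variable mentioned above lives in the os_neutron role defaults. A minimal sketch of overriding it in an OSA `user_variables.yml` might look like the following — `compute_hosts` is the usual OSA inventory group for computes, but verify both the group name and the variable semantics against your release before relying on this.]

```yaml
# user_variables.yml - illustrative sketch only.
# Run ovn-bgp-agent on the compute nodes instead of the
# network/gateway hosts; check the os_neutron role defaults
# for how this group is consumed in your release.
neutron_ovn_bgp_agent_group: compute_hosts
```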
07:26 <f0o> do you happen to know if ovn bgp-agent honors MED?
07:26 <noonedeadpunk> I think you can try running it on computes only... but you may get unexpected behaviour with some expose methods I guess
07:27 <f0o> yeah I need to play around with this somewhere :D
07:27 <noonedeadpunk> like if you want to expose tenant networks - that's expected to be done through the gateway node (together with the router)
07:28 <noonedeadpunk> tbh - no idea about MED. I really have gaps in my networking knowledge
07:35 <f0o> will need to lab it and check what the actual config is that it generates, would be great if it does proper multihoming
07:36 <f0o> should be able to apply my ovs patches to the compute nodes and give them full tables as well
07:37 <noonedeadpunk> well, you kinda need to provide 95% of the frr config on your own
07:37 <f0o> neat, then it should work just fine
07:38 <noonedeadpunk> I think the templates that are injected into frr are here: https://opendev.org/openstack/ovn-bgp-agent/src/branch/master/ovn_bgp_agent/drivers/openstack/utils/frr.py
07:39 <noonedeadpunk> `ADD_VRF_TEMPLATE` is applied only for the `vrf` expose method iirc
07:39 <noonedeadpunk> and the rest is just up to you to set up
07:40 <noonedeadpunk> so yes, I'd guess it should work :)
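[Editor's note: since the agent only injects small template snippets, the base BGP session config is yours to supply in FRR. A hypothetical `frr.conf` fragment for a compute node is sketched below — the ASNs, neighbor address, and route-map name are invented for illustration, and whether the agent's injected snippets coexist cleanly with a `set metric` route-map (MED is expressed as "metric" in FRR) is exactly the kind of thing to verify in a lab, per the discussion above.]

```
! /etc/frr/frr.conf - illustrative sketch, NOT agent-generated config
router bgp 64512
 neighbor 203.0.113.1 remote-as 64500
 address-family ipv4 unicast
  neighbor 203.0.113.1 route-map EXPORT-MED out
  redistribute connected
 exit-address-family
!
! MED on advertised routes is set via "metric" in an FRR route-map
route-map EXPORT-MED permit 10
 set metric 100
```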
07:45 <f0o> think I need to see the rendered config; seems a bit odd that it only supports one vrf in the LEAK template
07:45 <f0o> but maybe that's the wrong template anyway
07:46 <noonedeadpunk> yes, the `underlay` and `ovn` expose methods support only a single vrf
07:46 <noonedeadpunk> so if you need multiple vrfs - you have to use the `vrf` expose method
07:46 <f0o> aaah
07:46 <f0o> that explains it
07:47 <noonedeadpunk> but then you need to set the vni id for the neutron network through ovn-nbctl
08:04 <jrosser> good morning
08:40 <noonedeadpunk> o/
08:47 <noonedeadpunk> so rocky9 is still broken wrt missing ovs/ovn: https://zuul.opendev.org/t/openstack/build/f03476eebe5b4d1b80f3c806179efa99
08:48 <noonedeadpunk> (it was sooooo great an idea to use yaml as an output filter)
09:29 <opendevreview> Jonathan Rosser proposed openstack/openstack-ansible-rabbitmq_server master: Fail with human-readable errors if upgrade impossible  https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/946232
09:30 <jrosser> ^ nice error reporting pattern there with just one task
09:30 <jrosser> i like that
09:33 <noonedeadpunk> I was wondering if it's worth merging with the prior failures, but they're somehow different...
09:33 <noonedeadpunk> so maybe in a follow-up...
10:10 <f0o> Silly question; is it possible to expand the haproxy role to allow the definition of custom error page paths?
10:13 <f0o> or is this not something that will see any value? maybe it's too custom a request, for $branding
10:17 <f0o> should I just write a patch and propose it and we take it from there?
10:31 <jrosser> f0o: there's a couple of different ways to do it
10:32 <jrosser> either you can add `errorfile` directives in here with the existing code, either per service, or across all services https://github.com/openstack/openstack-ansible-haproxy_server/blob/master/templates/service.j2#L93-L95
10:33 <jrosser> though do watch out for breaking the handling of security.txt if you use it https://github.com/openstack/openstack-ansible/blob/a1f47e174343573efb17ab9e56082faade55dee4/inventory/group_vars/haproxy/haproxy.yml#L66-L78
10:35 <jrosser> but having said that, there is no problem extending the haproxy role to have handling of `http-errors` / `errorfiles` as per https://www.haproxy.com/documentation/haproxy-configuration-tutorials/alerts-and-monitoring/error-pages/
10:36 <jrosser> it might take a bit of thought though, as the current design allows the haproxy config to be distributed across all the group_vars for the different services, such as https://github.com/openstack/openstack-ansible/blob/a1f47e174343573efb17ab9e56082faade55dee4/inventory/group_vars/glance_all/haproxy_service.yml
10:37 <jrosser> so it would be a case of deciding if there is one global place to define custom error pages, or if it is a per-service config, which is a legitimate thing to want given the security.txt example
10:37 <noonedeadpunk> I think you can already distribute custom error pages with the role
10:38 <noonedeadpunk> with haproxy_static_files_extra
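[Editor's note: a sketch of shipping a branded error page via the `haproxy_static_files_extra` variable mentioned above. The `dest`/`content` key names are an assumption here — check the haproxy_server role's static-files defaults for the actual schema. Note that haproxy `errorfile` payloads are raw HTTP responses, not plain HTML.]

```yaml
# user_variables.yml - sketch; key names are assumed, verify
# against the haproxy_server role before use.
haproxy_static_files_extra:
  - dest: /etc/haproxy/errors/branded-503.http
    content: |
      HTTP/1.0 503 Service Unavailable
      Content-Type: text/html

      <html><body><h1>$branding is temporarily unavailable</h1></body></html>
```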
10:38 <jrosser> yeah, i think we currently lack a way of defining multiple `http-errors`
10:40 <noonedeadpunk> another thing on the topic - I wanted to look into a basic auth implementation for the role as well, which I did through static files for now...
10:40 <jrosser> perhaps a good way to think about this is to refactor security.txt to use some more generic means of defining the error page handling
10:40 <noonedeadpunk> but then I also realised there might be more "global" things we might want in there
10:40 <jrosser> then that can be used as a basis for different custom things
10:41 <noonedeadpunk> yeah...
10:45 <jrosser> tbh it is probably as simple as adding a `haproxy_http_errors` list for each service, and a `haproxy_errorfiles` var to reference one of them, rather than just defining it directly in `haproxy_backend_arguments`
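[Editor's note: for reference, the raw haproxy syntax the proposed variables would need to render is a named `http-errors` section (haproxy 2.0+) plus an `errorfiles` reference in a backend or in `defaults`. The group, backend, and file names below are illustrative only.]

```
# Named error-page group; codes not listed fall back to
# haproxy's built-in pages.
http-errors branded
    errorfile 503 /etc/haproxy/errors/branded-503.http
    errorfile 504 /etc/haproxy/errors/branded-504.http

backend glance-api-back
    # pull 503/504 pages from the "branded" group
    errorfiles branded
```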
10:45 <jrosser> and some experiments to see how defining that globally can be done too
10:45 <noonedeadpunk> I'd guess that having a default for all services might also be worth having?
10:46 <jrosser> indeed
10:46 <noonedeadpunk> yeah
10:46 <noonedeadpunk> as I imagine the same error pages would apply region-wide
10:46 <f0o> jrosser: I was going for a very naive list/tuple and just defining the errorfile {code} {path} in defaults, under maxconn
10:46 <jrosser> the issue is you might have services needing custom use of errorfile already
10:47 <jrosser> like we do for security.txt
10:47 <f0o> wouldn't the frontend errorfiles definition override the defaults one?
10:47 <f0o> so at the very least it shouldn't break the custom handlers, just not display the branded page for them
13:03 <noonedeadpunk> NeilHanlon: hey! any updates wrt ovn/ovs versions for rocky?
14:35 <opendevreview> Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Use libxslt1-dev package instead of unversioned one  https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/946560
15:06 <noonedeadpunk> jrosser: not sure what capo version you are using, but beware of v0.11.2, which we have as the default in the ops repo
15:07 <noonedeadpunk> https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/2477
15:07 <jrosser> are there bugs there?
15:07 <noonedeadpunk> so if a tenant deletes a VM - the pod goes into a crash loop
15:08 <jrosser> urgh
15:09 * noonedeadpunk trying to find how capo versions are getting onto containers
15:10 <noonedeadpunk> nice - they seem to be hardcoded in https://github.com/vexxhost/ansible-collection-kubernetes/tree/main/roles/cluster_api/files/providers/infrastructure-openstack
15:11 <noonedeadpunk> doh
15:23 <jrosser> noonedeadpunk: did you replicate that?
15:23 <noonedeadpunk> yeah
15:23 <jrosser> it's a shame they don't say which release that was introduced in
15:23 <jrosser> hahh
15:23 <jrosser> ok
15:23 <noonedeadpunk> well, I actually accidentally deleted a VM in a k8s cluster and the next day realized that capi is just completely borked
15:23 <jrosser> and it doesn't look like there is a point release of 0.11 that covers that bug
15:24 <noonedeadpunk> nope
15:24 <noonedeadpunk> I'm not sure about 0.12.2 - as somehow I can't find the manifest file for it
15:28 <noonedeadpunk> ah, blind me
16:23 <noonedeadpunk> ok, I found how AI is useful - when you need to google smth, but it's not on stackoverflow yet - it might point in some direction :D
16:51 <noonedeadpunk> well, the manifest is either wrong, or 0.12.2 requires smth more, as there's no image kind
16:55 <noonedeadpunk> Do I need a newer CAPI as well..?
17:22 <noonedeadpunk> doh, crap, just saw that now ORC should be installed separately /o\
17:22 <noonedeadpunk> omfg
17:26 <jrosser> it's like a full time job keeping on top of this
17:27 <noonedeadpunk> I'm somehow starting to like the heat driver tbh
17:27 <noonedeadpunk> it was not that bad after all
17:28 <noonedeadpunk> at least heat is not getting chaotically split over 10 repos within a year's lifespan
17:30 <noonedeadpunk> for now I just see tons of extra complexity and somehow not much benefit, beyond being able to control the k8s versioning independently of openstack releases
17:31 <noonedeadpunk> But that could be solved by moving the heat driver to a separate repo with independent release cycles...
17:32 <noonedeadpunk> ouch, the magnum ptg is at 6 am utc....
17:32 <jrosser> doh
17:33 <jrosser> i do wonder what is happening with the "official" capi driver for magnum, as that's never really become part of opendev
17:33 <noonedeadpunk> well, the helm charts were imported to opendev
17:33 <noonedeadpunk> but they're left maintained separately
17:34 <noonedeadpunk> and I asked lately - they said progress on the import is stuck, as azimuth is now an entity separate from stackhpc
17:34 <noonedeadpunk> so it's kind of vague
17:34 <jrosser> seems like there is too much to unpick with the github actions workflow
17:34 <noonedeadpunk> could be that as well
19:10 <NeilHanlon> noonedeadpunk: sorry, i had some stuff come up this weekend and it got away from me
19:10 <NeilHanlon> i've just tagged the builds in CBS so they should go out to mirrors in like.. 12 hours?
19:26 <noonedeadpunk> ok, cool, then I will check on them tomorrow :)
20:20 <noonedeadpunk> well, it seems I've figured out the way forward, except for the upgrade....

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!