f0o | noonedeadpunk: can I run ovn-bgp _only_ on compute nodes with osa? or is that not a supported pick&mix? | 06:51 |
noonedeadpunk | f0o: you need to run it where the gateway is | 07:23 |
f0o | huh? | 07:23 |
noonedeadpunk | afaik | 07:24 |
f0o | so I cant mix it with distributed_fips to get bgp towards the compute nodes? | 07:24 |
noonedeadpunk | or well... maybe not with distributed fips | 07:24 |
noonedeadpunk | ultimately you set the group of hosts where the agent runs through the neutron_ovn_bgp_agent_group variable | 07:25 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/defaults/main.yml#L543 | 07:25 |
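A minimal sketch of that override in `user_variables.yml`. The variable name comes from the role defaults linked above; the group value is an assumption here (`compute_hosts` is the stock OSA inventory group for computes, and running the agent there alone is exactly the experiment being discussed):

```yaml
# user_variables.yml - illustrative sketch; whether running the agent on
# computes only behaves sanely with your expose method is the open question.
neutron_ovn_bgp_agent_group: "compute_hosts"
```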
f0o | do you happen to know if ovn bgp-agent honors MED? | 07:26 |
noonedeadpunk | I think you can try running it on computes only.... but you may get unexpected behaviour with some expose methods I guess | 07:26 |
f0o | yeah I need to play around with this somewhere :D | 07:27 |
noonedeadpunk | like if you want to expose tenant networks - that's expected to be done through the gateway node (together with the router) | 07:27 |
noonedeadpunk | tbh - no idea about med. I really have gaps in my networking knowledge | 07:28 |
f0o | will need to lab it and check what the actual config is that it generates, would be great if it does proper multihoming | 07:35 |
f0o | should be able to apply my ovs patches to the compute nodes and give them full tables as well | 07:36 |
noonedeadpunk | well, you kinda need to provide 95% of config of your own for frr | 07:37 |
f0o | neat then it should work just fine | 07:37 |
noonedeadpunk | I think templates that are injected into frr are here: https://opendev.org/openstack/ovn-bgp-agent/src/branch/master/ovn_bgp_agent/drivers/openstack/utils/frr.py | 07:38 |
noonedeadpunk | `ADD_VRF_TEMPLATE` is applied only for the `vrf` expose method iirc | 07:39 |
noonedeadpunk | and the rest is just up to you to set up | 07:39 |
noonedeadpunk | so yes, I'd guess it should work :) | 07:40 |
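For context, the "95% of the config" means a base frr.conf supplied by the operator. A minimal sketch, assuming the underlay expose method (ASNs, router-id and neighbor are placeholders, and the wrapping variable name is hypothetical - how the file reaches the host is up to your deployment):

```yaml
# Hypothetical variable carrying an operator-provided frr.conf body;
# only the FRR syntax inside is meant literally.
frr_base_config: |
  frr defaults traditional
  router bgp 64999
   bgp router-id 192.0.2.10
   neighbor 192.0.2.1 remote-as 64998
   address-family ipv4 unicast
    ! pick up the addresses ovn-bgp-agent attaches to its dummy interface
    redistribute connected
   exit-address-family
```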
f0o | think I need to see the rendered config; seems a bit odd that it only supports one vrf in the LEAK template | 07:45 |
f0o | but maybe that's the wrong template anyway | 07:45 |
noonedeadpunk | yes, `underlay` and `ovn` expose method support only single vrf | 07:46 |
noonedeadpunk | so if you need multiple vrfs - you have to use the `vrf` expose method | 07:46 |
f0o | aaah | 07:46 |
f0o | that explains it | 07:46 |
noonedeadpunk | but then you need to set the vni id for the neutron network through ovn-nbctl | 07:47 |
jrosser | good morning | 08:04 |
noonedeadpunk | o/ | 08:40 |
noonedeadpunk | so rocky9 is still broken wrt missing ovs/ovn: https://zuul.opendev.org/t/openstack/build/f03476eebe5b4d1b80f3c806179efa99 | 08:47 |
noonedeadpunk | (it was sooooo great an idea to use yaml as an output filter) | 08:48 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-rabbitmq_server master: Fail with human-readable errors if upgrade impossible https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/946232 | 09:29 |
jrosser | ^ nice error reporting pattern there with just one task | 09:30 |
jrosser | i like that | 09:30 |
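For anyone reading later, the "one task" pattern being praised is roughly a single assert carrying human-readable messages - this is an illustrative sketch, not the content of the actual patch (the variable names are invented):

```yaml
# Sketch of the single-task error-reporting pattern; variables hypothetical.
- name: Fail with a readable error if the upgrade is impossible
  ansible.builtin.assert:
    that:
      - rabbitmq_upgrade_possible | bool
    fail_msg: >-
      Cannot upgrade RabbitMQ directly from version
      {{ rabbitmq_current_version | default('unknown') }}.
      Upgrade to a supported intermediate release first.
```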
noonedeadpunk | I was thinking whether it's worth merging with the prior failures, but they're somehow different... | 09:33 |
noonedeadpunk | so maybe in follow-up... | 09:33 |
f0o | Silly question; is it possible to expand the haproxy role to allow the definition of custom error page paths? | 10:10 |
f0o | or is this not something that will see any value? maybe it's too custom a request for $branding | 10:13 |
f0o | should I just write a patch and propose it and we take it from there? | 10:17 |
jrosser | f0o: there's a couple of different ways to do it | 10:31 |
jrosser | you can add `errorfile` directives here with the existing code, either per service or across all services: https://github.com/openstack/openstack-ansible-haproxy_server/blob/master/templates/service.j2#L93-L95 | 10:32 |
jrosser | though do watch out for breaking the handling of security.txt if you use it https://github.com/openstack/openstack-ansible/blob/a1f47e174343573efb17ab9e56082faade55dee4/inventory/group_vars/haproxy/haproxy.yml#L66-L78 | 10:33 |
jrosser | but having said that there is no problem extending the haproxy role to have handling of `http-errors / errorfiles` as per https://www.haproxy.com/documentation/haproxy-configuration-tutorials/alerts-and-monitoring/error-pages/ | 10:35 |
jrosser | it might take a bit of thought though as the current design allows the haproxy config to be distributed across all the group_vars for the different services, such as https://github.com/openstack/openstack-ansible/blob/a1f47e174343573efb17ab9e56082faade55dee4/inventory/group_vars/glance_all/haproxy_service.yml | 10:36 |
jrosser | so it would be a case of deciding if there is one global place to define custom error pages, or if it is a per-service config, which is a legitimate thing to want given the security.txt example | 10:37 |
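As a concrete illustration of the first option above - attaching an `errorfile` directive to one service through the existing per-service arguments (assuming `haproxy_backend_arguments` is accepted inside the service definition, as referenced later in this discussion; the service name, status code and path are placeholders):

```yaml
# e.g. in a service's haproxy_service.yml group_vars - illustrative values,
# with the rest of the service definition elided.
haproxy_glance_api_service:
  haproxy_service_name: glance_api
  haproxy_backend_arguments:
    - "errorfile 503 /etc/haproxy/errorfiles/branded_503.http"
```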
noonedeadpunk | I think you can already distribute custom error pages as well with the role | 10:37 |
noonedeadpunk | with haproxy_static_files_extra | 10:38 |
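Worth noting for whoever implements this: haproxy expects errorfile payloads to be complete raw HTTP responses, not bare HTML. A bare-hands sketch of shipping such a file (`haproxy_static_files_extra` is the role-native way to distribute it; the plain copy task below is just the generic equivalent, with placeholder path and content):

```yaml
- name: Install a branded 503 error page (illustrative path and content)
  ansible.builtin.copy:
    dest: /etc/haproxy/errorfiles/branded_503.http
    content: |
      HTTP/1.0 503 Service Unavailable
      Content-Type: text/html

      <html><body><h1>We'll be right back</h1></body></html>
    mode: "0644"
```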
jrosser | yeah, i think we currently lack a way of defining multiple `http-errors` | 10:38 |
noonedeadpunk | another thing on the topic - I wanted to look into a basic auth implementation for the role as well, which I did through static files for now... | 10:40 |
jrosser | perhaps a good way to think about this is to refactor security.txt to be using some more generic means of defining the error page handling | 10:40 |
noonedeadpunk | but then also realised there might be more "global" things we might want in there | 10:40 |
jrosser | then that can be used as a basis for different custom things | 10:40 |
noonedeadpunk | yeah... | 10:41 |
jrosser | tbh it is probably as simple as adding a `haproxy_http_errors` list for each service, and a `haproxy_errorfiles` var to reference one of them, rather than just define it directly in `haproxy_backend_arguments` | 10:45 |
jrosser | and some experiments to see how also defining that globally too can be done | 10:45 |
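Sketching the shape of that proposal - both variables below are the hypothetical ones suggested above, not existing role options; the underlying `http-errors` named section and `errorfiles` directive are standard haproxy configuration:

```yaml
# Hypothetical design sketch only - these variables do not exist in the
# role today; this is the shape being proposed.
haproxy_http_errors:            # global: named http-errors sections
  branded_pages:
    - "errorfile 503 /etc/haproxy/errorfiles/branded_503.http"
    - "errorfile 504 /etc/haproxy/errorfiles/branded_504.http"

haproxy_glance_api_service:     # per service: reference a section by name
  haproxy_errorfiles: "branded_pages"
```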
noonedeadpunk | I'd guess that having a default for all services might also be worth having? | 10:45 |
jrosser | indeed | 10:46 |
noonedeadpunk | yeah | 10:46 |
noonedeadpunk | as I imagine the same error pages being applicable region-wide | 10:46 |
f0o | jrosser: I was going for a very naive list/tuple and just defining the errorfile {code} {path} in defaults under maxconn | 10:46 |
jrosser | the issue is you might have services needing custom use of errorfile already | 10:46 |
jrosser | like we do for security.txt | 10:47 |
f0o | wouldn't the frontend errorfiles definition override the defaults one? | 10:47 |
f0o | so at the very least it shouldn't break the custom handlers but just not display the branded one for them | 10:47 |
noonedeadpunk | NeilHanlon: hey! any updates wrt ovn/ovs versions for rocky? | 13:03 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Use libxslt1-dev package instead of unversioned one https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/946560 | 14:35 |
noonedeadpunk | jrosser: not sure what capo version you are using, but beware of v0.11.2 which we have as the default in the ops repo | 15:06 |
noonedeadpunk | https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/2477 | 15:07 |
jrosser | are there bugs there? | 15:07 |
noonedeadpunk | so if a tenant deletes a VM - the pod goes into a crash loop | 15:07 |
jrosser | urgh | 15:08 |
* noonedeadpunk trying to find how capo versions get into the containers | 15:09 |
noonedeadpunk | nice - they seem to be hardcoded in https://github.com/vexxhost/ansible-collection-kubernetes/tree/main/roles/cluster_api/files/providers/infrastructure-openstack | 15:10 |
noonedeadpunk | doh | 15:11 |
jrosser | noonedeadpunk: did you replicate that? | 15:23 |
noonedeadpunk | yeah | 15:23 |
jrosser | it's a shame they don't say which release that was introduced in | 15:23 |
jrosser | hahh | 15:23 |
jrosser | ok | 15:23 |
noonedeadpunk | well, I actually accidentally deleted a VM in a k8s cluster and the next day realized that capi is just borked completely | 15:23 |
jrosser | and it doesn't look like there is a point release of 0.11 that covers that bug | 15:23 |
noonedeadpunk | nope | 15:24 |
noonedeadpunk | I'm not sure about 0.12.2 - as I somehow can't find the manifest file for it | 15:24 |
noonedeadpunk | ah, blind me | 15:28 |
noonedeadpunk | ok, I found how AI is useful - when you need to google smth, but it's not on stackoverflow yet - it might point you in some direction :D | 16:23 |
noonedeadpunk | well, the manifest is either wrong, or 0.12.2 requires smth more, as there's no image kind | 16:51 |
noonedeadpunk | Do I need a newer CAPI as well..? | 16:55 |
noonedeadpunk | doh, crap, just saw that now ORC should be installed separately /o\ | 17:22 |
noonedeadpunk | omfg | 17:22 |
jrosser | its like a full time job keeping on top of this | 17:26 |
noonedeadpunk | I'm somehow starting to like the heat driver tbh | 17:27 |
noonedeadpunk | it was not that bad after all | 17:27 |
noonedeadpunk | at least heat is not getting chaotically split over 10 repos within a year's lifespan | 17:28 |
noonedeadpunk | for now I just see tons of extra complexity and somehow not that much benefit, beyond being able to control k8s versioning independently from openstack releases | 17:30 |
noonedeadpunk | But that could be solved by moving the heat driver to a separate repo with an independent release cycle... | 17:31 |
noonedeadpunk | ouch, magnum ptg is at 6 am utc.... | 17:32 |
jrosser | doh | 17:32 |
jrosser | i do wonder what is happening with the "official" capi driver for magnum as that's never really become part of opendev | 17:33 |
noonedeadpunk | well, helm charts were imported to opendev | 17:33 |
noonedeadpunk | but they're left maintained separately | 17:33 |
noonedeadpunk | and I asked the question lately - they said that progress on the import is stuck, as azimuth is now an entity separate from stackhpc | 17:34 |
noonedeadpunk | so it's kind of vague | 17:34 |
jrosser | seems like there is too much to unpick with github actions workflow | 17:34 |
noonedeadpunk | could be that as well | 17:34 |
NeilHanlon | noonedeadpunk: sorry, i had some stuff come up this weekend and it got away from me | 19:10 |
NeilHanlon | i've just tagged the builds in CBS so they should go out to mirrors in like.. 12 hours? | 19:10 |
noonedeadpunk | ok, cool then I will check on them tomorrow :) | 19:26 |
noonedeadpunk | well, seems I've figured out the way forward, except for the upgrade.... | 20:20 |