jamesdenton | @spatel may need to figure something out for OVN+Ironic; seems like OVN DHCP doesn't support some Ironic bits. OOO uses the legacy DHCP agent, but our deploy is missing br-int on controller nodes w/ OVN. Fun times for tomorrow. | 02:57 |
---|---|---|
spatel | OVN uses openflow for DHCP simulation, so yes... may need an external component to do the DHCP job | 02:59 |
spatel | why do we need br-int on controller node for Ironic? | 03:01 |
spatel | let's talk tomorrow; i never deployed ironic but may need that piece in my DC soon. we have over 2500 servers and it's not easy to manage them without ironic | 03:02 |
jamesdenton | the dhcp tap interface needs to plug into a bridge, it expects br-int | 03:16 |
jamesdenton | i am not sure if deploying dhcp agent (legacy) on compute is advised, or not. | 03:16 |
jamesdenton | anyway, yeah tomorrow is fine | 03:16 |
noonedeadpunk | mgariepy: oh.... | 07:36 |
noonedeadpunk | that is a bit unfortunate... | 07:37 |
noonedeadpunk | I haven't dug much into ironic but looking into it | 07:37 |
noonedeadpunk | *going to use it one day | 07:38 |
noonedeadpunk | I'm not sure how best to fix... | 07:38 |
noonedeadpunk | Would need to read docs at least to answer that | 07:39 |
noonedeadpunk | I guess we can just replace it with direct everywhere? | 07:39 |
noonedeadpunk | or ansible?:)) | 07:40 |
noonedeadpunk | that is actually helping https://review.opendev.org/c/openstack/ironic/+/789382/5/api-ref/source/samples/drivers-list-detail-response.json | 07:40 |
noonedeadpunk | they suggest using direct instead of iscsi | 07:44 |
noonedeadpunk | https://docs.openstack.org/ironic/wallaby/admin/interfaces/deploy.html#iscsi-deploy | 07:48 |
ptoft | I am struggling to use the openstack cli with the SAML2 federated provider. Anyone have it working? | 08:14 |
noonedeadpunk | ptoft: jrosser should have that working afaik | 08:17 |
ptoft | noonedeadpunk: thx! | 08:19 |
noonedeadpunk | I have a feeling that ironic is super outdated... | 08:21 |
noonedeadpunk | since what I see among drivers in ironic is nothing close to what we have defined in ironic_driver_types | 08:22 |
opendevreview | Merged openstack/openstack-ansible-plugins master: Define missing options for ssh connection wrapper https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/807657 | 08:40 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_ironic master: Remove iscsi deploy https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/812644 | 08:56 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_ironic master: Remove iscsi deploy https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/812644 | 08:57 |
jrosser | ptoft: we have cli working with OIDC, not saml | 09:02 |
jrosser | this may help https://platform9.com/docs/openstack/cli-access-cli-saml-auth | 09:06 |
jrosser | ours was more complicated because the corporate identity provider we have enforces browser-based 2FA which we had to integrate with the cli | 09:06 |
ptoft | jrosser: I have dug through a lot of docs, and i think it's because of a missing feature on the idp | 09:12 |
jrosser | right, i can understand that | 09:12 |
ptoft | jrosser: But I am still not sure. It might also be a bug in the openstack client. | 09:13 |
jrosser | we also had a similar situation with idp functions for cli, where the "trusted server" OIDC flows were well supported, but anonymous clients like the CLI were not | 09:13 |
ptoft | jrosser: It's VERY complicated and hard to debug. Wish we had never gone down that rabbit hole... | 09:14 |
jrosser | yes, we have a test setup with an OSA AIO and an instance of keycloak to figure out wtf is going on | 09:15 |
ptoft | jrosser: I have also enabled LDAP/Windows AD for another user group and that seems to work much better | 09:16 |
jrosser | i don't know if it would help but we put a keycloak instance between the upstream idp and openstack to be an identity broker, rather than idp | 09:19 |
jrosser | that gives a middle point to do extra debugging | 09:19 |
ptoft | jrosser: That could be a good approach. Just got a confirmation that the IDP provider does not support ECP or SOAP, so the idea is dead in the water | 09:23 |
jrosser | keycloak is interesting | 09:24 |
jrosser | our upstream provider did not support PKCE which we needed to make the CLI stuff work at all | 09:24 |
jrosser | so we broker the upstream idp with keycloak and enable PKCE on keycloak, rather than upstream | 09:24 |
ptoft | jrosser: We are using keycloak for all sorts of stuff in our business and it works great | 09:25 |
jrosser | you might even be able to use it to broker upstream SAML into OIDC, never tried that though | 09:26 |
ptoft | jrosser: I will look into the keycloak options. Thanks a lot for helping | 09:57 |
opendevreview | Merged openstack/openstack-ansible-os_cinder master: Use management_address by default https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/805989 | 10:00 |
opendevreview | Merged openstack/openstack-ansible-os_murano master: Fix murano role https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/781239 | 10:05 |
opendevreview | Merged openstack/openstack-ansible-os_octavia master: Fix spelling mistakes https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/806410 | 10:25 |
opendevreview | Merged openstack/openstack-ansible master: Enable tempest tests for sahara https://review.opendev.org/c/openstack/openstack-ansible/+/802551 | 11:03 |
opendevreview | Merged openstack/openstack-ansible master: Revert "Add integrated build job to use in sahara repo" https://review.opendev.org/c/openstack/openstack-ansible/+/807388 | 11:03 |
opendevreview | Merged openstack/openstack-ansible-os_nova master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/810034 | 11:08 |
opendevreview | Merged openstack/openstack-ansible master: Bump ansible version to 2.11.5 https://review.opendev.org/c/openstack/openstack-ansible/+/807316 | 11:13 |
opendevreview | Merged openstack/openstack-ansible-os_placement master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/809981 | 11:16 |
opendevreview | Merged openstack/openstack-ansible-os_designate master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_designate/+/810185 | 11:18 |
opendevreview | Merged openstack/openstack-ansible-os_cinder master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/809989 | 11:20 |
opendevreview | Merged openstack/openstack-ansible-os_aodh master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/809704 | 11:23 |
opendevreview | Merged openstack/openstack-ansible-os_murano master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/810246 | 11:23 |
opendevreview | Merged openstack/openstack-ansible-os_masakari master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_masakari/+/810221 | 11:23 |
opendevreview | Merged openstack/openstack-ansible-os_cloudkitty master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_cloudkitty/+/810184 | 11:31 |
opendevreview | Merged openstack/openstack-ansible-os_tacker master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_tacker/+/810255 | 11:31 |
opendevreview | Merged openstack/openstack-ansible-os_trove master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/810257 | 11:32 |
opendevreview | Merged openstack/openstack-ansible-os_heat master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_heat/+/810188 | 11:32 |
opendevreview | Merged openstack/openstack-ansible-os_gnocchi master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_gnocchi/+/810187 | 11:38 |
opendevreview | Merged openstack/openstack-ansible-os_senlin master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_senlin/+/810250 | 11:40 |
mgariepy | morning everyone | 11:52 |
opendevreview | Merged openstack/openstack-ansible-os_sahara master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_sahara/+/810252 | 11:53 |
opendevreview | Merged openstack/openstack-ansible-os_octavia master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/810247 | 11:55 |
mgariepy | noonedeadpunk, https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/810210 can we push this one on top of the iscsi removal ? | 11:56 |
noonedeadpunk | yes, totally, was just waiting for the iscsi removal to pass first) | 11:59 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_ironic master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/810210 | 11:59 |
opendevreview | Merged openstack/openstack-ansible-os_magnum master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/810219 | 12:15 |
opendevreview | Merged openstack/openstack-ansible-os_neutron master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/810031 | 12:17 |
opendevreview | Merged openstack/openstack-ansible-os_horizon master: setup.cfg: Replace dashes with underscores https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/789713 | 12:20 |
jamesdenton | mgariepy didn't you suggest putting neutron_ovn_controller on network nodes, too, at one time? what ever came of your OVN testing? | 12:45 |
mgariepy | it got buried by other, more urgent projects.. | 12:49 |
mgariepy | :/ | 12:49 |
jamesdenton | no worries | 12:49 |
mgariepy | my plan was to put the networking services (that actually do the packet routing) on the network nodes, and the controllers onto the sdn. | 12:51 |
mgariepy | and to have a toggle to be able to have DVR-like feature on or off.. | 12:52 |
mgariepy | on the bright side, my cluster is on ovs + ovs-flows so migrating to ovn shouldn't be too hard when i get to it. | 12:53 |
jamesdenton | the docs seem to call out the global toggle, if that's what you're referencing: https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-ovn.html | 12:53 |
jamesdenton | i am testing adding ovn_controller to the network nodes, anyway, since it may be needed for legacy dhcp agent (needed for ironic, i think) | 12:54 |
jamesdenton | i just moved my linuxbridge env to OVN, so hopefully can clean some of that up | 12:54 |
mgariepy | nice. | 12:55 |
spatel | in OVN there is no network node concept right? OVN randomly picks a node in the cloud and makes it the gw chassis (if the DVR flag is not set) | 12:57 |
mgariepy | https://github.com/openstack/openstack-ansible-os_neutron/blob/master/tasks/providers/setup_ovs_ovn.yml#L23-L27 | 12:58 |
mgariepy | if you want only your "network_router_host" to be the gw, you can. | 12:59 |
jamesdenton | ahh yes, that's it. you were gonna toggle based on group membership | 12:59 |
opendevreview | Merged openstack/openstack-ansible-os_barbican master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_barbican/+/809706 | 13:00 |
mgariepy | yep that was my plan. but it can be something else. | 13:00 |
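Pinning the gateway role to particular hosts is done through OVN's chassis options. A minimal sketch of what the linked setup task effectively does, run on the intended gateway node (a configuration fragment, assuming ovs-vsctl and a running ovn-controller; the `physnet1:br-ex` mapping is a made-up example):

```
# mark this chassis as eligible to host gateway (router) ports;
# ovn-controller reads this from the local Open_vSwitch table
ovs-vsctl set open_vswitch . external-ids:ovn-cms-options=enable-chassis-as-gw

# a provider network bridge mapping is usually set on the same node
ovs-vsctl set open_vswitch . external-ids:ovn-bridge-mappings=physnet1:br-ex
```

With a toggle as discussed above, the first command would only run on hosts in the chosen gateway group, giving the DVR-on/off behaviour mgariepy describes.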
mgariepy | iirc the neutron service that runs on the compute needed to be tweaked a bit. they were allocated way too much cpu/ram iirc | 13:01 |
spatel | mgariepy what do you mean by too much cpu/memory? | 13:03 |
mgariepy | spatel, what is the neutron service that runs on the hypervisor hosts with ovn ? | 13:04 |
mgariepy | metadata_agent ? | 13:05 |
spatel | metadata | 13:05 |
spatel | its simple haproxy running in namespace | 13:05 |
mgariepy | how many threads in the config file? | 13:05 |
mgariepy | is the haproxy managed on the compute? i.e. stopped and disabled.. since debian-based services auto-start/enable on install (mostly with a working-ish default config) | 13:07 |
spatel | mgariepy https://paste.opendev.org/show/809814/ | 13:07 |
spatel | Yes every compute node running ovnmeta namespace which proxy request to nova-api | 13:08 |
mgariepy | yes but not from that config file. this file is probably the one shipped by haproxy pkg. | 13:08 |
spatel | its simple haproxy running inside namespace | 13:08 |
mgariepy | yes i know. | 13:10 |
spatel | yes i can see haproxy package is installed | 13:10 |
mgariepy | systemctl status haproxy ? | 13:10 |
spatel | haproxy.service i can see it | 13:10 |
mgariepy | that's not the one for the metadata agent. | 13:11 |
mgariepy | the ones spawned for the metadata agent are probably running in a namespace with another config file | 13:11 |
spatel | ovn-metadata is using haproxy inside namespace | 13:11 |
spatel | you are correct - https://paste.opendev.org/show/809816/ | 13:13 |
spatel | /var/lib/neutron/ovn-metadata-proxy/0d7a2525-7594-48f4-b7f4-ec9707197388.conf | 13:13 |
mgariepy | `ip netns identify $PID` | 13:16 |
mgariepy | the haproxy service probably needs to be turned off ;) | 13:16 |
spatel | oh! wait.. why turn off? | 13:18 |
mgariepy | because it doesn't need to be running. the one that needs to be running in the ovn namespace is started by the agent. | 13:19 |
mgariepy | if you do `ip netns identify 1420` (running from the default config) and `ip netns identify 2515` running from the ovn-metadata-proxy config you will see that it's not running from the same namespace. | 13:21 |
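The check mgariepy describes can be reproduced like this (diagnostic commands run as root on a compute node; the PIDs are placeholders, not the ones from the chat):

```
# oldest haproxy process, i.e. the one started by the distro service unit
pgrep -o -x haproxy
ip netns identify <pid>         # empty output -> default namespace

# a haproxy spawned by neutron-ovn-metadata-agent for a tenant network
ip netns identify <other_pid>   # prints something like ovnmeta-<network-uuid>
```

If the two PIDs resolve to different namespaces, the host-level instance is the redundant one.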
jamesdenton | mgariepy are you thinking we may have a deployment bug that's installing haproxy on computes? | 13:21 |
mgariepy | we do have. | 13:22 |
mgariepy | there is similar behavior on network node. | 13:23 |
jamesdenton | i see it's installed on my computes, but not configured with anything beyond the stock configuration | 13:23 |
jamesdenton | and not running | 13:23 |
mgariepy | it's not enabled? | 13:23 |
spatel | i thought we needed haproxy; without it, how can you spawn the haproxy service inside the namespace, right? | 13:23 |
spatel | i can turn off haproxy outside namespace that should be fine i think | 13:24 |
spatel | but if i remove haproxy package then it will break namespace | 13:24 |
mgariepy | you can test it in staging first but i'm pretty sure it's not needed. | 13:24 |
mgariepy | not removing the package. | 13:24 |
mgariepy | only managing the service. | 13:24 |
spatel | yes.. it makes sense to turn off the haproxy service | 13:25 |
mgariepy | jamesdenton, spatel are the haproxy service on the compute nodes enabled or not ? | 13:25 |
spatel | enabled and running | 13:25 |
jamesdenton | on my computes, it's disabled | 13:25 |
mgariepy | since when ? | 13:25 |
jamesdenton | but i have an haproxy process running in the namespace ovnmeta | 13:25 |
mgariepy | that's ok | 13:26 |
mgariepy | jamesdenton, what disabled the haproxy service on your compute ? | 13:26 |
mgariepy | how come spatel has one that is enabled? | 13:26 |
jamesdenton | i am not sure. there's no timestamp. but it existed prior to my OVN change | 13:26 |
jamesdenton | my inventory looks legit for haproxy membership, so maybe it came in a different way | 13:28 |
spatel | i don't know i just noticed when mgariepy mentioned :) | 13:30 |
spatel | These are fresh new compute nodes, so i am sure i didn't start it by hand | 13:31 |
mgariepy | debian by default starts service on install. | 13:31 |
spatel | should we tell OSA to disable it? | 13:33 |
opendevreview | Merged openstack/openstack-ansible-os_blazar master: Clean up debian blazar_distro_packages https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/810183 | 13:33 |
opendevreview | Merged openstack/openstack-ansible-rabbitmq_server master: Fix PKI certificates regeneration https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/808021 | 13:33 |
opendevreview | Merged openstack/openstack-ansible-haproxy_server master: Fix PKI regen behaviour https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/808023 | 13:39 |
opendevreview | Merged openstack/openstack-ansible-os_blazar master: Refactor galera_use_ssl behaviour https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/809746 | 13:39 |
opendevreview | Merged openstack/openstack-ansible-os_ironic master: Remove iscsi deploy https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/812644 | 13:44 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests master: Bump ansible and collection versions https://review.opendev.org/c/openstack/openstack-ansible-tests/+/812684 | 13:47 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests master: Bump ansible and collection versions https://review.opendev.org/c/openstack/openstack-ansible-tests/+/812684 | 14:24 |
opendevreview | Merged openstack/openstack-ansible master: Change pki_create_ca condition https://review.opendev.org/c/openstack/openstack-ansible/+/809205 | 14:31 |
jrosser | i think i made a patch a long time ago which disabled haproxy on dedicated network nodes | 15:30 |
jrosser | sounds very similar to what you describe - the package is installed for use inside the ns but it starts an instance on the host anyway | 15:30 |
jrosser | jamesdenton: spatel https://github.com/openstack/openstack-ansible-os_neutron/commit/ccd396eb6f1879d9b84918669c8d13209970e289 | 15:32 |
spatel | jrosser ah!! | 15:37 |
jrosser | that quite likely needs the conditionals updating for OVN | 15:38 |
spatel | why is it running on my compute nodes. | 15:38 |
spatel | ansible_hostname in groups['neutron_metadata_agent'] ? | 15:39 |
jrosser | is that the right group name? | 15:39 |
spatel | maybe that group is not matching this.. just guessing | 15:40 |
spatel | I will test that out | 15:40 |
spatel | maybe we need this - neutron_services['neutron-ovn-controller'] | 15:40 |
jrosser | it will have to be an 'or' with the other group | 15:41 |
spatel | ovn-controller runs on all compute nodes and that is where we're running haproxy | 15:41 |
jrosser | that sounds about right, anywhere that is ovn-controller but not an actual haproxy_all member should have it disabled | 15:46 |
mgariepy | can we disable haproxy based on haproxy_all membership directly, not only the ovn one? | 16:12 |
mgariepy | https://github.com/openstack/openstack-ansible-os_neutron/blob/master/vars/debian.yml#L100-L101 | 16:12 |
jrosser | you need to leave the host haproxy enabled on an AIO because it's all collapsed on the same node | 16:13 |
jrosser | thats why the conditions are more complicated | 16:14 |
mgariepy | yeah but if the host is in haproxy_all, then enable haproxy? | 16:14 |
mgariepy | no? | 16:14 |
jrosser | oh like make state: (ansible_hostname in groups['haproxy_all']) | ternary('started', 'stopped') | 16:16 |
jrosser | similar for enabled | 16:17 |
mgariepy | yep. | 16:26 |
mgariepy | should be good enough i think. | 16:26 |
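As a rough sketch, the ternary idea above could look like this as a role task. The task name and placement are hypothetical; only the `haproxy_all` group name and the `ternary` expression come from the discussion:

```yaml
# Hypothetical task: keep the distro-level haproxy service down except on
# hosts that are real haproxy_all members (e.g. an AIO, where the API load
# balancer and the computes are collapsed onto the same node). The
# package stays installed so the metadata agent can still spawn its own
# haproxy inside the ovnmeta-* namespace.
- name: Ensure host-level haproxy only runs on haproxy_all members
  service:
    name: haproxy
    state: "{{ (ansible_hostname in groups['haproxy_all']) | ternary('started', 'stopped') }}"
    enabled: "{{ ansible_hostname in groups['haproxy_all'] }}"
```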
spatel | question, does keepalived have a feature to sync connections to the backup server? | 16:56 |
mgariepy | i don't think so. but i might be wrong. | 17:32 |
mgariepy | hmm apparently it can.. | 17:39 |
mgariepy | spatel, what connection do you want to sync? | 17:39 |
spatel | TCP | 17:39 |
mgariepy | what service ? | 17:40 |
spatel | i have some applications that may go into a bad state if a failover happens | 17:40 |
mgariepy | udp is pretty much stateless ;p | 17:40 |
spatel | some custom application built in house | 17:40 |
mgariepy | they fail if the DB switch ? | 17:41 |
spatel | we have large UDP flows but we don't run them through the firewall because of connection tracking issues :) | 17:41 |
spatel | we have some signaling servers running on a custom TCP stack that need connection mirroring | 17:41 |
spatel | in my datacenter i have a Cisco ASA firewall which does a good job of mirroring TCP state. but i am building a cloud in a remote datacenter where i don't have the option to deploy a Cisco ASA, so i am building an in-house keepalived + iptables based firewall to protect some services. | 17:42 |
spatel | I think keepalived has an LVS option or something which syncs connections to the standby, but i never tested it; i am building a lab to test that out | 17:43 |
spatel | if anyone already knows, please share some ideas or advice so i don't waste lots of time :) | 17:43 |
jamesdenton | i'd be curious to know, too. We used F5s w/ connection mirroring and it worked well for us | 18:11 |
spatel | I have so many F5 and they are great | 18:12 |
spatel | let me setup LAB and i will let you know how it goes | 18:12 |
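For the lab: the connection sync keepalived offers is the LVS sync daemon, which mirrors IPVS (virtual_server) connection state from master to backup; it does not sync iptables conntrack entries (that is conntrackd's job), so a plain VRRP + iptables setup won't get stateful failover from it. A hedged keepalived.conf sketch, with made-up interface names and addresses:

```
global_defs {
    # start the LVS sync daemon on eth0, bound to VRRP instance VI_1;
    # the master multicasts IPVS connection state, backups absorb it
    lvs_sync_daemon eth0 VI_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        203.0.113.10/24
    }
}
```

Only connections through an IPVS virtual_server defined alongside this would survive a failover; for the iptables firewall itself, conntrackd between the two nodes is the closer equivalent of the ASA/F5 connection mirroring mentioned below.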
jamesdenton | thx | 18:12 |
spatel | Have you used the Cisco CML lab? it's amazing and worth having, especially for network lovers :) | 18:20 |
spatel | jamesdenton ^^ | 18:20 |
jamesdenton | i've heard of it, but have never used it | 18:20 |
jamesdenton | actually, i may be thinking of VIRL | 18:21 |
spatel | it's not free; i paid a $200 annual license cost | 18:21 |
spatel | you can create awesome lab and do some good functional testing | 18:21 |
jamesdenton | that's reasonable | 18:21 |
spatel | Here i am building my keepalived lab :) - https://ibb.co/vVV8bZ6 | 18:22 |
jrosser | we have pretty much complete iptables on our OSA deployment | 18:45 |
jrosser | that was quite a task | 18:45 |
jrosser | spatel: you should look at this if you are using ansible for your stuff https://github.com/logan2211/ansible-iptables | 18:46 |
spatel | i am planning to use ansible to manage the iptables rules, so yes it's helpful :) | 18:48 |
spatel | thanks | 18:48 |
jrosser | that has a really nice way of defining the rules in host/group vars so they become additive | 18:48 |
jrosser | though you have to be super super careful on nodes where neutron is manipulating iptables | 18:49 |
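The additive host/group vars pattern jrosser mentions is roughly the following; this is illustrative only, the variable names are hypothetical, so check the logan2211/ansible-iptables README for the role's real schema:

```yaml
# group_vars/dmz.yml -- rules every DMZ host gets (hypothetical var names)
iptables_group_rules:
  - chain: INPUT
    protocol: tcp
    dport: "443"
    jump: ACCEPT

# host_vars/sip01.yml -- extra rules for one signaling host; the role
# merges host- and group-level lists so each layer only adds what it needs
iptables_host_rules:
  - chain: INPUT
    protocol: udp
    dport: "5060:5061"
    jump: ACCEPT
```

As jrosser warns, on nodes where neutron also rewrites iptables, any role that flushes and reapplies chains has to be scoped carefully so it doesn't clobber neutron-managed rules.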
spatel | My firewall is going to be totally different hardware/server | 18:51 |
jrosser | ah ok - we've also put it across all nodes in the whole deployment | 18:51 |
spatel | i will surely take a look to see how i can utilize it | 18:52 |
jrosser | trying to address hard edge / soft center design | 18:52 |
spatel | some of my applications sit in a DMZ so i need some kind of dedicated firewall rules to protect them.. even though i have sec-groups in openstack | 18:53 |
spatel | now i am so addicted to OSA; i didn't realize i'm running 6 good-size clouds on it :) and building 3 more outside the US | 18:55 |
spatel | the other department is running kolla-ansible and thinking of trying out OSA after watching my rapid OSA deployments.. hehe | 18:56 |
spatel | soon we are planning to build a large GPU cluster, so we'll see how it goes.. | 18:57 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!