*** jralbert has quit IRC | 00:23 | |
*** spatel has joined #openstack-ansible | 02:10 | |
*** evrardjp has quit IRC | 02:33 | |
*** evrardjp has joined #openstack-ansible | 02:33 | |
*** miloa has joined #openstack-ansible | 04:12 | |
*** miloa has quit IRC | 04:18 | |
*** d34dh0r53 has quit IRC | 04:30 | |
*** akahat has quit IRC | 05:02 | |
*** spatel has quit IRC | 05:05 | |
*** akahat has joined #openstack-ansible | 05:14 | |
*** SiavashSardari has joined #openstack-ansible | 05:25 | |
*** dpawlik has joined #openstack-ansible | 06:33 | |
*** dpawlik has quit IRC | 06:37 | |
*** dpawlik has joined #openstack-ansible | 06:40 | |
*** dpawlik has quit IRC | 06:42 | |
*** dpawlik has joined #openstack-ansible | 06:54 | |
*** dpawlik has quit IRC | 06:55 | |
*** pcaruana has quit IRC | 07:11 | |
*** dpawlik has joined #openstack-ansible | 07:13 | |
*** dpawlik has quit IRC | 07:13 | |
*** dpawlik has joined #openstack-ansible | 07:14 | |
*** andrewbonney has joined #openstack-ansible | 07:14 | |
*** pcaruana has joined #openstack-ansible | 07:40 | |
*** luksky has joined #openstack-ansible | 07:42 | |
*** rpittau|afk is now known as rpittau | 07:43 | |
*** tosky has joined #openstack-ansible | 07:49 | |
*** miloa has joined #openstack-ansible | 08:42 | |
*** miloa has quit IRC | 08:53 | |
*** snapdeal has joined #openstack-ansible | 09:07 | |
admin0 | morning | 09:22 |
*** dpawlik has quit IRC | 09:28 | |
*** akahat has quit IRC | 09:28 | |
*** snapdeal has quit IRC | 09:43 | |
*** dpawlik has joined #openstack-ansible | 09:52 | |
*** akahat has joined #openstack-ansible | 09:53 | |
*** rpittau is now known as rpittau|bbl | 09:54 | |
*** dpawlik has quit IRC | 10:45 | |
*** dpawlik9 has joined #openstack-ansible | 10:53 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Map dbaas and lbaas with role defaults https://review.opendev.org/c/openstack/openstack-ansible/+/784113 | 11:07 |
openstackgerrit | Merged openstack/openstack-ansible-os_nova master: Set default qemu settings for RBD https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/783829 | 11:08 |
openstackgerrit | Merged openstack/openstack-ansible-openstack_hosts master: Replace import with include https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 11:18 |
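[Editor's note: the "Replace import with include" patch refers to the standard Ansible distinction below — an illustrative sketch, not the actual patch contents. `configure.yml` and `something_enabled` are made-up names.]

```yaml
# import_tasks is resolved statically at parse time, so a `when:` is copied
# onto every imported task and evaluated per-task; include_tasks is resolved
# at runtime, so the condition gates the whole file once.
- import_tasks: configure.yml     # static (old style)

- include_tasks: configure.yml    # dynamic (new style)
  when: something_enabled | bool
```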
openstackgerrit | Merged openstack/openstack-ansible-os_trove master: Use ansible_facts[] instead of fact variables https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/780732 | 11:26 |
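[Editor's note: the "Use ansible_facts[] instead of fact variables" series is the well-known Ansible pattern sketched below; the tasks are illustrative, not from the patch.]

```yaml
# Injected fact variables (ansible_distribution etc.) only exist when
# inject_facts_as_vars is enabled; the ansible_facts dictionary always does,
# and is cheaper because no per-fact variables are templated in.
- debug:
    msg: "{{ ansible_distribution }}"           # old: injected fact variable

- debug:
    msg: "{{ ansible_facts['distribution'] }}"  # new: works with inject_facts_as_vars = false
```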
SiavashSardari | Hi everyone | 11:44 |
SiavashSardari | I have a problem with oslomsg mq notify setup | 11:45 |
SiavashSardari | I think some logic here is not quite right | 11:45 |
SiavashSardari | https://opendev.org/openstack/openstack-ansible-tests/src/branch/master/sync/tasks/mq_setup.yml#L69-L71 | 11:45 |
SiavashSardari | and also in lines 82 and 97 | 11:46 |
SiavashSardari | I wanted to upload a patch but I thought it would be great to get your input first | 11:48 |
jrosser | SiavashSardari: can you explain what you think is wrong? | 11:52 |
SiavashSardari | jrosser Oops, I always forget to explain '=D . I want to add another rabbitmq cluster for notifications. I added the proper skel info to the env.d folder and installed a cluster with the rabbit role. then I added oslomsg_notify_* variables and ran the playbooks. I noticed that there are no users and vhosts on the new notify rabbit cluster. I did a little digging | 11:57 |
SiavashSardari | and got to the link above. | 11:57 |
jrosser | this may address it? https://review.opendev.org/c/openstack/openstack-ansible-tests/+/785224 | 11:58 |
SiavashSardari | yep that's the case. Thanks | 12:00 |
jrosser | if you could test that and leave a comment on the patch it would be great | 12:00 |
SiavashSardari | just out of curiosity, I can't see how (_oslomsg_rpc_vhost is undefined) can happen? | 12:01 |
SiavashSardari | yep actually I did the exact same thing on my setup and it worked. but let me check again just to be sure | 12:02 |
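[Editor's note: the intent of the mq_setup conditions under discussion can be sketched as below. This is a hypothetical Python model of the decision logic, not the actual Ansible tasks; the helper name and dict shape are invented for illustration.]

```python
# Sketch of the intended mq_setup decision: the notify user/vhost must be
# configured whenever the notify endpoint differs from the rpc endpoint in
# any way (separate cluster, vhost or user), not only when the rpc side is
# undefined -- which is roughly the bug SiavashSardari hit.
def needs_notify_setup(rpc, notify):
    """rpc/notify are dicts with 'host', 'vhost' and 'user' keys,
    or None when that side is not configured at all."""
    if notify is None:
        return False          # notifications not configured at all
    if rpc is None:
        return True           # only the notify side is configured
    # any difference means the notify side needs its own user/vhost setup
    return (notify["host"] != rpc["host"]
            or notify["vhost"] != rpc["vhost"]
            or notify["user"] != rpc["user"])
```

With a dedicated notifications cluster (different host), this returns True even though all the rpc variables are defined.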
SiavashSardari | jrosser qq I was testing just on the cinder role, and noonedeadpunk's patch worked. how can I use the openstack-ansible-tests repo to update all my repos? | 12:09 |
jrosser | SiavashSardari: unfortunately you can't, that is automation for when you submit a patch to gerrit | 12:10 |
jrosser | once it merges in openstack-ansible-tests there is scripting which creates the same patch in all the relevant repositories automatically | 12:11 |
SiavashSardari | oh. what about backports? | 12:11 |
jrosser | generally only bugfixes are backported, and the merged patch from openstack-ansible-tests would be cherry-picked to a stable branch and then the whole auto-generation of patches would happen again | 12:12 |
SiavashSardari | jrosser Thanks for the explanation. so I should fork everything on our gitlab, commit to every repo and use them as role_requirements? Please tell me there is another way =$ X-P | 12:18 |
*** rpittau|bbl is now known as rpittau | 12:31 | |
noonedeadpunk | well, technically we could backport it somewhere, but it will produce another pile of work there as some roles' CI might be broken, especially for branches older than V... | 12:38 |
*** spatel_ has joined #openstack-ansible | 12:39 | |
*** spatel_ is now known as spatel | 12:39 | |
*** d34dh0r53 has joined #openstack-ansible | 12:53 | |
SiavashSardari | noonedeadpunk can we at least backport to V? there are some changes in V compared to U, but I think at some point I will upgrade to U. | 12:59 |
noonedeadpunk | You can easily jump from T to V directly just in case | 13:00 |
SiavashSardari | noonedeadpunk OSA upgrades have always worked fine for me, but last time I upgraded from T to U, OSA was fine but the ceph part caused us data loss, and now I have a very hard job convincing management of any upgrade. | 13:04 |
noonedeadpunk | You can easily leave ceph as is actually | 13:04 |
noonedeadpunk | It's not a hard requirement to match ceph-ansible (or ceph) with a specific osa version. It's more like what we're providing by default, and that can be overridden to match current settings | 13:05 |
SiavashSardari | That is my plan but management sees upgrades as risks now '=( | 13:05 |
noonedeadpunk | gotcha | 13:06 |
SiavashSardari | anyways can we backport your change to V? | 13:07 |
noonedeadpunk | let's cross the bridge when we come to it. We first need to land it for master anyway | 13:08 |
SiavashSardari | OK then (y) | 13:09 |
noonedeadpunk | btw, can I get another vote for https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/784145/6 to be able to rebase upper patches just on master and resolve conflicts? | 13:10 |
mgariepy | noonedeadpunk, done. | 13:12 |
noonedeadpunk | awesome, thanks! | 13:12 |
*** tosky has quit IRC | 13:19 | |
SiavashSardari | The other day I caught some strange behavior. I created a VM with an external network, and inside the VM I ran tcpdump and noticed there were SYN packets with destinations other than my IP address (but within my IP pool; I have two external subnets on my external net). I did a little digging on the issue and noticed that the SYN packets' destinations were the | 13:21 |
SiavashSardari | IPs which are in my pool but not assigned to any port or fip. and when I assign one of those IPs to a port the SYN packets vanish. the sg has the default rules plus ssh and ping rules. it's not a security issue but still a very strange thing. maybe I have some misconfiguration? | 13:21 |
*** tosky has joined #openstack-ansible | 13:25 | |
*** tosky has quit IRC | 13:29 | |
*** tosky has joined #openstack-ansible | 13:29 | |
spatel | SiavashSardari check bridge ageing time | 13:57 |
spatel | brctl showstp <bridgename> | 13:57 |
spatel | In the past i had that issue, it was a neutron bug setting the ageing time to zero, generating a flood to all ports | 13:58 |
*** luksky has quit IRC | 14:05 | |
*** luksky has joined #openstack-ansible | 14:06 | |
SiavashSardari | spatel Thanks, I'm using OVS and I don't know how to check the ageing time there. but thank you for the hint | 14:06 |
spatel | That issue is more on the LinuxBridge side.. | 14:06 |
SiavashSardari | spatel I found some snooping aging time settings in the ovs docs. maybe I can find something | 14:10 |
SiavashSardari | spatel just to be sure, your issue was like mine? I mean only unassigned IPs were getting broadcasts? | 14:11 |
spatel | In my case the bridge was flooding every single unicast packet to each port, so when i ran tcpdump i could see my neighbor's vm traffic | 14:13 |
SiavashSardari | oh that is not my case. that is a very big security issue. my problem is that I just get SYN packets with destination of unassigned IP addrs | 14:15 |
SiavashSardari | put another way, SYN packets which don't match any MAC addrs or sg rules get broadcast | 14:16 |
SiavashSardari | and when I assign that IP to a port, which gets a MAC addr, the broadcasting stops. | 14:17 |
*** macz_ has joined #openstack-ansible | 14:17 | |
spatel | look into the openflow rules, maybe something is there, or maybe it's normal behavior. I am not running OVS so no idea | 14:28 |
*** SiavashSardari has quit IRC | 14:38 | |
openstackgerrit | Merged openstack/openstack-ansible-os_trove master: Change default pool subnet https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/784145 | 14:46 |
openstackgerrit | Merged openstack/openstack-ansible-os_designate master: Generate designate_pool_uuid dynamically https://review.opendev.org/c/openstack/openstack-ansible-os_designate/+/771841 | 14:47 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:04 |
openstack | Meeting started Tue Apr 13 15:04:14 2021 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:04 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:04 |
*** openstack changes topic to " (Meeting topic: openstack_ansible_meeting)" | 15:04 | |
openstack | The meeting name has been set to 'openstack_ansible_meeting' | 15:04 |
noonedeadpunk | #topic rollcall | 15:04 |
*** openstack changes topic to "rollcall (Meeting topic: openstack_ansible_meeting)" | 15:04 | |
noonedeadpunk | o/ | 15:04 |
noonedeadpunk | #topic bug triage | 15:11 |
*** openstack changes topic to "bug triage (Meeting topic: openstack_ansible_meeting)" | 15:11 | |
noonedeadpunk | https://bugs.launchpad.net/openstack-ansible/+bug/1923183 | 15:12 |
openstack | Launchpad bug 1923183 in openstack-ansible "Spice html5 client does not get installed in nova api" [Undecided,New] | 15:12 |
noonedeadpunk | So, it seems that it is distro install | 15:14 |
noonedeadpunk | which fails for centos 8... | 15:14 |
jrosser | hello | 15:17 |
noonedeadpunk | hey! sorry, decided to start the meeting at the new time today | 15:18 |
noonedeadpunk | probably worth switching from next week.... | 15:18 |
jrosser | yeah, just saw my email! | 15:19 |
noonedeadpunk | What do you think about https://bugs.launchpad.net/openstack-ansible/+bug/1923184 ? | 15:21 |
openstack | Launchpad bug 1923184 in openstack-ansible "Allow disable console_agent when using spice console" [Undecided,New] | 15:21 |
noonedeadpunk | sounds pretty fair to me... | 15:23 |
*** gshippey has joined #openstack-ansible | 15:23 | |
jrosser | a config override can disable that? | 15:24 |
noonedeadpunk | I think it can.... | 15:24 |
noonedeadpunk | but probably it really makes sense to change the default behaviour we have | 15:24 |
jrosser | i'm struggling to find docs for this | 15:25 |
noonedeadpunk | https://docs.openstack.org/nova/victoria/configuration/config.html#spice.agent_enabled | 15:26 |
noonedeadpunk | I think we just misused nova_console_agent_enabled... | 15:27 |
jrosser | yeah that is pretty odd use of it | 15:28 |
jrosser | there is the same oddness in the vnc conditional block too | 15:29 |
noonedeadpunk | Eventually, I'd probably just drop nova_console_agent_enabled entirely... | 15:29 |
noonedeadpunk | and use overrides in case agent_enabled should be adjusted | 15:30 |
jrosser | feels like this also mis-uses that variable https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L212 | 15:31 |
jrosser | that'll be disabling [spice|vnc] on aarch64 | 15:31 |
jrosser | which is serial only | 15:31 |
noonedeadpunk | hm, yes... I think that we can just set `nova_console_type` to false or smth | 15:32 |
noonedeadpunk | wait... | 15:32 |
*** gyee has joined #openstack-ansible | 15:32 | |
noonedeadpunk | https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L266 | 15:32 |
noonedeadpunk | we're not using it in `[serial_console]` | 15:33 |
noonedeadpunk | Super weird really | 15:33 |
noonedeadpunk | So, we can either fix the behaviour, then it would be limited to spice only I guess? | 15:33 |
noonedeadpunk | Or we can just drop the variable :) | 15:33 |
jrosser | this code is nuts | 15:36 |
jrosser | seems like a legitimate use case to want to disable it for windows | 15:37 |
noonedeadpunk | so you suggest fixing it? | 15:38 |
noonedeadpunk | or we can fix, backport and drop afterwards :p | 15:38 |
jrosser | for spice i think fix | 15:38 |
jrosser | then i have no idea what the point of this is https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L224 | 15:39 |
jrosser | so drop that, and clean up all the aarch64 logic | 15:39 |
noonedeadpunk | I'm wondering more about https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L212 | 15:40 |
jrosser | that looks bogus, and should just be True | 15:40 |
noonedeadpunk | To add to that, it's just not used anywhere | 15:41 |
noonedeadpunk | https://codesearch.opendev.org/?q=nova_spice_console_agent_enabled&i=nope&files=&excludeFiles=&repos= | 15:41 |
jrosser | both spice and vnc should already be disabled for aarch64 from here https://github.com/openstack/openstack-ansible-os_nova/blob/master/defaults/main.yml#L266 | 15:41 |
noonedeadpunk | and they are I'm pretty sure, yes | 15:41 |
jrosser | oh well that's actually the bug then, this line https://github.com/openstack/openstack-ansible-os_nova/blob/master/templates/nova.conf.j2#L79 | 15:42 |
noonedeadpunk | What frustrates me more is that we have nova_spice_console_agent_enabled, nova_novncproxy_agent_enabled and nova_console_agent_enabled | 15:42 |
jrosser | should be nova_spice_console_agent_enabled | 15:42 |
jrosser | there is no need for nova_console_agent_enabled or nova_novncproxy_agent_enabled that i can see | 15:42 |
jrosser | it's a specific setting for spice? | 15:43 |
noonedeadpunk | yeah | 15:43 |
noonedeadpunk | Well, from what I see from nova config reference | 15:43 |
jrosser | what a mess /o\ my head hurts :) | 15:44 |
noonedeadpunk | well, I'd actually vote for removing all these options and making use of overrides. But probably nova_spice_console_agent_enabled is really a pretty widespread usecase, so maybe worth leaving this single var indeed... | 15:45 |
jrosser | i think that's the best simplification given it's an already existing var | 15:45 |
noonedeadpunk | but yeah, that's a mess and great it has been raised | 15:45 |
openstackgerrit | Merged openstack/openstack-ansible master: Return PyMySQL installation for distro installs https://review.opendev.org/c/openstack/openstack-ansible/+/784957 | 15:45 |
noonedeadpunk | ok, agreed then | 15:46 |
noonedeadpunk | #topic office hours | 15:46 |
*** openstack changes topic to "office hours (Meeting topic: openstack_ansible_meeting)" | 15:46 | |
noonedeadpunk | The most problematic things at the moment are not being able to merge the bump and the mariadb version upgrade | 15:46 |
jrosser | yeah, very sad that there's been no further movement on the mariadb jira ticket | 15:47 |
noonedeadpunk | bump fails on rabbitmq ssl, while SSL should not be actually used according to the config... | 15:47 |
*** luksky has quit IRC | 15:47 | |
jrosser | i wonder if it's worth *downgrading* galera :( | 15:47 |
noonedeadpunk | and I think we'd want to merge it really asap to be able to do the freeze for wallaby, and not go forward with master | 15:47 |
jrosser | we'd have to go back to 10.5.7 or something | 15:48 |
noonedeadpunk | and can we technically downgrade to a version prior to what was in V? | 15:48 |
jrosser | perhaps, we pin the exact version but the package manager might not like it | 15:48 |
noonedeadpunk | but I mean the mysql_upgrade script might not like it either... | 15:49 |
jrosser | oh right sure yes, it's just not good really at all | 15:49 |
noonedeadpunk | so I think our best option is to move to an admin user somehow, and then we won't care about the root user's missing privileges? | 15:50 |
*** luksky has joined #openstack-ansible | 15:50 | |
jrosser | i was thinking we should do something really minimal in the PKI role just to get the rabbitmq stuff sorted out | 15:50 |
noonedeadpunk | and I think the patch for admin has already merged? | 15:50 |
jrosser | gshippey did a lot of investigation on this | 15:51 |
noonedeadpunk | I will try to sort out bump upgrade tomorrow and also check out on maria after that. | 15:52 |
jrosser | the root user gets broken at upgrade, that's ultimately the issue | 15:52 |
jrosser | it's possible to create a file with some SQL in it which is executed at startup | 15:52 |
noonedeadpunk | Btw for trove, I kind of got stuck on internet access from VMs inside the aio, as it tries to pull a docker image... | 15:52 |
jrosser | that was a possible solution in the mariadb jira ticket: fix the root user permissions at startup with extra params pointing to statements to execute immediately | 15:53 |
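[Editor's note: the "extra params pointing to statements to execute immediately" approach would be MariaDB's `--init-file` server option. A sketch under assumptions — the path is illustrative, and whether `'root'@'localhost'` is the broken account depends on the deployment:]

```sql
-- /etc/mysql/fix-root-grants.sql (illustrative path): re-issue the ALL
-- grant so it is rewritten with the new release's meaning of "ALL",
-- restoring the privileges lost across the 10.5.8 -> 10.5.9 upgrade.
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```

The server would be started once with `--init-file=/etc/mysql/fix-root-grants.sql` (init-file statements run before any client can connect), and the file removed afterwards, as discussed below.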
noonedeadpunk | but we won't need the root user if we create the admin one somewhere before running setup-infrastructure | 15:53 |
noonedeadpunk | and updating my.cnf | 15:54 |
jrosser | ah, in the upgrade scripts | 15:54 |
noonedeadpunk | (I specifically changed the order of tasks lately to prevent my.cnf from being updated before the maria upgrade) | 15:54 |
noonedeadpunk | yep | 15:54 |
noonedeadpunk | I think that might work actually | 15:55 |
noonedeadpunk | and root auth with socket I think still works in 10.5.9? | 15:55 |
jrosser | not after the upgrade iirc | 15:56 |
jrosser | as the root user perms get broken | 15:56 |
noonedeadpunk | hm, I thought it affects only password auth? | 15:56 |
noonedeadpunk | but yeah, might be... | 15:56 |
jrosser | the ALL grant goes away | 15:57 |
noonedeadpunk | from all root users or only `root`@`%` or smth? | 15:57 |
noonedeadpunk | as how do the non-upgrade tests pass then.... | 15:58 |
noonedeadpunk | ah, it's broken during running upgrade.... | 15:58 |
noonedeadpunk | *mysql upgrade | 15:58 |
noonedeadpunk | damn it.... | 15:58 |
noonedeadpunk | mariadb has become so buggy lately... | 15:58 |
jrosser | the grants are a bitfield | 15:59 |
jrosser | and the interpretation of the bitfield changes 10.5.8 -> 10.5.9 | 16:00 |
noonedeadpunk | oh........... | 16:00 |
jrosser | so what previously meant 'ALL' no longer means 'ALL' | 16:00 |
noonedeadpunk | I read bug report wrong way then.... | 16:00 |
jrosser | oh maybe me too :) | 16:00 |
noonedeadpunk | nah, I really just did quick lookthrough | 16:00 |
noonedeadpunk | So it was me under impressions that things are not so bad | 16:01 |
jrosser | it's this massive number "access":18446744073709551615 <- what do the bits inside that mean | 16:01 |
jrosser | `And with the addition of the new privilege SLAVE MONITOR, the value has become insufficient for "GRANT ALL":` | 16:02 |
jrosser | doh | 16:02 |
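[Editor's note: a toy Python model of the bitfield problem quoted above — illustrative only, not MariaDB's actual code or bit layout. The point is that a stored "GRANT ALL" mask stops meaning "ALL" once a release defines a new privilege bit such as SLAVE MONITOR.]

```python
# "GRANT ALL" is stored as the OR of every privilege bit known when the
# grant was written; when a later release defines an additional bit, the
# stored value no longer covers the new ALL mask.
OLD_PRIVS = [1 << n for n in range(3)]   # pretend 3 privileges existed
NEW_PRIVS = OLD_PRIVS + [1 << 3]         # a release adds a 4th privilege bit

def all_acl(privs):
    """OR all known privilege bits into a single 'ALL' mask."""
    mask = 0
    for p in privs:
        mask |= p
    return mask

# the "massive number" from the log is simply all 64 bits set:
assert 18446744073709551615 == 2**64 - 1

stored_grant = all_acl(OLD_PRIVS)            # written by the old version
assert stored_grant == all_acl(OLD_PRIVS)    # meant "ALL" back then
assert stored_grant != all_acl(NEW_PRIVS)    # no longer "ALL" after upgrade
```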
noonedeadpunk | yeah, so then indeed the only option is a startup script, that would indeed fix permissions and get removed afterwards | 16:02 |
noonedeadpunk | but that's so.... | 16:03 |
jrosser | i dug around in the code a bit before 8-O https://github.com/MariaDB/server/blob/10.6/mysql-test/main/upgrade_MDEV-19650.test#L20-L53 | 16:05 |
noonedeadpunk | I was probably confused with ` IF(JSON_VALUE(Priv, '$.plugin') IN ('mysql_native_password', 'mysql_old_password')` | 16:08 |
jrosser | super fragile here too https://github.com/MariaDB/server/blob/10.6/sql/privilege.h#L72-L78 | 16:08 |
noonedeadpunk | doh, indeed... Well, that's C, so everything is fragile... | 16:08 |
noonedeadpunk | I think that's one of the reasons why Rust is becoming more and more popular | 16:09 |
jrosser | anyway :) | 16:09 |
noonedeadpunk | #endmeeting | 16:09 |
*** openstack changes topic to "Launchpad: https://launchpad.net/openstack-ansible || Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Review Dashboard: http://bit.ly/osa-review-board-v3" | 16:09 | |
openstack | Meeting ended Tue Apr 13 16:09:19 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:09 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-04-13-15.04.html | 16:09 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-04-13-15.04.txt | 16:09 |
openstack | Log: http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-04-13-15.04.log.html | 16:09 |
jrosser | we do kind of need to work on the merging backlog | 16:11 |
jrosser | there's a lot of stuff outstanding | 16:11 |
noonedeadpunk | agree | 16:12 |
noonedeadpunk | oh, btw | 16:13 |
noonedeadpunk | I wanted to discuss haproxy changes from andrewbonney.... | 16:13 |
noonedeadpunk | I'm pretty afraid of https://review.opendev.org/c/openstack/openstack-ansible/+/782373/1 | 16:15 |
jrosser | oh yeah, so maybe something for tomorrow to talk to andrew | 16:17 |
jrosser | but we've added loads of monitoring on haproxy/keepalived and see some really odd things happening | 16:17 |
noonedeadpunk | sounds good | 16:17 |
jrosser | that might also be one to ask evrardjp about | 16:18 |
jrosser | as i don't know the history of that | 16:18 |
noonedeadpunk | yeah, I'm just trying to be cautious here, as I really feel we're not doing our best there, but we need to change delicately, as it kind of worked somehow that way for years... | 16:18 |
jrosser | so we find things like: if you take down the external thing that keepalived is using as a healthcheck | 16:19 |
jrosser | then all of a sudden the haproxies are restarted | 16:19 |
jrosser | and that's kind of surprising | 16:20 |
noonedeadpunk | yeah, I was facing that as well, and the second part of the work is good (except changing behaviour) | 16:20 |
noonedeadpunk | but I'm not sure about keepalived_haproxy_notifications.sh script | 16:21 |
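[Editor's note: for context, a keepalived fragment showing where such a notify script hooks in — a minimal sketch with placeholder interface, VIP, priorities and check script; only the notify script name comes from the log. Restarting haproxy in the notify hook is a policy choice of the script, not something keepalived requires.]

```
vrrp_script check_haproxy {
    script "/etc/keepalived/haproxy_check.sh"   # placeholder healthcheck
    interval 2
}

vrrp_instance external {
    interface eth0                  # placeholder
    virtual_router_id 10
    priority 100
    virtual_ipaddress {
        203.0.113.10                # placeholder VIP
    }
    track_script {
        check_haproxy
    }
    # invoked on every state transition with the new state
    # (MASTER/BACKUP/FAULT) among its arguments
    notify "/etc/keepalived/keepalived_haproxy_notifications.sh"
}
```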
jrosser | this also led to chasing a bunch of galera/haproxy related stuff | 16:21 |
jrosser | i think we are hitting the 1500 max connections limit on only modest deployments | 16:21 |
jrosser | ~35 computes or so + HA control plane | 16:21 |
jrosser | and did you ever see this? https://www.percona.com/blog/2013/05/02/galera-flow-control-in-percona-xtradb-cluster-for-mysql/ | 16:22 |
noonedeadpunk | umm... nope... But this could probably have changed in modern galera? | 16:23 |
noonedeadpunk | but not dramatically I think... | 16:23 |
jrosser | perhaps, though i think we are seeing flow control kick in when we run tempest against our test lab | 16:23 |
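[Editor's note: flow control activity can be confirmed directly from the wsrep status counters — these are standard Galera status variables, run on any cluster node:]

```sql
-- Fraction of time since the last FLUSH STATUS that this node spent
-- paused by flow control (0.0 = never paused, 1.0 = always paused).
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';

-- Writesets queued on this node waiting to be applied; a persistently
-- large queue is what triggers flow control in the first place.
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue';
```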
*** rpittau is now known as rpittau|afk | 16:24 | |
noonedeadpunk | well, I think that's because haproxy can't really properly pick the real master, compared to maxscale? | 16:24 |
jrosser | it's an indication that the slaves cannot keep up with the replication rate | 16:25 |
noonedeadpunk | but haproxy might pick a slave as "master"? And then it will "write" to the slave, which will forward traffic to the master and then receive the same stuff back? | 16:26 |
jrosser | hmm | 16:26 |
noonedeadpunk | ah, sorry, I think xinetd prevents this from happening.... | 16:27 |
jrosser | we're going to put mysql_exporter on it to see what's going on | 16:29 |
noonedeadpunk | hmm... I haven't really looked much into our galera setup as it was actually never a problem... | 16:29 |
noonedeadpunk | but we have `balance leastconn` and from what I see in my cluster, all backends pass the L7 check. | 16:30 |
noonedeadpunk | which means that xinetd is not really working. And it's super bad that we write to all galera backends, as that's the thing that might cause issues for you | 16:30 |
jrosser | we were also thinking that the xinetd scripts are not smart enough | 16:31 |
noonedeadpunk | we should really write only to the master, as otherwise there's extra load on the slaves | 16:31 |
noonedeadpunk | The only proper, battle-tested thing I know for galera LB was maxscale | 16:31 |
jrosser | it's possible that there are cluster states that we don't handle properly during times when flow control is active | 16:32 |
jrosser | and that's confusing the healthcheck and making it fail over to another node | 16:32 |
noonedeadpunk | well, according to https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/master/templates/clustercheck.j2#L74 we should see all except one backend as failed | 16:32 |
noonedeadpunk | ah... it's not read only.... | 16:33 |
noonedeadpunk | but it's proxying all write requests to the master anyway | 16:34 |
noonedeadpunk | tbh, I'd be glad to replace this weird haproxy balancing with maxscale... As it's able to get the current status and pass write requests only to the master and read ones to all other hosts | 16:35 |
noonedeadpunk | and I don't think we will ever be able to do that in haproxy | 16:35 |
mgariepy | for keepalived i've been setting the ping address to localhost for a couple of years now. i had stability issues with external pings in the past. | 16:38 |
jrosser | does maxscale handle its own VIP? | 16:42 |
openstackgerrit | Merged openstack/ansible-hardening master: Added pam_auth_password to nullok check https://review.opendev.org/c/openstack/ansible-hardening/+/785984 | 16:45 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Remove nova console variables https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/786077 | 16:48 |
noonedeadpunk | jrosser: no, it's keepalived that handles it | 16:49 |
jrosser | they have many words of docs but i miss a picture :) | 16:49 |
jrosser | so it's like one maxscale instance per current "haproxy" node in OSA, and a VIP with keepalived | 16:50 |
noonedeadpunk | yep | 16:50 |
jrosser | makes sense | 16:50 |
noonedeadpunk | and in maxscale you configure backends https://mariadb.com/kb/en/mariadb-maxscale-25-configuring-servers/ and monitor https://mariadb.com/kb/en/mariadb-maxscale-25-configuring-the-galera-monitor/ | 16:51 |
noonedeadpunk | and that was kind of it iirc. Considering that we already point all clients to the vip, the transition should be pretty flawless | 16:52 |
jrosser | yes, it would be a fairly OK upgrade path | 16:52 |
noonedeadpunk | oh, and you would of course also need a router https://mariadb.com/kb/en/mariadb-maxscale-25-read-write-splitting-with-mariadb-maxscale/ | 16:53 |
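[Editor's note: the three MaxScale pieces linked above (servers, galera monitor, read-write-split router) fit together roughly as below — a minimal sketch assuming MaxScale 2.5, with all section names, addresses and credentials as placeholders:]

```ini
# Minimal maxscale.cnf sketch: one [server] per galera node, a galeramon
# monitor to elect the master, and a readwritesplit router so writes go
# only to the master while reads are spread across the slaves.
[galera1]
type=server
address=172.29.236.101
port=3306
protocol=MariaDBBackend

[Galera-Monitor]
type=monitor
module=galeramon
servers=galera1
user=maxscale_monitor
password=monitor_secret

[RW-Split-Service]
type=service
router=readwritesplit
servers=galera1
user=maxscale_router
password=router_secret

[RW-Split-Listener]
type=listener
service=RW-Split-Service
protocol=MariaDBClient
port=3306
```

Clients keep pointing at the keepalived VIP; only the backend behind it changes from haproxy to maxscale.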
*** sshnaidm has joined #openstack-ansible | 17:28 | |
*** sshnaidm is now known as sshnaidm|pto | 17:28 | |
evrardjp | I missed the ping, is everything ok? | 17:38 |
evrardjp | On the first OSA clusters I built and maintained, I used my own keepalived/haproxy roles, because my haproxy requirements were quite different from what we had in OSA | 17:40 |
evrardjp | so I might help with keepalived and haproxy, but I might not be fully aware of everything in our HAproxy role | 17:40 |
evrardjp | If you have any question, shoot :) | 17:40 |
jrosser | evrardjp: it was a query on this https://review.opendev.org/c/openstack/openstack-ansible/+/782373/1 | 17:57 |
jrosser | having haproxy restart when keepalived goes to FAULT seems unusual, is there some good reason to do that which we've missed? | 17:59 |
*** larsks has joined #openstack-ansible | 18:08 | |
*** larsks has left #openstack-ansible | 18:08 | |
mgariepy | anyone here have any contact with the horizon core team? | 18:40 |
mgariepy | i have a patch that has been sitting for weeks.. | 18:40 |
*** andrewbonney has quit IRC | 18:54 | |
*** spatel has quit IRC | 19:12 | |
*** spatel_ has joined #openstack-ansible | 19:20 | |
*** spatel_ is now known as spatel | 19:20 | |
*** lvdombrkr has joined #openstack-ansible | 19:23 | |
lvdombrkr | hello all | 19:23 |
lvdombrkr | is there a possibility to set up non-containerized openstack-ansible? | 19:24 |
*** spatel has quit IRC | 19:36 | |
jrosser | lvdombrkr: yes, we support a fully "metal" deployment with no containers | 19:42 |
jrosser | you can test that in the AIO by setting the environment variable "SCENARIO=aio_metal" before running bootstrap-aio.sh | 19:42 |
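[Editor's note: the AIO metal test jrosser describes looks roughly like this per the upstream quickstart — a sketch assuming a fresh checkout under /opt/openstack-ansible; only `SCENARIO=aio_metal` and `bootstrap-aio.sh` come from the log itself:]

```shell
git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible

# select the fully "metal" (no LXC containers) layout before bootstrapping
export SCENARIO='aio_metal'
scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh

# then run the standard deployment playbooks
cd playbooks
openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml
```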
*** spatel_ has joined #openstack-ansible | 19:46 | |
*** spatel_ is now known as spatel | 19:46 | |
*** spatel has quit IRC | 20:03 | |
admin0 | lvdombrkr, though non-containerized works very well, you might end up in a situation where you install something (say for monitoring or some internal package etc) and it affects python or adds/removes something that affects the APIs .. so based on experience, i would recommend using the containerized setup | 20:07 |
admin0 | as it gives you a lot of flexibility and keeps your base-os and its packages separate from the api containers | 20:07 |
lvdombrkr | admin0 jrosser: we have an existing "manually" deployed, non-containerized openstack (Ocata, Ubuntu 16.04) in prod, but now we want to deploy a dev platform, and it would be great if we could deploy something as similar as possible | 20:13 |
admin0 | lvdombrkr, they are exactly the same .. they just run in containers | 20:13 |
admin0 | just because you used to do non-containers doesn't mean you can't give the containerized system a try :) | 20:13 |
admin0 | you can scale, move, delete, shutdown, do stuff etc freely without worrying about its effect on other containers or the base os | 20:14 |
jrosser | lvdombrkr: is it clear that we are using lxc machine containers that work just like hosts, not docker-style app containers? | 20:16 |
lvdombrkr | admin0: yes but for example we want to implement horizon/keystone multi-auth. can i customize/build container images? | 20:18 |
lvdombrkr | jrosser: ^ | 20:19 |
jrosser | there are no container images | 20:19 |
jrosser | like I say it’s not docker style | 20:19 |
jrosser | you “boot” something that’s like an Ubuntu (or whatever OS) vm via LXC | 20:20 |
jrosser | and install / configure everything with regular ansible playbooks / roles | 20:20 |
jrosser | one of the strong points of OSA is the service configuration possibilities are extremely good | 20:21 |
jrosser | lvdombrkr: if you've not used openstack-ansible before i would highly recommend building an all-in-one test setup in a single VM, instructions are here https://docs.openstack.org/openstack-ansible/victoria/user/aio/quickstart.html | 20:24 |
jrosser | that's a completely self-building deployment; it's also what we use in our CI setup so it gets very heavy testing | 20:26 |
jrosser | it's reasonably likely that you would be able to test/prototype your keystone auth setup with an AIO | 20:28 |
jrosser | my team have used the AIO as a testbed for SSO and openid-connect for example | 20:29 |
lvdombrkr | jrosser: perfect, let me test.. if i will have questions i will ping you here | 20:30 |
jrosser | sure, there's usually folk around here in the EU timezone | 20:31 |
lvdombrkr | jrosser: PS maybe you know someone who has already implemented some multi-auth? | 20:41 |
jrosser | do you have an example of what you want to do? | 20:41 |
jrosser | i'm slightly unsure if you mean multiple auth providers or multi-factor auth | 20:42 |
admin0 | lvdombrkr, the only thing container does is provide a separate env for your apis to run .. rest = all the same | 20:43 |
lvdombrkr | jrosser: authentication like this: https://openstackdocs.safeswisscloud.ch/en/quickguide/index.html | 20:46 |
lvdombrkr | admin0: good, i will test it | 20:46 |
jrosser | lvdombrkr: oh cool! keycloak :) | 20:47 |
jrosser | so you've got keycloak deployed somewhere separate from your openstack as an identity provider? | 20:48 |
*** spatel_ has joined #openstack-ansible | 20:52 | |
*** spatel_ is now known as spatel | 20:52 | |
lvdombrkr | jrosser: not yet, we are just brainstorming right now )) | 20:55 |
jrosser | ok :) so i have two regions with OSA-deployed clouds using keycloak as an IdP | 20:56 |
jrosser | OSA won't do anything to deploy keycloak for you, but integrating keystone/horizon as federated identity with OIDC is all working | 20:57 |
lvdombrkr | jrosser: you implemented it yourself? are a lot of customizations required? | 21:05 |
jrosser | lvdombrkr: there are quite a few settings to make for keystone, but it's all a set of yml that the ansible roles read | 21:08 |
jrosser | my team contributed a lot of the current OIDC support in openstack-ansible | 21:09 |
lvdombrkr | jrosser: are there no customizations in horizon? just in keystone? | 21:10 |
jrosser | well it's a mixture tbh | 21:10 |
jrosser | depends exactly what you want to achieve | 21:10 |
lvdombrkr | jrosser: ok i see.. i'm off for today, will be back tomorrow )) thanks for the clues/assistance | 21:11 |
jrosser | for keystone there is an example here https://github.com/openstack/openstack-ansible-os_keystone/blob/master/defaults/main.yml#L451-L481 | 21:11 |
jrosser | lvdombrkr: no problem | 21:12 |
*** lvdombrkr has quit IRC | 21:23 | |
*** spatel has quit IRC | 21:25 | |
*** luksky has quit IRC | 22:07 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Use ansible_facts[] instead of fact variables https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/777650 | 22:35 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Gather minimum facts https://review.opendev.org/c/openstack/openstack-ansible/+/779174 | 22:36 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Disable fact variables https://review.opendev.org/c/openstack/openstack-ansible/+/778396 | 22:36 |
*** macz_ has quit IRC | 23:26 | |
*** tosky has quit IRC | 23:34 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!