*** ianw is now known as ianw_pto | 07:13 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-pki master: Ensure key and certificate regenerated when pki_regen_cert is defined https://review.opendev.org/c/openstack/ansible-role-pki/+/808022 | 09:49 |
*** dpawlik5 is now known as dpawlik | 10:25 | |
opendevreview | Merged openstack/openstack-ansible-os_octavia master: Do not log private key https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/814430 | 11:43 |
anskiy | hey! I do remember that quite some time ago you've been discussing some problems with galera, but I can't find that discussion right now. Currently, on my semi-production deployment I see this: https://jira.mariadb.org/browse/MDEV-25368. Is there some workaround for this in either of the OSA branches, or should I just manually pin Maria to 10.5.6 (which, according to the comments there, should work fine)? | 12:02 |
noonedeadpunk | anskiy: eventually every new release of mariadb nowadays is buggy... | 12:05 |
noonedeadpunk | The most working deployment I have now is with `galera_minor_version: 12` `galera_major_version: 10.5` | 12:07 |
anskiy | noonedeadpunk: well, I see "deadlocks" on that version :( | 12:07 |
noonedeadpunk | you can actually just define these variables and run openstack-ansible galera-install.yml -e galera_upgrade | 12:07 |
noonedeadpunk | this can be worked around easily | 12:07 |
anskiy | noonedeadpunk: yeah, that's what I did, gonna try 10.5.6 as per the issue I've linked | 12:08 |
noonedeadpunk | `galera_wsrep_slave_threads: 1` should fix that iirc | 12:08 |
noonedeadpunk | https://jira.mariadb.org/browse/MDEV-22766 | 12:09 |
noonedeadpunk | Just don't try 10.5.9 if I'm not mistaken | 12:09 |
noonedeadpunk | because they break local root user privileges there | 12:09 |
anskiy | oh, reading those comments... looks like the whole 10.4 series is buggy too. Guess I'll stick with 10.5.6 for now, if that works fine. Thank you! | 12:14 |
noonedeadpunk | 10.6.4 was also broken btw - can't recall now how exactly though... | 12:15 |
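The workaround noonedeadpunk describes above can be sketched as an OSA override plus a playbook re-run. This is a hedged sketch, not a tested recipe: the variable names and version values are taken from the chat, so verify them against your galera_server role before use.

```shell
# Pin MariaDB and reduce wsrep applier threads via user_variables.yml,
# per the discussion above: 10.5.6 is the version anskiy plans to try
# for MDEV-25368, and a single applier thread works around MDEV-22766.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
galera_major_version: "10.5"
galera_minor_version: 6
galera_wsrep_slave_threads: 1
EOF

# Re-run the galera playbook with the upgrade flag, as suggested above
# (written as key=value here; the chat shorthand omits the value).
openstack-ansible galera-install.yml -e galera_upgrade=true
```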
opendevreview | Merged openstack/openstack-ansible-os_heat master: Do not install ceilometerclient https://review.opendev.org/c/openstack/openstack-ansible-os_heat/+/815468 | 13:45 |
spatel | jamesdenton around? | 17:03 |
jamesdenton | barely :) whats up? | 17:59 |
supamatt | jamesdenton: he logged off half an hour before you replied ;P | 18:18 |
jamesdenton | whoops! | 19:05 |
spatel | hey | 19:07 |
spatel | sorry i didn't see your message @jamesdenton | 19:07 |
spatel | my IRC client won't notify until i am tagged :) | 19:08 |
jamesdenton | no problem | 19:49 |
jamesdenton | spatel whats up? | 19:51 |
spatel | i am running a production-style load test and see the following on dpdk, so trying to understand what this means? | 19:51 |
spatel | https://paste.opendev.org/show/810290/ | 19:52 |
spatel | i have two cores assigned as PMD threads | 19:52 |
spatel | one of the cores is hitting 97% processing cycles | 19:53 |
spatel | does that mean my PMD CPU is busy and not handling traffic in an optimal way? | 19:53 |
spatel | my load is around 500kpps UDP packets | 19:56 |
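The numbers spatel is quoting come from OVS's per-PMD counters; a hedged way to reproduce the reading on a live compute node (assumes `ovs-appctl` access on the host running OVS-DPDK):

```shell
# Clear the PMD counters, wait a fixed interval, then read them back so the
# busy/idle percentages cover a known window rather than the process lifetime.
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 10
ovs-appctl dpif-netdev/pmd-stats-show
```

A PMD thread whose "processing cycles" sit near 100% is saturated and will start dropping packets; at ~500 kpps concentrated on one busy core, that core (rather than the NIC) is the likely bottleneck.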
spatel | one more thing: if i restart openvswitch-switch, all my vms stop pinging.. :( | 20:09 |
spatel | so i have to shut down all the vms by hand and bring them back up to fix it | 20:10 |
spatel | i thought an ovs restart would just blip networking, but in my case the network stays down until the vms are restarted | 20:10 |
mgariepy | huh. | 20:10 |
mgariepy | that's weird. | 20:10 |
mgariepy | did you try restarting neutron ? | 20:11 |
jamesdenton | restarting openvswitch-switch will clear all flows, but i would expect the neutron agent to notice and repopulate | 20:26 |
spatel | i restarted the whole box | 20:32 |
spatel | I did restart neutron also | 20:32 |
spatel | but no luck | 20:33 |
spatel | maybe in the DPDK case it's different.. | 20:33 |
jamesdenton | perhaps. could be worth submitting a bug | 20:53 |
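jamesdenton's expectation above can be checked as follows. One hedged explanation for spatel's symptom: with OVS-DPDK, VMs attached via `dpdkvhostuser` ports have OVS acting as the vhost-user server, so restarting OVS destroys the socket and QEMU does not reconnect (unlike `dpdkvhostuserclient` ports, which support reconnect). The agent service name below is the usual one on an OSA compute node and may differ in a given deployment:

```shell
# After restarting openvswitch-switch, the neutron agent should notice the
# cleared flows and repopulate them; kick it explicitly, give it time to
# resync, then check that flows are back on the integration bridge.
systemctl restart neutron-openvswitch-agent
sleep 30
ovs-ofctl dump-flows br-int | head
```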
spatel | i will | 20:57 |
spatel | jamesdenton do you know how to read this? - https://paste.opendev.org/show/810291/ | 20:57 |
jamesdenton | no, i'm sorry | 20:57 |
spatel | what are these queue-ids? and why is only id=0 in use | 20:57 |
spatel | :) | 20:57 |
jamesdenton | all good questions | 20:58 |
spatel | thinking this is my bottleneck | 20:58 |
jamesdenton | what chassis? | 20:58 |
spatel | chassis? | 20:58 |
spatel | This is not OVN | 20:59 |
spatel | https://support.sonus.net/display/SBXDOC92/_OVS-DPDK+Virtio+Interfaces+-+Performance+Tuning+Recommendations | 20:59 |
spatel | looks like i have to do ovs-vsctl set interface dpdk0 other_config:pmd-rxq-affinity="0:8,1:26" | 20:59 |
spatel | assign queues to cores.. | 20:59 |
jamesdenton | sorry, what server hardware chassis. | 21:01 |
spatel | Dell 440 PowerEdge | 21:01 |
spatel | Intel X550T | 21:01 |
spatel | CPU - Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz | 21:02 |
spatel | This is how i am reading that output | 21:02 |
spatel | my VM has 8 queues, i can see them via ethtool on the vm | 21:02 |
spatel | In this output i can see 8 queues for each vm - https://paste.opendev.org/show/810291/ | 21:03 |
spatel | and by default the PMD is using only a single queue, which is 0 (zero) | 21:03 |
spatel | if i map all 8 queues then i have more open lanes to push packets through | 21:03 |
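spatel's reading of the tuning article can be sketched as the commands below. The core IDs 8 and 26 come from the linked example, not from this deployment, and the queue count is illustrative; substitute the host's actual PMD cores and NIC queue count:

```shell
# Request multiple rx queues on the dpdk0 port and pin each queue to its
# own PMD core, instead of leaving all traffic on queue 0.
ovs-vsctl set Interface dpdk0 options:n_rxq=2
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:8,1:26"

# Verify the resulting queue-to-PMD mapping took effect.
ovs-appctl dpif-netdev/pmd-rxq-show
```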
jamesdenton | so many tunables | 21:04 |
spatel | DPDK is not worth it :( | 21:04 |
jamesdenton | well, maybe once you get it worked out it will be | 21:04 |
spatel | damn, 100s of options, and this is not something you tune in production | 21:04 |
jamesdenton | probably best for specific applications vs general purpose | 21:04 |
spatel | Yes... | 21:09 |
spatel | also i am surprised there isn't enough good documentation out there.. everyone just says try this and that | 21:10 |
spatel | no wonder people aren't using DPDK by default, if it's supposedly so great | 21:10 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!