opendevreview | Erik Olof Gunnar Andersson proposed openstack/designate master: [DNM] Fix dns.query and centralize implementation https://review.opendev.org/c/openstack/designate/+/813722 | 01:34 |
opendevreview | Erik Olof Gunnar Andersson proposed openstack/designate master: [DNM] Fix dns.query and centralize implementation https://review.opendev.org/c/openstack/designate/+/813722 | 01:40 |
opendevreview | Erik Olof Gunnar Andersson proposed openstack/designate master: Fix dns.query.tcp/udp not always handling ipv6 properly https://review.opendev.org/c/openstack/designate/+/813722 | 02:18 |
opendevreview | Erik Olof Gunnar Andersson proposed openstack/designate master: Fix dns.query.tcp/udp not always handling ipv6 properly https://review.opendev.org/c/openstack/designate/+/813722 | 02:38 |
opendevreview | Erik Olof Gunnar Andersson proposed openstack/designate-tempest-plugin master: [DNM] Testing https://review.opendev.org/c/openstack/designate-tempest-plugin/+/813738 | 04:39 |
eandersson | I don't think I fully understood the problem. | 05:55 |
eandersson | Makes sense johnsom | 05:55 |
opendevreview | Arkady Shtempler proposed openstack/designate-tempest-plugin master: Add "cleanup" for created recordsets + delete zone test https://review.opendev.org/c/openstack/designate-tempest-plugin/+/796469 | 07:16 |
*** eandersson8 is now known as eandersson | 10:50 |
ozzzo_work | in http://lists.openstack.org/pipermail/openstack-discuss/2021-October/025292.html I was advised to set up redis to fix my "DBDuplicateEntry" problem | 16:00 |
ozzzo_work | so we installed redis, and allowed ports 6379 and 26379, and it appears that redis is working, but we still get the duplicate entry errors, and DNS fails when that error occurs | 16:01 |
ozzzo_work | what am I missing? | 16:01 |
frickler | ozzzo_work: you need to configure designate to actually use redis as coordination backend? | 16:05 |
ozzzo_work | it pulls from the redis_enabled value | 16:11 |
ozzzo_work | I looked in the designate_producer container and I see it in /etc/designate/designate.conf: backend_url = redis://admin:Z59Lekw5HODiBVbS85BHk9ruhEyrb9sT8btt2sTl@10.221.176.48:26379?sentinel=kolla&sentinel_fallback=10.221.176.173:26379&sentinel_fallback=10.221.177.38:26379&db=0&socket_timeout=60&retry_on_timeout=yes | 16:14 |
ozzzo_work | and I can hit the port with nc, from the container | 16:15 |
ozzzo_work | it seems like redis is working, but I still see the DB error | 16:15 |
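[Editor's note: the setting frickler refers to lives in the `[coordination]` section of designate.conf; reachable redis alone is not enough if Designate is not told to use it. A minimal illustrative fragment (addresses and options here are placeholders, not a recommendation):

```ini
# /etc/designate/designate.conf
[coordination]
# Designate only uses the distributed lock manager when backend_url is set.
backend_url = redis://10.221.176.48:6379?db=0
```

ozzzo_work's pasted config above does show a sentinel-style backend_url, so the remaining question is whether it is in the right section of every service's config and whether the running release includes the DLM fixes discussed below.]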
frickler | ozzzo_work: there's also https://bugs.launchpad.net/designate/+bug/1940976 , so there might be an issue with parallel threads. this needs further investigation probably | 17:53 |
johnsom | I thought the DLM changes Erik made resolved that issue as well. | 17:54 |
eandersson | It should have resolved the race condition, unless there is a new issue. | 18:04 |
eandersson | I wonder if they are using the api or the sink? | 18:05 |
eandersson | I would double-check that they are running 9.0.2 and not 9.0.1 | 18:08 |
eandersson | because 9.0.2 was released with this fix back in February | 18:09 |
eandersson | If they are using the sink they might need this patch. https://github.com/openstack/designate/commit/4869913519e0b7bb12b4ba1ef6b7ce8aabb53825 | 18:11 |
eandersson | It's the only time I have seen that type of database error in our deploy | 18:11 |
eandersson | and it does not look like it was backported to Train | 18:14 |
eandersson | nvm it is in train https://opendev.org/openstack/designate/commit/0174797a52d8c2efa6581a97adfec95977511024 | 18:15 |
eandersson | ozzzo_work: Are there any mentions of coordination in the logs? Also, can you make sure you are running at least 9.0.2? | 18:17 |
ozzzo_work | eandersson: I'll take a look, ty! | 20:13 |
-opendevstatus- NOTICE: Both Gerrit and Zuul services are being restarted briefly for minor updates, and should return to service momentarily; all previously running builds will be reenqueued once Zuul is fully started again | 22:49 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!