*** chlong_ has quit IRC | 00:03 | |
*** james_li_ has quit IRC | 00:04 | |
*** penick has quit IRC | 00:05 | |
*** jasonsb has joined #openstack-dns | 01:01 | |
*** ducttape_ has joined #openstack-dns | 01:08 | |
*** rudrajit has joined #openstack-dns | 01:14 | |
*** ducttape_ has quit IRC | 01:15 | |
*** rudrajit_ has quit IRC | 01:16 | |
*** stanzgy has joined #openstack-dns | 01:20 | |
*** jasonsb has quit IRC | 01:25 | |
*** jasonsb has joined #openstack-dns | 01:35 | |
*** bpokorny has quit IRC | 01:43 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/designate: Updated from global requirements https://review.openstack.org/285016 | 01:47 |
openstackgerrit | OpenStack Proposal Bot proposed openstack/designate-dashboard: Updated from global requirements https://review.openstack.org/285017 | 01:47 |
*** rudrajit has quit IRC | 01:47 | |
*** chlong_ has joined #openstack-dns | 01:50 | |
*** jasonsb has quit IRC | 02:13 | |
*** jasonsb has joined #openstack-dns | 02:14 | |
*** EricGonczer_ has joined #openstack-dns | 02:24 | |
*** EricGonczer_ has quit IRC | 02:25 | |
*** EricGonczer_ has joined #openstack-dns | 02:27 | |
*** jasonsb has quit IRC | 02:32 | |
*** ducttape_ has joined #openstack-dns | 02:37 | |
*** ducttape_ has quit IRC | 02:40 | |
*** ducttape_ has joined #openstack-dns | 02:44 | |
*** ducttape_ has quit IRC | 02:46 | |
*** fawadkhaliq has joined #openstack-dns | 02:48 | |
*** ducttape_ has joined #openstack-dns | 02:53 | |
*** rudrajit has joined #openstack-dns | 03:10 | |
*** EricGonczer_ has quit IRC | 03:12 | |
*** boris-42 has quit IRC | 03:24 | |
*** fawadkhaliq has quit IRC | 03:32 | |
*** ducttape_ has quit IRC | 03:46 | |
*** jasonsb has joined #openstack-dns | 04:08 | |
*** richm has quit IRC | 04:10 | |
*** rudrajit has quit IRC | 04:47 | |
*** rudrajit has joined #openstack-dns | 04:47 | |
*** fawadkhaliq has joined #openstack-dns | 04:58 | |
*** ducttape_ has joined #openstack-dns | 05:05 | |
*** ducttape_ has quit IRC | 05:37 | |
*** rudrajit_ has joined #openstack-dns | 05:51 | |
*** rudrajit has quit IRC | 05:54 | |
*** jasonsb has quit IRC | 07:00 | |
*** chlong_ has quit IRC | 07:26 | |
*** jordanP has joined #openstack-dns | 08:49 | |
*** rudrajit_ has quit IRC | 09:05 | |
*** jschwarz has joined #openstack-dns | 09:20 | |
*** jordanP has quit IRC | 09:32 | |
*** fawadkhaliq has quit IRC | 09:37 | |
*** jordanP has joined #openstack-dns | 09:46 | |
*** kei_yama has quit IRC | 10:00 | |
eandersson | It's funny. I had that patch in my build, but forgot to re-apply it after updating to 2015.1.1. | 10:16 |
*** ducttape_ has joined #openstack-dns | 10:34 | |
*** stanzgy has quit IRC | 10:36 | |
*** en_austin has joined #openstack-dns | 10:37 | |
en_austin | Kiall: ping? | 10:37 |
*** jschwarz has quit IRC | 10:46 | |
*** ducttape_ has quit IRC | 10:57 | |
eandersson | Looking at the [service:pool_manager] section. What would be the recommended changes from the defaults to make it less spammy? | 11:20 |
eandersson | I am suspecting that we might be overloading powerdns with too many requests. | 11:27 |
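[For reference, the Kilo/Liberty-era `[service:pool_manager]` options that control how hard pool manager polls the nameservers look roughly like this. The values shown are illustrative rather than authoritative defaults; raising the intervals and delays is the usual way to make it less spammy.]

```ini
[service:pool_manager]
# How often pool manager runs its own periodic tasks:
periodic_recovery_interval = 120
periodic_sync_interval = 1800
# How aggressively it polls nameservers after a change:
poll_timeout = 30
poll_retry_interval = 15
poll_max_retries = 10
poll_delay = 5
```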
federico3 | eandersson: do you have any numbers? | 12:08 |
eandersson | You mean any numbers set in the configuration already? | 12:09 |
*** krotscheck_dcm is now known as krotscheck | 12:13 | |
eandersson | I really don't like this bug: http://paste.openstack.org/show/yPxcgi4ErEr4FV8zDJPz/ | 12:22 |
eandersson | It happens if you try to do a dns lookup on mdns while the record is in DELETE PENDING | 12:22 |
*** jordanP has quit IRC | 12:35 | |
federico3 | uh, opening bug report. eandersson anything else you can add? | 12:35 |
*** jordanP has joined #openstack-dns | 12:36 | |
eandersson | That bug is a side effect, but I'll hit you up with some more details in a PM | 12:37 |
*** johnbelamaric has quit IRC | 12:56 | |
Kiall | eandersson: ah, that should be an easy fix. Is there a bug filed? | 13:01 |
Kiall | I can see what's gone wrong there.. | 13:01 |
Kiall | en_austin: pong - so. https://bugs.launchpad.net/designate/+bug/1549980 | 13:02 |
openstack | Launchpad bug 1549980 in Designate "MiniDNS TCP connections stop being accepted" [Critical,In progress] - Assigned to Rahman Syed (rahman-syed-w) | 13:02 |
Kiall | I was about to pull your logs down again to grep for ^ | 13:02 |
eandersson | Kiall: I confirmed that I didn't have the patch in place. | 13:03 |
eandersson | What is even funnier is that I did have the patch before upgrading to the latest Kilo release. I forgot to re-merge it after the upgrade. :p | 13:03 |
Kiall | eandersson: I saw, that sucks. We'll merge it today | 13:03 |
Kiall | (and backport) | 13:03 |
eandersson | Awesome. | 13:03 |
eandersson | That is one less issue to worry about. | 13:04 |
Kiall | en_austin: yep, your logs have the faithful "timed out" | 13:04 |
*** km has quit IRC | 13:08 | |
*** ducttape_ has joined #openstack-dns | 13:13 | |
en_austin | so, that means you've found an issue that caused my mDNS to hang? | 13:22 |
en_austin | Kiall: ^ | 13:22 |
Kiall | en_austin: yep | 13:23 |
Kiall | not me | 13:24 |
Kiall | eandersson found it | 13:24 |
en_austin | eandersson: thank you :) | 13:24 |
Kiall | https://review.openstack.org/#/c/284912 should fix it | 13:24 |
en_austin | I can apply that to my instance and check it out. | 13:24 |
Kiall | (with a more comprehensive fix to prevent it in the future coming..) | 13:24 |
en_austin | and also Kiall do you remember a "with lockutils.lock" issue with my PoolManager? What do you think about it? | 13:24 |
en_austin | Is there a way to fix it? | 13:24 |
en_austin | Now I'm experiencing a zone falling to "ERROR" state (seems like some race condition). Or will that patch fix it? | 13:25 |
Kiall | I remember that, I'm trying to remember the exact reason we believed it to be an issue.. It wasn't mdns lockup, was it? | 13:25 |
en_austin | PoolManager | 13:25 |
en_austin | https://bugs.launchpad.net/designate/+bug/1534490 | 13:25 |
openstack | en_austin: Error: malone bug 1534490 not found | 13:25 |
Kiall | Yea, it could have been us thinking PM was overloading mDNS - Trying to remember | 13:25 |
Kiall | ah.. Okay, remembering now. Can you give ^ patch a go, and if it's still happening, we'll dig in again | 13:26 |
en_austin | So, I should revert the removal of "with lockutils.lock" and apply the patch from the review you gave me above? | 13:30 |
eandersson | en_austin, is that similar to this? http://paste.openstack.org/show/T8LXLLnUQvu0HIWc1yZY/ | 13:30 |
en_austin | +/- | 13:30 |
en_austin | I'll show you now. | 13:31 |
en_austin | 2016-02-26 16:31:28.913 24354 WARNING designate.mdns.notify [req-bcfd8c83-2b93-416f-9b75-9d327435122d noauth-user noauth-project - - -] Got lower serial for 'xxxxxxx.' to 'xxxxxxx:53'. Expected:'1456493481'. Got:'1456493431'.Retries left='9' | 13:31 |
en_austin | And that repeats. | 13:32 |
eandersson | Yea, I have the same issue. | 13:32 |
en_austin | Then the zone can either recover to SUCCESS or fall to ERROR. | 13:32 |
Kiall | I think that "Got lower serial" message is really OK - and shouldn't be a warning.. It's just "The data hasn't propagated yet"... If it goes to success, nothing is wrong or in need of warning about | 13:33 |
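[Kiall's point is that the warning is just the poll loop observing propagation lag. A minimal Python sketch of that poll-for-serial behaviour, with hypothetical names rather than Designate's actual mdns code:]

```python
# Hypothetical sketch of the serial poll behind the "Got lower serial"
# warning: mdns queries the nameserver's SOA serial and retries while
# the observed serial lags the one it expects to see.
def poll_for_serial(get_serial, expected, retries=10):
    for _ in range(retries):
        actual = get_serial()
        if actual >= expected:
            return True   # change has propagated; the zone goes ACTIVE
        # Designate logs "Got lower serial ... Retries left=N" here
    return False          # retries exhausted; the zone would go to ERROR

# The nameserver catches up on the third poll:
serials = iter([1456493431, 1456493431, 1456493481])
print(poll_for_serial(lambda: next(serials), 1456493481))  # True
```

If the backend keeps up, the loop exits early and nothing is wrong, which is why the warning is arguably cosmetic.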
eandersson | I think for us pdns is simply not keeping up. | 13:34 |
en_austin | The issue is that it often does not go to success... | 13:34 |
en_austin | eandersson: I'm running BIND on my backends jfyi | 13:34 |
en_austin | Kiall: there was no "got lower serial" before I removed that lock in the PM code | 13:35 |
*** ducttape_ has quit IRC | 13:35 | |
en_austin | I think it occurs because newer records are trying to propagate to the backend while it has not yet processed the older ones | 13:36 |
*** fawadkhaliq has joined #openstack-dns | 13:36 | |
en_austin | and there is no problem, really, until those retries push the zone to the ERROR state | 13:36 |
Kiall | So, is the "cosmetic" only, as in assuming mDNS doesn't fall over, content is going out reasonably fast and things return to ACTIVE? | 13:37 |
Kiall | is it* | 13:37 |
Kiall | eandersson: PowerDNS is doing lots of work with large zones like yours.. You may be a candidate for using BIND, or something that doesn't BEGIN; DELETE *; INSERT *; COMMIT | 13:38 |
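[The `BEGIN; DELETE *; INSERT *; COMMIT` pattern Kiall mentions can be sketched with sqlite3 standing in for the real PowerDNS MySQL database; the table layout here is illustrative, not pdns's actual schema.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (zone TEXT, name TEXT, data TEXT)")

def sync_zone(conn, zone, records):
    # Whole-zone rewrite: cost scales with zone size even for a
    # one-record change, which is why large zones hurt this backend.
    with conn:  # BEGIN ... COMMIT
        conn.execute("DELETE FROM records WHERE zone = ?", (zone,))   # DELETE *
        conn.executemany("INSERT INTO records VALUES (?, ?, ?)",      # INSERT *
                         [(zone, name, data) for name, data in records])

sync_zone(conn, "example.com.", [("www", "192.0.2.1"), ("mail", "192.0.2.2")])
sync_zone(conn, "example.com.", [("www", "192.0.2.1")])  # rewrites everything again
print(conn.execute("SELECT COUNT(*) FROM records").fetchone()[0])  # 1
```

With many small zones the per-zone cost is low, but each update still pays a full transaction, so a steady stream of updates across many zones can queue up.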
eandersson | Actually in this case we don't have large zones, but many zones instead. | 13:38 |
Kiall | Oh, I thought you had a bunch of large zones? I'm probably getting people mixed up | 13:39 |
Kiall | (it happens ;)) | 13:39 |
eandersson | :D | 13:39 |
en_austin | Sometimes - yes. | 13:45 |
en_austin | :D | 13:45 |
en_austin | Kiall: ^ | 13:45 |
en_austin | sometimes such log entries cause my Designate to stop propagating new records (both zones fail to ERROR and an mdns restart helps, yeah) | 13:46 |
en_austin | sometimes it just falls to ERROR and self-recovers in 1-2 min | 13:46 |
en_austin | (i.e. returns to ACTIVE) | 13:46 |
en_austin | so Kiall I'm reverting http://paste.openstack.org/show/485460/ to its original state (with 'with lockutils.lock') and applying that patch: https://review.openstack.org/#/c/284912/3/designate/service.py | 13:54 |
en_austin | Hope it helps... | 13:54 |
*** johnbelamaric has joined #openstack-dns | 13:57 | |
*** richm has joined #openstack-dns | 14:03 | |
Kiall | en_austin: let's see, although both changes may be necessary. we'll see. | 14:07 |
en_austin | I've done what I said before - I'll now restart Designate and keep an eye on it. | 14:07 |
en_austin | If it hangs - I will re-apply the removal of "with lockutils.lock" and try running with both changes. | 14:08 |
eandersson | btw Kiall was this normal? http://paste.openstack.org/show/T8LXLLnUQvu0HIWc1yZY/ | 14:08 |
Kiall | eandersson: was this during a new zone creation? | 14:09 |
eandersson | It seems to happen during record create/delete | 14:09 |
Kiall | If not, mDNS failed to query the nameserver for the zones SOA | 14:09 |
en_austin | btw eandersson I've seen the same logs.. sometimes, not always. | 14:09 |
eandersson | and in the same session you will have it retry endlessly | 14:10 |
eandersson | I changed it from 3 to 9 retries, and it just keeps retrying until it runs out of retries | 14:10 |
eandersson | It gets propagated eventually, usually 60-180s | 14:10 |
eandersson | Could it be after it hits periodic_recovery_interval ? | 14:11 |
Kiall | That could be a mis-config | 14:12 |
eandersson | On the pool manager? | 14:13 |
Kiall | The log is not detailed enough to be able to tell :/ | 14:13 |
Kiall | Yea, likely a pool_nameserver section is not right | 14:13 |
Kiall | (If it always happens) | 14:13 |
eandersson | all that is in pool_nameserver is the ip and port | 14:13 |
Kiall | Yea, one of those might be wrong :) | 14:14 |
Kiall | (is there 1 pool_nameserver section, or more than 1?) | 14:14 |
eandersson | 1 pool_nameserver section | 14:14 |
eandersson | confirmed that the ip and port are correct | 14:14 |
eandersson | Our theory at the moment is that pdns is overloaded | 14:15 |
Kiall | That's certainly possible, and would explain it too | 14:15 |
eandersson | What exactly does periodic_recovery do? | 14:17 |
*** karimb has joined #openstack-dns | 14:17 | |
*** karimb has quit IRC | 14:17 | |
eandersson | It just checks for pending records and tries to fix them, or? | 14:18 |
Kiall | recovery finds things in ERROR status, and attempts to fix them | 14:24 |
*** mlavalle has joined #openstack-dns | 14:41 | |
eandersson | Kiall: Unable to AXFR zone 'example.com' from remote '<pdns-ip>' (resolver): Timeout waiting for answer from <designate01>:53 during AXFR | 14:53 |
eandersson | This is a common error in the pdns logs. | 14:53 |
Kiall | so, you don't have large zones? and is that before or after the TCP lockup fix was applied? | 14:54 |
eandersson | <pdns-ip> = <designate01>:53 | 14:54 |
eandersson | Small zones | 14:54 |
eandersson | 100-200 records | 14:54 |
*** fawadkhaliq has quit IRC | 14:55 | |
Kiall | Just checking is mDNS listening on 53? | 14:55 |
Kiall | or the default of 5354? | 14:55 |
eandersson | 53 | 14:55 |
eandersson | Patch had no effect. | 14:55 |
Kiall | Is there lots of churn happening in the zones? | 14:56 |
*** ducttape_ has joined #openstack-dns | 14:56 | |
eandersson | about 3 records created and deleted per 5 minutes | 14:56 |
eandersson | by monitoring | 14:57 |
eandersson | otherwise it's pretty static | 14:57 |
Kiall | Okay, and you mentioned you think powerdns is overloaded, is that query load coming in? | 14:59 |
eandersson | When I say overloaded I don't mean IO, but rather that the zone updates get queued up. | 14:59 |
eandersson | CPU and RAM usage is very low on pdns and mdns. | 14:59 |
Kiall | Humm, small zones with a few updates a minute really shouldn't be causing anything like that.. can you manually / via dig do an AXFR of the zones against mDNS, from the pDNS server? Keeping an eye on how long it's taking... | 15:02 |
eandersson | very fast | 15:03 |
Kiall | It's been way too long since I benchmarked it myself to remember, but someone mentioned the other day 8-10k record zones taking about 8-10 seconds | 15:03 |
Kiall | Trying to think how we rule out pDNS and/or mDNS.. can you run that in a loop - say once a second, until the next time you see a timeout in pDNS logs? | 15:04 |
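[One way to run the check Kiall suggests; the hostname and zone are placeholders, and this assumes dig is available on the pDNS host.]

```shell
# Repeat an AXFR of the zone against mDNS once a second, timing each
# run; stop and inspect when pdns next logs an AXFR timeout.
while true; do
    time dig @designate01 -p 53 example.com. AXFR +time=10 > /dev/null
    sleep 1
done
```

If the dig transfers stay fast while pdns still logs timeouts, the bottleneck is more likely on the pdns resolver side than in mDNS.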
eandersson | Kiall: I added some additional info in a pm | 15:08 |
en_austin | Kiall: looks like it (PM) begins to freeze again... SOAs are out of sync now (as last time) | 15:09 |
en_austin | 2016-02-26 18:08:54.089 3275 INFO designate.mdns.notify [req-93573e6a-a34a-415c-9870-479ddeaa30fb noauth-user noauth-project - - -] Sending 'SOA' for 'xxxxxx.' to 'yyyyyyy:53'. | 15:09 |
en_austin | ^ and repeating | 15:09 |
en_austin | there was a ~300 sec difference between Designate's SOA and the actual one (bigger on Designate's side) | 15:10 |
en_austin | then it fixed itself | 15:10 |
en_austin | but Zabbix is still reporting "Serials differ on designate and ns1/ns2". | 15:10 |
en_austin | and kill -USR1 reports "with lockutils.lock" greenthreads. | 15:12 |
en_austin | `with lockutils.lock('update-status-%s' % domain.id):` | 15:12 |
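[For context, a stdlib sketch of the per-zone lock pattern en_austin is quoting, using threading instead of eventlet/lockutils and hypothetical names: every status update for a zone queues on one shared lock, so a single stuck holder leaves the other greenthreads parked exactly as the kill -USR1 dump shows.]

```python
import threading
from collections import defaultdict

# Stand-in for lockutils.lock('update-status-%s' % domain_id):
# one lock per zone serializes all status updates for that zone.
_locks = defaultdict(threading.Lock)

def update_status(domain_id, status, results):
    with _locks['update-status-%s' % domain_id]:
        # If this body blocks (slow backend poll, RPC call...), every
        # other updater for the same zone waits right here.
        results.append((domain_id, status))

results = []
threads = [threading.Thread(target=update_status,
                            args=('zone-1', 'SUCCESS', results))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 5: all updates applied, one at a time
```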
en_austin | 62 green threads as for now (1hr of uptime). | 15:12 |
en_austin | O_O I've begun to receive an IOError: Socket closed from... RabbitMQ | 15:18 |
en_austin | Kiall: http://paste.openstack.org/show/488385/ | 15:18 |
eandersson | I ran into that in Liberty as well when I started testing. | 15:19 |
en_austin | http://paste.openstack.org/show/488386/ | 15:20 |
en_austin | that's what's in the RabbitMQ logs | 15:20 |
en_austin | eandersson: I'm using Liberty too. | 15:20 |
en_austin | closing AMQP connection <0.16519.8> (127.0.0.1:54608 -> 127.0.0.1:5672): | 15:21 |
en_austin | {heartbeat_timeout,running} | 15:21 |
en_austin | also here. | 15:21 |
en_austin | AFAIK "heartbeat_timeout" is raised by RabbitMQ when TCP connection from another side is dead. | 15:21 |
Kiall | Sorry, busy with all sorts of other stuff so I'm back/forth from IRC | 15:22 |
Kiall | heartbeat_timeout <-- that's not a TCP fail.. what's your RMQ config in Designate look like? | 15:22 |
en_austin | http://paste.openstack.org/show/488388/ | 15:24 |
eandersson | Kiall: Kombu is single-threaded, so if something is tying the thread up, e.g. a deadlock, it won't reply to heartbeats | 15:24 |
en_austin | https://www.rabbitmq.com/heartbeats.html that's why I was thinking about a dead TCP conn | 15:24 |
eandersson | Unless they have fixed it now | 15:24 |
eandersson | AMQP server xxxx:5672 closed the connection. Check login credentials: Socket closed | 15:24 |
Kiall | So, kombu requires that the calling app periodically call the "send heartbeat" method; oslo.messaging does not call it | 15:25 |
eandersson | I have that in my logs for Liberty as well. | 15:25 |
Kiall | there's something somewhere to tell RMQ not to expect heartbeats, which means it will instead rely on TCP keepalive | 15:25 |
eandersson | If you set heartbeat interval to 0 | 15:26 |
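[The knob eandersson is referring to is, as far as I can tell, oslo.messaging's heartbeat threshold; a hedged sketch of the designate.conf fragment, with the option name per oslo.messaging's rabbit driver:]

```ini
[oslo_messaging_rabbit]
# 0 disables AMQP heartbeats entirely; RabbitMQ then only notices a
# dead peer via TCP keepalive, as Kiall describes above.
heartbeat_timeout_threshold = 0
```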
Kiall | But - It's been ages since I've seen it :) | 15:26 |
Kiall | eandersson: sounds about right | 15:26 |
eandersson | I wrote my own library, as I didn't like how pika and py-amqp handled heartbeats :p | 15:27 |
en_austin | I was just worried about the IOErrors in my mdns.log... | 15:27 |
Kiall | eandersson: lol | 15:27 |
en_austin | well Kiall have you seen my report about the deadlocked greenlets? | 15:28 |
*** pglass has joined #openstack-dns | 15:28 | |
en_austin | now I'm running with both the patch removing "with lockutils.lock" from PoolManager and the service.py patch (the "except socket.timeout" one) | 15:28 |
Kiall | en_austin: behaving any better? | 15:28 |
en_austin | Up and running now; sometimes serials differ between Designate and ns1/2 (but they return in sync within 5-10 sec, that's OK) | 15:29 |
eandersson | btw Kiall, I hit you up with some logs in a pm in case you didn't see it | 15:37 |
Kiall | sorry, multi tasking all over the place ;) | 15:48 |
*** penick has joined #openstack-dns | 15:55 | |
*** bpokorny has joined #openstack-dns | 15:55 | |
*** bpokorny has quit IRC | 16:04 | |
*** bpokorny has joined #openstack-dns | 16:04 | |
*** logan- has quit IRC | 16:14 | |
*** logan- has joined #openstack-dns | 16:14 | |
en_austin | well... no faults for 1 hr - it's progress lol :D | 16:21 |
en_austin | Kiall: can you explain in a couple of words what that fix was for? what behaviour did it change? e.g. why would a socket timeout exception (if any) not be caught by the socket.error exception clause? | 16:22 |
timsim | en_austin: It was being caught, but because the exceptions aren't uniform, a KeyError was happening during the socket.error exception handling and raising an exception in the main tcp handling thread. | 16:34 |
Kiall | Basically, if an exception happened in our exception handlers, we goofed up. | 16:37 |
*** penick_ has joined #openstack-dns | 16:37 | |
Kiall | we need to re-work so it's some nested try/catches, with the outer one being nothing more than LOG.critical("OH CRAP, SOMETHING SPLODED") so there's little risk of it raising an exception itself | 16:38 |
*** penick has quit IRC | 16:40 | |
*** penick_ is now known as penick | 16:40 | |
*** jasonsb has joined #openstack-dns | 16:44 | |
en_austin | Got it.. And now, if a socket.error occurs, will it handle it correctly (re-initiate the connection, etc)? | 16:49 |
*** ccneill has joined #openstack-dns | 16:52 | |
Kiall | Well, it'll continue doing what it's doing, rather than let the exception (the one generated inside the exception handler) go un-caught, which kills the thread and leaves you with a service that does UDP but not TCP | 17:00 |
*** james_li has joined #openstack-dns | 17:01 | |
*** darkxploit has joined #openstack-dns | 17:03 | |
en_austin | Hm.. | 17:05 |
en_austin | Maybe we can find the origin of that socket.error - or is it normal behaviour? | 17:06 |
*** jordanP has quit IRC | 17:06 | |
*** jasonsb_ has joined #openstack-dns | 17:12 | |
*** en_austin has quit IRC | 17:13 | |
*** baffle___ has joined #openstack-dns | 17:15 | |
*** mikal_ has joined #openstack-dns | 17:15 | |
*** jasonsb has quit IRC | 17:20 | |
*** ekarlso- has quit IRC | 17:20 | |
*** krotscheck has quit IRC | 17:20 | |
*** mikal has quit IRC | 17:20 | |
*** lmiccini has quit IRC | 17:20 | |
*** baffle has quit IRC | 17:20 | |
*** krotscheck has joined #openstack-dns | 17:20 | |
*** lmiccini has joined #openstack-dns | 17:23 | |
*** ekarlso- has joined #openstack-dns | 17:27 | |
*** eandersson_ has joined #openstack-dns | 17:39 | |
*** rudrajit has joined #openstack-dns | 17:43 | |
*** rudrajit has joined #openstack-dns | 17:44 | |
*** bpokorny has quit IRC | 17:59 | |
*** jasonsb_ has quit IRC | 18:00 | |
*** ducttape_ has quit IRC | 18:34 | |
*** bpokorny has joined #openstack-dns | 18:53 | |
*** ccneill has quit IRC | 18:53 | |
*** ccneill has joined #openstack-dns | 19:03 | |
*** darkxploit has quit IRC | 19:09 | |
*** ducttape_ has joined #openstack-dns | 19:23 | |
*** porunov has joined #openstack-dns | 19:38 | |
*** bpokorny has quit IRC | 20:07 | |
*** johnbelamaric has quit IRC | 20:11 | |
*** johnbelamaric has joined #openstack-dns | 20:23 | |
*** tg90nor has quit IRC | 20:25 | |
andrewbogott | If anyone is around… can I get advice about the kilo->trusty upgrade path for designate? Any config changes? And do I really need to start running designate-zone-manager if I’m not using ceilometer? | 20:36 |
andrewbogott | bah, sorry, kilo->liberty | 20:37 |
eandersson_ | I am in the process of that upgrade and it was really easy. | 20:41 |
eandersson_ | The only thing I had to change in the config was to make sure that I had the host and port specified in pool_target | 20:41 |
eandersson_ | designate-zone-manager isn't required | 20:42 |
andrewbogott | eandersson_: host and port are new options? | 20:44 |
andrewbogott | right now I specify… options, masters, type | 20:44 |
eandersson_ | No, but previously it would use pool_nameserver to send notifications | 20:45 |
eandersson_ | so if you didn't have options: host, port set under pool_target it would default to localhost. | 20:45 |
andrewbogott | eandersson_: my target is a pdns database, which is specified in options = connection: | 20:47 |
andrewbogott | my pool_nameserver sections have port and host though | 20:47 |
eandersson_ | options = host: <pdns>, port: 53 | 20:47 |
eandersson_ | you will need that under pool_target | 20:47 |
eandersson_ | in addition to what you have in pool_nameserver | 20:48 |
andrewbogott | ok, and that points to where pdns is running, I take it? (It’s confusing in my case since the target is a single database, which is used by two different pdns servers running on different hosts) | 20:48 |
eandersson_ | yep | 20:49 |
andrewbogott | any idea what that host/port is used for? What new interaction is there between designate and pdns? | 20:49 |
eandersson_ | nah, it's just an undocumented change | 20:49 |
andrewbogott | I should rephrase: Since I /already/ have two pdns servers running, one of which is not on localhost... | 20:50 |
andrewbogott | what’s broken by that? | 20:50 |
andrewbogott | since obviously the non-localhost one is already not referenced | 20:50 |
eandersson_ | ah, yea if you are already targeting localhost it's fine | 20:50 |
andrewbogott | eandersson_: I still don’t understand, sorry | 20:53 |
andrewbogott | I have /two/ targets. Why would it work to just pick one and point to it? | 20:53 |
eandersson_ | So basically, under pool_target, set options = port: 53, host: xxx to whatever you already have under pool_nameserver host/port | 20:53 |
eandersson_ | It's due to this change: https://review.openstack.org/#/c/170612/ | 20:54 |
andrewbogott | so it’ll only notify whichever one I specify | 20:55 |
eandersson_ | yes | 20:55 |
andrewbogott | and the other one will just have to catch up | 20:55 |
andrewbogott | I guess that’s ok for now | 20:55 |
andrewbogott | anyway, overall this sounds painless :) thanks! | 20:55 |
eandersson_ | You can also add also_notify | 20:55 |
eandersson_ | Yep | 20:55 |
eandersson_ | #also_notifies = 192.0.2.1:53, 192.0.2.2:53 | 20:56 |
eandersson_ | https://github.com/openstack/designate/blob/master/etc/designate/designate.conf.sample#L345 | 20:56 |
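[Putting eandersson_'s two suggestions together, a hedged sketch of a Liberty-era pool_target section for this two-server setup; the UUID, credentials, and RFC 5737 addresses are placeholders.]

```ini
[pool_target:f26e0b32-51aa-4d33-a9c4-000000000000]
type = powerdns
options = connection: mysql://designate:secret@localhost/pdns, host: 192.0.2.1, port: 53
# pool_target only notifies the one host above; list the second pdns
# server here so it gets NOTIFYs too instead of waiting to catch up:
also_notifies = 192.0.2.2:53
```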
andrewbogott | ah! | 20:56 |
andrewbogott | better yet, thank you | 20:56 |
openstackgerrit | Eric Larson proposed openstack/designate: Ensure the zone records quota is enforced https://review.openstack.org/284361 | 21:00 |
*** tg90nor has joined #openstack-dns | 21:11 | |
*** bpokorny has joined #openstack-dns | 21:17 | |
*** mlavalle has quit IRC | 21:41 | |
*** porunov has quit IRC | 21:45 | |
elarson | timsim: so I'm looking at the worker review and just thought about how often we get an rpcapi. I kind of want to submit a review that basically does `from designate import rpcapi` and then do `rpcapi['pool-manager']` (or something similar) to get an instance | 21:58 |
elarson | doesn't really matter. just was thinking aloud | 21:58 |
*** eandersson_ has quit IRC | 22:36 | |
*** pglass has quit IRC | 22:40 | |
*** ccneill has quit IRC | 22:40 | |
*** ccneill has joined #openstack-dns | 22:45 | |
*** rudrajit has quit IRC | 23:04 | |
*** rudrajit has joined #openstack-dns | 23:08 | |
*** ducttape_ has quit IRC | 23:10 | |
*** ccneill has quit IRC | 23:48 | |
*** james_li has quit IRC | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!