*** jmcbride has joined #openstack-dns | 00:00 | |
*** ryanpetrello has quit IRC | 00:15 | |
*** GonZo2K has joined #openstack-dns | 00:35 | |
*** EricGonczer_ has joined #openstack-dns | 00:44 | |
*** pk has quit IRC | 00:45 | |
*** EricGonc_ has joined #openstack-dns | 00:47 | |
*** EricGonczer_ has quit IRC | 00:49 | |
*** ryanpetrello has joined #openstack-dns | 00:49 | |
*** Stanley00 has joined #openstack-dns | 00:59 | |
*** rmoe has quit IRC | 01:03 | |
*** rmoe has joined #openstack-dns | 01:18 | |
*** pk has joined #openstack-dns | 01:18 | |
*** pk has quit IRC | 01:27 | |
*** stanzgy has joined #openstack-dns | 01:47 | |
*** nosnos has joined #openstack-dns | 01:50 | |
*** penick has joined #openstack-dns | 01:53 | |
*** penick_ has joined #openstack-dns | 01:54 | |
*** penick has quit IRC | 01:57 | |
*** penick_ is now known as penick | 01:57 | |
*** EricGonczer_ has joined #openstack-dns | 02:08 | |
*** EricGonc_ has quit IRC | 02:10 | |
*** ryanpetrello has quit IRC | 02:14 | |
*** GonZo2K has quit IRC | 02:18 | |
*** zhang_liang__ has quit IRC | 02:23 | |
*** pk has joined #openstack-dns | 02:26 | |
*** pk has quit IRC | 02:32 | |
*** pk has joined #openstack-dns | 02:33 | |
*** rjrjr has quit IRC | 02:35 | |
*** vinod has joined #openstack-dns | 02:38 | |
*** jmcbride has quit IRC | 03:05 | |
*** jmcbride has joined #openstack-dns | 03:10 | |
*** jmcbride has quit IRC | 03:12 | |
*** pk has quit IRC | 03:24 | |
*** nosnos has quit IRC | 03:34 | |
*** ryanpetrello has joined #openstack-dns | 03:38 | |
*** EricGonczer_ has quit IRC | 03:42 | |
*** richm has quit IRC | 03:44 | |
*** EricGonczer_ has joined #openstack-dns | 03:45 | |
*** EricGonczer_ has quit IRC | 03:47 | |
*** vinod has quit IRC | 03:48 | |
*** penick has quit IRC | 03:51 | |
*** penick has joined #openstack-dns | 03:54 | |
*** ryanpetrello has quit IRC | 03:55 | |
*** GonZo2K has joined #openstack-dns | 04:03 | |
*** nosnos has joined #openstack-dns | 04:29 | |
*** pk has joined #openstack-dns | 04:34 | |
*** pk has quit IRC | 04:39 | |
*** penick has quit IRC | 04:43 | |
*** GonZo2K has quit IRC | 05:07 | |
*** k4n0 has joined #openstack-dns | 05:24 | |
*** k4n0 has quit IRC | 05:25 | |
*** stanzgy has quit IRC | 06:36 | |
*** stanzgy has joined #openstack-dns | 06:37 | |
*** chlong has quit IRC | 07:49 | |
*** chlong has joined #openstack-dns | 07:49 | |
*** chlong has quit IRC | 07:51 | |
*** chlong has joined #openstack-dns | 07:53 | |
*** jordanP has joined #openstack-dns | 09:06 | |
*** Stanley00 has quit IRC | 10:20 | |
*** stanzgy has quit IRC | 10:54 | |
*** eandersson has joined #openstack-dns | 11:00 | |
*** untriaged-bot has joined #openstack-dns | 11:02 | |
untriaged-bot | Untriaged bugs so far: | 11:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1403267 | 11:02 |
uvirtbot | Launchpad bug 1403267 in designate "create_domain should handle status asynchronously" [Undecided,New] | 11:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1404395 | 11:02 |
uvirtbot | Launchpad bug 1404395 in designate "Pool manager attempts to periodically sync *all* zones" [Undecided,New] | 11:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1406414 | 11:02 |
uvirtbot | Launchpad bug 1406414 in designate "Delete zone fails to propagate to all (Bind) nameservers in a pool depending on threshold_percentage" [Undecided,New] | 11:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1403591 | 11:02 |
uvirtbot | Launchpad bug 1403591 in designate "A ZeroDivisionError is Thrown Without Servers" [Undecided,New] | 11:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1289444 | 11:02 |
uvirtbot | Launchpad bug 1289444 in designate "Designate with postgres backend is having issues" [Undecided,New] | 11:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1404529 | 11:02 |
uvirtbot | Launchpad bug 1404529 in designate "DynECT is called twice when any domain action happens." [Undecided,Confirmed] | 11:02 |
*** untriaged-bot has quit IRC | 11:02 | |
eandersson | Kiall: So, I tried to start a second instance on a second server today, and even with pymysql it gets stuck in a deadlock again when running multiple workers. So make sure that for the master you try my script with multiple workers (or instances) running. | 11:12 |
eandersson | I can only get it working with a single worker and pymysql. | 11:12 |
*** ryanpetrello has joined #openstack-dns | 12:46 | |
*** mwagner_lap has quit IRC | 12:48 | |
Kiall | eandersson: heya | 13:29 |
Kiall | So - with multiple separate processes (i.e. not workers), you're still seeing that deadlock? | 13:29 |
Kiall | And - Are these database deadlocks, or code deadlocks? (The original issue was code deadlocks from memory) | 13:31 |
*** jmcbride has joined #openstack-dns | 13:35 | |
*** jmcbride has quit IRC | 13:37 | |
*** mwagner_lap has joined #openstack-dns | 13:40 | |
*** artom has joined #openstack-dns | 13:44 | |
openstackgerrit | Kiall Mac Innes proposed openstack/designate: Implement default page size for V2 API https://review.openstack.org/142505 | 13:56 |
eandersson | Kiall: Basically I think this is a different issue. | 13:59 |
eandersson | So I had a single instance with a single worker using pymysql. It worked under heavy load, no issues. | 13:59 |
eandersson | I added a second instance on a different server with the same setup, and now I see exceptions. | 14:00 |
eandersson | I stopped one of the instances and it works again. | 14:00 |
Kiall | Okay, do you have a stacktrace of one of the exceptions? | 14:00 |
eandersson | yep getting it now | 14:00 |
Kiall | I expect it's a real database deadlock this time (yay for the "serial" column -_-) | 14:01 |
eandersson | RemoteError: Remote error: DBDeadlock (InternalError) (1213, u'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE domains SET updated_at=%s, serial=%s WHERE domains.id = %s AND domains.deleted = %s' (datetime.datetime(2015, 1, 6, 11, 6, 7, 967861), 1420542370, '684a6bcfd9ab4c678824a83bb9d021f9', '0') | 14:01 |
eandersson | I'll see if I can find the formatted exception. | 14:01 |
eandersson | (Since that was taken from the sink) | 14:01 |
Kiall | No need - It's DB contention on the serial column as I thought | 14:01 |
Kiall | What version are you running again? | 14:02 |
Kiall | icehouse? | 14:02 |
eandersson | Yea, should be. | 14:02 |
ekarlso- | heya :P | 14:02 |
eandersson | I am gonna try to get it set up in the LAB, so I can do some proper troubleshooting. | 14:03 |
eandersson | I only have one node there, so never experienced it during testing. | 14:03 |
Kiall | hah - yea production has a way of showing up errors your lab can't ;) | 14:04 |
Kiall | So - back when you first mentioned the issues you're having.. I wrote this up: https://review.openstack.org/#/c/134524/ | 14:04 |
*** nosnos has quit IRC | 14:04 | |
Kiall | I think we can fix that up, and it should solve the issue | 14:05 |
eandersson | oh, so it will simply retry a couple of times until it works? | 14:05 |
Kiall | Yep - That's the only thing you can do with a real database deadlock (kinda) - two queries are trying to change the same value in the DB at once, only 1 can succeed | 14:06 |
Kiall | (The kinda is around the fact that we could find a way to remove the single per-domain serial column, also avoiding the deadlock) | 14:06 |
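The retry approach Kiall describes boils down to wrapping the transactional call and re-running it when the database reports a deadlock. Below is a minimal sketch of that idea, assuming an oslo.db-style DBDeadlock exception; the stand-in exception, decorator, and bump_domain_serial names are illustrative only and are not the actual patch from review 134524.

```python
# Hedged sketch: retry a transactional call when the DB reports a deadlock.
import functools
import random
import time


class DBDeadlock(Exception):
    """Stand-in for the database deadlock error (MySQL error 1213)."""


def retry_on_deadlock(retries=3, base_delay=0.05):
    """Retry the wrapped call when the database reports a deadlock."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == retries:
                        raise
                    # Jittered backoff so competing workers are less likely to
                    # collide on the same row (e.g. the per-domain serial) again.
                    time.sleep(base_delay * (2 ** attempt) + random.random() * 0.01)
        return wrapper
    return decorator


@retry_on_deadlock(retries=3)
def bump_domain_serial(domain_id):
    # Placeholder for the UPDATE that bumps domains.serial inside a transaction.
    pass
```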
eandersson | Perfect. | 14:07 |
Kiall | Let me try to write that patch up against icehouse, and see if it works | 14:07 |
eandersson | I'll have Internet back tomorrow, so I'll upload the patches I apply for review. | 14:07 |
Kiall | Ah - Well, check that review.. It should be updated tomorrow for master, and there may be a backport if I can get time before then :) | 14:08 |
eandersson | Sounds good. :) | 14:08 |
*** richm has joined #openstack-dns | 14:10 | |
openstackgerrit | Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks https://review.openstack.org/134524 | 14:15 |
Kiall | eandersson: ^ should in theory do it (for master..) | 14:15 |
Kiall | Not sure how easy a backport will be though.. as always :'( | 14:17 |
eandersson | I am done backporting now, gonna grab a coffee and test it out. :D | 14:17 |
Kiall | lol - not possible ;) | 14:18 |
Kiall | eandersson: In theory, this is a backport.. https://review.openstack.org/145233 | 14:22 |
Kiall | totally untested | 14:22 |
*** betsy has joined #openstack-dns | 14:35 | |
*** EricGonczer_ has joined #openstack-dns | 14:40 | |
*** vinod has joined #openstack-dns | 14:51 | |
*** vipul has quit IRC | 14:52 | |
*** vipul has joined #openstack-dns | 14:52 | |
*** timsim has joined #openstack-dns | 14:52 | |
*** vinod has quit IRC | 14:54 | |
*** vinod has joined #openstack-dns | 14:55 | |
*** jmcbride has joined #openstack-dns | 14:58 | |
*** jmcbride has quit IRC | 14:58 | |
*** jmcbride has joined #openstack-dns | 14:58 | |
eandersson | Ok, so clearly I am a little lost after weeks of vacation. | 14:59 |
eandersson | I am indeed running Icehouse.... for everything besides Designate, which is running Juno. :D | 15:01 |
eandersson | The first patch you linked pretty much works out of the box on Juno, as it has def transaction(f): | 15:02 |
*** artom has quit IRC | 15:03 | |
eandersson | (hides in a corner) | 15:03 |
eandersson | btw hey ekarlso-! :D | 15:03 |
ekarlso- | eandersson: hey there, buddy :) | 15:04 |
eandersson | Happy New Year! | 15:05 |
ekarlso- | :D | 15:05 |
ekarlso- | let's speak Swedish instead of English, eandersson :) | 15:05 |
ekarlso- | they won't understand a thing :P | 15:05 |
Kiall | eandersson: ah, makes sense :) | 15:06 |
Kiall | eandersson: BTW - Seems to be some sort of issue with the patch still, but I'm wondering if it's an issue with our unittests rather than anything else... | 15:06 |
eandersson | So it did retry, and initially it looked good :D | 15:06 |
ekarlso- | Kiall: don't you speak Swedish? ;) | 15:06 |
eandersson | RemoteError: Remote error: DBError (InternalError) (1364, u"Field 'data' doesn't have a default value") 'INSERT INTO records (id, version, created_at, managed, status) VALUES (%s, %s, %s, %s, %s)' ('c9448843365f48beac5dec0ce0be7a36', 1, datetime.datetime(2015, 1, 6, 14, 57, 7, 358400), 0, 'ACTIVE') | 15:07 |
Kiall | ekarlso-: Do you speak Irish? | 15:07 |
ekarlso- | Kiall: :P | 15:07 |
eandersson | I am not sure if that is caused by something else lol | 15:07 |
Kiall | eandersson: Humm, different issue! | 15:08 |
ekarlso- | +1 to Kiall on that, data doesn't seem to be populated | 15:08 |
eandersson | Never seen it before the patch though. Maybe I messed something up. | 15:08 |
ekarlso- | swedes -,,- | 15:09 |
eandersson | cannot be trusted! | 15:09 |
Kiall | Nah, it sounds like it could be caused by the retry.. Not 100% sure yet.. Writing unit tests at the moment | 15:09 |
eandersson | Massive traceback :D | 15:09 |
eandersson | but yea, it's caused by the retry | 15:09 |
eandersson | I see it hitting the retry, and then it throws this. | 15:10 |
eandersson | *it may be caused by retry | 15:10 |
Kiall | I'm betting some state is lost after the retry -_- | 15:11 |
openstackgerrit | Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks https://review.openstack.org/134524 | 15:12 |
Kiall | ^ has 2 new tests for the retry, one succeeds, one fails.. | 15:12 |
*** nkinder has joined #openstack-dns | 15:12 | |
Kiall | (It just verifies the retry is attempted for now, not that the retry eventually succeeds) | 15:12 |
*** vinod has quit IRC | 15:15 | |
*** vinod has joined #openstack-dns | 15:16 | |
*** artom has joined #openstack-dns | 15:18 | |
Kiall | eandersson: so, I see my bug... still not sure about the data doesn't have a value thing | 15:20 |
eandersson | So it fails at increment. Is the issue that it tries to insert the data again? | 15:29 |
eandersson | Since create_record will be called twice as well? | 15:29 |
Kiall | Your issue seems to be that it "loses" its data the second go around.. | 15:29 |
eandersson | Yea. It makes no sense. | 15:30 |
Kiall | Mine is that, a call like update_recordset, which has nested TX's, will rollback the inner TX and retry - which is an error. | 15:30 |
Kiall | We have to bail all the way back out to the initial call, and retry from there.. Just can't see how yet ;) | 15:30 |
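The nested-transaction problem Kiall points out is that a deadlock rolls back the whole transaction, so only the call that opened it can safely retry. A hedged sketch of that idea, again with an illustrative DBDeadlock stand-in and a hypothetical thread-local depth flag (not Designate's actual transaction decorator):

```python
# Hedged sketch: only the outermost transactional call retries on deadlock.
import functools
import threading

_local = threading.local()


class DBDeadlock(Exception):
    """Stand-in for the database deadlock error (MySQL error 1213)."""


def transactional(retries=3):
    """Open (or join) a transaction; retry on deadlock only at the outermost level."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            outermost = not getattr(_local, "in_tx", False)
            _local.in_tx = True
            try:
                attempts = retries if outermost else 0
                for attempt in range(attempts + 1):
                    try:
                        return func(*args, **kwargs)
                    except DBDeadlock:
                        # A nested call must not retry on its own: the deadlock
                        # rolled back the whole transaction, so re-raise and let
                        # the outermost caller restart from the beginning.
                        if not outermost or attempt == attempts:
                            raise
            finally:
                if outermost:
                    _local.in_tx = False
        return wrapper
    return decorator
```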
Kiall | I can see a fix that involves reintroducing the code deadlock we just fixed.. lol | 15:32 |
eandersson | haha | 15:43 |
*** EricGonc_ has joined #openstack-dns | 15:50 | |
eandersson | The good thing with all of this is that I am getting more and more familiar with the source code :p | 15:50 |
openstackgerrit | Betsy Luzader proposed openstack/designate: Migrate Server table https://review.openstack.org/136440 | 15:51 |
*** barra204_ has joined #openstack-dns | 15:52 | |
*** EricGonczer_ has quit IRC | 15:53 | |
*** barra204_ is now known as shakamunyi | 15:53 | |
*** paul_glass has joined #openstack-dns | 16:01 | |
*** paul_glass1 has joined #openstack-dns | 16:02 | |
*** paul_glass1 has quit IRC | 16:02 | |
*** paul_glass has quit IRC | 16:05 | |
*** mikedillion has joined #openstack-dns | 16:13 | |
*** mikedillion has quit IRC | 16:14 | |
openstackgerrit | Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks https://review.openstack.org/134524 | 16:14 |
Kiall | eandersson: as near as I can tell, the "data is empty" issue doesn't happen on master.. Not sure why, or if my tests are just flawed though.. | 16:31 |
openstackgerrit | Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks https://review.openstack.org/134524 | 16:35 |
*** rjrjr has joined #openstack-dns | 16:49 | |
*** untriaged-bot has joined #openstack-dns | 17:02 | |
untriaged-bot | Untriaged bugs so far: | 17:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1403267 | 17:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1404395 | 17:02 |
uvirtbot | Launchpad bug 1403267 in designate "create_domain should handle status asynchronously" [Undecided,New] | 17:02 |
uvirtbot | Launchpad bug 1404395 in designate "Pool manager attempts to periodically sync *all* zones" [Undecided,New] | 17:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1406414 | 17:02 |
uvirtbot | Launchpad bug 1406414 in designate "Delete zone fails to propagate to all (Bind) nameservers in a pool depending on threshold_percentage" [Undecided,New] | 17:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1403591 | 17:02 |
uvirtbot | Launchpad bug 1403591 in designate "A ZeroDivisionError is Thrown Without Servers" [Undecided,New] | 17:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1289444 | 17:02 |
uvirtbot | Launchpad bug 1289444 in designate "Designate with postgres backend is having issues" [Undecided,New] | 17:02 |
untriaged-bot | https://bugs.launchpad.net/designate/+bug/1404529 | 17:02 |
uvirtbot | Launchpad bug 1404529 in designate "DynECT is called twice when any domain action happens." [Undecided,Confirmed] | 17:02 |
*** untriaged-bot has quit IRC | 17:02 | |
*** artom has quit IRC | 17:04 | |
*** mikedillion has joined #openstack-dns | 17:13 | |
eandersson | I'll run some tests tomorrow, and try to figure out why it is happening. :D | 17:19 |
Kiall | I may have just reproduced it.. Maybe... | 17:19 |
*** betsy has quit IRC | 17:22 | |
rjrjr | timsim: you on? | 17:27 |
rjrjr | vinod: you on? | 17:27 |
timsim | rjrjr yep | 17:27 |
rjrjr | okay, i looked at the code you submitted. i have a better fix for it which also fixes other things. i will be pushing that up shortly. in a nutshell, i'm fixing it with the fix for https://bugs.launchpad.net/designate/+bug/1403267 | 17:28 |
uvirtbot | Launchpad bug 1403267 in designate "create_domain should handle status asynchronously" [Undecided,New] | 17:28 |
rjrjr | also, about removing the create and delete statuses... | 17:28 |
rjrjr | i think we should hold off on removing the create statuses | 17:28 |
rjrjr | we might need that information when we allow for new servers to be added. | 17:29 |
rjrjr | i do agree however, that once a domain is deleted, we should remove the create, update, and delete statuses. | 17:29 |
*** mikedillion has quit IRC | 17:29 | |
rjrjr | will that be okay with you? (about the statuses) | 17:30 |
timsim | I thought the whole 'add a new server, sync everything' process was going to be like, hey central, dump your state and call out to do all these things or something | 17:30 |
timsim | Or more generally, hey new server (or old server) what's your state, this is my state, make the necessary changes. | 17:31 |
rjrjr | we'll want to calculate consensus when we add a new server. if the threshold is 100% and we add a new server and it fails... | 17:31 |
rjrjr | we already have the logic to calculate consensus for creation. i don't think we need to add new logic for the same thing on an add by removing the status. | 17:33 |
rjrjr | honestly, this isn't causing any issues. | 17:33 |
timsim | If you've got millions of zones, keeping those around might be an issue. | 17:33 |
timsim | Wouldn't you pop a new status for creation of a zone when you add a new server, and then calculate consensus for the Pool anyway? | 17:34 |
rjrjr | we are talking about 3 rows per zone (create, update, and delete status) | 17:35 |
*** rmoe has quit IRC | 17:36 | |
Kiall | Well - Isn't it the "pool manager cache" - So, it should be safe to lose any value from it? | 17:39 |
Kiall | So - Clearing should be safe+fine to do | 17:39 |
Kiall | If a value isn't present, I think it should be recreatable... | 17:40 |
timsim | Agreed, it kind of seemed like these values would represent one change or one logical group of changes, and when the changes were finished, they'd be deleted. The only things that would remain in the cache were pending or error'd changes. | 17:41 |
Kiall | timsim: ++ | 17:41 |
timsim | Aren't there multiple create statuses for each zone if you've got more than 1 server backend? | 17:47 |
rjrjr | sorry, you are correct, one of each status for each server. | 17:47 |
rjrjr | i still need the update status for each zone per server however. | 17:48 |
timsim | I'm ok with that, as long as they're deleted when they're finished. Because in theory if you've got one entry per server per change, you can retry only the ones that need retrying. | 17:48 |
rjrjr | i use that to calculate consensus. | 17:49 |
*** vinod has quit IRC | 17:49 | |
rjrjr | we would leave the update status though. | 17:49 |
rjrjr | let me explain | 17:49 |
rjrjr | i add a record | 17:50 |
rjrjr | 1 server is at serial number 2 | 17:50 |
rjrjr | 1 server is at serial number 3 | 17:50 |
rjrjr | 1 server is at serial number 4 | 17:50 |
rjrjr | notifies go out and 2 servers don't respond. | 17:50 |
rjrjr | but the server with serial number 2 does respond and it is now at serial number 5 | 17:51 |
rjrjr | the new consensus is now 3 for 100% | 17:51 |
rjrjr | i need the old serial numbers to calculate that. | 17:51 |
rjrjr | man, this is not fun to explain in chat... | 17:52 |
timsim | Would you wait to mark the zone active until they're all at 5? | 17:52 |
rjrjr | it isn't a zone thing. | 17:52 |
rjrjr | it is marking records as active | 17:52 |
timsim | Alright, the records then, doesn't the zone get marked pending when you make that change though? | 17:53 |
rjrjr | any record change that occurred at serial number 3 is now active, even though 2 servers didn't respond. they already got the change because they responded earlier with higher serial numbers. | 17:53 |
rjrjr | changes for serial number 4 and 5 will be pending however. | 17:53 |
rjrjr | but all changes for serial number 3 and lower are active. | 17:53 |
Kiall | rjrjr: so, "the new consensus is now 3 for 100%" <-- This is where the status entries for 3 should be deleted IMO | 17:54 |
Kiall | And.. If none exist, or some are missing (it's a cache after all), they can be re-fetched | 17:54 |
rjrjr | i have 1 status per server for an update | 17:54 |
rjrjr | that status includes the serial number returned from the server. | 17:54 |
rjrjr | i don't store a status for every update. 1 and only 1 per server per zone. | 17:55 |
rjrjr | otherwise, our table could be theoretically larger than what tim had pointed out. | 17:55 |
Kiall | Ah.. Humm, so.. deleting them there would bork the consensus for #4 | 17:55 |
rjrjr | correct. | 17:55 |
rjrjr | think of that status as a watermark. i store the highest serial number returned from a server for that zone. | 17:56 |
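The consensus rjrjr is describing reduces to comparing each server's highest-acknowledged serial (the watermark) against the serial that carried a given change. A hypothetical sketch with illustrative names, using the numbers from the example above (servers at serials 5, 3 and 4 after the lagging one answers):

```python
# Hedged sketch of the watermark/consensus idea: a change is ACTIVE once enough
# nameservers report a serial >= the serial that carried the change.
def change_is_active(change_serial, server_serials, threshold_percentage):
    """Return True if the change has propagated to enough servers."""
    in_sync = sum(1 for s in server_serials if s >= change_serial)
    return (in_sync * 100.0 / len(server_serials)) >= threshold_percentage


# Everything up to serial 3 is active at a 100% threshold; serial 4 is still pending.
assert change_is_active(3, [5, 3, 4], 100)
assert not change_is_active(4, [5, 3, 4], 100)
```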
*** rmoe has joined #openstack-dns | 17:57 | |
Kiall | Yep, I get that now.. So... Is the confusion/concern more related to how the pool manager cache is really being treated as a reliable datastore then? e.g. should we add a memcache driver to help ensure the code doesn't treat it like that? | 17:57 |
rjrjr | are you suggesting the logic should all be changed? | 17:58 |
rjrjr | honestly, i designed it this way exactly to address the problem of too large of a cache. | 17:58 |
Kiall | Only if necessary to ensure it's treated as a cache, rather than a persistent store | 17:58 |
*** penick has joined #openstack-dns | 17:58 | |
timsim | I can't operate under the assumption that the Pool Manager Cache will always be there. | 17:59 |
rjrjr | if we did this the way i heard from others, the logic of removing/adding/etc. would be horrendous. | 17:59 |
rjrjr | if servers don't respond the cache could grow without bounds. | 17:59 |
Kiall | A SQL-based cache without an "expires" would, but a traditional cache would clear itself out over time | 18:00 |
*** jordanP has quit IRC | 18:00 | |
timsim | Eventually when things respond again, those entries will be deleted, and as it catches up, your cache size would get back down toward 0 | 18:00 |
rjrjr | you have a server down for a day and you have a million changes in that day. that cache would be large. | 18:00 |
timsim | Huge. Yep | 18:01 |
Kiall | rjrjr: cache entries should expire after a short period (say 1 hr) | 18:01 |
rjrjr | this sets an upper bound. predictable. manageable. | 18:01 |
Kiall | after that, if you need the info 6 hours later, you do a SOA query to the nameserver to refresh the info | 18:01 |
rjrjr | with this design, we can tell a user what to expect for a footprint even when servers are down. | 18:02 |
timsim | I suppose if you had something down for an hour, then you resync everything anyway? | 18:02 |
rjrjr | with what you are proposing, you cannot tell the user what the footprint will be. | 18:02 |
timsim | Kiall: ^ | 18:02 |
timsim | rjrjr: Not sure what you mean by footprint? | 18:03 |
Kiall | timsim: ideally, your thresholds haven't been hurt by 1 server being down.. so things will recover by themselves, and through the periodic sync to add/remove created/deleted zones.. | 18:03 |
rjrjr | the size of the cache/database. you can easily predict its size based on the number of zones and servers. | 18:03 |
rjrjr | even when there is an outage. | 18:04 |
Kiall | rjrjr: sure, so - using an actual cache without an expiry would result in a cache of the same size.. add an expiry, and that # turns into a max size rather than an absolute size | 18:04 |
Kiall | i.e. it wouldn't balloon the size; the # of entries is still fixed, it's just treating the cache as a cache - i.e. tolerating loss of cached values and recreating them on demand when necessary | 18:05 |
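What Kiall is suggesting could look roughly like the sketch below: cached per-server serials carry an expiry, and a missing or stale entry is rebuilt on demand with an SOA query to the nameserver. It assumes dnspython >= 2.0 for the resolver API; the SerialCache and get_serial names are hypothetical and this is an illustration of the "treat the cache as a cache" idea, not Designate's pool manager cache driver.

```python
# Hedged sketch: expiring serial cache that refreshes via an SOA query on miss.
import time

import dns.resolver  # from the dnspython package


class SerialCache(object):
    def __init__(self, ttl=3600):
        self._ttl = ttl
        self._entries = {}  # (zone, server_ip) -> (serial, stored_at)

    def get_serial(self, zone, server_ip):
        entry = self._entries.get((zone, server_ip))
        if entry and time.time() - entry[1] < self._ttl:
            return entry[0]
        # Cache miss or expired: ask the nameserver itself for the zone's SOA.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server_ip]
        answer = resolver.resolve(zone, "SOA")
        serial = answer[0].serial
        self._entries[(zone, server_ip)] = (serial, time.time())
        return serial
```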
rjrjr | this can still be a cache. it is similar to how NTP's cache works. | 18:08 |
rjrjr | while it isn't tolerant of a cleaning right now, it could be with a few modifications. | 18:08 |
rjrjr | regardless, how do you want me to proceed? | 18:09 |
Kiall | I think you cleared things up for me anyway .. timsim? | 18:09 |
openstackgerrit | Kiall Mac Innes proposed openstack/designate: Retry transactions on database deadlocks https://review.openstack.org/134524 | 18:11 |
Kiall | eandersson: In theory, ^ solves the "data is empty" issue | 18:11 |
rjrjr | timsim? | 18:11 |
Kiall | timsim was asking Q's too ;) | 18:11 |
Kiall | Oh - You were pinging him | 18:11 |
Kiall | not asking why I mentioned him.. lol | 18:12 |
* Kiall goes back to deadlocks | 18:12 | |
rjrjr | kiall, honest opinion, should we redesign the logic for pool manager? i am under a lot of pressure right now to get this work done. i have some very real hard dates i need to meet. | 18:14 |
Kiall | redesign? I don't think we need to, I think there's some cleanup in various places needed, but not a redesign .. | 18:15 |
rjrjr | i am working on some of the bug fixes and fixing some of the things you and i talked about. i'll proceed with that until i hear back from timsim about this logic. | 18:15 |
timsim | Sorry, was getting some food | 18:18 |
timsim | I don't think it needs to be completely redesigned, but I do think we should talk it all through and see where everybody lands at the mid-cycle. | 18:20 |
rjrjr | okay. back to bug fixes. :) | 18:30 |
*** pk has joined #openstack-dns | 18:34 | |
ekarlso- | I still haven't figured out how to integrate the secondary stuff with pools -,,- | 18:48 |
ekarlso- | there were some quirks | 18:48 |
*** penick has quit IRC | 19:02 | |
*** penick has joined #openstack-dns | 19:06 | |
*** shakamunyi has quit IRC | 19:07 | |
*** vinod has joined #openstack-dns | 19:10 | |
*** rickerc has joined #openstack-dns | 19:18 | |
*** ryanpetrello_ has joined #openstack-dns | 19:25 | |
*** ryanpetrello has quit IRC | 19:27 | |
*** ryanpetrello_ is now known as ryanpetrello | 19:27 | |
*** pk has quit IRC | 19:43 | |
*** GonZo2K has joined #openstack-dns | 19:47 | |
*** shakamunyi has joined #openstack-dns | 19:51 | |
*** achilles has joined #openstack-dns | 19:52 | |
*** timsim has quit IRC | 19:52 | |
*** achilles has left #openstack-dns | 19:53 | |
*** pk has joined #openstack-dns | 19:54 | |
*** timsim has joined #openstack-dns | 19:54 | |
*** barra204_ has joined #openstack-dns | 19:55 | |
*** shakamunyi has quit IRC | 19:58 | |
*** jmcbride has quit IRC | 20:50 | |
*** jmcbride has joined #openstack-dns | 20:59 | |
*** pk_ has joined #openstack-dns | 21:18 | |
*** pk_ has quit IRC | 21:18 | |
openstackgerrit | Ron Rickard proposed openstack/designate: Handle create_domain Status Asynchronously in Pool Manager https://review.openstack.org/145346 | 21:25 |
*** russellb has joined #openstack-dns | 21:27 | |
rjrjr | timsim: this patch will fix the bug you were addressing and another bug kiall and i discussed. | 21:27 |
timsim | Cool. I'll test it out today/tomorrow morning. | 21:29 |
*** paul_glass1 has joined #openstack-dns | 21:30 | |
*** paul_glass1 has quit IRC | 21:37 | |
*** barra204_ has quit IRC | 21:39 | |
*** barra204_ has joined #openstack-dns | 21:40 | |
*** russellb has left #openstack-dns | 21:52 | |
*** ryanpetrello_ has joined #openstack-dns | 21:55 | |
*** ryanpetrello has quit IRC | 21:56 | |
*** ryanpetrello_ is now known as ryanpetrello | 21:56 | |
*** penick has quit IRC | 21:56 | |
*** barra204_ has quit IRC | 22:12 | |
*** barra204 has joined #openstack-dns | 22:13 | |
*** mwagner_lap has quit IRC | 22:15 | |
*** penick has joined #openstack-dns | 22:18 | |
*** jmcbride has quit IRC | 22:37 | |
*** timsim has quit IRC | 22:41 | |
*** ryanpetrello has quit IRC | 22:56 | |
*** vinod has quit IRC | 23:13 | |
*** nkinder has quit IRC | 23:14 | |
*** EricGonc_ has quit IRC | 23:16 | |
*** rickerc has quit IRC | 23:21 | |
*** eandersson has quit IRC | 23:21 | |
*** ekarlso- has quit IRC | 23:21 | |
*** timfreund has quit IRC | 23:21 | |
*** gohko has quit IRC | 23:21 | |
*** rickerc has joined #openstack-dns | 23:22 | |
*** eandersson has joined #openstack-dns | 23:22 | |
*** ekarlso- has joined #openstack-dns | 23:22 | |
*** timfreund has joined #openstack-dns | 23:22 | |
*** gohko has joined #openstack-dns | 23:22 | |
*** boris-42 has quit IRC | 23:24 | |
*** boris-42 has joined #openstack-dns | 23:26 | |
*** penick has quit IRC | 23:34 | |
*** barra204 has quit IRC | 23:43 | |
*** GonZoPT has joined #openstack-dns | 23:44 | |
*** GonZo2K has quit IRC | 23:45 | |
*** ryanpetrello has joined #openstack-dns | 23:57 |