17:00:01 #startmeeting Designate
17:00:02 Meeting started Wed Apr 1 17:00:01 2015 UTC and is due to finish in 60 minutes. The chair is Kiall. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:05 The meeting name has been set to 'designate'
17:00:09 Hey folks - who's about?
17:00:10 o/
17:00:13 o/
17:00:15 o/
17:00:27 o/
17:01:01 So - Agenda is nice and simple, kilo rc1 status ;)
17:01:05 #link https://etherpad.openstack.org/p/designate-kilo-rc1-reviews
17:01:14 Any other reviews need to land before we cut rc1?
17:01:24 #topic Kilo Release Status
17:01:28 Whoops.. Forgot ^ ;)
17:01:32 not that I can see
17:02:01 * elarson is fine pushing the designate emacs client to liberty
17:02:11 lolol
17:02:20 lol - better be an April 1st joke ;)
17:02:28 elarson never jokes about emacs
17:02:41 uh... yeah... just kidding? april fools?
17:02:56 lol.. sure
17:03:09 Anyway - Okay, let's move to...
17:03:10 #link https://launchpad.net/designate/+milestone/kilo-rc1
17:03:33 Status is looking pretty good, 3 in progress where 2 of those have reviews that should fix them.
17:04:04 Any takers on the last couple of open issues?
17:04:42 i can take #1413472
17:04:58 APIv1 slowness?
17:04:59 k
17:05:27 Endre isn't about, I think he's looking at bug 1433650
17:05:29 bug 1433650 in Designate "Floating IP Reverse DNS API cannot query neutron using passed through credentials" [Medium,New] https://launchpad.net/bugs/1433650 - Assigned to Endre Karlson (endre-karlson)
17:06:04 So - Any takers on bug 1437699 ? :)
17:06:05 bug 1437699 in Designate "mDNS should Handle Bad File Descriptor" [Medium,New] https://launchpad.net/bugs/1437699
17:06:46 Doesn't look like it :P
17:06:48 Nobody?
17:06:54 Okay, I'll take it ;)
17:07:01 i can look if nobody else wants it.
17:07:20 yeah, I don't mind taking a look, I'm just not sure how much time I'll have :(
17:07:23 rrickard: cool, if you can get to it that'd be great :)
17:07:34 i'll take it then.
17:08:12 The last docs one has no rush for rc1, so can stay unassigned for now.
17:08:49 Okay - So, how are people feeling about rc1? Have people found issues missing from that list etc?
17:09:42 I spotted this one I need to revive yesterday - https://review.openstack.org/#/c/142505/
17:09:42 and, wanted to discuss the default periodic_sync/recovery config values..
17:09:43 That thing we talked about getting some of the update logic out of the pool manager and into the backends maybe?
17:10:38 timsim: Yea, assuming we can keep the change small enough
17:10:52 Yeah, I don't think that has to get in.
17:10:56 Other than that, test test test
17:11:05 Yep - Okay .. So https://github.com/openstack/designate/blob/master/designate/pool_manager/__init__.py
17:11:16 The default options in there, I'm nearly certain we need to change them :)
17:11:39 periodic-recovery-interval = 120 seconds
17:11:39 periodic-sync-interval = 300 seconds
17:11:39 periodic-sync-seconds = None (i.e. everything)
17:11:56 That last one has me the most concerned.
17:12:24 every 300 seconds we're sending a SOA/NOTIFY to every nameserver for every zone.. Which is quite a number of queries every 5 mins ;)
17:12:56 Anyone have thoughts on "safer" defaults we can choose? i.e. keeping 99% of the recovery, without causing excessive load
17:13:32 Feel like I'm on my own today :D
17:13:41 Maybe only zones that have changed in the last 24 hours?
17:13:54 testing has shown the values to be too frequent?
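(For context on the load concern above: a minimal back-of-the-envelope sketch of the SOA/NOTIFY volume the shipped defaults produce while periodic-sync-seconds is None, i.e. every pass touches every zone. The zone and nameserver counts below are hypothetical, purely illustrative.)

```python
SECONDS_PER_DAY = 86400

periodic_sync_interval = 300  # shipped default: a full sync pass every 5 minutes
zones = 500                   # hypothetical deployment size
nameservers = 2               # hypothetical pool size

# With periodic-sync-seconds = None, every pass queries every zone on
# every nameserver in the pool.
passes_per_day = SECONDS_PER_DAY // periodic_sync_interval  # 288
queries_per_day = passes_per_day * zones * nameservers      # 288,000

print("SOA/NOTIFY queries per day: %d" % queries_per_day)
```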
17:14:02 And do the sync every ten minutes, or thirty?
17:14:20 rrickard: Yea, I've noticed quite a number of queries with only a small number of domains
17:15:04 I think we talked internally about syncing every hour for everything in the last 24 hours at some point. Don't know how reasonable that is now.
17:15:18 I feel like the sync probably needs to run more often than that.
17:15:42 not sure how complicated it would be, but what about using the refresh/retry of the zone SOA
17:15:55 that's sort of the reason for it existing.. at least in the other direction
17:16:44 jbratton: we had discussed that at one time.
17:16:53 jbratton: possibly, that gives some flexibility too (some zones can be re-checked more often etc).. But implementing that would be too big a change at this point
17:17:18 You could vaguely replicate that with a 1 or 2 hour interval, and sync everything.
17:17:18 yeah, that flexibility is what I liked, but I completely get that's a fairly big change
17:17:31 timsim - 30 mins + 24 hours == 48 syncs per changed domain, which is a good # less than we do today..
17:18:09 Which, I think is reasonable.. the average designate deploy will have a small # of zones, service providers like rax/hp will tune these...
17:18:32 Everyone will have to tune that for their setup a little bit, but for small setups that seems totally reasonable, maybe even a little overkill.
17:19:08 30 mins + 12 hours maybe ?
17:19:15 Yea, my thinking is the stock config should just work and "idle" along for small setups, bigger setups need more TLC when standing them up
17:20:43 Say - 500 zones churn each day with 30+24, 24k SOA/NOTIFYs a day * the number of nameservers
17:20:55 That seems high when you work it out ;)
17:21:29 i think we can all agree on 30 mins ?
17:21:41 so, the question is 6/12/24hr
17:22:22 30+12 = 12k * # of nameservers
17:22:22 30+6 = 6k * # of nameservers
17:22:31 or 1/3/6/12/24/48/96 :P
17:23:37 Okay - From the numbers I like 30+6 as the default, low enough that it won't scare people, high enough it should still catch zones which failed to AXFR/missed NOTIFYs etc
17:23:47 ++
17:24:01 I'm cool with that.
17:24:02 seems reasonable. +1
17:24:38 Sorted, releasing with periodic-sync-seconds = None (i.e. everything) scared me ;)
17:25:23 Beyond that, the open bugs, the etherpad reviews, and https://review.openstack.org/#/c/142505/ - are we all thinking "Good to release?"
17:26:04 how much testing have we done?
17:26:33 not much yet - but we still have some time to hammer it between RC1 and K-Final
17:26:59 okay. i would love to share my experiences with Icehouse and Juno once the pain wears off...
17:27:47 So we release rc1, test, fix anything we find, same for a couple more iterations, and then K-Final, right?
17:27:57 timsim: exactly
17:28:02 yup
17:28:25 we should not tag rc-1 if we know of any blockers
17:28:45 re testing, we've done a good bit of scale testing between endre / timsim / myself, and mugsie and I will be setting up a soak env today/tomorrow
17:29:07 * mugsie is just waiting on credentials ;)
17:29:20 Yeah, I haven't given the latest changes a good hammering. Definitely want to do that.
17:30:24 timsim: yep, that's the plan for the next 3-4 weeks or so
17:30:33 Cool.
17:30:35 (Release is April 30)
17:31:50 Okay, we can call it done re kilo status for today
17:31:54 #topic Open Discussion
17:32:11 rrickard: http://sonar.designate-ci.com/dashboard/index/openstack:designate <-- I know you'll like this, still WIP.
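(Referring back to the "30 mins + 6 hours" defaults agreed above: a sketch of what those values would look like as oslo.config options, in the style of designate/pool_manager/__init__.py. The option names are taken from the discussion; the surrounding registration code and group name are my approximation, not copied from the Kilo tree.)

```python
from oslo_config import cfg

# Agreed defaults: sync every 30 minutes, but only zones changed within
# the last 6 hours, instead of the old "None means everything" behaviour.
OPTS = [
    cfg.IntOpt('periodic-recovery-interval', default=120,
               help='Seconds between periodic recovery runs'),
    cfg.IntOpt('periodic-sync-interval', default=1800,
               help='Seconds between periodic sync runs (30 minutes)'),
    cfg.IntOpt('periodic-sync-seconds', default=21600,
               help='Only sync zones changed in the last N seconds '
                    '(6 hours), rather than everything'),
]

cfg.CONF.register_opts(OPTS, group='service:pool_manager')
```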
17:32:18 hi Kiall - i see the pool targets change has merged - so that's what I need to build the infoblox backend, correct?
17:32:35 kiall: cool. i'll take a look.
17:32:40 yeah, was going to ask rrickard for the config they use in ebay/paypal for sonar
17:32:54 mugsie: i'll get that for you.
17:33:05 cool, thanks :)
17:33:13 johnbelamaric1: It was the blocker, and should enable a driver to be written, but I was hoping to get more done to leave much less code in the driver itself :(
17:33:59 we are still not live on designate. but getting close. i apologize for my absence, but rolling out designate has been my top priority.
17:34:09 kiall: ok. as far as migrating our "old style" driver to the latest code, any pitfalls or hints?
17:34:29 rrickard: np - it happens
17:34:48 damn employers trying to get their money's worth from us :D
17:34:52 rrickard: yep, I know the feeling.. Open Source time vs Company Time.. Balance can be hard to find ;)
17:35:46 johnbelamaric1: I've nothing written up yet - getting all the various issues resolved has taken pretty much all my time :(
17:36:33 kiall - ok, no problem. I can just figure it out :). If I get this done, is there any chance of it shipping in Kilo - when do I need it by, if that is possible? Assuming little to no changes outside of our specific driver code
17:37:26 would anyone be interested in a doc on getting a dev env set up on OS X?
17:38:00 johnbelamaric1: Letting anything more land in Kilo will result in a certain Release Manager shouting very loudly at me, we're already technically well past the feature freeze :(
17:38:23 kiall - Ok. I can still get it working with the Kilo base and just put it on github
17:38:28 and land it in L-1
17:38:57 johnbelamaric1: but, getting it in-tree isn't the only way to get it "into kilo".. e.g. at HP we've had an Akamai driver out of tree for ages - since the backends are plugins, they can live anywhere.. and yes, L will open for new features once we cut rc1
17:39:16 kiall: ok, that works. thanks
17:39:39 My head is so not working right today, can't type, thinking @ about 1mph etc -_-
17:40:11 elarson: re OS X - Sure! Lots of OpenStack devs seem to be OS X folks, so adding that would be great :)
17:40:28 Kiall: cool, I'll compile my notes
17:40:46 Excellent :)
17:40:49 Okay - Anything else before we call it a day?
17:41:07 (and don't forget to keep an eye on + review these pls - https://etherpad.openstack.org/p/designate-kilo-rc1-reviews ;))
17:41:41 nothing from me
17:41:46 nor I
17:42:09 I'm holding onto the emacs designate client until liberty, so I'm good ;)
17:42:33 elarson: we'll see if you're still suggesting that tomorrow, April 2nd ;)
17:42:38 :)
17:42:51 Okay - Thanks guys :) Hopefully we can get rc1 out the door before the end of the week :D
17:43:16 #endmeeting
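(A footnote to the out-of-tree driver discussion above, where Kiall notes that backends are plugins and can live anywhere: a minimal sketch of how such a driver could package itself, assuming the usual setuptools entry-point plugin mechanism. The 'designate.backend' namespace and all package/class names below are illustrative, not verified against the Kilo tree.)

```python
# setup.py for a hypothetical out-of-tree Designate backend package.
from setuptools import setup

setup(
    name='designate-backend-example',
    version='0.1.0',
    packages=['designate_backend_example'],
    entry_points={
        # Designate discovers backends as plugins; an out-of-tree driver
        # registers itself under the same entry-point namespace the
        # in-tree backends use, so no in-tree changes are needed.
        'designate.backend': [
            'example = designate_backend_example.impl_example:ExampleBackend',
        ],
    },
)
```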