17:01:35 #startmeeting Designate
17:01:36 Meeting started Wed Jan 28 17:01:35 2015 UTC and is due to finish in 60 minutes. The chair is Kiall. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:40 The meeting name has been set to 'designate'
17:01:40 thanks Sukhdev, bye all
17:01:45 Heya - Who's about?
17:01:51 o/
17:02:10 o/
17:02:10 mugsie just got nabbed by a phonecall, will be along in a minute..
17:02:21 rjrjr_ - about?
17:02:31 o/
17:02:49 Okay - Lets get started...
17:02:54 #topic Kilo Release Status (kiall - recurring)
17:03:01 #link https://launchpad.net/designate/+milestone/kilo-2
17:03:17 k2 is Feb 5th.. 7/8 days
17:03:56 Lot's of bugs still open, if you have one, and it's not going to land in k2, can we move them to k3 please?
17:04:10 (I started a few mins ago, but got distracted ;))
17:04:56 Okay - Other than that, we have the PM cache stuff rjrjr_ is working on - The review is still marked WIP, is that intentional rjrjr?
17:05:30 o/
17:05:38 correct. unit tests still pass (sqlalchemy) but i am working on getting the correct status out of mdns for create/delete.
17:05:59 rjrjr_: K, will keep an eye out for another patchset on that one :)
17:06:34 Okay - Other than that, k2 is starting to shape up.. So, let's move on!
17:06:44 #topic Pools - Where are we? (kiall - recurring)
17:06:51 i'd also like to know if 'force_check' is the right approach. i posted in the main chat about it.
17:07:05 I think we already have the answer to this - and we can remove the recurring topic at this stage, thoughts?
17:07:21 i think so
17:07:25 +1 to remove the recurring topic of pools
17:07:39 K - Consider it gone.. :)
17:07:40 Yep
17:08:07 #topic #toipic
17:08:09 gah.
17:08:17 #topic Mid-Cycle Review (kiall)
17:08:19 there we go ;)
17:08:25 #link https://etherpad.openstack.org/p/designate-mid-cycle-jan-2015
17:08:55 So - Just wanted to figure out if we've left anything dangling after the mid-cycle..
17:09:07 i.e. things we need to follow up on etc?
17:09:32 Kinda curious where we are with the Pools->DB stuff.
17:09:34 IXFR and backup/restore
17:09:50 I know we've got the pools API stuff in progress, I'm hoping to find time to finish my piece of that this week.. But after being away from the office for 2 weeks it's hard ;)
17:10:13 Cool
17:10:38 vinod: yes, IXFR and few topics were dropped :) rjrjr_, did you manage to finish that spec? (All good if you didn;t)
17:10:40 didn't*
17:10:48 no
17:11:08 batch actions / ns update as well
17:11:15 Okay, whenever you do, drop it on the agenda for the next meet :)
17:11:48 mugsie: Yep, and perf/testing/API microversions :) I was more hoping for stuff that we started, but left unfinished ;)
17:13:03 I'll take that as a now.. From my memory, we have the Pools Config -> DB stuff, and the Agent stuff we landed.
17:13:05 Any others?
17:13:13 The start of the agent stuff we landed*
17:13:30 I don't think so.
17:13:38 nothing - the agent is tied up waiting for targets
17:13:45 which is part of the API->DB
17:13:51 Yeah, the Pool Manager backend for it.
17:14:00 yeah
17:14:03 KK..
17:14:18 I'll move on so ;)
17:14:24 #topic Next Sprint?
17:14:28 link https://etherpad.openstack.org/p/designate-sprints
17:14:32 #link https://etherpad.openstack.org/p/designate-sprints
17:15:03 I figure we should pick+plan for the next monthly sprint by figuring out the what and when early..
17:15:19 My suggestion is to do a Docs sprint - thoughts?
17:15:27 yup
17:15:36 Absolutely.
17:15:42 That was part of what we talked about on the thursday
17:15:49 we need docs, badly
17:16:01 +1 on docs
17:16:02 Cool :) I missed that since I was gone at that point ;) I started a etherpad for it, https://etherpad.openstack.org/p/designate-documentation-sprint
17:16:29 Fill in any areas you think would be useful..
17:16:51 The last sprint was Jan 16th from memory - Should we pick a similar-ish date for the next?
17:17:09 yeah, mid month sounds good
17:17:14 Sure, why not. After k2 I suppose
17:17:51 +1
17:18:36 So, say the 13th then?
17:18:43 Friday the 13th that is ;)
17:19:25 Seems good. That's the day before Valentine's Day here (don't know if you guys celebrate the patron saint of overpriced greeting cards, flowers and chocolates)
17:19:49 If anyone is going out of town with a significant other they may miss. Probably not a concern though.
17:20:08 timsim: crass consumerism? we have that here 100% ;)
17:20:22 Ah, too bad lol.
17:20:27 Okay - Unless I hear shouting from someone who's unable to attend, we'll call it the 13th.. Ideally afternoon Ireland, Morning Texas, but exact times can be figured out closer to the day ;)
17:21:48 Sorry - Distracted for a sec.. Moving on since we're settled :)
17:21:58 #topic Strawman: Set a McCabe complexity threshold
17:22:01 #link https://review.openstack.org/#/c/149885/
17:22:06 #link http://logs.openstack.org/85/149885/1/check/gate-designate-pep8/de167ea/console.html#_2015-01-25_17_15_51_132
17:22:27 So - rjrjr_ was the one who suggested this at the mid-cycle, so figured I'd throw it out there and see what people think?
17:22:41 I'm personally not a fan of yet anything thing to -1 me ;)
17:23:15 I don't know about voting, but it'd be a nice thing to keep track of for new patches.
17:23:17 With our current code - and a threshold of 10 - we have ~10 places where we exceed the limit..
17:23:30 timsim: I'm not sure I can make it non-voting to be honest...
17:24:15 It's ran as part of the pep8/flake8 job.. So can't easily pick+choose what votes from that
17:24:22 ah
17:24:28 rjrjr_: thoughts, since you brought it up? :D
17:24:48 Ah. Well, maybe we make it a point to fix the more problematic ones (bugs?) and run it periodically for new things on our own and file more for things that have gotten out of hand?
17:24:59 hum, personally I dislike metrics like ^ when they are a used in a hard rule type of way
17:25:04 one way or another, i have to address this to get designate through our internal processes.
17:25:30 rjrjr_: yea, it would be interesting to see if ^ results line up with what your tooling is complaining about :)
17:25:30 but, it is not an immediate thing.
17:25:51 that and code coverage are the big 2.
17:25:59 I think I agree with timsim.. Something we run periodically outside the gate + address outliers..
17:26:03 we addressed that at the last sprint mostly.
17:26:11 started to anyway :)
17:26:48 in the end, it will make for easier to read code, hopefully.
17:27:09 rjrjr_: yea, I looked at each of the methods it highlighed - some were mine - and I still got confused ;)
17:27:42 But - I'm not convinced about making it a hard rule in the gate..
17:28:09 i think if we fix the ones it complains about, we can make it a rule later.
17:28:33 and if we decide we need the complexity, then we don't need to make it a hard rule.
17:28:35 Yeah I don't like that. You could do that and make it something ridiculously high, but otherwise, eh.
17:29:17 ultimately, the code problem is real. a value of 10 is reasonable.
17:29:36 Yea, I'm not sold on it ever being a hard rule .. But I do see the value in the output it generates.. Not sure how I reconcile my two viewpoints on this ;)
17:30:21 Anyway - Let's leave it, for now, as a guideline.. Each of those methods identified should be considered a bug, but the gate won't enforce 10 or less - for now.. Sound good?
17:30:23 Kiall: set up a sonar isnatnce as a 3rd party build ? ;)
17:30:50 kiall: agreed.
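[Editor's note: the McCabe check discussed above is the mccabe plugin that ships with flake8; it activates when `max-complexity` is set and reports violations as C901 errors, which is why it cannot easily be split out of the pep8/flake8 gate job. A minimal sketch of enabling it in a project's tox.ini (the threshold value of 10 comes from the discussion; the section placement is illustrative):]

```ini
# tox.ini (sketch) - flake8 runs the mccabe complexity check
# whenever max-complexity is set; any function whose cyclomatic
# complexity exceeds the limit fails with a C901 error.
[flake8]
max-complexity = 10
```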
17:30:57 mugsie: go for it - tell me when your done ;)
17:31:05 Sounds good
17:31:11 Ok - Moving on :)
17:31:11 #topic Open Discussion
17:31:16 mugsie: i can look into that. sonar is what we use in either case.
17:31:27 I was half joking
17:31:40 I did have it on my longer term plan to do
17:31:52 rjrjr_: I was kidding too :) BUT - I don't see any harm with having a public sonar instance chugging over the code
17:32:07 but in the next month or i could look into it
17:32:12 it would help me be proactive instead of reactive. i'll look into it.
17:32:18 kk
17:32:34 It's not going to be as useful as it would be for C/Java/.NET .. Cuz... Python, but I doub't it would be harmful ;)
17:32:41 dount*
17:32:44 doubt*
17:32:45 -_-
17:32:58 Okay - Any other topics?
17:33:03 so, for the open discussion, just a comment
17:33:51 i am going to be workiing on our internal rollout for the next few weeks. i am juggling quite a bit right now. so, if it looks like i'm slow, it's just because i'm doing other work too.
17:34:02 rjrjr_: np
17:34:12 I know how you feel ;)
17:34:14 hah - no problem :)
17:34:21 Is there any way we can step up and help you out?
17:34:34 ++ to ^ ;)
17:34:38 test, test, test.
17:35:00 once i have this mdns stuff worked out, we need to test.
17:36:21 Okay - Any other topics from anyone? I'm all out.
17:36:38 i'm thinking about a test_service_sqlalchemy.py, test_service_noop.py, and test_service_memcache.py for pool manager. thoughts?
17:37:10 or maybe just a _cache and _nocache since sqlalchemy and memcache should behave the same.
17:37:24 Should probably test that they do though.
17:37:46 rjrjr_: It sounds like 1 suite to me.. with the cache sensitive methods mocked in a few of the single tests
17:38:10 all the methods are cache sensitive.
17:38:31 we only have 4 public methods in pool_manager.
17:39:02 and i figured we'll want tests for the periodic_recovery and periodic_sync too.
17:39:10 but ultimately, all cache sensitive.
17:39:24 Humm - Aren't we moving the cache pieces into only the update_status methods, rather than pre-creating etc?
17:39:51 (I'm probably forgetting something TBH - Been away for way too many hours already)
17:39:55 not that i'm aware.
17:39:56 awake*
17:40:50 I think I'll need to eyeball the code again to be able to say ^ with any amount of confidence ;) Let's leave it as whatever you have already for now and can change later if needs be?
17:40:54 unless we want the backend drivers calling update_status. right now, we are making calls to the backend drivers.
17:41:46 but we decided that was a bad approach last week.
17:42:17 Yea, I think I remember - too tired to be able to think back that far though ;)
17:42:33 okay. i'll get the code working and we can decide then.
17:42:38 ++
17:42:53 Okay, unless anyone else has something.. Let's call it a day!
17:42:53 I thought it was more like, create the cache entries when you're creating/updating/deleting, but if they don't create it's cool because each piece that uses them can handle them not being there.
17:42:58 or not ;)
17:43:08 timsim, correct.
17:43:12 Alright, I'm good then :P
17:43:33 all i was suggesting was a separate suite for each cache driver.
17:43:59 or one suite with all the test methods x3.
17:44:47 I think there's a seperate test suite for each driver, but only for the driver code rather than testing the driver 3 times through pool manager - Maybe that's where we got out of sync? ;)
17:45:27 each driver has a test. but a noop driver will behave different than the memcache and sqlalchemy driver.
17:45:40 inside pool manager.
17:46:16 should have wrote 'each driver has a test suite of their own'.
17:46:26 I'm starting to remember ;) You're making more sense to me now :)
17:46:55 how about this. i'll do what i want and everyone can -1 it if they don't like it. :)
17:47:06 lol - perfect.
17:47:11 Sounds good
17:47:16 i'm good then.
17:47:22 Okay, let's call it a day then :) Thanks all!
17:47:30 #endmeeting
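[Editor's note: the "one suite with all the test methods x3" idea discussed at the end - shared behavioral tests run once per cache driver, with the noop driver legitimately "forgetting" everything - can be sketched with a test mixin. All class and method names below are illustrative stand-ins, not Designate's actual pool manager or cache driver code.]

```python
import unittest


class MemoryCache:
    """Illustrative stand-in for a real cache driver (e.g. sqlalchemy/memcache)."""

    def __init__(self):
        self._data = {}

    def store(self, key, value):
        self._data[key] = value

    def retrieve(self, key):
        # Raises KeyError on a miss, mirroring a cache-miss signal.
        return self._data[key]


class NoopCache:
    """Illustrative noop driver: stores nothing, so every retrieve is a miss."""

    def store(self, key, value):
        pass

    def retrieve(self, key):
        raise KeyError(key)


class CacheBehaviorMixin:
    """Shared tests; each driver subclass supplies make_cache() and expect_misses."""

    def test_store_then_retrieve(self):
        cache = self.make_cache()
        cache.store('status', 'SUCCESS')
        try:
            self.assertEqual(cache.retrieve('status'), 'SUCCESS')
        except KeyError:
            # A miss is acceptable only for drivers that are allowed to
            # forget (the noop case) - callers must tolerate missing entries.
            self.assertTrue(self.expect_misses)

    def test_miss_raises_keyerror(self):
        cache = self.make_cache()
        with self.assertRaises(KeyError):
            cache.retrieve('missing')


class TestMemoryCache(CacheBehaviorMixin, unittest.TestCase):
    expect_misses = False

    def make_cache(self):
        return MemoryCache()


class TestNoopCache(CacheBehaviorMixin, unittest.TestCase):
    expect_misses = True

    def make_cache(self):
        return NoopCache()
```

The mixin gives one definition of the expected behavior while each driver gets its own concrete suite, which matches the "separate suite for each cache driver, same test methods" shape of the discussion.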