17:01:06 #startmeeting Designate
17:01:06 Meeting started Wed Dec 17 17:01:06 2014 UTC and is due to finish in 60 minutes. The chair is Kiall. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:10 The meeting name has been set to 'designate'
17:01:12 Heya - Who's about today?
17:01:31 o/
17:01:31 o/
17:01:55 Nobody else about?
17:02:24 Guess not...
17:02:24 #topic Action Items from last week
17:02:32 No actions recorded from last ...
17:02:33 o/
17:02:41 o/ :P
17:02:46 #topic Kilo Release Status (kiall - recurring)
17:02:54 #link https://launchpad.net/designate/+milestone/kilo-1
17:03:21 o/ btw
17:03:29 I was hiding :p
17:03:30 https://bugs.launchpad.net/designate/+bug/1402788
17:03:31 So - the last unfinished BP is half in - Thierry recommended we split it, and close the first part
17:03:32 Launchpad bug 1402788 in designate "obj_reset_changes is implemented incorrectly in SQLA Storage backend" [High,In progress]
17:03:37 https://bugs.launchpad.net/designate/+bug/1396720
17:03:38 Launchpad bug 1396720 in designate "PUT on a Recordset with multiple Records results in delete/recreation of all Records" [High,In progress]
17:03:40 ^ need review
17:03:54 #action kiall to split bp validation-cleanup into k1/k2 parts.
17:04:04 mugsie: I thought that landed?
17:04:09 nope
17:04:16 both still need +A
17:04:22 K
17:05:05 So - bugs 1402788 and 1396720 - could we get any final reviews done on those ASAP? I'll be giving Thierry our k1 sha1 in less than 24 hours.
17:05:10 Launchpad bug 1402788 in designate "obj_reset_changes is implemented incorrectly in SQLA Storage backend" [High,In progress] https://launchpad.net/bugs/1402788
17:05:11 Launchpad bug 1396720 in designate "PUT on a Recordset with multiple Records results in delete/recreation of all Records" [High,In progress] https://launchpad.net/bugs/1396720
17:06:02 And the other two bugs (1399257 and 1398989) can get pushed to k2.
17:06:17 #action kiall to push 1398989 and 1399257 to k2
17:06:19 Done with https://review.openstack.org/#/c/141879/
17:06:32 Thanks vinod :)
17:06:40 I'll test that second one in a bit
17:06:55 timsim: tnx
17:07:26 Okay, beyond those, I think we're pretty much set for k1.. I'm not convinced we have time for any other changes to properly merge + bake a little.
17:07:38 sorry I'm late
17:07:44 No problem :)
17:08:11 betsy: ^ comment includes your change I think, I'm not sure we have enough time to fixup/merge before I have to give Thierry a SHA1 tomorrow for K1
17:08:36 (I've got a meeting with him at 16:30 UTC tomorrow where I'll be giving him the sha1)
17:08:37 I know. i really wanted it in k-1
17:08:58 betsy: I think we get it in ASAP into K2
17:09:04 mugsie: +
17:09:27 is there anything else people think we need in K1 ?
17:10:13 i will take that as no ;)
17:10:13 sorry - call distracted me for a sec .. power outages for the win ;)
17:10:20 I haven't had time to go back through and test server pools since yesterday, I assume it got fixed from yesterday :P
17:10:38 fixed?
17:10:44 (or maybe it wasn't broken, just configuration things)
17:11:02 it worked for me with the correct config.
17:11:08 https://review.openstack.org/142505 would be nice, but it's slightly more involved than just changing a default conf value, so it's probably best if it doesn't..
17:11:08 (I still think there should be something to catch that possible divide by zero, but that can be done later)
17:11:23 I would say K2 for ^
17:11:26 timsim: yea, I've hit that divide by zero once or twice..
17:11:50 it happens if you have no servers. i'll create a bug and fix it.
17:11:58 Alright, fair enough, I'm good.
17:12:04 Anyway - I think we're good on K2 - Only change I'd like to see landed is mugsie's update RRSet one, but it's not the end of the world if it doesn't either.
17:12:18 Okay - Since we're mostly here already...
17:12:19 #topic Pools - Where are we? (kiall - recurring)
17:12:39 Remember my code removes servers as such, so wait on that before you fix a bug
17:12:42 Mine might fix it
17:12:53 We have some bugs (divide by zero for example), but overall we're pretty good.
17:13:17 oh, yeah I hit that just now Kiall
17:13:27 I guess it's due to the server_id thing not being set w dynect :
17:13:41 rjrjr and I discussed some changes to the data model (pools have servers currently) to support different backends like Akamai and DynECT - These are k2 things, so once we get k1 out, I'll write up the thoughts I promised you
17:14:14 Anything else of note around getting pools "perfect"? :)
17:14:18 i have unit tests and better logging and bugs.
17:14:38 Cool :)
17:15:00 has anyone tested with multiple servers in a single pool, multiple pools, etc. yet?
17:15:16 rjrjr: not yet
17:15:21 I've done a little around that, but not enough to be 100% confident in it yet
17:15:21 not yet
17:15:24 Kiall: how do you mean though by changing the data model ?
17:15:29 vs the also_notify thing
17:15:33 ekarlso-: I'll write up post-k1
17:15:52 It's related, but separate to the also-notify thing needed for Dyn/Akamai
17:15:52 Kiall: just wondering if it affects the work i'm doing
17:15:59 hmm k
17:16:23 rjrjr: I'm guessing you have BTW?
17:16:31 a little.
17:16:41 K
17:16:47 Anything else on Pools before we move on?
17:16:58 not extensive testing. wondering if we can get it into our automated testing though.
17:17:06 I'm good
17:17:30 rjrjr: I think we probably can, but I'm not 100% sure how we might go about it
17:17:47 e.g. devstack could be updated to manually start bind/powerdns etc with multiple different sets of config
17:18:47 Okay - Moving on, we can look into better CI of multi-server pools in k2..
17:19:04 #topic V2 RecordSet Update Behavior (kiall)
17:20:00 So - We've discussed this before, but every time I see it, I'm still thinking we've done this one wrong! Specifically, the PUT vs PATCH for RecordSet update being different to all our other resources.
17:20:29 I thought we were changing it all to json patch
17:20:56 Plus I don't think the update server should be a patch. It should be a put like recordset
17:21:14 betsy: well, JSON-Patch will be supported at some point soon too .. But the "normal" way should be supported as well
17:21:19 You can easily delete all your servers if you try to add one and don't include all the others
17:21:43 Which you might think you can do since it's a patch
17:21:55 betsy: that is problem.
17:22:01 betsy: Well, I see it as the other way around .. If i PATCH /zones/<id> with {"description": "Foo"} - I'm updating part of the Zone resource, If i PATCH /zones/<id>/recordsets/<id> with {"description": "Foo"} - I'm updating part of the RRSet resource
17:22:02 is a problem
17:22:39 But - If I PUT /zones/<id>/recordsets/<id> - I expect to be "replacing" the whole resource, e.g. the resource should look like I had just done a create API call..
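To make the contrast being drawn here concrete, the following is a minimal sketch of the two call styles under discussion, written against an assumed Designate v2 endpoint; the base URL, IDs and payload shapes are illustrative assumptions, not copied from the v2 API reference of the time.

    # Minimal sketch of the two update styles being contrasted above.
    # BASE, the IDs, and the payload shapes are hypothetical.
    import requests

    BASE = "http://designate.example.com:9001/v2"   # hypothetical endpoint
    ZONE_ID = "zone-uuid"                            # hypothetical IDs
    RRSET_ID = "recordset-uuid"
    JSON = {"Content-Type": "application/json"}

    # PATCH on a Zone: only the supplied key ("description") is changed.
    requests.patch(BASE + "/zones/" + ZONE_ID,
                   json={"description": "Foo"}, headers=JSON)

    # PUT on a RecordSet: the verb says "replace the whole resource", but the
    # behaviour being debated is the same key-level update as the PATCH above.
    requests.put(BASE + "/zones/" + ZONE_ID + "/recordsets/" + RRSET_ID,
                 json={"description": "Foo"}, headers=JSON)

    # Where it bites: when the key's value is a list, the supplied list replaces
    # the stored one wholesale, so any record not re-sent here is dropped.
    requests.put(BASE + "/zones/" + ZONE_ID + "/recordsets/" + RRSET_ID,
                 json={"records": ["192.0.2.10"]}, headers=JSON)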
17:22:43 Right, so if i want to add a server to a pool, I would just include that new server in my request, but that will delete all the other servers
17:22:50 … that aren't listed in the request
17:23:02 +1 to betsy
17:23:15 betsy: right - but that's not the resource being acted on .. You're acting on the Pool
17:23:38 actually.... you are acting on the nested sub resource
17:23:41 No. You're also acting on the nameservers, because that's how we add, delete and modify nameservers - thru the pool
17:23:42 an update with {"description": "foo"} replaces the description with the supplied value
17:23:53 an update with {"records": []} replaces the records with the supplied value
17:24:50 e.g. it's key + value, and you update a key to a new value; the fact that the value is a list doesn't change the fact for me that you're not replacing the entire RecordSet resource with {"records": []} - you're only changing 1 part of it.
17:25:05 so, there is a way to add a server and update the server list?
17:25:40 rjrjr: No, with the current code you have to supply the key's full value .. When we implement JSON-Patch, it will be possible then to modify just a small part
17:26:21 re "No. You're also acting on the nameservers, because that's how we add, delete and modify nameservers - thru the pool"
17:26:21 I see it as, you're acting on the Pool resource - the URL is /pools/<id> - and the HTTP verb and behaviours should reflect that.
17:26:27 But I think having the pool update as PATCH is confusing to the user since the nameservers and attributes act as a PUT
17:26:31 (same applies to RRSets etc..)
17:27:03 betsy: exactly! Using PUT (i.e. replace) when you want to update a pool's description is the same issue, in the reverse direction.
17:27:46 Right, but having it as a PUT method is clearer than having it as a PATCH and then having the nameservers act as a PUT behind the scenes when you're not expecting it
17:27:46 can't you get the previous value and add it to the patch request ?
17:27:49 But - Since we're acting on /pools/<id> - It seems more appropriate for the API call to be acting on the pool with that ID
17:28:38 benwha: we could require the user to supply all values there, but that's still a break from the majority of our other API resources
17:28:39 is there a /pools/<id>/servers resource?
17:28:57 No
17:28:58 rjrjr: no, there isn't
17:29:12 rjrjr: no, we decided against that format after we removed the /records endpoint from recordsets
17:29:33 isn't there a pool attributes one though ?
17:29:37 should there be?
17:29:42 honestly, we would have been better keeping /records imho
17:29:45 yea - The pattern here is the same for Pools (with Servers embedded in it) and RecordSets (with Records embedded in it)
17:30:00 ok
17:30:06 Kiall: no, I wanted to say that when doing a PATCH, can't the previous value be fetched and put into the value we are updating, so not all resources are affected ?
17:30:41 benwha: semantically, that isn't correct, is what is being said.
17:30:42 benwha: ah yes, that's the behaviour of our current PATCH calls, and the behaviour of a PUT on the RecordSets and Pools resource .. Which is where I see the issue.
17:31:44 e.g. PATCH to /zones/<id> with {"description": "Foo"} will not modify the zone name and -
17:31:44 PUT to /zones/<id>/recordsets/<id> with {"description": "Foo"} will not modify the recordset name...
17:32:10 Both of those API calls behave identically, but the HTTP verb is different..
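The "add one nameserver without losing the others" workflow that betsy and benwha describe looks roughly like the sketch below under the key-replacement model, contrasted with the JSON-Patch alternative Kiall mentions. The /v2/pools paths, payload shapes and response format are assumptions for illustration, and JSON-Patch support did not exist in Designate at this point; the standard media type for it, per RFC 6902, is application/json-patch+json.

    # Sketch of the "add one nameserver" workflow debated above; endpoint,
    # ID, and payload/response shapes are hypothetical.
    import requests

    BASE = "http://designate.example.com:9001/v2"   # hypothetical endpoint
    POOL_ID = "pool-uuid"                            # hypothetical ID

    # Key-replacement PATCH (the current behaviour described above): the client
    # must send the complete nameservers list, so it fetches the pool first and
    # appends to what is already there. Any entry omitted here would be deleted.
    pool = requests.get(BASE + "/pools/" + POOL_ID).json()   # response shape assumed
    nameservers = pool["nameservers"] + [{"hostname": "ns3.example.org."}]
    requests.patch(BASE + "/pools/" + POOL_ID,
                   json={"nameservers": nameservers},
                   headers={"Content-Type": "application/json"})

    # JSON-Patch (RFC 6902), hypothetical here since it was not yet implemented:
    # a single "add" operation appends one entry and leaves the rest untouched.
    patch_doc = [{"op": "add", "path": "/nameservers/-",
                  "value": {"hostname": "ns3.example.org."}}]
    requests.patch(BASE + "/pools/" + POOL_ID, json=patch_doc,
                   headers={"Content-Type": "application/json-patch+json"})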
17:33:02 The weirdness is introduced when the key you're updating is a list, and to me, the behaviour is still "replace key <key>, with value <value>"
17:33:28 not "replace the resource with this set of keys+values", as PUT implies
17:33:47 To me the weirdness is having a pool PATCH where the nameservers and attributes without the pool are treated as a PUT within that PATCH
17:34:04 I'd like to see this articulated in some sort of doc personally.
17:34:06 ^within/without
17:34:33 yeah, having a document to discuss would be helpful I think
17:35:13 betsy: Well, if you think of the resource as key/value pairs:
17:35:14 PATCH /pools/<id> {"description": "Foo"} <-- Replace the key "description" with supplied value ("Foo")
17:35:14 PATCH /pools/<id> {"nameservers": ["ns1", "ns2"]} <-- Replace the key "nameservers" with supplied value (["ns1", "ns2"])
17:35:19 If we have json-patch do we still have to have patch?
17:35:49 kiall: And you've deleted any other existing nameservers that you didn't include in that request
17:35:53 Yea, I believe we do.. 99% of APIs treat PATCH or POST /pools/<id> {"description": "Foo"} as Replace the key "description" with supplied value ("Foo")
17:36:24 betsy: correct, but the API call is acting on the Pool, and I've not removed the content of the description field
17:36:39 actually most APIs use PATCH in conjunction with JSON-Patch
17:36:48 apart from Keystone V3 and us
17:36:56 and glance ;)
17:36:59 nope
17:37:14 http://docs.openstack.org/api/openstack-image-service/2.0/content/appendix-b-http-patch-media-types.html
17:37:21 Meaning they're separate operations?
17:37:55 timsim: not sure what you mean?
17:38:13 Also - We're taking quite a while on this one :) We should move on shortly.
17:38:22 Yeah I think there should be a doc^
17:38:35 Meaning you can do a PATCH on something and a json-patch on that same thing?
17:38:47 And they behave differently?
17:39:06 timsim: that's what is being proposed
17:39:14 i think we do need a doc written for this
17:39:25 timsim: correct, by telling the API what kind of request you're sending - the standard for that is the Content-Type: application/json vs application/json+json-patch headers
17:39:33 I can write it if needs be
17:39:34 ok
17:39:34 From what I can gather, if you want to support PATCH properly, it seems like you'd have to have /recordset/id/records and pool/id/nameservers.
17:39:51 but having a point of reference seems like a good idea, to ensure this debate goes somewhere
17:40:00 Unless I'm not following. In which case, doc plz.
17:40:16 mugsie: Okay - We can take an hour later this week/early next to write something?
17:40:25 sounds good
17:40:33 Much appreciated.
17:40:39 (I think mugsie is on the opposite side to me here, so should in theory be balanced!)
17:40:50 Okay - Let's move on :)
17:41:00 #topic Mid Cycle. 3 or 4 days? & Confirmation (graham)
17:41:04 mugsie: over to you..
17:41:24 so, just want to get confirmation. Are we going for a 3 day or 4 day event
17:41:39 we can host either.
17:41:40 and can we get location details, and a hard limit of the number of people?
17:41:51 I want to send out details to the mailing list asap
17:42:04 I'll be 3 days at most, but if people can/want to do 4 - I've no problem leaving the day early..
17:42:04 i'll send out the location details right after this meeting.
17:42:09 and put it on the wiki, so people can send devs if they are interested
17:42:28 I am happy with either - I am flying 8k miles for it ;)
17:42:37 it and other meetings ;)
17:42:41 what do other people think?
17:42:57 vinod / timsim / betsy - any idea what's better for you guys?
17:42:59 timsim: vinod betsy - what do you think you would be allowed to do?
17:43:29 Good question. Joe's not here
17:43:46 … at the moment
17:43:46 I think we'll be able to make it work, whatever people want.
17:44:01 ok what is people's preference then?
17:44:06 (If I have to, I'll drive :P)
17:44:09 :)
17:44:31 We don't get together that often, it seems like we should stretch it as long as possible.
17:44:31 3 is better for me, I have to be in Sunnyvale or Palo Alto early Thursday morning ;)
17:44:33 4 days
17:44:44 :)
17:44:53 We can just bother Kiall on IRC on the last day :P
17:44:57 :)
17:44:58 ;)
17:44:58 btw, do u need me for anything more ? ^ I gotta run :/
17:45:03 We can probably get a lot done in 3 days
17:45:04 let's do 4 days then.
17:45:10 cool
17:45:12 4 it is then
17:45:20 ekarlso-: Not unless you have any input on the last agenda item :)
17:45:26 Event should be four, if people have to leave early, that's cool.
17:45:31 timsim: ++
17:45:31 ekarlso- while you are here - mugsie is there any update on the time change of this meeting?
17:45:32 timsim: ++
17:45:47 oh - yeah.. I will look at that this evening
17:45:47 mugsie: did you get results from the poll?
17:45:58 I have a poll, but have not looked at results
17:45:58 Logging and RPCAPI Strawman proposals (graham)
17:46:02 that one u mean Kiall ?
17:46:05 yeah
17:46:12 #action mugsie to figure out possible schedule change for next meeting.
17:46:25 Anything else on this, or will we move on?
17:46:26 we took the poll a while back.
17:46:32 what I wondered about is how does that play with the embedded central thing ?
17:46:41 rjrjr: yea, sounds like he hasn't checked the results :)
17:46:41 just a thought I got
17:46:44 rjrjr: yeah, i need to get the results
17:46:50 ekarlso-: absolutely no idea
17:46:56 mugsie: :P
17:47:08 Okay - Moving on since it's already happening anyway ;)
17:47:08 #topic Logging and RPCAPI Strawman proposals (graham)
17:47:14 so
17:47:28 #link https://review.openstack.org/142218
17:47:33 #link https://review.openstack.org/142222
17:48:01 are 2 ideas that I had yesterday - if people want to look at them and see if they fit with how we work, please do
17:48:13 I don't think we need to debate them here
17:48:15 Anyone had a chance to look them over yet?
17:48:19 i liked the logging idea. i didn't understand why we want to change RPCAPI.
17:48:27 I thought they looked like nice ideas.
17:48:31 rjrjr: yea, I'm pretty much the same :)
17:48:32 but if people can read them, and comment that would be great
17:48:39 mugsie: I already looked, so far except the embedded central comment I don't have anything on it
17:48:40 I haven't had a chance to look at them yet
17:49:00 they are totally out of left field so please look critically
17:49:05 but really, thnx for meeting but I really gotta run :)
17:49:10 rjrjr: I hated the repetition
17:49:14 cya ekarlso-
17:49:17 ekarlso-: o/
17:49:35 they are not necessarily good ideas ;)
17:49:53 Okay - So if this was a "Please review" item, let's move to Open Discussion while we have 10 mins left :)
17:50:00 #topic Open Discussion
17:50:07 unless anyone has anything ;)
17:50:08 oh
17:50:10 too late
17:50:12 Any other topics from anyone else?
17:50:51 I'm good :)
17:51:03 i'm also good.
17:51:17 Ok - I have one if nobody else does: Any suggestions for ways to improve these meetings? It's been a good while since we had that discussion.
17:51:54 hmm - are you trying to address a particular problem you see or is this just a general question?
17:52:11 We talked about having a google hangout occasionally in place of one of these meetings
17:52:19 betsy: no specific issues from me, just wanted to make sure people are happy with it :)
17:52:27 a hangout every month would help me. i miss hearing your voices. 8^)
17:52:28 I'm always a fan of a little more pre-work; if there's something like the PATCH/PUT to be discussed, I'd love to see it documented and sent around before so everyone is aware of the issue before the meeting starts.
17:52:44 timsim: that is a good idea
17:52:47 I don't know how realistic that is for everyone's busy lives though :P
17:53:00 I think they're basically good. We usually finish on time. A google hangout might be nice occasionally
17:53:03 timsim: Yea, agreed.. It's often hard to find the time though when it's not guaranteed to be necessary :)
17:53:11 Yeah, I get it
17:53:40 I'm fine with the fact that sometimes things are bigger than we think, so then we write a doc and take it offline
17:53:48 vinod: re hangout.. Yea, should we do something like that once a month with a single topic or something?
17:54:03 silence.
17:54:10 kiall: +1
17:54:36 yeah, I think it could tie in with the monthly sprint
17:54:36 Also keeping in mind .. there's lots of new faces starting to show at these meetings :) (benwha is today's example! Welcome benwha :))
17:54:43 As the community grows that may be difficult :P
17:54:51 Ah^
17:55:04 thx all
17:55:28 timsim: yeah, but there are other solutions as well - but we would have to take it as we go
17:55:31 Anyway - I like the idea of getting more "face time", but we should be careful it's inclusive :)
17:55:39 Totally agree ^
17:56:21 mugsie suggested doing it for the monthly topic sprints - I think those are a GREAT way to get new people involved, so, I'm not sure it's the right place to scare people off with a Hangout ;)
17:56:57 Everyone could just fly somewhere once a month, no big.
17:56:58 Which reminds me.
17:56:58 #action kiall to put monthly topic sprints on agenda for next week, and everyone to put ideas into the agenda page before next meet
17:57:08 oh, not for the full sprint, but before or after
17:57:18 timsim: that sounds like a plan ;)
17:57:43 Oh, next Wed. is Christmas Eve. Are we still having the meeting? I know a lot of people (me) will be out
17:57:55 fyi - I will not be around for the next 2 meetings
17:58:03 betsy: are you trying to tell me you don't work christmas evening?
17:58:05 ;)
17:58:13 :D
17:58:21 Yes - I think we can safely cancel next week's meeting. Thanks for the reminder.
17:58:33 31st is New Year's Eve too ;)
17:58:36 i'll be on if needed.
17:58:47 Are we Jan 7th for the next meeting then?
17:58:49 I will most likely not be around for the next 2
17:58:50 best time of the year to catch up on work. :)
17:58:52 and the next one +1 is New Year's Eve I think
17:58:58 so 3 weeks?
17:59:08 Or - Should we have an off-schedule meeting on Fri Jan 5th?
17:59:16 I'm guessing most people are back by then
17:59:20 That works too.
17:59:26 yeah, sounds good
17:59:42 next dec 23 and jan 5?
17:59:52 You mean Mon Jan 5th?
18:00:09 vinod: whoops, yes
18:00:10 Let's take it over to #openstack-dns?
18:00:15 Fri is the 2nd
18:00:21 timsim: it's fine, SlickNik can wait ;)
18:00:22 timsim: Yes, Move to #openstack-dns to figure this out.
18:00:37 * Kiall conveniently forgets to #endmeeting ;)
18:00:40 #endmeeting