19:05:16 #startmeeting marconi
19:05:16 Meeting started Thu Jul 18 19:05:16 2013 UTC. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:05:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:05:19 The meeting name has been set to 'marconi'
19:05:20 Woot
19:05:23 \o/
19:05:24 #topic review actions from last time
19:05:36 #link http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-07-11-19.03.html
19:05:52 1. oz_akan_ and flaper87 to investigate mongo insert performance
19:06:35 kgriffs: It wasn't possible to do that; we'll have to schedule it for next week
19:06:41 ok
19:06:48 #action oz_akan_ and flaper87 to investigate mongo insert performance
19:07:20 2. flaper87 to document client cleanup
19:07:36 kgriffs: I started it here: https://etherpad.openstack.org/apiclient-marconi and folks helped me out
19:07:46 Alessio added a bunch of stuff there
19:07:52 sweet. can we get these turned into bugs/bps?
19:08:13 Yup. I can do that today.
19:08:21 * cppcabrera volunteers for action
19:08:23 imaginary internet bonus points (IIBPs) for anyone who helps!
19:08:36 cppcabrera: I'll help you with that
19:08:50 Awesome, flaper87!
19:08:59 #action cppcabrera, flaper87 to turn apiclient-marconi etherpad into bps/bugs
19:09:21 3. flaper87 to merge patches
19:09:28 well, that's rather specific, isn't it?
19:09:32 :p
19:09:33 Hahaha
19:09:39 wheheheheheh
19:09:41 Common lib patches were merged.
19:09:56 yeah, and we pointed out what needs to be done next
19:10:03 sweet. nice work
19:10:05 I think the input validation framework is still pending? Is that correct?
19:10:14 yes.
19:10:15 yup
19:10:21 4. flaper87, cppcabrera to finalize client design
19:10:43 We certainly rocked out on that item last week. All kinds of brainstorming happened.
19:10:51 +1
19:10:57 indeed +1
19:10:57 still some loose ends there?
19:10:59 #link https://wiki.openstack.org/wiki/Python_Marconi_Client
19:11:03 cppcabrera: good work on that!
19:11:09 #link https://wiki.openstack.org/wiki/Marconi/specs/zmq/api/v1
19:11:21 * flaper87 didn't know that page exists.
19:11:26 glad it does
19:11:28 cppcabrera: +1
19:11:34 there are many thoughts on the input validation, but the author is not in the office.
19:11:49 A few loose ends remain, mostly with regard to adding polling support (drafting that now), and... some I can't think of yet. :P
19:12:28 #action cppcabrera to tie up loose ends re python-marconiclient v1 design
19:12:48 5. zyuan to write up metadata handling proposal and review next time
19:12:52 +1. I'll pick all your brains on that. :D
19:13:04 wrote it, as in the comment
19:13:13 * kgriffs wonders if cppcabrera is secretly a zombie
19:13:17 see card #287
19:13:34 QAQ
19:13:36 ok, let's take that topic now
19:13:45 #topic metadata handling proposal
19:14:03 zyuan: you have the floor. :D
19:14:34 (15 minutes)
19:14:51 (for this topic)
19:15:17 first I want to clarify:
19:15:21 I've discussed this at length already with zyuan, so I'll sit on the sidelines for a bit
19:15:41 the checking is costly.
19:16:22 because the checking is to check "has a user used a _name we already defined"
19:16:56 not just to check whether the user has a _ prefix in their metadata
19:17:29 to distinguish "their" metadata from ours, we have to check all the _ prefixed metadata one by one
19:17:52 1st, it decreases runtime performance
19:18:05 2nd, it makes it hard for us to define new _names
19:18:18 (the list will become longer and longer)
19:18:31 (and can hardly be driver-specific)
19:18:57 I agree on the performance portion. It's O(n), given n is the number of metadata fields defined. The actual check can be against a dictionary of reserved names, O(1). So O(n) * O(1).
19:19:21 TBH, I was thinking about that as forbidding anything starting with _
19:19:30 cppcabrera: trust me, never think O(1) is cheap
19:19:33 that will avoid having to keep a secondary list of reserved words
19:19:49 O(1) is a cost -- when compared with 1
19:20:04 if we don't want to use _ for reserved metadata, we can pick another prefix
19:20:09 flaper87: i considered that.
19:20:15 like C++'s std namespace
19:20:21 we can have our own
19:20:47 Hmm... namespaced metadata... that sounds pretty awesome.
19:20:51 but the core problem is not changed: what if a user extends the std namespace and adds their own stuff?
19:21:12 (some programmers really do open the std namespace....)
19:21:42 No need to check user meta - that can be bundled as "metadata": { "user": { "mine": ... } } and ours can be in "metadata": { "private": { ... } }
19:21:54 What if we used that kind of scheme?
19:21:57 well, the limit is up to us. What about just keeping everything in a separate metadata field that we *don't* return?
19:22:13 flaper87: Roughly what I was thinking.
19:22:23 cppcabrera: :P
19:22:27 then users can't see their configurations
19:22:30 my turn to squeeze your brain
19:22:30 flaper87: we would need a different resource type to support setting it
19:22:32 :D
19:22:36 :)
19:22:47 when i was implementing per-queue grace, i found that we have to return it
19:22:50 kgriffs: that would make acl easier, wouldn't it?
19:23:01 so, either the same resource type with a reserved sub-document name (namespace), or two different resources
19:23:07 btw, just thinking out loud
19:23:41 reserved namespacing +1
19:24:14 as i said, a reserved namespace does not solve the problem intrinsically.
19:24:21 zyuan: pls elaborate
19:24:31 to check or not to check is the 1st problem
19:25:03 how to reserve -- 2nd problem
19:25:30 zyuan: if we don't answer the 2nd we won't be able to answer the 1st, IMHO
19:25:51 let's assume we namespace reserved variables
19:25:57 (for the sake of argument)
19:26:06 how does that effect the 1st problem?
19:26:10 affect
19:26:27 my opinion is, it does not.
19:27:29 if you want to enforce the checking, you need to perform the checking wherever you have "reserved" things.
19:28:24 It does change the check a lot, in my eyes. It changes the check from verifying whether *any* field in metadata is a reserved one to checking whether the PATCH operation contains a document that identifies itself with the key "reserved".
19:28:50 Hmm...
19:29:05 but we want to allow them to PATCH that portion, no?
19:29:13 Right. I got things mixed up. :P
19:29:14 oh, then you need to verify *any* fields in your reserved interface
19:29:25 s/interface/namespace/
19:29:36 I was thinking of the "create metadata" and thought I meant patch, heh.
19:29:53 But yeah, we disallow users from creating new metadata fields in the reserved space.
19:30:01 s/space/namespace
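
[Editor's note: a minimal sketch, in Python, of the two checks being debated above -- scanning user metadata against a list of reserved _names vs. simply forbidding the _ prefix. The reserved names and function names here are hypothetical illustrations of the idea, not Marconi's actual code.]

    # Hypothetical set of reserved, underscore-prefixed metadata names
    # (illustration only; the real names were still being decided).
    RESERVED_NAMES = {'_grace', '_ttl', '_default_message_ttl'}


    def check_against_reserved_list(metadata):
        # O(n) scan over the user's fields; each membership test against
        # the set is O(1), but the reserved list has to grow with every
        # new feature and can hardly be driver-specific.
        for key in metadata:
            if key in RESERVED_NAMES:
                raise ValueError('metadata key "%s" is reserved' % key)


    def check_prefix_only(metadata):
        # Simpler rule: forbid *anything* starting with '_'. No secondary
        # list of reserved words, at the cost of taking the whole '_'
        # prefix away from users.
        for key in metadata:
            if key.startswith('_'):
                raise ValueError('metadata keys may not start with "_"')
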
19:30:02 there is no PATCH op for queues; only PUT
19:30:23 zyuan: Yup, I remembered after I posted my sentence. :)
19:30:35 yes. i think there is a patch for only for claims.
19:30:36 then you have to check the names under the reserved namespace
19:30:51 but, the reserved metadata should be used internally, right?
19:30:55 *ignore first for
19:31:06 so, sounds like namespacing maybe helps psychologically, but not necessarily technically?
19:31:36 kgriffs: yes. it may look clearer, but the problem is still there.
19:31:50 (but to me, i think _ is enough...)
19:32:04 (if your argument is like "who will touch 'metadata'")
19:32:08 I think we should think about a real use case
19:32:14 (for the sake of the argument)
19:32:18 go for it
19:32:20 (i can answer "who will touch _names?")
19:32:22 * cppcabrera wishes we had a whiteboard to think this one out
19:32:32 * kgriffs budgets a few more minutes for this topic
19:32:42 +1, a real use case will help.
19:32:48 +1 flaper87
19:33:07 stand by
19:33:12 example: if we have a field (w/o namespace/prefix)
19:33:17 called "grace"
19:33:23 with prefix
19:33:26 so, when we first talked about reserved metadata, IIRC, we were thinking about having a different custom TTL per queue
19:33:37 I mean, queue-level TTL, IIRC
19:33:42 it's "_grace"
19:33:47 or grace
19:34:03 if you want to do the checking, you need to keep a list of those names
19:34:03 yes, that's a good example - queue-level defaults if not specified per message/claim
19:34:16 and check the global namespace
19:34:24 +1 for context
19:34:29 if we use a reserved namespace instead
19:34:32 so, having a queue-level default means we have to read it every time a message is posted
19:34:35 say, "metadata"
19:34:44 then the global namespace becomes
19:34:48 flaper87: yes, but we could cache I suppose
19:35:00 {"metadata": {"grace": 60}}
19:35:01 which means that it won't be enough to have *just* the queue id when doing that
19:35:02 (memcached/redis colocated on the app server)
19:35:13 kgriffs: right right, just bringing up the worst-case scenario
19:35:24 well, actually, that helps with the context
19:35:32 then you need to open the "metadata" field and keep checking the reserved names....
19:35:33 anyway, let's hold off on that tangent until another time
19:35:33 so, let's say we have this cache system
19:35:54 the problem gets bigger: "metadata" may not be a dict....
19:36:02 what would we put into that cache if we have everything mixed?
19:36:17 we would have to separate *our* metadata from the user's and store it in the cache
19:36:31 so, that makes me think that a separate field for that would make sense
19:37:06 i think it's ok to store all the metadata in the cache
19:37:34 it's actually simpler, and cheap if we have a configurable json parser
19:37:56 i mean, a proposed parser which can stop parsing at some level of the elements
19:38:18 and don't forget, we have a size limit on queue metadata -- as a whole
19:38:28 zyuan: but that requires more memory, right? And we'll have to update it every time the metadata is updated. (re putting everything in the cache)
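
[Editor's note: a rough sketch of the queue-level default + cache idea floated above, assuming hypothetical names (effective_grace, a cache client with get/set, a storage driver exposing get_queue_metadata) and the {"metadata": {"grace": 60}} shape from the discussion. It only illustrates the trade-off being debated, not the Marconi implementation.]

    DEFAULT_GRACE = 60  # fallback when the queue defines no grace default


    def effective_grace(project, queue, cache, storage):
        # Resolve the grace period to apply when a claim is created.
        # Check a co-located cache (e.g., memcached/redis) first so the
        # queue document is not re-read from the store on every post; the
        # entry must be refreshed every time the queue metadata is updated.
        key = 'metadata/%s/%s' % (project, queue)
        metadata = cache.get(key)
        if metadata is None:
            metadata = storage.get_queue_metadata(project, queue)
            cache.set(key, metadata)

        # With a reserved namespace, the default lives under one
        # well-known sub-document, e.g. {"metadata": {"grace": 60}}.
        reserved = metadata.get('metadata', {})
        if not isinstance(reserved, dict):
            # as noted above, "metadata" may not even be a dict
            return DEFAULT_GRACE
        return reserved.get('grace', DEFAULT_GRACE)
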
19:39:09 we have a size limit, so the space should not matter that much
19:39:28 we could also compress with snappy
19:39:45 the thing that matters may be the performance -- with our own namespace, the processing logic becomes more complex
19:39:51 but you'd be counting the reserved metadata size as part of the queue metadata size
19:39:54 but idk if the cache is faster than mongo if you have it centralized
19:39:55 unless we split them
19:40:17 here is how splitting into two resource types may look:
19:40:17 https://etherpad.openstack.org/queuing-scratch
19:40:24 which is not necessary, from my point of view
19:40:29 +1 kgriffs (a whiteboard!)
19:41:20 i like this separation of config from metadata...
19:41:25 meta would have no checks except valid JSON and size
19:41:25 *I
19:41:50 config would have type checks, since we know what they are ahead of time.
19:42:05 I prefer the second one
19:42:22 then what's the queue metadata for? what if a user PUTs garbage into /meta
19:42:25 for our first release, we just don't reference the config/settings/whatever resource in the home document
19:42:46 so, we kick the can down the road and have more time to come up with a good solution
19:42:58 I think we just ignore garbage fields
19:43:15 but still check overall doc size for sanity
19:44:02 Sounds good to me.
19:44:07 +1
19:44:16 zyuan: I'm not saying this solves the fundamental problem of schema checking vs. backwards compat with clients, but it lets us divide and conquer
19:44:51 yes, this looks good.
19:45:01 this is one of the longest 15 minutes :)
19:45:02 soo... what's the plan for the 1st release?
19:45:18 if we go with this, then what happens if you GET /v1/queues/foo
19:45:34 405?
19:46:11 PUT has no content at all?
19:46:19 or do we leave it as-is, and later add /queues/foo/defaults
19:46:54 the original design was based on swift semantics, where they overload metadata headers to do fancy things
19:47:11 I wonder if that is the best approach after all?
19:47:13 x-delete-at, x-delete-after...
19:47:28 /queues/foo/defaults looks too fancy to me....
19:47:51 can we have /queues/foo/meta/foo ?
19:47:56 in the first release?
19:48:08 then we'll add more things when needed
19:48:12 s/meta/metadata/
19:48:14 what's the lat "foo"
19:48:21 s/lat/last
19:48:24 ?
19:48:30 (almost time for a vote)
19:48:45 ah, remove that foo
19:48:54 /queue/foo/metadata/
19:48:54 my option: leave things as-is
19:49:24 if we leave things as they are, we will most likely have to break backward compatibility in our next release
19:49:35 zyuan: as-is, including the reserved prefix?
19:49:35 i don't like to invent needs
19:50:13 kgriffs: we have that in the doc, and we don't care if the guys want to challenge us
19:51:05 does the current design close the door on fancier metadata?
19:51:35 not if we decide to have reserved words (and checks) within the existing metadata
19:51:49 but if we want to split them, then yes
19:52:31 i'm OK with the two-metadata plan; i'm a c++ guy :)
19:53:04 so, vote? as-is, or add a new metadata resource to the queue?
19:53:07 #startvote Should we keep queue metadata API as is (a), create a metadata type (b), or keep as-is, but not reserve a prefix, adding one or more new settings types later? a, b, c
19:53:08 Begin voting on: Should we keep queue metadata API as is (a), create a metadata type (b), or keep as-is, but not reserve a prefix, adding one or more new settings types later? Valid vote options are a, b, c.
19:53:09 Vote using '#vote OPTION'. Only your last vote counts.
19:53:39 #vote a
19:53:47 #vote b
19:53:49 sorry
19:53:51 arrg
19:53:53 :)
19:54:06 fortunately, only the last vote counts. :D
19:54:13 indeed :P
19:54:26 #vote b
19:54:28 #vote b
19:54:32 #vote a
19:54:55 ...
19:54:57 anyone else in the room?
19:55:18 the explanation looks... confusing....
19:55:41 what exactly is (b)?
19:55:54 that is what flaper87 proposed
19:55:57 kgriffs: in the room, but I trust you guys to decide
19:56:04 :D
19:56:29 "/queue/foo/metadata"
19:56:54 then (c) looks not so good.
19:57:14 why does adding new routes in the future conflict with reserving _names now?
19:58:42 zyuan: it is cleaner; if we add _foo, then it's harder to migrate that to a different resource later
19:58:46 #showvote
19:58:47 a (1): zyuan
19:58:48 b (3): cppcabrera, Sriram, flaper87
19:58:58 anyone want to change their vote?
19:59:04 clean... he he
19:59:06 kgriffs: u gotta vote
19:59:09 you have 10 seconds
19:59:15 #vote b
19:59:31 #vote b
19:59:33 * bryansd votes for pop-tarts, too
19:59:37 heh
19:59:42 +1 for pop tarts :D
19:59:48 #endvote
19:59:49 Voted on "Should we keep queue metadata API as is (a), create a metadata type (b), or keep as-is, but not reserve a prefix, adding one or more new settings types later?" Results are
19:59:50 a (1): zyuan
19:59:50 * cppcabrera votes for donuts
19:59:51 b (5): bryansd, kgriffs, cppcabrera, Sriram, flaper87
19:59:53 * flaper87 gives bryansd pop-tarts
20:00:13 ok, so this is obviously a breaking change. Let's get it done ASAP
20:00:14 That's all the time we have, I believe. Any closing comments?
20:00:21 * bryansd is bringing doughnuts tomorrow
20:00:31 so, a quick one about priorities
20:00:49 i'm looking at the board and thinking deeply...
20:00:55 let's finalize the API, and continue to focus on bugs, robustness, and performance
20:01:12 kgriffs: +1 (and the client)
20:01:16 right
20:01:33 focus on the HTTP transport for now; ZMQ is lower priority
20:01:43 +1
20:01:51 we need to finalize the placement service design
20:02:06 +1
20:02:11 oz_akan_: we're planning to discuss it on Monday
20:02:40 since Amit will be out tomorrow
20:02:44 ok, let's talk metadata implementation in #openstack-marconi
20:02:51 rock on
20:02:55 great work everyone
20:02:58 ok
20:03:17 Those bugs are afraid of us
20:03:31 #action oz_akan, amit, flaper87 to finalize placement service strategy
20:03:47 #endmeeting
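
[Editor's note: a sketch of option (b), the metadata/config split that won the vote above, under assumed field names and limits (MAX_DOC_SIZE, CONFIG_SCHEMA). Per the discussion: metadata is checked only for valid JSON and overall size, while config has typed, known fields, garbage fields are ignored, and doc size is still checked for sanity. This is an illustration only, not actual Marconi code.]

    import json

    MAX_DOC_SIZE = 64 * 1024                      # hypothetical overall size limit (bytes)
    CONFIG_SCHEMA = {'grace': int, 'ttl': int}    # known, typed config fields


    def validate_metadata(raw_body):
        # User-facing metadata: only has to be valid JSON and within the limit.
        if len(raw_body) > MAX_DOC_SIZE:
            raise ValueError('metadata document too large')
        return json.loads(raw_body)   # raises ValueError if not valid JSON


    def validate_config(raw_body):
        # Queue config: type-check the fields we know ahead of time, ignore
        # garbage fields, but still check the overall doc size for sanity.
        if len(raw_body) > MAX_DOC_SIZE:
            raise ValueError('config document too large')
        document = json.loads(raw_body)

        config = {}
        for name, expected_type in CONFIG_SCHEMA.items():
            if name in document:
                value = document[name]
                if not isinstance(value, expected_type):
                    raise ValueError('"%s" must be of type %s'
                                     % (name, expected_type.__name__))
                config[name] = value
        return config   # unknown (garbage) fields are silently dropped
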