19:06:34 #startmeeting marconi
19:06:35 Meeting started Thu Feb 21 19:06:34 2013 UTC. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:06:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:06:38 The meeting name has been set to 'marconi'
19:07:12 #note Trello board - https://trello.com/board/openstack-marconi/511403287d138cd6200078e0
19:07:32 #topic Dev kickoff
19:08:48 So, first I just wanted to mention that earlier today I met with several Rackspace devs in Atlanta, as well as Flavio from Red Hat (via G+ Hangout), to kick off Marconi development.
19:09:08 So, the Trello board will be spruced up and populated with a bunch more stuff in the very near future.
19:09:16 +1
19:10:18 nice
19:10:21 Flavio is going to see if he can sweet-talk some of his fellow Red Hat devs into contributing at least part time.
19:10:50 Is there a github repository available at this time? Initial commit, perhaps?
19:10:51 It sounds like he (Flavio/flaper87) will be able to contribute mostly full-time to the project, at least to help us get it off the ground.
19:10:58 Yes.
19:11:18 Scaffolding is there.
19:11:18 https://github.com/stackforge/marconi
19:11:28 I'll add the link to the Trello board description.
19:11:40 No real code yet, but that will be arriving in the next few days.
19:12:23 kgriffs: let me know when I can put you on the openstack-atlanta meetup talk schedule ;-)
19:12:39 Heh. :D
19:12:48 I definitely need to get involved.
19:13:58 #action kgriffs to stop being lazy and go to openstack-atlanta meetups.
19:14:08 haha
19:15:19 OK, so the plan for development is to track tasks in Trello. Later on we will probably leverage launchpad, at least for bugs.
19:15:24 kgriffs: there are several dreamhost devs here in atl, too, and we have an interest in marconi, so keep us in the loop
19:15:32 hey, sweet.
19:15:35 I may be able to commit some resources during H
19:16:40 note: next OpenStack ATL meetup is tonight - Quantum is the focus.
19:17:21 cppcabrera: mark is tweaking his presentation now :-)
19:17:48 Any DreamHost devs close to Rackspace's Atlanta office near Gwinnett Arena? Y'all should stop by for lunch or something.
19:18:24 ametts-atl: we meet in Decatur, but could meet somewhere halfway
19:19:20 let's arrange something offline; I didn't mean to derail the meeting
19:19:48 no worries. Will do
19:20:23 #action ametts-atl, dhellmann, kgriffs to arrange Metro ATL meetup
19:20:53 OK, one more thing on the topic of code
19:21:21 (a) We have an aggressive timeline
19:21:51 (b) We can't compromise on quality
19:22:10 So, v1 needs to be simple, with great test coverage
19:22:33 re the timeline, we need to have something demo-able (not feature complete) for the summit.
19:22:46 ...but
19:22:49 We need to define a CONTRIBUTORS file in the github. That'll help unify initial development.
19:23:17 good idea
19:23:44 #action kgriffs to add committed contributors to CONTRIBUTORS file
19:24:02 So, I need to redo the milestones on launchpad to reflect this
19:24:04 Maybe a HACKING file, too. (style conventions, etc.)
19:24:15 #action kgriffs to update milestones
19:24:25 So, basically, for milestones we have:
19:24:35 How about (c) simple start should lay an extensible foundation for everything else to come later on
19:25:09 yes
19:25:55 we need a good foundation that is flexible enough to accommodate the stuff on the "future features" list on the spec (or at least a good number of them)
19:26:32 See also: https://wiki.openstack.org/wiki/Marconi/specs/grizzly
19:26:57 But it also needs to be functional enough that it can be used by real apps and services.
19:27:41 #topic Milestones
19:28:35 m1 - Demo at Portland Summit (Ready to go by the prev. Thursday, April 11)
19:28:50 #note m1 - Demo at Portland Summit (Ready to go by the prev. Thursday, April 11)
19:29:20 #note Also hoping to conduct some feedback sessions on the API and architecture during the summit
19:29:46 #link http://www.openstack.org/summit/portland-2013/vote-for-speakers/presentation/647
19:30:21 We need everyone to vote for this session -- so we'll actually have a forum to conduct the demo!
19:30:28 +1
19:30:49 #note m2 - Feature Complete by mid-May (May 13)
19:31:09 #note m3 - Production Ready in the summer (June/July)
19:32:47 Queuing is a big gaping whole in the OpenStack portfolio, so we need to ship something to the community in the next 6 months.
19:32:59 ametts-atl: there are also open space rooms
19:33:07 That will then set us up nicely for the fall summit, when we can discuss the next batch of features
19:33:32 dhellmann: Yeah, but vote for this session anyway. This will be our chance to really start building Marconi momentum in OpenStack. :)
19:33:35 whole => hole. :p
19:33:40 ametts-atl: of course! :-)
19:34:00 OK, so anything else for general discussion before we move on to API?
19:35:55 #topic API - Claim Messages
19:36:17 #note Etherpad for API draft - https://etherpad.openstack.org/queuing-api
19:36:30 Scroll down to "Claim Messages"
19:36:56 (about 3/4 of the way to the bottom)
19:37:17 Ctrl + F works well, too.
19:37:23 So, the original proposal is still there, but I added (in green)
19:37:27 heh
19:37:51 Sometimes Etherpad doesn't like Ctrl + F, but if it works, awesome.
19:38:01 anyway...
19:38:48 Highlighted in green, you can see the proposed change to using POST and defining a "claims" resource.
19:39:05 +1
19:40:54 OK, so any major objections to going with this approach? I like it better than trying to PATCH or POST against the "messages" collection - seems more natural and RESTy
19:42:04 Is the document structure OK as-is?
19:42:13 (shown in the draft as JSON)
19:42:40 LGTM
19:42:49 one comment on renewal:
19:43:00 I'm not sure why a separate claim id couldn't be generated for each message
19:43:00 OK, let's move on to that
19:43:27 (that's listed as a downside in the new section on the POST call)
19:43:35 It could be.
19:43:58 using a separate claim id for each message, or having the claim id and message id be a 2-part key, would allow renewing a claim on a single message at a time
19:44:34 actually, I like the latter. patch the claim with a timeout and optionally include a list of the messages you want? the others are not renewed.
19:44:39 So, if we did that, it would be a side-effect of creating a claim
19:44:42 or is that too complicated?
19:45:04 side-effect?
19:45:04 may be too complicated
19:45:17 ok, keep it simple this time around
19:47:02 well, I was just thinking that workers can always claim smaller batches of messages in order to mitigate the problem of a big block getting held up for too long.
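An aside for readers: the claim flow being proposed might look something like the following from a client's point of view, sketched here in Python with the requests library. The endpoint, queue name, limit parameter, and ttl field are illustrative assumptions, not the final etherpad draft.

    # Hypothetical client for the proposed "claims" resource. The endpoint,
    # queue name, and field names below are assumptions for illustration.
    import requests

    BASE = "http://localhost:8888/v1"  # assumed Marconi endpoint
    queue = "fizbit"                   # assumed queue name

    # POST to the claims collection to claim a batch of messages in one shot.
    resp = requests.post(
        "%s/queues/%s/claims" % (BASE, queue),
        params={"limit": 10},  # assumed knob: claim at most 10 messages
        json={"ttl": 30},      # claim expires 30 s after creation/renewal
    )
    resp.raise_for_status()

    # Assumed server behavior: 201 Created, with the new claim's URI in the
    # Location header and the claimed messages in the body.
    claim_href = resp.headers["Location"]
    messages = resp.json()

Claiming in small-ish batches, as suggested above, just means picking a small limit value.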
19:48:27 kgriffs: true
19:48:39 A claim could include separate IDs for each message, allowing them to be renewed independently, but maybe it's not worth the complexity.
19:49:22 Is there any reason to un-claim a message before the timeout expires?
19:49:58 As in "I took 100 messages, but I'm not compatible with request #44, so I'm un-claiming it".
19:50:48 ametts-atl: I would think the client would have a way to subscribe so it only saw messages it should (something like amqp topics?)
19:51:23 Yeah, that could be handled by using multiple queues and/or tags (the latter is proposed as a post-v1.0 feature)
19:51:45 Ok - I don't disagree. Just throwing it out there in case it was a rationale for separate IDs?
19:52:48 ametts-atl: a better case is, I claimed 100 but am being told to shut down, so I want to release my claim on the ones I haven't finished with
19:52:50 If you really needed to do something like that, you could skip the message, process and DELETE all the ones you can, and at the end release the claim so the skipped messages go back into the pool
19:53:12 kgriffs: that sounds like it takes care of the use case I just described
19:53:55 yeah
19:54:10 I just added a note to etherpad. I think it gets us most of the way there.
19:55:37 So, that strategy, combined with claiming messages in small-ish batches, should alleviate the need to renew claims on a per-message basis.
19:55:54 yes
19:55:58 It's also nice, because I probably want to do batch renewals regardless
19:56:32 OK, well, let's run with that and see what kind of trouble it gets us into. :D
19:56:50 Agreed regarding batching. A message at a time is probably too chatty. :P
19:57:18 Anything else before we move on?
19:58:21 I think releasing and querying claims are pretty straightforward
19:58:31 I'd like to touch on renewing a claim, though.
19:59:02 #topic API - Renew a Claim
19:59:58 So, personally, I'm leaning towards using PUT
20:01:39 Hmmm... would that imply a worst-case 64KB PUT?
20:02:06 no, you are just setting the ttl for the claim
20:02:26 Ah, I see.
20:02:44 +1 for PUT. Looks simpler.
20:03:55 However, it isn't quite right, because PUT can imply that you are creating the claim, when really we want to update an existing claim. I guess we could allow the client to generate the claim ID and just get rid of the separate POST method
20:04:10 (…imply creating OR updating)
20:04:33 kgriffs: I think the server really ought to generate the claim id
20:04:41 PATCH is fun, but also strange, because you are resetting age, even though age is a dynamic value based on the diff between create time and server time
20:04:42 otherwise you have to deal with duplicates
20:05:16 #agreed server should generate IDs
20:05:46 why isn't post the right verb?
20:05:50 post a new age to the claim
20:06:20 or not age, but timeout?
20:08:11 I think you could make the same argument against PATCH, and against PUT as well
20:08:21 I know that it is common to use POST in APIs when PATCH is really what is meant - json-patch is an attempt to clear that up
20:08:58 Renew claim sounds to me like a modifying operation, which feels closest to the semantics of PUT, imho.
20:09:27 kgriffs: patch makes sense, too; I thought we were looking for an alternative
20:09:39 patch is saying "change this attribute you're storing to this value"
20:09:56 post is saying "update the attribute based on some algorithm and this input I'm providing"
20:10:13 IOW, patch seems like setattr() vs. a method call
20:10:20 (where the algorithm is not uniform across services)
20:10:25 kgriffs: right
20:10:58 dhellmann: I can definitely see patch as setattr. Nice comparison.
20:11:00 so in a patch the client controls the new value precisely, but in a post the server is in control
20:11:12 which is the case for updating the ttl?
20:11:28 well, it isn't updating the ttl
20:11:41 that stays at 30 seconds, say
20:11:45 I guess you could change it
20:11:50 actually, you probably should be able to
20:11:52 sorry, not ttl, but expiration
20:12:15 does the client get to pick a precise time, or does it ask for an extension of a certain length, or just an extension without specifying a length?
20:12:22 I think we need a way to update ttl and expiration
20:12:33 good question
20:12:40 kgriffs: ttl feels like a property of the queue, rather than the claim, no?
20:12:55 hmmm.
20:13:40 Well, I kinda like being able to reset the expiration and set a different TTL if it's taking me a while to process a certain batch for some reason.
20:13:41 do claims have ttls or just expirations? (and if both, what's the difference?)
20:14:02 so, ttl is like "30 seconds from the time the claim was created or last renewed"
20:14:10 what happens at that time?
20:14:15 the claim is expired?
20:14:26 the claim expires and is automatically released
20:14:29 ok
20:14:38 and an expiration is the exact time at which that should occur?
20:15:04 or is it all controlled via the ttl?
20:15:05 maybe not exact, but within, say, a few seconds.
20:15:15 well, yeah, but it's a timestamp rather than a # of seconds
20:15:32 So, I imagine in the backend there will be some kind of garbage collector that runs periodically and removes expired claims
20:15:51 so, a claim has a "create" or "renewed at" timestamp, plus a ttl
20:15:58 so, ts + ttl = expiration time
20:16:02 got it
20:16:26 so the ttl is the life of the claim, and the expiration time is updated when the renewal comes in by taking the current time and adding the ttl
20:17:11 so it feels like you want to be able to change the ttl value at the same time as doing a renewal, but it doesn't make sense to change the ttl without doing a renewal
20:17:26 right, I think that makes sense
20:17:53 that says to me ttl may be an optional argument to the renewal call, rather than a patch of the claim itself
20:18:04 I like PUT because it says, "overwrite this existing claim with a new one and this new ttl"
20:18:27 doesn't the claim also include the associated messages, though?
20:18:46 Well, that's where things get gray
20:18:49 you aren't replacing those in the PUT, so you're not providing a whole new version of the claim
20:19:08 Conceptually, a claim isn't a container or parent of messages
20:19:22 hmm
20:19:36 well, at least, that is the way we would have to spin it
20:19:43 normalized
20:19:59 the messages aren't in the queue at that point any more, so if they don't live in the claim, where do they live?
20:20:00 so, a claim is a ticket with a list of message IDs written on it
20:20:22 ok
20:20:32 they are in the queue; in fact, if an observer wanted to GET them, it could.
20:20:39 ah, ok, that wasn't clear
20:21:19 the idea is that in a work-queuing scenario, all the clients are well-behaved and get their messages only by creating claims, and always include claim IDs when deleting messages.
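The well-behaved worker pattern just described might look like the sketch below, continuing the hypothetical client from the earlier snippet. The per-message id field, the claim_id query parameter, and the two helper functions are assumptions; only the overall pattern (claim, process, DELETE with claim ID, release) comes from the discussion.

    # Continues the hypothetical client above (BASE, queue, messages,
    # claim_href); all names here are assumptions for illustration.
    def can_handle(msg):
        return True  # hypothetical application predicate

    def process(msg):
        pass         # hypothetical application logic

    claim_id = claim_href.rsplit("/", 1)[-1]  # assumed last path segment

    for msg in messages:
        if not can_handle(msg):
            continue  # skip it; it returns to the pool when the claim is released

        process(msg)

        # Well-behaved workers always include the claim ID when deleting.
        requests.delete(
            "%s/queues/%s/messages/%s" % (BASE, queue, msg["id"]),
            params={"claim_id": claim_id},
        ).raise_for_status()

    # Release the claim so any skipped messages go back into the pool.
    requests.delete(claim_href).raise_for_status()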
20:21:43 it's more of a unified concept than the disjoint one you have with SQS and SNS
20:21:56 ok
20:22:25 The nice thing is that it allows you to audit/debug what is going on by running one or more observers.
20:22:27 so if you PUT a claim to renew it, does that imply you can update the message id list for the claim at the same time?
20:22:41 kgriffs: sure, that makes sense
20:23:37 dhellmann: good point. Maybe PATCH is better after all, because it is more specific?
20:23:47 actually, wait a sec.
20:23:52 that's an interesting thought
20:24:38 if I could update the list of messages, then I could release the ones that I want to skip.
20:24:51 I still feel like this renewal is a POST, since the client isn't actually setting the value of the expiration time, just asking to have it updated.
20:24:59 kgriffs: yeah, you just rejected that idea a few minutes ago :-)
20:25:16 :p
20:25:45 only as a complex api, so I took that to mean maybe in a later version
20:27:45 I'm out, guys. I have another metting in 3 minutes. TTL expired. ;)
20:27:50 heh
20:27:53 heh
20:27:56 *meeting
20:28:00 no worries
20:28:04 we are running long this time
20:29:12 #note json-patch supports removing specific fields
20:30:05 PUT is like all-or-nothing, right? Basically, the client gives the entire entity that should replace the existing one?
20:30:14 kgriffs: yes, that's my understanding
20:30:31 Mine too.
20:30:32 #agreed
20:31:58 So, you would have to give the entire list of associated messages.
20:32:06 kgriffs: right
20:32:09 I guess it would have to be just their IDs, to reduce traffic
20:32:17 true
20:33:09 but then, that doesn't match what the result of a POST gives back, which is the full messages. We might change that to just a list of IDs, and then the client would have to request each message body in separate requests.
20:33:14 Not exactly efficient.
20:33:47 yeah, this is why PUT for renewal feels wrong :-)
20:35:26 OK, let's look at PATCH for a minute
20:35:57 Say that a claim entity is represented something like this (I'll put it into etherpad)
20:36:07 I'm liking this json-patch draft. Looks like the "remove" operation is great for removing messages I want to release my claim on, and the "replace" operation would be great for resetting a claim's age to 0.
20:36:54 you could also optionally replace the ttl
20:37:20 {
20:37:20   "messages": {
20:37:20     "message-id-foo": {
20:37:20       ...
20:37:20     }
20:37:20   }
20:37:21 }
20:37:40 so, if you remove "/messages/message-id-foo", that would do the trick
20:37:50 exactly
20:38:09 if messages is a list, I don't think the remove operation works
20:38:24 * kgriffs is pulling up the json-patch spec
20:38:40 using a hash to return the messages in the claim will lose the ordering, won't it?
20:39:01 ah, that is a good point
20:39:31 I suppose you could still order them (roughly) by timestamp
20:40:40 making the client order them seems... rude?
20:40:41 :-)
20:41:22 yes, although I don't think we are going to support guaranteed ordering anyway, just that you won't get duplicates.
20:41:28 ah, ok
20:41:33 hmm
20:41:39 why not ordering?
20:42:17 it's not really a queue if order isn't ensured, is it?
20:42:45 Still under discussion… it could be done, but it adds complexity in terms of how to scale out individual queues.
20:42:55 we're at almost 1:45; should we carry over to next week?
20:42:56 well, SQS doesn't guarantee it
20:43:04 yeah, I think we will have to
20:43:29 this was a good discussion, thanks!
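Before the wrap-up, here is how the renewal PATCH just discussed might be stitched together, assuming the example claim document above and continuing claim_href from the earlier sketch. The op/path syntax follows the json-patch draft; the media type and all values are illustrative.

    # Hypothetical renewal via json-patch, mirroring the discussion:
    # "replace" resets age (a renewal) and optionally changes the ttl;
    # "remove" releases a single message from the claim.
    import json

    patch = [
        {"op": "replace", "path": "/age", "value": 0},         # renew the claim
        {"op": "replace", "path": "/ttl", "value": 60},        # optional new ttl
        {"op": "remove", "path": "/messages/message-id-foo"},  # release one message
    ]

    requests.patch(
        claim_href,
        data=json.dumps(patch),
        headers={"Content-Type": "application/json-patch+json"},
    ).raise_for_status()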
20:44:00 thanks everyone for your time
20:44:42 I think once we figure out claims, the remaining bits of the API draft will be pretty smooth sailing
20:45:29 +1
20:45:47 cool
20:46:43 OK, so everyone go sleep on that and think about whether we need guaranteed ordering, or if best-effort is good enough (where most of the time things are ordered)
20:47:13 :-)
20:47:25 I will say that with guaranteed ordering you can't scale a single queue horizontally, but maybe that's OK.
20:47:34 cheers
20:47:42 #endmeeting