19:04:57 <kgriffs> #startmeeting marconi
19:04:58 <openstack> Meeting started Thu Mar 21 19:04:57 2013 UTC. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:04:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:05:01 <openstack> The meeting name has been set to 'marconi'
19:05:10 <flaper87> o/
19:05:53 <kgriffs> ok, before we get rolling, any items folks want to add to the agenda?
19:06:21 <kgriffs> https://wiki.openstack.org/wiki/Meetings/Marconi
19:07:39 <kgriffs> #topic sate of the projects
19:07:52 <kgriffs> #topic state of the project
19:08:33 <kgriffs> so, any questions or should I just give a summary?
19:09:09 <flaper87> kgriffs: summary sounds good
19:09:25 <flaper87> some questions might come out of there
19:09:40 <kgriffs> OK, so we have queue management mostly done
19:09:58 <kgriffs> This week we started working on message CRUD
19:10:26 <kgriffs> We are trying to get closure on the v1 API, but I think it's stable enough to implement on
19:10:44 <kgriffs> Doubtless there will be tweaks in the coming months, esp. at the summit
19:11:29 <kgriffs> We are close to getting python-marconiclient on stackforge, etc.
19:11:52 <flaper87> kgriffs: +2 storage drivers in good progress
19:12:05 <kgriffs> (the foundational stuff is in a different repo)
19:12:20 <kgriffs> yes, so we have sqlite and mongo drivers in progress
19:12:38 <kgriffs> sqlite is pretty much for testing, demos, and stuff like that
19:12:47 <flaper87> kgriffs: perhaps we should send an email to openstack-dev requesting reviews for the python-marconiclient
19:13:48 <jdprax> flaper87: Yeah, I just got some additional feedback and a -1. https://review.openstack.org/#/c/24807/
19:14:28 <jdprax> They want me to add pypi configs before merging.
19:14:33 <jdprax> :-/
19:14:52 <jdprax> Doesn't look like a big deal.
19:15:00 <flaper87> jdprax: cool
19:15:37 <kgriffs> A few more things I just wanted to mention real quick FYI. We have 3 core contributors now handling code reviews through Gerrit: Jamie (jdprax), Flavio (flaper87), and myself (Kurt).
19:16:14 <kgriffs> We've also been busy adding blueprints, and updating the wiki.
19:16:40 <kgriffs> Also, our code is now pep8-friendly, and we have a pretty good HACKING file (still needs a few tweaks)
19:16:56 <kgriffs> so, there's never been a better time to become a Marconi contributor (hint, hint)
19:17:18 <kgriffs> Our first big milestone is a demo at the Portland summit
19:17:41 <kgriffs> (our talk was accepted)
19:17:54 <flaper87> w000000t
19:18:02 <kgriffs> so, if you have any cool ideas for a demo give us a shout out in #openstack-marconi
19:18:20 <kgriffs> OK, any questions before we move on?
19:19:23 * kgriffs listens to crickets chirping
19:19:33 <kgriffs> #topic API - Maximum queues/page
19:20:01 <kgriffs> OK, so as the blueprints are currently written, Marconi limits queue metadata to 4 KiB.
19:21:17 <kgriffs> So, if we allow clients to list 100 queues at a time, that's not a ton of data - should be doable even if you buffer in memory, although we'll likely code such that everything is streamed.
19:21:46 <kgriffs> Questions/concerns? https://wiki.openstack.org/wiki/Marconi/specs/api/v1#List_Queues
19:22:09 <flaper87> sounds good to me!
19:22:57 <jdprax> +1
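[Illustration, not part of the meeting log: a minimal Python sketch of how a client might page through the queue listing just discussed, 100 queues at a time. The endpoint URL, the limit/marker parameters, and the response shape are assumptions drawn from the linked List Queues spec, not confirmed by this discussion.]

    import requests

    MARCONI = "http://localhost:8888/v1"  # assumed endpoint, for illustration only

    def iter_queues(page_size=100):
        """Walk the queue listing, up to 100 queues per request (the limit discussed above)."""
        marker = None
        while True:
            params = {"limit": page_size}
            if marker:
                params["marker"] = marker  # assumed: resume listing after the last queue seen
            resp = requests.get(MARCONI + "/queues", params=params)
            if resp.status_code == 204:    # assumed: no (more) queues to list
                return
            queues = resp.json().get("queues", [])
            if not queues:
                return
            for q in queues:
                yield q                    # assumed item shape: {"name": ..., "metadata": {...}} with metadata <= 4 KiB
            marker = queues[-1]["name"]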
19:23:01 <kgriffs> irc://card.freenode.net:7000/#topic API - Maximum queues/page
19:23:36 <kgriffs> same deal. 4 KiB per message body, 100 messages per request returned
19:23:53 <kgriffs> #topic API - Maximum queues/page
19:24:55 <kgriffs> OK, so I'll take that as a "+1"
19:25:24 <kgriffs> :D
19:25:30 <flaper87> :D
19:25:41 <jdprax> Anyone have a good ascii art cricket?
19:25:51 <kgriffs> #topic API - Claim Messages
19:26:07 <kgriffs> https://wiki.openstack.org/wiki/Marconi/specs/api/v1#Claim_Messages
19:26:38 <kgriffs> so, it would be cool if everyone could read over that real quick and share your thoughts
19:29:16 <kgriffs> any concerns there?
19:29:37 <flaper87> kgriffs: just re-read it and seems ok so far
19:29:50 <bryansd-home> The WARNING under Claim Messages, can you explain that a little more?
19:29:53 <flaper87> I'm still a bit worried about the ttl thing
19:30:03 <kgriffs> sure
19:30:13 <bryansd-home> I don't see how you can ever know that the TTL of the claim will be less than that of the messages
19:30:43 <kgriffs> typically your application would agree on something reasonable.
19:30:58 <kgriffs> So producers would always set ttl of messages to X or at minimum X
19:31:29 <flaper87> kgriffs: so, here's an idea
19:31:40 <kgriffs> and workers would set claims to some percentage of X, depending on average time to process a given message
19:32:05 <kgriffs> typically you'd just set the ttl for messages to something like 5 days and the ttl for a claim to 5 minutes
19:32:31 <kgriffs> flaper87: shoot
19:34:52 <flaper87> since clients don't know, as bryansd-home also pointed out, what the reasonable ttl would be and we won't be able to "make sure" that messages being claimed won't expire before the claim expires, what if we don't make claimed messages expire until the claim expires as well
19:35:32 <flaper87> so, if the message would expire before the claim expires we would extend the message expiration / ttl 'til the claim expires
19:35:54 <jdprax> Sounds reasonable to me.
19:36:00 <bryansd-home> +1
19:36:06 <kgriffs> so, adjust the claim ttl to be min(requested_ttl, max(claimed_messages_ttl))
19:36:14 <kgriffs> sory
19:36:16 <kgriffs> max
19:37:22 <flaper87> kind of. So, if a message expires in 30s and it was claimed and the claim ttl is 60, the message's ttl would be 60s
19:37:32 <flaper87> we give it a 30s life coupon
19:37:35 <flaper87> :)
19:37:58 <kgriffs> oic
19:37:59 <kgriffs> sure
19:38:07 <kgriffs> +1
19:38:17 <flaper87> cool, place an agreed there
19:38:19 <flaper87> :P
19:38:27 <bryansd-home> I like this. Then a consumer can have some confidence it'll have enough processing time to get through the message.
19:38:36 <jdprax> Yep.
19:38:45 <flaper87> bryansd-home: correct, where processing time == claim ttl
19:38:46 <bryansd-home> Whereas before it may "claim" a message with only 1s ttl... chaos
19:38:53 <flaper87> bryansd-home: lol
19:39:40 <kgriffs> #agreed claim ttls should be adjusted for 30s past the largest age of the messages at claim time
19:39:49 <kgriffs> #note use age of messages not ttl
19:40:17 <flaper87> no wait, that's not the idea
19:40:19 <flaper87> :P
19:40:24 <bryansd-home> kgriffs: I think we're talking about extending the age of messages, not claims
19:40:31 <flaper87> bryansd-home: correct
19:40:39 <kgriffs> oh, oops. got it backwards
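[Illustration, not part of the meeting log: a minimal Python sketch of the extension rule as just clarified above (the message's lifetime is extended, not the claim's), so a claimed message cannot expire while its claim is still active. Function and argument names are made up for illustration; this is not the Marconi implementation.]

    def ttl_after_claim(msg_ttl, msg_age, claim_ttl):
        """Return the message's new TTL once it has been claimed."""
        remaining = msg_ttl - msg_age
        if remaining < claim_ttl:
            # extend the message so that it expires when the claim does
            return msg_age + claim_ttl
        return msg_ttl

    # flaper87's example: a message with 30s of life left, claimed for 60s,
    # now lives another 60s -- the "30s life coupon".
    assert ttl_after_claim(msg_ttl=300, msg_age=270, claim_ttl=60) == 330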
19:41:42 <kgriffs> so, extend age of messages to 30s past expiration of the claim?
19:41:53 <kgriffs> (if needed)
19:42:09 <kgriffs> actually, seems like you want more fudge room than that
19:42:09 <flaper87> if (msg_ttl - msg_age) < claims_ttl: msg_ttl += claim_ttl - msg_ttl
19:43:02 <kgriffs> well, consider that if workers poll once every, say, 5 minutes for work
19:43:10 <kgriffs> but a worker dies.
19:43:24 <kgriffs> the claim must expire and the messages still need to live long enough for another worker to pick them up
19:43:53 <flaper87> kgriffs: nope, because they would expire with the claim (if their life was extended)
19:44:08 <kgriffs> but I don't want them to expire with the claim
19:44:28 <flaper87> kgriffs: they would have expire anyway
19:44:38 <flaper87> expired*
19:44:40 <kgriffs> isn't the whole point of having a ttl on claims so that if a worker dies another can pick up the slack?
19:45:29 <flaper87> kgriffs: sure
19:46:03 <flaper87> so, the suggestion is:
19:47:14 <flaper87> Extend messages' expiration time to the claim's expiration time for messages that would expire before the claim does
19:47:28 <bryansd-home> flaper87: +1
19:47:30 <kgriffs> no fudge factor?
19:47:52 <kgriffs> so, clients would still be responsible for doing that themselves if they need it
19:48:14 <flaper87> kgriffs: clients would set claim's ttl
19:49:12 <kgriffs> yes, but if a worker client dies and you want a different worker to pick up the unprocessed messages, those messages will need long enough TTLs that they don't expire before another worker picks them up and claims them, thereby extending their lifetimes again.
19:50:32 <kgriffs> so, setting messages to expire along with a claim only solves the issue of them expiring while a claim is active, not what happens when an agent crashes.
19:51:08 <flaper87> kgriffs: yes but consider that if those messages wouldn't have been claimed they would have expired anyway
19:51:39 <kgriffs> I agree that it is useful to do, and avoids surprises.
19:51:58 <kgriffs> My point is just that the latter problem needs to be addressed somehow as well.
19:52:34 <kgriffs> basically you need some grace period
19:52:44 <kgriffs> to ensure messages aren't dropped
19:52:57 <kgriffs> I wonder if a claim could have a "grace" field?
19:53:18 <bryansd-home> Or should message TTLs be renewed when they get claimed?
19:53:18 <kgriffs> then the formula would become:
19:53:32 <flaper87> kgriffs: what I don't understand of your concern is: Why should a claimed message have a grace period and a not claimed message shouldn't ?
19:53:47 <kgriffs> claims_ttl: msg_ttl += claim_ttl - msg_ttl + grace
19:54:13 <flaper87> kgriffs: I'm ok with the "after_claim_grace"
19:54:57 <flaper87> kgriffs: well, you are right actually. the claimed messages deserve a grace period because they were held from being processed by the stupid, suicidal worker
19:55:11 <bryansd-home> hah!
19:55:16 <flaper87> :)
19:55:32 <kgriffs> so everyone was right!
19:55:33 <flaper87> so, +1 for extending and +1 for grace_post_claim
19:56:24 <kgriffs> #agreed Extend lifetime of claimed messages to be at least as long as the lifetime of the claim itself, plus a specified grace period to deal with crashed workers
19:56:30 <kgriffs> <phew>
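[Illustration, not part of the meeting log: a minimal Python sketch of the rule captured in the #agreed above. A claimed message must outlive the claim itself plus a grace period, so that if the worker holding the claim crashes, another worker still has time to re-claim the message. The grace value is assumed to arrive on the claim request as the "grace" field kgriffs wonders about; all names here are illustrative only.]

    def ttl_after_claim_with_grace(msg_ttl, msg_age, claim_ttl, grace):
        """Return the message's new TTL once claimed, per the agreed rule."""
        remaining = msg_ttl - msg_age
        if remaining < claim_ttl + grace:
            # message must live until the claim's expiration, plus the grace period
            return msg_age + claim_ttl + grace
        return msg_ttl

    # e.g. a message with 30s left, claimed for 60s with a 60s grace period,
    # is extended to live another 120s so a second worker can pick it up.
    assert ttl_after_claim_with_grace(msg_ttl=300, msg_age=270, claim_ttl=60, grace=60) == 390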
19:56:46 <kgriffs> OK folks.
19:56:47 <flaper87> w00000t
19:56:58 <kgriffs> we have 4 mins
19:57:03 <kgriffs> open discussion
19:57:27 <flaper87> o/
19:58:29 <flaper87> so, we created now a new milestone series (havana) and a new milestone (havana-1). Marconi is not part of core projects so we're not expected to follow upstream release cycles but it seems to me that it would be good to start "fitting" into those release cycles starting now
19:58:47 <flaper87> so that we're a step ahead when the time will come for marconi to be evaluated
19:58:55 <flaper87> as a project, organization and blah blah blah
19:59:21 <kgriffs> +1
19:59:32 <flaper87> there are some upstream milestones we're following (havana-1) and some marconi's milestones (trello) and I think they both can work together
19:59:42 <flaper87> actually, they would make everything better
20:00:12 <flaper87> so, for anyone with new ideas, proposals or whatsoever, pls, write a blueprint and lets set a milestone to that blueprint based on the workload
20:00:14 <kgriffs> I guess we could make the summit demo "grizzly-1"
20:00:19 <flaper87> (which is huge, btw)
20:00:40 <kgriffs> then everything after that would be havana milestones
20:01:19 <ametts-atl> Want to propose Marconi sessions at summit.openstack.org, but there doesn't seem to be a suitable option in the "Topics" drop-down. (https://wiki.openstack.org/wiki/Marconi)
20:01:49 <ametts-atl> Anyone know who can help us get an option added to the drop-down? Or at least an "Incubated Projects" option?
20:01:57 <flaper87> also, I think it'd be good to get as much feedback as possible from other people
20:02:09 <flaper87> ametts-atl: mmh, ttx ?
20:02:19 <flaper87> ametts-atl: Thierry
20:02:35 <ametts-atl> Thierry == ttx?
20:02:38 <flaper87> ametts-atl: yep
20:02:45 <flaper87> you'll find him in #openstack-dev
20:03:03 <flaper87> ametts-atl: at least he can point us in the right direction
20:03:18 <ametts-atl> Yeah, I just posted there. No response yet. Maybe I'll try an old fashioned email.
20:03:29 <flaper87> ametts-atl: sounds better
20:03:39 <kgriffs> OK folks, anything else?
20:03:48 <flaper87> ametts-atl: he's based in France, IIRC, so most likely he's doing something else now
20:03:57 <flaper87> kgriffs: not from me! Great meeting folks
20:04:06 <ametts-atl> But you're in Italia, no?!
20:04:19 <flaper87> ametts-atl: yep, but I'm lifeless and workaholic :P
20:04:23 <ametts-atl> :)
20:04:45 <kgriffs> OK, so next community meeting is 4 April
20:05:17 <kgriffs> in the meantime, folks can catch us in #openstack-marconi
20:05:23 <kgriffs> #endmeeting