15:01:01 #startmeeting Marconi
15:01:02 Meeting started Tue Feb 11 15:01:01 2014 UTC and is due to finish in 60 minutes. The chair is flaper87. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:06 The meeting name has been set to 'marconi'
15:01:11 alcabrera: do you have the agenda?
15:01:15 :D
15:01:20 yes I do. :)
15:01:32 #link https://wiki.openstack.org/wiki/Meetings/Marconi#Agenda
15:01:41 very packed - we'll have much discussion today
15:01:55 #info flaper87 could get fuel out of his coffee... It's soooo deeply dark
15:02:01 lol
15:02:11 that will be the highlight of today's meeting
15:02:21 lets get started
15:02:24 #topic summit
15:02:37 yayyy ATL
15:02:39 so, let's submit a talk(s)
15:02:41 so, the idea would be to do a joint talk for the summit as we did last year
15:02:45 #link https://etherpad.openstack.org/p/marconi-icehouse-talk
15:02:53 we've this week to submit the proposal
15:03:12 the idea would be not to do an introductory talk because we expect folks to be already familiar with Marconi
15:03:21 (at least those attending to the summit)
15:03:30 flaper87: +1
15:03:32 +1
15:03:37 what we would like to do is present Marconi for realz
15:03:48 Real examples, deployments and code. That is.
15:03:55 We've got great titles so far
15:03:55 how about the best practice in Rackspace ?
15:03:57 what abt a demo which wud replace some other queue used in OS with Marconi?
15:04:09 Without raising controversioes of course
15:04:23 malini: good idea but I'd avoid that :(
15:04:41 I think there's no way to do it without having people saying "YOU'RE REPLACING MY DEAR RABBIT"
15:04:42 yeah..will create too much heated discussions
15:04:42 flwang: I heard that oz_akan from Rackspace might be presenting on deploying Marconi (best practices, too)
15:04:57 * flaper87 would happily eat that rabbit at lunch
15:05:00 alcabrera: cool, I will join
15:05:11 * flaper87 hopefully didn't offend any vegan nor vegetarian in this channel
15:05:23 so, there's an ehterpad that alcabrera already linked
15:05:27 battle of the queues might be an event held elsewhere. :)
15:05:29 lets all contribute there
15:05:46 We should agree on something by EOD
15:05:57 and submit the proposal today / tomorrow
15:06:00 may I get another Marconi T-shirt this time :)?
15:06:06 where's the etherpad? sorry I am so out of touch with tempest going on
15:06:18 https://etherpad.openstack.org/p/marconi-icehouse-talk
15:06:20 https://etherpad.openstack.org/p/marconi-icehouse-talk
15:06:22 flwang: just if you put yourself in the queue
15:06:24 :D
15:06:44 #link https://etherpad.openstack.org/p/marconi-icehouse-talk
15:06:46 malini: ^
15:06:49 flaper87: coool, pls put me in the queue, amen
15:06:53 +1 about submitting by tomorrow EOD
15:07:01 queue.post(flwang)
15:07:01 thanks you for the link!
15:07:02 who wants to write the proposal for the joint talk?
15:07:17 * flaper87 is burried in stuff
15:07:18 I'm happy to volunteer.
15:07:23 alcabrera: it's yours
15:07:26 cool
15:07:30 action-ize me flaper87
15:07:32 #action alcabrera to write the talk proposal
15:07:37 +1 on contributing for the talk
15:07:47 alcabrera: I will chip in
15:07:49 any other comment?
15:08:05 alright, and there's
15:08:09 err, one more
15:08:19 I'll see if oz_akan wants to submit an ops-marconi proposal
15:08:24 action-ize me on that, too.
15:08:27 alcabrera: that would be awesome
15:08:59 #action alcabrera to force oz_akan to submit an ops-marconi talk proposal. Use brute force if necessary
15:09:02 thinking out loud - can oz 's talk be a workshop?
15:09:02 #info contribute towards a great marconi proposal https://etherpad.openstack.org/p/marconi-icehouse-talk
15:09:15 lol flaper87
15:09:32 hmm
15:09:34 balajiiyer: it could, not sure how much he hast to say about it
15:09:36 I wonder...
15:09:41 workshops are normally >1h
15:09:51 we shud let oz decide how he wants to do it (A)
15:09:59 +1 malini
15:10:01 we will ask him to do it real sloooooow
15:10:06 ;)
15:10:18 alright :D
15:10:21 t h i s i s m a r c o n i
15:10:23 I'd say we're ready to move on
15:10:24 like that?
15:10:27 ok, moving on
15:10:32 #topic sqlalchemy
15:10:33 sqlalchemy ->
15:10:38 w00t
15:10:41 alcabrera: SO haskelized
15:10:48 alcabrera: floor is yours
15:10:48 lol
15:10:56 map (review) patches
15:10:58 anyway
15:11:00 :D
15:11:16 we've got some lovely patches sitting in the review queue for sqlalchemy
15:11:31 #link https://review.openstack.org/#/c/70202/
15:11:41 alcabrera: +1
15:11:45 thanks for the effort there
15:11:47 #link https://review.openstack.org/#/c/70946/
15:11:56 #link https://review.openstack.org/#/c/71335/
15:12:00 and...
15:12:09 #link https://review.openstack.org/#/c/70947/
15:12:12 https://review.openstack.org/#/q/status:open+project:openstack/marconi+branch:master+topic:bp/sql-storage-driver,n,z
15:12:23 malini: even better!
15:12:23 hope tht has all of it
15:12:27 #link https://review.openstack.org/#/c/72677/
15:12:37 so, I went ahead and worked on the message controller
15:12:37 that's all 5 of them
15:12:55 it should be complete now but it depends on the queue controller and the claim one (which we don't have yet)
15:13:07 sweet
15:13:08 so, that review is likely to fail but we could start putting some eyes there
15:13:11 thanks, flaper87!
15:13:18 on the other hand we should all work on reviewing alcabrera patches
15:13:23 #info flaper87 made great progress on message controller
15:13:29 those are independent and ready to be merged
15:13:35 +1
15:13:36 well,ready to be reviewed
15:13:38 :D
15:13:44 we'll see alcabrera, we'll see
15:13:46 :P
15:13:49 #info shard driver ready for review
15:13:51 lol
15:14:09 so, schedule-wise...
15:14:19 also, alcabrera patches have some things I'd love to base mine (and the queue's controller) on
15:14:29 so, lets focus on those reviews and start testing the driver
15:14:34 we're doing alright
15:14:43 I'll finish the catalogue driver by tomorrow EOD
15:14:44 alcabrera: agreed, we'll make it
15:14:47 and then we're all caught up
15:14:51 +1 flaper87
15:15:00 #info sqlalchemy on schedule
15:15:05 malini: any thoughts on testing?
15:15:22 I will start working on adding the gating jobs today
15:15:42 malini: AWESOME!
15:15:43 thanks! That'd be awesome. :)
15:15:47 cpallares: welcome :)
15:15:51 first one will be for mongo - since we already have everything in place for tht
15:15:54 o/
15:16:08 malini: sounds good
15:16:13 is there anything blocking you?
15:16:21 no..just me :)
15:16:26 tht can be easily unblocked
15:16:50 #action malini to unblock herself
15:16:51 I have been focussing on getting the tempest stuff merged
15:17:05 So havent been able to spend time on the gating jobs yet
15:17:36 malini: once we complete the work on sqlalchemy, we could help you out if there are still pending things
15:17:51 cool :)
15:17:57 sweet
15:18:03 any other comment?
15:18:08 I'm good.
15:18:15 Next up is another hot topic
15:18:22 #topic API v1.1
15:18:28 kgriffs is missing out. :(
15:18:36 just a quick tempest update before tht?
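For readers following the sqlalchemy reviews linked above, a minimal sketch of the kind of table definition a driver like this is typically built on — column names and types here are illustrative assumptions, not taken from the patches under review:

    # Sketch only: a queue is essentially a (project, name) pair plus metadata.
    import sqlalchemy as sa

    metadata = sa.MetaData()

    queues = sa.Table(
        'Queues', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('project', sa.String(64)),
        sa.Column('name', sa.String(64)),
        sa.Column('metadata', sa.LargeBinary),
        sa.UniqueConstraint('project', 'name'),
    )

    if __name__ == '__main__':
        # An in-memory SQLite engine is enough to sanity-check the schema.
        engine = sa.create_engine('sqlite:///:memory:')
        metadata.create_all(engine)

The message and claim controllers discussed above would follow the same pattern, with their tables keyed back to Queues.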
15:18:43 #action to beat kgriffs for not being around
15:18:48 sure thing, malini
15:18:53 #topic tempest update
15:19:08 I think I am really close - but keeps moving an inch away each time
15:19:13 :|
15:19:26 I submitted another patch y'day for devstack https://review.openstack.org/#/c/72412/
15:19:55 #link https://review.openstack.org/#/c/63449/
15:19:57 #link https://review.openstack.org/#/c/72412/
15:19:59 getting close
15:20:12 Have two +2s , so might get merged soon - but it needs another patch to be merged before 72412 can be merged
15:20:13 not sure why there are to +2s and no approve
15:20:23 #info malini making good progress on marconi/temptest integration
15:20:25 malini: feel free to ping them on irc
15:20:34 flaper87: I already am
15:20:39 malini: awesome
15:20:44 https://review.openstack.org/#/c/69497/ might end up as a blocker for us
15:21:02 hmmm
15:21:07 refactoring patch!
15:21:17 yeah..
15:21:35 But I'll keep pinging folks so we can get tempest merged
15:21:36 well, hopefully that comes along quickly
15:21:38 we'll see
15:21:44 malini: +1
15:21:56 tht's it for tempest update
15:22:15 malini: thanks a lot for the update and the hard work there
15:22:21 as far as grad req go
15:22:33 we need to have a 'basic devstack gate job in place'
15:22:45 ok
15:22:53 So hopefully we'll be covered with the basic tempest patch merged
15:23:07 malini: awesome! That would cover API v1
15:23:08 right?
15:23:20 Do you think moving to API v1.1 will be hard ?
15:23:23 right now all we have is a basic create queue
15:23:33 flaper87: moving to 1.1 wont be hard
15:23:37 cool
15:23:41 is 1.1 needed to graduate?
15:23:42 that makes me happy
15:23:47 malini: no
15:23:54 no but it is for icehouse
15:24:04 I mean, it would be nice to release 1.1 as part of icehouse
15:24:10 hence, we need to test it
15:24:15 #info extending tempest test suite to include v1.1 not expected to be difficult
15:24:19 tht shud be do-able
15:24:25 cool beans
15:24:46 * flaper87 doesn't know where "cool beans" comes from but he learned that from kgriffs
15:24:57 :D
15:25:01 lol
15:25:02 anyway, moving on
15:25:04 yup
15:25:06 #topic API v1.1
15:25:08 back to api v1.1
15:25:21 #link https://wiki.openstack.org/wiki/Marconi/specs/api/v1.1
15:25:27 that's the spec
15:25:37 I imagine you know it all
15:25:42 hahaha
15:25:46 on good days, yes
15:25:47 :P
15:26:02 I added sharding to that document last week
15:26:09 so, I took a quick look and it seems to be in good shape
15:26:12 so we're offering quite a jump in power
15:26:17 going from 1.0 -> 1.1
15:26:37 I agree that it looks good
15:27:13 Something that makes me happy is that we're not doing any change to the schema in the database
15:27:29 that's often a good sign - we have a good foundation
15:27:31 for the record, we're going to basically copy v1 into v1.1 and change it
15:27:45 +1
15:27:48 that's not what we want to do in the long run but it's the best we can do now
15:27:58 * alcabrera forsees much refactoring
15:28:02 during June, I'd like us to dedicate some time figuring out a better way to do that
15:28:15 hopefully the experiment cpallares is working on will be of great help there
15:28:24 oh yes
15:28:46 \o/
15:28:47 AFAIK, cpallares said it's all done and she's been using it for a couple of weeks already
15:28:51 OOOOOOOOPSSSSSSSSSSS
15:28:53 she's here
15:28:56 lol
15:29:18 lol
15:29:25 it needs tweaking
15:29:26 so, does anyone have comments w.r.t the API spec ?
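On the gate-job side, the "basic create queue" coverage malini mentioned boils down to roughly the following against a devstack host — a sketch only; the port and header values are assumptions about a typical local deployment, not taken from the tempest patches:

    # Minimal smoke check: create a queue and accept "created" or "already there".
    import uuid
    import requests

    MARCONI = 'http://127.0.0.1:8888'              # assumed local endpoint/port
    HEADERS = {
        'Client-ID': str(uuid.uuid4()),            # per-client UUID the API expects
        'X-Project-Id': 'smoke-test',              # project scoping for non-keystone setups
    }

    resp = requests.put(MARCONI + '/v1/queues/smoke-test', headers=HEADERS)
    assert resp.status_code in (201, 204), resp.status_code   # 201 created, 204 existing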
15:29:35 not I
15:29:41 Lets make sure we make those comments NOW
15:29:41 oh
15:29:43 one thing
15:29:49 and by now I mean before implementing it
15:29:51 :P
15:29:57 alcabrera: gp
15:29:58 go
15:30:04 returning an empty default body things like GET messages is great
15:30:06 instead of 204
15:30:06 ha sit already been emailed out to openstack-dev ML ?
15:30:11 has it *
15:30:15 it simplifies the handling a bit
15:30:28 default reprs are a nifty idea we should do more of in the future
15:30:46 that's it
15:30:52 malini: I'm not sure... flaper87?
15:31:04 it hasn't been sent to the m-l yet
15:31:23 we should action-ize that - who wants to write email? :D
15:31:23 alcabrera: +1 for empty default bodies
15:31:30 * alcabrera looks at ghostly kgriffs
15:31:35 #action send the v1.1 spec to the mailing list
15:31:40 hehe
15:31:55 alright, so let's move on to specific v1.1 items
15:32:06 comments?
15:32:33 #topic health endpoint
15:32:38 flwang: alcabrera floor is yours
15:32:39 flwang: regarding v1.1 /health, it seems like the way to go is to start a v1_1 directory and implement it there
15:33:09 alcabrera: ok
15:33:15 so what we have now is marconi.queues.transport.wsgi. We need is marconi.queues.transport.wsgi.v1 and marconi.queues.transport.wsgi.v1_1
15:33:27 thoughts?
15:33:37 does this align with our v1.1 strategy alright?
15:33:46 alcabrera: that sounds like a plan
15:33:52 cool
15:33:54 so that
15:33:56 's two steps
15:34:01 I can do pacakage moving thing if you guys want
15:34:05 1. separate our current API into v1
15:34:11 2. implement health in v1_1
15:34:14 flaper87: that'd be awesome!
15:34:16 yes, please
15:34:18 :D
15:34:35 alcabrera: agree and we may need to implement the /ping first
15:34:43 #action flaper87 to put the wsgi transport under the v1 version and clone it to v1_1
15:34:56 cool
15:35:13 #action flwang to continue work on /health and /ping for v1.1
15:35:27 any other thoughts?
15:35:33 not from me
15:35:40 alcabrera: cool
15:35:43 sweet
15:35:52 flaper87: grace not required - the floor is yours
15:35:54 :D
15:35:57 I will implement the /ping endpoint first
15:35:58 #topic Grace not required
15:36:00 SOOOOO
15:36:13 tht is disgraceful
15:36:20 I found out today (and don't laugh at me) that we require grace
15:36:34 it gets me every time, hehe
15:36:38 I don't think that makes much sense TBH
15:36:39 :D
15:36:46 agreed
15:36:50 first impression from a user stand point is: WTF?
15:36:55 the second is: What should I use?
15:37:05 and all under the covers
15:37:08 the third is: I'll put whatever comes to my mind in 3 2 1 ...
15:37:08 ttl = ttl + grace
15:37:11 the magic secret
15:37:14 which is confusing
15:37:15 exactly
15:37:25 so, we should accept it but not require it
15:37:31 +1
15:37:45 I'd even go so far as suggesting we remove it from the public API for v1.1 or v2.0
15:37:50 it's an implementation detail
15:37:56 we can offer wiggle room internally
15:37:58 configurable and all
15:38:00 mmh
15:38:04 but users should only need to care about TTL
15:38:09 so, the change I was proposing was for v1.1
15:38:19 but if we think removing it makes more sense then fine
15:38:20 so have a default grace value, but use config value if provided?
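To make that last question concrete: under v1 a claim request must carry both ttl and grace, and the proposal above is to keep accepting grace while no longer requiring it, falling back to an operator-configured default when it is omitted. Roughly (endpoint, port, and values are illustrative):

    # Today: both fields are required; under the covers the effective message ttl
    # becomes ttl + grace (the "magic secret" mentioned above).
    # Proposed for v1.1: 'grace' optional, with a config-supplied default.
    import json
    import uuid
    import requests

    MARCONI = 'http://127.0.0.1:8888'              # assumed local endpoint
    HEADERS = {
        'Client-ID': str(uuid.uuid4()),
        'X-Project-Id': 'example',
        'Content-Type': 'application/json',
    }

    body = {'ttl': 300, 'grace': 60}               # grace = extra seconds past the claim ttl
    resp = requests.post(MARCONI + '/v1/queues/demo/claims?limit=5',
                         headers=HEADERS, data=json.dumps(body))
    print(resp.status_code)                        # 201 with messages, or 204 if none to claim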
15:38:29 although, removing it requires a major release
15:38:31 maybe
15:38:49 I'm advocating: mark as deprecated in public API v1.1, remove in v2.0
15:38:56 alcabrera: +1
15:38:59 sounds like a plan
15:39:06 lets see what kgriffs thinks
15:39:10 yup
15:39:12 to address malini
15:39:15 's question
15:39:43 For now, users have to provide it. The plan is to make it optional, and use a deployment-time/config option to replace it. The operator decides the wiigle grace.
15:39:50 *wiggle
15:40:33 I don't quite remember why we thought that requiring it makes sense
15:40:39 alcabrera: thanks!! that makes sense
15:40:39 I'm pretty sure there's a reason
15:40:50 we'll find out from kgriffs later. ;)
15:40:51 but it doesn't make sense to me now
15:40:55 alcabrera: yeha
15:40:57 yeah
15:41:08 so let me info this and let's move on. :D
15:41:23 #info /claims grace to be deprecated in v1.1 and removed in v2.0
15:41:44 awesome, thanks
15:41:47 moving on
15:41:51 let's talk about gacks and pops, flaper87
15:41:58 #topic Add gack and/or pop
15:42:03 #topic Add gack and/or pop to API v1.1
15:42:05 :D
15:42:17 could you elaborate on these two ideas, flaper87?
15:42:18 First, let me explain what gack and pop are
15:42:36 Gack (not exacly the best name) is basically get and ack
15:42:53 the idea behind it is to be able to get a message and make sure *no one else will*
15:43:15 when getting a message and then claiming it it is possible that some other client would try to do the same
15:43:25 this means that a message could be delivered more than once
15:43:33 doesn't /claim already fetch the message(s) for you?
15:43:33 and the jobs executed twice
15:43:40 mmh, does it ?
15:43:44 :D
15:43:46 I thought so. :P
15:43:48 * alcabrera checksa
15:44:10 #link https://wiki.openstack.org/wiki/Marconi/specs/api/v1.1#Claim_Messages
15:44:11 If it does then that covers gack which is awesome
15:44:12 yes, it does
15:44:19 I bet it is a mix of claim + limit
15:44:26 ah ok, v1.1
15:44:28 ok ok
15:44:31 so gack == claim?limit=1
15:44:37 yeah
15:44:48 I'd say we don't need it then.
15:44:54 #info gack is already provided as /claim?limit=1
15:44:55 folks can claim 1 message
15:45:10 next one, and alcabrera I'm sure as hell we don't have it, is pop
15:45:11 :D
15:45:19 this is basically get-and-delete
15:45:32 heh
15:45:41 so atomic get-and-delete
15:45:45 yeah
15:45:53 this is pretty useful for things like kombu
15:46:09 where you want to get a message out of the queue and you don't want to make another call to delete it
15:46:24 hmmm
15:46:29 Avoiding those 2 calls we would make the client life easier and Marconi's server load lower
15:46:44 there's a tricky point there I worry about, having used marconi extensively recently
15:46:53 shoot
15:47:03 So, a worker claims a message
15:47:14 that worker maybe does somethings
15:47:27 then crashes before deleting the message after doing those things
15:47:42 * alcabrera tries to think this through more clearly
15:47:55 The gist of it: I'm worried that -
15:48:03 alcabrera: tht is exactly what I am thinking too
15:48:13 I cannot think of an use case
15:48:15 so, if the worker uses pop, it doesn't have to delete the message from the queue. The message will be deleted when it gets the message
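For reference, here is what "gack" and the claim-then-delete pair look like today under v1 — the two round trips a pop operation would collapse into one. A sketch only; endpoint and header values assume a local deployment:

    # 'gack' today: claim exactly one message, process it, then delete it.
    # A pop would do the get-and-delete in a single request.
    import json
    import uuid
    import requests

    MARCONI = 'http://127.0.0.1:8888'              # assumed local endpoint
    HEADERS = {
        'Client-ID': str(uuid.uuid4()),
        'X-Project-Id': 'example',
        'Content-Type': 'application/json',
    }

    resp = requests.post(MARCONI + '/v1/queues/demo/claims?limit=1',
                         headers=HEADERS,
                         data=json.dumps({'ttl': 60, 'grace': 30}))

    if resp.status_code == 201:                    # 204 means there was nothing to claim
        message = resp.json()[0]
        # ... process message['body'] here ...
        # Second round trip: the claimed message's href carries the claim id.
        requests.delete(MARCONI + message['href'], headers=HEADERS)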
15:48:16 if claim-and-delete is used, the task might never really get processed if a worker crashes
15:48:26 *get-and-delete
15:48:27 it sounds like a claim with delete built in
15:48:33 alcabrera: ah yeah but that's up to the implementation
15:48:41 I mean, we can't protect the client from that
15:48:44 hmmm
15:48:45 that's up to the client
15:48:46 true
15:48:49 so
15:48:51 given that
15:48:56 I'm in favor of this pop behavior
15:48:57 what we can is give it an easier way to do claim->delete
15:49:09 given that clients can still manually do things if they want to separate the claim from the delete
15:49:20 indeed
15:49:26 cool
15:49:31 what do you think, malini?
15:49:33 I mean, a client could also do: claim->delete and then crash
15:49:45 so, we won't ever be able to protect them from that
15:49:57 we can make it easier for them to shoot their own feet though
15:49:58 :P
15:50:01 heh
15:50:03 if it crashes after processing, we are ok
15:50:16 kk
15:50:19 the message is not lost in tht case
15:51:12 Thing is that get-and-delete is a common pattern and doing claim->delete will be expensive for both the client and marconi
15:51:55 #info pop seems to be a good idea. It needs further discussion and it may make sense to have it for v1.1
15:51:55 I'm in favor of pop as an optimization, since alternate operations are still available
15:51:58 +1
15:52:04 that's the info I would've given. :D
15:52:05 any other comment?
15:52:07 flaper87: agreed abt more ops being expensive..will do some homework on kombu to understand an use case
15:52:10 none form me
15:52:12 *form
15:52:14 ...
15:52:16 *from
15:52:23 next
15:52:25 #topic Open Discussion
15:52:39 yup - +1 for open discussion
15:52:45 we don't have much time left to review BPs/bugs
15:53:04 So, at the next summit, kgriffs said he wants to offer beers, food and whatever we want
15:53:12 he told me to share this with y'all
15:53:15 lol
15:53:22 :D
15:53:27 this kgriffs fellow is very generous
15:53:29 <3
15:53:36 #action Everyone to thank kgriffs for being so nice and paying for whatever we need at the next summit
15:53:43 yaaay
15:53:50 #info this kgriffs fellow is very generous
15:53:52 better
15:53:54 so, I've been burried in stuff lately
15:53:54 ah
15:54:06 we are getting buried in snow ;)
15:54:07 lol
15:54:07 but I'd ask everyone to do an extra efford and review the sqlalchemy patches
15:54:17 give them some priorities over other things
15:54:44 not sure why I said "some priorities" but anyway, you got my point
15:55:00 malini: it hasn't snowed much here this winter :(
15:55:09 two non-sqlalchemy patches want review love, too
15:55:14 #link https://review.openstack.org/#/c/68267/
15:55:27 ^^ guard against bad URI in shard registration
15:55:28 and...
15:55:36 #link https://review.openstack.org/#/c/70463/
15:55:43 cleanup limit configs - great refactoring
15:55:44 alcabrera: dude, stop coding, you're burrying us all with reviews
15:55:49 lol
15:55:52 ;D
15:56:17 so, any other comment ?
15:56:33 any cat that needs to be rescued ?
15:56:41 yaks need shaving?
15:56:45 *needing
15:56:57 ok, that's all folks!
15:56:59 I think we're good.
15:57:02 thanks for joining!
15:57:05 thanks everyone and see you in #openstack-marconi
15:57:07 #endmeeting