16:07:53 #startmeeting marconi
16:07:54 Meeting started Mon Aug 26 16:07:53 2013 UTC and is due to finish in 60 minutes. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:07:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:07:57 The meeting name has been set to 'marconi'
16:07:59 #topic review actions from last time
16:08:29 \o
16:08:36 #link http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-08-19-16.03.html
16:09:07 * kgriffs cppcabrera and flaper87 to turn apiclient-marconi etherpad into bps/bugs
16:09:31 kgriffs: ish
16:09:40 No bugs were produced. However, we decided to roll back the common apiclient code.
16:09:44 That change is still pending.
16:09:48 kgriffs: what cppcabrera said
16:09:49 :D
16:09:55 That would set us at 0 bugs. ;)
16:09:56 cppcabrera: you're faster than me, sir
16:10:07 flaper87: hehe. :D
16:10:18 #ACTION cppcabrera and flaper87 to turn apiclient-marconi etherpad into bps/bugs
16:10:36 2.c. malini to work with flaper87 on TOXifying the tests
16:10:51 I submitted https://review.openstack.org/#/c/43388/ to refactor tests
16:10:57 Tox is still pending
16:11:12 need the patch reviewed & merged to add tox
16:11:14 kgriffs: I'm working on splitting tests
16:11:21 OK, let's swarm on the pending patches and get those taken care of
16:11:22 and getting them out of marconi
16:11:38 #action malini to work with flaper87 on TOXifying the tests
16:11:57 #action flaper87 to refactor tests
16:12:17 #action kgriffs to back-fill "Expected" release on bps
16:12:30 so, I didn't get around to doing that. :p
16:12:46 try again this week
16:12:55 2.g. team to finalize placement service architecture (high-level) (http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-08-19-16.03.log.html#l-100,
16:13:04 Finalized and in progress
16:13:11 cppcabrera: +1
16:13:13 I've been prototyping and testing since Thursday
16:13:17 * kgriffs is a happy panda
16:13:21 awesome
16:13:40 * flaper87 needs to read that wiki page
16:13:42 it will have some awesome caching
16:14:15 I'd like to use flaper87's caching patch, but waiting for it to land and we need to benchmark it
16:14:24 cppcabrera: can we test it against current marconi on the test environment?
16:14:26 I was planning on writing LRU and redis drivers
16:14:41 kgriffs: +1
16:14:49 (LRU would be used in a hierarchy with 1-2 other drivers)
16:14:56 oz_akan: Needs a touch more implementation, then yes.
16:15:03 cppcabrera: ok
16:15:14 what's the redis driver for?
16:15:23 placement?
16:15:28 flaper87: were you going to do a memcached driver?
16:15:40 kgriffs: done, it's up and waiting for review
16:15:46 oic
16:15:47 kgriffs: https://review.openstack.org/#/c/42878/
16:15:55 zyuan_: I think they're talking about oslo.cache drivers.
16:16:15 ok
16:16:28 If you get a redis driver into oslo.cache, kgriffs, I can move that logic out of marconi-proxy. :)
16:16:37 anyway, I just wanted to call that out as an important dep. for the proxy
16:17:00 moving on
16:17:12 3.e. megan_w_ to apply for incubation this week and invite everyone to the TC meeting
16:17:24 the first part is done
16:17:26 w000t w000t w0000000000t w0000000000000000000000t
16:17:29 #link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee
16:17:43 FYI, it'll be discussed in the next TC meeting
16:17:49 it's already on the agenda
16:17:52 oh good, i was wondering when we'd fall
16:18:04 flaper87: do you know the time?
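
[Editor's note: the LRU-plus-Redis hierarchy cppcabrera describes above is only sketched in the discussion, so here is a minimal, hypothetical illustration of a two-tier lookup cache for the proxy's queue-to-partition catalog. The class, method, and key names are assumptions made for the example and do not reflect the actual marconi-proxy or oslo.cache interfaces.]

    # A minimal sketch, assuming a hypothetical LayeredCache interface: an
    # in-process LRU in front of Redis, so most catalog lookups never leave
    # the proxy process, while Redis keeps entries shared across workers.
    # Not the real marconi-proxy or oslo.cache API.
    import collections

    import redis  # assumes the redis-py client is installed


    class LayeredCache(object):
        """Look up keys in a small in-process LRU first, then in Redis."""

        def __init__(self, redis_client, lru_size=1024):
            self._redis = redis_client
            self._lru = collections.OrderedDict()
            self._lru_size = lru_size

        def get(self, key):
            # Tier 1: in-process LRU hit (cheapest lookup).
            if key in self._lru:
                value = self._lru.pop(key)
                self._lru[key] = value  # re-insert as most recently used
                return value
            # Tier 2: shared Redis tier; populate the LRU on a hit.
            value = self._redis.get(key)
            if value is not None:
                self._remember(key, value)
            return value

        def set(self, key, value):
            self._redis.set(key, value)
            self._remember(key, value)

        def _remember(self, key, value):
            self._lru.pop(key, None)
            self._lru[key] = value
            if len(self._lru) > self._lru_size:
                self._lru.popitem(last=False)  # evict least recently used


    # Hypothetical usage: resolving which partition a queue lives on.
    cache = LayeredCache(redis.StrictRedis(host='localhost', port=6379))
    cache.set('queue:fizbit', 'partition-1')
    print(cache.get('queue:fizbit'))
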
16:18:04 cool
16:18:39 sweet
16:19:07 megan_w_: TC meetings are on Tuesday (20:00 UTC)
16:19:23 cool. i'll send out an invite for everyone
16:19:47 We should check if there'll be one this week, I don't recall seeing the reminder in the M-L
16:20:10 i can ask anne
16:20:13 According to the page, the next TC meeting is in TBD status.
16:20:24 ah, never mind, I think they never send a reminder for it
16:20:27 just for project meetings
16:20:47 #action megan_w_ to send out mtg invite for TC meeting
16:21:10 #info TC meetings are on Tuesday (20:00 UTC)
16:21:35 5a. bot for #openstack-marconi
16:22:19 cppcabrera: ^
16:22:21 What kind of bot - meeting bot? We did get gerritbot in last week, thanks to Alex_Gaynor. :)
16:22:25 I recall seeing a patch for it!
16:22:53 kgriffs: https://review.openstack.org/#/c/42956/
16:23:05 eavesdrop ^
16:23:09 Ahhh, eavesdrop.
16:23:29 I'll ping infra on that one.
16:23:37 Actionize me ^^
16:24:45 yeah, so, everyone feel free to +1 the eavesdrop patch
16:25:04 #link https://review.openstack.org/#/c/42956/
16:25:04 #action cppcabrera to follow up with infra on eavesdrop
16:25:53 Alex_Gaynor to take point on python-marconiclient under flaper87's direction
16:25:58 +1
16:26:02 did we make any progress on client stuff?
16:26:18 Minor progress.
16:26:25 kgriffs: not much, cppcabrera updated one of his patches and I'll work on that this week
16:26:26 The message controller is good to merge.
16:26:32 The rollback is good to merge.
16:26:35 That's the gist of it.
16:26:51 #action flaper87 to crack the whip this week on client development
16:26:52 :p
16:26:52 (good to merge - IMO. :D )
16:27:01 LOL
16:27:08 final thing from last time
16:27:12 7a. Voted on "require claim_id in all cases when deleting a message that is claimed?" Results are, Yes: 5, No: 3
16:27:33 forgot to put an action item on that, but zyuan_'s been working on it (thanks, btw!)
16:27:45 the patch is there
16:27:54 https://review.openstack.org/#/c/43339/
16:29:56 while we are on the topic, I wanted to mention that it was decided that bulk delete will *not* support claim_id as a query param
16:30:07 zyuan_: am I correct on that?
16:30:25 addressed in commit msg
16:30:53 ok
16:31:03 let's get that sucker reviewed and merged
16:31:12 I can update the spec
16:31:34 moving on...
16:31:52 #topic • Proposal: Add another core reviewer, designate domain experts
16:32:19 since we are still a relatively small team, I thought it would be best just to do this over IRC rather than via the list
16:34:02 #info I would like to formally propose that Alejandro Cabrera be added to Marconi core
16:34:24 kgriffs: +1 for me
16:34:26 I have discussed this proposal with Flavio (flaper87) as well as Alejandro's supervisor
16:34:51 so that explains the background discussions, heh. :P
16:35:25 so, Alej has proven himself as a consistent, constructive reviewer
16:35:49 also, he has demonstrated thought-leadership and a servant-leader attitude
16:36:19 +1
16:36:32 if this proposal is ratified, we will move to a 2 x +2 requirement for approving all future patches
16:36:52 this will put us in line with core OpenStack projects
16:37:03 * ametts thought we were already doing that
16:37:04 * flaper87 can't wait for that to happen
16:37:17 I'm happy to be working on the marconi project, so to get official backing on my involvement is an honor.
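
[Editor's note on the claim_id deletion semantics voted on earlier (item 7a at 16:27): a hedged sketch of what the client-facing behavior would look like, using python-requests against a local Marconi endpoint. The endpoint, queue name, message/claim IDs, and header value are placeholders; the spec update kgriffs mentions is the authoritative source.]

    # A hedged illustration using python-requests; all identifiers below
    # are placeholders, not values from a real deployment.
    import requests

    BASE = 'http://localhost:8888/v1/queues/fizbit/messages'
    HEADERS = {'Client-ID': '3381af92-2b9e-11e3-b191-71861300734c'}

    # Single-message delete: per the vote, a claimed message may only be
    # deleted when its claim_id is passed as a query parameter.
    requests.delete(BASE + '/50b68a50d6f5b8c8a7c62b01',
                    params={'claim_id': '51db7067821e727dc24df754'},
                    headers=HEADERS)

    # Bulk delete: takes a list of message ids only; claim_id is *not*
    # supported as a query parameter on this request.
    requests.delete(BASE,
                    params={'ids': '50b68a50d6f5b8c8a7c62b01,'
                                   '50b68a50d6f5b8c8a7c62b02'},
                    headers=HEADERS)
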
16:37:22 we have been doing it implicitly for a while, but this will make it official
16:37:45 cppcabrera: +1000
16:37:47 cppcabrera: this will require a long-term commitment from you to contribute to Marconi, at least on a part-time, regular basis
16:37:53 * flaper87 is giving numbers away
16:38:11 Without further ado…
16:38:11 #startvote Add cppcabrera to Marconi core? Yes, No
16:38:12 Begin voting on: Add cppcabrera to Marconi core? Valid vote options are Yes, No.
16:38:13 Vote using '#vote OPTION'. Only your last vote counts.
16:38:19 #vote Yes
16:38:20 #vote Yes
16:38:23 #vote Yes
16:38:23 #vote Yes
16:38:26 #vote Yes
16:38:29 #vote Yes
16:38:29 #vote Yes
16:38:38 #vote Yes
16:38:42 :)
16:38:43 cppcabrera: :P
16:38:45 cppcabrera :D
16:38:55 * amitgandhi would've been weird if alej voted no lol
16:38:57 I'm giving my commitment with '#vote Yes' :D
16:39:03 +1
16:39:04 #endvote
16:39:05 active is important
16:39:05 Voted on "Add cppcabrera to Marconi core?" Results are
16:39:06 Yes (8): kgriffs, ametts, malini, cppcabrera, zyuan_, amitgandhi, megan_w_, flaper87
16:39:09 #vote Yes
16:39:12 ah
16:39:17 tooo late!
16:39:20 haha
16:39:24 :)
16:39:39 #note oz_akan_ also voted "Yes" for adding cppcabrera to Marconi core
16:39:51 ok, I'll take care of that after the meeting
16:39:55 related:
16:40:27 flaper87 has proposed that we assign each core team member one or more subject matter areas to oversee wrt Marconi
16:40:38 +1
16:41:04 It'll make the review process easier to manage as we grow, knowing what we're each especially responsible for catching.
16:41:23 does that make it more difficult to review items that are outside your subject matter area?
16:41:43 But we might get back to waiting for a core reviewer, for certain areas
16:41:44 yes, but let's all continue to resist the urge not to perform a thorough review just because we know someone else has/will look at it. :D
16:42:03 That doesn't mean people aren't expected (allowed) to review all patches; what it means is that some people's comments in certain patches may be required before approval
16:42:45 Does this requirement re-introduce a potential bottleneck for getting patches approved?
16:43:07 Oh. "In certain patches". So maybe not.
16:43:07 ametts: nope, it's not a strong requirement
16:43:22 ametts: also, we've been doing it for non-trivial patches already
16:43:26 what is the advantage of having designated experts?
16:43:34 so, having one more reviewer should actually speed things up
16:43:43 * kgriffs is hoping
16:44:06 malini: For new contributors, it makes it easier (in theory) for someone committing, say, a patch on mongodb optimizations to know that they should request flaper87's attention.
16:44:18 malini: I'd call them maintainers, responsible
16:44:24 Speaking of speeding things up, is there time for a performance discussion on the agenda?
16:44:27 or something along those lines
16:44:46 ametts: Open Discussion time, I'd say
16:44:49 ametts: It's on the agenda, three items after this.
16:44:50 that will make us too person-dependent
16:44:55 but we've got 15 mins left
16:45:07 ok, real quick
16:45:26 here is my thinking on assigned areas:
16:45:27 Kurt Griffiths (kgriffs) - API, Security, Performance
16:45:27 Flavio Percoco (flaper87) - Storage, Client Libs, Oslo
16:45:27 Alejandro Cabrera (cpp-cabrera) - Scaling, Transport, Py3K
16:45:56 malini: it's not a strong requirement, patches shouldn't be blocked
16:46:14 however, if there's a performance issue, I'd definitely love to hear what kurt thinks
16:46:23 I will update the wiki per the above, let's try it for a couple weeks and then we can adjust as needed
16:46:27 Why not kgriffs on Transport, since he wrote Falcon and the WSGI stuff?
16:46:33 and I, personally, won't approve a patch I'm not sure about
16:46:54 ametts: good question… mainly because we are lumping proxy and transport together
16:46:58 but we could break those out
16:47:13 Ok
16:47:18 #action kgriffs to update core team section on the wiki
16:47:29 should we really have assigned areas? I mean everyone should be cross-functional across those areas and over time become experts in other areas
16:47:38 amitgandhi: +1
16:47:49 you put my thought into words better ;)
16:47:58 again, that doesn't mean people shouldn't review other patches
16:48:11 flaper87: +1
16:48:15 everyone should review whatever patches they can
16:48:32 but, when tons of patches start coming in, I'd like to focus on things I've been working on lately
16:48:35 (meaning storage)
16:48:42 assigned areas - they're the first person you should look for, but certainly not the last. ;)
16:48:47 rather than things others have focused on (like transport)
16:49:07 so, it's just a "logical" (implicit) split
16:49:12 and that will build up the expertise. i guess what i mean is that if you feel comfortable with the area an LGTM should be fine. but if it's an area you are uncomfortable with, give the +1 but refer back to the expert
16:49:14 but not a requirement for a patch to get approved
16:49:50 It sounds like we're all agreed that these are informal areas of expertise, and the "favorite" area for core devs to zero in on when there are a bunch of patches to review.
16:49:57 in the interest of time, i would like to +1 ametts' comment to chat about performance
16:50:00 "Sold". What's next?
16:50:04 any core developer should feel free to approve a patch but if in doubt, let's just +2 / +1 and let others do that
16:50:16 +2
16:50:19 ;-)
16:50:27 ok, just a few secs on the next topic
16:50:34 #topic Incubation feedback
16:50:37 (that's what he said last time :) )
16:50:42 ametts: LOL
16:50:48 I didn't notice any further email on the list over the weekend
16:51:28 notmyname hung out with us in #openstack-marconi last week for a bit and we chatted about a few things
16:52:00 one of his suggestions was to not get carried away implementing lots of different drivers, but to focus more on the semantics/use cases
16:52:18 …and let 3rd-parties write the extra drivers
16:52:19 +1
16:52:24 +1
16:52:43 I'd like to get the zmq transport in, though
16:52:48 so, we need to decide what core drivers we want and then not worry about the rest
16:53:00 flaper87: kk
16:53:03 kgriffs: ah ok, sorry, didn't want to anticipate
16:53:04 :D
16:53:14 we will discuss in a future mtg or something
16:53:16 kgriffs: and I'd also like to get proton in on the storage side
16:53:18 ok
16:53:26 let's put it on the agenda
16:53:29 for next week
16:53:37 ok
16:53:42 will do
16:53:52 (mmh, or let's discuss it over #openstack-marconi since I think they'll ask about this in the TC meeting)
16:53:55 #action kgriffs to add driver scope to agenda for next time
16:54:01 ah
16:54:03 good point
16:54:15 i worry about how hard it is to specify the API for a zmq transport
16:54:17 breakout after this meeting
16:54:18 if the API does not change, zmq is not better than wsgi
16:54:21 kk, moving on!
16:54:28 #topic performance
16:54:42 5 minutes for performance.
16:54:46 ...
16:55:08 Performance deteriorates when message count grows. Who's working on this issue?
16:55:08 ametts: ?
16:55:14 https://bugs.launchpad.net/marconi/+bug/1216950
16:55:19 ametts: me, oz_akan_ and zyuan_ (?)
16:55:23 #link https://bugs.launchpad.net/marconi/+bug/1216950
16:55:35 my suggestion is to simply use another storage backend...
16:56:10 flaper87: do you have time to tackle that today? If not, I can take it.
16:56:11 Any preliminary thoughts on the resolution? Is this something we can fix in code?
16:56:15 zyuan_: which is the last option at this time
16:56:23 oz_akan_: yea
16:56:35 ametts: smells like a query isn't using an index
16:56:38 did we run the profiler on mongo?
16:56:42 kgriffs: mmh, not sure, so, I'd prefer you taking it!
16:56:47 (or it is, but not for the entire operation)
16:56:51 so, mms is down
16:56:52 flaper87: thinks a .count() is reached by tsung
16:57:03 zyuan_: nope
16:57:09 kgriffs: needs to fix mms, I can't make it work
16:57:12 Some thoughts I've seen mentioned are: avoid counts in storage, bad/unused indices, tests use a .count() that production wouldn't use
16:57:14 flaper87: then?
16:57:17 flaper87: ok, I'll take it
16:57:17 profiling is enabled on mongo
16:57:26 kgriffs: has access to DB
16:57:33 kgriffs: were you able to verify that?
16:57:37 oz_akan_: sent a queue query and said it was slow. We need to figure out where it's being called but it seems weird.
16:57:45 we have another one that's been open for a while too
16:57:46 we also have https://bugs.launchpad.net/marconi/+bug/1213099
16:57:53 #link https://bugs.launchpad.net/marconi/+bug/1213099
16:57:54 kgriffs: indexes should be covering all queries
16:57:57 AFAICT
16:58:19 can we take the troubleshooting to #openstack-marconi & assign https://bugs.launchpad.net/marconi/+bug/1213099 ?
16:58:21 oz_akan_: yes, I can ssh in
16:58:26 (at least to the primary)
16:58:37 ok
16:58:42 ok
16:58:44 I'd suggest fixing mms, cleaning up the profiling queue
16:58:49 flaper87: you are assigned to this - https://bugs.launchpad.net/marconi/+bug/1213099
16:58:49 and then running tsung again
16:59:07 kgriffs: Please keep us posted on your progress, latest thoughts, etc. -- lots of people are asking about performance these days....
16:59:21 i bet
16:59:23 also
16:59:28 cppcabrera: there is no count() reached by tests (afaict from oz_akan_'s sheet)
16:59:32 GC should be executed
16:59:41 kgriffs: regardless of performance, at the moment we need scale lol...
16:59:45 if we don't run it, we'll have a lot of expired messages and the index will be huge
17:00:08 which means mongodb will have to scan more records in the index
17:00:16 scaling - it's the next topic, but we won't have time to discuss it this meeting. :P (amitgandhi)
17:00:17 flaper87: as long as the index fits in RAM, it shouldn't matter a lot, should it?
17:00:33 flaper87: or are you saying that some scanning is still required given our current indexes?
17:00:43 (we need to wrap up)
17:01:04 kgriffs: nope, indexes should be covering all queries now (except counts, which are really slow)
17:01:08 cppcabrera: scaling in the sense of handling the load we are giving it (i.e. not slowing down under load)
17:01:42 ok
17:01:58 #action kgriffs to work on slowdown caused by large collections
17:02:04 #endmeeting
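
[Editor's note: the closing performance discussion turns on two MongoDB ideas — garbage-collecting expired messages so the index stays small, and making sure hot queries are answered from an index instead of a collection scan. The snippet below is a generic pymongo illustration of both, assuming a hypothetical collection and field names; it is not marconi's actual mongodb storage driver, which handles expiration and indexing in its own way.]

    # A generic pymongo sketch (not marconi's mongodb driver); the
    # 'marconi.messages' collection and its field names are assumed.
    import datetime

    import pymongo

    client = pymongo.MongoClient('localhost', 27017)
    messages = client['marconi']['messages']

    # Let mongod expire documents once 'expires' passes, so old messages
    # don't keep bloating the collection and its indexes.
    messages.create_index('expires', expireAfterSeconds=0)

    # Compound index for the hot listing query on a queue.
    messages.create_index([('queue', pymongo.ASCENDING),
                           ('_id', pymongo.ASCENDING)])

    messages.insert_one({
        'queue': 'fizbit',
        'body': {'event': 'hello'},
        'expires': datetime.datetime.utcnow() + datetime.timedelta(seconds=60),
    })

    # Check whether the listing query is answered from the index (inspect
    # the plan for index usage vs. a collection scan). Note that count-style
    # queries still walk every matching index entry, which is one reason
    # counts stay slow even when everything else is covered.
    plan = messages.find({'queue': 'fizbit'},
                         {'_id': 1, 'queue': 1}).explain()
    print(plan)
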