19:03:33 #startmeeting marconi
19:03:34 Meeting started Thu Jul 11 19:03:33 2013 UTC. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:37 The meeting name has been set to 'marconi'
19:03:39 \o/
19:03:43 #topic review actions from last time
19:03:58 #link http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-06-27-19.05.html
19:04:01 #idea I liked ideas
19:04:26 ok, megan_w in da house?
19:04:31 (I need to learn more about irc commands)
19:04:34 yep
19:04:48 so, the incubation application got on the wiki, correct?
19:04:53 yes
19:04:59 * flaper87 read it
19:05:02 awesome, thanks
19:05:04 it looks very good
19:05:06 megan_w: +1
19:05:10 https://wiki.openstack.org/wiki/Marconi/Incubation
19:05:19 any feedback on it real quick?
19:05:36 flaper87: I got you more contributors. :)
19:05:51 if we merge the common client lib, Alessio might be a contributor too ;)
19:06:15 megan_w: I would put "Kombu integration" instead of "Celery Integration"
19:06:26 ametts_: +1 for you
19:06:28 :D
19:06:43 ok, i can update
19:06:51 groovy
19:06:53 moving on
19:06:56 The docs looks great
19:07:02 *doc
19:07:16 are more contributors the only thing we need before submitting this?
19:07:17 I see that ametts submitted a patch
19:07:25 so that takes care of his action item
19:07:46 everyone PLEASE go review that patch
19:08:15 which?
19:08:18 Yes, please. I want zyuan to pick it up from here, so it would be good to get my part merged.
19:08:19 megan_w: I'd say yes
19:08:21 I'm a little biased since I helped design some of it (ametts did the heavy lifting, tho) and I would like to get more opinions
19:08:32 Link: https://review.openstack.org/#/c/36572/ (ametts' patch)
19:08:47 cppcabrera: #link *
19:08:53 Ahh.
19:08:57 #link https://review.openstack.org/#/c/36572/
19:08:59 oh, the one i'm looking at....
19:09:04 right
19:09:21 moving on
19:09:23 I got some comments on that, we can talk about those afterward
19:09:37 item #3
19:09:41 o/
19:10:03 ah, you said item #3
19:10:03 yo
19:10:16 mine is #2
19:10:17 :D
19:10:20 so, I started a blueprint: https://blueprints.launchpad.net/marconi/+spec/ops-stats
19:10:54 #link https://blueprints.launchpad.net/marconi/+spec/ops-stats
19:11:05 this is low priority, but still really nice to have for ops (and for tuning), so I kept it in Havana
19:11:17 how are we planning to do that? Middleware?
19:11:33 possibly. That's how swift does it.
19:11:52 I guess that makes more sense
19:11:56 oke-doke
19:12:20 +1 i like stats
19:12:33 next up we had some actions for flaper87 to investigate ceilometer
19:12:41 o/
19:12:43 yup
19:12:57 so, I did. The short answer is no, it's not possible to use it right now.
19:13:00 the long answer is:
19:13:27 The easiest way to use ceilo right now is to send notifications to some backend (rabbit) and let ceilo pull them from there
19:13:35 oic
19:13:37 good to know
19:13:42 they recently implemented a UDP listener that we could use as well
19:13:59 not sure how mature it is right now and whether we want to rely on that.
19:14:13 ok, we can keep an eye on it and revisit when it comes time to implement that bp
19:14:25 re statsd: No, they don't support it yet. They've talked about it and it'll be supported sometime in the future
19:14:48 but it hasn't been scheduled yet
19:14:54 ok
19:15:03 thanks! good info.
19:15:06 #info Ceilometer is not appropriate for Marconi stats at this time.
19:15:31 #info statsd not supported yet by Ceilometer but is on the roadmap
19:15:51 #info recommended way to send data to Ceilometer is via a broker
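Since Ceilometer and statsd integration are not options yet, the ops-stats blueprint would most likely land as WSGI middleware, as discussed above (that is Swift's approach). A minimal sketch of what such middleware could look like, emitting statsd-style timers over UDP; the class name, metric names, and the statsd_host/statsd_port defaults are illustrative assumptions, not part of the blueprint:

```python
import socket
import time


class StatsMiddleware(object):
    """Time each request and emit a statsd-style timer metric over UDP."""

    def __init__(self, app, statsd_host='127.0.0.1', statsd_port=8125):
        self.app = app
        self.addr = (statsd_host, statsd_port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def __call__(self, environ, start_response):
        start = time.time()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed_ms = int((time.time() - start) * 1000)
            method = environ.get('REQUEST_METHOD', 'GET')
            metric = 'marconi.request.%s:%d|ms' % (method, elapsed_ms)
            # Fire-and-forget over UDP so the request path stays cheap
            # even if the collector is unreachable.
            self.sock.sendto(metric.encode('utf-8'), self.addr)
```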
19:15:58 moving on
19:16:18 #topic QA cluster remaining TODOs
19:16:47 oz_akan_, malini: ^^
19:16:53 hey
19:16:54 Oz & I are running the load tests
19:17:22 malini: oz_akan_ any numbers? How's Marconi behaving?
19:17:47 oz_akan_: want to share the story?
19:17:56 * flaper87 is a bit scared now
19:17:58 ok
19:18:00 * ametts_ snickers
19:18:30 flaper87: don't worry, we are slow and steady
19:18:44 if we have a write concern of 1 and if we read from secondary nodes we get better results
19:19:08 otherwise, a post_claim takes 7 seconds under light load
19:19:25 with all the changes we did, it is a little over 1 second now
19:19:28 as in 1 user/sec (a user doing ~5 api calls)
19:19:36 main problem is database locks
19:19:57 I think we will have to create lots of databases on each instance
19:19:58 wow, that doesn't sound good
19:20:16 I mean, the 7 seconds thing
19:20:33 mmhh, are the indexes being used correctly?
19:20:41 the placement service can handle this problem as well, or we can just hash a queue to a database on a mongodb instance
19:20:56 did you guys enable the query profiler?
19:21:15 writes take a good amount of time
19:21:28 if I send only one request, it is in the milliseconds range
19:21:30 oz_akan_: The placement service will help but we should definitely improve that timing
19:21:59 to improve the performance: 1- we need to check if the code is optimal,
19:22:04 is anything whacky going on with the mongodb instances?
19:22:15 I will enable the profiler and create a report
19:22:20 kk
19:22:29 2- we need more databases per instance, as a write locks the whole database
19:22:35 can you guys get together and dig into this?
19:22:40 oz_akan_: +1
19:22:58 We should make sure the indexes are being used as expected
19:23:05 flaper87: +1
19:23:18 #action oz_akan_ and flaper87 to investigate mongo insert performance
19:23:23 almost every call marconi has includes a write operation
19:23:28 so we are very write heavy
19:23:51 oz_akan_: good point
19:23:56 and delete_queue is unresobavly slow
19:24:07 unreasonably
19:24:14 well, the good news is we have a baseline now
19:24:15 22 seconds, under light load
19:24:18 mongo writes are supposed to be fast, since they do not wait for disk
19:24:27 let's get to work making it better.
19:24:30 !!!
19:24:31 zyuan: Error: "!!" is not a valid command.
19:24:37 zyuan: depends on your write concern
19:24:50 if you expect the record to be journaled, it'll wait for disk
19:25:01 right, and if you have so many of them, reads get slow
19:25:02 flaper87: mongo has no journal
19:25:07 zyuan: it does
19:25:10 i see 600 requests in the read queue
19:25:28 * kgriffs rings cowbell
19:25:30 i see....
19:25:32 zyuan: FYI http://docs.mongodb.org/manual/core/journaling/
19:25:35 the journal can be disabled, which we did for now to see what is the max we can get
19:25:46 oz_akan_: ok, we can dig into this afterwards
19:25:52 moving on
19:25:53 flaper87: ok
19:26:00 malini: oz_akan_ thanks for getting that env up and running
19:26:06 many apple pies for you guys!
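For reference, the knobs discussed in this topic (write concern, journaling, read preference, and the query profiler that will feed the report) can be exercised from pymongo roughly as below; the host, database, collection, and query are placeholders, not the QA cluster's actual configuration.

```python
from pymongo import MongoClient

# w=1 acknowledges a write once the primary has applied it, journal=false
# skips waiting on the on-disk journal (what was disabled to find the
# ceiling), and readPreference lets reads go to secondaries.
client = MongoClient('mongodb://mongo1.example.org/'
                     '?w=1&journal=false&readPreference=secondaryPreferred')

db = client['marconi']

# Turn on the query profiler (level 2 = profile all operations) so slow
# writes show up in db.system.profile.
db.command('profile', 2, slowms=100)

# Sanity-check that a representative query is actually hitting an index.
print(db['messages'].find({'q': 'fizbit', 'p': 'project-1'}).explain())
```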
19:26:09 #topic Rebooting python-marconiclient
19:26:12 woohoo
19:26:21 malini is awesome with her tests
19:26:27 I just run them all day :)
19:26:36 (and all night!)
19:26:36 Thanks Oz.. I'll give you the cheque
19:26:56 so, cppcabrera, wanna talk about the client?
19:27:11 Yeah. :)
19:27:13 cppcabrera, you have the floor.
19:27:18 Alright...
19:27:33 So there's been a lot going on with regard to the client lately.
19:27:48 Primarily, brainstorming and setting the foundation for developing it.
19:28:18 Alessio helped bootstrap our installation process by submitting a few patches. We now use pbr.
19:28:33 The README is in the process of getting a big upgrade.
19:28:50 A HACKING file is also being added, based on Marconi server's own HACKING.rst.
19:28:56 i guess we don't have much freedom on the client's API design....
19:29:12 all openstack clients use the same set of .... dumb apis
19:29:21 So what's next - finalizing the deisgn and developing.
19:29:25 *design
19:29:49 cppcabrera: re Alessio's patch! I don't think it is *ready* but I think we should get it merged and keep working on it
19:30:01 but, I do want to make sure someone is going to work on that
19:30:30 I don't see that patch on the review queue any more. Where did it go?
19:30:39 I'm not happy with merging things that are not ready but I realize that's the only way to start working on marconi's client
19:30:52 flaper87: Is there a list of what needs to be done?
19:30:53 https://review.openstack.org/#/c/29255/
19:31:02 #link https://review.openstack.org/#/c/29255/
19:31:12 Thanks.
19:31:35 I'm in favor of merging that, especially since it bootstraps our common libraries effort.
19:31:37 ametts_: There isn't, I've got some things in my mind, I will write those down
19:32:01 As far as cleanup, I'll be happy to champion that cause.
19:32:14 that patch is ready; it's not the marconi client, it's the commonly used library.
19:32:41 pls notice that that patch takes code from several clients and puts it in the same package
19:32:54 so, there's a lot of cleanup to do there
19:32:57 kgriffs: You missed your chance to give flaper87 one of your action-thingies ^^^
19:33:14 Yes, cleanup will be required.
19:33:31 Three action points ^^ (flaper87 comments, merge patch, cleanup)
19:33:41 oh my, i guess it's a common part.
19:33:45 holymoly!
19:33:50 #action flaper87 to document client cleanup
19:33:59 #action flaper87 to merge patches
19:34:07 Also, finalize client design, distribute development effort.
19:34:16 So... 5 tasks. :)
19:34:17 cppcabrera: waaaaaaaaaaaaaaaaaaaaaaaaaaat ?
19:34:25 cppcabrera: dude, -1 Apple pie for ya!
19:34:32 Hahaha
19:34:36 #action flaper87, cppcabrera to finalize client design
19:34:49 :P
19:34:51 * kgriffs just works here
19:34:57 LOL
19:34:59 ok, anything else on that topic?
19:35:04 not from me
19:35:08 cppcabrera: shut up!
19:35:10 :P
19:35:18 one ques
19:35:22 Alright... that wraps it up for me.
19:35:27 do we really need the apiclient patch?
19:35:53 zyuan: yes and no. That's an effort we're doing to contribute back to Oslo's apiclient
19:36:10 it makes sense to unify the clients' common code and it isn't that simple to migrate existing clients
19:36:16 i definitely like apiclient going to oslo
19:36:22 so, we proposed to use marconiclient as an incubator for that package
19:36:27 but.... don't use marconi as a test field...
19:36:46 zyuan: before going into Oslo, it needs to have a POC and be used in at least 1 project
19:37:01 ok
19:37:03 marconiclient is brand new, it makes sense to use it as an incubator for it
19:37:15 +1
19:37:17 I would also like to have a role in the client development, after finalizing the design.
19:37:26 Sriram: +1
19:37:27 flaper87: agree -- marconi is in the best position to be the incubator.
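On the note above that the client now uses pbr: with pbr, setup.py shrinks to a small stub and the packaging metadata moves into setup.cfg. A minimal sketch, with illustrative setup.cfg values rather than python-marconiclient's actual configuration:

```python
# setup.py -- with pbr this is essentially the whole file; the package
# metadata lives in setup.cfg instead.
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

# setup.cfg (illustrative values):
#
#   [metadata]
#   name = python-marconiclient
#   summary = Client library for the Marconi queuing service
#
#   [files]
#   packages =
#       marconiclient
```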
19:37:36 then how about doing this:
19:37:40 It's a good place to start hammering in "One way to do it."
19:37:48 merge the patch
19:37:54 write the marconi client
19:37:58 after the client is done
19:38:11 decide which parts of the apiclient go away
19:38:19 like, garbage collection
19:38:24 zyuan: that's our plan
19:38:58 +1 cppcabrera and sriram tag team the client dev
19:39:09 then just merge the patch... we don't have much energy to design a lib without using it.
19:39:24 plus, there'll be another cleanup when we try to propose it for Oslo
19:39:40 zyuan: that's what we just said :P
19:39:52 let's just make sure we've documented the cleanup well so we don't forget
19:39:52 cool!
19:40:00 ok, anything else on that topic?
19:40:07 not from me
19:40:10 None from me. I'm happy. :)
19:40:21 that's it from me.
19:40:41 next topic?
19:40:54 bugs
19:41:02 #topic enforcing queue metadata reserved field convention
19:41:16 :(
19:41:33 ...
19:41:36 * ametts_ thinks that's the worst topic name ever
19:41:47 lol
19:41:54 agreed, heh
19:42:13 so, flaper87 wanted to discuss this real quick
19:42:41 right, I barely followed the discussion yesterday
19:43:04 so, what's the plan here?
19:43:12 the proposal is to actually allow clients to create custom field names that start with an underscore, even though those are reserved for future use by the server
19:43:33 no, there is no, no, NO custom fields
19:43:56 um, by custom I mean arbitrary metadata fields. we allow that
19:44:57 what future-use reserved fields do you envision?
19:45:08 are we violating YAGNI
19:45:12 for example, _default_message_ttl
19:45:25 or _message_delay
19:45:45 the idea is, we tell people not to use the naming convention of a leading underscore
19:45:51 there are many usages, since it's a per-queue attribute.
19:45:59 I don't think this is a YAGNI violation, we need to make sure we don't break backward compatibility in the future
19:46:35 absolutely, we don't break the reserved fields we already published.
19:46:39 if we ever end up wanting to use, say, _message_delay, then we need code or something to protect ourselves from blowing up on queues where a user actually used that field name in spite of us warning them not to
19:46:58 so, that's the proposal in a nutshell
19:47:07 1. don't enforce
19:47:33 2. mitigate name collisions
19:48:08 I prefer enforcing it, I mean, forbidding people to use our reserved convention
19:48:17 can we not enforce, but warn and discourage?
19:48:24 flaper87: +1
19:48:42 not just warn, we told them not to do so
19:48:49 +1 for enforcing.
19:48:52 i can't think of a reason to pet them
19:48:57 I'll let zyuan argue for his proposal
19:49:07 take the example i posted yesterday
19:49:17 actually, #2 was my contribution, seemed like it is a given if you do #1
19:49:23 open our python, and see if you can define a method named __a__
19:50:07 but python wouldn't clean up your function names between version updates
19:50:25 and this is not python, we're talking about people's data
19:50:45 we can't just introduce a new keyword and just say: "We told you so"
19:51:09 python broke all programs by upgrading from 2 to 3
19:51:27 yeap.. the restriction is somewhere deep in our API docs & I am sure most of us wouldn't ever have noticed it
19:51:32 if we don't want to break backward compat, we can clean up
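For context on the two options above (enforce, or only mitigate collisions), the "enforce" route amounts to a validation step along the following lines; the exception class and the known-fields set are illustrative (the field names come from the examples above), not Marconi's actual code.

```python
# Reserved fields the server already knows about (examples from the
# discussion above).
KNOWN_RESERVED_FIELDS = frozenset(['_default_message_ttl', '_message_delay'])


class ReservedMetadataError(ValueError):
    """Raised when queue metadata uses the reserved '_' prefix."""


def validate_queue_metadata(metadata):
    """Reject unknown keys that use the reserved leading underscore."""
    for key in metadata:
        if key.startswith('_') and key not in KNOWN_RESERVED_FIELDS:
            raise ReservedMetadataError(
                'metadata field %r uses the reserved underscore prefix'
                % key)
    return metadata


# Example: this raises, protecting a future release that may want to
# claim '_my_custom_flag' for the server itself.
# validate_queue_metadata({'_my_custom_flag': True, 'color': 'blue'})
```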
19:52:24 zyuan: can you speak briefly about client backwards compat?
19:52:57 iirc, that was your main driver behind this proposal
19:53:05 if we publish more _names between revisions
19:53:25 new clients can immediately use the new _names without waiting for servers to be upgraded
19:54:17 and the new _names should not change the semantics within 1 API version
19:54:39 mmh, not sure I follow :(
19:54:39 if a _name does change the semantics, then we upgrade the API version
19:54:53 what do you mean by _name?
19:55:14 reserved queue metadata in use
19:55:56 (5 minutes)
19:56:06 bugs…..
19:56:06 like, today we publish a per-queue attribute, say "_max_payload"
19:56:08 mmh, I don't see the benefit
19:56:18 it only affects performance, say
19:56:43 kgriffs: should we vote?
19:56:44 then clients can send it even if the server does not optimize for it
19:56:57 1 minute, then we'll vote
19:56:57 zyuan: we can enforce it in our client as well
19:57:00 not just the server
19:57:15 at least don't do it in the server
19:57:28 I vote to defer this discussion, and have zyuan write up his rationale in detail.
19:57:36 For next time.
19:57:42 +1
19:57:45 any opposed?
19:58:20 bugs?
19:58:25 #action zyuan to write up metadata handling proposal and review next time
19:58:29 #action bug triage
19:58:32 oops
19:58:34 they are solved... going to be solved
19:58:35 LOL
19:58:36 #topic bug triage
19:58:39 https://bugs.launchpad.net/marconi/+bug/1192196 —> Truncated JSON.. I saw this happen on claim messages as well. So need somebody to look at this one
19:58:56 #link https://bugs.launchpad.net/marconi/+bug/1192196
19:59:09 malini: I tried to replicate it today but didn't get the test running. Can we sync up on that tomorrow?
19:59:15 sure
19:59:16 that one is for Havana-2
19:59:32 I honestly have no idea why that's happening but replicating it is a good start
19:59:52 We have a few minor bugs as well in NEW status
20:00:14 at least one of them is low-hanging fruit
20:00:40 Json Truncation - i've seen it before (in a prior project). limitations of whatever json parser is being used and/or the connection being closed before the entire stream has been read
20:00:54 flaper87: this is ready to merge - https://review.openstack.org/#/c/36319/
20:01:17 ok folks, we are out of time
20:01:27 kgriffs: cool, will take a look at it
20:01:29 Good meeting.
20:01:30 Great meeting, folks.
20:01:32 really quick
20:01:35 w000t
20:01:38 thanks everyone
20:01:40 priorities for this week are FIX BUGS
20:01:43 MAKE IT FASTER
20:01:46 YES!!!!!!
20:01:49 YES!
20:01:54 :)
20:01:57 Great meeting!
20:02:05 Fix bugs faster? :)
20:02:18 ametts_: in parallel
20:02:26 we'll do more fine-grained prioritization next time
20:02:33 #endmeeting
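On the truncated-JSON bug (1192196) triaged above: one of the suspected causes was the connection closing before the entire stream has been read. A minimal, hypothetical sketch of defensively reading a complete JSON body from a WSGI request, assuming Content-Length is set; this is not Marconi's actual transport code.

```python
import json


def read_json_body(environ):
    """Read exactly Content-Length bytes before decoding the JSON body."""
    try:
        length = int(environ.get('CONTENT_LENGTH') or 0)
    except ValueError:
        length = 0

    stream = environ['wsgi.input']
    body = b''
    # A single read() may return a partial chunk on some servers, so keep
    # reading until the advertised length has arrived.
    while len(body) < length:
        chunk = stream.read(length - len(body))
        if not chunk:
            break  # the client closed the connection early
        body += chunk

    if len(body) < length:
        raise ValueError('request body was truncated')

    return json.loads(body.decode('utf-8'))
```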