19:03:33 <kgriffs> #startmeeting marconi
19:03:34 <openstack> Meeting started Thu Jul 11 19:03:33 2013 UTC.  The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:37 <openstack> The meeting name has been set to 'marconi'
19:03:39 <flaper87> \o/
19:03:43 <kgriffs> #topic review actions from last time
19:03:58 <kgriffs> #link http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-06-27-19.05.html
19:04:01 <oz_akan_> #idea I liked ideas
19:04:26 <kgriffs> ok, megan_w in da house?
19:04:31 <oz_akan_> (I need to learn more about irc commands)
19:04:34 <megan_w> yep
19:04:48 <kgriffs> so, the incubation application got on the wiki, correct?
19:04:53 <megan_w> yes
19:04:59 * flaper87 read it
19:05:02 <kgriffs> awesome, thanks
19:05:04 <flaper87> it looks very good
19:05:06 <flaper87> megan_w: +1
19:05:10 <megan_w> https://wiki.openstack.org/wiki/Marconi/Incubation
19:05:19 <kgriffs> any feedback on it real quick?
19:05:36 <ametts_> flaper87:  I got you more contributors. :)
19:05:51 <malini> if we merge common client lib, Alessio might be a contributor too ;)
19:06:15 <flaper87> megan_w: I would put "Kombu integration" instead of "Celery Integration"
19:06:26 <flaper87> ametts_: +1 for you
19:06:28 <flaper87> :D
19:06:43 <megan_w> ok, i can update
19:06:51 <kgriffs> groovy
19:06:53 <kgriffs> moving on
19:06:56 <Sriram> The docs looks great
19:07:02 <Sriram> *doc
19:07:16 <megan_w> are more contributors the only thing we need before submitting this?
19:07:17 <kgriffs> I see that ametts submitted a patch
19:07:25 <kgriffs> so that takes care of his action item
19:07:46 <kgriffs> everyone PLEASE go review that patch
19:08:15 <zyuan> which?
19:08:18 <ametts_> Yes, please.  I want zyuan to pick it up from here, so it would be good to get my part merged.
19:08:19 <flaper87> megan_w: I'd say yes
19:08:21 <kgriffs> I'm a little biased since I helped design some of it (ametts did the heavy lifting, tho) and I would like to get more opinions
19:08:32 <cppcabrera> Link: https://review.openstack.org/#/c/36572/ (ametts' patch)
19:08:47 <flaper87> cppcabrera: #link *
19:08:53 <cppcabrera> Ahh.
19:08:57 <cppcabrera> #link https://review.openstack.org/#/c/36572/
19:08:59 <zyuan> oh, the one i'm looking at....
19:09:04 <kgriffs> right
19:09:21 <kgriffs> moving on
19:09:23 <flaper87> I got some comments on that, we can talk about those afterward
19:09:37 <kgriffs> item #3
19:09:41 <flaper87> o/
19:10:03 <flaper87> ah, you said item #3
19:10:03 <kgriffs> yo
19:10:16 <flaper87> mine is #2
19:10:17 <flaper87> :D
19:10:20 <kgriffs> so, I started a blueprint: https://blueprints.launchpad.net/marconi/+spec/ops-stats
19:10:54 <flaper87> #link https://blueprints.launchpad.net/marconi/+spec/ops-stats
19:11:05 <kgriffs> this is low priority, but still really nice to have for ops (and for tuning), so I kept it in Havana
19:11:17 <flaper87> how are we planning to do that? Middleware ?
19:11:33 <kgriffs> possibly. That's how swift does it.
19:11:52 <flaper87> I guess that makes more sense
19:11:56 <flaper87> oke-doke
19:12:20 <amitgandhi> +1 i like stats
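The middleware approach mentioned above (the one Swift uses) could look roughly like the sketch below. This is only an illustration: the `StatsMiddleware` name, the metric naming scheme, and the statsd address are assumptions, not actual Marconi or Swift code.

```python
import socket
import time


class StatsMiddleware(object):
    """Hypothetical WSGI middleware emitting statsd-style timing
    metrics over UDP, similar in spirit to Swift's approach."""

    def __init__(self, app, host='127.0.0.1', port=8125, prefix='marconi'):
        self.app = app
        self.prefix = prefix
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def __call__(self, environ, start_response):
        start = time.time()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed_ms = int((time.time() - start) * 1000)
            metric = '%s.%s:%d|ms' % (
                self.prefix, environ.get('REQUEST_METHOD', 'GET'), elapsed_ms)
            # Fire-and-forget: a UDP send never blocks the request path,
            # which is why statsd is popular for ops counters
            self.sock.sendto(metric.encode('utf-8'), self.addr)
```

Being plain WSGI, it could wrap the Marconi app in the paste pipeline without touching transport code.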
19:12:33 <kgriffs> next up we had some actions for flaper87 to investigate ceilometer
19:12:41 <flaper87> o/
19:12:43 <flaper87> yup
19:12:57 <flaper87> so, I did. The short answer is no, it's not possible to use it right now.
19:13:00 <flaper87> the long answer is:
19:13:27 <flaper87> The easiest way to use ceilo right now is to send notifications to some backend (rabbit) and let ceilo pull them from there
19:13:35 <kgriffs> oic
19:13:37 <kgriffs> good to know
19:13:42 <flaper87> they recently implemented an UDP listener that we could use as well
19:13:59 <flaper87> not sure how mature it is right now and whether we want to rely on that.
19:14:13 <kgriffs> ok, we can keep an eye on it and revisit when it comes time to implement that bp
19:14:25 <flaper87> re statsd: No, they don't support it yet. They've talked about it and it'll be supported sometime in the future
19:14:48 <flaper87> but it hasn't been scheduled yet
19:14:54 <kgriffs> ok
19:15:03 <kgriffs> thanks! good info.
19:15:06 <cppcabrera> #info Ceilometer is not appropriate for Marconi stats at this time.
19:15:31 <kgriffs> #info statsd not supported yet by Ceilometer but is on roadmap
19:15:51 <kgriffs> #info recommended way to send data to Ceilometer is via a broker
19:15:58 <kgriffs> moving on
19:16:18 <kgriffs> #topic QA cluster remaining TODOs
19:16:47 <kgriffs> oz_akan_, malini: ^^
19:16:53 <oz_akan_> hey
19:16:54 <malini> Oz & me are running the load tests
19:17:22 <flaper87> malini: oz_akan_ any numbers? How's Marconi behaving ?
19:17:47 <malini> oz_akan_ : want to share the story ?
19:17:56 * flaper87 is a bit scared now
19:17:58 <oz_akan_> ok
19:18:00 * ametts_ snickers
19:18:30 <malini> flaper87: dont worry, we are slow and steady
19:18:44 <oz_akan_> if we use a write concern of 1 and read from secondary nodes we get better results
19:19:08 <oz_akan_> otherwise, a post_claim takes 7 seconds under light load
19:19:25 <oz_akan_> with all changes we did, it is a little over 1 second now
19:19:28 <malini> as in 1 user/sec (a user doing ~5 api calls)
19:19:36 <oz_akan_> main problem is database locks
19:19:57 <oz_akan_> I think we will have to create lots of databases on each instance
19:19:58 <flaper87> wow, that doesn't sound good
19:20:16 <flaper87> I mean, the 7 seconds thing
19:20:33 <flaper87> mmhh, are the indexes being used correctly?
19:20:41 <oz_akan_> placement service can handle this problem as well, or we can just hash a queue on a database on a mongodb instance
19:20:56 <flaper87> did you guys enable the query profiler ?
19:21:15 <oz_akan_> writes take good amount of time
19:21:28 <oz_akan_> if I send only one request, it is in milliseconds range
19:21:30 <flaper87> oz_akan_: The placement service will help but we should definitely improve that timing
19:21:59 <oz_akan_> to improve the performance: 1- we need to check if the code is optimal,
19:22:04 <amitgandhi> is anything whacky going on with the mongodb instances?
19:22:15 <oz_akan_> I will enable profiler and create a report
19:22:20 <kgriffs> kk
19:22:29 <oz_akan_> 2- we need more databases per instance, as a write locks the whole database
19:22:35 <kgriffs> can you guys get together and dig into this?
19:22:40 <flaper87> oz_akan_: +1
19:22:58 <flaper87> We should make sure the indexes are being used as expected
19:23:05 <oz_akan_> flaper87: +1
19:23:18 <kgriffs> #action oz_akan_ and flaper87 to investigate mongo insert performance
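The "hash a queue on a database" idea floated above could be as simple as the following sketch. The `NUM_DATABASES` knob and naming scheme are illustrative assumptions, not a real Marconi setting; the point is that MongoDB's write lock is per database, so spreading queues across several databases reduces lock contention.

```python
import hashlib

NUM_DATABASES = 8  # assumption: a tuning knob, not a real Marconi option


def database_for_queue(project, queue, num_databases=NUM_DATABASES):
    """Deterministically map a queue onto one of several databases so
    concurrent writes contend for different per-database locks."""
    key = ('%s/%s' % (project, queue)).encode('utf-8')
    digest = hashlib.md5(key).hexdigest()
    return 'messages_%02d' % (int(digest, 16) % num_databases)
```

Because the mapping is deterministic, every API node routes a given queue to the same database without any coordination, and a placement service could later override it per queue.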
19:23:23 <oz_akan_> almost every call marconi has includes a write operation
19:23:28 <oz_akan_> so we are very write heavy
19:23:51 <flaper87> oz_akan_: good point
19:23:56 <oz_akan_> and delete_queue is unresobavly slow
19:24:07 <oz_akan_> unreasonably
19:24:14 <kgriffs> well, the good news is we have a baseline now
19:24:15 <oz_akan_> 22 seconds, under light load
19:24:18 <zyuan> mongo writes are supposed to be fast, since they do not wait for disk
19:24:27 <kgriffs> let's get to work making it better.
19:24:30 <zyuan> !!!
19:24:31 <openstack> zyuan: Error: "!!" is not a valid command.
19:24:37 <flaper87> zyuan: depends on your write concern
19:24:50 <flaper87> if you expect the record to be journaled, it'll wait for disk
19:25:01 <oz_akan_> right, and if you have so many of them, reads get slow
19:25:02 <zyuan> flaper87: mongo has no journal
19:25:07 <flaper87> zyuan: it does
19:25:10 <oz_akan_> i see 600 requests in read queue
19:25:28 * kgriffs rings cowbell
19:25:30 <zyuan> i see....
19:25:32 <flaper87> zyuan: FYI http://docs.mongodb.org/manual/core/journaling/
19:25:35 <oz_akan_> journal can be disabled, which we did for now to see what is the max we can get
19:25:46 <flaper87> oz_akan_: ok, we can dig into this afterwards
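For reference, disabling the journal as described above is a one-line change in the mongod config of that era (a benchmarking-only setting; exact option names vary by MongoDB version):

```ini
# mongod.conf -- benchmarking only: with the journal off, writes that
# were acknowledged but not yet flushed are lost on a crash
nojournal = true
```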
19:25:52 <kgriffs> moving on
19:25:53 <oz_akan_> flaper87: ok
19:26:00 <flaper87> malini: oz_akan_ thanks for getting that env up and running
19:26:06 <flaper87> many apple pies for you guys!
19:26:09 <kgriffs> #topic Rebooting python-marconiclient
19:26:12 <malini> woohoo
19:26:21 <oz_akan_> malini: is awesome with her tests
19:26:27 <oz_akan_> I just run them all day :)
19:26:36 <cppcabrera> (and all night!)
19:26:36 <malini> Thanks Oz..I'll give you the cheque
19:26:56 <kgriffs> so, cppcabrera, wanna talk about the client?
19:27:11 <cppcabrera> Yeah. :)
19:27:13 <Sriram> cppcabrera, you have the floor.
19:27:18 <cppcabrera> Alright...
19:27:33 <cppcabrera> So there's been a lot going on with regards to the client lately.
19:27:48 <cppcabrera> Primarily, brainstorming and setting the foundation for developing it.
19:28:18 <cppcabrera> Alessio helped bootstrap our installation process by submitting a few patches. We now use pbr.
19:28:33 <cppcabrera> The README is in the process of getting a big upgrade.
19:28:50 <cppcabrera> A HACKING file is also being added, based on Marconi server's own HACKING.rst.
19:28:56 <zyuan> i guess we don't have much freedom on client's API design....
19:29:12 <zyuan> all openstack clients use the same set of .... dumb apis
19:29:21 <cppcabrera> So what's next - finalizing the deisgn and developing.
19:29:25 <cppcabrera> *design
19:29:49 <flaper87> cppcabrera: re Alessio's patch! I don't think it is *ready* but I think we should get it merged and keep working on it
19:30:01 <flaper87> but, I do want to make sure someone is going to work on that
19:30:30 <cppcabrera> I don't see that patch on the review queue any more. Where did it go?
19:30:39 <flaper87> I'm not happy with merging things that are not ready but I realize that's the only way to start working on marconi's client
19:30:52 <ametts_> flaper87:  Is there a list of what needs to be done?
19:30:53 <zyuan> https://review.openstack.org/#/c/29255/
19:31:02 <flaper87> #link https://review.openstack.org/#/c/29255/
19:31:12 <cppcabrera> Thanks.
19:31:35 <cppcabrera> I'm in favor of merging that, especially since it bootstraps our common libraries effort.
19:31:37 <flaper87> ametts_: There isn't, I've got some things in my mind, I will write those down
19:32:01 <cppcabrera> As far as cleanup, I'll be happy to champion that cause.
19:32:14 <zyuan> that patch is ready, it's not the marconi client; it's the commonly used library.
19:32:41 <flaper87> pls, notice that that patch takes code from several clients and puts it in the same package
19:32:54 <flaper87> so, there's a lot of cleanup to do there
19:32:57 <ametts_> kgriffs: You missed your chance to give flaper87 one of your action-thingies ^^^
19:33:14 <Sriram> Yes cleanup will be required.
19:33:31 <cppcabrera> Three action points ^^ (flaper87 comments, merge patch, cleanup)
19:33:41 <zyuan> oh my, i guess it's a common part.
19:33:45 <flaper87> holymoly!
19:33:50 <kgriffs> #action flaper87 to document client cleanup
19:33:59 <kgriffs> #action flaper87 to merge patches
19:34:07 <cppcabrera> Also, finalize client design, distribute development effort.
19:34:16 <cppcabrera> So... 5 tasks. :)
19:34:17 <flaper87> cppcabrera: waaaaaaaaaaaaaaaaaaaaaaaaaaat ?
19:34:25 <flaper87> cppcabrera: dude, -1 Apple pie for ya!
19:34:32 <cppcabrera> Hahaha
19:34:36 <kgriffs> #action flaper87, cppcabrera to finalize client design
19:34:49 <cppcabrera> :P
19:34:51 * kgriffs just works here
19:34:57 <flaper87> LOL
19:34:59 <kgriffs> ok, anything else on that topic?
19:35:04 <flaper87> not from me
19:35:08 <flaper87> cppcabrera: shut up!
19:35:10 <flaper87> :P
19:35:18 <zyuan> one ques
19:35:22 <cppcabrera> Alright... that wraps it up for me.
19:35:27 <zyuan> do we really need the apiclient patch?
19:35:53 <flaper87> zyuan: yes and no, That's an effort we're doing to contribute back to Oslo's apiclient
19:36:10 <flaper87> it makes sense to unify client's common code and it isn't that simple to migrate existing clients
19:36:16 <zyuan> i definitely like apiclient goes to oslo
19:36:22 <flaper87> so, we proposed to use marconiclient as incubator for that package
19:36:27 <zyuan> but.... don't use marconi as a test field...
19:36:46 <flaper87> zyuan: before going into Oslo, it needs to have a POC and be used in at least 1 project
19:37:01 <zyuan> ok
19:37:03 <flaper87> marconiclient is brand new, it makes sense to use it as incubator for it
19:37:15 <cppcabrera> +1
19:37:17 <Sriram> I would also like to have a role in the client development, after finalizing design.
19:37:26 <flaper87> Sriram: +1
19:37:27 <ametts_> flaper87:  agree  -- marconi is in the best position to be the incubator.
19:37:36 <zyuan> then how about do this:
19:37:40 <cppcabrera> It's a good place to start hammering in "One way to do it."
19:37:48 <zyuan> merge the patch
19:37:54 <zyuan> write marconi client
19:37:58 <zyuan> after the client is done
19:38:11 <zyuan> decide which part of the apiclient goes away
19:38:19 <zyuan> like, garbage collection
19:38:24 <flaper87> zyuan: that's our plan
19:38:58 <amitgandhi> +1 cppcabrera and sriram tag team the client dev
19:39:09 <zyuan> then just merge the patch... we don't have much energy to design a lib without using it.
19:39:24 <flaper87> plus, there'll be another cleanup when we'll try to propose it for Oslo
19:39:40 <flaper87> zyuan: that's what we just said :P
19:39:52 <amitgandhi> lets just make sure we've documented the cleanup well so we dont forget
19:39:52 <flaper87> cool!
19:40:00 <kgriffs> ok, anything else on that topic?
19:40:07 <flaper87> not from me
19:40:10 <cppcabrera> None from me. I'm happy. :)
19:40:21 <Sriram> thats it from me.
19:40:41 <zyuan> next topic?
19:40:54 <malini> bugs
19:41:02 <kgriffs> #topic enforcing queue metadata reserved field convention
19:41:16 <malini> :(
19:41:33 <zyuan> ...
19:41:36 * ametts_ thinks that's the worst topic name ever
19:41:47 <flaper87> lol
19:41:54 <kgriffs> agreed, heh
19:42:13 <kgriffs> so, flaper87 wanted to discuss this real quick
19:42:41 <flaper87> right, I barely followed the discussion yesterday
19:43:04 <flaper87> so, what's the plan here?
19:43:12 <kgriffs> the proposal is to actually allow clients to create custom field names that start with an underscore, even though those are reserved for future use by the server
19:43:33 <zyuan> no, there is no, no, NO custom fields
19:43:56 <kgriffs> um, by custom I mean arbitrary metadata fields. we allow that
19:44:57 <amitgandhi> what future use reserved fields do you envision?
19:45:08 <amitgandhi> are we violating YAGNI
19:45:12 <kgriffs> for example, _default_message_ttl
19:45:25 <kgriffs> or _message_delay
19:45:45 <kgriffs> the idea is, we tell people not to use the naming convention of preceding underscore
19:45:51 <zyuan> there are many usages, since it's a per-queue attribute.
19:45:59 <flaper87> I don't think this is a YAGNI violation, we need to make sure we don't break backward compatibility in the future
19:46:35 <zyuan> absolutely, we don't break the reserved fields we already published.
19:46:39 <kgriffs> if we ever end up wanting to use, say, _message_delay, then we need code or something to protect ourselves from blowing up on queues where a user actually used that field name in spite of us warning them not to
19:46:58 <kgriffs> so, that's the proposal in a nutshell
19:47:07 <kgriffs> 1. don't enforce
19:47:33 <kgriffs> 2. mitigate name collisions
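For contrast with option 1, enforcing the convention server-side could be a small validation step like the sketch below. The function name and whitelist mechanics are hypothetical; `_default_message_ttl` and `_message_delay` are the example reserved fields mentioned earlier in the meeting, not a published API.

```python
RESERVED_PREFIX = '_'

# Hypothetical whitelist of server-defined fields; these names are the
# examples from the discussion, not an actual published set
KNOWN_RESERVED = {'_default_message_ttl', '_message_delay'}


def check_metadata(metadata, enforce=True):
    """Reject (or, with enforce=False, merely report) client metadata
    keys that use the reserved leading-underscore convention."""
    offending = [k for k in metadata
                 if k.startswith(RESERVED_PREFIX) and k not in KNOWN_RESERVED]
    if enforce and offending:
        raise ValueError('reserved field names: %s' % ', '.join(offending))
    return offending
```

With `enforce=False` this doubles as the "warn and discourage" middle ground amitgandhi suggests below.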
19:48:08 <flaper87> I prefer enforcing it, I mean, forbid people to use our reserved convention
19:48:17 <amitgandhi> can we not enforce, but warn and discourage?
19:48:24 <malini> flaper87: +1
19:48:42 <zyuan> not just warn, we told them not to do so
19:48:49 <Sriram1> +1 for enforcing.
19:48:52 <zyuan> i can't think of a reason to pet them
19:48:57 <kgriffs> I'll let zyuan argue for his proposal
19:49:07 <zyuan> take the example i posted yesterday
19:49:17 <kgriffs> actually, #2 was my contribution, seemed like it is a given if you do #1
19:49:23 <zyuan> open our python, and see if you can define a method named __a__
19:50:07 <malini> but python wouldn't clean up your function names, between version updates
19:50:25 <flaper87> and this is not python, we're talking about people's data
19:50:45 <flaper87> we can't just introduce a new keyword and just say: "We told you so"
19:51:09 <zyuan> python broke all programs by upgrading from 2 to 3
19:51:27 <malini> yeap..the restriction is somewhere deep in our API docs & I am sure most of us wouldnt ever have noticed it
19:51:32 <zyuan> if we don't want to break backward compatibility, we can clean up
19:52:24 <kgriffs> zyuan: can you speak briefly about client backwards compat?
19:52:57 <kgriffs> iirc, that was your main driver behind this proposal
19:53:05 <zyuan> if we publish more _names between revisions
19:53:25 <zyuan> new clients can immediately use new _names without waiting for servers to be upgraded
19:54:17 <zyuan> and the new _names should not change the semantics within 1 API version
19:54:39 <flaper87> mmh, not sure I follow :(
19:54:39 <zyuan> if a _name does change the semantics, then we upgrade API version
19:54:53 <flaper87> what do you mean by _name ?
19:55:14 <zyuan> reserved queue metadata in use
19:55:56 <kgriffs> (5 minutes)
19:56:06 <malini> bugs…..
19:56:06 <zyuan> like, today we publish a per-queue attribute, say "_max_payload"
19:56:08 <flaper87> mmh, I don't see the benefit
19:56:18 <zyuan> it only affects performance, say
19:56:43 <flaper87> kgriffs: should we vote?
19:56:43 <zyuan> then clients can send it even if the server does not optimize for it
19:56:57 <kgriffs> 1 minute, then we'll vote
19:56:57 <flaper87> zyuan: we can enforce it in our client as well
19:57:00 <flaper87> not just the server
19:57:15 <zyuan> at least don't do it in server
19:57:28 <cppcabrera> I vote to defer this discussion, and have zyuan write up his rationale in detail.
19:57:36 <cppcabrera> For next time.
19:57:42 <flaper87> +1
19:57:45 <kgriffs> any opposed?
19:58:20 <amitgandhi> bugs?
19:58:25 <kgriffs> #action zyuan to write up metadata handling proposal and review next time
19:58:29 <kgriffs> #action bug triage
19:58:32 <kgriffs> oops
19:58:34 <zyuan> they are going to be solved
19:58:35 <flaper87> LOL
19:58:36 <kgriffs> #topic bug triage
19:58:39 <malini> https://bugs.launchpad.net/marconi/+bug/1192196 —> Truncated JSON..I saw this happen on claim messages as well. So need somebody to look at this one
19:58:56 <cppcabrera> #link https://bugs.launchpad.net/marconi/+bug/1192196
19:59:09 <flaper87> malini: I tried to replicate it today but didn't get the test running. Can we sync up on that tomorrow ?
19:59:15 <malini> sure
19:59:16 <zyuan> that one is for Havana-2
19:59:32 <flaper87> I honestly have no idea why that's happening but replicating it is a good start
19:59:52 <malini> We have a few minor bugs as well in NEW status
20:00:14 <malini> at least one of them is low hanging fruit
20:00:40 <amitgandhi> Json Truncation - ive seen it before (in a prior project).  limitations by whatever json parser being used and/or connection being closed before the entire stream has been read
20:00:54 <kgriffs> flaper87: this is ready to merge - https://review.openstack.org/#/c/36319/
20:01:17 <kgriffs> ok folks, we are out of time
20:01:27 <flaper87> kgriffs: cool, will take a look at it
20:01:29 <cppcabrera> Good meeting.
20:01:30 <ametts_> Great meeting, folks.
20:01:32 <kgriffs> really quick
20:01:35 <flaper87> w000t
20:01:38 <flaper87> thanks everyone
20:01:40 <kgriffs> priorities for this week are FIX BUGS
20:01:43 <kgriffs> MAKE IT FASTER
20:01:46 <malini> YES!!!!!!
20:01:49 <flaper87> YES!
20:01:54 <cppcabrera> :)
20:01:57 <Sriram1> Great meeting!
20:02:05 <ametts_> Fix bugs faster? :)
20:02:18 <flaper87> ametts_: in parallel
20:02:26 <kgriffs> we'll do more fine-grained prioritization next time
20:02:33 <kgriffs> #endmeeting