13:01:16 <isviridov> #startmeeting magnetodb
13:01:16 <rushiagr> hello all!
13:01:16 <openstack> Meeting started Thu Sep 18 13:01:16 2014 UTC and is due to finish in 60 minutes.  The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:20 <openstack> The meeting name has been set to 'magnetodb'
13:01:48 <isviridov> hello everybody
13:01:58 <dukhlov> Hello
13:02:19 <rushiagr> o/
13:02:26 <ajayaa> o/
13:02:27 <isviridov> Let me find the agenda :)
13:02:31 * rushiagr tries to find out the agenda
13:02:48 <ajayaa> https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda
13:02:55 <isviridov> ajayaa, thank you
13:03:19 <ajayaa> isviridov, welcome!
13:03:26 <isviridov> #topic Juno milestone status
13:04:14 <isviridov> So, we have released the juno-3 milestone with a lot of implemented blueprints and fixed bugs
13:04:35 <ajayaa> that's great.
13:04:38 <isviridov> #link https://launchpad.net/magnetodb/juno/juno-3
13:05:47 <isviridov> Actually it is our first release of mdb following openstack versioning.
13:06:10 <isviridov> And it is mostly just a cut of what we have done.
13:06:52 <isviridov> We have also released python-magnetodbclient
13:07:16 <rushiagr> but we are not stopping feature development in the RC cycle as of now, am I right?
13:07:59 <ajayaa> Where are the python-magnetodbclient and cli located?
13:08:31 <rushiagr> ajayaa: https://github.com/stackforge/python-magnetodbclient
13:08:56 <ikhudoshyn_> rushiagr, yes, several things are not merged yet
13:09:11 <isviridov> rushiagr, yep. You are right. There is no expectation for us to strictly follow the procedure. However we will avoid huge features, giving priority to bugs.
13:09:55 <rushiagr> isviridov: okay. I think it is fine even if we put features in for this cycle
13:10:05 <isviridov> We now have a BP approval process, so any new feature will be agreed with the team
13:10:32 <rushiagr> anyways we're not 'releasing' anything. Nobody is going to package it, so blocking features for J cycle will only hamper development efforts
13:10:37 <isviridov> ajayaa, #link https://launchpad.net/python-magnetodbclient
13:10:59 <rushiagr> from K hopefully we'll be mature enough to follow 'no features in RCs plz' funda
13:11:17 <rushiagr> isviridov: cool
13:11:55 <isviridov> rushiagr, yep. Usually packaging efforts start from incubation, from my experience in Trove
13:12:07 <ikhudoshyn_> in fact, i believe we'll have it packaged once we hit release, but not RC of course
13:12:22 <rushiagr> ikhudoshyn_: oh, okay. That's good to know
13:12:54 <ajayaa> ikhudoshyn_, Can you please elaborate a bit on async table creation and deletion?
13:13:05 <ikhudoshyn_> sure,
13:13:06 <isviridov> ikhudoshyn_, not sure about it. There is effort within Symantec, but not public
13:13:57 <keith_newstadt> rushiagr: are there new features in particular that you're looking to start development on, or that you'd like to see implemented?
13:14:10 <ikhudoshyn_> ajayaa, we'll get to that a lil' bit later
13:14:13 <isviridov> keith_newstadt, ajayaa let's go through the agenda
13:14:52 <isviridov> Next topic?
13:15:23 <rushiagr> keith_newstadt: I'd like to look into trove integration more, but we can discuss it once its turn comes :)
13:15:24 <ikhudoshyn_> isviridov, I meant having mdb in pypi at least, don't expect rpms or debs
13:15:46 <ikhudoshyn_> isviridov, +1 to next topic
13:16:12 <isviridov> #topic Asynchronous table creation and removal
13:16:17 <isviridov> #link https://blueprints.launchpad.net/magnetodb/+spec/async-schema-operations
13:16:38 <isviridov> ikhudoshyn_, news?
13:16:48 <ikhudoshyn_> here they go
13:17:21 <ajayaa> Is table creation not asynchronous now?
13:17:33 <ikhudoshyn_> well, we faced a situation where several concurrent table create/delete requests could drive C* crazy
13:18:04 <ikhudoshyn_> so what we want is to have all such requests executed one by one
13:18:13 <ajayaa> Are we looking at some kind of scheduling of table creations?
13:18:27 <ajayaa> or just one by one for table creation?
13:18:38 <rushiagr> ikhudoshyn_: It would be great if a detailed spec is written on what is going to be implemented
13:18:40 <ikhudoshyn_> before that we had async_storage_manager that did the job, but on a per-API-node basis
13:18:53 * isviridov thinks that SPEC could help a lot with all questions
13:19:09 <ikhudoshyn_> rushiagr, sure, it will get published pretty soon
13:19:40 <isviridov> #action ikhudoshyn_ write spec
13:19:54 <ajayaa> ikhudoshyn_, How many concurrent table creations drive C* crazy in your experience?
13:19:54 <ikhudoshyn_> in brief, we gonna use standard MQ shipped with OS via oslo.messaging.rpc
13:20:08 <rushiagr> isviridov: the spec (BP) says "Suggested implementation includes...", but nothing concrete..
13:20:37 <ikhudoshyn_> ajayaa, it depends on several factors. We first started to observe that on up to 30 concurrent requests
13:20:48 <isviridov> rushiagr, +1 we need more detailed description
13:21:01 <ajayaa> And how many nodes in c*?
13:21:11 <ikhudoshyn_> but (recoverable) networking issues could cause it even without any load at all
13:21:25 <rushiagr> I'm +1 on the feature. It should definitely be asynchronous, with a limit on the number of concurrent create/delete table operations a user can perform
13:22:13 <isviridov> ikhudoshyn_, I've filed an action for you to write spec. Ok?
13:22:23 <ikhudoshyn_> isviridov, yes, thanks
13:22:40 <isviridov> Next topic?
13:22:53 <rushiagr> I would like to review the spec too, when it is put up on launchpad..
13:22:54 <ikhudoshyn_> should we elaborate this topic any more or could we move on?
13:23:25 <ikhudoshyn_> rushiagr, I'll publish it ASAP
13:23:41 <rushiagr> I think we can move ahead. I'll wait for the detailed spec. Thanks
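For reference, the one-by-one execution ikhudoshyn_ describes can be sketched with a single worker draining a FIFO queue. This is only a minimal in-process illustration, not MagnetoDB's implementation (which plans to use the standard MQ via oslo.messaging.rpc); the names `SchemaOperationWorker` and `submit` are hypothetical:

```python
import queue
import threading

class SchemaOperationWorker:
    """Serializes create/delete table requests so only one schema
    operation hits the backend at a time (hypothetical sketch)."""

    def __init__(self, executor):
        self._executor = executor  # callable performing the real operation
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def submit(self, op, table_name):
        # Enqueue the request and return immediately; the API layer
        # would report the table as CREATING/DELETING meanwhile.
        done = threading.Event()
        self._queue.put((op, table_name, done))
        return done

    def _run(self):
        while True:
            op, table_name, done = self._queue.get()
            try:
                self._executor(op, table_name)
            finally:
                done.set()
                self._queue.task_done()

# Usage: concurrent submissions are executed strictly one by one.
executed = []
worker = SchemaOperationWorker(lambda op, name: executed.append((op, name)))
events = [worker.submit("create", "t%d" % i) for i in range(5)]
for e in events:
    e.wait(timeout=5)
print(executed)
```

The single consumer thread plays the role the MQ consumer would in the real design: however many API nodes enqueue requests, schema mutations reach Cassandra sequentially.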
13:23:49 <isviridov> #topic Monitoring API
13:23:54 <isviridov> #link https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api
13:24:06 <isviridov> ominakov-, around?
13:25:11 <ominakov-> yep, i've published patch with getting size of table through JMX https://review.openstack.org/#/c/122330/
13:25:35 <isviridov> #link https://review.openstack.org/#/c/122330/
13:25:42 <ominakov-> but I faced a problem with item count
13:26:21 <rushiagr> +1 to the nova-specs like writing of blueprint
13:26:40 <ajayaa> ominakov, isviridov, Are we looking at only two parameters? table_size and item_count
13:27:04 <ajayaa> *as a part of monitoring apis.
13:27:05 <ominakov-> there is no JMX bean for this metric, and from C* 2.1 even nodetool doesn't show this counter
13:27:39 <isviridov> ajayaa, currently yes
13:27:42 <ominakov-> for now we are looking at these two metrics only
13:28:17 <isviridov> ominakov-, I think guys from C* could help
13:28:53 <ominakov-> isviridov, yes i'll ask this question in C* irc today
13:29:51 <dukhlov> About problem with C* 2.1
13:30:18 <isviridov> ominakov-, looking forward to more details and info from the C* guys
13:30:19 <dukhlov> do you have some links or docs which explains this?
13:31:02 <ominakov-> dukhlov, not yet, i just read the C* code
* isviridov thinks that code is a good source of truth
13:31:37 <rushiagr> quick question: table size is in KBs (storage) or number of rows/items (count)?
13:32:09 <isviridov> rushiagr, both
13:32:41 <isviridov> rushiagr dukhlov ominakov- moving on?
13:32:54 <rushiagr> isviridov: sure
13:33:00 <ominakov-> so, if anyone has some ideas for that - welcome
13:33:17 <ominakov-> isviridov, yep
13:33:18 <isviridov> ominakov-, sure
13:33:24 <isviridov> #topic Light weight session for authorization
13:33:29 <isviridov> #link https://blueprints.launchpad.net/magnetodb/+spec/light-weight-session
13:33:43 <isviridov> Oh, it is not assigned
13:33:48 <isviridov> achudnovets, around?
13:33:53 <achudnovets> yep
13:34:10 <achudnovets> We are using keystonemiddleware package for token-auth now. It can cache tokens. In memory or in memcached.
13:34:13 <isviridov> What do we do with it?
13:34:29 <achudnovets> Now I'm researching how tokens without the service catalog impact magnetodb performance.
13:34:41 <achudnovets> I think there will not be a great impact, but we still need to know for sure.
13:34:57 <achudnovets> Maybe we'll need to change the client code to force it to turn the service catalog off.
13:35:13 <achudnovets> Even if we don't get a big performance boost we can turn the service catalog off to avoid this bug https://bugs.launchpad.net/keystone/+bug/1190149
13:35:15 <uvirtbot> Launchpad bug 1190149 in heat/havana "Token auth fails when token is larger than 8k" [Critical,Fix committed]
13:35:28 <keith_newstadt> also good to reduce overall network traffic
13:35:35 * isviridov would like to avoid adding additional session layer
13:35:58 <ajayaa> +1 isviridov
13:36:01 <rushiagr> +1.
13:36:05 <achudnovets> +1
13:36:23 <keith_newstadt> +1
13:36:26 <aostapenko> +1
13:36:47 <rushiagr> why don't we just use the short UUID token of keystone, which is now the default? will that involve a call to keystone for every request?
13:36:47 <isviridov> keith_newstadt, a PKI token without the catalog is about 1KB, which is close to the body size
13:36:53 <keith_newstadt> we'd need some numbers to show that it's worthwhile before we'd implement it
13:37:02 <isviridov> keith_newstadt, +1
13:37:11 <rushiagr> keith_newstadt: +1
13:37:31 <keith_newstadt> rushiagr: right, UUID causes a callback to keystone on every request
13:37:37 <ajayaa> rushiagr, yes
13:37:50 <achudnovets> rushiagr: +1 to keith_newstadt comment
13:38:00 <keith_newstadt> we have use cases where clients will be making 10s of thousands of requests per second
13:38:03 <isviridov> rushiagr, yes it is. We will kill keystone with mdb
13:38:07 <keith_newstadt> that is too much load to put on keystone
13:38:36 <ajayaa> or worse we will have a slow mdb!
13:39:06 <isviridov> #action achudnovets provide numbers about performance impact from big PKI token in ML
13:39:13 <isviridov> achudnovets, is it ok with you?
13:39:36 <achudnovets> isviridov: sure
13:40:03 <isviridov> achudnovets, keith_newstadt rushiagr ajayaa moving on?
13:40:14 <keith_newstadt> sounds good
13:40:18 <achudnovets> +1
13:40:32 <rushiagr> keith_newstadt: does keystone return the time for which that token is valid? if yes (say one hour), we need not make a call to keystone for that one hour, provided the user uses the same token
13:40:35 <isviridov> #topic Next meeting arrangement
13:40:55 <achudnovets> rushiagr: yes, it does
13:41:07 <rushiagr> sorry, I'm a newbie to keystone. We can take these questions later too, if it is blocking important items
13:41:28 <ajayaa> rushiagr, the token could be revoked!
13:41:40 <rushiagr> ajayaa: oh. damn
13:41:52 <achudnovets> and actually tokens can be revoked on the server side. So the client checks revoked tokens from time to time
13:41:53 <rushiagr> okay, let's move ahead to the next topic
13:42:03 <ajayaa> rushiagr, keystonemiddleware polls keystone to get a list of revoked tokens.
13:42:04 <isviridov> rushiagr, not really, but let us keep order and continue the discussion in the mdb channel or in the open topic section
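The pattern ajayaa and achudnovets describe above (cache validated tokens until expiry, but periodically poll keystone for a revocation list) can be sketched roughly as follows. This is not keystonemiddleware's actual code; `TokenCache` and `revocation_fetcher` are made-up names for illustration:

```python
import time

class TokenCache:
    """Caches validated tokens until expiry, while honouring a
    periodically refreshed revocation list (hypothetical sketch)."""

    def __init__(self, revocation_fetcher, revocation_interval=30):
        self._tokens = {}          # token -> expiry timestamp
        self._revoked = set()
        self._fetch_revoked = revocation_fetcher  # e.g. a call to keystone
        self._interval = revocation_interval
        self._last_poll = 0.0

    def _maybe_poll_revocations(self):
        # Refresh the revocation list at most once per interval,
        # instead of hitting keystone on every request.
        now = time.monotonic()
        if now - self._last_poll >= self._interval:
            self._revoked = set(self._fetch_revoked())
            self._last_poll = now

    def add(self, token, ttl):
        self._tokens[token] = time.monotonic() + ttl

    def is_valid(self, token):
        self._maybe_poll_revocations()
        expiry = self._tokens.get(token)
        if expiry is None or time.monotonic() >= expiry:
            return False
        return token not in self._revoked

# Usage: a cached token stays valid until its TTL, unless revoked.
cache = TokenCache(revocation_fetcher=lambda: ["revoked-token"],
                   revocation_interval=0)
cache.add("good-token", ttl=3600)
cache.add("revoked-token", ttl=3600)
print(cache.is_valid("good-token"), cache.is_valid("revoked-token"))
```

This is why a cached token can still be rejected mid-lifetime: the next revocation poll catches it, without a per-request round trip to keystone.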
13:42:17 <isviridov> So, the meeting time and day
13:42:26 <rushiagr> isviridov: agree. Thanks
13:42:44 <isviridov> Are you ok to have this meeting weekly at exactly this time?
13:43:02 <ajayaa> +1
13:43:02 <ikhudoshyn_> +1
13:43:03 <achudnovets> it's ok for me
13:43:10 <aostapenko> +1
13:43:13 <dukhlov> +1
13:43:14 <achuprin_> +1
13:43:16 <rushiagr> +1 from me
13:43:32 <ominakov-> +1
13:43:53 <rushiagr> I'm +1 for having this earlier too, but -1 on having it later in the day :)
13:43:54 <isviridov> #agreed every Thursday 1300 UTC at #openstack-meeting
13:44:26 <keith_newstadt> +1
13:44:27 <isviridov> And here comes open discussion :))
13:44:31 <isviridov> #topic Open discussion
13:45:03 <isviridov> keith_newstadt, is charles around?
13:45:16 <isviridov> Really wanted to say my congratulations ^)
13:46:06 <ajayaa> How about keeping a count of items in c* itself and have monitoring-api use it?
13:46:08 <keith_newstadt> he's on his way
13:46:55 <isviridov> keith_newstadt, I'll see him in #magnetodb later if so
13:47:21 <isviridov> ajayaa, I think eventually we will do it for some time-consuming metrics.
13:47:50 <ajayaa> time consuming metrics?
13:47:59 <ajayaa> Can you give an example please?
13:48:57 <isviridov> count is a good example
13:49:06 <ajayaa> The metric which takes time to collect, oh okay.
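ajayaa's suggestion of keeping a running item count in C* itself, so the monitoring API can read it cheaply instead of scanning the table, could look like this in-memory sketch. In Cassandra this would presumably be a counter updated per mutation; `CountingTable` and its methods are hypothetical names:

```python
class CountingTable:
    """Maintains an item counter alongside writes, so a monitoring
    API can report item_count without a full scan (hypothetical sketch)."""

    def __init__(self):
        self._items = {}
        self._item_count = 0

    def put(self, key, value):
        # Only a genuinely new key increments the counter;
        # overwrites leave it unchanged.
        if key not in self._items:
            self._item_count += 1
        self._items[key] = value

    def delete(self, key):
        if self._items.pop(key, None) is not None:
            self._item_count -= 1

    def monitoring_metrics(self):
        # Cheap read for the monitoring API: no scan over items.
        return {"item_count": self._item_count}

# Usage: overwrites and deletes keep the counter consistent.
table = CountingTable()
table.put("a", 1)
table.put("b", 2)
table.put("a", 3)   # overwrite: count unchanged
table.delete("b")
print(table.monitoring_metrics())
```

The catch dukhlov raises below applies here too: to know whether a `put` is an insert or an overwrite, the write path needs to know whether the item already exists, which is exactly what is hard to do cheaply in Cassandra.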
13:49:08 <isviridov> There was a good question
13:49:09 <isviridov> rushiagr: are there new features in particular that you're looking to start development on, or that you'd like to see implemented?
13:49:30 <isviridov> ajayaa, exactly, sorry for the confusion
13:49:43 <rushiagr> isviridov: I'd like to have auto-managed cassandra clusters
13:49:53 <dukhlov> I don't know how we would know whether an item exists or not in cassandra
13:49:57 <rushiagr> isviridov: in short, I'd like to see trove part done, atleast in part
13:50:00 <isviridov> rushiagr, it sounds like the Trove program
13:50:33 <rushiagr> isviridov: yeah, I agree. Not much w.r.t. magnetoDB as of today
13:51:11 <keith_newstadt> rushiagr: what is the "trove part"?
13:51:33 <rushiagr> I would like us to have an easy way to manage, and scale cassandra clusters in cloud
13:52:15 <rushiagr> keith_newstadt: This, is necessary IMO before magnetoDB can be used as a service properly. This mostly comes under trove
13:53:24 <keith_newstadt> rushiagr: so you're looking to be able to deploy mdb using trove to deploy and manage cassandra?
13:53:35 <rushiagr> I agree that managing and scaling of cassandra nodes come under trove
13:53:42 <keith_newstadt> +1
13:54:08 <isviridov> rushiagr, you know we thought about having trove as alternative cassandra provider. But it is not ready for that.
13:54:08 <keith_newstadt> rushiagr: is there additional featurework that you feel needs to be done inside of mdb to support this?
13:55:21 <rushiagr> is there going to be an alternative to trove's cassandra in mdb?
13:55:23 <isviridov> rushiagr, another side of it: trove works within the tenant and spins up VMs for databases, which is not always good for performance.
13:55:58 <rushiagr> keith_newstadt: I've not looked into MDB in that much detail yet.
13:56:25 <ajayaa> isviridov, what if cassandra nodes use ephemeral storage rather than cinder volumes?
13:56:26 <rushiagr> isviridov: I agree. Ephemeral disks work best for c*
13:56:39 <isviridov> rushiagr, deployed under the cloud on the same level as swift and other OS services
13:57:11 <keith_newstadt> rushiagr: i think trove is potentially a good option for deploying cassandra, which mdb can then use. heat is another option.
13:57:50 <isviridov> keith_newstadt, +1 for heat for deployment
13:57:57 <rushiagr> I'll restart my client. It is eating a lot of characters
13:57:58 <keith_newstadt> rushiagr: not sure if there's much we'd need to do in mdb specifically to support either
13:58:44 <isviridov> rushiagr, keith_newstadt we are mostly out of time. Let us move our discussion to #magnetodb
13:58:45 <rushiagr> ok, seems better now, my keyboard :)
13:58:55 <rushiagr> isviridov: sure!
13:59:03 <rushiagr> thanks folks! It was nice talking to you :)
13:59:05 <isviridov> Thank you guys!
13:59:09 <aostapenko> Thanks!
13:59:11 <keith_newstadt> sounds good
13:59:11 <isviridov> #endmeeting