13:01:16 #startmeeting magnetodb
13:01:16 hello all!
13:01:16 Meeting started Thu Sep 18 13:01:16 2014 UTC and is due to finish in 60 minutes. The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:20 The meeting name has been set to 'magnetodb'
13:01:48 hello everybody
13:01:58 Hello
13:02:19 o/
13:02:26 o/
13:02:27 Let me find the agenda :)
13:02:31 * rushiagr tries to find out the agenda
13:02:48 https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda
13:02:55 ajayaa, thank you
13:03:19 isviridov, welcome!
13:03:26 #topic Juno milestone status
13:04:14 So, we have released the juno-3 milestone with a lot of implemented blueprints and bug fixes
13:04:35 that's great.
13:04:38 #link https://launchpad.net/magnetodb/juno/juno-3
13:05:47 Actually it is our first release of mdb following OpenStack versioning.
13:06:10 And it is mostly just a cut of what we have done.
13:06:52 We have also released python-magnetodbclient
13:07:16 but we are not stopping feature development in the RC cycle as of now, am I right?
13:07:59 Where are the python-magnetodbclient and CLI located?
13:08:31 ajayaa: https://github.com/stackforge/python-magnetodbclient
13:08:56 rushiagr, yes, several things are not merged yet
13:09:11 rushiagr, yeap. You are right. There is no expectation that we strictly follow the procedure. However, we will avoid huge features, giving priority to bugs.
13:09:55 isviridov: okay. I think it is fine even if we put features in for this cycle
13:10:05 We now have a BP approval process, so any new feature will be agreed with the team
13:10:32 anyway, we're not 'releasing' anything. Nobody is going to package it, so blocking features for the J cycle will only hamper development efforts
13:10:37 ajayaa, #link https://launchpad.net/python-magnetodbclient
13:10:59 from K, hopefully we'll be mature enough to follow the 'no features in RCs plz' funda
13:11:17 isviridov: cool
13:11:55 rushiagr, yeap. Usually packaging efforts start from incubation. From my experience in Trove
13:12:07 in fact, i believe we'll have it packaged once we hit release, but not RC of course
13:12:22 ikhudoshyn_: oh, okay. That's good to know
13:12:54 ikhudoshyn_, Can you please elaborate a bit on async table creation and deletion?
13:13:05 sure,
13:13:06 ikhudoshyn_, not sure about it. There is effort within Symantec, but not public
13:13:57 rushiagr: are there new features in particular that you're looking to start development on, or that you'd like to see implemented?
13:14:10 ajayaa, we'll get to that a lil' bit later
13:14:13 keith_newstadt, ajayaa let's go through the agenda
13:14:52 Next topic?
13:15:23 keith_newstadt: I'd like to look into trove integration more, but we can discuss it once its turn comes :)
13:15:24 isviridov, I meant having mdb on PyPI at least, don't expect rpms or debs
13:15:46 isviridov, +1 to next topic
13:16:12 #topic Asynchronous table creation and removal
13:16:17 #link https://blueprints.launchpad.net/magnetodb/+spec/async-schema-operations
13:16:38 ikhudoshyn_, news?
13:16:48 here they go
13:17:21 Is table creation not asynchronous now?
13:17:33 well, we faced a situation where several concurrent table create/delete requests could drive C* crazy
13:18:04 so what we want is to have all such requests executed one by one
13:18:13 Are we looking at some kind of scheduling of table creations?
13:18:27 or just one by one for table creation?
13:18:38 ikhudoshyn_: It would be great if a detailed spec were written on what is going to be implemented
13:18:40 before that we had async_storage_manager that did the job, but on a per-API-node basis
13:18:53 * isviridov thinks that a spec could help a lot with all the questions
13:19:09 rushiagr, sure, it will get published pretty soon
13:19:40 #action ikhudoshyn_ write spec
13:19:54 ikhudoshyn_, How many concurrent table creations drive C* crazy in your experience?
13:19:54 in brief, we're gonna use the standard MQ shipped with OS via oslo.messaging.rpc
13:20:08 isviridov: the spec (BP) says "Suggested implementation includes...", but nothing concrete..
13:20:37 ajayaa, it depends on several factors. We first started to observe that at up to 30 concurrent requests
13:20:48 rushiagr, +1, we need a more detailed description
13:21:01 And how many nodes in C*?
13:21:11 but (recoverable) networking issues could cause it even without any load at all
13:21:25 I'm +1 on the feature. It should definitely be asynchronous, with a limit on the number of concurrent create/delete table operations a user can perform
13:22:13 ikhudoshyn_, I've filed an action for you to write the spec. Ok?
13:22:23 isviridov, yes, thanks
13:22:40 Next topic?
13:22:53 I would like to review the spec too, when it is put up on launchpad..
13:22:54 should we elaborate on this topic any more, or could we move on?
13:23:25 rushiagr, I'll publish it ASAP
13:23:41 I think we can move ahead. I'll wait for the detailed spec. Thanks
13:23:49 #topic Monitoring API
13:23:54 #link https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api
13:24:06 ominakov-, around?
13:25:11 yep, i've published a patch for getting the size of a table through JMX https://review.openstack.org/#/c/122330/
13:25:35 #link https://review.openstack.org/#/c/122330/
13:25:42 but i faced a problem with item count
13:26:21 +1 to the nova-specs-like writing of blueprints
13:26:40 ominakov, isviridov, Are we looking at only two parameters? table_size and item_count
13:27:04 *as a part of monitoring APIs.
13:27:05 there is no JMX bean for this metric, and from C* 2.1 even nodetool doesn't show this counter
13:27:39 ajayaa, currently yes
13:27:42 now we are looking at these two metrics only
13:28:17 ominakov-, I think the guys from C* could help
13:28:53 isviridov, yes, i'll ask this question in the C* IRC today
13:29:51 About the problem with C* 2.1
13:30:18 ominakov-, looking forward to more details and info from the C* guys
13:30:19 do you have some links or docs which explain this?
13:31:02 dukhlov, not yet, i just read the C* code
13:31:21 * isviridov thinks that code is a good source of truth
13:31:37 quick question: is table size in KBs (storage) or number of rows/items (count)?
13:32:09 rushiagr, both
13:32:41 rushiagr dukhlov ominakov- moving on?
13:32:54 isviridov: sure
13:33:00 so, if anyone has some ideas for that - welcome
13:33:17 isviridov, yep
13:33:18 ominakov-, sure
13:33:24 #topic Light weight session for authorization
13:33:29 #link https://blueprints.launchpad.net/magnetodb/+spec/light-weight-session
13:33:43 Oh, it is not assigned
13:33:48 achudnovets, around?
13:33:53 yep
13:34:10 We are using the keystonemiddleware package for token auth now. It can cache tokens, in memory or in memcached.
13:34:13 What do we do with it?
13:34:29 Now I'm researching how tokens without a service catalog impact magnetodb performance.
13:34:41 I think there will not be a great impact, but we still need to know for sure.
13:34:57 Maybe we'll need to change the client code to force the service catalog off.
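(A minimal sketch of the token caching achudnovets describes above, assuming keystonemiddleware's standard auth_token options; the URLs, credentials, cache timeout, and the placeholder WSGI app are illustrative and are not MagnetoDB's actual configuration.)

```python
# Sketch only: wrapping a WSGI app with keystonemiddleware's auth_token
# filter and caching validated tokens in memcached, so repeated requests
# carrying the same token do not hit Keystone every time.
from keystonemiddleware import auth_token


def magnetodb_app(environ, start_response):
    # Placeholder WSGI app standing in for the MagnetoDB API pipeline.
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [b'{}']


# All values below are illustrative; a real deployment sets them in its
# paste/config files rather than in code.
auth_conf = {
    'auth_uri': 'http://keystone.example.com:5000/v2.0',
    'identity_uri': 'http://keystone.example.com:35357',
    'admin_user': 'magnetodb',
    'admin_password': 'secret',
    'admin_tenant_name': 'service',
    # Cache validated tokens in memcached instead of per-process memory.
    'memcached_servers': 'memcached.example.com:11211',
    'token_cache_time': '300',  # seconds a validated token stays cached
}

app = auth_token.AuthProtocol(magnetodb_app, auth_conf)
```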
13:35:13 Even in case we don't get a big performance boost, we can turn the service catalog off to avoid this bug https://bugs.launchpad.net/keystone/+bug/1190149
13:35:15 Launchpad bug 1190149 in heat/havana "Token auth fails when token is larger than 8k" [Critical,Fix committed]
13:35:28 also good to reduce overall network traffic
13:35:35 * isviridov would like to avoid adding an additional session layer
13:35:58 +1 isviridov
13:36:01 +1.
13:36:05 +1
13:36:23 +1
13:36:26 +1
13:36:47 why don't we just use the short UUID token of keystone, which is now the default? will that involve a call to keystone for every request?
13:36:47 keith_newstadt, a PKI token without the catalog is about 1KB, which is close to the body size
13:36:53 we'd need some numbers to show that it's worthwhile before we'd implement it
13:37:02 keith_newstadt, +1
13:37:11 keith_newstadt: +1
13:37:31 rushiagr: right, UUID causes a callback to keystone on every request
13:37:37 rushiagr, yes
13:37:50 rushiagr: +1 to keith_newstadt's comment
13:38:00 we have use cases where clients will be making 10s of thousands of requests per second
13:38:03 rushiagr, yes it is. We will kill keystone with mdb
13:38:07 that is too much load to put on keystone
13:38:36 or worse, we will have a slow mdb!
13:39:06 #action achudnovets provide numbers about the performance impact of big PKI tokens in the ML
13:39:13 achudnovets, is it ok with you?
13:39:36 isviridov: sure
13:40:03 achudnovets, keith_newstadt rushiagr ajayaa moving on?
13:40:14 sounds good
13:40:18 +1
13:40:32 keith_newstadt: does keystone return the time for which that token is valid? if yes (say one hour), we need not make a call to keystone for that one hour, provided the user uses the same token
13:40:35 #topic Next meeting arrangement
13:40:55 rushiagr: yes, it does
13:41:07 sorry, I'm a newbie to keystone. We can take these questions later too, if it is blocking important items
13:41:28 rushiagr, the token could be revoked!
13:41:40 ajayaa: oh. damn
13:41:52 and actually tokens can be revoked on the server side. So the client checks revoked tokens from time to time
13:41:53 okay, let's move ahead to the next topic
13:42:03 rushiagr, keystonemiddleware polls keystone to get a list of revoked tokens.
13:42:04 rushiagr, not really, but let us keep order and continue the discussion in the mdb channel or in the open topic section
13:42:17 So, the meeting time and day
13:42:26 isviridov: agree. Thanks
13:42:44 Are you ok with having this meeting weekly at exactly this time?
13:43:02 +1
13:43:02 +1
13:43:03 it's ok for me
13:43:10 +1
13:43:13 +1
13:43:14 +1
13:43:16 +1 from me
13:43:32 +1
13:43:53 I'm +1 for having this earlier too, but -1 on having it later in the day :)
13:43:54 #agreed every Thursday 1300 UTC at #openstack-meeting
13:44:26 +1
13:44:27 And here comes the open discussion :))
13:44:31 #topic Open discussion
13:45:03 keith_newstadt, is charles around?
13:45:16 Really wanted to say my congratulations ^)
13:46:06 How about keeping a count of items in C* itself and having monitoring-api use it?
13:46:08 he's on his way
13:46:55 keith_newstadt, I'll see him in #magnetodb later if so
13:47:21 ajayaa, I think eventually we will do it for some time-consuming metrics.
13:47:50 time-consuming metrics?
13:47:59 Can you give an example please?
13:48:57 count is a good example
13:49:06 A metric which takes time to collect, oh okay.
13:49:08 There was a good question
13:49:09 rushiagr: are there new features in particular that you're looking to start development on, or that you'd like to see implemented?
13:49:30 ajayaa, exactly, sorry for the confusion
13:49:43 isviridov: I'd like to have auto-managed cassandra clusters
13:49:53 I don't know how we know whether an item exists or not in cassandra
13:49:57 isviridov: in short, I'd like to see the trove part done, at least in part
13:50:00 rushiagr, it sounds like a Trove program
13:50:33 isviridov: yeah, I agree. Not much w.r.t. magnetoDB as of today
13:51:11 rushiagr: what is the "trove part"?
13:51:33 I would like us to have an easy way to manage and scale cassandra clusters in the cloud
13:52:15 keith_newstadt: This is necessary IMO before magnetoDB can be used as a service properly. This mostly comes under trove
13:53:24 rushiagr: so you're looking to be able to deploy mdb using trove to deploy and manage cassandra?
13:53:35 I agree that managing and scaling of cassandra nodes comes under trove
13:53:42 +1
13:54:08 rushiagr, you know we thought about having trove as an alternative cassandra provider. But it is not ready for that.
13:54:08 rushiagr: is there additional feature work that you feel needs to be done inside of mdb to support this?
13:55:21 is there going to be an alternative to trove's cassandra in mdb?
13:55:23 rushiagr, another side of it: trove works within a tenant and spins up VMs for databases, which is not always good for performance.
13:55:58 keith_newstadt: i've not looked into MDB in that much detail yet.
13:56:25 isviridov, what if cassandra nodes use ephemeral storage rather than cinder volumes?
13:56:26 isviridov: I agree. Ephemeral disks work best for C*
13:56:39 rushiagr, deployed under the cloud on the same level as swift and other OS services
13:57:11 rushiagr: i think trove is potentially a good option for deploying cassandra, which mdb can then use. heat is another option.
13:57:50 keith_newstadt, +1 for heat for deployment
13:57:57 i'llrestart myclient. It is eatin a lot of caracters
13:57:58 rushiagr: not sure if there's much we'd need to do in mdb specifically to support either
13:58:44 rushiagr, keith_newstadt we are mostly out of time. Let us move our discussion to #magnetodb
13:58:45 ok, seems better now, my keyboard :)
13:58:55 isviridov: sure!
13:59:03 thanks folks! It was nice talking to you :)
13:59:05 Thank you guys!
13:59:09 Thanks!
13:59:11 sounds good
13:59:11 #endmeeting
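(For reference on the async-schema-operations topic discussed at 13:16-13:23: a minimal sketch of executing table create/delete requests one by one through a single RPC worker, assuming oslo.messaging as mentioned at 13:19:54; the topic name, endpoint class, and method signatures are illustrative, not the implementation from the then-unpublished spec.)

```python
# Sketch only: API nodes cast schema operations onto a message queue, and a
# single worker with a blocking executor consumes them sequentially, so
# concurrent create/delete requests never hit Cassandra at the same time.
# (Older oslo.messaging releases used "from oslo import messaging".)
import oslo_messaging as messaging
from oslo_config import cfg

transport = messaging.get_transport(cfg.CONF)
# Hypothetical topic and server names, for illustration only.
target = messaging.Target(topic='magnetodb-schema', server='schema-worker')


class SchemaEndpoint(object):
    """Executes schema operations one by one on the worker side."""

    def create_table(self, ctxt, table_name, schema):
        # Call into the Cassandra storage driver here; requests arrive
        # sequentially because a single worker consumes the queue.
        pass

    def delete_table(self, ctxt, table_name):
        pass


def start_worker():
    # One worker process, blocking executor: operations run strictly in order.
    server = messaging.get_rpc_server(
        transport, target, [SchemaEndpoint()], executor='blocking')
    server.start()
    server.wait()


def request_create_table(ctxt, table_name, schema):
    # Fire-and-forget from the API node; the table stays in a CREATING
    # state until the worker reports completion.
    client = messaging.RPCClient(transport, target)
    client.cast(ctxt, 'create_table', table_name=table_name, schema=schema)
```

With a single worker consuming the queue, schema requests are serialized even when many API nodes cast them concurrently, which is the "one by one" behaviour described in the meeting.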