14:00:30 #startmeeting magnetodb
14:00:32 Meeting started Thu Nov 20 14:00:30 2014 UTC and is due to finish in 60 minutes. The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:35 The meeting name has been set to 'magnetodb'
14:00:45 Hello everybody
14:00:57 o/
14:01:06 nunosantos : o/
14:01:15 _o/
14:01:37 o/
14:01:41 miqui : miqui_____ welcome to the mdb weekly meeting
14:01:46 dukhlov works as a traffic regulator
14:02:07 _o_|
14:02:08 hello
14:02:17 sorry, had some IRC client issues on my side..
14:02:41 miqui_____ : have we met each other at the summit?
14:02:57 i was at the April 2014 Atlanta summit
14:03:19 Today's agenda #link https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Agenda
14:03:21 your nick seems familiar somehow..
14:03:47 miqui_____ : welcome back if so :)
14:03:54 ..thanks...
14:04:08 Let us start with the action items
14:04:14 #topic Go through action items isviridov
14:04:22 #link http://eavesdrop.openstack.org/meetings/isviridov/2014/isviridov.2014-11-13-14.01.html
14:04:35 * isviridov dukhlov data encryption support blueprint
14:05:23 hm
14:05:29 * isviridov good start
14:05:55 the specification for this blueprint is under review now
14:06:20 #link https://review.openstack.org/#/c/134936/
14:06:27 #link https://review.openstack.org/#/c/133505/
14:06:40 I got -1 from isviridov, but there were only grammar mistakes
14:07:07 I'm waiting for other feedback
14:07:33 dukhlov : I was a bit surprised that we have the general one. I mean the specification for data-encryption-support. Do you think we need it?
14:09:12 charlesw : hello
14:09:24 ok, for encryption support we need to have at least part of the management API implemented
14:09:25 Hi
14:09:28 o/
14:09:34 hi rushiagr
14:09:43 Hi everyone.
14:10:10 hello ajayaa
14:10:51 so I added a smaller sub-blueprint - part of the management API for encryption - and set it as a dependency for the encryption support blueprint
14:11:05 charlesw : rushiagr ajayaa discussing https://review.openstack.org/#/c/134936/
14:11:54 dukhlov : ok. I think we have to link them during merge
14:12:06 dukhlov : great job!
14:12:11 as you wish
14:12:35 Ok, let us move to the next item
14:12:36 * isviridov ikhudoshyn file a bug about dynamodb version support documentation
14:12:50 done
14:13:01 https://bugs.launchpad.net/magnetodb/+bug/1394575
14:13:32 ikhudoshyn : yes, I've seen. Great
14:14:10 Ok, we have 2 other topics to discuss
14:14:11 * isviridov keith_newstadt isviridov ikhudoshyn clarify if we can avoid having two different apis for db backup/restore inside of openstack
14:14:21 * isviridov keith_newstadt isviridov ikhudoshyn clarify if we can avoid having two different implementations of that api
14:14:45 Let us do it together with the bp discussion
14:14:53 sorry guys, i'm late
14:15:04 #topic Discuss Backup/restore API specification draft https://review.openstack.org/#/c/133933/ ikhudoshyn
14:15:43 it has been hanging for a couple of weeks already
14:15:58 only a small number of comments came up
14:16:15 i tried to address all of them
14:16:17 #link http://docs-draft.openstack.org/33/133933/12/check/gate-magnetodb-specs-docs/e0b7b92/doc/build/html/specs/kilo/approved/backup-restore-api.html for easier reading
14:17:13 isviridov, that is helpful.
14:18:50 during backup/restore, shouldn't the tenant/table be locked?
14:19:14 we don't see any general reason
14:19:33 it may still be required for some implementations of backup
14:20:24 some of us thought about exporting data in json format as the 1st approach
14:20:44 we don't seem to need any locking in that case
14:22:09 ikhudoshyn : but, I believe, all the others will need it and it affects the usage scenario. Do you think that we have to describe the locking process within this spec?
14:22:37 not in *that* spec since it only describes the API
14:23:09 i'd prefer not to manage table/tenant locking manually
14:23:36 We should document it so we can manage user expectations whether locking is used or not
14:24:07 charlesw: somewhere..
14:24:47 ok, preferably at the API level
14:25:04 ikhudoshyn: as an API Impact section, if any is expected
14:25:58 as I said, i'd rather not lock it manually -- so we could just add a new table status, like MAINTENANCE
14:27:07 ikhudoshyn, +1
14:27:17 +1
14:27:29 ikhudoshyn : does it mean the users' requests will be rejected if the table is in this status?
14:28:02 ..that would depend on the use case..
14:28:03 yep.. i thought about 403
14:28:18 guess there should be a more suitable error code
14:28:31 1 - hey, here is your latest snapshot, HTTP 200
14:28:35 2 - or simply 403
14:28:57 not sure the latest snapshot is always available
14:29:36 ikhudoshyn +1, also for writing as well
14:29:39 503 Service Unavailable may be more appropriate
14:29:44 hmm good point... if that is the case then 503
14:30:16 charlesw: we might think more..
14:30:40 actually it is not the whole service that is unavailable..
14:30:55 #idea we could just add a new table status, like MAINTENANCE
14:30:56 423 Locked
14:30:59 ?
14:31:06 ajayaa: exactly
14:31:10 tnx
14:31:30 #idea 423 Locked on requests during backup
14:31:51 * when in MAINTENANCE status
14:32:08 not necessary for every backup
14:32:21 ikhudoshyn : +1
14:33:00 ikhudoshyn : charlesw ajayaa miqui_____ move on?
14:33:17 +1
14:33:32 agree, I'm just waiting for your +/-1's
14:34:11 ikhudoshyn : you will have it :)
14:34:16 #topic Open discussion isviridov
14:34:49 Code review needed: https://review.openstack.org/#/c/124391/
14:35:57 spec #link https://wiki.openstack.org/wiki/MagnetoDB/specs/rbac
14:36:16 ajayaa : great progress!
14:36:38 #action dukhlov charlesw ikhudoshyn isviridov review https://review.openstack.org/#/c/124391/
14:37:23 for my part... am new to the project..
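[Editor's note: the MAINTENANCE-status idea agreed above can be sketched as follows. This is a hypothetical illustration of the request-rejection behaviour discussed in the meeting, not actual MagnetoDB code; the function and status names are made up.]

```python
# Sketch of the #idea above: while a table is in a MAINTENANCE status
# (e.g. during a backup implementation that needs consistency), data
# requests against it are rejected with 423 Locked rather than 403 or 503.
# All names here are illustrative assumptions.

HTTP_OK = 200
HTTP_LOCKED = 423  # "423 Locked": the resource, not the whole service, is busy


def handle_request(table_status):
    """Return the HTTP status code for a data request against a table."""
    if table_status == "MAINTENANCE":
        # Table is being backed up or restored; the client should retry later.
        return HTTP_LOCKED
    return HTTP_OK
```

As noted in the discussion, 503 would wrongly suggest the whole service is unavailable, while 423 scopes the rejection to the one locked table.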
14:37:34 ajayaa : I just don't remember if we have finished with a spec
14:37:40 so am going through the bugs to see where i can start
14:38:00 isviridov, This was before we had a spec system in place. :)
14:38:11 miqui_____ : welcome on board!
14:38:16 ...thanks!!
14:39:12 But the wiki page is informative enough, I guess.
14:39:14 miqui_____ : pay attention to https://bugs.launchpad.net/magnetodb/+bugs?field.tag=low-hanging-fruit and https://launchpad.net/magnetodb/+milestone/kilo-1 bugs
14:39:38 awesome... thanks...
14:39:55 miqui_____ : looking forward to your patches. Always feel free to ask.
14:40:07 miqui_____ : what is your timezone?
14:40:13 EST
14:40:50 EST (US east coast)
14:41:09 ajayaa : yeap, let me look at it. I believe we will add the monitoring action at least.
14:41:16 @miqui, welcome, we are in the same tz
14:41:24 cool...
14:41:52 isviridov, I didn't get you.
14:43:01 ajayaa : there is a list of APIs to restrict, and we have the monitoring API now
14:43:35 isviridov, got you! okay.
14:43:39 ajayaa : anyhow, great spec!
14:43:56 isviridov, We can add it later by filing a bug and then fixing it.
14:44:36 Yeap
14:45:49 Team, anything else to discuss now?
14:46:28 Seems we are done
14:46:45 looks like
14:46:47 I have a spec WIP, not ready yet. But I'd like to hear your use cases and comments. https://wiki.openstack.org/wiki/MagnetoDB/specs/requestmetrics
14:47:22 charlesw : the first q is why it's not in the spec repo?
14:48:16 #link https://wiki.openstack.org/wiki/MagnetoDB/specs/requestmetrics
14:48:23 @isviridov, can you educate us on the process?
14:48:55 the same as with our usual repo
14:49:04 but magnetodb-specs
14:49:31 git clone, branch, set up git-review, commit, git review
14:49:48 charlesw : I've seen something similar in swift. So, actually it is one more kind of monitoring
14:50:12 I was using our wiki template: https://wiki.openstack.org/wiki/MagnetoDB/specs/template
14:51:45 charlesw : why not put it as a part of the monitoring api?
14:52:09 I looked at swift as well; they have 3 different ways: recon, informant, and statsd w/o middleware
14:53:03 we don't need an API for monitoring. We can publish it thru statsd/graphite/ganglia/etc
14:53:36 similar to swift
14:53:45 charlesw : yes we can, but in such a case we are losing this information for ceilometer
14:54:21 charlesw : what do you think?
14:54:44 right, there are some areas unclear, like whether to use/work with ceilometer
14:55:03 that's the kind of comments I'd like to hear more of :)
14:56:16 Ok, I think it would be great to keep it under the Monitoring API as one source of cluster metrics. And call it via any monitoring solution like nagios and so on.
14:56:40 ..even sensuapp
14:57:40 charlesw : how fast do you need it?
14:58:01 @isviridov, I'll think about it. Thanks for the comments.
14:58:36 probably the week after thanksgiving
14:58:59 Another approach can be to just implement it for statsd, as it is easier and faster, and move to the Monitoring API once the solution is more or less mature.
14:59:28 the overhead/perf impact can be big
14:59:31 With the Monitoring API we have to implement our own storage to keep data about every node
15:00:05 charlesw : what performance impact do you mean?
15:00:39 * isviridov time is over
15:00:43 we need to capture metrics for every request. For statsd, just a udp post can be done.
15:00:58 got to go, have another meeting
15:01:20 charlesw : but data can be cached and aggregated in memory
15:01:23 charlesw : sure
15:01:32 Thank you everybody for coming
15:01:39 #endmeeting
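[Editor's note: charlesw's point that statsd needs "just a udp post" per request, with no API or storage of our own, can be shown with a minimal sketch. This assumes the standard statsd counter line format ("name:value|c"); the metric name, host, and port are illustrative, and a real deployment would normally use a statsd client library instead.]

```python
import socket

STATSD_HOST = "127.0.0.1"  # assumed statsd daemon address
STATSD_PORT = 8125         # conventional statsd UDP port


def format_counter(name, value):
    """Build a statsd counter line, e.g. 'magnetodb.requests:1|c'."""
    return "%s:%d|c" % (name, value)


def send_counter(name, value=1):
    """Fire-and-forget UDP post: no connection, no response to wait for,
    so the per-request overhead is a single datagram send. Lost packets
    are simply dropped metrics, which statsd users accept by design."""
    payload = format_counter(name, value).encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (STATSD_HOST, STATSD_PORT))
    finally:
        sock.close()
```

This also illustrates isviridov's "cached and aggregated in memory" counterpoint: statsd itself does that aggregation on the receiving side before flushing to graphite.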