14:01:32 #startmeeting magnetodb
14:01:33 Meeting started Thu Jan 22 14:01:32 2015 UTC and is due to finish in 60 minutes. The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:37 The meeting name has been set to 'magnetodb'
14:01:48 Today's agenda: https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Agenda
14:01:55 o/
14:02:03 щ/
14:02:19 o/
14:03:04 ikhudoshyn cyrillic here
14:03:15 oh, really?!
14:03:30 It looks like Boston is not with us today
14:03:45 gonna be fast?
14:03:52 Let us go through the action items
14:04:07 ajayaa start mail discussion about TTL implementation
14:04:22 #topic Go through action items isviridov
14:05:04 So, the question was raised
14:05:47 Has everybody seen this email?
14:06:39 ikhudoshyn aostapenko?
14:06:55 hm
14:07:03 oh, yes
14:07:06 there was
14:07:17 #link http://osdir.com/ml/openstack-dev/2015-01/msg00944.html
14:07:36 isviridov: tnx
14:07:56 move on?
14:08:24 ikhudoshyn I expected to hear some opinions
14:09:03 * isviridov I know it is hard without dukhlov
14:09:30 i don't like the idea of only enabling ttl on insert
14:09:37 The question is how to implement update item with TTL
14:09:54 ikhudoshyn +1
14:10:50 i think we should first agree whether Dima's native LSIs are prod-ready, then switch to them
14:11:13 and then re-evaluate the required efforts
14:11:45 from what i've seen our insert/update/delete logic becomes much cleaner with the latter
14:12:04 ikhudoshyn I think that we have adopted LSI already
14:12:26 we've merged them, it's not the same for me))
14:12:43 Here it is the same :)
14:13:00 Merged and switched our configuration
14:13:07 ok, so we could go with the 2nd step)
14:13:26 are we to keep our outdated stuff forever?
14:13:44 (like the old LSI implementation)
14:14:28 if we think that the new LSI is good enough let's get rid of the old one
14:14:38 ikhudoshyn now that we have migration we can forget about it
14:14:49 I see no reason to support it anymore
14:14:54 I don't think we should support the previous implementation
14:15:03 hurray
14:15:18 * ikhudoshyn loves throwing away old stuff
14:15:25 * isviridov not sure if ominakov has created a BP for migration
14:15:37 * isviridov loves it as well
14:15:53 getting back to ttl, let's re-check the estimates
14:16:17 isviridov, i'll do it
14:16:32 The key problem is: C* doesn't support TTL per row
14:16:57 yep, we should emulate it
14:17:47 we may consider having it per item but thus we'd diverge from the AWS API
14:17:48 Means we have to update the whole row
14:18:03 isviridov: exactly
14:18:25 * isviridov is checking TTL support in AWS
14:18:54 ikhudoshyn is there any TTL in AWS?
14:19:13 hm, i guess there was
14:20:09 * ikhudoshyn seems to be wrong
14:20:26 http://www.datastax.com/dev/blog/amazon-dynamodb no ttl in dynamodb
14:20:42 ikhudoshyn it doesn't, according to the AWS API
14:21:29 Ok, let us return to TTL per row in the mdb api only :)
14:21:39 the only issue i could think of is a backend that wouldn't have native ttl
14:22:02 i mean, we could have it per attribute, but..
14:22:11 ikhudoshyn it is the responsibility of the driver author to implement it or not
14:22:39 * isviridov doesn't like per-table TTL
14:22:54 if we'd want to support another backend w/o ttl, emulating ttl per item would be much more complex
14:24:02 so..
14:24:11 Do you mean that the TTL feature is not needed?
14:24:15 are we to have per-item ttl?
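
For context on the problem stated above (native C* TTL is per column, not per row): a minimal sketch in Python using the cassandra-driver package, against a hypothetical keyspace and table that are not taken from MagnetoDB code. It shows why an INSERT can expire a whole row while an UPDATE with a new TTL cannot extend the row's lifetime without rewriting every column.

    # Minimal sketch, assuming a local C* node and a pre-created 'demo_ks.data'
    # table with primary key hash_key -- all names here are illustrative.
    from cassandra.cluster import Cluster

    session = Cluster(['127.0.0.1']).connect('demo_ks')

    # INSERT ... USING TTL puts the TTL on every written cell and on the row
    # marker, so the whole row disappears when the TTL expires.
    session.execute(
        "INSERT INTO data (hash_key, attr_a, attr_b) "
        "VALUES ('item1', 'a', 'b') USING TTL 60")

    # UPDATE ... USING TTL only re-TTLs the cells it touches; the row marker
    # and attr_b keep the original expiry. This is why updating an item with a
    # new per-row TTL effectively means rewriting the whole row.
    session.execute(
        "UPDATE data USING TTL 600 SET attr_a = 'a2' "
        "WHERE hash_key = 'item1'")
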
14:25:00 isviridov: no, i just mean that emulating per-row ttl is easier than per item
14:25:06 #link https://blueprints.launchpad.net/magnetodb/+spec/row-expiration by Symantec
14:26:19 ok, let's not use C* native ttl))
14:26:23 Item == row
14:26:48 *easier than per-field
14:27:35 charlesw is TTL per field expected to be needed?
14:28:32 I'd think so
14:28:57 until C* comes up with a solution
14:29:02 we can't have both at the same time
14:29:09 ikhudoshyn why?
14:29:43 I was reading that C* may expose the row marker as a column, then we can set ttl on the row marker
14:29:50 usage seems to be far too complex
14:30:38 charlesw really interesting. Could you point us to where we can read about it?
14:30:43 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Setting-TTL-to-entire-row-UPDATE-vs-INSERT-td7595072.html
14:32:20 that's exactly what ajayaa told in his email
14:33:16 doesn't look like the row marker is available via the CQL api
14:34:05 "Wouldn't it be simpler if Cassandra just let us change the ttl on the row marker?" --> This is an internal impl detail, not supposed to be exposed as public API
14:34:15 that's from that thread
14:35:26 Better to say not exposed
14:35:57 #idea suggest exposing the row marker to the C* community
14:35:58 we could agree 'it's not exposed right now'
14:36:25 #idea overwrite all columns with the new TTL
14:36:40 Does it look correct?
14:36:51 #idea implement per-row ttl manually
14:37:07 using a dedicated ttl field
14:37:09 is ttl allowed on a primary key?
14:37:22 i doubt it
14:37:55 if not, setting ttl on all columns won't work
14:38:17 charlesw +1
14:38:43 But according to your #link http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Setting-TTL-to-entire-row-UPDATE-vs-INSERT-td7595072.html it should work
14:39:37 ikhudoshyn do you mean manually check the TTL and manually delete it?
14:39:49 isviridov: exactly
14:40:14 Sounds like an async job and a check on each read
14:40:29 isviridov: yup
14:40:46 ikhudoshyn will you share your view in the ML?
14:40:55 ok will do
14:41:13 Great, I'll join
14:41:39 charlesw we would like to hear from you as well
14:41:48 Moving on?
14:41:54 +1
14:41:59 sure I'd love to join
14:42:35 Next action item
14:42:40 achudnovets update spec
14:43:10 achudnovets_ was going to drive to the clinic with his son
14:43:19 I think we can go ahead
14:43:32 let's do
14:43:35 #topic Discuss https://blueprints.launchpad.net/magnetodb/+spec/test-concurrent-writes aostapenko
14:43:43 aostapenko the stage is yours
14:44:17 Yes, I'm going to implement these scenarios with tempest
14:44:38 With conditional writes?
14:45:30 I did not write the cases yet, working on the framework
14:45:50 I will share a list of scenarios
14:46:46 I believe it is the only reasonable way to update the row
14:47:16 So we will use it. We'll have negative cases too
14:47:52 charlesw any hints on how aostapenko can do it?
14:48:15 * isviridov Paul would be useful here
14:49:00 I'll ask Paul. Andrei could you please write a doc so we are clear on our cases?
14:49:46 charlesw do you think the bp itself is a good place for this?
14:50:04 yes
14:50:06 charlesw: sure
14:50:15 #action aostapenko charlesw isviridov brainstorm the scenario
14:50:32 aostapenko till then it's not approved
14:50:59 Moving on
14:51:11 #topic Open discussion isviridov
14:51:30 Anything for this topic?
14:51:35 guys pls review aostapenko's patch about swift and let's merge it
14:51:47 Link?
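
To make the manual option above concrete: a rough Python sketch of the "check on each read plus async cleanup job" idea, under the assumption of a dedicated expiry attribute stored with each item. The plain dict standing in for the backend and all names are illustrative, not MagnetoDB driver code.

    # Rough sketch of emulating per-row TTL with a dedicated field: filter
    # expired items on read and purge them with a background job.
    import time


    def is_expired(item, now=None):
        """The item carries its own absolute expiry timestamp (epoch seconds)."""
        now = now if now is not None else time.time()
        expires_at = item.get('expires_at')
        return expires_at is not None and expires_at <= now


    def get_item(store, key):
        """Check-on-read: an expired item is treated as absent and deleted lazily."""
        item = store.get(key)
        if item is None:
            return None
        if is_expired(item):
            store.pop(key, None)  # lazy delete
            return None
        return item


    def cleanup_expired(store):
        """Async/periodic job: sweep the table and drop expired rows."""
        for key in [k for k, v in store.items() if is_expired(v)]:
            store.pop(key, None)


    # Usage with a plain dict standing in for the backend:
    store = {'item1': {'value': 42, 'expires_at': time.time() + 0.1}}
    time.sleep(0.2)
    assert get_item(store, 'item1') is None
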
14:51:48 After discussion with Dima, I plan to refactor some of the notification code
14:52:21 charlesw great
14:52:27 moving notifications from the storage manager layer to the API controller layer
14:52:32 https://review.openstack.org/#/c/146534/
14:52:34 Anything else critical for review?
14:52:39 what do you guys think
14:52:51 charlesw: why?
14:53:45 make the storage manager code cleaner
14:53:51 #action dukhlov_ charlesw aostapenko isviridov review https://review.openstack.org/#/c/146534/
14:54:23 charlesw how will you measure the duration of table async tasks in that case?
14:54:27 charlesw: ... and make the API code messier
14:54:29 ?
14:55:36 i don't mind adding notifications to the API layer
14:55:38 And the request metrics collection can use the notification mechanism. So we won't have two sets of notifications (in API/middleware using StatsD, and in other places using messaging)
14:55:45 ikhudoshyn the more notifications, the more information we have about the system
14:56:25 isviridov: +1, i just wouldn't like to remove the existing notifications from storage
14:56:44 ..hi...
14:56:50 ikhudoshyn: now we have unstructured notifications
14:57:12 miqui_ hello
14:57:17 dukhlov_: what do you mean 'unstructured'?
14:57:21 miqui_: hi
14:57:28 hi miqui
14:57:32 we are sending notifications somewhere
14:57:35 charlesw what do you mean by two sets?
14:57:59 we don't have any strategy for when, where and what we need to send
14:58:25 in the middleware/API controller we send StatsD metrics, in storage we use messaging
14:58:42 maybe it is because we don't have a customer's real use case for that
14:59:15 but now we have the first one - integrate statsd into the notification mechanism
14:59:39 for this case we need request-based notifications
15:00:09 like request done or request failed, and how long the job took
15:00:11 dukhlov_: so let's consider ADDING notifications to the API
15:00:18 Let us return to the use case
15:00:36 1. we need to send information to ceilometer
15:00:46 we plan to have a central event registry, it will describe each event: type, messaging or metrics event name, delivery mechanism (messaging/metrics/both). And use one notifier to decide what to do based on the event description.
15:01:06 which information exactly?
15:01:22 * isviridov will listen a bit
15:01:51 if we just add notifications they will duplicate each other in storage and api
15:02:20 +1
15:02:26 +1
15:02:47 The official meeting is finished
15:02:52 #endmeeting
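
As a rough illustration of the central event registry charlesw described during the open discussion (each event described once, one notifier deciding between messaging and StatsD metrics): a Python sketch with placeholder event names and stub clients; it is not the actual MagnetoDB, oslo.messaging or statsd API.

    # Sketch of a central event registry with a single notifier, per the
    # discussion above. Event names, flags and clients are hypothetical.
    EVENT_REGISTRY = {
        'magnetodb.table.create.end': {'messaging': True, 'metrics': 'table.create'},
        'magnetodb.request.end': {'messaging': False, 'metrics': 'request.duration'},
        'magnetodb.request.error': {'messaging': True, 'metrics': 'request.error'},
    }


    class Notifier(object):
        """Decides how to deliver an event based on its registry description."""

        def __init__(self, messaging_client, statsd_client):
            # Placeholder clients: messaging_client.info(event, payload) and
            # statsd_client.timing(name, ms) stand in for real backends.
            self._messaging = messaging_client
            self._statsd = statsd_client

        def notify(self, event_type, payload, duration_ms=None):
            desc = EVENT_REGISTRY.get(event_type)
            if desc is None:
                return  # unknown events are dropped
            if desc['messaging']:
                self._messaging.info(event_type, payload)
            if desc['metrics'] and duration_ms is not None:
                self._statsd.timing(desc['metrics'], duration_ms)


    class _StubMessaging(object):
        def info(self, event_type, payload):
            print('messaging: %s %s' % (event_type, payload))


    class _StubStatsd(object):
        def timing(self, name, ms):
            print('statsd timing: %s %s' % (name, ms))


    # Usage: a request-duration event goes to metrics only, per its registry entry.
    notifier = Notifier(_StubMessaging(), _StubStatsd())
    notifier.notify('magnetodb.request.end', {'table': 't1'}, duration_ms=42)
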