*** dmakogon_ has quit IRC | 00:19 | |
*** achanda has quit IRC | 00:31 | |
*** dmakogon_ has joined #magnetodb | 00:33 | |
*** achanda has joined #magnetodb | 00:37 | |
*** achanda has quit IRC | 00:50 | |
*** achanda has joined #magnetodb | 00:54 | |
*** rushiagr_away has quit IRC | 01:13 | |
*** rushiagr_away has joined #magnetodb | 01:14 | |
*** achanda has quit IRC | 01:30 | |
*** charlesw has joined #magnetodb | 01:48 | |
*** charlesw_ has joined #magnetodb | 02:01 | |
*** charlesw has quit IRC | 02:06 | |
*** charlesw_ is now known as charlesw | 02:06 | |
*** achanda has joined #magnetodb | 02:31 | |
*** achanda has quit IRC | 02:36 | |
*** achanda has joined #magnetodb | 03:33 | |
*** achanda has quit IRC | 03:37 | |
*** vivekd has joined #magnetodb | 04:21 | |
*** charlesw has quit IRC | 04:23 | |
*** ajayaa has joined #magnetodb | 04:32 | |
*** achanda has joined #magnetodb | 05:34 | |
*** achanda has quit IRC | 05:39 | |
*** ajayaa has quit IRC | 06:47 | |
*** achanda has joined #magnetodb | 06:53 | |
*** achanda has quit IRC | 07:39 | |
*** achanda has joined #magnetodb | 07:46 | |
*** romainh has joined #magnetodb | 07:50 | |
*** achanda has quit IRC | 08:06 | |
*** ajayaa has joined #magnetodb | 08:17 | |
*** achanda has joined #magnetodb | 08:20 | |
*** achanda has quit IRC | 08:31 | |
*** ygbo has joined #magnetodb | 09:06 | |
*** ajayaa has quit IRC | 09:34 | |
*** ajayaa has joined #magnetodb | 09:38 | |
*** ajayaa has quit IRC | 09:43 | |
openstackgerrit | Illia Khudoshyn proposed stackforge/magnetodb: Add restore manager https://review.openstack.org/146909 | 11:21 |
openstackgerrit | Illia Khudoshyn proposed stackforge/magnetodb: (WIP) Add simple backup implementation https://review.openstack.org/148963 | 12:54 |
*** dmakogon_ is now known as denis_makogon | 12:58 | |
*** charlesw has joined #magnetodb | 13:19 | |
isviridov | Hello aostapenko | 13:39 |
aostapenko | Hello, isviridov | 13:39 |
*** rushiagr_away is now known as rushiagr | 13:40 | |
*** [o__o] has quit IRC | 13:43 | |
*** [o__o] has joined #magnetodb | 13:44 | |
*** miqui has joined #magnetodb | 13:54 | |
*** charlesw has quit IRC | 13:57 | |
*** miqui_ has joined #magnetodb | 13:57 | |
isviridov | Hello everybody | 13:58 |
isviridov | Anybody here for meeting? | 13:58 |
ominakov | o/ | 13:58 |
isviridov | Hello ominakov | 14:00 |
aostapenko | hello, everyone | 14:01 |
isviridov | ikhudoshyn aostapenko charlesw | 14:01 |
isviridov | dukhlov let us start | 14:01 |
isviridov | #startmeeting magnetodb | 14:01 |
openstack | Meeting started Thu Jan 22 14:01:32 2015 UTC and is due to finish in 60 minutes. The chair is isviridov. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:01 |
openstack | The meeting name has been set to 'magnetodb' | 14:01 |
isviridov | Today agenda https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda#Agenda | 14:01 |
isviridov | o/ | 14:01 |
ikhudoshyn | щ/ | 14:02 |
ominakov | o/ | 14:02 |
isviridov | ikhudoshyn cyrillic here | 14:03 |
ikhudoshyn | oh, really ?! | 14:03 |
isviridov | It looks like Boston is not with us today | 14:03 |
ikhudoshyn | gonna be fast? | 14:03 |
isviridov | Let us go through the action items | 14:03 |
isviridov | ajayaa start mail discussion about TTL implementation | 14:04 |
isviridov | #topic Go through action items isviridov | 14:04 |
isviridov | So, the question was raised | 14:05 |
isviridov | Everybody seen this email? | 14:05 |
isviridov | ikhudoshyn aostapenko? | 14:06 |
ikhudoshyn | hm | 14:06 |
ikhudoshyn | oh, yea | 14:07 |
ikhudoshyn | there was | 14:07 |
isviridov | #link http://osdir.com/ml/openstack-dev/2015-01/msg00944.html | 14:07 |
ikhudoshyn | isviridov: tnx | 14:07 |
ikhudoshyn | move on? | 14:07 |
isviridov | ikhudoshyn I expected to hear some opinions | 14:08 |
* isviridov I know it is hard without dukhlov | 14:09 | |
ikhudoshyn | i don't like the idea of only enabling ttl on insert | 14:09 |
isviridov | The question is how to implement update item with TTL | 14:09 |
isviridov | ikhudoshyn +1 | 14:09 |
ikhudoshyn | i think we should first agree whether Dima's native LSI are prod-ready, then switch to them | 14:10 |
ikhudoshyn | and then re-evaluate the required efforts | 14:11 |
ikhudoshyn | from what i've seen our insert/update/delete logic becomes much cleaner with the latter | 14:11 |
isviridov | ikhudoshyn I think that we have adopted LSI already | 14:12 |
ikhudoshyn | we've merged them, it's not the same for me)) | 14:12 |
isviridov | Here it is the same :) | 14:12 |
isviridov | Merged and switched our configuration | 14:13 |
ikhudoshyn | ok, so we could go with the 2nd step) | 14:13 |
ikhudoshyn | are we to keep our outdated stuff forever? | 14:13 |
ikhudoshyn | (like old LSI implementation) | 14:13 |
ikhudoshyn | if we think that the new LSI is good enough let's get rid of the old one | 14:14 |
isviridov | ikhudoshyn now that we have migration we can forget about it | 14:14 |
isviridov | I see no reason to support it anymore | 14:14 |
aostapenko | I don't think we should support previous implementation | 14:14 |
ikhudoshyn | hurray | 14:15 |
* ikhudoshyn loves throwing away old stuff | 14:15 | |
* isviridov not sure if ominakov has created BP for migration | 14:15 | |
* isviridov loves it as well | 14:15 | |
ikhudoshyn | getting back to ttl, let's re-check estimates | 14:15 |
ominakov | isviridov, i'll do it | 14:16 |
isviridov | The key problem is: C* doesn't support TTL per row | 14:16 |
ikhudoshyn | yep, we should emulate it | 14:16 |
ikhudoshyn | we may consider having it per item but thus we'd diverge from AWS API | 14:17 |
isviridov | Means we have to update whole row | 14:17 |
ikhudoshyn | isviridov: exactly | 14:18 |
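For context on why the whole row must be rewritten: Cassandra applies TTL per written cell, not per row, so refreshing an item's expiration means re-writing every non-key attribute with the new TTL (primary key columns cannot carry a TTL at all). A minimal sketch with the Python cassandra-driver; the keyspace, table, and column names are hypothetical:

```python
# Sketch: emulating a per-row TTL refresh by rewriting every non-key column.
# Cassandra applies TTL per written cell, so only the cells written here get
# the new expiration; 'magnetodb', 'user_data', and the columns are made up.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('magnetodb')

new_ttl = 3600  # seconds until the rewritten cells expire

# The primary key column ('id') cannot carry a TTL itself; the row disappears
# only once all of its non-key cells (and any row marker) have expired.
session.execute(
    "UPDATE user_data USING TTL %s SET name = %s, email = %s WHERE id = %s",
    (new_ttl, 'alice', 'alice@example.com', 42),
)
```

This is why updating a single attribute cannot simply refresh the row's TTL: any cell left untouched keeps its old expiration.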
* isviridov is checking TTLs support in AWS | 14:18 | |
*** charlesw has joined #magnetodb | 14:18 | |
isviridov | ikhudoshyn are there any TTL in AWS? | 14:18 |
ikhudoshyn | hm, i guess it was | 14:19 |
* ikhudoshyn seems to be wrong | 14:20 | |
ikhudoshyn | http://www.datastax.com/dev/blog/amazon-dynamodb no ttl in dynamodb | 14:20 |
isviridov | ikhudoshyn it doesn't, according to the AWS API | 14:20 |
isviridov | Ok, let us return back to TTL per row in mdb api only :) | 14:21 |
ikhudoshyn | the only issue i could think of, is a backend that wouldn't have native ttl | 14:21 |
ikhudoshyn | i mean, we could have it per attribute, but.. | 14:22 |
isviridov | ikhudoshyn it is the responsibility of the driver author to implement it or not | 14:22 |
* isviridov doesn't like per-table TTL | 14:22 | |
ikhudoshyn | if we'd want to support another backend w/o ttl, emulating ttl per item would be much more complex | 14:22 |
ikhudoshyn | so.. | 14:24 |
isviridov | Do you mean that TTL feature is not needed? | 14:24 |
ikhudoshyn | are we to have per item ttl? | 14:24 |
ikhudoshyn | isviridov: no, i just mean that emulating per-row ttl is easier than per item | 14:25 |
isviridov | #link https://blueprints.launchpad.net/magnetodb/+spec/row-expiration by Symantec | 14:25 |
ikhudoshyn | ok, let's not use C* native ttl)) | 14:26 |
isviridov | Item == row | 14:26 |
ikhudoshyn | *easier than per-field | 14:26 |
isviridov | charlesw is TTL per field expected to be needed? | 14:27 |
*** dukhlov_ has joined #magnetodb | 14:28 | |
charlesw | I'd think so | 14:28 |
charlesw | until C* comes up with a solution | 14:28 |
ikhudoshyn | we can't have both at the same time | 14:29 |
isviridov | ikhudoshyn why? | 14:29 |
charlesw | I was reading C* may expose row marker as a column, then we can set ttl on row marker | 14:29 |
ikhudoshyn | usage seems to be far too complex | 14:29 |
isviridov | charlesw really interesting. Could you point us where we can read it? | 14:30 |
charlesw | http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Setting-TTL-to-entire-row-UPDATE-vs-INSERT-td7595072.html | 14:30 |
ikhudoshyn | that's exactly what ajayaa said in his email | 14:32 |
isviridov | doesn't look like the row marker is available via the CQL api | 14:33 |
ikhudoshyn | "Wouldn't it be simpler if Cassandra just let us change the ttl on the row marker?" --> This is internal impl details, not supposed to be exposed as public API | 14:34 |
ikhudoshyn | that's from that thread | 14:34 |
isviridov | Better to say not exposed | 14:35 |
isviridov | #idea suggest exposing the row marker to the C* community | 14:35 |
ikhudoshyn | we could agree 'it's not exposed right now' | 14:35 |
isviridov | #idea overwrite all columns with new TTL | 14:36 |
isviridov | Does it look correct? | 14:36 |
ikhudoshyn | #idea implement per-row ttl manually | 14:36 |
ikhudoshyn | using dedicated ttl field | 14:37 |
charlesw | is ttl allowed on primary key? | 14:37 |
ikhudoshyn | i doubt | 14:37 |
charlesw | if not, setting ttl on all columns won't work | 14:37 |
isviridov | charlesw +1 | 14:38 |
isviridov | But according to you #link http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Setting-TTL-to-entire-row-UPDATE-vs-INSERT-td7595072.html it should work | 14:38 |
isviridov | ikhudoshyn do you mean manually check TTL and manually delete it? | 14:39 |
ikhudoshyn | isviridov: exactly | 14:39 |
isviridov | Sounds like async job and check on each read | 14:40 |
ikhudoshyn | isviridov: yup | 14:40 |
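A rough sketch of the manual approach just agreed on (a dedicated expiration attribute filtered on every read, plus an async reaper job); the class, attribute, and backend method names below are illustrative, not existing MagnetoDB code:

```python
# Sketch of manual per-row TTL: hide expired rows on read, delete them in a
# periodic background job. The backend interface used here is hypothetical.
import time


class TTLFilteringStore(object):
    def __init__(self, backend):
        self._backend = backend  # expects get_item/delete_item/scan_expired

    def get_item(self, table, key):
        item = self._backend.get_item(table, key)
        if item is None:
            return None
        expires_at = item.get('expires_at')
        if expires_at is not None and expires_at <= time.time():
            # Logically expired: hide it from the caller; the reaper (or an
            # inline delete) removes the physical row later.
            return None
        return item


def reap_expired(backend, table, now=None):
    """Async cleanup job: physically delete rows whose TTL has passed."""
    now = now if now is not None else time.time()
    for key in backend.scan_expired(table, now):
        backend.delete_item(table, key)
```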
isviridov | ikhudoshyn will you share your view in ML? | 14:40 |
ikhudoshyn | ok will do | 14:40 |
isviridov | Great, I'll join | 14:41 |
isviridov | charlesw we would like to hear from you as well | 14:41 |
isviridov | Moving on? | 14:41 |
ikhudoshyn | +1 | 14:41 |
charlesw | sure I'd love to join | 14:41 |
isviridov | Next action item | 14:42 |
isviridov | achudnovets update spec | 14:42 |
isviridov | achudnovets_ was going to drive to clinic with his son | 14:43 |
isviridov | I think we can go ahead | 14:43 |
ikhudoshyn | lets do | 14:43 |
isviridov | #topic Discuss https://blueprints.launchpad.net/magnetodb/+spec/test-concurrent-writes aostapenko | 14:43 |
isviridov | aostapenko the stage is yours | 14:43 |
aostapenko | Yes, I'm going to implement these scenarios with tempest | 14:44 |
isviridov | With conditional writes? | 14:44 |
aostapenko | I have not written the cases yet, working on the framework | 14:45 |
aostapenko | I will share a list of scenarios | 14:45 |
isviridov | I believe it is the only reasonable way to update the row | 14:46 |
aostapenko | So we will use it. We'll have negative cases too | 14:47 |
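As a rough illustration of the scenario shape (not the final tempest code), a concurrent conditional-write check boils down to two writers racing on the same item and asserting that exactly one wins; the in-memory store below only stands in for the real MagnetoDB client:

```python
# Sketch of a test-concurrent-writes scenario built on conditional puts.
import threading


class ConditionalCheckFailed(Exception):
    pass


class FakeStore(object):
    """Stand-in client with compare-and-set (conditional put) semantics."""

    def __init__(self):
        self._items = {}
        self._lock = threading.Lock()

    def put_item(self, key, item, expected_version=None):
        with self._lock:
            current = self._items.get(key)
            if expected_version is not None:
                if current is None or current['version'] != expected_version:
                    raise ConditionalCheckFailed()
            self._items[key] = item


store = FakeStore()
store.put_item('item-1', {'version': 1, 'payload': 'initial'})

results = []


def writer(payload):
    try:
        # Only succeed if nobody else has bumped the item past version 1.
        store.put_item('item-1', {'version': 2, 'payload': payload},
                       expected_version=1)
        results.append('ok')
    except ConditionalCheckFailed:
        results.append('condition_failed')


threads = [threading.Thread(target=writer, args=(p,)) for p in ('a', 'b')]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one writer wins; the other hits the negative (conditional) case.
assert sorted(results) == ['condition_failed', 'ok']
```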
isviridov | charlesw any hints how aostapenko can do it? | 14:47 |
* isviridov Paul would be useful here | 14:48 | |
charlesw | I'll ask Paul. Andrei could you please write a doc so we are clear on our cases? | 14:49 |
isviridov | charlesw do you think bp itself is a good place for this? | 14:49 |
charlesw | yes | 14:50 |
aostapenko | charlesw: sure | 14:50 |
isviridov | #action aostapenko charlesw isviridov brainstorm the scenario | 14:50 |
isviridov | aostapenko till then it is not approved | 14:50 |
*** idegtiarov_ is now known as idegtiarov | 14:50 | |
isviridov | Moving on | 14:50 |
isviridov | #topic Open discussion isviridov | 14:51 |
isviridov | Anything for this topic? | 14:51 |
*** cl__ has joined #magnetodb | 14:51 | |
ikhudoshyn | guys please review aostapenko's patch about Swift and let's merge it | 14:51 |
isviridov | Link? | 14:51 |
charlesw | After discussion with Dima, I plan to refactor some of the notification code | 14:51 |
isviridov | charlesw great | 14:52 |
charlesw | moving notification from storage manager layer to API controller layer | 14:52 |
ikhudoshyn | https://review.openstack.org/#/c/146534/ | 14:52 |
isviridov | Anything else critical for review? | 14:52 |
charlesw | what do you guys think | 14:52 |
ikhudoshyn | charlesw: why? | 14:52 |
charlesw | make storage manager code cleaner | 14:53 |
isviridov | #action dukhlov_ charlesw aostapenko isviridov review https://review.openstack.org/#/c/146534/ | 14:53 |
isviridov | charlesw how will you measure the duration of async table tasks if so? | 14:54 |
ikhudoshyn | charlesw: ... and make API code messier | 14:54 |
ikhudoshyn | ? | 14:54 |
ikhudoshyn | i don't mind adding notifications to API layer | 14:55 |
charlesw | And the request metrics collection can use the notification mechanism. So we won't have two sets of notification (in API/middleware using StatsD, and other places using messaging) | 14:55 |
isviridov | ikhudoshyn the more notifications, the more information we have about the system | 14:55 |
ikhudoshyn | isviridov: +1, i just wouldn't like to remove existing notifications from storage | 14:56 |
miqui_ | ..hi... | 14:56 |
dukhlov_ | ikhudoshyn: now we have unstructured notifications | 14:56 |
isviridov | miqui_ hello | 14:57 |
ikhudoshyn | dukhlov_: what d'you mean 'unstructured' | 14:57 |
ikhudoshyn | miqui_: hi | 14:57 |
charlesw | hi miqui | 14:57 |
dukhlov_ | we are sending notifications somewhere | 14:57 |
isviridov | charlesw what do you mean by two sets? | 14:57 |
dukhlov_ | we don't have any strategy for when, where, and what we need to send | 14:57 |
charlesw | in the middleware/API controller we send StatsD metrics, in storage we use messaging | 14:58 |
dukhlov_ | maybe it is because we don't have a customer's real use case for that | 14:58 |
dukhlov_ | but now we have the first one - integrate StatsD into the notification mechanism | 14:59 |
dukhlov_ | for this case we need request-based notifications | 14:59 |
dukhlov_ | like request done or request failed, and how much time the job took | 15:00 |
ikhudoshyn | dukhlov_: so let's consider ADDING notifications to the API | 15:00 |
isviridov | Let us return back to use case | 15:00 |
isviridov | 1. we need to send information to ceilometer | 15:00 |
charlesw | we plan to have a central event registry; it will describe each event: type, messaging or metrics event name, delivery mechanism (messaging/metrics/or both). And use one notifier to decide what to do based on the event description. | 15:00 |
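A minimal sketch of what such a registry plus unified notifier could look like; the event names and the statsd/messaging client interfaces are assumptions, not the planned implementation:

```python
# Sketch of a central event registry: each event declares how it is delivered,
# and one notifier dispatches accordingly. Names here are illustrative.
EVENT_REGISTRY = {
    'magnetodb.table.create.end': {'delivery': ('messaging',)},
    'magnetodb.req.put_item':     {'delivery': ('metrics',)},
    'magnetodb.req.delete_table': {'delivery': ('messaging', 'metrics')},
}


class UnifiedNotifier(object):
    def __init__(self, messaging_notifier, statsd_client):
        self.messaging = messaging_notifier  # e.g. an oslo.messaging Notifier
        self.statsd = statsd_client          # e.g. a StatsD client with timing()

    def notify(self, context, event, payload):
        delivery = EVENT_REGISTRY.get(event, {}).get('delivery', ())
        if 'messaging' in delivery:
            self.messaging.info(context, event, payload)
        if 'metrics' in delivery:
            # Report request duration (or any numeric payload field) to StatsD.
            self.statsd.timing(event, payload.get('duration', 0))
```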
dukhlov_ | which information exactly? | 15:01 |
* isviridov will listen a bit | 15:01 | |
dukhlov_ | if we just add notifications they will duplicate each other in storage and api | 15:01 |
charlesw | +1 | 15:02 |
aostapenko | +1 | 15:02 |
isviridov | The official meeting finished | 15:02 |
isviridov | #endmeeting | 15:02 |
openstack | Meeting ended Thu Jan 22 15:02:52 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:02 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/magnetodb/2015/magnetodb.2015-01-22-14.01.html | 15:02 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/magnetodb/2015/magnetodb.2015-01-22-14.01.txt | 15:02 |
openstack | Log: http://eavesdrop.openstack.org/meetings/magnetodb/2015/magnetodb.2015-01-22-14.01.log.html | 15:02 |
isviridov | dukhlov_ it depends what this notification describes | 15:03 |
dukhlov_ | at storage layer it is reasonable to have only notifications for async jobs | 15:03 |
dukhlov_ | isviridov, sure | 15:04 |
isviridov | dukhlov_ agree, or specific for storage | 15:04 |
ikhudoshyn | dukhlov_: are we to limit our set of sent notifications to request_arrived/request_completed/request_failed ? | 15:04 |
isviridov | charlesw dukhlov_ do we have specific example right now? | 15:05 |
isviridov | I mean the notification we are going to move out of storage | 15:07 |
aostapenko | only from manager as I understood | 15:11 |
ikhudoshyn | btw, we already have two APIs, are we to have notifications duplicated | 15:11 |
ikhudoshyn | ? | 15:11 |
charlesw | ikhudoshyn, we need to have a unified notifier, and a central event registry, notifier will inspect the event and decide how to deliver message | 15:13 |
dukhlov_ | actually yes | 15:13 |
dukhlov_ | because at least they are different events - a call to ddb vs mdb | 15:14 |
ikhudoshyn | dukhlov_: 'yes' to duplicating notification code? does not sound good to me | 15:14 |
dukhlov_ | first of all it can be middleware | 15:15 |
ikhudoshyn | dukhlov_: 'middleware' sounds good )) | 15:16 |
charlesw | All API layer notifications can be moved to middleware | 15:16 |
ikhudoshyn | but only if we could rip ALL notification code from the rest of the codebase | 15:16 |
dukhlov_ | and instead of a lot of notifier.notify in each method we can do try {do_request, notify_done} catch {notify.error} | 15:16 |
isviridov | charlesw do we have any now? | 15:17 |
charlesw | only async notifications will stay in storage manager | 15:17 |
dukhlov_ | we can, but for this we need to collect information in request.context in api and storage layers | 15:17 |
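The try/notify pattern dukhlov_ describes, applied at the middleware level, could look roughly like this; the middleware class and event names are illustrative, not MagnetoDB's actual code:

```python
# Sketch of request-scoped notifications emitted once per request from a
# middleware, instead of ad-hoc notifier.notify calls inside each method.
import time


class NotifyingMiddleware(object):
    def __init__(self, app, notifier):
        self.app = app            # wrapped API application
        self.notifier = notifier  # unified notifier (see the registry sketch above)

    def process_request(self, request):
        started = time.time()
        payload = {'method': request.method, 'path': request.path}
        try:
            response = self.app(request)
        except Exception as exc:
            payload.update(error=str(exc), duration=time.time() - started)
            self.notifier.notify(request.context, 'magnetodb.request.error', payload)
            raise
        payload.update(status=response.status_code, duration=time.time() - started)
        self.notifier.notify(request.context, 'magnetodb.request.done', payload)
        return response
```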
ikhudoshyn | i think it would be nice to have a table of all our notifications with current location and where to move | 15:18 |
charlesw | in progress, I'll send out new patch today hopefully | 15:18 |
dukhlov_ | ikhudoshyn, agree | 15:18 |
charlesw | I'll put together a doc | 15:18 |
isviridov | charlesw yes, even without patch | 15:19 |
ikhudoshyn | charlesw: that would be really helpful | 15:19 |
charlesw | will do soon | 15:19 |
*** cl__ has quit IRC | 15:20 | |
*** vivekd has quit IRC | 15:23 | |
ominakov | charlesw, ikhudoshyn thanks for the comments on my patch (https://review.openstack.org/#/c/147162/4). I have one more question. When we do the delete in the async-task-executor, the table is already DELETING, so we can't determine whether it is a first or second delete | 15:46 |
charlesw | ominakov, why would we delete a table already in DELETING state? We can do delete for DELETE_FAILED. | 15:49 |
ominakov | charlesw, yep, but when the task executor picks the task from the queue, the table is already in the DELETING state | 15:51 |
ikhudoshyn | 'cos manager puts it in DELETING state before passing it to async executor)) | 15:51 |
ikhudoshyn | i think we should refactor that | 15:52 |
ominakov | ikhudoshyn, thx | 15:52 |
ominakov | ikhudoshyn, +1 | 15:52 |
ikhudoshyn | we need additional statuses like {DELETE, CREATE, whatsoever}_REQUEST_ACCEPTED when the request arrives | 15:53 |
ikhudoshyn | and use DELETING/CREATING only when async exec actually performs the operation | 15:53 |
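The split being proposed could be sketched as a small state table: the API/manager layer only records that a request was accepted, and the async executor owns the *_ING states. The *_REQUEST_ACCEPTED names are the proposal, not current code:

```python
# Sketch of the proposed table status flow.
CREATE_REQUEST_ACCEPTED = 'CREATE_REQUEST_ACCEPTED'
CREATING = 'CREATING'
ACTIVE = 'ACTIVE'
DELETE_REQUEST_ACCEPTED = 'DELETE_REQUEST_ACCEPTED'
DELETING = 'DELETING'
DELETE_FAILED = 'DELETE_FAILED'

# Who may move a table from which state to which.
ALLOWED_TRANSITIONS = {
    'api': {
        ACTIVE: DELETE_REQUEST_ACCEPTED,
        DELETE_FAILED: DELETE_REQUEST_ACCEPTED,  # retrying a failed delete
    },
    'async_executor': {
        CREATE_REQUEST_ACCEPTED: CREATING,
        DELETE_REQUEST_ACCEPTED: DELETING,
    },
}


def next_status(component, current):
    """Return the status 'component' may move a table in 'current' to."""
    try:
        return ALLOWED_TRANSITIONS[component][current]
    except KeyError:
        raise ValueError('%s may not transition a table in state %s'
                         % (component, current))
```

With states like these, the executor picking up a task can tell a freshly accepted delete from one it has already started.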
charlesw | what's the problem if you delete a table that is already deleted but whose table_info entry hasn't been removed? | 15:56 |
*** charlesw has quit IRC | 16:05 | |
*** charlesw has joined #magnetodb | 16:17 | |
*** romainh has left #magnetodb | 16:23 | |
ominakov | charlesw, in the async-task-executor we don't know whether the table was already deleted but the table_info entry hasn't been removed, or whether it is an active table | 16:38 |
charlesw | Then you can just go ahead and delete again. Drop table if exists should work. | 16:42 |
charlesw | DELETE is supposed to be idempotent. Response codes 404 or 200 should both be ok | 16:42 |
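On the idempotency point, the executor side can simply tolerate the table already being gone, roughly as below; `table_info_repo` and its `ignore_missing` flag are hypothetical, and `DROP TABLE IF EXISTS` does the rest:

```python
# Sketch: idempotent delete in the async task executor. Re-running the task
# for an already-dropped table converges on the same end state.
def delete_table(session, table_info_repo, keyspace, table_name):
    # CQL does not allow binding identifiers, so the names are interpolated;
    # IF EXISTS makes a repeated drop a no-op instead of an error.
    session.execute('DROP TABLE IF EXISTS "%s"."%s"' % (keyspace, table_name))
    # Treat a missing table_info record as already-deleted rather than a failure.
    table_info_repo.delete(keyspace, table_name, ignore_missing=True)
```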
*** ygbo has quit IRC | 16:51 | |
*** charlesw has quit IRC | 17:08 | |
ominakov | we have no problem with the response code, it's just that the async-task-executor can't decide whether or not to suppress the exception from the backend | 17:09 |
*** charlesw has joined #magnetodb | 17:09 | |
*** isviridov is now known as isviridov_away | 17:32 | |
*** achanda has joined #magnetodb | 17:49 | |
*** denis_makogon has quit IRC | 17:57 | |
*** ajayaa has joined #magnetodb | 17:58 | |
*** charlesw has quit IRC | 18:03 | |
*** charlesw has joined #magnetodb | 18:03 | |
openstackgerrit | Alexander Chudnovets proposed stackforge/magnetodb: (WIP) Monitoring API URLs refactoring https://review.openstack.org/145247 | 18:12 |
*** rushiagr is now known as rushiagr_away | 18:13 | |
*** ajayaa has quit IRC | 18:32 | |
*** achanda has quit IRC | 19:13 | |
*** achanda has joined #magnetodb | 19:15 | |
*** charlesw has quit IRC | 19:20 | |
*** achanda has quit IRC | 19:51 | |
*** achanda has joined #magnetodb | 19:57 | |
*** achanda has quit IRC | 20:12 | |
*** achanda has joined #magnetodb | 20:23 | |
openstackgerrit | Andrei V. Ostapenko proposed stackforge/magnetodb: Migrates to oslo.context library https://review.openstack.org/149393 | 20:38 |
*** romainh has joined #magnetodb | 20:51 | |
*** romainh has left #magnetodb | 20:51 | |
openstackgerrit | Andrei V. Ostapenko proposed stackforge/magnetodb: Adds Swift support https://review.openstack.org/146534 | 21:37 |
*** charlesw has joined #magnetodb | 22:03 | |
*** dukhlov_ has quit IRC | 23:02 | |
*** charlesw has quit IRC | 23:32 | |
*** dukhlov_ has joined #magnetodb | 23:50 |