14:03:43 #startmeeting magnetodb
14:03:43 Meeting started Thu Feb 19 14:03:43 2015 UTC and is due to finish in 60 minutes. The chair is aostapenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:46 The meeting name has been set to 'magnetodb'
14:04:47 #topic action items
14:06:10 dukhlov, how is your investigation about moving to cassandra 2.1.2 going?
14:06:38 still in progress
14:08:11 I am trying to narrow the scope of the problem
14:08:58 now I can reproduce it with only one cassandra node, even with consistency level ALL
14:09:18 also I removed the async task executor
14:09:40 and use SimpleStorageManager
14:09:46 instead
14:10:11 I'm going to file a bug in the cassandra project
14:10:31 so that is it
14:11:48 #action dukhlov file a bug in the cassandra project related to the issue with moving to C* 2.1.2
14:12:01 ok, thank you very much
14:12:36 much appreciated
14:13:02 awesome
14:13:21 Do you need any assistance? me or charlesw could try to help
14:14:05 I'm going to take vacation next week
14:15:05 you could continue my work if I don't manage to finish it
14:16:30 ok, I'll get into the swing of things
14:17:47 Anything else you are working on?
14:18:32 unfortunately not
14:19:33 because of lack of time
14:19:38 dukhlov, what do you think about the backup blueprint https://blueprints.launchpad.net/magnetodb/+spec/backup-restore
14:20:08 I think it is a very useful feature
14:20:11 ikhudoshyn has no ability to finish it
14:20:25 ah, got it
14:20:32 Maybe you could look at what needs to be done
14:20:50 charlesw, Hi
14:20:57 Hi guys
14:21:13 It looks like it has lost priority
14:21:27 Am I wrong?
14:22:11 the blueprint has high priority, and much of the code is in the repo already
14:22:25 so I suggest not dropping it
14:23:09 and doing at least a simple implementation
14:24:09 I would think the migration utility is more urgent
14:24:32 charlesw: could you adopt it?
14:24:32 that is good but who will take care of it?
14:25:07 It's WIP. We need to understand what's missing, and what the remaining work is
14:25:49 And the same thing with the table metrics
14:26:44 As I understand, we have a lack of resources
14:27:41 So I suggest sharing these blueprints between dukhlov and charlesw
14:28:11 can you help us understand what the gap is?
14:29:50 I can take backup/restore after vacation
14:30:02 Great, dukhlov
14:30:11 I'll assign it to you
14:30:15 I'll take the other two
14:30:55 but I need help understanding the gaps
14:31:01 Great
14:32:01 https://blueprints.launchpad.net/magnetodb/+spec/migration-script
14:32:13 https://blueprints.launchpad.net/magnetodb/+spec/statsd-tables-metrics
14:32:18 https://blueprints.launchpad.net/magnetodb/+spec/backup-restore
14:33:25 let's move on
14:33:43 charlesw, what about "Create a blueprint on periodically converting healthcheck API call results to metrics"
14:34:32 I looked at the table metrics blueprint. They are similar. I was thinking we need a general approach instead.
14:35:34 the table metrics spec was saying the daemon is an add-on service, in contrib maybe
14:35:44 charlesw, could you add it to the existing blueprint or create a new dependent one?
14:36:08 But do we really need to go with that approach?
14:36:52 I would think once the statsd metrics patch is merged, we can just have an optional service in core MagnetoDB
14:37:17 instead of having a separate service, which adds to the complexity of deployment
14:40:04 Say we have a periodic task runner in the MagnetoDB API server, which can run any task at any interval.
We don't need to deploy another separate service
14:40:54 It should be some background task
14:41:35 whether it will be a daemon or a periodic task runner in the MagnetoDB API server is another question
14:41:56 and it can differ depending on the deployment
14:42:06 Also the current approach of going thru the rest API will have the problem of Keystone authn and authz
14:43:06 but we should provide an API for that daemon/async task
14:43:37 for what purpose? to configure intervals/tasks?
14:43:45 "Also the current approach of going thru the rest API will have the problem of Keystone authn and authz" - it is not fully true, we can disable authentication
14:44:18 It will have to pass the policy check
14:44:34 Am I missing something?
14:45:06 ok, I'm personally not sure that an async job in the scope of the MagnetoDB server process is a good idea
14:45:20 but we can do it if we have an api
14:45:59 because moving it to a daemon is easy
14:45:59 I'm not sure if I fully understand what you mean by api?
14:46:18 the monitoring rest API for getting table metrics
14:47:02 "It will have to pass the policy check" - it is up to us
14:47:25 I'd say statsd is the way to go in general, instead of the API
14:48:25 If we just do statsd without the API, we won't have such problems.
14:48:35 statsd is quite a different thing as far as I understand
14:49:25 statsd is a passive service. It waits for notifications
14:50:02 That's why we need a periodic task runner to send notifications
14:51:10 ok, I agree, but how will this periodic task get information about table metrics
14:51:11 ?
14:52:33 You can either call the storage manager directly, or go thru the API layer with an internal special role
14:54:35 I prefer the old approach thru the monitoring api
14:55:05 At least we have it already and I think it is not reasonable to remove it
14:55:21 dukhlov, agree
14:55:35 but you can use the storage manager directly if your async task works as part of magnetodb
14:55:43 It wouldn't work for our use cases
14:56:02 why?
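[Editor's note] The statsd approach discussed above (statsd is passive, so a periodic task runner inside the API server must push metrics to it) can be sketched as follows. This is a minimal illustration, not MagnetoDB code: `collect_table_metrics`, the metric names, and the values are hypothetical stand-ins for a direct storage-manager call or a monitoring-API request; only the statsd gauge datagram format (`name:value|g` over UDP) is taken from the statsd protocol.

```python
import socket
import threading

STATSD_ADDR = ("127.0.0.1", 8125)  # conventional statsd UDP port


def collect_table_metrics():
    """Hypothetical stand-in for calling the storage manager directly
    or going through the monitoring API with an internal role."""
    return {"magnetodb.table.users.size_bytes": 1024,
            "magnetodb.table.users.item_count": 42}


def push_metrics_to_statsd(sock=None):
    """Send each metric to statsd as a gauge datagram: 'name:value|g'."""
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for name, value in collect_table_metrics().items():
        sock.sendto(("%s:%d|g" % (name, value)).encode("ascii"), STATSD_ADDR)


def run_periodically(task, interval_seconds, stop_event):
    """Tiny periodic task runner: run `task` every `interval_seconds`
    until `stop_event` is set (wait() doubles as the sleep)."""
    while not stop_event.wait(interval_seconds):
        task()


if __name__ == "__main__":
    stop = threading.Event()
    worker = threading.Thread(
        target=run_periodically,
        args=(push_metrics_to_statsd, 60.0, stop))
    worker.daemon = True
    worker.start()
    # ... the API server keeps serving requests; set `stop` on shutdown ...
    stop.set()
    worker.join()
```

Because statsd listens on UDP and the pushing task runs in-process, this avoids deploying a separate daemon; whether that in-process approach or a standalone daemon is preferable was left open in the discussion above.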
14:56:14 we need statsd metrics
14:56:28 to integrate with our monitoring system
14:57:57 statsd metrics are another matter
14:58:25 the question is where to locate the async task
14:59:35 one question: the monitoring API, what kind of keystone authn do we need?
14:59:58 it is fully configurable
15:00:10 we can fully disable it
15:00:53 Can anybody use the monitoring API to find out table metrics?
15:01:36 anybody from the internal network
15:01:47 charlesw, monitoring requests should be forbidden externally
15:02:02 without authn?
15:02:15 yes
15:02:18 that was the idea
15:02:25 sounds like a security hole
15:03:20 If we use statsd metrics, we won't have such an issue
15:03:46 we have the same issue
15:04:05 because the cassandra port is open to the whole internal network
15:04:24 We will have to fix that later
15:04:43 how?
15:04:43 We can't use that as an excuse to add more vulnerability
15:05:05 using ssl, user name/password, etc
15:06:49 #action dukhlov, charlesw, aostapenko make a decision about the monitoring system
15:06:53 We are out of time, we can continue the discussion after the meeting in this channel
15:07:02 #endmeeting