14:10:18 #startmeeting Magnetodb Weekly Meeting
14:10:19 Meeting started Thu Feb 12 14:10:18 2015 UTC and is due to finish in 60 minutes. The chair is aostapenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:10:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:10:24 The meeting name has been set to 'magnetodb_weekly_meeting'
14:10:34 Hi, all
14:10:54 I will chair the meeting this time
14:12:58 Hi, dukhlov, ikhudoshyn, achuprin_, keith_newstadt, miqui
14:13:09 ...hello...
14:13:26 hi Andrew
14:15:09 We have no action items from the previous meeting and no special status in the agenda, so I propose to walk through statuses
14:15:54 dukhlov, what about your patch https://review.openstack.org/152513
14:16:11 Moving to cassandra 2.1.2
14:17:09 hm, I faced some strange behavior
14:18:13 #topic Moving to cassandra 2.1.2
14:18:16 It works fine, but our tests on the gate run very slowly and sometimes don't even meet the job timeout
14:18:41 I'm troubleshooting this problem
14:18:48 what I know now...
14:19:21 the problem is only with test_list_tables
14:19:52 there we create a few tables (5)
14:20:27 and then when the test is over we clean up those tables
14:21:13 magnetodb processes this request, sets the status to DELETING and creates a job for the async task manager
14:22:14 then I see in the logs that this job is processed and DELETE FROM table_info WHERE tenant= AND name= is executed
14:24:10 list_tables has its own cleanup mechanism. I will try to assist you with this investigation
14:24:36 but at that point our tests are executing describe table to check that the table is deleted, and somehow after the job execution they keep receiving that the table is DELETING for 2 minutes
14:24:52 and then somehow the table is gone at last
14:25:50 why we have such a 2 minute delay I still cannot understand, and I am continuing the investigation
14:26:18 It's our default delay for deleting a table in tempest
14:26:19 aostapenko: I saw it
14:26:54 but in case of a timeout it should raise an exception
14:27:07 Are you sure that the table is gone? Not in the DELETE_FAILED state?
14:27:19 I am sure
14:27:28 At least I think so
14:27:47 Ok. Let's continue the investigation
14:29:09 #action dukhlov aostapenko Investigate problem with tempest tests in https://review.openstack.org/152513
14:29:41 Anything else, dukhlov?
14:30:40 not today
14:31:54 Let's move on. miqui, what are you working on?
14:32:33 nothing specific atm, more focused on learning some cassandra basics
14:32:40 and getting my dev env to work
14:33:33 miqui: thank you for your patches on table creation validation
14:35:13 I'm still working on refactoring and extending the healthcheck request
14:36:12 oh, excuse me. Those are not your patches. Thanks to vivekd :)
14:36:34 ..no worries...
14:37:31 Does anybody have something to say?
14:37:41 oh, hi, charlesw
14:38:08 Hi guys
14:38:38 charlesw: Could you share the status of your notification system refactoring?
14:38:59 Yes
14:39:16 please
14:39:49 It's close to done. Going through the notification refactoring comments from Dima. Had some offline discussion.
14:40:25 Will send out an updated patch.
14:40:49 charlesw: anything else you are working on?
14:41:17 For now, we will use the existing ceilometer doc for events. But we will need to update the ceilometer doc.
14:41:29 Does anyone know the process?
14:41:55 I have an internal project to integrate metrics into a portal.
14:42:23 I will need to convert health_check API results into metrics to be sent to StatsD
14:42:47 Was thinking about a daemon process to call health_check periodically
14:42:56 charlesw: now we have a problem with the integration with ceilometer. We need to move to non-durable queues. I will send a patch soon
14:44:27 So the community work I have in mind next is to create such a daemon that calls the health_check/monitoring API and periodically converts the API call results to metrics
14:45:04 #action aostapenko Make a patch to make magnetodb use non-durable queues
14:45:05 If it's ok, I will create a blueprint
14:46:50 #action charlesw Create a blueprint on periodically converting healthcheck API call results to metrics
14:47:01 Hi nunosantos
14:47:33 dukhlov, do you have any thoughts about that?
14:48:26 charlesw, waiting for the bp
14:48:31 I think we are going in the same direction as openstack
14:49:07 agree
14:49:07 dukhlov, could you be more specific?
14:49:38 it looks like oslo.messaging does not support different configurations for different topics
14:50:05 so we can only make all topics durable or make all topics not durable
14:50:43 So it should be an openstack-wide option to avoid compatibility problems
14:51:05 so all topics should be durable or not durable
14:51:35 question: do different openstack projects configure rabbit in different ways?
14:51:38 in devstack topics are not durable
14:51:55 or do they seem to agree on what type of queues to use (i.e. durable vs not)?
14:52:29 Should we go to oslo.messaging instead, asking for support for configuring durability for different queues?
14:52:45 miqui, yes, different projects have different configurations
14:52:56 miqui: the ceilometer notification agent forces us to use its own configuration for the notification queue
14:53:06 k, thanks...
14:53:07 but different projects usually use the same topic
14:53:15 ah k..
14:53:21 for communication
14:54:01 so then all of this depends on how ceilometer configures its queue regardless of oslo, no?
14:54:40 ceilometer has a msg topology that all have to abide by, right?
14:56:20 ceilometer creates exchanges for all other services. And for its redeclaration the config (e.g. durability) should be the same
14:56:21 mmm, I don't fully agree with your terms, but yes
14:56:39 k..
14:57:48 Does anybody have something to add?
14:58:56 Ok, let's move to open discussion
14:58:59 #topic Open Discussion
14:59:11 am fine thanks....
15:00:53 we are out of time. Does anybody have something to share or any questions?
15:01:23 somehow I received a cancellation request for this meeting
15:01:56 just want to make sure the meeting is still good going forward
15:02:35 charlesw, thank you. I'll figure this out
15:04:18 So let's finish. Thank you, guys
15:04:26 #endmeeting
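For reference on the test_list_tables issue discussed under the cassandra 2.1.2 topic: the cleanup path sets the table status to DELETING, an async job then removes the row from table_info, and the tempest test polls describe table until the table disappears or the default 2 minute delay expires. A minimal sketch of such a polling wait, assuming hypothetical client and exception names rather than the actual MagnetoDB tempest helpers:

# Illustrative only: client, TableNotFoundError and the response shape are
# assumptions, not the real MagnetoDB tempest code.
import time

class TableNotFoundError(Exception):
    """Raised by the illustrative client when the table no longer exists."""

def wait_for_table_deletion(client, table_name, timeout=120, interval=1):
    """Poll describe_table until the table disappears or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            table = client.describe_table(table_name)
        except TableNotFoundError:
            return  # the table is gone, cleanup succeeded
        if table['table_status'] == 'DELETE_FAILED':
            raise RuntimeError('table %s failed to delete' % table_name)
        # otherwise the table still reports DELETING; keep polling
        time.sleep(interval)
    raise RuntimeError('table %s still not deleted after %s seconds'
                       % (table_name, timeout))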
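A rough sketch of the daemon charlesw proposes (see the #action items above): periodically call the healthcheck API and push the result to StatsD over its plain-text UDP protocol. The endpoint URL, port, metric names, and polling interval below are placeholders for illustration; the real mapping from healthcheck results to metrics would be defined in the blueprint.

# Illustrative sketch only: the URL, port and metric names are assumptions.
import socket
import time
import urllib2  # Python 2 stdlib, matching the OpenStack code base of that era

def send_statsd(sock, addr, metric, value, metric_type='g'):
    """Emit one metric in the plain-text StatsD format: name:value|type."""
    sock.sendto('%s:%s|%s' % (metric, value, metric_type), addr)

def poll_healthcheck(url='http://127.0.0.1:8480/healthcheck',
                     statsd_addr=('127.0.0.1', 8125),
                     interval=60):
    """Call the healthcheck endpoint forever and report overall health as a gauge."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        try:
            resp = urllib2.urlopen(url, timeout=10)
            healthy = 1 if resp.getcode() == 200 else 0
        except Exception:
            healthy = 0
        # Per-subsystem metrics (storage, queue, API) could be derived from the
        # response body once its format is agreed on.
        send_statsd(sock, statsd_addr, 'magnetodb.healthcheck.ok', healthy)
        time.sleep(interval)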