04:02:20 #startmeeting masakari
04:02:21 Meeting started Tue Dec 13 04:02:20 2016 UTC and is due to finish in 60 minutes. The chair is samP. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:02:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:02:24 The meeting name has been set to 'masakari'
04:02:30 Hi all
04:03:02 Thank you all for attending our first masakari IRC meeting
04:03:52 First, I would like a quick intro from everyone
04:04:03 I think we should add a weekly agenda to the masakari wiki page
04:04:12 #link https://wiki.openstack.org/wiki/Masakari
04:04:32 #link https://wiki.openstack.org/wiki/Meetings/Masakari#Agenda_for_next_meeting
04:04:37 can we use the rollcall?
04:05:23 tpatil: sure, I will do that
04:05:43 #action samP add weekly agenda to masakari wiki page
04:05:50 samP: I have added some points for today's discussion
04:06:00 abhishekk: thank you
04:08:22 shall we start with the discussion?
04:08:30 sure
04:09:15 Let's start the discussion as per the agenda
04:09:27 we don't have open bugs, right? can we jump straight to the next item?
04:09:32 yes
04:09:36 sure
04:10:03 Only one issue is open
04:10:26 #topic Discussion of new features
04:10:34 in fact there are 3 open issues, but none is critical
04:11:20 tpatil: OK, let's do it after this discussion
04:11:35 samP: ok
04:11:36 First, the evacuate_all config option
04:12:38 currently we don't evacuate VMs without the HA flag; this option enables/disables the evacuation of all VMs
04:12:49 yes, we are using this option for the host_failure flow
04:13:18 if this option is True then we should evacuate all the instances, else only ha_enabled instances should be evacuated
04:13:34 #link https://review.openstack.org/#/c/407538/
04:13:55 IMO we should rename this option so that we can use it for the instance_failure flow as well
04:14:38 abhishekk: agree, we have kind of the same issue there
04:15:02 as of now in instance_failure we are only processing HA-Enabled instances
04:15:14 how about rescue_all?
04:15:59 rescue is the name of another nova API.
04:16:12 right, we can decide on the config name in an internal discussion
04:16:31 rkmrhj: ah, thank you
04:17:48 In the future, we are going to implement customizable rescue patterns,
04:19:05 I think we need to define separate options for evacuate and instance_failure
04:19:27 samP: Let's add a new blueprint to describe the new feature
04:19:52 such as evacuate for all, but instance_failure only for HA-enabled VMs
04:19:57 also, we should add a lite spec to describe how we are going to implement it
04:20:10 tpatil: sure
04:20:42 do we need a spec repo? or just document it somewhere?
04:20:54 a repo is better
04:21:39 OK, going forward a spec repo will be more useful. I'll try to get one
04:21:51 #action create spec repo for masakari
04:23:09 are we going to create a BP for "evacuate_all config option"?
04:23:41 I think a blueprint should be enough for this change, as design-wise it's not a big change
04:24:02 tpatil: agree
04:25:00 can we move to the next item?
04:25:11 any volunteer for that BP?
04:25:46 I will do that
04:26:04 #action abhishekk create evacuate_all config option BP
04:26:09 abhishekk: thanks
04:26:20 OK, let's go to the next item
04:26:25 samP: no problem
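To make the evacuate_all discussion above concrete, here is a minimal Python sketch of the toggle as applied to the host_failure flow. The option name, the HA_Enabled metadata key, and the helper function are illustrative assumptions; the meeting explicitly left the final name open.

    # Hedged sketch, not Masakari's actual code: the option name and the
    # 'HA_Enabled' metadata key are assumptions for illustration.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('evacuate_all_instances',
                    default=False,
                    help='If True, evacuate every instance on a failed host; '
                         'if False, evacuate only HA-enabled instances.'),
    ])

    def instances_to_evacuate(instances):
        """Select which instances the host_failure flow should evacuate."""
        if CONF.evacuate_all_instances:
            return list(instances)
        # Assumes HA opt-in is flagged via instance metadata.
        return [inst for inst in instances
                if inst.metadata.get('HA_Enabled', '').lower() == 'true']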
04:26:46 item2: can we have one periodic task?
04:26:54 Ok, I will explain this
04:26:56 Earlier we were planning to have two periodic tasks,
04:27:06 process_error_notifications, for processing notifications which are in the error state
04:27:13 process_queued_notifications, for processing notifications which have been in the new state for a long time because they were ignored/missed by the messaging server.
04:27:42 but we can club these into one, as in both tasks we are going to execute the workflow again
04:27:54 this way we can eliminate the duplicate code
04:29:05 the question is whether we can handle both in a single periodic task
04:29:56 the only difference is when the notification status is new: if the periodic task fails to execute the workflow, should the status be set to "failed" or "error"?
04:30:45 abhishekk: can you please explain the status transitions that take place while processing notifications
04:30:53 ok
04:31:04 for process_error_notifications:
04:31:14 Error flow: error >> running >> error >> failed
04:31:22 Success flow: error >> running >> finished
04:31:33 for process_queued_notifications:
04:31:40 Error flow: new >> running >> error
04:31:47 Success flow: new >> running >> finished
04:32:35 In the case of the second periodic task, if we set the status to error then that notification will again be picked up for execution by process_error_notifications
04:32:51 so we can club these and have a common flow like,
04:32:59 Error flow: new/error >> running >> error >> failed
04:33:06 Success flow: new/error >> running >> finished
04:33:31 Is there any flag to stop it at some point?
04:34:07 abhishekk: Let's add a lite spec to explain all these possible cases
04:34:08 no, these periodic tasks will run at a regular interval
04:34:38 ok
04:35:01 OK, let's discuss this further on the spec
04:35:28 abhishekk: can I assign this spec to you?
04:35:45 samP: yes
04:36:20 #action abhishekk create spec for merging periodic tasks
04:36:25 abhishekk: thank you
04:36:34 shall we move to the next item then?
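A runnable sketch of the merged periodic task and the common status flow laid out above; the Notification type, the retry selection, and the execute_workflow callable are stand-ins, not Masakari's actual code.

    # Stand-in sketch of the single merged periodic task discussed above.
    from dataclasses import dataclass

    @dataclass
    class Notification:
        uuid: str
        status: str  # new | running | error | finished | failed

    def process_unfinished_notifications(notifications, execute_workflow):
        """One task covers both 'new' and 'error' notifications."""
        for n in [n for n in notifications if n.status in ('new', 'error')]:
            previous, n.status = n.status, 'running'
            try:
                execute_workflow(n)
                n.status = 'finished'  # new/error >> running >> finished
            except Exception:
                # First failure: new >> running >> error (retried later).
                # Repeat failure: error >> running >> failed (terminal).
                n.status = 'failed' if previous == 'error' else 'error'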
04:37:02 item3: configurable workflow
04:37:37 this is a new requirement
04:37:45 is this configurable recovery patterns or something else?
04:37:52 yes
04:37:58 abhishekk: ok
04:38:16 samP: configurable recovery patterns
04:38:34 tpatil: thanks
04:38:43 I think Kajinami explained to you the problems we are having with the current design
04:39:26 tpatil: actually, not yet. we are going to meet tomorrow
04:39:44 samP: Ok
04:40:14 After that discussion, let's finalize the new requirement before we go ahead and add a new blueprint for it
04:41:12 tpatil: sure, I will discuss this on the ML with kajinami
04:41:34 tpatil: we can have a more detailed discussion at the next meeting
04:41:54 samP: Sure
04:42:28 samP: we have one more item for discussion
04:42:40 abhishekk: sure
04:42:44 Dinesh_Bhor will explain it to you
04:42:58 ok, I have a question: should the workflow be executed synchronously or asynchronously?
04:43:31 a specific workflow or all of them?
04:44:31 Particularly host_failure
04:44:31 The problem is we want to mark the used reserved_hosts as reserved=False after the execution of the host failure workflow
04:45:17 For this we are passing the reserved_host_list dictionary to the workflow for further execution.
04:45:21 ah.. got it
04:45:33 When a reserved_host is taken for evacuation, it is set to reserved_host['reserved'] = False. As the dictionary is mutable, we get the updated dictionary after the execution of the workflow.
04:45:59 After the execution of the whole workflow we loop over the reserved_host_list in manager.py, and if a reserved_host is marked as false we fetch the related object and mark it as reserved=False.
04:46:25 The above solution is based on the assumption that we are executing the workflow synchronously.
04:47:01 In the future, if someone wants to contribute another driver, say Mistral, then the workflow might execute asynchronously and you might not get the results of the workflow execution in the engine, right?
04:47:33 tpatil: yes, correct
04:47:33 in which case you will call db apis to update the reserved flag to False
04:49:38 the currently supported driver runs on the local machine where the engine is running, but in the future anyone can contribute a new driver and we don't know whether it will return results or not.
04:49:39 as tpatil said, if someone brings another driver to call this workflow, we cannot do this synchronously
04:50:31 so the main question is how to set the reserved flag to False after the instances are evacuated from the failover segment.
04:51:55 let's discuss the design offline, but one thing is sure: we cannot assume the workflow will return results
04:52:04 samP: Do you agree?
04:52:38 tpatil: yes, I am thinking about some kind of locking or intermediate state for it
04:53:19 tpatil: agree, shall we raise a spec for this?
04:53:29 samP: yes
04:53:38 tpatil: thanks
04:53:54 Dinesh_Bhor: may I assign this spec to you?
04:54:04 samP: yes
04:55:01 #action Dinesh_Bhor spec for synchronous/asynchronous workflows
04:56:06 any other discussion topics? if not, let's move to #any_other_topics
04:56:45 #topic AOB
04:57:20 I will update the masakari wiki with our release schedule.
04:58:30 In our initial plan, we had milestone b1 on 12/9
04:59:33 since we have new topics to discuss, I would like to extend this to 12/16
05:00:35 Sure.
05:00:44 ok then.
05:00:51 I think we should use the LP milestone feature to figure out the details of each milestone
05:01:07 tpatil: sure
05:01:15 thank you all
05:01:29 samP: Thank you
05:01:29 OK then, it's almost time
05:01:40 yes, thanks all
05:02:00 please use the ML, openstack-dev [masakari], for further discussions
05:02:10 Thank you all
05:02:13 Sure
05:02:15 bye
05:02:16 #endmeeting
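For reference, a sketch of the reserved_host bookkeeping discussed in the synchronous/asynchronous item above. Both callables and the dict layout are assumptions for illustration, not Masakari's real API; it shows why the read-back only works when the workflow driver runs synchronously.

    # Hedged sketch; execute_workflow and save_host are hypothetical
    # stand-ins injected by the caller.
    def handle_host_failure(failed_host, reserved_host_list,
                            execute_workflow, save_host):
        # The workflow mutates the dicts in place: when it picks a reserved
        # host for evacuation it sets reserved['reserved'] = False.
        execute_workflow(failed_host, reserved_host_list)

        # This read-back sees the mutations only if the call above ran
        # synchronously in-process; an asynchronous driver (e.g. Mistral)
        # would return before mutating, so the flag would have to be
        # updated through the DB API from within the workflow instead.
        for reserved in reserved_host_list:
            if not reserved['reserved']:
                save_host(reserved['id'], reserved=False)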