04:00:46 <samP> #startmeeting masakari
04:00:47 <openstack> Meeting started Tue Jan 31 04:00:46 2017 UTC and is due to finish in 60 minutes.  The chair is samP. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:00:51 <openstack> The meeting name has been set to 'masakari'
04:00:55 <takashi> o/
04:01:03 <samP> takashi: hi
04:01:10 <abhishekk> o/
04:01:18 <samP> since there are no critical bugs, let's move to discussion..
04:01:34 <samP> #topic Discussion points
04:01:57 <tpatil> There is one issue reported in masakari
04:01:57 <samP> 1st one, Who will set ha_enabled?
04:02:02 <tpatil> #link : https://bugs.launchpad.net/masakari/+bug/1659495
04:02:02 <openstack> Launchpad bug 1659495 in masakari "taskflow version is not compatible with latest engine code" [Undecided,New]
04:02:15 <samP> tpatil: sorry,
04:02:47 <tpatil> Dinesh will fix this issue
04:02:59 <Dinesh_Bhor> yes
04:03:20 <samP> This is taskflow version issue, right?
04:03:27 <tpatil> samP: correct
04:03:38 <takashi> Can we just bump up required taskflow version?
04:04:11 <tpatil> yes, that's what we will need to do to fix this issue
04:04:32 <takashi> I'm just wondering whether this issue happens because requirements.txt is not synced with global-requirements.txt
04:04:35 <samP> what is the global req version for taskflow?
04:04:42 <takashi> our requirements.txt in masakari
04:05:10 <Dinesh_Bhor> takashi: makes sense to me, masakari requirements are not getting bumped by bot jobs
04:05:11 <tpatil> taskflow>=2.7.0
04:05:17 <abhishekk> yes, but why was it not caught by jenkins? IMO we should have a test case to check fail formatters
04:05:37 <samP> takashi: thanks, upper-constraints is set to taskflow===2.9.0
04:05:58 <takashi> fyi: https://github.com/openstack/requirements/blob/master/global-requirements.txt#L280
04:06:22 <takashi> Maybe we should manually sync requirements.txt before we release Ocata...
04:06:25 <takashi> at least
04:06:28 <takashi> and at worst
04:06:36 <samP> takashi: agree
04:07:55 <samP> after the Ocata release, we may use the bot to do this.
04:08:13 <Dinesh_Bhor> samP:ok
04:08:33 <tpatil> Dinesh_Bhor: Please bump the taskflow version to >=2.7.0 in requirements.txt and upload the patch for review
04:08:49 <Dinesh_Bhor> tpatil: yes
04:09:15 <tpatil> Dinesh_Bhor: Thanks
04:09:21 <samP> Dinesh_Bhor: tpatil thanks
04:09:51 <samP> #action Dinesh_Bhor Fix https://bugs.launchpad.net/masakari/+bug/1659495
04:09:51 <openstack> Launchpad bug 1659495 in masakari "taskflow version is not compatible with latest engine code" [Undecided,New]
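For reference, a minimal sketch of the requirements.txt change being discussed, assuming the global-requirements entry at the time was taskflow>=2.7.0; the exact lower bound and license comment should be confirmed against openstack/requirements before the Ocata release:

    # masakari requirements.txt -- bring the taskflow line in line with
    # global-requirements
    taskflow>=2.7.0  # Apache-2.0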
04:10:32 <samP> Ok then, any other bugs to discuss?
04:12:00 <tpatil> samP: No
04:12:15 <samP> tpatil: thanks
04:12:29 <samP> lets move to the discussion
04:12:36 <takashi> yes :-)
04:12:56 <samP> 1st one, Who will set the ha_enabled tag?
04:13:14 <abhishekk> I have added that in agenda
04:13:22 <samP> abhishekk: thanks
04:13:43 <samP> In the previous masakari, only the operator set this tag on each VM
04:14:26 <abhishekk> so how do we restrict a normal user from setting this flag?
04:14:54 <abhishekk> in glance there is a property protection which we can set using policy.json
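For context, a rough sketch of the glance mechanism abhishekk refers to: glance reads a property-protections file (enabled via property_protection_file in glance-api.conf) rather than policy.json directly. The HA_Enabled property name and the roles below are illustrative only:

    # glance-api.conf
    [DEFAULT]
    property_protection_file = /etc/glance/property-protections.conf
    property_protection_rule_format = roles

    # /etc/glance/property-protections.conf (illustrative)
    [HA_Enabled]
    create = admin
    update = admin
    delete = admin
    read = admin,member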
04:14:56 <tpatil> samP: HA_Enabled will be set as a tag or metadata?
04:15:15 <samP> tpatil: sorry, in metadata
04:15:29 <tpatil> samP: Ok
04:16:26 <samP> abhishekk: I have to check, but I think we did not expose the metadata API to end users.
04:16:42 <samP> abhishekk: so, end users cannot set metadata on a server
04:17:13 <abhishekk> ok, I need to check about that
04:17:42 <Dinesh_Bhor> It can be set at boot time as well, and a normal user can do that
04:17:49 <samP> anyway, in a normal OpenStack env, an end user can add the metadata
04:18:04 <samP> Dinesh_Bhor: correct
04:18:46 <samP> I am not sure nova policy supports this kind of restriction on metadata
04:19:51 <samP> as abhishekk said, I remember we set a similar setting for glance
04:20:05 <abhishekk> https://github.com/openstack/nova/blob/master/nova/policies/server_metadata.py#L31
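For illustration, restricting the rules defined in that file to admins would look roughly like this in nova's policy.json; as noted just below, these rules cover the server metadata API calls, not metadata passed at boot:

    {
        "os_compute_api:server-metadata:create": "rule:admin_api",
        "os_compute_api:server-metadata:update": "rule:admin_api",
        "os_compute_api:server-metadata:delete": "rule:admin_api"
    }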
04:21:03 <samP> abhishekk: thanks, we can control it.
04:21:14 <abhishekk> but IMO these policies are for the metadata API
04:21:49 <samP> abhishekk: seems you are right.
04:22:13 <abhishekk> we can set or remove metadata using meta set/delete, need to check whether this will work for boot as well
04:23:36 <samP> abhishekk: Do you mean, set metadata at boot?
04:23:58 <abhishekk> while using the boot command we can pass --metadata key=value
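For reference, a minimal example of that boot-time path; the python-novaclient flag is spelled --meta, and the flavor, image, and server names here are illustrative:

    # illustrative only -- any user who can boot a server can attach metadata this way
    nova boot --flavor m1.small --image cirros-0.3.5 \
        --meta HA_Enabled=True test-ha-vm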
04:24:14 <samP> abhishekk: yep.. got it
04:26:01 <samP> so, is this an implementation related issue or operation related issue?
04:26:09 <abhishekk> nope, as Dinesh says, a normal user can set this while booting the instance
04:26:30 <abhishekk> IMO operation related issue
04:26:59 <tpatil> samP: since you haven't exposed the metadata API to normal users, there will be no issue, but for other operators there is an issue
04:27:10 <samP> abhishekk: thanks
04:27:21 <samP> tpatil: correct
04:27:50 <tpatil> samP: Maybe we can add support in Nova to restrict adding certain metadata keys to an instance using policy
04:28:02 <samP> IMO, we cannot fix this from masakari side, need to do some work in nova
04:28:05 <samP> tpatil: yes
04:28:19 <takashi> tpatil: makes sense
04:29:04 <samP> tpatil: we did a somewhat similar thing with "license metadata" in nova..
04:29:38 <abhishekk> samP: in glance
04:29:50 <samP> I think abhishekk mentioned part of it, in glance
04:29:54 <samP> abhishekk: yes
04:30:59 <tpatil> samP: similar to glance, we can add this support in Nova as abhishekk has pointed out
04:31:21 <samP> If we propose this to nova, it will be in Pike (at best) right?
04:31:30 <tpatil> samP: correct
04:31:46 <abhishekk> samP: yes
04:33:38 <takashi> samP: yes. IMO we should propose the spec as soon as the nova spec repo for Pike is opened
04:33:45 <samP> got it. what would be the best way to approach?
04:34:02 <samP> I can discuss this in PTG.
04:34:49 <samP> but first I think we need some pre-discussion with nova
04:35:28 <samP> takashi: sorry, your comment came late..
04:35:56 <takashi> samP: np. as you say, we need some discussion in nova project
04:36:57 <samP> takashi: OK then, let's propose a Pike spec.
04:38:10 <takashi> samP: yes
04:38:17 <tpatil> samP: I have noted down this point, we will submit a spec in Nova to address this use case.
04:38:34 <samP> tpatil: thanks
04:38:36 <takashi> samP: maybe we can discuss our use case with the nova team, and confirm this is the best solution
04:38:40 <takashi> tpatil: yes, thanks!
04:39:36 <samP> Have they set a specific date for Pike spec start?
04:39:56 <takashi> #link https://releases.openstack.org/pike/schedule.html
04:40:15 <takashi> so many TBDs...
04:40:30 <samP> takashi: thanks, seems TBD
04:40:54 <takashi> samP: AFAIK, nova spec freeze happens at the same time as *-1 milestone
04:41:28 <takashi> samP: so we should get the spec approved before Pike-1 milestone
04:41:28 <samP> tpatil: may I assign this task to you for now?
04:41:41 <tpatil> samP: Yes
04:41:49 <samP> takashi: got it
04:42:46 <samP> #action tpatil Propose Nova spec for metadata control policy
04:42:54 <samP> tpatil: thanks
04:43:16 <samP> abhishekk: thanks for adding this point
04:43:39 <samP> shall we move to next topic?
04:43:45 <abhishekk> samP: no problem
04:44:00 <samP> #link https://review.openstack.org/#/c/423072/
04:44:50 <samP> abhishekk: thanks for the nice idea, but I have some operation-related issues (pls see my comments on gerrit)
04:45:13 <abhishekk> samP: I have seen your comments
04:46:43 <samP> abhishekk: Those are just my comments, but others may have different opinions on this
04:46:45 <abhishekk> IMO it makes sense to balance the pool of reserved hosts; a failed node can be reassigned as a reserved host
04:51:21 <samP> abhishekk: Are you in favour of setting reserved_host=False once we evacuate the VMs, or of waiting for other failures?
04:52:21 <abhishekk> samP: yes, because once we enable the compute service on a reserved host we cannot restrict nova from launching instances on that host
04:52:24 <tpatil> abhishekk: We should set reserved=False immediately after all instances are evacuated from a failed compute node.
04:52:45 <abhishekk> tushar san makes sense
04:52:52 <samP> abhishekk: correct
04:53:51 <samP> takashi: agree
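To make the agreed behaviour concrete, a minimal sketch of clearing the flag once evacuation completes. This is illustrative only, not the masakari implementation; it assumes a keystoneauth-style session object and the masakari hosts API path as recalled here, which should be checked against the API reference:

    # Illustrative sketch: once all instances from the failed compute node have
    # been evacuated onto the reserved host, clear its reserved flag so it is
    # treated as a normal compute host from then on.
    def release_reserved_host(session, segment_id, host_id):
        # mirrors PUT /v1/segments/{segment_id}/hosts/{host_id}
        session.put(
            '/v1/segments/{}/hosts/{}'.format(segment_id, host_id),
            json={'host': {'reserved': False}},
        )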
04:55:17 <abhishekk> I think we have 6 minutes left
04:55:21 <samP> abhishekk: could you please update the spec with this info?
04:55:34 <samP> abhishekk: yes, just 5 mins left
04:55:48 <abhishekk> samP: yes
04:55:57 <samP> abhishekk: thanks
04:56:16 <samP> #topic AOB
04:56:25 <abhishekk> set reserved_host to false as soon as all instances are evacuated from the failed node, right?
04:56:44 <samP> abhishekk: correct
04:57:17 <abhishekk> samP: ok, it's already there in the specs, I just need to rephrase it
04:58:06 <samP> abhishekk: yes.. sorry it is there.. my bad
04:58:17 <Dinesh_Bhor> May I ask a question related to the new requirement to add the reserved_host to the same aggregate that the failed_host is in?
04:58:43 <samP> Dinesh_Bhor: sure
04:58:54 <Dinesh_Bhor> So my question is: a failed_host can be associated with multiple aggregates, so to which aggregate should the reserved_host be added?
04:59:35 <samP> Dinesh_Bhor: all the aggregates of the failed host
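A minimal sketch of what that could look like with python-novaclient, assuming an already-authenticated Client instance; the host names are placeholders:

    # Illustrative only: add the reserved host to every aggregate the failed
    # host belongs to, so aggregate-based scheduling constraints still hold.
    def mirror_failed_host_aggregates(nova, failed_host, reserved_host):
        for aggregate in nova.aggregates.list():
            if failed_host in aggregate.hosts:
                nova.aggregates.add_host(aggregate, reserved_host)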
04:59:49 <takashi> samP, Dinesh_Bhor: Can we move to #openstack-masakari?
04:59:57 <tpatil> Dinesh_Bhor: in nova, there is a unique constraint applied on the host, aggregate uuid, and deleted columns
04:59:58 <samP> sure
05:00:00 <takashi> because we've run out of meeting time...
05:00:01 <Dinesh_Bhor> samP: ok
05:00:07 <samP> takashi: sure
05:00:13 <tpatil> Dinesh_Bhor: so this situation will never arise
05:00:27 <samP> OK then, let's move to #openstack-masakari for further discussion..
05:00:36 <samP> Let's end this meeting...
05:00:42 <samP> thank you all
05:00:59 <abhishekk> thank you
05:01:08 <Dinesh_Bhor> thanks
05:01:15 <samP> #endmeeting