07:01:01 <stevebaker> #startmeeting heat
07:01:02 <openstack> Meeting started Wed May 27 07:01:01 2015 UTC and is due to finish in 60 minutes.  The chair is stevebaker. Information about MeetBot at http://wiki.debian.org/MeetBot.
07:01:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
07:01:05 <openstack> The meeting name has been set to 'heat'
07:01:11 <stevebaker> #topic rollcall
07:01:19 <tspatzier> hi all
07:01:19 <pas-ha> o/
07:01:20 <prazumovsky> hi all
07:01:22 <stevebaker> welcome, the rest of the world
07:01:22 <dgonzalez> hi
07:01:24 <ramishra> hi all
07:01:27 <shardy> o/
07:01:43 <Qiming> o/
07:01:45 <tlashchova_> hi
07:02:02 <stevebaker> I see some new nicks in the room tonight
07:02:14 <stevebaker> #topic Adding items to agenda
07:02:23 <stevebaker> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282015-05-27_0700_UTC.29
07:02:55 <stevebaker> this may end up being a shortish meeting, if we're all feeling a bit post-summit
07:02:55 <kairat_kushaev> o/
07:03:08 <skraynev_> o/
07:03:11 <stevebaker> anything else for the agenda?
07:03:40 <stevebaker> #topic heat reviews
07:03:51 <stevebaker> #link https://etherpad.openstack.org/p/heat-reviews
07:04:06 <prazumovsky> my only item is about the semantics of support status parameters
07:04:25 <stevebaker> prazumovsky: yep, I pushed that one down
07:04:31 <prazumovsky> OK
07:04:45 <stevebaker> So here is the page I'll be curating https://etherpad.openstack.org/p/heat-reviews
07:05:57 <stevebaker> feel free to add your own changes, or changes you think are important. I'll keep it to no more than 10 reviews so that heat-core have something manageable to focus their attention on
07:06:28 <skraynev_> stevebaker: ok
07:06:36 <stevebaker> hey, https://review.openstack.org/#/c/154977/ merged. we'll need to find something else important but not convergence
07:07:14 <shardy> Where are we with fixing the memory usage issues?
07:07:35 <stevebaker> sirushti: do you think you could refresh https://review.openstack.org/#/c/154977/ ?
07:07:41 <inc0> uhh... you mean, for example, parent stack inspection?
07:07:46 <sirushti> stevebaker, sure, will do
07:07:51 <stevebaker> shardy: that's a point, I'll add my 2 for that
07:08:49 <Qiming> has anyone signed up for the client changes?
07:09:56 <stevebaker> Qiming: which ones?
07:10:12 <Qiming> outputs as not-pretty-tables, stevebaker
07:10:27 <inc0> https://github.com/openstack/heat/blob/master/heat/engine/stack.py#L299 I blame this for memory usage
07:10:28 <stevebaker> oh, here is an important client change, it fixes the broken gate https://review.openstack.org/#/c/185834/
07:10:48 <stevebaker> inc0: my changes fix that
07:11:39 <stevebaker> Qiming: I think ryansb was going to convert his existing openstackclient work to an in-tree plugin
07:11:39 <inc0> stevebaker, thing is, this is only run in validation
07:11:45 <inc0> from what I know
07:11:51 <inc0> so before stack.store()
07:12:25 <stevebaker> inc0: and create, and update. so I think the resources will be in the db the later times it is called
07:14:00 <stevebaker> looks like the heatclient sessionclient change might be almost ready to land <- prazumovsky
07:14:36 <prazumovsky> yes, I'm just waiting for https://review.openstack.org/#/c/185834/ to merge
07:15:11 <inc0> stevebaker, https://github.com/openstack/heat/blob/master/heat/engine/resources/stack_resource.py#L226 this is the only place it's called
07:16:02 <stevebaker> inc0: which is called by _child_parsed_template, which is called by create_with_template and update_with_template
07:16:03 <inc0> and we can't put a db call there, because it will be called before stack.store()
07:16:32 <stevebaker> inc0: so it's not validate-only (and it's being called too often, but that is a separate issue)
07:17:12 <inc0> fair enough, I guess we should dig into that more, but yeah, a db call will help a lot
07:17:41 <stevebaker> inc0: I just want something that is appropriate to backport to kilo, then we should completely rethink the approach
07:18:06 <inc0> yeah... the whole logic of this function seems... overengineered, to say the least
07:18:41 <inc0> I mean, recursively counting total resources just to check that it doesn't exceed one variable in the config?
07:18:48 <stevebaker> are there any heat-specs burning hot right now?
07:18:53 <inc0> but that's for another discussion.
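[Editor's note: a minimal sketch of the recursive counting pattern inc0 is objecting to, assuming a simplified in-memory stack model; the names total_resources and max_resources_per_stack mirror the real Heat code, but this helper is illustrative, not the actual implementation.]

    # Illustrative: walk the whole tree of nested stacks in memory just to
    # compare one number against one config value.
    MAX_RESOURCES_PER_STACK = 1000  # stands in for cfg.CONF.max_resources_per_stack

    def total_resources(stack):
        """Recursively count resources in a stack and all its nested stacks."""
        count = 0
        for resource in stack.resources:
            count += 1
            nested = getattr(resource, 'nested_stack', None)
            if nested is not None:
                # every nested stack must be loaded just to be counted
                count += total_resources(nested)
        return count

    def check_resource_limit(root_stack):
        if total_resources(root_stack) > MAX_RESOURCES_PER_STACK:
            raise ValueError('Maximum resources per stack exceeded.')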
07:19:10 <inc0> I'll be cooking up a spec for a random default parameter
07:19:26 <inc0> I guess it's not burning hot, but it touches HOT ;)
07:20:26 <stevebaker> ok, I'll update the heat-specs section later
07:20:30 <stevebaker> moving on
07:20:47 <stevebaker> #topic parameters for SupportStatus for displaying the current status and since when an object has been supported
07:20:52 <prazumovsky> OK
07:21:19 <prazumovsky> Now we have one parameter, version, which displays the version of the current status
07:21:26 <stevebaker> #link http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/support.py
07:22:12 <prazumovsky> I want to add a new parameter, since, which will mean since which release this feature has been available
07:22:43 <inc0> ... or when it was deprecated?:)
07:22:51 <stevebaker> prazumovsky: like a history of support statuses?
07:22:56 <prazumovsky> yeah
07:23:10 <prazumovsky> but 'since' is more suitable for the current status, e.g. status.DEPRECATED since ... and the version of the object is ...
07:23:17 <ramishra> can we not provide multiple support statuses, probably a list, e.g. SUPPORTED, version; DEPRECATED, version?
07:23:38 <Qiming> +1 to 'since'
07:24:17 <prazumovsky> So
07:24:25 <prazumovsky> since for current statuses
07:24:33 <stevebaker> prazumovsky: to me, 'version' is what since means
07:25:06 <inc0> can we bump version without changing status?
07:25:26 <prazumovsky> stevebaker: you mean version for current statuses and since for displaying 'available since'?
07:25:35 <stevebaker> inc0: why would we do that?
07:25:55 <inc0> I don't know, I'm asking - if we can't then I agree - version == since
07:26:37 <inc0> thing is, what does version mean anyway if we change it *only* on status change?
07:26:40 <prazumovsky> inc0: I think we can bump the version only if the status changed; otherwise what for?
07:26:49 <stevebaker> afaik, the version never changes after it is set, so it seems to already be behaving like a 'since'
07:27:05 <inc0> prazumovsky, then every version bump is essentially a "since", right?
07:28:13 <stevebaker> prazumovsky: since aside, it sounds like you would like a list of status objects to communicate the resource's history, which seems useful
07:28:25 <ramishra> +1
07:28:54 <Qiming> SUPPORTED since 2014.1, DEPRECATED since 2015.1
07:29:06 <prazumovsky> stevebaker: that sounds reasonable
07:29:41 <stevebaker> if we didn't want to make every status attribute accept a list of SupportStatus, we could always add a SupportStatus.previous_status, and they just accumulate like a singly linked list
07:29:51 <stevebaker> with the newest status at the root
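[Editor's note: a minimal sketch of the previous_status idea just described, assuming a simplified SupportStatus; the real class lives in heat/engine/support.py and differs in detail.]

    class SupportStatus(object):
        """Each status keeps a link to the status it replaced, forming a
        singly linked list with the newest status at the root."""
        def __init__(self, status='SUPPORTED', version=None, previous_status=None):
            self.status = status
            self.version = version  # set once, so it already behaves like 'since'
            self.previous_status = previous_status

        def history(self):
            node, entries = self, []
            while node is not None:
                entries.append('%s since %s' % (node.status, node.version))
                node = node.previous_status
            return entries

    # Qiming's example from the discussion:
    status = SupportStatus('DEPRECATED', '2015.1',
                           previous_status=SupportStatus('SUPPORTED', '2014.1'))
    print(status.history())  # ['DEPRECATED since 2015.1', 'SUPPORTED since 2014.1']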
07:30:44 <prazumovsky> stevebaker: that's what I like
07:31:12 <stevebaker> prazumovsky: sure, +1
07:31:44 <prazumovsky> OK, now it's clear for me, what to do, thanks:)
07:32:01 <stevebaker> prazumovsky: this is probably worthy of a short spec
07:32:17 <inc0> please add a use case there as well
07:32:29 <prazumovsky> there is a spec, https://review.openstack.org/#/c/153235/, which could contain this
07:33:26 <prazumovsky> because these changes relate to the deprecation improvements
07:33:34 <stevebaker> deprecating improvements? I thought we wanted to do the opposite ;)
07:33:58 <stevebaker> (dad joke)
07:34:20 <prazumovsky> hah
07:34:29 <stevebaker> prazumovsky: i've put that spec on the heat-reviews page
07:34:39 <inc0> but in the spec it's only done with regard to docs...
07:34:48 <prazumovsky> stevebaker: OK
07:34:49 <stevebaker> #topic Open discussion
07:35:07 <stevebaker> inc0: I think SupportStatus is only used for docs generation?
07:35:42 <Qiming> stevebaker, it would be good to show support_status in resource-type-list too
07:35:43 <inc0> maybe...I honestly don't know:)
07:36:15 <prazumovsky> Qiming: I'm working on it
07:36:19 <stevebaker> Qiming: yeah, +1 if we don't already. there may be a spec for filtering by status too
07:36:59 <Qiming> prazumovsky, https://review.openstack.org/#/c/147761/
07:37:22 <inc0> one question from me - regarding total_resources
07:37:38 <prazumovsky> Qiming: thank you, I didn't know about it
07:37:53 <inc0> can we think of a way to count these without recursive counts? (I'm talking about the not-yet-in-database case)
07:38:21 <inc0> or whether or not we *actually* need cfg.CONF.max_resources_per_stack with regard to nested stacks?
07:38:39 <inc0> I mean, once we decoupled these in kilo... what's the point of this config?
07:38:59 <stevebaker> inc0: it's harder now that nested stacks are created via rpc calls. In the old days all stacks were in memory anyway
07:39:22 <inc0> I know, but that's why this config existed in the first place, right?
07:39:34 <inc0> to limit memory consumption if a user wants a huge stack
07:39:54 <inc0> now we've decoupled nested stacks, so it's only a problem with a huge single stack
07:40:04 <inc0> and those are way easier to count...
07:40:16 <shardy> inc0: well, or a huge tree of nested stacks on a single heat-engine..
07:40:30 <stevebaker> inc0: you have a point. since they're really separate stacks now we may not need to limit by the whole stack tree, and I think we already have a total stack quota
07:40:56 <inc0> does anyone use a single heat engine? and should we care about such people if they're asking for problems anyway?
07:41:17 <shardy> but yeah, maybe just a limit on stacks, and resources per individual stack is enough, then rely on underlying services to enforce their own quotas
07:41:30 <shardy> inc0: well, TripleO does in many cases..
07:41:38 <inc0> fair enough
07:41:52 <inc0> but in tripleo total_resources is irrelevant anyway right?
07:41:56 <shardy> which is the use-case which exposed the excessive memory consumption problems recently
07:41:59 <inc0> validation I mean
07:42:21 <shardy> inc0: yeah, keep creating resources until your seed OOM kills something :\
07:42:24 <inc0> ironically, validation takes most of the memory, not the actual execution
07:42:36 <shardy> But you're right, maybe we should make enforcing the limits optional
07:42:42 <stevebaker> the irony was not lost on us
07:42:46 <shardy> and turn them off for the tripleo seed
07:43:22 <inc0> my question is - do we need that kind of validation at all? ;)
07:43:29 <therve> inc0, GOOD question
07:43:48 <stevebaker> shardy: that may not be backportable though, I may prefer interpreting max_resources_per_stack to not include nested stack resources
07:43:53 <inc0> it was relevant in a tenant cloud without decoupled nested stacks
07:44:05 <therve> Frankly I feel that validation mostly exists because services provide poor error feedback
07:44:46 <shardy> stevebaker: I was thinking of a new value for an existing config option, e.g. max_resources_per_stack = False or something
07:44:57 <shardy> that probably would be backportable, as it's not adding a new option
07:45:20 <stevebaker> shardy: but if the default remains the same then it still remains broken for everyone else
07:45:40 <inc0> we can revisit whether it's required at all, and for now keep it optional
07:45:48 <stevebaker> shardy: and it probably has to remain the same for a backport
07:45:50 <inc0> and later on possibly deprecate the whole mechanism?
07:46:11 <shardy> stevebaker: true, but not everyone is creating trees 50 stacks deep
07:46:22 <shardy> and the option would be available to everyone
07:46:40 <shardy> stevebaker: if there's a way to fix the validation for the default case, obviously we should do that though :)
07:47:25 <stevebaker> thankfully the root_stack_id change which has landed has merits on its own
07:48:44 <inc0> soo... a spec for allowing "false" as a possible value for max_resources_per_stack?
07:48:55 <inc0> or just point out that 0 == unlimited?
07:49:04 <inc0> or -1 ;)
07:49:04 <stevebaker> shardy, inc0, therve, etc: could you put your thoughts in the review and poke the USians when they're up? I don't mind throwing away that work
07:49:20 <shardy> stevebaker: sure, will do
07:49:32 <therve> Yep
07:49:42 <inc0> ofc
07:50:17 <stevebaker> should we finish off now? any other business?
07:51:02 <stevebaker> #endmeeting