20:00:14 <asalkeld> #startmeeting heat
20:00:15 <openstack> Meeting started Wed Feb 25 20:00:14 2015 UTC and is due to finish in 60 minutes.  The chair is asalkeld. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:19 <openstack> The meeting name has been set to 'heat'
20:00:27 <prazumovsky> hello
20:00:32 <asalkeld> #topic rollcall
20:00:33 <stevebaker> hi
20:00:42 <jpeeler> hello
20:00:44 <dgonzalez> hi
20:00:48 <shardy> o/
20:00:50 <miguelgrinberg> hello
20:00:52 <inc0> \o
20:01:04 <pas-ha> o/
20:01:06 <skraynev> hi all
20:01:38 <spzala> hi
20:01:47 <asalkeld> #topic Adding items to the agenda
20:02:14 <asalkeld> any new topics?
20:03:17 <asalkeld> ok, moving on
20:03:31 <asalkeld> there is always open discussion
20:03:33 <asalkeld> #topic Vancouver Design Summit space needs
20:04:01 <asalkeld> ttx needs to figure out what space we need at summit
20:04:04 <zaneb> o/
20:04:12 <asalkeld> #link https://docs.google.com/spreadsheets/d/14pryalH3rVVQGHdyE3QeeZ3eDrs_1KfqfykxSTvrfyI/edit?usp=sharing
20:04:16 <skraynev> zaneb: morning
20:04:19 <asalkeld> can you guys view this
20:04:30 <stevebaker> I can
20:04:32 <ryansb> yup
20:04:34 <zaneb> asalkeld: yep
20:05:08 <asalkeld> ok so there are some rules
20:05:12 <asalkeld> You shouldn't ask for more than 18 sessions total (fishbowl + work)
20:05:16 <zaneb> remind me what a work session is?
20:05:29 * zaneb knew this at one point
20:05:43 <stevebaker> like a fishbowl in a smaller bowl?
20:05:43 <asalkeld> work session is small, semi closed door
20:05:55 <zaneb> ok
20:06:03 <asalkeld> less spectators
20:06:23 <asalkeld> spectators good for usability stuff
20:06:31 <asalkeld> assuming they use heat
20:06:44 <skraynev> agree
20:06:47 <inc0> uhh....when it comes to usability, everyone has a different opinion
20:06:59 <stevebaker> I guess topics that affect operators and users would be good for fishbowl
20:07:13 <shardy> inc0: true, but operator/end-user feedback is good vs all developers :)
20:07:30 <inc0> stevebaker, +1 , something about HA of heat would be good
20:07:33 <asalkeld> so I put down 5 fishbowl and 10 work + 1 friday
20:07:48 <asalkeld> any tweaking ?
20:07:57 <asalkeld> suggestions ??
20:08:03 <shardy> sounds good to me
20:08:06 <stevebaker> that sounds fine
20:08:08 <zaneb> sounds pretty good
20:08:14 <asalkeld> ok cool
20:08:18 <shardy> we had an etherpad with session suggestions didn't we?
20:08:23 * shardy looks for it
20:08:24 <zaneb> hopefully the fishbowl will reduce the need for 2 sessions on Friday
20:08:34 <zaneb> as long as we get Friday AM
20:08:42 <inc0> shardy, https://etherpad.openstack.org/p/kilo-heat-summit-topics
20:08:44 <asalkeld> +1 to that
20:08:46 <zaneb> because everyone always leaves at lunch time for some reason
20:08:47 <stevebaker> the gibbering wreck session
20:08:53 <inc0> ahh wrong release
20:09:08 <inc0> https://etherpad.openstack.org/p/liberty-heat-sessions
20:09:19 <zaneb> sorry s/fishbowl/work sessions/
20:09:36 <shardy> #link https://etherpad.openstack.org/p/liberty-heat-sessions
20:09:38 <shardy> inc0: thanks
20:10:04 <asalkeld> need to start adding ideas to that
20:10:13 <inc0> shardy, by the way, maybe you'd like to say word or two about decouple nested?
20:10:14 <asalkeld> we do have a heap of sessions to fill
20:10:40 <asalkeld> inc0: in this topic?
20:11:05 <skraynev> asalkeld: also, I think we should re-read the heat usability-improvements page and remove some completed stuff
20:11:05 <inc0> yeah, seems like a nice leap in terms of heat HA
20:11:19 <shardy> inc0: we can chat about that in open discussion, by all means
20:11:27 <asalkeld> i'll move on
20:11:33 <asalkeld> #topic Heat Mission statement:
20:11:49 <asalkeld> this is just a heads up, that someone posted a new one
20:11:52 <shardy> Oh no, not this again! ;)
20:11:54 <asalkeld> as ours is missing
20:12:00 <stevebaker> diediedie
20:12:07 <asalkeld> #link https://review.openstack.org/#/c/154049/
20:12:11 <skraynev> lol...
20:12:21 <asalkeld> can't we just get a simple something in there?
20:12:53 <asalkeld> "heat makes stuff with templates" ;)
20:13:17 <stevebaker> I thought we had one somewhere else
20:13:21 <inc0> "heat makes stuff" is enough;)
20:13:26 <zaneb> with programs going away, I'm not sure this is even needed any more
20:13:41 <zaneb> it's definitely a lot less critical to get the wording right
20:13:52 <asalkeld> yip
20:13:58 <zaneb> since in the big tent, competing projects are welcome
20:14:05 <stevebaker> yeah, heat is a project, and that proposed mission seems to sum up what it does. +1
20:14:07 <shardy> #link https://review.openstack.org/#/c/116703/1/reference/programs.yaml
20:14:09 <zaneb> so there's no danger of staking out too much space
20:14:17 <shardy> that's the original review which some of us commented on
20:14:34 <stevebaker> maybe s/standardized/declarative/
20:15:13 <zaneb> stevebaker: +1
20:15:56 <shardy> stevebaker: +1
20:15:57 <asalkeld> template based declarative resource orchestration?
20:15:59 <inc0> I'd add a word that heat is really the only project with full context of network, compute and so on...but someone with better english would have to rephrase that
20:16:37 <stevebaker> inc0: I think "composite cloud applications" implies that
20:16:50 <asalkeld> actually what stevebaker said is fine
20:17:04 <asalkeld> stevebaker: you adding that as a comment?
20:17:05 <pas-ha> +1 for declarative, we keep fighting off imperative stuff :)
20:17:31 <ryansb> +1 seriously.
20:17:41 <zaneb> +2
20:18:14 <asalkeld> ours was too wordy
20:18:41 <asalkeld> cool to move on?
20:18:48 <stevebaker> please
20:19:01 <asalkeld> #topic update from cross project
20:19:13 <asalkeld> #link https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking
20:19:25 <asalkeld> please all read that ^ sometime
20:19:55 <asalkeld> also there is this use asyncio / threads spec
20:20:08 <asalkeld> (getting link)
20:20:45 <asalkeld> #link https://review.openstack.org/#/q/status:open+project:openstack/openstack-specs,n,z
20:21:01 <asalkeld> we should all be adding feedback there I think
20:21:20 <asalkeld> or at least if you are passionate about the $topic
20:21:33 <stevebaker> where did starting heat-engine with multiple workers end up?
20:21:47 <asalkeld> stevebaker: afaik it works
20:21:57 <asalkeld> shardy: ^
20:22:07 <zaneb> I believe it works but is not on by default
20:22:08 <shardy> Yeah it works, or at least it did last time I tested it
20:22:34 <shardy> Getting it tested in the gate would be cool, but there was disagreement about running multiple workers by default
20:22:41 <stevebaker> so in theory heat-engine could just ditch eventlet entirely and we call workers our concurrency solution?
20:22:49 <inc0> uhh...having to move the whole thing to pure async will cost a lot of work...
20:23:03 <asalkeld> inc0: i agree with that
20:23:13 <asalkeld> prefer stevebaker's solution
20:23:20 <inc0> async is cool, but its hell for debugging
20:23:21 <zaneb> stevebaker: arguably it's more complicated than that
20:23:21 <asalkeld> processes + messaging
20:23:52 * stevebaker waves dismissively and mumbles
20:24:27 <asalkeld> ok, moving on - this was just a heads up
20:24:31 <asalkeld> stuff is happening
20:24:41 <stevebaker> thanks for the update
20:24:44 <zaneb> stevebaker: workers are just engines that are not managed by systemd
20:24:59 <asalkeld> #topic Short question about increasing nested depth default value for using it in Sahara.
20:25:07 <skraynev> it's mine
20:25:22 <stevebaker> what is the default depth currently?
20:25:23 <skraynev> could we increase this value to 4 or 5 https://github.com/openstack/heat/blob/master/heat/common/config.py#L80 ?
20:25:26 <asalkeld> 3
20:25:33 <stevebaker> thats really low
20:25:51 <shardy> Yeah it does seem pretty low now, given all the provider resources etc
20:26:01 <pas-ha> recent discussion with Sahara devs suggests that they need it at least 4 if they move to all-Heat internal orchestration
20:26:33 <skraynev> pas-ha: it's for now ;) so maybe we want to make it bigger than 4 ;)
20:26:45 <pas-ha> yep, that's for starters
20:27:00 <asalkeld> pas-ha: they use resource group?
20:27:05 <pas-ha> yes
20:27:09 <skraynev> asalkeld: yes
20:27:12 <asalkeld> with a template as the resource
20:27:19 <skraynev> nested resource groups
20:27:25 <stevebaker> the only reason we had a nested depth limit was because it was expensive to do a total resource count. Don't we store resource counts now? We could rely on a resource limit instead of nested limit
20:27:26 <asalkeld> so that's 3 levels right there?
20:27:49 <asalkeld> stevebaker: it's the same
20:27:52 <pas-ha> Cluster->NodeGroup->NodeGroupElement->Instance+ResourceGroup->volumes
20:27:56 <asalkeld> stevebaker: i really need to change that
20:28:13 <skraynev> stack -> resource_group1 -> resource_group2
20:28:16 <asalkeld> as we can't pass parent_resource object
20:28:19 <skraynev> it is 4
20:28:37 <asalkeld> so each stack would need to reload every stack
20:28:39 <skraynev> pas-ha: thinks in the same graphic way:)
20:28:47 <asalkeld> which is horrible
20:28:49 <shardy> https://bugs.launchpad.net/heat/+bug/1331227
20:28:50 <openstack> Launchpad bug 1331227 in heat "nested stack depth too low" [Wishlist,Opinion]
20:29:01 <skraynev> stevebaker: agree
20:29:16 <asalkeld> stevebaker: it should be a db lookup
20:29:32 <stevebaker> asalkeld: it should. I thought it was already
20:30:01 <asalkeld> stevebaker: the nested level is in the db, the total res count is not
20:30:39 <asalkeld> more levels == more stacks to load and s.total_resources()
20:31:11 <asalkeld> https://review.openstack.org/#/c/156546/
20:31:19 <asalkeld> i am working on that
20:31:24 <asalkeld> ^
20:31:47 <inc0> I think that should be solvable by some clever sql query tho
20:32:00 <asalkeld> skraynev: so yeah lets  increase it, but we need to think about total_resources()
20:32:11 <asalkeld> inc0: +1
20:32:11 <stevebaker> so in the short term we could raise the limit to ... 5? but we need to acknowledge that this makes us vulnerable to users launching DOS stacks with lots of resources
20:32:41 <skraynev> I am ok with 5
20:32:43 <asalkeld> yeah, and make a bug for this
20:33:11 <pas-ha> max resource count is 1000, counting the nested too?
20:33:11 <asalkeld> #topic Mistral resources in Heat.
20:33:21 <stevebaker> longer term our resource limit needs to apply to all resources (including nested)
20:33:36 <asalkeld> stevebaker: it does
20:33:48 <stevebaker> asalkeld: oh, it does?
20:34:03 <asalkeld> that's why it's so horrible
20:34:04 <skraynev> about  topic: this is mine too ;)
20:34:10 <stevebaker> asalkeld: its just expensive to calculate?
20:34:19 <shardy> stevebaker: yeah, it does, but the way we do it isn't nice
20:34:28 <stevebaker> oh, oki
20:34:28 <prazumovsky> https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/mistral-resources-for-heat,n,z
20:34:37 <asalkeld> stevebaker: yip as inc0 suggested we need a db query
20:34:48 <skraynev> both resources are ready for review
20:35:00 <asalkeld> #link https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/mistral-resources-for-heat,n,z
20:35:00 <inc0> asalkeld, I'll look into it in next few days
20:35:09 <asalkeld> thanks inc0
20:35:26 <stevebaker> inc0: or an extra stack column with the resource count (including nested)
20:35:35 <skraynev> also we prepared this document with use-cases and examples: https://docs.google.com/a/mirantis.com/document/d/10iAe--I17c7iFoN0Ouk2DxdrxjsV-QTuzqqNdHzq2-Y/edit?usp=sharing
20:35:44 <skraynev> #link https://docs.google.com/a/mirantis.com/document/d/10iAe--I17c7iFoN0Ouk2DxdrxjsV-QTuzqqNdHzq2-Y/edit?usp=sharing
20:35:53 <asalkeld> prazumovsky: nice, sorry haven't come back to review
20:36:06 <inc0> stevebaker, I'll start with the query...I dislike redundancy like that - it's prone to desynchronization
20:36:11 <shardy> nice work, I've been meaning to try these
20:36:21 <inc0> but we'll see how it goes
20:36:25 <skraynev> shardy: welcome to play with it ;)
20:36:32 <stevebaker> inc0: ok, sounds fine
20:36:40 <shardy> I can't access the doc
20:37:15 <shardy> skraynev: can you put the docs in-tree with the resources as rst, or short term put it in a wiki/etherpad?
20:37:36 <skraynev> shardy: sure
20:37:45 <shardy> great, thanks
20:37:47 <skraynev> will do it tomorrow ;)
20:38:18 <skraynev> and will send email with related information
20:38:21 <shardy> It will be good to see some simple examples to see how it works :)
20:38:47 <asalkeld> yeah nice work, just need time to review
20:39:17 <asalkeld> prazumovsky: skraynev need any other input?
20:39:21 <skraynev> shardy: yes, they are. hope that it will be interesting ;)
20:39:28 <asalkeld> or just an update?
20:40:12 <skraynev> asalkeld: an update, that it's ready for review
20:40:22 <asalkeld> thanks skraynev
20:40:28 <asalkeld> i'll move to open discussion
20:40:36 <asalkeld> #topic open discussion
20:40:44 <prazumovsky> sorry, thank you, skraynev, for the answer
20:40:57 <skraynev> prazumovsky: np
20:41:20 <jpeeler> i have something
20:41:25 <jpeeler> My day to day role has shifted away from Heat and so I think it's best that I step down from core, rather than get voted out later due to inactivity
20:41:38 <jpeeler> Is this meeting an official enough announcement or do I need to send out an email?
20:41:38 <asalkeld> jpeeler: :-(
20:41:53 <inc0> shardy, about the decouple-nested talk - I get questions all the time about the lack of HA in heat - I think any improvement in this matter would be good to announce broadly
20:42:10 <inc0> ops still distrust us:(
20:42:11 <shardy> jpeeler: sorry to hear that, thanks for all the work until now :)
20:42:26 <skraynev> jpeeler: bad news...
20:42:32 <asalkeld> jpeeler: not sure, normally an email to mailing list?
20:42:41 <stevebaker> jpeeler: thanks for all your reviews
20:42:51 <ryansb> jpeeler: you'll be missed
20:43:00 <shardy> inc0: decouple nested isn't really about HA, it's about making deployment of large trees of stacks more scalable
20:43:11 <asalkeld> jpeeler: do you have any blueprints assigned?
20:43:44 <inc0> shardy, but it breaks very long tasks into a few smaller tasks - it's good for both topics
20:44:01 <shardy> inc0: it was driven, initially, by the use-case of TripleO, where they want to e.g. launch several ResourceGroups with hundreds of nodes simultaneously
20:44:21 <asalkeld> jpeeler: i'll take you off lazy-load-outputs - or do you want to finish that?
20:44:22 <shardy> obviously it benefits anyone with that sort of scale requirement too though
20:44:25 <jpeeler> asalkeld: https://blueprints.launchpad.net/heat/+spec/lazy-load-outputs
20:44:49 <shardy> inc0: the thing is, if an engine dies, decouple-nested doesn't enable any HAish recovery from that failure
20:44:53 <jpeeler> asalkeld: it's possible one day i could finish it, but i'm not sure it's all that important is it?
20:45:18 <shardy> So while it's clearly a step in the right direction (decouple all-the-things, moving towards fully decoupled convergence model)
20:45:20 <asalkeld> i can change the milestone
20:45:36 <shardy> I don't think it really warrants major announcement
20:45:49 <shardy> What do others think?
20:46:18 <asalkeld> shardy: not sure
20:46:21 <skraynev> I think that we can wait for the next release.
20:46:26 <asalkeld> normal release notes?
20:46:37 <stevebaker> it's definitely release note worthy
20:46:43 <shardy> Maybe we land it and document what it can and can't do in the release notes?
20:47:04 <shardy> Yeah, and by then we'll know what pieces of convergence have landed so we can relate the topics if appropriate
20:47:29 <asalkeld> yip
20:47:39 <shardy> inc0: thanks for bringing it up, it's definitely good to think about
20:47:40 <inc0> well, my point is, it seems we have a great deal of mistrust from the ops side... it would be good to show people that we care about downtime
20:47:46 <asalkeld> jpeeler: did you post any reviews related to that bp?
20:48:05 <asalkeld> if so can you post it into the blueprint
20:48:10 <jpeeler> asalkeld: yeah, but it's pretty far from complete
20:48:13 <jpeeler> ok will do
20:48:24 <asalkeld> so the next person doesn't start from nothing
20:49:36 <asalkeld> jpeeler: also say something like "i started this, here is the patch. feel free to take this over"
20:51:15 <pas-ha> asalkeld, this=heat :)
20:51:28 <shardy> inc0: Ok, well that's useful feedback, we definitely need to keep that in mind when communicating features at release time
20:51:52 <asalkeld> shardy: in decouple-nested did you get ActionInProgress in nested stack updates?
20:52:07 <asalkeld> one of the last issues i have
20:52:37 <asalkeld> this is with update policy tests
20:52:46 <shardy> asalkeld: Hmm, no, but I only tested fairly simple templates tbh
20:52:59 <asalkeld> ok
20:53:55 <asalkeld> Oh, one notice
20:54:34 <asalkeld> Apparently we have ~10 days to at least post code for reviews (for blueprints)
20:54:59 <asalkeld> by then we need to have all blueprints in "needs code review"
20:55:38 <asalkeld> #link https://launchpad.net/heat/+milestone/kilo-3
20:55:49 <skraynev> wow
20:55:52 <skraynev> ok
20:56:13 <asalkeld> i think we are doing quite well, but some convergence bp's need *some* code posted
20:56:31 <skraynev> asalkeld: suppose the same applies for bugs related to convergence
20:56:52 <asalkeld> they are bugs
20:57:02 <asalkeld> bugs != blueprints
20:57:18 <skraynev> oh, got it.
20:57:38 <asalkeld> anything else folks?
20:57:48 <jpeeler> thanks for all the nice responses! (didn't want to interrupt the meeting earlier)
20:58:12 <asalkeld> jpeeler: been great having you
20:58:19 <ryansb> jpeeler: cheers!
20:58:34 <asalkeld> if you come back to heat we can fast track your core status
20:59:04 <skraynev> jpeeler: come to us with beer ;)
20:59:04 <jpeeler> ah thanks for that
20:59:27 <asalkeld> #endmeeting