20:00:14 #startmeeting heat
20:00:15 Meeting started Wed Feb 25 20:00:14 2015 UTC and is due to finish in 60 minutes. The chair is asalkeld. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:19 The meeting name has been set to 'heat'
20:00:27 hello
20:00:32 #topic rollcall
20:00:33 hi
20:00:42 hello
20:00:44 hi
20:00:48 o/
20:00:50 hello
20:00:52 \o
20:01:04 o/
20:01:06 hi all
20:01:38 hi
20:01:47 #topic Adding items to the agenda
20:02:14 any new topics?
20:03:17 ok, moving on
20:03:31 there is always open discussion
20:03:33 #topic Vancouver Design Summit space needs
20:04:01 ttx needs to figure out what space we need at summit
20:04:04 o/
20:04:12 #link https://docs.google.com/spreadsheets/d/14pryalH3rVVQGHdyE3QeeZ3eDrs_1KfqfykxSTvrfyI/edit?usp=sharing
20:04:16 zaneb: morning
20:04:19 can you guys view this
20:04:30 I can
20:04:32 yup
20:04:34 asalkeld: yep
20:05:08 ok so there are some rules
20:05:12 You shouldn't ask for more than 18 sessions total (fishbowl + work)
20:05:16 remind me what a work session is?
20:05:29 * zaneb knew this at one point
20:05:43 like a fishbowl in a smaller bowl?
20:05:43 a work session is small, semi closed door
20:05:55 ok
20:06:03 fewer spectators
20:06:23 spectators are good for usability stuff
20:06:31 assuming they use heat
20:06:44 agree
20:06:47 uhh... when it comes to usability, everyone has a different opinion
20:06:59 I guess topics that affect operators and users would be good for fishbowl
20:07:13 inc0: true, but operator/end-user feedback is good vs all developers :)
20:07:30 stevebaker, +1, something about HA of heat would be good
20:07:33 so I put down 5 fishbowl and 10 work + 1 friday
20:07:48 any tweaking?
20:07:57 suggestions??
20:08:03 sounds good to me
20:08:06 that sounds fine
20:08:08 sounds pretty good
20:08:14 ok cool
20:08:18 we had an etherpad with session suggestions didn't we?
20:08:23 * shardy looks for it
20:08:24 hopefully the fishbowl will reduce the need for 2 sessions on Friday
20:08:34 as long as we get Friday AM
20:08:42 shardy, https://etherpad.openstack.org/p/kilo-heat-summit-topics
20:08:44 +1 to that
20:08:46 because everyone always leaves at lunch time for some reason
20:08:47 the gibbering wreck session
20:08:53 ahh wrong release
20:09:08 https://etherpad.openstack.org/p/liberty-heat-sessions
20:09:19 sorry s/fishbowl/work sessions/
20:09:36 #link https://etherpad.openstack.org/p/liberty-heat-sessions
20:09:38 inc0: thanks
20:10:04 need to start adding ideas to that
20:10:13 shardy, by the way, maybe you'd like to say a word or two about decouple nested?
20:10:14 we do have a heap of sessions to fill
20:10:40 inc0: in this topic?
20:11:05 asalkeld: also, I think we should re-read the heat usability improvements page and remove some completed stuff
20:11:05 yeah, seems like a nice leap in terms of heat HA
20:11:19 inc0: we can chat about that in open discussion, by all means
20:11:27 i'll move on
20:11:33 #topic Heat Mission statement:
20:11:49 this is just a heads up that someone posted a new one
20:11:52 Oh no, not this again! ;)
20:11:54 as ours is missing
20:12:00 diediedie
20:12:07 #link https://review.openstack.org/#/c/154049/
20:12:11 lol...
20:12:21 can't we just get a simple something in there?
20:12:53 "heat makes stuff with templates" ;)
20:13:17 I thought we had one somewhere else
20:13:21 "heat makes stuff" is enough ;)
20:13:26 with programs going away, I'm not sure this is even needed any more
20:13:41 it's definitely a lot less critical to get the wording right
20:13:52 yip
20:13:58 since in the big tent, competing projects are welcome
20:14:05 yeah, heat is a project, and that proposed mission seems to sum up what it does.
+1
20:14:07 #link https://review.openstack.org/#/c/116703/1/reference/programs.yaml
20:14:09 so there's no danger of staking out too much space
20:14:17 that's the original review which some of us commented on
20:14:34 maybe s/standardized/declarative/
20:15:13 stevebaker: +1
20:15:56 stevebaker: +1
20:15:57 template based declarative resource orchestration?
20:15:59 I'd add a word that heat is really the only project with full context of network, compute and so on... but someone with better english would have to rephrase that
20:16:37 inc0: I think "composite cloud applications" implies that
20:16:50 actually what stevebaker said is fine
20:17:04 stevebaker: you adding that as a comment?
20:17:05 +1 for declarative, we keep fighting off imperative stuff :)
20:17:31 +1 seriously.
20:17:41 +2
20:18:14 ours was too wordy
20:18:41 cool to move on?
20:18:48 please
20:19:01 #topic update from cross project
20:19:13 #link https://wiki.openstack.org/wiki/Release_Cycle_Management/Liberty_Tracking
20:19:25 please all read that ^ sometime
20:19:55 also there is this use asyncio / threads spec
20:20:08 (getting link)
20:20:45 #link https://review.openstack.org/#/q/status:open+project:openstack/openstack-specs,n,z
20:21:01 we should all be adding feedback there I think
20:21:20 or at least if you are passionate about the $topic
20:21:33 where did starting heat-engine with multiple workers end up?
20:21:47 stevebaker: afaik it works
20:21:57 shardy: ^
20:22:07 I believe it works but is not on by default
20:22:08 Yeah it works, or at least it did last time I tested it
20:22:34 Getting it tested in the gate would be cool, but there was disagreement about running multiple workers by default
20:22:41 so in theory heat-engine could just ditch eventlet entirely and we call workers our concurrency solution?
20:22:49 uhh... having to move the whole thing to pure async will cost a lot of work...
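As an aside on the multiple-workers question above: the "workers instead of eventlet" idea is just a pool of OS processes handling jobs in parallel. This is a minimal, hypothetical sketch of that model — none of these names are heat-engine's actual code:

```python
# Sketch of process-based concurrency ("workers") as an alternative to an
# eventlet-style cooperative event loop. A Pool distributes jobs to worker
# processes over internal queues (the "processes + messaging" model).
# `handle` and `run_engine` are illustrative names, not Heat APIs.
import multiprocessing


def handle(action):
    """Pretend to run one stack action; a real engine would hit the DB/RPC."""
    return '%s: done' % action


def run_engine(actions, num_workers=4):
    # Each worker is a separate process, so blocking calls in one worker
    # don't stall the others -- the OS scheduler provides the concurrency.
    with multiprocessing.Pool(num_workers) as pool:
        return pool.map(handle, actions)


if __name__ == '__main__':
    print(run_engine(['create stack-1', 'update stack-2']))
```

The trade-off raised in the discussion still applies: processes sidestep the "hell for debugging" of async code, at the cost of inter-process messaging and memory overhead.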
20:23:03 inc0: i agree with that
20:23:13 prefer stevebaker's solution
20:23:20 async is cool, but it's hell for debugging
20:23:21 stevebaker: arguably it's more complicated than that
20:23:21 processes + messaging
20:23:52 * stevebaker waves dismissively and mumbles
20:24:27 ok, moving on - this was just a heads up
20:24:31 stuff is happening
20:24:41 thanks for the update
20:24:44 stevebaker: workers are just engines that are not managed by systemd
20:24:59 #topic Short question about increasing nested depth default value for using it in Sahara.
20:25:07 it's mine
20:25:22 what is the default depth currently?
20:25:23 could we increase this value to 4 or 5 https://github.com/openstack/heat/blob/master/heat/common/config.py#L80 ?
20:25:26 3
20:25:33 that's really low
20:25:51 Yeah it does seem pretty low now, given all the provider resources etc
20:26:01 recent discussion with Sahara devs suggests that they need it at least 4 if they move to all-Heat internal orchestration
20:26:33 pas-ha: it's for now ;) so maybe we want to make it bigger than 4 ;)
20:26:45 yep, that's for starters
20:27:00 pas-ha: they use resource group?
20:27:05 yes
20:27:09 asalkeld: yes
20:27:12 with a template as the resource
20:27:19 nested resource groups
20:27:25 the only reason we had a nested depth limit was because it was expensive to do a total resource count. Don't we store resource counts now? We could rely on a resource limit instead of a nested limit
20:27:26 so that's 3 levels right there?
20:27:49 stevebaker: it's the same
20:27:52 Cluster->NodeGroup->NodeGroupElement->Instance+ResourceGroup->volumes
20:27:56 stevebaker: i really need to change that
20:28:13 stack -> resource_group1 -> resource_group2
20:28:16 as we can't pass the parent_resource object
20:28:19 it is 4
20:28:37 so each stack would need to reload every stack
20:28:39 pas-ha: thinks in the same graphic way :)
20:28:47 which is horrible
20:28:49 https://bugs.launchpad.net/heat/+bug/1331227
20:28:50 Launchpad bug 1331227 in heat "nested stack depth too low" [Wishlist,Opinion]
20:29:01 stevebaker: agree
20:29:16 stevebaker: it should be a db lookup
20:29:32 asalkeld: it should. I thought it was already
20:30:01 stevebaker: the nested level is in the db, the total res count is not
20:30:39 more levels == more stacks to load and s.total_resources()
20:31:11 https://review.openstack.org/#/c/156546/
20:31:19 i am working on that 6
20:31:24 ^
20:31:47 I think that should be solvable by some clever sql query tho
20:32:00 skraynev: so yeah let's increase it, but we need to think about total_resources()
20:32:11 inc0: +1
20:32:11 so in the short term we could raise the limit to ... 5? but we need to acknowledge that this makes us vulnerable to users launching DOS stacks with lots of resources
20:32:41 I am ok with 5
20:32:43 yeah, and make a bug for this
20:33:11 max resource count is 1000, counting the nested too?
20:33:11 #topic Mistral resources in Heat.
20:33:21 longer term our resource limit needs to apply to all resources (including nested)
20:33:36 stevebaker: it does
20:33:48 asalkeld: oh, it does?
20:34:03 that's why it's so horrible
20:34:04 about topic: this is mine too ;)
20:34:10 asalkeld: it's just expensive to calculate?
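For reference, the option under discussion (per the config.py link above) is `max_nested_stack_depth`; the bump to 5 agreed above would be a one-line change for operators — a sketch only, check your release's configuration reference for the exact option name and section:

```ini
[DEFAULT]
# Default was 3 at the time of this meeting; 5 lets Sahara-style
# Cluster -> NodeGroup -> NodeGroupElement -> Instance nesting fit.
max_nested_stack_depth = 5
```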
20:34:19 stevebaker: yeah, it does, but the way we do it isn't nice
20:34:28 oh, oki
20:34:28 https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/mistral-resources-for-heat,n,z
20:34:37 stevebaker: yip, as inc0 suggested we need a db query
20:34:48 both resources are ready for review
20:35:00 #link https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/mistral-resources-for-heat,n,z
20:35:00 asalkeld, I'll look into it in the next few days
20:35:09 thanks inc0
20:35:26 inc0: or an extra stack column with the resource count (including nested)
20:35:35 also we prepared this document with use-cases and examples: https://docs.google.com/a/mirantis.com/document/d/10iAe--I17c7iFoN0Ouk2DxdrxjsV-QTuzqqNdHzq2-Y/edit?usp=sharing
20:35:44 #link https://docs.google.com/a/mirantis.com/document/d/10iAe--I17c7iFoN0Ouk2DxdrxjsV-QTuzqqNdHzq2-Y/edit?usp=sharing
20:35:53 prazumovsky: nice, sorry I haven't come back to review
20:36:06 stevebaker, I'll start with the query... I dislike redundancy like that - it's prone to desynchronization
20:36:11 nice work, I've been meaning to try these
20:36:21 but we'll see how it goes
20:36:25 shardy: welcome to play with it ;)
20:36:32 inc0: ok, sounds fine
20:36:40 I can't access the doc
20:37:15 skraynev: can you put the docs in-tree with the resources as rst, or short term put it in a wiki/etherpad?
20:37:36 shardy: sure
20:37:45 great, thanks
20:37:47 will do it tomorrow ;)
20:38:18 and will send an email with related information
20:38:21 It will be good to see some simple examples to see how it works :)
20:38:47 yeah nice work, just need time to review
20:39:17 prazumovsky: skraynev need any other input?
20:39:21 shardy: yes, they are. hope that it will be interesting ;)
20:39:28 or just an update?
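The db-lookup idea from the nested-depth discussion (one query for the total resource count, instead of loading every nested stack to call total_resources()) could be done with a recursive query. This is a toy sketch in SQLite with a hypothetical schema — not Heat's actual tables or code:

```python
# One recursive CTE walks the stack tree (owner_id = parent stack) and
# counts all resources in one round trip, instead of O(stacks) loads.
# Schema and names are illustrative stand-ins for Heat's real tables.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE stack (id TEXT PRIMARY KEY, owner_id TEXT);
    CREATE TABLE resource (id INTEGER PRIMARY KEY, stack_id TEXT);
    -- a root stack with one nested stack, nested two levels deep
    INSERT INTO stack VALUES ('root', NULL), ('child', 'root'),
                             ('grandchild', 'child');
    INSERT INTO resource (stack_id) VALUES
        ('root'), ('root'), ('child'), ('grandchild'), ('grandchild');
""")


def total_resources(conn, stack_id):
    """Count resources in stack_id and every descendant stack."""
    row = conn.execute("""
        WITH RECURSIVE tree(id) AS (
            SELECT id FROM stack WHERE id = ?
            UNION ALL
            SELECT s.id FROM stack s JOIN tree t ON s.owner_id = t.id
        )
        SELECT COUNT(*) FROM resource
        WHERE stack_id IN (SELECT id FROM tree)
    """, (stack_id,)).fetchone()
    return row[0]


print(total_resources(conn, 'root'))  # -> 5 (2 + 1 + 2 across the tree)
```

The alternative floated at 20:35:26 — a denormalized per-stack count column — trades this query cost for the desynchronization risk inc0 mentions.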
20:40:12 asalkeld: updatem that it's ready for review
20:40:22 thanks skraynev
20:40:24 s/updatem/update,
20:40:28 i'll move to open discussion
20:40:36 #topic open discussion
20:40:44 sorry, thank you, skraynev, for the answer
20:40:57 prazumovsky: np
20:41:20 i have something
20:41:25 My day to day role has shifted away from Heat and so I think it's best that I step down from core, rather than get voted out later due to inactivity
20:41:38 Is this meeting an official enough announcement or do I need to send out an email?
20:41:38 jpeeler: :-(
20:41:53 shardy, about the talk about decouple-nested - I get questions all the time about the lack of HA in heat - I think any improvement in this matter would be good to announce broadly
20:42:10 ops still distrust us :(
20:42:11 jpeeler: sorry to hear that, thanks for all the work until now :)
20:42:26 jpeeler: bad news...
20:42:32 jpeeler: not sure, normally an email to the mailing list?
20:42:41 jpeeler: thanks for all your reviews
20:42:51 jpeeler: you'll be missed
20:43:00 inc0: decouple nested isn't really about HA, it's about making deployment of large trees of stacks more scalable
20:43:11 jpeeler: do you have any blueprints assigned?
20:43:44 shardy, but it breaks very long tasks into a few smaller tasks - it's good for both topics
20:44:01 inc0: it was driven, initially, by the use-case of TripleO, where they want to e.g. launch several ResourceGroups with hundreds of nodes simultaneously
20:44:21 jpeeler: i'll take you off lazy-load-outputs - or do you want to finish that?
20:44:22 obviously it benefits anyone with that sort of scale requirement too though
20:44:25 asalkeld: https://blueprints.launchpad.net/heat/+spec/lazy-load-outputs
20:44:49 inc0: the thing is, if an engine dies, decouple-nested doesn't enable any HA-ish recovery from that failure
20:44:53 asalkeld: it's possible one day i could finish it, but i'm not sure it's all that important is it?
20:45:18 So while it's clearly a step in the right direction (decouple all-the-things, moving towards a fully decoupled convergence model)
20:45:20 i can change the milestone
20:45:36 I don't think it really warrants a major announcement
20:45:49 What do others think?
20:46:18 shardy: not sure
20:46:21 I think that we can wait for the next release.
20:46:26 normal release notes?
20:46:37 its definitely release not worthy
20:46:43 Maybe we land it and document what it can and can't do in the release notes?
20:46:55 release note worthy
20:47:04 Yeah, and by then we'll know what pieces of convergence have landed so we can relate the topics if appropriate
20:47:29 yip
20:47:39 inc0: thanks for bringing it up, it's definitely good to think about
20:47:40 well, my point is, it seems we have a great deal of mistrust from the ops side... it would be good to show people that we care about downtime
20:47:46 jpeeler: did you post any reviews related to that bp?
20:48:05 if so can you post it into the blueprint
20:48:10 asalkeld: yeah, but it's pretty far from complete
20:48:13 ok will do
20:48:24 so the next person doesn't start from nothing
20:49:36 jpeeler: also say something like "i started this, here is the patch. feel free to take this over"
20:51:15 asalkeld, this=heat :)
20:51:28 inc0: Ok, well that's useful feedback, we definitely need to keep that in mind when communicating features at release time
20:51:52 shardy: in decouple-nested did you get ActionInProgress in nested stack updates?
20:52:07 one of the last issues i have
20:52:37 this is with update policy tests
20:52:46 asalkeld: Hmm, no, but I only tested fairly simple templates tbh
20:52:59 ok
20:53:55 Oh, one note
20:54:34 Apparently we have ~10 days to at least post code for reviews (for blueprints)
20:54:59 by then we need to have all blueprints in "needs code review"
20:55:38 #link https://launchpad.net/heat/+milestone/kilo-3
20:55:49 wow
20:55:52 ok
20:56:13 i think we are doing quite well, but some convergence bps need *some* code posted
20:56:31 asalkeld: I suppose the same goes for bugs related to convergence
20:56:52 they are bugs
20:57:02 bugs != blueprints
20:57:18 oh, got it.
20:57:38 anything else folks?
20:57:48 thanks for all the nice responses earlier! (didn't want to interrupt the meeting earlier)
20:58:12 jpeeler: been great having you
20:58:19 jpeeler: cheers!
20:58:34 if you come back to heat we can fast track your core status
20:59:04 jpeeler: come to us with beer ;)
20:59:04 ah thanks for that
20:59:27 #endmeeting