20:00:24 <shardy> #startmeeting heat
20:00:25 <openstack> Meeting started Wed Aug 14 20:00:24 2013 UTC and is due to finish in 60 minutes.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:28 <openstack> The meeting name has been set to 'heat'
20:00:35 <shardy> #topic rollcall
20:00:38 <radix> hello hello
20:00:39 <bgorski> o/
20:00:41 <sdake_> o/
20:01:03 <asalkeld> o/
20:01:07 <zaneb> howdy y'all
20:01:20 <bnemec> \o
20:01:22 <spzala> Hello
20:01:34 <funzo> hello
20:02:09 <tspatzier> Hi
20:02:18 <stevebaker> \o
20:02:21 <jasond> hi
20:02:27 <mhagedorn> hi
20:02:38 <shardy> SpamapS, therve around?
20:03:02 <shardy> Ok hi all, lets get started
20:03:11 <radix> I think therve's on vacation
20:03:25 <shardy> radix: Ok, cool, thanks
20:03:43 <shardy> #topic Review last week's actions
20:03:45 <zaneb> tomorrow is a holiday in most of Europe
20:04:01 <shardy> #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-08-07-20.00.html
20:04:16 <randallburt> hi all
20:04:20 <shardy> zaneb: it is? doh, not in the UK! ;(
20:04:35 <shardy> Don't see any actions, anyone have anything to raise from last week?
20:04:36 <zaneb> shardy: only in the Catholic parts ;)
20:04:45 <zaneb> shardy: yes, mission statement
20:05:04 <asalkeld> we still busy with that?
20:05:06 <shardy> #link https://etherpad.openstack.org/heat-mission
20:05:11 <asalkeld> wow, overcooking it
20:05:24 <sdake_> imo doesn't need to be this complicated ;)
20:05:27 <shardy> asalkeld: yeah, agreed
20:06:25 <shardy> zaneb: what did you want to raise, other than we still need to put it somewhere?
20:06:30 <zaneb> not suggesting that we need to discuss it again here, just that it's time to post it
20:06:38 <radix> huh. I think I lagged out for a bit
20:07:02 <shardy> zaneb: OK, I was going to previously but the discussion was still in progress so I left it and was out last week
20:07:09 <shardy> #action shardy to post mission statement
20:07:17 <stevebaker> stick a fork in it, it's done
20:07:29 <shardy> Ok, anything else from last week?
20:08:04 <asalkeld> nope
20:08:05 <andrew_plunk> I was wondering about the status of the "heat config" bug, but I can figure that out in #heat later
20:08:15 <shardy> #topic Reminder re Havana_Release_Schedule FeatureProposalFreeze
20:08:42 <shardy> So the feature proposal freeze was agreed as Aug 23
20:08:47 <shardy> #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
20:09:10 <shardy> So we've all got just over a week to get our stuff posted for review, then a couple of weeks to get it reviewed and merged
20:09:20 <stevebaker> I'm waiting on various tempest/devstack/dib things, so unless somebody wants a hand with something I might take a look at Horizon
20:09:55 <shardy> Anything which won't be posted by the 23rd will be bumped to Icehouse, so speak now if that's likely ;)
20:10:19 <shardy> which brings us to:
20:10:26 <shardy> #topic h3 blueprint status
20:10:27 <asalkeld> when you say posted you mean patches or bp?
20:10:50 <shardy> #link https://launchpad.net/heat/+milestone/havana-3
20:11:39 <shardy> asalkeld: It's patches that must be posted AIUI, as in propose the change?
20:11:40 <radix> I'll try to keep up on reviews a bit every day
20:11:49 <shardy> way too late for new BPs now IMO
20:13:19 <shardy> we've got 28 BPs, about half done - I'm hoping heat-trusts will land but gated on keystoneclient
20:13:29 <shardy> sdake will the nova native resource BP land?
20:13:46 <sdake_> hopefully - been busy with openshifty stuff
20:13:48 <shardy> #link https://blueprints.launchpad.net/heat/+spec/native-nova-instance
20:14:08 <stevebaker> sdake_ I'm happy to help with that if you like
20:14:12 <shardy> #link https://blueprints.launchpad.net/heat/+spec/heat-trusts
20:14:29 <shardy> Yeah, be best if that doesn't get deferred IMO
20:14:30 <sdake_> stevebaker atm i have to abandon my openshift work to make the 23rd deadline for that above blueprint
20:14:38 <sdake_> i'll point you at my wip for rhel dib support
20:14:56 <shardy> sdake: Happy to reassign to stevebaker then?
20:14:58 <stevebaker> not so interested in that ;)
20:15:00 <sdake_> i've got a few patches merged already
20:15:09 <sdake_> either way works for me
20:15:18 <sdake_> what do you prefer stevebaker
20:15:23 <sdake_> both need to get done soon
20:15:33 <stevebaker> i wouldn't mind working on native instance
20:15:44 <sdake_> ok assign to yourself then
20:15:50 <stevebaker> ok
20:16:11 <radix> does anyone know what else needs to happen for https://blueprints.launchpad.net/heat/+spec/instance-resize-update-stack ? I see that it had a patchset merged
20:16:22 <shardy> tspatzier: some stuff has landed for hot-specification, how much more is planned?
20:16:34 <shardy> tspatzier: It's still "Started" atm
20:16:50 <shardy> #link https://blueprints.launchpad.net/heat/+spec/hot-specification
20:17:51 <shardy> radix: IIRC therve posted a patch and it's marked as Implemented, so done?
20:17:57 <spzala> shardy: Seems like tspatzier is not here.. I will follow up with him
20:18:07 <radix> well, the bp isn't marked as complete. I guess I'll just ask him when he gets back
20:18:10 <shardy> spzala: OK, thanks
20:18:14 <tspatzier> shardy, sorry, was distracted for a moment
20:18:23 <spzala> shardy: no problem!
20:18:26 <asalkeld> does anyone need the rest api for this? https://blueprints.launchpad.net/heat/+spec/provider-upload
20:18:30 <shardy> radix: It's marked as Implemented?
20:18:34 <shardy> #link https://blueprints.launchpad.net/heat/+spec/instance-resize-update-stack
20:18:45 <shardy> Completed by
20:18:46 <shardy> Thomas Herve on 2013-08-06
20:18:46 <radix> oh.
20:18:56 <tspatzier> shardy, so for hot-specification I guess there won't be much more in havana, since we will only document what is implemented.
20:18:58 <radix> shardy: ok, never mind, I assumed everything in the list that I was looking at was still open
20:19:07 <spzala> tspatzier: oh you are here :), cool.
20:19:27 <zaneb> asalkeld: I don't think we should have a ReST API for that
20:19:38 <asalkeld> mark as done?
20:19:48 <shardy> tspatzier: Ok, if you're happy, please move to Implemented so we know there aren't more patches pending
20:19:48 <tspatzier> shardy, so if you are ok with the direction that hot-specification only covers the initial sub-set of HOT we did in havana, we could close it
20:19:57 <tspatzier> sure, will do
20:20:15 <shardy> tspatzier: Ok, sounds good
20:20:34 <zaneb> asalkeld: it would be cool if we had the thing that will allow the client to automatically stick relevant files in the files section... dunno if that should go in that blueprint
20:21:11 <stevebaker> that already happens for env registry stuff
20:21:19 <shardy> asalkeld: Yeah, IIRC the API thing was a historical idea?
20:21:21 <asalkeld> zaneb, yeah maybe I can do that - I'm just forever reworking cw stuff
20:22:01 <shardy> Anyone else need work, or need to get rid of work for h3 before we move on?
20:22:28 <zaneb> stevebaker: this stuff: http://lists.openstack.org/pipermail/openstack-dev/2013-May/009551.html
20:23:43 <stevebaker> ja?
20:23:45 <stevebaker> ah
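For context on the env registry mechanism stevebaker refers to, here is a rough sketch of how an environment can map a resource type to a local provider template, with the referenced file bundled into the request's files map. The names and structures (OS::Example::Server, my_server.yaml, the exact request keys) are illustrative assumptions, not the exact wire format.

    # Hypothetical environment mapping a custom resource type to a local
    # provider template; the client is expected to bundle that file into
    # the "files" map of the stack-create/update request body.
    environment = {
        "resource_registry": {
            "OS::Example::Server": "my_server.yaml",  # made-up type and file name
        }
    }

    with open("my_server.yaml") as f:
        files = {"my_server.yaml": f.read()}

    with open("main_template.yaml") as f:
        request_body = {
            "stack_name": "mystack",
            "template": f.read(),
            "environment": environment,
            "files": files,  # the engine resolves registry entries against this map
        }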
20:23:49 <shardy> #topic Open discussion
20:24:06 <shardy> Anyone have anything to mention?
20:24:16 <zaneb> stevebaker: genau
20:24:24 <asalkeld> shardy probably need to chat about that Policy now returning a url
20:24:48 <asalkeld> I chatted to zaneb about it and that seemed the best solution
20:25:08 <asalkeld> https://review.openstack.org/#/c/41855/
20:25:13 <shardy> asalkeld: Yeah, so both you and therve have posted patches with compatibility stuff, which pollutes the AWS resources with native stuff, or vice-versa
20:25:20 <shardy> are we happy this is the way to go?
20:25:43 <shardy> considering when we release it in Havana, we'll probably be stuck maintaining it..
20:25:50 <asalkeld> the problem is when you compose the alarm with template
20:26:08 <asalkeld> you can't reference the policy from the nested stack
20:26:16 <asalkeld> so the url solves that
20:26:45 <shardy> asalkeld: Ok, and I guess because it's just the result of the Ref, it's not actually a template-level incompatibility with cfn
20:27:33 <sdake> funzo had a question for asalkeld about alarms wrt the reworked cloudwatch stuff and openshift
20:27:33 <shardy> If it's been discussed already then fair enough, I just saw it and wanted a sanity-check discussion ;)
20:27:45 <asalkeld> I like the cw as a template, but if you guys are very against that I can make a plugin for the ceilometer CW
20:28:05 <zaneb> I don't hate this idea
20:28:26 <zaneb> scaling policy seems like one of the less important Refs to maintain consistency with
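To illustrate the idea being discussed (a sketch only, not the actual patch in review 41855; the resource names, parameter name, and alarm properties below are assumptions): if Ref on a scaling policy resolves to a signal URL rather than an opaque name, an alarm defined in a different (e.g. nested) template can be pointed at the policy by passing that URL in as a parameter.

    # Parent template fragment (shown as a Python dict for brevity): a scaling
    # policy whose Ref is assumed to resolve to a signal URL.
    parent_resources = {
        "ScaleUpPolicy": {
            "Type": "AWS::AutoScaling::ScalingPolicy",
            "Properties": {
                "AutoScalingGroupName": {"Ref": "WebServerGroup"},
                "AdjustmentType": "ChangeInCapacity",
                "ScalingAdjustment": "1",
            },
        },
    }

    # Nested template fragment: the alarm lives in another template, so it
    # cannot Ref the policy directly; instead it receives the URL via a
    # hypothetical "PolicyURL" parameter and uses it as an alarm action.
    nested_resources = {
        "CPUAlarmHigh": {
            "Type": "OS::Ceilometer::Alarm",  # property names here are assumed
            "Properties": {
                "meter_name": "cpu_util",
                "statistic": "avg",
                "period": "60",
                "evaluation_periods": "1",
                "threshold": "50",
                "comparison_operator": "gt",
                "alarm_actions": [{"Ref": "PolicyURL"}],
            },
        },
    }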
20:29:09 <asalkeld> hopefully we will have a better autoscaling story soon
20:29:22 <shardy> asalkeld: Where I was coming from is just that we want ideally to continue moving in a direction which decouples us from the AWSisms, rather than baking an unholy mixture of AWS-compatible and native resources ;)
20:29:34 <zaneb> yeah, whatever happened to the separate autoscaling api?
20:29:57 <radix> zaneb: gradually getting there :)
20:30:04 <asalkeld> well we need a native autoscaling resource
20:30:19 <radix> I'm also working vaguely around it. hence my work on InstanceGroup and tinkering with ResourceGroup
20:30:20 <asalkeld> cool, radix getting there
20:30:24 <funzo> sdake: is this the forum to discuss what openshift would like to be able to do with autoscaling?
20:30:42 <shardy> asalkeld: Yeah, we need native all-the-things, but those should be clean native implementations, eventually with the AWS-compatible stuff built on top?
20:30:44 <sdake> funzo I think it's appropriate to ask if what you want can be serviced by asalkeld's new work
20:30:46 <radix> given the 7-week freeze I've decided to refocus a bit, hopefully we can get things rolling quickly after unfreeze
20:30:50 <asalkeld> funzo,  chat to radix
20:31:01 <asalkeld> (off line)
20:31:06 <funzo> asalkeld: ok
20:31:07 <zaneb> radix: cool :)
20:31:40 <asalkeld> radix, we are all looking forward to the new autoscaling:)
20:31:44 <radix> yay :)
20:31:44 <sdake> asalkeld the issue is around alarms not around autoscaling
20:31:55 <jasond> regarding the multi-engine bug https://bugs.launchpad.net/heat/+bug/1211276, if anybody has feedback about https://bugs.launchpad.net/heat/+bug/1211276/comments/8 please let me know
20:32:05 <uvirtbot> Launchpad bug 1211276 in heat "can't cancel wedged stack-create" [High,Confirmed]
20:32:20 <radix> zaneb: wanna make sure we have all of your use cases so we don't take it in the wrong direction
20:32:42 <sdake> asalkeld i believe funzo wants to pass a parameter to an alarm
20:32:54 <stevebaker> i put my recollection of the Portland channels discussion in there
20:32:56 <shardy> funzo: want to give us a summary of your issue, or maybe ping a mail to the list where we can discuss it?
20:33:04 <radix> zaneb, asalkeld: we're probably going to be implementing Heat resources for Otter which will give us a better idea of how the native autoscaling API will work
20:33:06 <shardy> better than individual discussions IMO
20:33:19 <funzo> shardy: sure
20:33:38 <radix> there was one other thing... let me think...
20:33:52 <radix> oh, right, InstanceGroup. we need a way to control it from the API
20:34:07 <funzo> shardy: I wrote up a document of what I would like to be able to do from openshift here https://github.com/openshift/openshift-pep/blob/master/openshift-pep-007.md
20:35:08 <radix> has there been any thought on the ability to patch subsections of the template? If we have an external autoscaling service that's controlling an InstanceGroup in Heat, we need to change the "Size" of the InstanceGroup from that autoscaling service
20:35:18 <radix> literally just that one property on that one resource
20:35:35 <shardy> radix: You just do a stack update
20:35:45 <funzo> shardy: radix: it would require being able to invoke a scale-up/scale-down using a tool from within the openshift infrastructure. that call would need to be able to pass parameters to specify user data
20:35:53 <radix> shardy: right but download/change/update can lead to consistency errors
20:35:58 <asalkeld> shardy but then you have to send the whole template
20:36:00 <funzo> I'll follow up with you guys offline
20:36:06 <radix> asalkeld: right exactly.
20:36:27 <asalkeld> radix seems like a good change
20:36:29 <radix> what happens if two actors are doing fetch/change/update, too?
20:36:39 <radix> they can stomp on each other
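A minimal sketch of the fetch/change/update flow radix describes and why it can race, assuming python-heatclient's stacks.get/template/update calls behave roughly as shown (the endpoint, token, stack and resource names are placeholders):

    from heatclient.client import Client

    # Credential handling elided; these values are placeholders.
    heat = Client('1', endpoint='http://heat.example.com:8004/v1/TENANT', token='TOKEN')

    # 1. fetch the whole current template
    template = heat.stacks.template('mystack')

    # 2. change just one property locally
    template['Resources']['mygroup']['Properties']['Size'] = '5'

    # 3. send the whole template back; anything another actor changed
    #    between step 1 and step 3 is silently overwritten
    heat.stacks.update('mystack', template=template)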
20:36:48 <shardy> funzo: thanks for the info, will read and we can have a followup discussion
20:37:28 <radix> in general IMO UpdateStack should be something that users do when they want to change something manually in their stack, but the size of an InstanceGroup has a different feel to it. is that not right?
20:37:38 <radix> I mean, InstanceGroup's "Size" property
20:37:45 <shardy> asalkeld: I thought we'd had this discussion recently, where we decided that allowing per-resource stack creation (or update) was a bad-idea (tm)
20:37:48 <radix> but I can also imagine a PATCH for a stack
20:38:09 <asalkeld> shardy that's just an update
20:38:11 <radix> which would be isomorphic to stack-update
20:38:19 <shardy> Ok, so just update, not a piecemeal create
20:38:24 <radix> right, exactly
20:38:31 <radix> I brought up "resource-create" a while ago and that was shot down
20:38:33 <radix> this is subtly different :)
20:38:38 <zaneb> so it's an update, but Heat does the remixing of the template itself
20:38:49 <zaneb> ?
20:38:53 <radix> zaneb: yes
20:39:11 <radix> imagine "replace *this* element in the template with *this* template snippet"
20:39:15 <shardy> I'm not sure what real advantage it has, other than a tiny simplification to some dict mangling on the user side
20:39:24 <radix> scoping all the way down to individual properties
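Purely as an illustration of the shape radix is describing (no such API exists today; the addressing syntax and keys are invented for this sketch), a PATCH-style request might carry only the path of the element to replace and the replacement snippet:

    # Hypothetical partial-update payload: replace one property of one
    # resource without resending the whole template.
    patch_body = {
        "replace": [
            {
                "path": "Resources/mygroup/Properties/Size",  # invented addressing scheme
                "value": "5",
            }
        ]
    }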
20:39:27 <asalkeld> lots of data
20:39:28 <radix> shardy: well, like I said, consistency
20:39:35 <radix> lots of data too, but consistency moreso
20:40:01 <radix> if you have two actors downloading/changing/updating two different parts of the stack, they can stomp on each other
20:40:10 <shardy> radix: It's much easier to keep a consistent stack definition if you have one template, rather than something in a random state after a bazillion resource updates over time
20:40:13 <zaneb> I don't think anything else in Heat is consistent
20:40:25 <asalkeld> haha
20:40:29 <zaneb> if you kill your nova servers, heat won't know about it
20:40:35 <zaneb> solution: don't do that
20:40:39 <zaneb> sorted.
20:40:49 <shardy> radix: use git for your templates
20:40:55 <andrew_plunk> right now an error is raised if you try to update while a stack update is still in progress, I believe
20:40:57 <radix> right but this use case is intrinsically dynamic
20:41:06 <radix> remember, autoscale service controlling an InstanceGroup
20:41:15 <radix> this isn't something that a user is doing manually
20:41:33 <zaneb> radix: you are talking about read/modify/update... but why read?
20:41:39 <asalkeld> well patch is a well-known/used way of doing an update - don't see the problem with it
20:41:48 <radix> zaneb: how do you know what to send to update?
20:41:48 <zaneb> the AS service knows what outcome it wants, just generate the right template and update
20:41:55 <shardy> radix: but you're not changing the dynamic behaviour, just the definition used to perform it, or the limits
20:42:03 <radix> zaneb: oh, I think I haven't explained it properly
20:42:37 <radix> zaneb: so, look at InstanceGroup right now. all you have to do is change the "Size" Property and it adjusts the underlying nested stack as appropriate
20:42:40 <zaneb> radix: I think you're talking about having all of the state in Heat and none in AS, even though it's AS that's calling Heat
20:42:40 <shardy> radix: maybe do a wiki page, and start a ML thread?
20:42:54 <radix> it's actually already there in the wiki page :)
20:43:18 <zaneb> link?
20:43:25 <radix> https://wiki.openstack.org/wiki/Heat/AutoScaling
20:43:34 <shardy> #link https://wiki.openstack.org/wiki/Heat/AutoScaling
20:43:51 <radix> it doesn't actually propose the specifics how AS controls the InstanceGroup, but it mentions that it needs to be solved
20:44:24 <radix> I can start a mailing list thread
20:45:22 <shardy> radix: I've said this before, but I think we just need a policy resource which is generic and has hooks to call out to (or be signalled by) a policy-calculation service
20:45:49 <shardy> ie something which sits between ceilometer and the InstanceGroup
20:45:59 <radix> ah, actually it says something about some webhook or whatever, but I think that can change :)
20:46:06 * stevebaker has to go
20:46:16 <radix> shardy: hm. I don't think I've ever heard the idea of a "policy resource"
20:46:24 <radix> I have heard the phrase "policy service" but it has never been clear to me
20:46:34 <zaneb> you mean like AWS::AutoScaling::ScalingPolicy?
20:46:52 <asalkeld> well you always need a resource for the template to use
20:46:54 <radix> I understand ScalingPolicy :)
20:47:41 <radix> I *guess* I can imagine the autoscaling service directly doing an UpdateStack on the *nested* stack
20:47:50 <shardy> zaneb: Yeah, but the whole premise of this AS service thing seems to be that folks want to plug something other than that in
20:47:54 <radix> but so far I had been imagining it would just update the "Size" of the InstanceGroup in the *parent* stack
20:48:22 <zaneb> radix: ooooooh. that's crazy ;)
20:48:28 <radix> zaneb: which one?
20:48:35 <zaneb> parent stack
20:48:44 <shardy> I'm still not that clear what this "autoscaling service" will actually do, which ceilometer and heat don't already do
20:48:51 <zaneb> "autoscaling service directly doing an UpdateStack on the *nested* stack" is what I was thinking
20:48:54 <radix> shardy: a couple of things
20:49:13 <asalkeld> shardy cool new stuff
20:49:18 <asalkeld> :)
20:49:28 <radix> shardy: first, it's just a place to plug in more types of policies. scheduling-based ones, integration with other monitoring systems, etc.
20:49:30 <shardy> $shiny_stuff
20:49:32 <asalkeld> like what funzo needs
20:49:44 <radix> shardy: it can also provide an API similar to Amazon Auto Scaling API
20:49:45 <zaneb> shardy: it will be an actual resource that you can reference from anywhere (incl different template), rather than something buried behind a heat plugin
20:50:34 <asalkeld> +1
20:50:43 <shardy> zaneb: Having some new *resource* I understand, a whole new service/project, not so much atm
20:51:09 <zaneb> I don't care whether it's in a new service/project
20:51:31 <zaneb> I think it needs a separate endpoint so we're not precluded from deciding that it does need to be later
20:51:48 <radix> shardy: the *main* reason to make it a separate service is to provide an isolated autoscale API, which indeed is not really that important to a Heat purist's goals. Otherwise all the other functionality (scheduling, webhook support for arbitrary other custom monitoring systems to use) can be built directly into Heat
20:51:56 <radix> zaneb: +1
20:52:02 <shardy> well people keep saying "autoscaling service", hence my question about what that service will actually do
20:52:10 <lifeless> scale
20:52:12 <lifeless> automatically.
20:52:27 <zaneb> shardy: autoscaling API is a better name than service
20:52:35 <sdake> lifeless i think the gap in communication is that heat already does scale automatically
20:52:37 <radix> agreed
20:52:43 <radix> (with zaneb)
20:52:47 <lifeless> sdake: I know, I was being /totally/ unhelpful :)
20:53:03 <radix> even though I've been saying "autoscaling service" :)
20:53:23 <radix> shardy: so imagine how CFN probably gets along with AWS AS. it's basically the same thing
20:53:42 <zaneb> when we have an API we can decide whether it makes sense for it to be a separate service. But we need an API IMO.
20:53:46 <radix> AS can be used separately, but there's also probably some glue in AS to play well with CFN
20:54:14 <shardy> zaneb: Ok, cool, I guess I'd rather just see us make our existing stuff more flexible, e.g. native resources with more generic interfaces, but if an API is something people think we need (rather than controlling via stack update, which can be done right now), then fair enough
20:55:17 <shardy> Ok, lets follow up on the ML, anything else for the last 5mins?
20:55:30 <asalkeld> shardy, I'd guess that if people are using the as api,  then they are not using heat
20:55:38 <radix> shardy: I will try to put together another mailing list post that just lays out the next design issue in a clear way, and put it in the context of the whole expected design
20:55:44 <radix> (specifically, how the separate AS API will talk back to Heat to get it to do stuff)
20:55:47 <asalkeld> but that is still a valid usecase
20:55:49 <shardy> asalkeld: why would they want to do that ;p
20:55:56 <radix> I know, I know, it's terrible :)
20:56:12 <asalkeld> cos not everyone wants to use heat?
20:56:20 <randallburt> blasphemy
20:56:22 <sdake> need to not divide developer resources
20:56:25 <asalkeld> ;)
20:56:25 <shardy> asalkeld: Yeah, joking ;)
20:56:58 <zaneb> more than that, they might want to e.g. use nested stacks and share launch configs or whatever across stacks
20:57:13 <zaneb> and right now they can't, because it has to be in the same stack as the group
20:57:49 <radix> zaneb: hmm, but don't we allow arbitrary inclusion of templates now anyway?
20:57:51 <radix> or will?
20:57:55 <zaneb> we can fix that either by implementing a separate api, or doing the lookup internally and trying not to screw up any of the security stuff
20:58:02 <zaneb> I know what I would vote for
20:58:08 <asalkeld> yeah that sounds confusing
20:58:19 <SpamapS> o/ (sorry late.. had conflicting things)
20:58:31 <shardy> SpamapS: lol, 2mins left ;)
20:58:35 <asalkeld> just in time for the end
20:58:44 <radix> SpamapS: you missed out on so much!
20:58:52 <SpamapS> yeah
20:58:56 <SpamapS> wanted to talk about event table
20:59:00 <SpamapS> but we can wait till next week
20:59:03 <sdake> spamaps I see how it is, heat is #2 in your book :)
20:59:15 <SpamapS> sdake: #2? oh yeah, right.. number _two_
20:59:23 <shardy> Ok, time's up, thanks all
20:59:31 <shardy> #endmeeting