20:00:02 <stevebaker> #startmeeting heat
20:00:02 <openstack> Meeting started Wed Nov 27 20:00:02 2013 UTC and is due to finish in 60 minutes. The chair is stevebaker. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 <openstack> The meeting name has been set to 'heat'
20:00:12 <bgorski_> o/
20:00:12 <stevebaker> #topic rollcall
20:00:15 <jasond`> o/
20:00:16 <shardy> o/
20:00:19 <skraynev> o/
20:00:19 <tspatzier> hi
20:00:19 <asalkeld> o/
20:00:20 <zaneb> servus
20:00:21 <vijendar> hi
20:00:22 <sdake> o/
20:00:24 <jpeeler> o/
20:00:36 <stevebaker> (no actions from last week)
20:00:45 <mspreitz> o/
20:01:03 <stevebaker> kebray: are you about?
20:01:15 <lakshmi> o/
20:01:46 <stevebaker> #topic Adding items to the agenda
20:01:48 <tims1> o/
20:01:50 <stevebaker> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda
20:02:07 <stevebaker> plonk anything here and I'll add it
20:02:11 <shardy> stevebaker: Do we want to discuss the v2 API, or keep it on the ML thread?
20:02:41 <stevebaker> shardy: ummmm, how about at the end of open discussion if there is time ;)
20:02:51 <shardy> stevebaker: kk ;)
20:02:53 <mspreitz> generic agents and software config
20:03:07 <stevebaker> mspreitz: that is already there
20:03:33 <mspreitz> great
20:03:37 <stevebaker> #topic icehouse-1 release
20:03:47 <stevebaker> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
20:03:53 <radix> here
20:04:09 <stevebaker> The 3rd is the i-1 feature freeze!
20:04:16 <stevebaker> #link https://launchpad.net/heat/+milestone/icehouse-1
20:04:23 <shardy> that's come round quickly!
20:04:25 <stevebaker> I'll need to start getting brutal
20:05:16 <stevebaker> there are 10 high triaged bugs, 4 with no assignee
20:06:15 <stevebaker> some of them may not be high anyway
20:06:25 <zaneb> at least one of those is properly a blueprint
20:06:39 <zaneb> they've all been around forever I think...
20:06:56 <stevebaker> tims1: so it looks like this should be closed and 3 new blueprints created? https://blueprints.launchpad.net/heat/+spec/namespace-stack-metadata
20:07:10 <shardy> And one looks like a nova bug:
20:07:10 <tims1> stevebaker: yeah
20:07:13 <shardy> https://bugs.launchpad.net/heat/+bug/1249494
20:07:15 <uvirtbot> Launchpad bug 1249494 in heat "Using heat with yaml template causes SQL syntax error in nova" [High,Triaged]
20:07:38 <stevebaker> cool, SQL injection in nova!
20:08:12 <radix> -_-
20:08:25 <stevebaker> I can't tell if this one is complete https://blueprints.launchpad.net/heat/+spec/oslo-db-support
20:08:32 <zaneb> ouch, that should be marked security-sensitive
20:09:43 <stevebaker> I'll mark this implemented, for better or worse https://blueprints.launchpad.net/heat/+spec/native-resource-group
20:10:24 <stevebaker> asalkeld: will this make i-1? https://blueprints.launchpad.net/heat/+spec/send-notification
20:10:32 <asalkeld> yeah
20:10:40 <asalkeld> not much left
20:10:45 <stevebaker> ok
20:10:46 <shardy> zaneb: possibly, or the reporter hasn't sync'd their nova db..
20:11:02 <zaneb> shardy: ah, that's more likely
20:11:25 <asalkeld> feeling horrid - I am going to lie down
20:11:32 <asalkeld> later
20:11:36 <sdake_> enjoy asalkeld
20:11:42 <stevebaker> zaneb: do you want to ask for a decorator-based approach for this? Or should we continue as is for now? https://blueprints.launchpad.net/heat/+spec/resource-support-status
20:12:08 <zaneb> I don't see the decorator thing as incompatible
20:12:21 <zaneb> it builds on the SupportStatus class
20:12:31 <zaneb> i.e. the decorator is what creates the SupportStatus
20:12:37 <stevebaker> ok
20:12:53 <zaneb> apparently it wasn't clear from my review that that's what I was driving at
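A rough sketch of the decorator idea zaneb describes above, where the decorator is what creates the SupportStatus attached to a resource class. The field names, decorator signature, and example resource are illustrative assumptions, not the merged Heat API:

    class SupportStatus(object):
        SUPPORTED = 'SUPPORTED'
        DEPRECATED = 'DEPRECATED'

        def __init__(self, status=SUPPORTED, message=None, version=None):
            self.status = status
            self.message = message
            self.version = version


    def support_status(status=SupportStatus.SUPPORTED, message=None,
                       version=None):
        """Class decorator that attaches a SupportStatus to a resource."""
        def decorator(resource_class):
            resource_class.support_status = SupportStatus(
                status, message, version)
            return resource_class
        return decorator


    # Hypothetical usage: mark a resource plugin as deprecated.
    @support_status(status=SupportStatus.DEPRECATED,
                    message='Use the native resource instead',
                    version='2013.2')
    class MyOldResource(object):
        pass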
20:13:56 <stevebaker> jasond`, zaneb, shardy, where are the multi-engine reviews at?
20:14:07 <zaneb> pass
20:14:32 <shardy> stevebaker: Not sure, I need to take a proper look again
20:14:36 <stevebaker> I'm scared now to touch https://review.openstack.org/#/c/56476/ ;)
20:15:04 <sdake_> stevebaker scared in terms of breaking the code?
20:15:19 <zaneb> oh yeah, I think it was just about there
20:15:30 <shardy> sdake_: Yes, last time we merged and had to revert
20:15:46 <jasond`> stevebaker: i think it's looking good, but i've thought that several times before :/
20:15:50 <sdake_> must have been on the road
20:16:07 <sdake_> may make more sense to bump to i2 then?
20:16:20 <sdake_> give more time for testing/review?
20:16:40 <jasond`> the database patch is ready IMO
20:16:50 <stevebaker> sdake_: If it is ready, it's ready
20:16:53 <jasond`> the second patch has a small issue with the test i'm working on today
20:17:04 <jasond`> and the third patch is pretty trivial
20:17:05 <zaneb> pro tip: after the 3rd or 4th patchset in a day I start leaving it for a day or two before I bother to review a new one ;)
20:17:15 <sdake_> i reviewed several in that series and they looked good
20:17:22 <stevebaker> This needs another review, it might just make it in https://blueprints.launchpad.net/heat/+spec/heat-build-info
20:17:54 <jasond`> zaneb: i'll try to do fewer updates
20:18:15 <stevebaker> vijendar: do you think there will be some action on https://blueprints.launchpad.net/heat/+spec/dbaas-trove-resource in time for i-1?
20:18:33 <vijendar> stevebaker: currently I am working on it
20:18:43 <andersonvom> stevebaker: we're just waiting for a review on that one and on https://review.openstack.org/#/c/57782/
20:18:52 <vijendar> will send an updated patch in a day or two
20:19:09 <stevebaker> anyway, blueprints look in pretty good shape; the number of targeted bugs is quite high - there is an easy way of fixing that though
20:19:20 <andersonvom> stevebaker: it seems both would be able to make it in
20:19:24 <stevebaker> if you've got some spare time for bugs, then go for it
20:19:38 <stevebaker> #topic Software config update
20:20:19 <stevebaker> I'll have a POC to post in the next few hours, it would be great if you could provide feedback on the general approach
20:20:43 <mspreitz> great
20:21:06 <stevebaker> Currently it just gets the metadata onto the server via os-collect-config, but nothing is consuming that data, and the format of that data will change many times as we figure out the right way
20:21:32 <stevebaker> but right now I'm more interested in the approach taken to implementing the SoftwareConfig/SoftwareDeployment resources
20:21:46 <lakshmi> what is included in this metadata?
20:21:57 <stevebaker> probably not much more to say on that until the reviews are up
20:22:00 <zaneb> jasond`: are you running unit tests/pep8 locally before uploading? You should almost never be getting legitimate failures in the gate.
20:22:39 <stevebaker> lakshmi: currently it is a mostly verbatim dump of the SoftwareDeployment/SoftwareConfig data structures
20:22:41 <shardy> zaneb: Getting consistent results from the gate is always a bug ;)
20:22:57 <stevebaker> lakshmi: but it can be transformed into anything
20:23:03 <lakshmi> stevebaker: that is a great start for the metadata
20:23:03 <mspreitz> stevebaker: do we think there should be generic agents, or should the software config agent cover it all?
20:23:13 <zaneb> shardy: I said *legitimate* failures ;) Neutron failures are sadly unavoidable
20:23:48 <stevebaker> mspreitz: I think there will be an agent per CM tool, which transforms the metadata to a CM tool invocation, and collects outputs to signal back to heat
20:24:16 <stevebaker> mspreitz: how to get the agent onto the server is an open question - probably golden images or cloud-init cloud-config
20:24:22 <mspreitz> I was hoping we would write one generic software config agent, which gets specialized to a CM tool by a hook
20:24:28 <shardy> stevebaker: per CM tool? Can't we just bootstrap the CM-specific tool with os-apply-config or something?
20:25:07 <stevebaker> mspreitz: In that case, that is what we are doing. The agent is called os-collect-config, and there is a custom *hook* per CM tool.
20:25:21 <shardy> stevebaker: Or just configure cloud-init to install e.g. puppet and run the tool, pointed at the metadata
20:25:32 <mspreitz> one hook point and a custom thing hooked in per CM tool?
20:25:41 <jasond`> zaneb: i guess i have been relying on the gate a little too much (something in tox is always breaking). will try to run the full suite before uploading
20:25:55 <tspatzier> I think we will have to figure out the details when looking at at least two CM tools once we see the initial POC from stevebaker
20:26:21 <andrew_plunk> I don't see the point of installing all CM tools on each image, it will just take longer to bootstrap the server. I would prefer one per CM tool
20:26:39 <stevebaker> shardy: the hook will probably be an os-refresh-config script generated with an os-apply-config template
20:27:08 <shardy> stevebaker: Ok cool, so not really an agent per tool then, just a config per tool
20:27:14 <mspreitz> andrew_plunk: I think that's what we are talking about
20:27:16 <stevebaker> andrew_plunk: you will only install the CM tool (and hook) that you want to use
20:27:33 <andrew_plunk> haha well maybe I should have just said +1 then
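A minimal sketch of the per-CM-tool hook being discussed: a script that os-refresh-config might run, which reads the metadata os-collect-config has written to disk and turns it into a CM tool invocation. The file path, metadata keys, and choice of puppet are all assumptions for illustration, not the POC's actual format:

    #!/usr/bin/env python
    import json
    import subprocess

    # Assumed location of the collected SoftwareDeployment metadata.
    COLLECTED = '/var/lib/os-collect-config/deployments.json'


    def main():
        with open(COLLECTED) as f:
            deployments = json.load(f)

        for deployment in deployments:
            # This hook handles only one CM tool; others are skipped.
            if deployment.get('group') != 'puppet':
                continue
            # Write the config payload out and hand it to the tool.
            with open('/tmp/deployment.pp', 'w') as f:
                f.write(deployment['config'])
            subprocess.check_call(['puppet', 'apply', '/tmp/deployment.pp'])


    if __name__ == '__main__':
        main()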
20:27:51 <stevebaker> any other questions?
20:27:56 <mspreitz> yes...
20:28:06 <lakshmi> stevebaker: is there any particular protocol for the CM agent to return the results to Heat?
20:28:20 <mspreitz> Should we be telling trove et al that they should not be going about creating agents to do their config?
20:28:47 <stevebaker> lakshmi: first implementation will just use cfn-signal/WaitCondition style signalling
20:29:01 <zaneb> mspreitz: trove's agent does a lot more than config though
20:29:05 <lakshmi> stevebaker: so the CM tool agent will signal Heat from the vm?
20:29:30 <mspreitz> zaneb: more than can be done with os-collect-config?
20:29:37 <stevebaker> mspreitz: trove's agent talks to rabbitmq, ours does http polling. They are currently quite different but we're talking about possible collaboration
20:30:07 <stevebaker> lakshmi: the CM tool, or the hook. that is still unknown
20:30:17 <shardy> stevebaker: I've been looking at a native approach to waitcondition handles using trust tokens
20:30:29 <shardy> which maybe we can use later
20:30:47 <mspreitz> savanna is also talking about an agent
20:30:56 <stevebaker> shardy: is this different to using a trust to create the user?
20:31:22 <tspatzier> stevebaker, so the signal would update the state of the SoftwareDeployment resource and provide outputs, so afterwards a get_attr on the resource would work, etc.?
20:31:24 <shardy> stevebaker: yes, we can use trusts to create the ec2-keypair (which solves the admin-to-create-stack issue)
20:31:34 <mspreitz> Murano is also talking about an agent
20:31:54 <shardy> but we can also create a trust-scoped token and use that directly to authenticate the response for a waitcondition
20:32:00 <stevebaker> mspreitz: we are already talking to them all. can we move on from this topic?
20:32:08 <mspreitz> sure
20:32:10 <shardy> stevebaker: It's the curl string containing the token approach I mentioned in HK
20:32:36 <stevebaker> shardy: would that token expire?
20:32:56 <shardy> stevebaker: yes, but not for 24 hours, and the max wait condition time is 12
20:33:10 <andrew_plunk> The Ansible approach of temporarily installing and turning on zeromq on an instance for config is interesting. They call it fireball mode.
20:33:21 <shardy> stevebaker: I'm thinking also create an OS::Keystone::Token resource, which creates a new token every stack update
20:33:24 <zaneb> shardy: doesn't that depend on the cloud?
20:33:46 <stevebaker> shardy: software config polling would be long-lived. I would worry about a transient outage resulting in dead-duck servers with expired tokens
20:33:49 <shardy> then the agent collecting the config can refresh the token after a stack update
20:34:30 <shardy> zaneb: Yep, but I'm assuming the token expiry will be much greater than typical wait-condition use-cases
20:34:34 <stevebaker> shardy: but they can't fetch a new token if they need a non-expired token to fetch it with
20:34:50 <shardy> stevebaker: They can if you refresh the token much more often than it expires
20:35:09 <shardy> stevebaker: The alternative is to stick with trust-derived ec2-keypairs for everything
20:35:24 <shardy> which don't expire, and we can continue to use for the AWS-compatible resources
20:35:41 <stevebaker> shardy: but if there is an outage for the entire duration, you could still end up with servers with expired tokens after the outage is over
20:35:47 <shardy> but some folks don't want ec2 keypairs, and the alternative is not ready in keystone yet
20:36:32 <shardy> stevebaker: Agreed, I'm only advocating this for wait conditions, not for config collection; we'll need a better solution for that
20:36:39 <stevebaker> ok
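A sketch of the "curl string containing the token" signalling approach shardy describes: the instance posts its result to a wait condition URL, authenticating with a pre-issued trust-scoped token delivered to the server (e.g. via user-data). The URL, token, and body fields are placeholders in the cfn-signal/WaitCondition style mentioned above:

    import json
    import requests

    # Placeholders: in practice both would be generated by Heat and
    # baked into the instance's user-data or metadata.
    SIGNAL_URL = 'http://heat.example.com:8004/v1/waitcondition/example'
    TOKEN = 'trust-scoped-token'

    body = json.dumps({'Status': 'SUCCESS',
                       'Reason': 'configuration complete',
                       'UniqueId': 'server-0',
                       'Data': 'application deployed'})

    # Authenticate the response with the trust-scoped token rather than
    # an ec2 keypair; no further keystone round-trip is needed.
    resp = requests.post(SIGNAL_URL, data=body,
                         headers={'X-Auth-Token': TOKEN,
                                  'Content-Type': 'application/json'})
    resp.raise_for_status()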
20:36:54 <stevebaker> #topic Template catalog use cases
20:37:16 <stevebaker> I was hoping kebray would be here, or someone else from Rackspace looking at the template catalog
20:37:44 <tims1> o/
20:37:45 <zaneb> we seem to be making good progress on the ML anyway
20:38:11 <tims1> although kebray and randallburt have been running the discussion more recently
20:38:53 <tims1> if the topic is to put use cases together I would expect to be more prepared for that next week
20:38:58 <stevebaker> So it would be great if there could be some formal writeup of all the use cases for a template catalog
20:39:06 <stevebaker> snap
20:39:21 <tims1> agreed
20:39:59 <tims1> I have recently been practicing writing use cases ;)
20:40:58 <stevebaker> It could probably be a separate project, hosted on stackforge, maybe even with its own +2 team. But the design should really happen in collaboration with the heat community so that it has the greatest chance of being accepted into the OpenStack Orchestration program
20:41:12 <shardy> tims1: What do you mean "running the discussion"? I've not seen much from them on the ML thread?
20:41:34 <tims1> well I haven't been able to sync with them on this topic since Hong Kong
20:42:06 <tims1> I have use cases that we started before the summit, they may be outdated
20:42:17 <tims1> we have been discussing a possible stackforge project
20:42:26 <shardy> tims1: Ok, cool
20:42:27 <tims1> and I agree on the design collaboration
20:42:54 <tims1> They are both out for the rest of this week
20:43:00 <tims1> which is why the silence
20:43:17 <tims1> I would also agree on a separate +2 team
20:43:27 <stevebaker> ok, I won't create a meeting action for randall to write all the use cases then ;)
20:43:29 <shardy> tims1: np, I know it's holiday time, it's good that we've got some communication going on the ML now :)
20:43:39 <tims1> yes, thanks shardy
20:43:43 <tims1> haha
20:43:48 <tims1> by all means assign it to randall
20:44:04 <andrew_plunk> +1
20:44:09 <andersonvom> +1
20:44:21 <stevebaker> #action Randall to publish use cases for heat template catalog
20:44:25 <stevebaker> booom
20:44:28 <shardy> lol
20:44:28 <tims1> haha
20:44:29 <andrew_plunk> ahaha
20:44:30 <andersonvom> ;)
20:44:40 <tims1> he just sensed a disturbance in the force
20:44:51 <stevebaker> his ears are burning
20:45:06 <stevebaker> #topic Open discussion
20:45:30 <stevebaker> shardy: so when you say v2 API, do you mean the changes required to make a management API possible?
20:45:55 <shardy> stevebaker: Partly, and to align with what other projects are doing wrt project/tenant id
20:46:07 <andersonvom> I was under the impression that would mostly remote the tenant ID from the URL, right?
20:46:20 <andersonvom> s/remote/remove/
20:46:32 <shardy> Yup, so remove the tenant ID from the URL, and from all the request bodies, and use the project_id in the context instead
20:46:49 <shardy> that is the main change, but I spotted a few other cleanups we could consider
20:47:32 <shardy> Also, if we do it, we have to decide if we go pecan/wsme, or reuse what we have
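A toy illustration of the main v2 change shardy describes: the tenant ID moves out of the URL path, and the handler takes the project from the authenticated request context instead. Everything here is an illustrative assumption, not Heat's actual routing code:

    class Context(object):
        """Stand-in for the keystone-derived request context."""
        def __init__(self, project_id):
            self.project_id = project_id


    def stacks_index_v1(context, tenant_id):
        # v1: GET /v1/{tenant_id}/stacks -- the scope is repeated in the path
        if tenant_id != context.project_id:
            raise ValueError('URL tenant does not match token scope')
        return 'stacks for ' + tenant_id


    def stacks_index_v2(context):
        # v2: GET /v2/stacks -- the scope comes solely from the context
        return 'stacks for ' + context.project_id


    ctx = Context('a1b2c3d4')
    assert stacks_index_v1(ctx, 'a1b2c3d4') == stacks_index_v2(ctx)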
20:48:22 <stevebaker> shardy: regarding https://blueprints.launchpad.net/heat/+spec/management-api can you think of a way of progressing with a management api which doesn't yet require service-scoped tokens or request-scoping-policy? An approach which can be used with what we have now, and then move to the new keystone features as soon as they are available?
20:48:49 <shardy> stevebaker: Nothing which isn't a horrible hack, no
20:49:45 <shardy> stevebaker: You could add the management API before the tenant_id in the path I suppose, but we nacked that for build_info
20:50:16 <shardy> stevebaker: the service-scoped token stuff does seem to be making some progress, but I'm not sure when it will be ready
20:50:21 <zaneb> shardy: see, I think that could make sense for the management api
20:50:36 <stevebaker> shardy: same, this is not like build_info
20:50:39 <zaneb> shardy: it's allowed to have its own endpoint in the catalog, for starters
20:51:34 <shardy> Ok, then we still need an unscoped request, that could be an unscoped token, but I don't think you can have a role associated with that
20:51:58 <zaneb> would we even put management api stuff in python-heatclient? or would we create python-heat-manageclient?
20:52:29 <stevebaker> zaneb: I'm pretty sure novaclient has admin commands
20:52:33 <zaneb> ok
20:52:35 <shardy> We *could* just say anyone with a special heat_service_admin role gets super super powers to access the management-api features, but that seems wrong if we're servicing scoped requests
20:52:43 <andersonvom> zaneb: it would make sense to add it there, since once we move to the new keystone, it would be there too, right?
20:53:11 <andersonvom> zaneb: at least it would remain consistent
20:53:54 <stevebaker> shardy: so it sounds like there might be an interim solution which isn't too hacky? It would be nice to unblock rackspace
20:54:44 <shardy> stevebaker: Yeah, we could just use rbac and a new endpoint, and short-circuit the tenant-scoping for those requests
20:54:59 <shardy> Makes me slightly uneasy, but I guess it would work
20:55:24 <tims1> +1 for rbac and a new endpoint in the interim
20:55:27 <zaneb> shardy: +1
20:55:41 <stevebaker> shardy: OK, thanks. Could you write that up to the ML?
20:55:49 <zaneb> what could go wrong? ;)
20:55:52 <andersonvom> shardy: agreed
20:56:03 <andersonvom> zaneb: :P
20:56:35 <shardy> Ok, will do, and if the keystone devs don't shoot the idea to pieces, I'll start on a patch
20:56:56 <stevebaker> hey, they don't have -2 on our project ;)
20:56:57 <morganfainberg> shoot something to death? I'm here to help! --keystone dev >.>
20:57:07 * morganfainberg goes back to lurking
20:57:12 <shardy> morganfainberg: haha ;)
20:57:34 <morganfainberg> stevebaker, you could give me -2 on your project just for this one case you know ;)
20:57:42 <stevebaker> 3 minutes, anything else?
20:58:43 <stevebaker> #endmeeting
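A minimal sketch of the interim management-api approach agreed in open discussion above: a separate endpoint whose requests are authorized by RBAC, short-circuiting tenant scoping for callers holding a dedicated role. The role name heat_service_admin comes from the discussion itself; the shape of the enforcement is an illustrative assumption:

    class Context(object):
        """Stand-in for the authenticated request context."""
        def __init__(self, roles, project_id=None):
            self.roles = roles
            self.project_id = project_id


    def enforce_management_api(context):
        # RBAC check on the new management endpoint: a dedicated role is
        # required, and tenant scoping is deliberately not applied.
        if 'heat_service_admin' not in context.roles:
            raise Exception('Forbidden: heat_service_admin role required')


    # An operator token carrying the role passes regardless of project scope.
    enforce_management_api(Context(roles=['heat_service_admin']))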