20:00:02 #startmeeting heat
20:00:02 Meeting started Wed Nov 27 20:00:02 2013 UTC and is due to finish in 60 minutes. The chair is stevebaker. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 The meeting name has been set to 'heat'
20:00:12 o/
20:00:12 #topic rollcall
20:00:15 o/
20:00:16 o/
20:00:19 o/
20:00:19 hi
20:00:19 o/
20:00:20 servus
20:00:21 hi
20:00:22 o/
20:00:24 o/
20:00:36 (no actions from last week)
20:00:45 o/
20:01:03 kebray: are you about?
20:01:15 o/
20:01:46 #topic Adding items to the agenda
20:01:48 o/
20:01:50 #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda
20:02:07 plonk anything here and I'll add it
20:02:11 stevebaker: Do we want to discuss v2 API, or keep it on the ML thread?
20:02:41 shardy: ummmm, how about at the end of open discussion if there is time ;)
20:02:51 stevebaker: kk ;)
20:02:53 generic agents and software config
20:03:07 mspreitz: that is already there
20:03:33 great
20:03:37 #topic icehouse-1 release
20:03:47 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
20:03:53 here
20:04:09 The 3rd is i-1 feature freeze!
20:04:16 #link https://launchpad.net/heat/+milestone/icehouse-1
20:04:23 come round quickly!
20:04:25 I'll need to start getting brutal
20:05:16 there are 10 high triaged bugs, 4 with no assignee
20:06:15 some of them may not be high anyway
20:06:25 at least one of those is properly a blueprint
20:06:39 they've all been around forever I think...
20:06:56 tims1: so it looks like this should be closed and 3 new blueprints created? https://blueprints.launchpad.net/heat/+spec/namespace-stack-metadata
20:07:10 And one looks like a nova bug:
20:07:10 stevebaker: yeah
20:07:13 https://bugs.launchpad.net/heat/+bug/1249494
20:07:15 Launchpad bug 1249494 in heat "Using heat with yaml template causes SQL syntax error in nova" [High,Triaged]
20:07:38 cool, sql injection in nova!
20:08:12 -_-
20:08:25 I can't tell if this one is complete https://blueprints.launchpad.net/heat/+spec/oslo-db-support
20:08:32 ouch, that should be marked security-sensitive
20:09:43 I'll mark this implemented, for better or worse https://blueprints.launchpad.net/heat/+spec/native-resource-group
20:10:24 asalkeld: will this make i-1? https://blueprints.launchpad.net/heat/+spec/send-notification
20:10:32 yeah
20:10:40 not much left
20:10:45 ok
20:10:46 zaneb: possibly, or the reporter hasn't sync'd their nova db..
20:11:02 shardy: ah, that's more likely
20:11:25 feeling horrid - I am going to lie down
20:11:32 later
20:11:36 enjoy asalkeld
20:11:42 zaneb: do you want to ask for a decorator-based approach for this? Or should we continue as is for now? https://blueprints.launchpad.net/heat/+spec/resource-support-status
20:12:08 I don't see the decorator thing as incompatible
20:12:21 it builds on the SupportStatus class
20:12:31 i.e. the decorator is what creates the SupportStatus
20:12:37 ok
20:12:53 apparently it wasn't clear from my review that that's what I was driving at
20:13:56 jasond`, zaneb, shardy, where are the multi-engine reviews at?
20:14:07 pass
20:14:32 stevebaker: Not sure, I need to take a proper look again
20:14:36 I'm scared now to touch https://review.openstack.org/#/c/56476/ ;)
20:15:04 stevebaker scared in terms of breaking the code?
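For context on the resource-support-status exchange above: a minimal sketch of the decorator idea zaneb describes, where the decorator simply builds the SupportStatus and attaches it to the resource class, so it composes with (rather than replaces) the class-based approach. The names and attributes below are illustrative assumptions, not Heat's actual code.

    class SupportStatus(object):
        SUPPORTED = 'SUPPORTED'
        DEPRECATED = 'DEPRECATED'

        def __init__(self, status=SUPPORTED, message=None, version=None):
            self.status = status
            self.message = message
            self.version = version


    def support_status(status=SupportStatus.SUPPORTED, message=None, version=None):
        """Class decorator that creates and attaches a SupportStatus."""
        def decorator(resource_cls):
            resource_cls.support_status = SupportStatus(status, message, version)
            return resource_cls
        return decorator


    # Hypothetical usage on a deprecated resource plugin:
    @support_status(status=SupportStatus.DEPRECATED,
                    message='Use OS::Example::NewResource instead')
    class OldResource(object):
        pass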
20:15:19 oh yeah, I think it was just about there
20:15:30 sdake_: Yes, last time we merged and had to revert
20:15:46 stevebaker: i think it's looking good, but i've thought that several times before :/
20:15:50 must have been on the road
20:16:07 may make more sense to bump to i2 then?
20:16:20 give more time for testing/review?
20:16:40 the database patch is ready IMO
20:16:50 sdake_: If it is ready, it's ready
20:16:53 the second patch has a small issue with the test i'm working on today
20:17:04 and the third patch is pretty trivial
20:17:05 pro tip: after the 3rd or 4th patchset in a day I start leaving it for a day or two before I bother to review a new one ;)
20:17:15 i reviewed several in that series and they looked good
20:17:22 This needs another review, it might just make it in https://blueprints.launchpad.net/heat/+spec/heat-build-info
20:17:54 zaneb: i'll try to do fewer updates
20:18:15 vijendar: do you think there will be some action on https://blueprints.launchpad.net/heat/+spec/dbaas-trove-resource in time for i-1?
20:18:33 stevebaker: currently I am working on it
20:18:43 stevebaker: we're just waiting for a review on that one and on https://review.openstack.org/#/c/57782/
20:18:52 will send updated patch in a day or two
20:19:09 anyway, blueprints look in pretty good shape, the number of targeted bugs is quite high - there is an easy way of fixing that though
20:19:20 stevebaker: it seems both would be able to make it in
20:19:24 if you've got some spare time for bugs, then go for it
20:19:38 #topic Software config update
20:20:19 I'll have a POC to post in the next few hours, it would be great if you could provide feedback on the general approach
20:20:43 great
20:21:06 Currently it just gets the metadata onto the server via os-collect-config, but nothing is consuming that data, and the format of that data will change many times as we figure out the right way
20:21:32 but right now I'm more interested in the approach taken to implementing the SoftwareConfig/SoftwareDeployment resources
20:21:46 what is included in this metadata
20:21:57 probably not much more to say on that until the reviews are up
20:22:00 jasond`: are you running unit tests/pep8 locally before uploading? You should almost never be getting legitimate failures in the gate.
20:22:39 lakshmi: currently it is a mostly verbatim dump of the SoftwareDeployment/SoftwareConfig data structures
20:22:41 zaneb: Getting consistent results from the gate is always a bug ;)
20:22:57 lakshmi: but it can be transformed into anything
20:23:03 stevebaker that is a great start for the metadata
20:23:03 stevebaker: do we think there should be generic agents, or should the software config agent cover it all?
20:23:13 shardy: I said *legitimate* failures ;) Neutron failures are sadly unavoidable
20:23:48 mspreitz: I think there will be an agent per CM tool, which transforms the metadata to a CM tool invocation, and collects outputs to signal back to heat
20:24:16 mspreitz: how to get the agent onto the server is an open question - probably golden images or cloud-init cloud-config
20:24:22 I was hoping we would write one generic software config agent, it gets specialized to a CM tool by a hook
20:24:28 stevebaker: per CM tool? Can't we just bootstrap the CM specific tool with os-apply-config or something?
20:25:07 mspreitz: In that case, that is what we are doing. The agent is called os-collect-config, and there is a custom *hook* per CM tool.
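To make the "one hook per CM tool" idea above concrete, here is a rough sketch (not the POC being discussed) of what a per-tool hook run by os-refresh-config might look like once os-collect-config has written the deployment metadata to disk. The metadata path and the 'group'/'name'/'config' keys are assumptions about the SoftwareDeployment dump format, which stevebaker notes will change many times.

    #!/usr/bin/env python
    # Hypothetical per-CM-tool hook invoked by os-refresh-config.
    import json
    import subprocess

    # Assumed location of the collected SoftwareDeployment metadata.
    METADATA_PATH = '/var/lib/heat-config/deployments.json'


    def main():
        with open(METADATA_PATH) as f:
            deployments = json.load(f)

        for deployment in deployments:
            # This hook only handles configs aimed at one CM tool.
            if deployment.get('group') != 'puppet':
                continue
            # Write the config payload out and hand it to the tool; its
            # outputs would later be signalled back to heat.
            manifest = '/var/lib/heat-config/%s.pp' % deployment['name']
            with open(manifest, 'w') as f:
                f.write(deployment['config'])
            subprocess.check_call(['puppet', 'apply', manifest])


    if __name__ == '__main__':
        main()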
20:25:21 stevebaker: Or just configure cloud-init to install e.g. puppet and run the tool, pointed at the metadata
20:25:32 one hook point and custom thing hooked in per CM tool?
20:25:41 zaneb: i guess i have been relying on the gate a little too much (something in tox is always breaking). will try to run the full suite before uploading
20:25:55 I think we will have to figure out the details when looking at at least two CM tools once we see the initial POC from stevebaker
20:26:21 I don't see the point of installing all cm tools on each image, it will just take longer to bootstrap the server. I would prefer one per cm tool
20:26:39 shardy: the hook will probably be an os-refresh-config script generated with an os-apply-config template
20:27:08 stevebaker: Ok cool, so not really an agent per tool then, just a config per tool
20:27:14 andrew_plunk: I think that's what we are talking about
20:27:16 andrew_plunk: you will only install the CM tool (and hook) that you want to use
20:27:33 haha well maybe I should have just said +1 then
20:27:51 any other questions?
20:27:56 yes...
20:28:06 stevebaker is there any particular protocol for the CM agent to return the results to Heat?
20:28:20 Should we be telling trove et al that they should not be going about creating agents to do their config?
20:28:47 lakshmi: first implementation will just use cfn-signal/WaitCondition style signalling
20:29:01 mspreitz: trove's agent does a lot more than config though
20:29:05 stevebaker so the cm tool agent will signal Heat from the vm?
20:29:30 zaneb: more than can be done with os-collect-config?
20:29:37 mspreitz: trove's agent talks to rabbitmq, ours does http polling. They are currently quite different but we're talking about possible collaboration
20:30:07 lakshmi: the cm tool, or the hook. that is still unknown
20:30:17 stevebaker: I've been looking at a native approach to waitcondition handles using trust tokens
20:30:29 which maybe we can use later
20:30:47 savanna is also talking about an agent
20:30:56 shardy: is this different to using a trust to create the user?
20:31:22 stevebaker, so the signal would update the state of the SoftwareDeployment resource and provide outputs so afterwards a get_attr on the resource would work etc?
20:31:24 stevebaker: yes, we can use trusts to create the ec2-keypair (which solves the admin to create stack issue)
20:31:34 Murano is also talking about an agent
20:31:54 but we can also create a trust scoped token and use that directly to authenticate the response for a waitcondition
20:32:00 mspreitz: we are already talking to them all. can we move on from this topic?
20:32:08 sure
20:32:10 stevebaker: It's the curl string containing token approach I mentioned in HK
20:32:36 shardy: would that token expire?
20:32:56 stevebaker: yes, but not for 24 hours, and the max wait condition time is 12
20:33:10 The ansible approach of temporarily installing and turning on zeromq on an instance for config is interesting. They call it fireball mode.
20:33:21 stevebaker: I'm thinking also create an OS::Keystone::Token resource, which creates a new token every stack update
20:33:24 shardy: doesn't that depend on the cloud?
20:33:46 shardy: software config polling would be long-lived. I would worry about a transient outage resulting in dead-duck servers with expired tokens
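As a point of reference for the signalling discussion above, a hedged sketch of the two approaches mentioned: the existing cfn-signal/WaitCondition style POST to a pre-signed URL, and shardy's proposed "curl string containing a token" native signal authenticated with a trust-scoped token. The URLs and payload keys here are illustrative assumptions, not a settled format.

    import json
    import requests

    # 1. CloudFormation-compatible wait condition signal: POST to the
    #    pre-signed handle URL the server received in its metadata.
    presigned_url = 'https://heat.example.com/v1/waitcondition/...'  # placeholder
    requests.post(presigned_url,
                  data=json.dumps({'Status': 'SUCCESS',
                                   'Reason': 'configuration complete',
                                   'UniqueId': 'server-0',
                                   'Data': 'puppet run finished'}),
                  headers={'Content-Type': 'application/json'})

    # 2. Proposed native signal: authenticate with a trust-scoped token baked
    #    into the server metadata, which would need refreshing well before it
    #    expires (hence the concern above about servers left with expired tokens).
    signal_url = 'https://heat.example.com/v1/TENANT/stacks/STACK/resources/WC/signal'
    requests.post(signal_url,
                  data=json.dumps({'status': 'SUCCESS'}),
                  headers={'X-Auth-Token': 'TRUST_SCOPED_TOKEN',
                           'Content-Type': 'application/json'})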
20:33:49 then the agent collecting the config can refresh the token after stack update
20:34:30 zaneb: Yep, but I'm assuming the token expiry will be much greater than typical wait-condition use-cases
20:34:34 shardy: but they can't fetch a new token if they need a non-expired token to fetch it with
20:34:50 stevebaker: They can if you refresh the token much more often than it expires
20:35:09 stevebaker: The alternative is to stick with trust-derived ec2-keypairs for everything
20:35:24 which don't expire, and we can continue to use for the AWS compatible resources
20:35:41 shardy: but if there is an outage for the entire duration, you could still end up with servers with expired tokens after the outage is over
20:35:47 but some folks don't want ec2 keypairs, and the alternative is not ready in keystone yet
20:36:32 stevebaker: Agreed, I'm only advocating this for wait conditions, not for config collection, we'll need a better solution for that
20:36:39 ok
20:36:54 #topic Template catalog use cases
20:37:16 I was hoping kebray would be here, or someone else from Rackspace looking at the template catalog
20:37:44 o/
20:37:45 we seem to be making good progress on the ML anyway
20:38:11 although kebray and randallburt have been running the discussion more recently
20:38:53 if the topic is to put use cases together I would expect to be more prepared for that next week
20:38:58 So it would be great if there could be some formal writeup of all the use cases for a template catalog
20:39:06 snap
20:39:21 agreed
20:39:59 I have recently been practicing writing use cases ;)
20:40:58 It could probably be a separate project, hosted in stackforge, maybe even with its own +2 team. But the design should really happen in collaboration with the heat community so that it has the greatest chance of being accepted into the OpenStack Orchestration program
20:41:12 tims1: What do you mean "running the discussion"? I've not seen much from them on the ML thread?
20:41:34 well I haven't been able to sync with them on this topic since Hong Kong
20:42:06 I have use cases that we started before the summit, they may be outdated
20:42:17 we have been discussing a possible stackforge project
20:42:26 tims1: Ok, cool
20:42:27 and I agree on the design collaboration
20:42:54 They are both out the rest of this week
20:43:00 which is why the silence
20:43:17 I would also agree on a separate +2 team
20:43:27 ok, I won't create a meeting action for randall to write all the use cases then ;)
20:43:29 tims1: np, I know it's holiday time, it's good that we've got some communication going on the ML now :)
20:43:39 yes thanks shardy
20:43:43 haha
20:43:48 by all means assign it to randall
20:44:04 +1
20:44:09 +1
20:44:21 #action Randall to publish use cases for heat template catalog
20:44:25 booom
20:44:28 lol
20:44:28 haha
20:44:29 ahaha
20:44:30 ;)
20:44:40 he just sensed a disturbance in the force
20:44:51 his ears are burning
20:45:06 #topic Open discussion
20:45:30 shardy: so when you say v2 API, do you mean the changes required to make a management API possible?
20:45:55 stevebaker: Partly, and to align with what other projects are doing wrt project/tenant id
20:46:07 I was under the impression that would mostly remote the tenant ID from the URL, right?
20:46:20 s/remote/remove/
20:46:32 Yup, so remove the tenant ID from the URL, and from all the request bodies, use the project_id in the context instead
20:46:49 that is the main change, but I spotted a few other cleanups we could consider
20:47:32 Also, if we do it, we have to decide if we go pecan/wsme, or reuse what we have
20:48:22 shardy: regarding https://blueprints.launchpad.net/heat/+spec/management-api can you think of a way of progressing with a management api which doesn't yet require service-scoped token or request-scoping-policy? An approach which can be used with what we have now, and then move to the new keystone features as soon as they are available?
20:48:49 stevebaker: Nothing which isn't a horrible hack, no
20:49:45 stevebaker: You could add the management API before the tenant_id in the path I suppose, but we nacked that for build_info
20:50:16 stevebaker: the service scoped token stuff does seem to be making some progress, but I'm not sure when it will be ready
20:50:21 shardy: see, I think that could make sense for the management api
20:50:36 shardy: same, this is not like build_info
20:50:39 shardy: it's allowed to have its own endpoint in the catalog, for starters
20:51:34 Ok, then we still need an unscoped request, that could be an unscoped token, but I don't think you can have a role associated with that
20:51:58 would we even put management api stuff in python-heatclient? or would we create python-heat-manageclient?
20:52:29 zaneb: I'm pretty sure novaclient has admin commands
20:52:33 ok
20:52:35 We *could* just say anyone with a special heat_service_admin role gets super super powers to access the management-api features, but that seems wrong, if we're servicing scoped requests
20:52:43 zaneb: it would make sense to add it there, since once we move to the new keystone, it would be there too, right?
20:53:11 zaneb: at least it would remain consistent
20:53:54 shardy: so it sounds like there might be an interim solution which isn't too hacky? It would be nice to unblock rackspace
20:54:44 stevebaker: Yeah, we could just use rbac and a new endpoint, and short-circuit the tenant-scoping for those requests
20:54:59 Makes me slightly uneasy, but I guess it would work
20:55:24 +1 for rbac and new endpoint in the interim
20:55:27 shardy: +1
20:55:41 shardy: OK, thanks. Could you write that up to the ML?
20:55:49 what could go wrong? ;)
20:55:52 shardy: agreed
20:56:03 zaneb: :P
20:56:35 Ok, will do, and if the keystone devs don't shoot the idea to pieces, I'll start on a patch
20:56:56 hey, they don't have -2 on our project ;)
20:56:57 shoot something to death? I'm here to help! --keystone dev >.>
20:57:07 * morganfainberg goes back to lurking
20:57:12 morganfainberg: haha ;)
20:57:34 stevebaker, you could give me -2 on your project just for this one case you know ;)
20:57:42 3 minutes, anything else?
20:58:43 #endmeeting
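As a rough illustration of the v2 and interim management-API ideas from open discussion above (not an agreed design), the sketch below contrasts tenant-in-the-URL v1 scoping, context-derived v2 scoping, and an RBAC-guarded management call that short-circuits tenant scoping. All names, including the heat_admin role, are assumptions for illustration.

    # v1 style:  GET /v1/{tenant_id}/stacks   (tenant repeated in the URL)
    # v2 style:  GET /v2/stacks               (project taken from the token)

    class Context(object):
        """Stand-in for the request context built by auth_token middleware."""
        def __init__(self, project_id, roles):
            self.project_id = project_id
            self.roles = roles


    def list_stacks_v1(context, path_tenant_id, stacks):
        # The tenant in the path must match the token's project.
        if path_tenant_id != context.project_id:
            raise Exception('URL tenant does not match token project')
        return [s for s in stacks if s['project'] == path_tenant_id]


    def list_stacks_v2(context, stacks):
        # No tenant in the URL or request body; scope purely from the context.
        return [s for s in stacks if s['project'] == context.project_id]


    def list_stacks_management(context, stacks):
        # Interim management API idea: a separate endpoint guarded by RBAC
        # that skips tenant scoping entirely.
        if 'heat_admin' not in context.roles:
            raise Exception('policy forbids global stack listing')
        return stacks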