20:00:16 #startmeeting heat
20:00:18 Meeting started Wed Jun 5 20:00:16 2013 UTC. The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:21 The meeting name has been set to 'heat'
20:00:27 #topic rollcall
20:00:32 o/
20:00:36 hello all
20:00:38 hi
20:00:39 hello
20:00:40 heya
20:00:41 hello
20:00:43 hi all
20:01:09 howdy
20:01:19 hi
20:01:29 hi
20:01:31 hi
20:01:35 hi
20:02:13 hi all, let's get started :)
20:02:21 #topic Review last week's actions
20:02:31 I actually don't think there are any:
20:02:44 #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-05-29-20.00.html
20:03:04 Did anyone have anything from last meeting they wanted to mention?
20:03:11 nope
20:03:27 ok
20:03:33 #topic h2 blueprint status
20:03:50 So thanks to zaneb for attending the release/status meeting yesterday
20:03:55 np
20:04:03 there were a couple of queries re h2 bps
20:04:16 luckily it was the one just after h1, so not much to report :)
20:04:39 need an assignee for stack-metadata; randallburt, or anyone, interested in picking that up for h2?
20:04:53 #link https://blueprints.launchpad.net/heat/+spec/stack-metadata
20:04:57 I started some work on https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L574 will probably have many questions for irc tomorrow ;) but I can switch gears with little issue
20:05:00 I don't like the idea of it
20:05:22 asalkeld: ?
20:05:24 see comment
20:05:26 asalkeld: I saw your whiteboard comment, can you expand?
20:05:45 well, rather make a static resource
20:05:48 and link to it
20:05:54 late, but present.
20:05:57 via a reference
20:06:05 asalkeld: rather than use a pseudo-parameter you mean?
20:06:09 yea
20:06:11 asalkeld: so a Metadata-only resource?
20:06:16 sounds like a hack
20:06:23 asalkeld: are we asking the template author to wire these things up explicitly?
20:06:35 not sure
20:06:41 probably
20:06:49 asalkeld: so, the idea is that you have your "provider" template for, say, an Instance
20:06:56 if a resource needs/uses metadata, then the user passes that in via a parameter today, yes?
20:06:58 and the Instance has metadata
20:07:16 and you have to pass that through somehow to the _actual_ Instance inside the provider template
20:07:20 so use a load config
20:07:58 asalkeld: but not all provider templates will be providers for Instances
20:08:01 that way it is _more_ composable
20:08:11 You mean launch config?
20:08:15 ya
20:08:27 or an openstack equivalent
20:08:29 I thought the idea was to make the template look and feel just like any other resource; sounds like we need a little more discussion here before proceeding? maybe on the ML?
20:08:43 but this is about Metadata for e.g. nested stacks, which need not necessarily map to an instance config?
20:08:49 asalkeld: a launch config _outside_ of the provider template?
20:08:58 ya
20:09:10 could be a separate file
20:09:33 so almost no need for the nested stack
20:09:37 asalkeld: I think that's a good way for users to implement it once we actually have that resource
20:09:44 If I am understanding the blueprint properly, I think it's asking the rhetorical question: if resources can have attributes (referred to here as metadata) then so should stacks, so stacks can masquerade as resources, as needed. Do I understand it correctly?
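(Editor's note: a minimal sketch of the feature under discussion, for illustration only. An outer stack attaches Metadata to a provider-backed resource, and the provider template reads it back via a pseudo-parameter. The type name My::Server and the pseudo-parameter name OS::stack_metadata are invented here; the blueprint had not settled the syntax.)

    # Outer template: the provider-backed resource carries Metadata
    # just like any built-in resource would.
    HeatTemplateFormatVersion: '2012-12-12'
    Resources:
      AppServer:
        Type: My::Server                # invented type, mapped to a provider template
        Metadata:
          role: webserver
        Properties:
          ImageId: F17-x86_64-cfntools

    # Provider template (a separate file): a hypothetical pseudo-parameter
    # hands the resource's Metadata through to the real Instance inside.
    HeatTemplateFormatVersion: '2012-12-12'
    Parameters:
      ImageId: {Type: String}           # properties already map to parameters today
    Resources:
      TheRealServer:
        Type: AWS::EC2::Instance
        Metadata: {Ref: 'OS::stack_metadata'}   # invented pseudo-parameter name
        Properties:
          ImageId: {Ref: ImageId}
          InstanceType: m1.small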
20:10:08 asalkeld: but it doesn't solve the problem that resources have metadata and providers, without this feature, effectively can't get at it
20:10:30 adrian_otto: correct
20:10:50 ok, then I support the addition of the feature.
20:10:52 why can't nested stacks just have metadata?
20:11:02 (not as a parameter)
20:11:15 asalkeld: they can, the point is giving people a way to access it from inside the provider template
20:11:41 can't you?
20:12:02 ok, well I think it needs some more thought
20:12:11 doesn't have to be here
20:12:11 asalkeld: in code, yes, but not using the template syntax
20:12:19 yeah, let's punt this to the mailing list
20:12:21 (just seems ugly)
20:12:26 zaneb: agreed
20:12:27 can you give an example for metadata, I mean in contrast to properties?
20:12:44 I meant parameters ...
20:12:59 shardy: should I switch focus to this issue/bp then or keep going with attribute schema for now?
20:13:06 doesn't AWS support this? so seems we should too… http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-metadata.html
20:13:08 tspatzier: look at how SpamapS is using it
20:13:30 lo
20:13:40 randallburt: no, stick with what you're working on, we just need someone to take ownership of this sometime soon as it's a high prio BP targeted at h2
20:13:53 k
20:13:55 zaneb: you mean in the openstack-ops repo?
20:14:00 alexheneveld: no, we're talking about accessing it within provider templates, which aws doesn't have
20:14:01 Anyone going to take an action to start the ML discussion?
20:14:09 tspatzier: yes
20:14:27 zaneb: thx
20:14:35 chirp chirp
20:14:35 *cough*asalkeld*cough*
20:14:46 you ok zaneb
20:14:48 ?
20:15:01 sure I can email
20:15:01 #action asalkeld/zaneb to start ML discussion re stack metadata ;)
20:15:21 next item is:
20:15:26 #link https://blueprints.launchpad.net/heat/+spec/discover-catalog-resources
20:15:41 SpamapS: is this going to happen for h2, or should we bump it to h3?
20:16:18 is SpamapS here? he hasn't said anything
20:16:19 horizon are working on similar stuff here, we should see what they are doing
20:16:35 Ok, we'll leave that and discuss it another time
20:16:42 #topic blueprint/bug process reminder
20:17:09 well, they already hide features if an endpoint isn't available. they're now working on introspecting api features for more fine-grained ui changes
20:17:26 This is just a reminder: can everyone please make sure, if they're working on features, that they have an associated blueprint, and that it is targeted to whenever the feature is likely to land
20:17:39 and likewise with bugs
20:18:27 I know some people don't like the process, but if we all just follow the process and use launchpad etc it makes my life much, much easier :)
20:19:14 also pls make sure commit messages and topic branches map to the BP/bug etc
20:19:52 onwards..
20:19:58 #topic h1 milestone release/status of Tempest integration
20:20:31 So. We released the h1 milestone with a critical regression, which was unfortunate
20:20:33 sorry I was pulled off onto other stuff
20:20:33 * stevebaker summons mordred
20:20:51 anyone (stevebaker?) got an update on Tempest integration?
20:21:25 so heat will be switched on for devstack tempest soon, and we can gate on that
20:21:43 shardy: bump discover-catalog-resources to h3. I need to focus on rolling updates.
20:21:43 meaningful tests will require launching images though...
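(Editor's note: the image-launching test discussed above needs nothing more than a one-resource template. A sketch; the image name is an assumption, and any stock cloud image registered in glance would do, since heat-cfntools is not required just to prove a stack boots.)

    # A one-resource template is enough to verify that Heat can
    # actually launch a stack end-to-end.
    HeatTemplateFormatVersion: '2012-12-12'
    Description: Tempest smoke test - boot a single server
    Resources:
      SmokeServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ubuntu-12.04-server   # assumed image name
          InstanceType: m1.small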
20:21:46 also, everyone, please, please, please actually run heat and test stuff before posting big changes, particularly before we have the Tempest stuff sorted
20:22:32 first barrier to that is getting this review through, https://review.openstack.org/#/c/28651/ had some comments sdague
20:22:45 sorry, I meant sdague had some comments
20:23:16 stevebaker: Ok, so it's all in-progress then, think we'll have reasonable Tempest coverage by h2?
20:23:17 some completely wrong comments :/
20:23:37 really good to hear about Tempest integration
20:23:43 second task is having an image to test against. mordred mentioned that they intend to get dib building within openstack ci, putting test images on static.openstack.org
20:24:09 however I wonder if we could just put some known good images on static.openstack.org in the meantime
20:24:38 stevebaker: +1, just actually launching a stack would be a great start
20:24:39 tl;dr, we'll have something by h2 ;)
20:24:49 stevebaker: Ok great
20:24:51 stevebaker: what?
20:24:52 a stock Ubuntu 12.04 image works fine with heat. You just don't get heat-cfntools.
20:25:05 F17 probably works fine too
20:25:16 mordred: ^^ see second task
20:25:16 stevebaker: just move it to the thirdparty directory
20:25:23 yeah - we actually have pleia2 doing work on testing tripleo right now
20:25:28 do we have an up-to-date guide on how to install devstack on a VM, with Heat, with images, and get it all running first time?
20:25:33 sdague: our point is that it is not a third party test
20:25:36 part of that workflow chain will probably involve figuring out image publication
20:25:59 zaneb: I've been trying just that recently, it's... not easy
20:26:01 stevebaker: it's using non-openstack-native datastructures, that puts it in thirdparty in tempest
20:26:11 zaneb: I don't think so, we need to update all our getting started docs
20:26:20 sdague: it's using the only datastructures we support
20:26:24 sdague: seeing as cloudformation does not have support for yaml.. I beg to differ.
20:26:35 HeatTemplateFormatVersion: '2012-12-12'
20:27:07 mordred: should we wait for the image building chain to be in place, or could we start with some manually built and uploaded images?
20:27:20 zaneb: there are heat blueprints to implement native resource types; once those are in, that can go in api
20:27:23 stevebaker: in tempest?
20:27:41 stevebaker: I mean, to use inside of tempest/devstack testing?
20:27:43 mordred: yes, for heat to run image-launching tests in tempest
20:27:46 thirdparty all runs on all the runs right now anyway
20:27:55 sdague: yaml formatted templates are native to heat and heat alone. The API is a native Heat API.
20:27:56 so I'm not sure why you'd be opposed to putting it there
20:27:58 sdague: heat was accepted as a part of OpenStack without those
20:28:16 stevebaker: well - right now reddwarf is building images as part of their devstack sequence
20:28:23 stevebaker: that's the _easiest_ thing to do
20:28:48 mordred: ooo, I'll take a look
20:28:53 sdague: just because it works like cfn doesn't make it thirdparty.
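(Editor's note: the one-line difference at issue. The same template body can declare itself CloudFormation-compatible or Heat-native; at the time CloudFormation accepted only JSON, while Heat parsed both JSON and YAML.)

    # CloudFormation-compatible version header:
    AWSTemplateFormatVersion: '2010-09-09'

    # Heat-native version header, as quoted above:
    HeatTemplateFormatVersion: '2012-12-12'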
20:28:54 stevebaker: but - if you do have an unchanging base image or two, I imagine it wouldn't be too hard to get onto static.o.o for the short term either
20:29:21 mordred: ok, let's figure it out later
20:29:24 stevebaker: cool
20:29:55 Ok, well we can continue this discussion over the coming weeks, but in the meantime, we share a collective responsibility to actually test stuff manually (not just unit tests) before postin
20:30:07 posting even
20:30:34 #topic Open discussion
20:30:43 sdague: the criteria for being in thirdparty or not seems vague; you're implying we would come out of it as soon as neither the strings "aws" nor "ec2" appear in our templates, which seems somewhat arbitrary
20:31:13 anyone have anything else they want to mention?
20:31:22 right, what I'd rather see is that thirdparty is used for *pure* cloudformation template testing.
20:31:32 is anyone else currently working on heat UI/interested in working on heat UI?
20:31:35 shardy: I'm trying as hard as I can to become a heat developer ;-P just struggling with getting a *real* working test environment
20:31:48 SpamapS: +1. Or for testing the heat-cfn-api
20:32:08 sdague: to me, thirdparty is to validate compatibility with non-native APIs
20:32:12 stevebaker: well, that's my -1. just went through a lot of work restructuring tempest to get all aws references out of api
20:32:26 radix: well you just need a working grizzly/havana openstack plus heat
20:32:31 We had some nice chats with radix and therve while they were in SF about autoscaling, shardy. :)
20:32:31 +1 for working on heat UI
20:32:37 you don't *have* to use devstack
20:32:43 sdague: there's no AWS references in the _API_
20:33:00 shardy: yeah... I don't think I have access to any instance of that kind of environment, at the moment
20:33:06 zaneb: he means the api package in tempest
20:33:12 wirehead_: so actually, do you plan contributions in the AS area, e.g. the discussed AS API?
20:33:17 stevebaker: ah, ok
20:33:30 shardy: yeah, that's my job :-)
20:33:31 I've left it off the havana plan for now, as everything went kind of quiet after summit...
20:33:52 radix: this -- http://www.cloudsoftcorp.com/blog/getting-started-with-heat-devstack-vagrant/ -- may be helpful. networking can be somewhat fiddly, i've found.
20:33:58 We needed to assemble for you a dark army of doom.
20:34:08 alexheneveld: yes, networking is the entirety of the cause of my headaches
20:34:35 alexheneveld: I guess I can try it this way
20:35:05 radix: ok, well let us know in #heat if you need help and we can try to get you started
20:35:11 thanks a lot :)
20:35:18 shardy: I need help
20:35:22 sounds like all our getting started docs need a refresh
20:35:22 lol
20:35:25 wait, I need help first!
20:35:33 zaneb: lol ;)
20:35:35 hehe
20:35:38 * SpamapS gets pulled away again
20:36:03 I have the urge to fix up the docs. Just haven't been able to find the focused time to really get it done. :/
20:36:37 any news on a rackspace server resource landing?
20:36:43 Yep
20:36:50 jason, andrew, and vijendar are making really good progress on resources for rackspace cloud servers, loadbalancers, and databases. Additionally, we are planning a Horizon blueprint to graphically show progress on deploying a stack that displays resource state.
20:37:16 kebray: is there somewhere we can see the code?
20:37:17 kebray: I've been thinking about that too
20:37:17 * asalkeld keen on a server resource (alpha?)
20:37:45 jasond: ?
20:37:46 stevebaker: so when heat takes in ec2 resources, what kinds of API calls is it making to nova?
20:37:47 jasond, you have the code in public yet?
20:38:00 it's very early, but https://github.com/jasondunsmore/heat/branches
20:38:15 sdague: it makes openstack native (not ec2) calls to the nova API
20:38:16 sdague: all calls from heat to other openstack APIs are native openstack
20:38:17 here are my TODOs http://dunsmor.com/heat/cloud-servers-provider.html
20:38:36 we're still playing with cfn-tools and cloud-init.. learning process. but, good progress is being made.
20:38:56 sdague: only the resource definition in the template mentions aws, it's native openstack end-to-end
20:39:01 did we ever land on whether these are going in-tree?
20:39:40 The last time we discussed this, everybody seemed to be gung-ho about in-tree, but not what it meant to be in-tree.
20:39:46 stevebaker: and in h2 we'll get os native resources?
20:39:56 jasond / kebray: try posting reviews and marking them as "work in progress"
20:40:11 kebray: I think we decided yes at last week's meeting, but nobody is that keen to drive reorganizing the tree ;)
20:40:14 (there is a special button "work in progress")
20:40:31 +1 on posting WIP/draft reviews
20:40:33 sdague: yes, native resource writing is ongoing, nova::server is being worked on right now
20:40:36 i put the cloud servers resource provider under heat/engine/resources/rackspace_cloud/
20:40:37 kebray: there was widespread agreement on in-tree, and carping about /contrib. Mostly from me ;)
20:40:41 asalkeld: ok.. thx. Yeah, we need to get this code in the right place.
20:41:06 zaneb, why contrib?
20:41:22 stevebaker: so I'll +2 on the condition someone files a bug that we can track to get that cut over after heat supports it
20:41:27 what's wrong with engine/resources/rackspace/
20:41:30 asalkeld: it doesn't seem like something to be installed by default to me
20:41:45 to me it does
20:41:58 sdague: ok, much appreciated. I think we already have an umbrella bp for that
20:41:59 I'll stick that in the review comments, we can finish the conversation there
20:42:16 asalkeld: having a different resource type for every cloud provider is... you know... a bug, not a feature, long term
20:42:34 zaneb: agreed :-)
20:42:36 Yeah. The whole thing where Heat becomes really and truly awesome is the ability to run your own Heat instance that talks to other people's clouds.
20:42:38 but a real need in the short term
20:42:42 sdague: https://blueprints.launchpad.net/heat/+spec/abstract-aws
20:42:43 really
20:42:58 randallburt: exactly
20:43:19 wirehead_, +1
20:43:23 I think we ought to proceed with the understanding that there will be a re-org.
20:43:24 zaneb, -1
20:43:27 :)
20:43:33 stevebaker: can you file a tempest bug that links to that, just so as we approach h2 we can make sure it gets handled?
20:43:35 lol
20:43:55 sdague: ok, will do
20:44:00 so, from our perspective, we're not fussed about where to put it. we'll start submitting review patches for our resources and can go from there about where folks want to put stuff
20:44:05 I also suspect that once we've written 2-3 server providers, the pattern for how to not have to write 20 providers will be clearer.
20:44:32 o/ - sorry i'm alte
20:44:44 well don't be alte again! ;)
20:45:06 I can see where a cloud service provider may want to optimize resource implementations for their cloud.. and, I could see contributing those back to the community. So, in that sense, maybe they are features and not bugs… I can see both sides. but, I digress. It's just terminology at this point. The solution is the same.
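(Editor's note: for illustration, what a template using such a provider-specific resource might look like. The type name follows the heat/engine/resources/rackspace_cloud/ placement mentioned above, but both it and the property names are guesses, not a settled interface.)

    HeatTemplateFormatVersion: '2012-12-12'
    Resources:
      DbServer:
        Type: Rackspace::Cloud::Server    # illustrative type name, derived
                                          # from the directory naming above
        Properties:
          Flavor: 1GB                     # property names are assumptions
          ImageName: Ubuntu 12.04 LTS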
20:45:15 hi sdake
20:45:49 service desk ship me my new laptop so I can stop using the delete key pls :)
20:46:27 kebray: yeah I think we're agreed on a resource sub-directory per provider, and hopefully avoid undue duplication via the review process
20:46:55 anyone got anything else they want to discuss?
20:46:57 aws reorg happening in h2?
20:47:00 kebray: I'm trying to make a case for /contrib (badly, it appears), but I certainly wouldn't -1 a review over it
20:47:03 (just reading scrollback)
20:47:38 excellent. we'll work to get our code in the right place for review asap.. help on getting the rackspace resources written from others would be great.. but, I know they aren't committed for the H milestones.
20:47:42 sdake: we basically need native resources for everything, then the YAML templates become abstracted from awsisms
20:48:00 shardy: yup I got that - I am working on nova server atm
20:48:16 * sdake loves rebasing
20:48:18 sdake: so what was your query re h2, can you clarify pls?
20:48:25 shardy, do we have blueprints for all AWS resources to native?
20:48:47 therve: iirc there's one mega-blueprint
20:48:53 If we are reorganizing the tree, my work will have to be rebased, which is fine, but I would prefer to know if we plan to reorg the resources dir in h2
20:48:54 kebray: zaneb: providers for other clouds feel like they should be plugins long term, maintained by third parties not by openstack - esp if heat goes core. i've been in the adapters-keep-up game before and it's for mugs.
20:48:59 therve: there is an abstract-aws BP, but we may not have related BPs to create native versions of every resource yet
20:49:00 previously we weren't going to do that
20:49:10 sdake: what's the state of the nova server resource? Any chance to try it out? Might want to reference it from the first HOT samples I am trying to get running.
20:49:12 there are certainly BPs for some native resources tho
20:49:25 shardy: replacing cfntools is part of aws removal too
20:49:37 tspatzier: not finished yet and have other tasks to attend to atm, but will get back to it in the next week
20:49:52 shardy, for some reason bp's are getting defined for smaller and smaller bits of work
20:50:01 stevebaker: yeah, I was thinking about that recently, we need to port cfntools to understand how to talk to the ReST API
20:50:15 and/or the CFN API configurably
20:50:15 sdake: sounds good. I also have some prep work for HOT to be done first
20:50:21 which also removes the recurring boto pain
20:50:31 asalkeld: that is for the rest of the downstream openstack community (like marketing) to know what changed in heat
20:50:39 alexheneveld: I don't disagree, but it seems like enough people want us to include these at the moment. I don't have much of an opinion.. just need shardy to tell us where to put our code, and we'll push the gerrit review.
20:50:45 they can't reasonably be expected to look at a git changelog can they? :)
20:50:58 alexheneveld: it works for the kernel
20:51:09 asalkeld: Is that a problem? A BP needs to be achievable within a relatively short time for one person
20:51:09 yea
20:51:15 boto botulism.
20:51:26 botoxic
20:51:32 * adrian_otto gags
20:51:43 kebray: +1 for now -- just wouldn't want to see it collapse under its own weight
20:51:46 on that note I'm outa here
20:52:21 Ok, is there anything else, or shall we finish early?
20:52:32 shardy: so the aws reorg is landing in h2?
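(Editor's note: a sketch of the "abstracted from awsisms" goal, the same server declared via the CFN-compatible type versus a native type. OS::Nova::Server and its lower-case property names are how the in-progress nova server work might plausibly surface; they are assumptions here, not a landed interface.)

    # Today: the CFN-compatible resource type
    Resources:
      MyServer:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: F17-x86_64-cfntools
          InstanceType: m1.small

    # Goal: a native equivalent (type and property names assumed)
    Resources:
      MyServer:
        Type: OS::Nova::Server
        Properties:
          image: F17-x86_64-cfntools
          flavor: m1.small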
20:52:37 +1 for finish early :
20:52:38 still unclear on that
20:52:38 zaneb: good point :)
20:52:39 :)
20:52:49 sdake: I still don't really know what you're asking?
20:53:01 what will land, all the native resources, or...
20:53:05 the moving all the resources around for environments + everything else
20:53:10 the directory structure
20:53:34 alexheneveld: see last week's meeting log for more discussion
20:53:48 sdake: I think that's not really a BP, just the first person who wants to can move stuff and deal with the resulting breakage
20:54:17 sdake: Is it important to you that it happens for h2?
20:54:27 I don't care, I just want to know if h2 or h3 :)
20:54:43 I think it's important to him that it doesn't, so he doesn't have to rebase on it ;)
20:54:46 sdake: honestly I don't know - is anyone planning it imminently?
20:54:50 * shardy is not
20:54:51 I would prefer it doesn't happen in h2
20:54:54 zaneb wins :)
20:55:16 sdake: I think you and SpamapS were the only ones talking about doing it
20:55:28 sounds good, then i'll sync with SpamapS about that
20:55:47 Ok, sounds good, I don't really see it as all that high-priority tbh
20:56:12 Ok, if nothing else, we can wrap up..
20:56:45 #endmeeting