20:00:17 <shardy> #startmeeting heat
20:00:18 <openstack> Meeting started Wed Jul  3 20:00:17 2013 UTC.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:21 <openstack> The meeting name has been set to 'heat'
20:00:26 <shardy> #topic rollcall
20:00:36 <sdake> o/
20:00:49 <m4dcoder> o/
20:00:54 <therve> Hi!
20:00:59 <andrew_plunk> hello
20:01:06 <tspatzier> hi
20:01:07 <stevebaker> hi
20:01:20 <jasond> hi
20:01:31 <jpeeler> !
20:02:29 <shardy> asalkeld, SpamapS?
20:02:38 <asalkeld> o/
20:02:41 <kebray> o/
20:02:48 <sdake> is zane on pto - haven't seen him around
20:02:53 <shardy> ok, hi all, let's get started
20:02:54 <therve> Clint mentioned he'd be late
20:03:01 <shardy> sdake: yeah he's on pto IIRC
20:03:09 <shardy> therve: ok, cool, thanks
20:03:19 <shardy> #topic review last week's actions
20:03:31 <shardy> Actually I don't think there were any
20:03:44 <shardy> anyone have anything to raise from last week's meeting?
20:03:59 <shardy> #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-26-20.01.html
20:04:24 <adrian_otto1> hi
20:04:28 <shardy> #topic h2 bug/blueprint status
20:05:03 * radix is belatedly here
20:05:03 <shardy> So only 2 weeks to go, and probably less than 1 week to go allowing for some testing and branch to milestone-proposed
20:05:30 <shardy> #link https://launchpad.net/heat/+milestone/havana-2
20:06:12 <shardy> So please if you have BPs or bugs outstanding, make sure the status is updated, and bump them if they won't land in time
20:06:24 <sdake> wife has planned a last minute vacation - so may not make https://blueprints.launchpad.net/heat/+spec/native-nova-instance since I may be out of town ;)
20:06:48 <shardy> stevebaker will handle the h2 release as I'm on PTO, reminder that he'll be the one chasing from next week ;)
20:07:21 <shardy> sdake: Ok, cool, if you can bump to h3 if it definitely won't make it that would be good
20:07:30 <shardy> h3 starting to look busy ;)
20:07:34 <sdake> still seeing if we can go, I'll let you know
20:07:46 <shardy> anyone have anything to raise re h2?
20:08:08 <shardy> bug queue has been looking better, so the relaxation on 2*+2 seems to have helped
20:08:13 <shardy> s/bug/review
20:08:34 <therve> Yep looking great even!
20:08:35 <asalkeld> yea, that's been good
20:08:36 <sdake> which relaxation?
20:08:53 <therve> Once you figure out what to do with the stable branch, that'd be even better
20:09:31 <shardy> sdake: last week we agreed that core reviewers can use their discretion and approve, e.g. if there's been a trivial revision after a load of rework, i.e. a rebase or whatever
20:09:44 <bgorski> o/ sorry i'm late
20:10:22 <shardy> #topic stable branch process/release
20:10:36 <shardy> So this is a request/reminder re the stable/grizzly branch
20:11:03 <shardy> I've been trying to get on top of which bugs need to be considered for backport, which is something we can all do when fixing stuff in master
20:11:54 <shardy> you tag the bug as grizzly-backport-potential, and target it to the grizzly series (which then gives you an "also affects grizzly" task)
20:12:06 <shardy> then you can test a backport and propose via this process:
20:12:09 <shardy> https://wiki.openstack.org/wiki/StableBranch#Proposing_Fixes
20:12:44 <shardy> There will be another stable/grizzly release around the same time as h2 AIUI so something to consider over the next couple of weeks
20:12:53 <asalkeld> ok
20:13:06 <shardy> cool
20:13:22 <shardy> that's all I have
20:13:27 <shardy> #topic Open discussion
20:13:28 <stevebaker> sweet
20:13:53 <shardy> anyone have anything, or shall we have a super-short meeting? :)
20:13:55 <kebray> In case you haven't seen it, there's an awesome topology visualization for stack creation that animates the provisioning of resources and shows information about them.
20:14:03 <kebray> https://wiki.openstack.org/w/images/e/ee/Topology_screenshot.jpg
20:14:14 <kebray> This will be merge-proposed to Horizon by Tim Schnell.
20:14:25 <therve> Awesome
20:14:36 <adrian_otto> I saw the demo, it was pretty awesome.
20:14:47 <stevebaker> nice, looking forward to seeing it
20:15:00 <radix> huh
20:15:04 <radix> that is cool
20:15:05 <stevebaker> is Tim here?
20:15:06 <sdake> cool - anyone have a screencast of the demo?
20:15:10 <adrian_otto> It's bouncy just like Curvature
20:15:18 <sdake> my jpg doesn't seem to be animating
20:15:22 <kebray> I've asked Tim to create a screencast… or I'll create one if I find time.
20:15:30 <adrian_otto> kebray: we should definitely do that
20:15:44 <asalkeld> nice
20:15:45 <radix> adrian_otto: I can't wait to have all of the Curvature-like stuff implemented with Heat
20:15:51 <kebray> can do. will do.
20:15:56 <sdake> I would really like to use it  for an upcoming talk so if we could sync on getting the code in an external repo I'd appreciate that :)
20:16:23 <kebray> already external.. Tim isn't around… but, can get you the link.
20:16:29 <sdake> thanks kebray
20:16:52 <radix> shardy: should I explicitly target https://blueprints.launchpad.net/heat/+spec/instance-group-nested-stack for h3?
20:17:31 <stevebaker> if you think it is feasible
20:17:38 <asalkeld> anyone know where user related docs go? (thinking of environment/provider stuff)
20:17:40 <radix> well, the bug it's associated with is
20:18:07 <sdake> asalkeld join openstack-docs mailing list and ask there?
20:18:26 <asalkeld> yeah
20:18:26 <stevebaker> asalkeld: we need a template writers guide, when that exists it should go there
20:18:41 <stevebaker> the guide should live in our tree
20:18:46 <sdake> probably wouldn't hurt for everyone to join openstack-docs so we can sort out how to get docs into our program
20:18:47 <shardy> radix: done, and you want to take it?
20:18:54 <radix> yeppers
20:19:08 <radix> I assigned it to myself
20:19:16 <radix> thanks
20:19:18 <stevebaker> i have a vague intention to focus on docs later in H
20:19:58 <shardy> radix: I assigned the BP to you too
20:20:06 <radix> ok cool :)
20:20:09 <sdake> one option is to take a couple weeks out of dev and focus on docs as a team - so we can learn from one another
20:20:11 <sdake> just a thought :)
20:20:35 <stevebaker> yeah, we could give the doc sprint another crack, but with a bit more structure this time
20:20:37 <sdake> this is something shardy could facilitate with anne
20:20:48 <sdake> stevebaker agree with more structure
20:20:50 <shardy> Yeah, sounds like a good plan
20:21:06 <shardy> #action shardy to organize docs sprint during h3
20:21:08 <asalkeld> I really don't mind doing docs, it's just the cruft we have to install
20:21:37 <shardy> asalkeld: Yeah, last time I gave up after installing a bazillion packages
20:21:37 <stevebaker> they won't be authored in xml
20:21:39 <sdake> stevebaker I think we learned from the last doc sprint that it was a) too short b) not well organized
20:22:01 <SpamapS> o/
20:22:13 <shardy> Probably need to generate a list of all the docs we need and assign them to people
20:22:30 <shardy> maybe we raise bugs for the missing stuff?
20:22:36 <sdake> shardy involve anne in the process if possible ;)
20:22:55 <shardy> sdake: Yeah I will speak to her
20:23:30 <therve> Somewhat related to doc, https://wiki.openstack.org/wiki/Heat/AutoScaling got some great content from asalkeld and radix, feedback is welcome
20:23:43 <kebray> Anne Gentle sits two rows over from me.. just fyi.
20:23:48 <radix> yeah, that :)
20:23:53 <radix> it'd be good to see people reviewing that page
20:24:09 <kebray> when she's in the office that is.
20:24:50 <asalkeld> say is the rackspace db instance api like the trove api?
20:25:00 * SpamapS wishes it were
20:25:06 <stevebaker> or the cfn dbinstance?
20:25:31 <asalkeld> just wondering if we can crank out a trove resource
20:25:50 <asalkeld> (with little effort)
20:26:08 <kebray> Rackspace Cloud Databases is Trove, which was renamed from Reddwarf.
20:26:15 <shardy> asalkeld: sounds like something which would be good to look into
20:26:19 <radix> there was a bit of conversation about trove + heat integration on the ML
20:26:27 <shardy> if it's not going to take loads of effort
20:26:41 <SpamapS> kebray: then why are we calling it a rackspace resource, and not OS::Trove::xx ?
20:26:47 <asalkeld> so kebray we could probably reuse a lot of code there?
20:26:47 <radix> though I guess that's a different side of what you're talking about
20:26:58 <stevebaker> writing a trove resource is different from using heat for trove's orchestration. either or both could be done
20:26:58 <SpamapS> yeah don't conflate those two things
20:27:06 <radix> my bad :)
20:27:07 <SpamapS> "Trove may use Heat" has nothing to do with Trove resources. :)
20:27:07 <adrian_otto> there are two perspectives of integration: 1) Heat uses Trove, and 2) Trove uses Heat
20:27:22 <adrian_otto> #1 is going to be pretty easy
20:27:30 <SpamapS> What about a Heat using Trove then using Heat to drive Trove to Heat...
20:27:37 <radix> YES
20:27:38 <adrian_otto> #2 may be more involved
20:27:41 <asalkeld> I know, I just want to do the "right thing from our end"
20:27:44 <sdake> circles
20:28:09 <kebray> SpamapS:  Great question.. because, so far we developed it against the Rackspace public service.  The Trove and RS DB Service run the same code, but we didn't run tests against stock trove and ensure compatibility.. but, your request is reasonable, and one that I think is worth having my team investigate.
20:28:42 <SpamapS> kebray: "stock trove" != "public rackspace trove" in what ways?
20:28:56 <asalkeld> there is auth differences?
20:29:10 <adrian_otto> both are key based, so not materially different
20:29:15 <asalkeld> so you could register 2 resource types for one plugin
20:29:59 <asalkeld> anyways, if it is easy, it's worth doing
20:30:10 <kebray> SpamapS:  slight feature-enablement differences… and we run a different in-guest agent (C++, optimized to be happy on a 512MB instance).  HP runs the Python agent, but y'all don't offer 512MB instances, so you have more memory to burn on an agent.
20:31:44 <sdake> c++ and optimized oxymoron
20:31:54 <kebray> RS DB uses OpenVZ as the container technology.  I think Trove uses KVM out of the box.
20:31:55 * sdake ducks
20:32:10 <hub_cap> heyo
20:32:14 <kebray> Yeah, and as asalkeld said, auth differences.
20:32:24 <hub_cap> someone talkin Trove up in heah'
20:32:37 <hub_cap> feel free to direct Q's to me
20:32:44 <shardy> hey hub_cap, yeah we're talking about potentially implementing a trove Heat resource type
20:32:45 <hub_cap> im talkin heat in #openstack-meeting-alt fwiw ;)
20:32:57 <hub_cap> im talkin about implementing clusters in heat
20:33:05 <kebray> hub_cap:  question came up on why the Heat Resource implementation we did was specific for RS DB instead of generic just for Trove.
20:33:07 <shardy> and also mentioning that you guys may want to use heat for orchestration
20:34:07 <hub_cap> +1 to generic for trove kebray
20:34:26 <hub_cap> and yes ill be working on heat (again, got sidetracked) in about a wk
20:35:36 <asalkeld> I wonder what trove and lbaas use as agents
20:36:09 <asalkeld> (any commonality there)
20:36:20 <sdake> at some point we can inject them so it won't matter ;-)
20:36:24 <shardy> trove has its own agent IIRC?
20:36:30 <SpamapS> sounds to me like the auth differences are the only one Heat would really care about.
20:36:32 <hub_cap> asalkeld: we have a python agent, and we've been wondering how we can easily pull it out and get it installed by heat
20:36:39 <hub_cap> shardy: correct
20:37:22 <SpamapS> Anyway, that can be done as a refactor when somebody steps up to add native Trove support.
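To make the OS::Trove::xx idea above concrete, here is a minimal sketch of what a generic Trove instance resource could look like. It assumes the current plugin interface (a resource.Resource subclass with handle_create/check_create_complete and a module-level resource_mapping) plus a hypothetical self.trove() helper returning an authenticated troveclient; the property names and client calls are illustrative assumptions, not the existing Rackspace provider code.

    # Hypothetical OS::Trove::Instance plugin; property names, the
    # self.trove() client helper and the troveclient calls are assumptions.
    from heat.engine import resource


    class TroveInstance(resource.Resource):
        properties_schema = {
            'Name': {'Type': 'String', 'Required': True},
            'Flavor': {'Type': 'String', 'Required': True},
            'VolumeSize': {'Type': 'Number', 'Default': 1},
        }

        def handle_create(self):
            # Auth differences between stock Trove and the Rackspace service
            # would be hidden behind the client helper.
            db = self.trove().instances.create(
                self.properties['Name'],
                self.properties['Flavor'],
                volume={'size': self.properties['VolumeSize']})
            self.resource_id_set(db.id)
            return db.id

        def check_create_complete(self, db_id):
            # Poll until the database instance reports ACTIVE.
            return self.trove().instances.get(db_id).status == 'ACTIVE'

        def handle_delete(self):
            if self.resource_id is not None:
                self.trove().instances.delete(self.resource_id)


    def resource_mapping():
        # Registering one class under two names would give both a generic
        # Trove type and a provider-specific one, as suggested above.
        return {
            'OS::Trove::Instance': TroveInstance,
            'Rackspace::Cloud::DBInstance': TroveInstance,
        }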
20:37:22 <asalkeld> yeah, well you can install from pypi/package
20:37:37 <hub_cap> ya thats probably what we are going to do asalkeld
20:37:39 <sdake> installing from pypi is not suitable - pypi can be down or not routing for some reason
20:37:51 <hub_cap> we will do what the clients do
20:37:52 <sdake> or too slow
20:37:56 <hub_cap> probably tarballs.openstack
20:37:56 <radix> I've actually had a ton of 50* errors from pypi in the last week :P
20:38:14 <sdake> best bet is to inject or prebuild
20:38:23 <hub_cap> effectively we are going to think of it like we think of clients, a dependency we need in another project, we believe
20:38:31 <hub_cap> we have done almost no talking on it tho fwiw
20:38:40 <stevebaker> sdake: prebuilding with diskimage-builder will at some point be trivially easy
20:38:41 <hub_cap> but ill take what yall are saying back to the team
20:38:50 <sdake> stevebaker agree
20:39:09 <sdake> avoid pypi downloading - we are moving away from that for reliability reasons
20:39:31 <hub_cap> #agreed
20:39:39 <SpamapS> sdake: prebuild is my preference. But yeah, with custom images being hard on certain clouds <cough>myemployer</cough> ... inject probably needs to be an option.
20:39:57 <sdake> yup, that is why i am working on inject blueprint next ;)
20:39:58 <asalkeld> we need pypi cache as-a-service :/
20:40:09 <hub_cap> id also like to talk to yall eventually about the dependency between us, yall creating a trove resource, while we are creating a trove template as well to use to install.... but thats for another day.
20:40:20 <hub_cap> maybe we can consolidate work so we can use the same template or something
20:40:22 <sdake> yum and deb repos already need to be cached - groan
20:40:24 <shardy> asalkeld: Isn't that just called a mirror or a squid proxy?
20:40:43 <sdake> squid proxy no good, need a full on mirror
20:40:45 <stevebaker> SpamapS: got a feeling for how big an os-* inject payload would be?
20:40:55 <SpamapS> hub_cap: I don't think the two are actually related.. but it would be good to keep communication flowing in both directions.
20:40:56 <radix> yeah... this seems like a pretty general / far-reaching problem
20:41:03 <kebray> #action kebray to put on RS backlog testing of RS Database Resource against generic Trove, and consider rename/refactoring of provider to use Trove for naming.
20:41:07 <hub_cap> SpamapS: kk
20:41:24 <sdake> kebray I think only the chair can do an action
20:41:32 <kebray> hehe.. ok.
20:41:37 <hub_cap> id like to see the difference between them too. im always in #heat so lets chat about it, just ping me when u need me cuz i dont actively monitor it
20:41:43 <SpamapS> stevebaker: hmm.. looking now
20:41:45 <kebray> depends on how the meetbot is configured I guess.
20:41:51 <sdake> shardy I think kebray wants an action :)
20:41:54 <shardy> sdake: IMO yum/deb repo caching or mirroring is not a heat problem to solve, it's a deployment choice/detail
20:42:03 <sdake> shardy agree
20:42:16 <SpamapS> stevebaker: 50K zipped
20:42:21 <sdake> shardy but really you do want a mirror, otherwise if your mirrors are inaccessible bad things happen
20:42:26 <SpamapS> stevebaker: thats .zip
20:42:34 <shardy> kebray: what was you action again, sorry, missed the scrollback
20:42:41 <SpamapS> stevebaker: and with everything a 'setup.py sdist' gets you
20:42:43 <sdake> does anyone know for sure if openstack really has a 16k limit on the metadata?
20:42:58 <sdake> russellb ^
20:43:05 <SpamapS> stevebaker: but os-* have not been avoiding dependencies
20:43:18 <asalkeld> sdake, I am not convinced about that injection
20:43:22 <russellb> sdake: don't know off of the top of my head
20:43:29 <asalkeld> we used to do that
20:43:30 <kebray> shardy to add to my backlog testing RS Database Provider against Trove, and then consider refactoring that provider to be for Trove and not specific to RS Cloud.
20:43:37 <shardy> #action kebray to put on RS backlog testing of RS Database Resource against generic Trove, and consider rename/refactoring of provider to use Trove for naming.
20:43:39 <asalkeld> and moved away from it
20:43:45 <sdake> asalkeld we never did insert of the full scripts
20:44:04 <SpamapS> stevebaker: so, for instance, os-collect-config deps on keystoneclient .. requests.. lxml.. eventlet (due to oslo stuff)
20:44:06 <sdake> we always used prebuilt jeos
20:44:15 <russellb> type is mediumtext in the db ... may be a limit in the code though
20:44:16 <stevebaker> I think the 16k limit is arbitrary, but however big it is, we'll find a way to hit the limit
20:44:24 <SpamapS> sdake: what about injection via object storage?
20:44:41 <SpamapS> I think that is a pretty valid alternative to userdata
20:44:48 <asalkeld> +1
20:44:53 <sdake> SpamapS that may work, although it would require some serious hacking to loguserdata.py
20:45:20 <SpamapS> yeah I'm not saying it will be easy. :)
20:45:45 <sdake> I'll add a blueprint after meeting
20:45:50 <SpamapS> but it may be more straight forward and more feasible long term to maintain.
20:45:54 <SpamapS> userdata is a column in a table
20:45:59 <SpamapS> so it _should_ be limited.
20:46:00 <sdake> it's a valid idea - I think both solutions should be available for the operator's choice
20:46:25 <SpamapS> sdake: if by both you mean "custom image" or "inject to object storage" .. I agree.. but I think you want inject via userdata too. ;)
20:46:39 <sdake> all 3 then :)
20:47:15 <sdake> if it's optional, there is no harm
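As a rough illustration of the inject-via-object-storage option (a sketch of the idea, not a worked design): stage the payload in Swift and hand the instance only a short URL via userdata, so the nova userdata size limit stops being the constraint. The container/object names and how the guest later authenticates to fetch the object are assumptions.

    # Sketch only: upload a bootstrap payload to Swift so userdata stays tiny.
    import swiftclient


    def stage_payload(auth_url, user, key, payload_path):
        conn = swiftclient.Connection(authurl=auth_url, user=user, key=key)
        conn.put_container('heat-bootstrap')
        with open(payload_path, 'rb') as f:
            conn.put_object('heat-bootstrap', 'agent-payload.zip', contents=f)
        # In practice this would probably be a Swift TempURL (or similar) so
        # the guest can fetch it without credentials; returning the plain
        # object URL here just to show the shape of the idea.
        return '%s/heat-bootstrap/agent-payload.zip' % conn.url

The guest side (loguserdata.py or cloud-init) would then download and unpack that object instead of receiving everything inline.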
20:47:15 <kebray> Just turning my attention back to the conversation, but what about SSH/SFTP to inject/bootstrap/provision the server post-boot?  Will that solve the use case?
20:47:34 <sdake> kebray we went down the ssh bootstrapping path; for a lot of reasons it won't work with how heat operates
20:47:39 <sdake> mainly that we inject only one ssh key
20:47:58 <kebray> How many do you need?
20:48:08 <sdake> the one we inject is the user key
20:48:13 <kebray> k.
20:48:15 <stevebaker> one for the user, one for heat?
20:48:19 <andrew_plunk> sdake: after the original bootstrap process you can ssh another key over there
20:48:19 <sdake> we would need to inject another for heat
20:48:39 <kebray> I see.. you need ongoing capabilities.. not just one-time setup.
20:48:39 <sdake> we could inject it easily enough with cloudinit
20:48:43 <sdake> but key management is a pita
20:49:15 <sdake> and guests need to have ssh enabled by default - some people make images that don't do that for whatever reason
20:49:45 <SpamapS> "some people make images" -- so they can put in-instance tools in those images.
20:49:46 <sdake> we tried ssh injection - it wasn't optimal at the time - prebuild was
20:50:20 <sdake> IIRC asalkeld had some serious heartburn over ssh binary injection
20:50:42 <asalkeld> can't remember why
20:50:47 <sdake> me either
20:50:49 <andrew_plunk> can you not use ssh to install cloud-init after some initial bootstrapping?
20:50:51 <asalkeld> it's been a while
20:50:59 <sdake> ya that was 18 months ago
20:51:00 <shardy> paramiko was unreliable for the integration test SSH iirc
20:51:04 <SpamapS> andrew_plunk: isn't that what the rackspace stuff does already?
20:51:12 <kebray> andrew_plunk isn't that exactly what our RS Server Resource does?
20:51:13 <andrew_plunk> correct SpamapS:
20:51:22 <stevebaker> tempest uses paramiko now
20:51:46 <andrew_plunk> & kebray:
20:51:50 <shardy> stevebaker: maybe that's why it breaks all the time ;)
20:51:55 <stevebaker> lol
20:51:57 <sdake> well lets give object download a go and go from there
20:52:13 <asalkeld> well options are good
20:52:21 <asalkeld> but too many
20:52:31 <sdake> more options = more questions in #heat :)
20:52:39 <asalkeld> exactly
20:52:47 <kebray> sdake asalkeld heartburn over how the RS Server Resource is bootstrapping with SSH and installing cloud-init?   we wanted to avoid having to pop pre-configured special images that already had cloud-init pre-installed.
20:53:19 <asalkeld> no kebray it was in a different life time
20:53:24 <kebray> As in, we don't have default images with cloud-init in the RS Cloud.
20:53:26 <asalkeld> (or project)
20:53:26 <kebray> oh, ok.
20:53:34 <sdake> kebray it was very early stage of heat
20:53:38 <sdake> right when we got started
20:54:01 <sdake> now we have something working, we can make some incremental improvements to the situation :)
20:54:09 <sdake> back then we had nothing working
20:54:51 <asalkeld> so much for the short meeting
20:55:32 <m4dcoder> SpamapS: regarding rolling update and as-update-policy, can you answer my question at http://lists.openstack.org/pipermail/openstack-dev/2013-June/010593.html?
20:57:29 <shardy> #action SpamapS to reply to m4dcoder's ML question
20:57:41 <shardy> anything else before we finish?
20:58:18 <shardy> Ok, well thanks all :)
20:58:20 <SpamapS> m4dcoder: will answer, sorry for the delay!
20:58:26 <shardy> #endmeeting