20:00:30 <shardy> #startmeeting heat
20:00:31 <openstack> Meeting started Wed Jun 12 20:00:30 2013 UTC.  The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:34 <openstack> The meeting name has been set to 'heat'
20:00:40 <shardy> #topic rollcall
20:00:50 <zaneb> o/
20:00:56 <tspatzier> Hi all
20:00:57 <radix> hello
20:01:02 <stevebaker> \o/
20:01:02 <bgorski> o/
20:01:40 <TravT> o/
20:01:49 <shardy> asalkeld, sdake, jpeeler, therve around?
20:01:55 <asalkeld> hi
20:02:06 <jpeeler> hey
20:02:31 <andrew_plunk> hello
20:02:36 <shardy> Ok, hi all, let's get started
20:02:42 <shardy> #topic Review last week's actions
20:02:57 <kebray> I'm here.
20:03:00 <sdake> o/
20:03:10 <shardy> #link http://eavesdrop.openstack.org/meetings/heat/2013/heat.2013-06-05-20.00.html
20:03:18 <shardy> Only one action
20:03:29 <SpamapS> o/
20:03:31 <shardy> #info asalkeld/zaneb to start ML discussion re stack metadata
20:03:36 <zaneb> hey, that actually happened :)
20:03:43 <sdake> grats zaneb ;)
20:03:43 <zaneb> thanks asalkeld
20:03:58 <shardy> Cool, thanks guys
20:03:58 <asalkeld> yip, that was good
20:04:40 <shardy> #topic h2 blueprint status
20:05:01 <shardy> So just wanted to make sure everyone is happy with what they're doing for h2, what they have assigned etc
20:05:41 <sdake> shardy I was thinking of making a new blueprint and tackling it in h2 as well - injection of data so we can use gold images
20:05:50 <randallburt> no complaints except the tempest gating is blocked on a nova bug :(
20:05:52 <sdake> i'll write a blueprint when i get done with my 3rd fulltime job at rht :)
20:06:01 <shardy> #link https://launchpad.net/heat/+milestone/havana-2
20:06:15 <SpamapS> sdake: huh?
20:06:21 <SpamapS> sdake: we can use gold images now
20:06:22 <shardy> randallburt: Yeah the gate failures are a bit frustrating
20:06:38 <sdake> SpamapS i'll msg you when I write blueprint it will make more sense
20:06:43 <SpamapS> sdake: mmk
20:07:12 <sdake> problem we found with RHEL images that can't be easily resolved
20:07:12 <shardy> Ok, cool, well if anyone else has anything they expect to do for h2, please make sure it's captured in the plan by raising or targeting the bp/bug
20:07:16 <sdake> SUSE may have same problem
20:07:37 <shardy> sdake: what problem is that?
20:07:46 <sdake> wait for blueprint i'll explain there
20:07:54 <shardy> Ok, cool
20:08:03 <shardy> #action sdake to raise BP re gold images
20:08:34 <shardy> randallburt are you and andrew_plunk happy with the providers tasks you have allocated?
20:09:16 <randallburt> shardy:  yes
20:09:26 <shardy> ok, cool
20:09:27 <randallburt> I'm about to start on the next bits
20:09:48 <shardy> anyone have anything else related to h2 BP's or bugs they want to raise?
20:09:50 <randallburt> that being said, was leaving the json params for now since there was some controversy, but it's not a blocker imo
20:10:07 <randallburt> I feel confident we'll get something workable by h2
20:10:22 <shardy> randallburt: OK, as long as we have clear direction and progress on the main bits then all good :)
20:11:02 <shardy> looks like it could be a short meeting today, anyone have any other topics before open discussion?
20:11:18 <sdake> heat-templates likely needs a launchpad home
20:11:29 <shardy> sdake: it already does
20:11:43 <sdake> cool then nm :)
20:11:50 <shardy> https://launchpad.net/heat-templates
20:11:51 <sdake> https://bugs.launchpad.net/heat/+bug/1186791 is actually a heat-templates bug
20:11:51 <shardy> ;)
20:11:53 <uvirtbot> Launchpad bug 1186791 in heat "NoKey template uses -gold images which no longer exist" [Medium,Confirmed]
20:12:04 <sdake> and fixed IIRC
20:12:05 <stevebaker> nifty
20:12:26 <shardy> #topic Open Discussion
20:12:50 <wirehead_> So, there's http://developer.rackspace.com/blog/rackspace-autoscale-is-now-open-source.html
20:13:16 <shardy> #link http://developer.rackspace.com/blog/rackspace-autoscale-is-now-open-source.html
20:13:52 <asalkeld> I have a small one
20:13:57 <asalkeld> hacking rules
20:13:59 <shardy> wirehead_: do you see this as contributing to or competing with the Heat AS efforts?
20:14:18 <asalkeld> holding off until wirehead_ done ...
20:14:54 <wirehead_> shardy: contribute.  Obviously, you can't just copy-paste our code into yours, given that it's built around Twisted and Cassandra.
20:15:14 <wirehead_> where I recognize that 'our' and 'yours' are really 'our' and 'ours'
20:15:33 <wirehead_> But I would view it as a failure if we're maintaining long-term Otter and Heat AS.
20:15:58 <shardy> wirehead_: Ok, I've not looked into the details but wanted to clarify if you're still onboard with the plan of incrementally adding features/api etc to heat, with a view to separating the AS functionality potentially later
20:16:09 <shardy> rather than proposing a new project from the outset
20:16:24 <zaneb> shardy: as I understand it, this was a response to our request to open the source :)
20:16:31 <shardy> the former approach is what we discussed at summit, but things have been very quiet since ;)
20:16:59 <SpamapS> Sounds to me like the team had a deadline, hit it, and then open sourced what they produced.
20:17:03 <sdake> wirehead_ a good first go would be to incorporate the api you have developed into heat proper
20:17:04 <zaneb> shardy: so that we can see the direction things are heading before the code arrives
20:17:06 <shardy> zaneb: aha, I misinterpreted it as more of a project unveiling, my bad :)
20:17:12 <SpamapS> But now the plan is to contribute what you've learned to Heat AS ?
20:17:41 <wirehead_> The plan is that Heat will take up our AS API and learnings.  And overall, Heat will be able to scale to Rackspace-hosted scale.
20:17:42 <SpamapS> As in, "we tried X, it does not work. Do Y instead."
20:18:18 <zaneb> shardy: yah, I think Adrian said that accidentally creating that impression was one of the reasons they were reluctant to release the code in the first place
20:18:19 <SpamapS> wirehead_: ah that too. One API to rule them all. :)
20:18:44 <wirehead_> I'm a little slow with the typing, yes
20:18:57 <wirehead_> And if the Heat API changes, I'd like to see the Otter API change at the same time.
20:19:10 <wirehead_> But that's something that we've had chats with radix and therve about
20:19:15 <wirehead_> (they are both otherwise occupied)
20:19:24 <sdake> otter graphic shell game: Heat CloudWatch -> Ceilometer, Otter AS -> Heat ;)
20:19:32 <shardy> wirehead_: Ok, cool, well let's all take a look and we can continue discussions, but if you have specific stuff you want to happen for havana, e.g. the API, it would be good to start getting some details defined so we know what we're aiming for
20:19:51 <wirehead_> But, as you presume, RAX wrote that to meet a short-term important deadline.
20:21:36 <shardy> Ok, well thanks for the heads-up, let's all take a look and continue discussion on the ML over the next few days
20:21:45 <wirehead_> Sure thing.
20:21:47 <shardy> asalkeld: you had something to raise?
20:21:51 <asalkeld> re: hacking rules
20:22:08 <asalkeld> are we required to implement all of them?
20:22:14 <asalkeld> as an openstack project
20:22:38 <shardy> I thought most projects skipped some, but I'm not sure
20:22:38 <zaneb> asalkeld: I think every project has its own list of ones they ignore, don't they?
20:23:00 <asalkeld> zaneb, but is that cos they just haven't got there?
20:23:11 <asalkeld> or they never want to go there?
20:23:20 <zaneb> I don't know
20:23:24 <SpamapS> It is worth implementing them all eventually.
20:23:26 <stevebaker> some projects object to some rules
20:23:31 <lifeless> asalkeld: you're not.
20:23:35 <SpamapS> But not with any kind of priority.
20:23:41 <asalkeld> maybe a ml discussion
20:23:43 <lifeless> most of them are sane. Some are batshit.
20:23:55 <sdake> nova implements all hacking rules
20:23:59 <lifeless> [I disagree with Spamaps on 'all eventually']
20:24:04 <asalkeld> no it doesn't
20:24:09 <shardy> IMO features are more important than cosmetic refactoring atm
20:24:27 <asalkeld> I have added the comments in tox.ini
20:24:43 <SpamapS> shardy: the more features that are completed with the rules already in place, the less refactoring churn there is later.
20:24:48 <asalkeld> maybe we need comments like "(we do not intend doing X because ...)"
20:24:58 <SpamapS> so perhaps I can revise all to "all that you don't think are batshit"
20:25:27 <wirehead_> I'd suggest "challenge all that you do think are batshit" but I'm a realist and know what roads that lead down. :P
20:25:59 <shardy> SpamapS: that is true, but we already have a huge pile of stuff to do, and most of it's going pretty slowly
20:26:18 <asalkeld> so I simply want to: 1) determine whether we can ignore some rules long term, 2) if so, make that clear in tox.ini
20:26:31 <SpamapS> shardy: agreed. It's not a priority, but if it makes the code more readable, it is worth doing _eventually_. And it costs more the longer you wait.
20:26:32 <shardy> so if there's quick cosmetic stuff, fine, but IMHO not worth spending days/weeks on
20:26:47 <stevebaker> shardy: +1
20:26:49 <SpamapS> I doubt any of the rules would take days.
20:27:09 <zaneb> +1 I support anything that makes the code better
20:27:16 <asalkeld> so I felt a bit off color and did some (not much brain power required)
20:27:19 <zaneb> not the batshit stuff :)
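The split asalkeld describes, rules the project never intends to adopt versus rules it simply has not got to yet, could be recorded directly in tox.ini. A hypothetical sketch of such an annotated ignore list follows; the specific codes and reasons are illustrative, not Heat's actual configuration:

    [flake8]
    # Rules we do not intend to adopt, with reasons:
    #   H302  import only modules - importing classes/functions reads better here
    # Rules we simply have not got to yet (fair game for cleanup patches):
    #   E125  continuation line indentation
    #   H404  docstring formatting
    ignore = E125,H302,H404
    show-source = true
    exclude = .venv,.git,.tox,dist,*egg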
20:28:11 <stevebaker> so I've started looking at native replacements for heat-cfntools, specifically to meet tripleo's current needs (which afaict is a replacement cfn-hup and cfn-signal)
20:28:35 <SpamapS> stevebaker: right.
20:29:15 <sdake> #link https://wiki.openstack.org/wiki/PTLguide#Answers
20:29:34 <sdake> that link suggests we should send people to ask.openstack.org for q&a support
20:29:42 <sdake> I presume to provide a record
20:30:24 <shardy> stevebaker: can we plug in a backend (instead of boto) which talks to the ReST API, and figure out the waitcondition API interface?
20:30:33 <stevebaker> aws waitconditions URLs are actually S3 buckets, but I'm assuming that installing swift in a tripleo undercloud is a non-starter
20:31:33 <asalkeld> object store (the simple one)
20:31:36 <asalkeld> ?
20:32:26 <zaneb> asalkeld: I'm guessing the fancy pre-signed expiring URLs we want are only in Swift
20:32:32 <stevebaker> shardy: so evolve heat-cfntools to optionally use native APIs? I had assumed that was a non-starter if we were switching to the aws tools port
20:32:39 <SpamapS> stevebaker: swift would be ok. I don't see why that is necessary when we have a REST API though.
20:32:53 <lifeless> stevebaker: we can do swift in the undercloud, but the very start of the undercloud is a single machine.
20:32:55 <shardy> stevebaker: heat-cfntools could be ported to the native API
20:33:01 <lifeless> stevebaker: so it would be a little odd :)
20:33:40 <zaneb> SpamapS: because we have to effectively reimplement it (incl security stuff) if we want to use our own ReST API
20:33:53 <shardy> stevebaker: then some potential future aws tools port could talk to the cfn api I guess
20:33:58 <lifeless> zaneb: we can't factor it out into oslo?
20:34:09 <zaneb> lifeless: swift? ;)
20:34:17 <shardy> IMO there'd be more justification for maintaining heat-cfntools long term if it talks to the native API
20:34:27 <stevebaker> SpamapS: it comes down to how to do auth. A signed url would require no keystone auth. I'm actually OK with doing keystone calls from instances
20:35:02 <SpamapS> Why can't we give the tools a trust and a URL to use that trust in?
20:35:04 <asalkeld> but is everyone else ok with that?
20:35:11 <shardy> stevebaker: I hesitate to say this, but could we just insert the ec2token paste filter in the chain for the native API?
20:35:22 <lifeless> zaneb: from swift into oslo
20:35:28 <shardy> then we have presigned URLs etc which will work exactly the same as they do now
20:35:36 <shardy> even if it's an interim solution
20:35:47 <zaneb> can I just point out that the wait conditions don't use cfn-tools, only curl?
20:36:09 <shardy> zaneb: the problem is they depend on the CFN api, and ec2token authentication
20:36:15 <stevebaker> shardy: how about a dedicated pipeline just for native waitconditions
20:37:05 <SpamapS> zaneb: and we're also talking about metadata access.
20:37:13 <shardy> stevebaker: wfm, no point in reinventing this if we can reuse the existing mechanism, but there may be resistance due to the awsishness
20:37:47 <shardy> I'm assuming keystone doesn't provide any native signing functionality?
20:37:52 <shardy> signing/verification
20:38:17 <SpamapS> trusts?
20:38:46 <shardy> SpamapS: AFAICT trusts don't solve this problem yet, as you can't limit the scope to a specific endpoint, or action
20:38:49 <stevebaker> another option is to replicate a swift temp url in our API, then swift would be optional
20:38:50 <randallburt> do I have to have the ec2 extensions on for the existing ec2token stuff to work or is that simply a heat internal thing?
20:38:58 <shardy> they can be set to expire tho, so they may solve part of the problem
20:39:11 <SpamapS> shardy: so trust is just about full identity?
20:39:17 <shardy> SpamapS: more work to do basically
20:39:20 <SpamapS> shardy: not limited policy?
20:39:29 <zaneb> my preference would be 1) use swift, 2) copy swift like stevebaker just said
20:39:47 <shardy> SpamapS: you can drop roles, but you can't specify action/endpoint level granularity (yet)
20:40:09 <SpamapS> mmk
20:40:16 <asalkeld> and SpamapS are you ok with only readable metadata?
20:40:46 <SpamapS> well let's not design it now, but suffice to say, the boto/keystone/heat relationship is something I'd like to see go away sooner rather than later.
20:40:47 <radix> boo. sorry I missed the conversation about otter earlier, I had another meeting
20:40:57 <SpamapS> asalkeld: I require only readable metadata :)
20:41:04 <asalkeld> cool
20:41:19 <SpamapS> asalkeld: writable would make things tricky
20:41:21 <zaneb> I don't think writable metadata is safe in general
20:41:27 <shardy> SpamapS: I need to add keystoneclient support before I can fully investigate it, but for our purposes, it's only halfway to where we need it so far
20:41:33 <shardy> trusts that is
20:42:20 <stevebaker> shardy: maybe just focus on our other trusts use cases
20:42:58 <shardy> stevebaker: that's my plan, and create a wishlist of remaining functionality when I more fully understand what's there now
20:43:17 <shardy> planning to get into it as soon as I get the suspend-resume patches merged
20:43:29 * SpamapS has to run
20:43:40 <stevebaker> So in summary, reading metadata and signalling wait conditions can be done with urls that are passed to the instance, and those urls may be swift containers, or something we write which replicates that?
20:44:05 <shardy> stevebaker: yes
20:44:06 <zaneb> stevebaker: +1
20:44:19 <asalkeld> +1
20:44:45 <stevebaker> ok, sounds like the replacement for heat-cfntools is curl :)
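In-instance signalling then needs nothing more than an HTTP PUT of the usual wait condition JSON body to a pre-signed URL handed to the server via metadata or user-data. A minimal Python sketch of what that curl call amounts to, with made-up URL and handle values:

    # Hypothetical in-instance signaller; roughly equivalent to
    #   curl -X PUT -H 'Content-Type:' --data-binary '<json>' '<presigned-url>'
    import json
    import sys

    import requests

    def signal(presigned_url, status="SUCCESS",
               reason="Configuration Complete", unique_id="0", data=""):
        body = json.dumps({
            "Status": status,       # SUCCESS or FAILURE
            "Reason": reason,
            "UniqueId": unique_id,  # distinguishes multiple signals to one handle
            "Data": data,
        })
        # The pre-signed URL is typically signed without a Content-Type,
        # so send an empty one rather than letting the library add its own.
        resp = requests.put(presigned_url, data=body,
                            headers={"Content-Type": ""})
        resp.raise_for_status()

    if __name__ == "__main__":
        signal(sys.argv[1])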
20:44:59 <adrian_otto> will the eventual consistency of a swift container be an issue?
20:45:40 <zaneb> adrian_otto: eventually
20:45:42 <adrian_otto> I imagine that if you used one of those for a wait condition backing store that you might end up in a race
20:46:09 <asalkeld> you might have to retry
20:46:21 <asalkeld> but doubt race
20:46:43 <zaneb> wait conditions only append data, but I don't know how that works in swift/s3
20:46:53 <adrian_otto> maybe a poor word choice, I mean that different clients may not see a consistent view of the state of that condition.
20:46:57 <shardy> yeah, one writer, one reader, so only problem would be some additional latency?
20:47:01 <stevebaker> all you can do with swift temp urls is GET or PUT
20:47:31 <shardy> adrian_otto: that depends on the sharding strategy we end up with for multiple engines
20:47:52 <zaneb> stevebaker: maybe we do need our own version of that then
20:48:21 <stevebaker> at least we have an api to copy
20:48:32 <stevebaker> even if it is sha1
20:48:40 <shardy> Yeah, looking at that seems like a good start
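The "api to copy" here is small: a Swift-style temp URL is just the object path plus an expiry time and an HMAC-SHA1 signature over the method, expiry and path, computed with a secret the service keeps. A rough sketch of generating one on the service side, with invented host, path and key:

    # Sketch of Swift-TempURL-style pre-signed URL generation; not Heat code.
    import hmac
    import time
    from hashlib import sha1

    def make_temp_url(host, path, key, method="PUT", ttl=3600):
        expires = int(time.time() + ttl)
        # Swift's TempURL middleware signs exactly this three-line string.
        hmac_body = "%s\n%s\n%s" % (method, expires, path)
        sig = hmac.new(key, hmac_body, sha1).hexdigest()
        return "%s%s?temp_url_sig=%s&temp_url_expires=%s" % (
            host, path, sig, expires)

    url = make_temp_url("https://swift.example.com",
                        "/v1/AUTH_demo/waitcond/stack1/handle1",
                        "secret-signing-key")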
20:49:16 <shardy> 10mins left, anything else?
20:49:22 <sdake_> ask.openstack.org
20:49:46 <sdake_> whoever wrote that ptl guide thinks projects need to guide people there
20:49:55 <sdake_> i guess to create a record
20:50:06 <sdake_> i notice i end up answering the same question several times a week
20:50:30 <sdake_> but if we use that we have to maintain it - eg answer questions there
20:50:41 <sdake_> point launchpad qs at ask.openstack.org etc
20:50:57 <shardy> sdake_: good point, but most of the time people drop into IRC looking for answers with a partial question, so we have to start the discussion to fully define the problem
20:51:18 <shardy> ah, so instead of LP Q's, got it
20:51:19 <sdake_> can do that in ask.openstack.org as well
20:51:21 <sdake_> like a bug report
20:51:29 <zaneb> it's always the same problem, they won't connect their computers to the damn internet
20:51:38 * sdake_ giggles at zaneb
20:51:56 <sdake_> we have the same troubleshooting tips
20:52:01 <sdake_> but keep repeating them
20:52:13 <sdake_> sucks up time that could be spent coding
20:52:34 <shardy> Maybe we need a "fix your nova networking" bot ;)
20:52:39 <sdake_> ask.openstack.org is async not sync
20:53:05 <sdake_> irc support is a bit distracting when in the middle of software dev
20:53:14 <shardy> sdake_: OK, good suggestion, lets try directing people there and see how it goes
20:53:30 <sdake_> need to also answer questions too :)
20:53:31 <shardy> most people seem to want an immediate reaction, but definitely worth trying
20:53:50 <sdake_> there are a few heat qs on there without answers now
20:54:12 <shardy> sdake_: can you setup alerts, like for LP Q's?
20:54:26 <shardy> the LP email is normally what prompts me to look at those
20:54:27 <sdake_> i don't think there is a way but someone in infra may know
20:54:42 <sdake_> good RFE for mordred :)
20:55:03 <zaneb> sdake_: maybe stick a notice in the topic on #heat too
20:55:12 <sdake_> zaneb i can do that
20:55:16 <radix> I think I'll post a thing to openstack-dev about autoscaling/otter
20:55:52 <shardy> radix: sounds good, please do
20:56:10 <shardy> anything else, or shall we finish?
20:56:37 <shardy> Ok, thanks all
20:56:42 <shardy> #endmeeting