20:01:26 <stevebaker> #startmeeting heat
20:01:27 <openstack> Meeting started Wed Jun  3 20:01:26 2015 UTC and is due to finish in 60 minutes.  The chair is stevebaker. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:29 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:32 <openstack> The meeting name has been set to 'heat'
20:01:37 <stevebaker> #topic rollcall
20:01:38 <asalkeld> o/
20:01:42 <pas-ha> o/
20:01:44 <dgonzalez> hi
20:01:48 <skraynev_> o/
20:01:53 <rpothier> o/
20:02:01 <karolyn> o/
20:02:06 <jruano> hi
20:02:20 <zaneb> \o
20:02:43 <skraynev_> tag is closed ...
20:02:53 <stevebaker> #topic Adding items to agenda
20:03:11 <stevebaker> The alt meeting time went really well, good turnout
20:03:21 <ryansb> great
20:03:31 <stevebaker> #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282015-06-03_2000_UTC.29
20:03:46 <zaneb> that makes a nice change :)
20:04:03 <zaneb> and as a bonus, I didn't have to get up ;)
20:04:24 <asalkeld> lucky you 6am here
20:05:07 <stevebaker> anything for the agenda?
20:05:29 <skraynev_> I have one question and one request for review :)
20:05:40 <stevebaker> #topic heat reviews
20:05:43 <stevebaker> #link https://etherpad.openstack.org/p/heat-reviews
20:05:53 <stevebaker> skraynev_: chuck it in :)
20:07:00 <stevebaker> There is only one proposed fix for a High bug, and it may be a bit stale. I'll ask Jamie to refresh it when he is about
20:07:59 <pas-ha> oh, there is a new bug
20:08:09 <pas-ha> looks like our kilo gate is wedged
20:08:15 <asalkeld> and if there are some TripleO guys about to help with the use case in https://review.openstack.org/#/c/134848/
20:08:17 <kairat_kushaev1> yep
20:09:10 <skraynev_> stevebaker: agenda is updated
20:09:29 <stevebaker> asalkeld: should an external resource even allow properties to be set?
20:09:57 <asalkeld> stevebaker: well it's a pain to remove and add them - I thought
20:10:12 <stevebaker> asalkeld: ah, so they'll just be ignored
20:10:26 <asalkeld> also messy to remove the validation that defines the required stuff
20:10:39 <asalkeld> (so yes, just ignore)
20:10:56 <asalkeld> i won't call handle_update etc..
20:10:57 <zaneb> stevebaker: in a pre-convergence world I think it would have to
20:11:29 <zaneb> to establish the baseline for a future update if you make it a non-external resource again
20:11:39 <zaneb> (disclaimer: I haven't read the spec)
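(For context, a minimal sketch of the behaviour asalkeld and zaneb describe above, assuming a toy Resource class rather than Heat's real one - all names here are illustrative, not taken from the spec under review: an external resource accepts and stores properties as a baseline for future updates, but ignores property changes and never calls handle_update.)

    # Toy sketch (not Heat code): an "external" resource records the
    # new properties so a later switch back to a managed resource
    # still has a baseline to diff against, but it never touches the
    # real-world object.

    class Resource(object):
        def __init__(self, properties, external=False):
            self.properties = dict(properties)
            self.external = external

        def handle_update(self, prop_diff):
            print('updating real resource: %s' % prop_diff)

        def update(self, new_properties):
            old, self.properties = self.properties, dict(new_properties)
            if self.external:
                return  # properties are recorded but otherwise ignored
            diff = {k: v for k, v in new_properties.items()
                    if old.get(k) != v}
            if diff:
                self.handle_update(diff)

    r = Resource({'flavor': 'm1.small'}, external=True)
    r.update({'flavor': 'm1.large'})  # no handle_update call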
20:11:47 <stevebaker> speaking of, https://review.openstack.org/#/c/180022 is the only convergence review I'm aware of, and we could possibly try switching it on once it lands
20:12:02 <zaneb> hahaha :)
20:12:15 <asalkeld> we might want to actually run it :-O
20:12:23 <stevebaker> i mean obviously nothing will work...
20:12:32 * asalkeld still got to try that, maybe today
20:12:38 <stevebaker> switch on locally to fix all the things I mean
20:12:52 <asalkeld> yeah, I'll see what happens today
20:13:02 <zaneb> https://review.openstack.org/#/c/163132/ is also convergence
20:13:48 <stevebaker> zaneb: got it
20:14:05 <pas-ha> or we can have a do-not-merge-me patch that switches it on and see how func.gate reacts to new code being merged :)
20:14:36 <stevebaker> #topic High priority bugs
20:14:37 <asalkeld> pas-ha: yeah, should be easy
20:14:41 <zaneb> +1
20:15:26 <stevebaker> If you refresh the agenda the item is hyperlinked. Too long to paste here
20:16:04 <stevebaker> I just wanted to do a quick scan of High,Critical bugs which haven't been started yet, just in case someone is inspired to work on one
20:16:13 <zaneb> #link https://bugs.launchpad.net/heat/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.importance%3Alist=CRITICAL&field.importance%3Alist=HIGH&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search
20:16:25 <zaneb> you're right, it *is* too long
20:16:26 <stevebaker> ... or bump it down to Medium
20:16:34 <ryansb> zaneb: o.O
20:16:41 <stevebaker> zaneb: whhhhhhhy
20:16:59 <zaneb> stevebaker: it sounded like a sort of challenge
20:17:02 <skraynev_> btw: https://bugs.launchpad.net/heat/+bug/1388047 looks like part of an existing spec.
20:17:02 <openstack> Launchpad bug 1388047 in heat "Validation should catch unavailable resources" [High,Triaged] - Assigned to Tetiana Lashchova (tlashchova)
20:17:12 <skraynev_> I mean conditionally registered resources...
20:17:29 <pas-ha> yes it is
20:17:31 <stevebaker> Does anyone want to tackle https://bugs.launchpad.net/heat/+bug/1459837 ? It is really affecting TripleO
20:17:31 <openstack> Launchpad bug 1459837 in heat "Error messages from nested stacks are awful" [High,Triaged]
20:17:36 <asalkeld> i can try fix the error messages I messed up
20:17:45 <asalkeld> bug 1459837
20:18:14 <asalkeld> stevebaker: how long until L1?
20:18:58 <stevebaker> #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
20:19:01 <zaneb> I wanna say like 4 weeks
20:19:06 <stevebaker> June 23-25
20:19:21 <stevebaker> more like 3
20:19:32 <asalkeld> ok thanks
20:19:34 <zaneb> 3 is like 4
20:19:37 * asalkeld bookmarking that
20:20:47 <stevebaker> ok, moving on
20:20:49 <skraynev_> https://bugs.launchpad.net/heat/+bug/1433340 - what about this bug?
20:20:49 <openstack> Launchpad bug 1433340 in heat "stack actions do not guarantee that the stack will be placed into IN_PROGRESS by the time they return" [High,Triaged]
20:20:50 <stevebaker> #topic update about functional test for heatclient
20:21:03 <skraynev_> I have not seen it on the gates for a couple of weeks :)
20:21:18 <skraynev_> ok. this item from me.
20:21:25 <zaneb> yeah, that one would be *really* nice to fix
20:21:26 <stevebaker> asalkeld, skraynev_: Is that a duplicate? It sounds like the one we closed yesterday
20:22:00 <asalkeld> stevebaker: that's similar, but not the same
20:22:15 <skraynev_> according to the results for https://review.openstack.org/#/c/180539/ - I suppose the tests pass :)
20:22:15 <asalkeld> that's getting into in-progress
20:22:53 <skraynev_> so if somebody reviews it, that would be good; I plan to move the experimental job to the gate (non-voting)
20:23:21 <asalkeld> nice skraynev_
20:23:22 <skraynev_> asalkeld: I have not re-checked it. Maybe it works?
20:23:36 <skraynev_> asalkeld: ^ it's about bug :)
20:23:45 <skraynev_> not about functional tests
20:24:02 <stevebaker> skraynev_: yeah, lets make it non-voting. I wasn't even aware of the experimental job
20:24:32 <skraynev_> stevebaker: ok. so I will upload a patch for it.
20:25:14 <skraynev_> that is all for this item ;)
20:25:34 <stevebaker> #topic question about destroy behavior of outputs
20:25:44 <skraynev_> again from me.
20:26:49 <skraynev_> so the idea is that sometimes using outputs may be crucial for stacks.
20:27:08 <skraynev_> if we have a template with 100 VMs and 100 output values
20:27:20 <skraynev_> which ask for the IP or ID of these VMs
20:27:20 <stevebaker> skraynev_: do you mean that outputs are not available during IN_PROGRESS?
20:27:31 <skraynev_> no
20:28:00 <asalkeld> it's slow?
20:28:03 <skraynev_> that it can not scale for a big cloud
20:28:08 <skraynev_> asalkeld: right
20:28:22 <pas-ha> for a big stack you mean
20:28:22 <skraynev_> we queried all these instances and it failed with a timeout error
20:28:42 <skraynev_> pas-ha: big cloud with big stacks :)
20:28:43 <pas-ha> heat stack show becomes a tiny ddos
20:28:55 <skraynev_> pas-ha: right
20:29:01 <asalkeld> what timeout comes into play?
20:29:02 <stevebaker> skraynev_: is each one resulting in a nova GET?
20:29:12 <pas-ha> stevebaker, yes
20:29:14 <skraynev_> asalkeld: 1 minute I suppose
20:29:27 <skraynev_> stevebaker: correct
20:29:30 <stevebaker> at least once heat has the value, it is memoized for the duration of the operation
20:29:55 <asalkeld> well we do have the caching  on the way
20:30:30 <stevebaker> skraynev_: so ID shouldn't result in a nova call, since IP is so often requested maybe we should be storing it in the Server resource_data
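(A minimal sketch of stevebaker's suggestion, using toy stand-ins for the Server resource and novaclient - method and attribute names here are illustrative, not Heat's real API: cache the IP in per-resource data at create time, so resolving the output later costs no nova GET.)

    # Toy sketch: FakeNova stands in for novaclient; _data stands in
    # for Heat's per-resource data table.

    class FakeServer(object):
        status = 'ACTIVE'
        networks = {'private': ['10.0.0.5']}

    class FakeNova(object):
        class servers(object):
            @staticmethod
            def get(server_id):
                return FakeServer()

    class Server(object):
        def __init__(self, nova):
            self.nova = nova
            self.server_id = None
            self._data = {}

        def check_create_complete(self, server_id):
            server = self.nova.servers.get(server_id)
            if server.status == 'ACTIVE':
                self.server_id = server_id
                # Cache the IP while we already hold the server record.
                self._data['ip'] = server.networks['private'][0]
                return True
            return False

        def resolve_attribute(self, name):
            if name in self._data:
                return self._data[name]  # no nova round trip
            # Uncached attributes still cost one GET per resolution.
            return getattr(self.nova.servers.get(self.server_id), name)

    s = Server(FakeNova())
    s.check_create_complete('abc')
    print(s.resolve_attribute('ip'))  # served from cached data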
20:30:56 <zaneb> I have sometimes thought that convergence should have like a dummy resource (or something, it could be a separate db table) for caching outputs
20:30:59 <stevebaker> hashtag fixedbyconvergence
20:31:04 <zaneb> so they'd be part of the dependency graph
20:31:23 <skraynev_> stevebaker: it only solves this case; what if it is something else (other resources with other attributes)?
20:31:28 <zaneb> stevebaker: no, it would require some extra development
20:31:39 <pas-ha> it seems it would be better to implement a sort-of yielding thing in stackresource (like we have for check_*_thing) when it asks for outputs of its child resources
20:31:49 <skraynev_> stevebaker:  "hashtag fixedbyconvergence"  lol )))
20:31:50 <zaneb> (not currently planned)
20:31:55 <pas-ha> that would smear the ddosing
20:31:58 <pas-ha> over time
20:32:28 <pas-ha> but the timeout problem will remain for sure
20:32:50 <asalkeld> no easy solution here...
20:33:17 <skraynev_> zaneb: I like the idea of a separate table and caching, but I don't think it will happen soon; or am I wrong?
20:33:32 <asalkeld> skraynev_: we have dogpile caching on the way
20:33:43 <asalkeld> not sure we need more mechanisms
20:33:49 <asalkeld> (for caching)
20:33:50 <skraynev_> asalkeld: yeah. We also discussed some batching approach ...
20:33:54 <zaneb> skraynev: no, you're correct
20:34:06 <zaneb> it won't be soon
20:34:26 <skraynev_> zaneb: convergence Phase 3 ? :)
20:34:35 <pas-ha> skraynev_, batching would help in create/update, not outputs resolution
20:35:04 <pas-ha> when no actual resource action is running
20:35:11 <zaneb> skraynev_: lol. well it would be possible to implement once phase 1 is working. phase 2a?
20:35:16 <skraynev_> asalkeld: maybe. We may try to extend the caching solution to this issue
20:36:01 <stevebaker> #topic open discussion
20:36:05 <asalkeld> i think that would be a good start skraynev_
20:36:09 <skraynev_> zaneb: I thought that get_reality and some related stuff is phase 2a, so I agree with 2b :)
20:36:31 <zaneb> sold
20:37:02 <skraynev_> asalkeld: I'm just unsure about the right approach for when we should update data in the cache
20:37:24 <skraynev_> asalkeld: on each request or periodically...
20:37:33 <asalkeld> skraynev_: not sure
20:37:47 <asalkeld> skraynev_: maybe on stack-check?
20:37:50 <skraynev_> zaneb: :)
20:38:00 <stevebaker> #chair zaneb
20:38:01 <openstack> Current chairs: stevebaker zaneb
20:38:12 <zaneb> dammit
20:38:19 <stevebaker> I need to go soon, so zaneb can endmeeting
20:38:20 <skraynev_> asalkeld: but we speak about stack-show...
20:38:52 <zaneb> skraynev_: so I think in convergence-world they should be refreshed when you trigger a converge
20:39:05 <skraynev_> stevebaker: ok. bye
20:39:16 <asalkeld> skraynev_: doesn't dogpile have ageing (you can set a time limit on the caching)?
20:39:40 <kairat_kushaev1> asalkeld: Yes, it has
20:39:48 <zaneb> asalkeld: memcache does assuming you are using that as the backend
20:39:55 <zaneb> other backends not so much iirc
20:40:06 <asalkeld> so the next thing that needs it will have to go get it
20:40:07 <skraynev_> asalkeld: but we want to get up-to-date info on an async stack-show request
20:40:28 <kairat_kushaev1> asalkeld: you can also invalidate some values using the dogpile
20:40:41 <kairat_kushaev1> invalidate = make old
20:40:51 <asalkeld> kairat_kushaev1: ok, thanks
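(A small dogpile.cache example of the expiration and invalidation behaviour described above; the memory backend and the 60-second expiry are just for illustration, heat would configure memcached or similar.)

    # dogpile.cache: values age out after expiration_time seconds, and
    # region.invalidate() marks everything stale ("make old") so the
    # next caller refreshes it on demand.

    from dogpile.cache import make_region

    region = make_region().configure(
        'dogpile.cache.memory',
        expiration_time=60,
    )

    @region.cache_on_arguments()
    def server_ip(server_id):
        print('fetching %s from nova' % server_id)  # the expensive call
        return '10.0.0.5'

    server_ip('abc')     # miss: does the expensive fetch
    server_ip('abc')     # hit: served from cache until it ages out
    region.invalidate()  # make everything old
    server_ip('abc')     # miss again: refetched on demand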
20:41:22 <skraynev_> zaneb: so manually, by command, and other commands should use the values from the DB?
20:41:45 <asalkeld> skraynev_: that's also a good idea, trying to parallelise the show output
20:42:05 <zaneb> skraynev_: in convergence world that makes sense to me
20:43:24 <skraynev_> asalkeld, zaneb: but the same issue remains - huge numbers of requests at the same time (after converge or after stack-show)
20:44:20 <zaneb> well with convergence we'll not be doing anything that we wouldn't have to anyway to do the converge
20:44:22 <skraynev_> hm... maybe update the cache step by step ...
20:44:36 <asalkeld> skraynev_: we should cache the attributes after create/update when we get the status
20:44:38 <zaneb> so I say we burn that bridge when we come to it
20:45:19 <zaneb> it's probably less polling-intensive than the create
20:45:57 <skraynev_> zaneb: sounds good
20:46:29 <skraynev_> asalkeld: and then update it in a separate task periodically ..
20:46:59 <asalkeld> skraynev_: honestly not sure, i am not usually a fan of periodic tasks
20:47:07 <pas-ha> asalkeld, +1
20:47:19 <asalkeld> they become messy to scale
20:47:23 <skraynev_> asalkeld: ok. I think we will try to use caching + upload a spec for deep discussion
20:47:30 <asalkeld> i'd suggest on demand
20:47:40 <pas-ha> I would say cache every attr but "show" after create/update
20:47:44 <skraynev_> asalkeld: got it. I tend to agree :(
20:48:29 <pas-ha> they usually are not the kind of thing that changes in-band by itself
20:48:43 <skraynev_> pas-ha: not sure, what you mean by "show" after ...
20:48:48 <pas-ha> and out-of-band changes - we wash our hands
20:48:55 <pas-ha> "show" attribute
20:49:04 <zaneb> -2 on periodic tasks
20:49:05 <pas-ha> which is everything an object has
20:49:42 <skraynev_> pas-ha: what about updating cache ?
20:50:09 <pas-ha> on demand if really needed. by check_reality for example
20:50:10 <zaneb> pas-ha: +1, and I wish we had implemented it that way from the beginning. slightly nervous about changing it now though
20:52:08 * asalkeld heading off .. sorting kids for school
20:52:40 <pas-ha> about the "show" attr - it 1) might be too huge, 2) it *might* contain things that change in-band, you never know for every client
20:53:00 <skraynev_> pas-ha: and again it does not solve the issue of several requests at the same time.... however, maybe it works sequentially now...
20:53:28 <skraynev_> pas-ha: unpredictable world!!
20:53:31 <skraynev_> :)
20:53:32 <pas-ha> skraynev_, several requests?
20:54:18 <skraynev_> pas-ha: one request per output (which references a property not stored in the Heat db)
20:55:24 <zaneb> any other discussion?
20:55:38 <pas-ha> if people put the "show" attr to hard use, they should ask us to add a separate attr for what they are using it for
20:55:40 <skraynev_> zaneb: none from me :)
20:55:42 <pas-ha> IMO
20:56:01 <zaneb> otherwise as your appointed overlord^W chair, I'm shutting this thing down :)
20:56:02 <pas-ha> zaneb, i'm good :)
20:56:05 <kairat_kushaev1> Guys, has anyone noticed that stable/kilo is broken?
20:56:17 <kairat_kushaev1> https://bugs.launchpad.net/heat/+bug/1461592
20:56:17 <openstack> Launchpad bug 1461592 in heat "stable\kilo is broken due to version conflict error" [Undecided,New]
20:56:39 <kairat_kushaev1> That's all from my side
20:57:12 <skraynev_> I think that pas-ha mentioned it above, but I'm not sure about the solution
20:57:27 <zaneb> Importance: Critical
20:57:44 <pas-ha> sorting out global requirements for kilo I guess
20:58:47 <zaneb> ok, thanks everyone
20:58:51 <zaneb> #endmeeting