Wednesday, 2014-10-15

*** tango|2 has quit IRC00:04
*** sarob has quit IRC00:04
*** ccrouch has quit IRC00:05
*** dims has joined #heat00:10
*** Qiming has joined #heat00:13
*** dims has quit IRC00:14
*** dims has joined #heat00:14
*** ccrouch has joined #heat00:16
*** sjmc7 has quit IRC00:20
*** Qiming has quit IRC00:22
*** randallburt has quit IRC00:29
*** sdake_ has joined #heat00:33
*** randallburt has joined #heat00:36
00:39 <miguelgrinberg> asalkeld: do you have a minute to chat about https://review.openstack.org/#/c/127699/?
00:40 *** alexpilotti has quit IRC
00:44 <asalkeld> ok
00:44 <asalkeld> just looking
00:44 *** cody-somerville has quit IRC
00:44 <miguelgrinberg> I think your suggested solution does not work
00:45 *** shakayumi has quit IRC
00:45 <miguelgrinberg> and for context, what we want to do here is to set endpoint_type to internalURL for all clients as a default. We found that the only way right now is to set every client separately; the defaults in [clients] are not used
00:46 <miguelgrinberg> because each [client_xxx] section has its own default set to externalURL
00:47 <asalkeld> ok, so what you need to know is "is the value set - non-default", not as i was reading "is there a config value of this attribute name"
00:47 <miguelgrinberg> correct
00:47 <asalkeld> does oslo.config not give us a mechanism to say "is this the default"?
00:47 <miguelgrinberg> and I achieved that by removing the defaults in the client specific sections
00:48 <miguelgrinberg> maybe I need to check again, didn't find how to do that
00:48 <asalkeld> https://github.com/openstack/oslo.config/blob/master/oslo/config/cfg.py
00:48 <asalkeld> just looking now
00:48 <miguelgrinberg> but in any case, what would be the point in having a default in [client_xxx] if it will never be used?
00:50 <asalkeld> i guess that's a fair point
00:51 <asalkeld> can't we set the default to the value of the [clients] section?
00:52 <asalkeld> miguelgrinberg, see https://github.com/openstack/oslo.config/blob/master/oslo/config/cfg.py#L231
00:52 <miguelgrinberg> Maybe, but based on the brief debugging session I did, I think at the time the defaults are set the config file hasn't been read yet.
00:52 *** Drago has quit IRC
00:52 <asalkeld> you can use $another-value
00:53 <asalkeld> so that the use of the config variable will be like any other config
00:53 <miguelgrinberg> okay, that might work
00:53 <asalkeld> it makes it easier to use
00:53 <miguelgrinberg> let me give that a try; if that works I'll resubmit with that. Thanks!
00:53 <asalkeld> np
00:55 *** spzala has joined #heat
00:57 <asalkeld> miguelgrinberg, :-(
00:57 <asalkeld> https://github.com/openstack/oslo.config/blob/master/oslo/config/cfg.py#L241-L244
00:57 <asalkeld> don't think that will work
00:57 <miguelgrinberg> oh crap
00:58 <asalkeld> totally
00:58 <asalkeld> was a nice idea tho'
00:58 <miguelgrinberg> yeah, would have been really nice
00:59 <miguelgrinberg> very explicit
00:59 <asalkeld> would be worth adding that to oslo.config so we can use it later
01:00 *** ramishra has joined #heat
01:01 <miguelgrinberg> I'll play with oslo.config a bit and see what can be done
01:02 <asalkeld> cool, we can merge what you have tho' and clean up later
01:02 <miguelgrinberg> asalkeld: that would be awesome, we would really like to have this fix in Juno.
01:06 <asalkeld> ok +2'd
01:07 <miguelgrinberg> thank you
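The fix miguelgrinberg describes (drop the hard-coded per-[client_xxx] defaults so the [clients] value applies) can be sketched with the stdlib configparser as a stand-in for oslo.config. The section and option names mirror Heat's config layout, but the helper itself is illustrative, not Heat's actual code.

```python
import configparser

# Heat-style layout: a global [clients] section plus per-client sections.
SAMPLE = """
[clients]
endpoint_type = internalURL

[clients_nova]
# no endpoint_type here, so the [clients] value should win

[clients_heat]
endpoint_type = publicURL
"""

def endpoint_type(cfg, client):
    """Resolve endpoint_type for one client, falling back to [clients].

    Illustrative only: with no hard-coded default in the per-client
    section, an unset value falls through to the [clients] default,
    which is the behaviour the review above is after.
    """
    section = 'clients_%s' % client
    if cfg.has_section(section) and cfg.has_option(section, 'endpoint_type'):
        return cfg.get(section, 'endpoint_type')
    return cfg.get('clients', 'endpoint_type', fallback='publicURL')

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
print(endpoint_type(cfg, 'nova'))  # internalURL (inherited from [clients])
print(endpoint_type(cfg, 'heat'))  # publicURL (explicit override)
```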
*** apporc has joined #heat01:12
*** zhiwei has joined #heat01:13
*** sdake_ has quit IRC01:13
*** Qiming has joined #heat01:14
*** ramishra has quit IRC01:14
*** ramishra has joined #heat01:16
*** Yanyanhu has joined #heat01:20
01:22 <ramishra> stevebaker: Hi
*** achanda has joined #heat01:24
*** EricGonczer_ has joined #heat01:24
*** rwsu has quit IRC01:30
*** Yanyanhu has quit IRC01:32
*** achanda__ has quit IRC01:32
*** EricGonczer_ has quit IRC01:32
*** gooblja_ has joined #heat01:32
*** EricGonczer_ has joined #heat01:32
*** ipolyzos_ has joined #heat01:32
*** erkules_ has joined #heat01:32
*** achanda_ has joined #heat01:32
*** zhiyan|afk has joined #heat01:32
*** Adri2000_ has joined #heat01:32
*** sileht_ has joined #heat01:32
*** stevebak` has joined #heat01:32
*** erkules has quit IRC01:32
*** rwsu has joined #heat01:33
*** ipolyzos has quit IRC01:33
*** gooblja has quit IRC01:33
*** zhiyan has quit IRC01:33
*** rushiagr_away has quit IRC01:33
*** stevebaker has quit IRC01:33
*** Adri2000 has quit IRC01:33
*** otoolee has quit IRC01:33
*** sileht has quit IRC01:33
*** john-n-seattle has quit IRC01:33
*** gooblja_ is now known as gooblja01:33
*** LiJiansheng has joined #heat01:33
*** achanda has quit IRC01:33
*** john-n-seattle has joined #heat01:34
*** achanda_ has quit IRC01:34
*** ramishra has quit IRC01:34
*** Yanyanhu has joined #heat01:34
*** dims has quit IRC01:34
*** zhiyan|afk is now known as zhiyan01:34
*** EricGonczer_ has quit IRC01:34
*** dims has joined #heat01:35
*** rushiagr_away has joined #heat01:35
*** achanda has joined #heat01:35
*** openstack has joined #heat01:43
01:52 <Qiming> stevebak`, there?
01:52 *** anteaya has joined #heat
01:52 *** openstackgerrit has joined #heat
01:52 <asalkeld> Qiming, might be lunch in NZ
01:52 <Qiming> aha, 4 hours diff from Beijing
*** LiJiansheng has quit IRC01:52
*** nosnos has joined #heat01:52
*** spzala has quit IRC01:52
*** rushiagr_away has joined #heat01:52
*** otoolee has joined #heat01:53
*** randallburt has quit IRC01:56
*** shakamunyi has joined #heat01:58
*** sdake_ has joined #heat02:00
*** LiJiansheng has joined #heat02:02
*** tonisbones has joined #heat02:05
*** tonisbones has quit IRC02:09
*** sdake has quit IRC02:10
*** larsks|alt is now known as larsks02:12
*** harlowja is now known as harlowja_away02:21
*** shakayumi has joined #heat02:24
*** shakamunyi has quit IRC02:27
openstackgerritliusheng proposed a change to openstack/heat: Log translation hint for Heat.contrib  https://review.openstack.org/10948402:27
*** Yanyanhu has quit IRC02:28
*** Yanyanhu has joined #heat02:28
openstackgerritA change was merged to openstack/heat: Add tox genconfig target  https://review.openstack.org/12844102:30
openstackgerritA change was merged to openstack/heat: Remove unused oslo lockutils module  https://review.openstack.org/12826702:31
*** shardy has quit IRC02:32
*** dims has joined #heat02:36
*** sdake_ has quit IRC02:37
*** dims has quit IRC02:40
*** sdake_ has joined #heat02:43
openstackgerritA change was merged to openstack/heat: Clarify snapshot deletion methods  https://review.openstack.org/11747302:44
openstackgerritA change was merged to openstack/heat: Make Rackspace Cloud DNS TTL constraints match API  https://review.openstack.org/12806102:44
openstackgerritA change was merged to openstack/heat: Adding tests for sahara client exeptions  https://review.openstack.org/12792702:44
02:56 <asalkeld> phew, reached my review limit
02:56 <zhiwei> Hi, asalkeld. I saw some log messages use dict(key=val) and others {'key': 'val'}. Which is preferred?
02:56 <asalkeld> zhiwei, I am not sure it's important to me
02:57 <asalkeld> whatever makes sense to the person writing it?
02:57 <zhiwei> but one of my forks want me to change dict() to {}
02:57 <zhiwei> and I don't want to change this.
02:57 <asalkeld> one of your "forks"?
02:57 <asalkeld> rebases?
02:58 <zhiwei> sorry.
02:58 <asalkeld> zhiwei, what do you mean by "one of my forks"?
02:58 <zhiwei> my work mates.
02:58 <asalkeld> i see
02:58 <asalkeld> tell him/her "get over it"
02:58 <asalkeld> ;)
02:59 <zhiwei> but he did not listen to me.
02:59 <asalkeld> more important stuff to worry about IMHO
02:59 <asalkeld> zhiwei, what's the review?
03:00 <asalkeld> i normally write code that looks like the code around it, so it fits in
03:00 <asalkeld> just try to match what's around what you are doing
03:01 <asalkeld> so when you are reading the code it looks consistent
03:01 <asalkeld> zhiwei, does that ^ seem reasonable to you?
03:01 <zhiwei> yes, I think it's ok for us.
03:02 <zhiwei> I changed `i = i + 1` to `i += 1` before, but I don't want to change this again.
03:03 <asalkeld> yeah, i think it's important as a reviewer to "pick your battles"
03:03 <asalkeld> and not get too picky
03:03 *** stevebak` is now known as stevebaker
03:04  * asalkeld is trotting off to have lunch - brb
03:10 <zhiwei> asalkeld: :)
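The dict() vs {} debate above is purely stylistic: both spellings build equal objects, though only the literal form accepts keys that are not valid Python identifiers. A quick illustration:

```python
# The two spellings the reviewers were debating produce equal dicts.
a = dict(key='val', count=1)
b = {'key': 'val', 'count': 1}
assert a == b

# One practical difference: dict() keyword arguments must be valid
# identifiers, so keys with dashes or dots force the literal form.
c = {'stack-name': 'demo'}   # fine
# dict(stack-name='demo')    # SyntaxError
print(a == b)  # True
```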
*** achanda has quit IRC03:14
*** achanda_ has joined #heat03:17
*** radez is now known as radez_g0n303:26
*** ramishra has joined #heat03:32
*** ramishra has quit IRC03:35
*** ramishra has joined #heat03:35
openstackgerritA change was merged to openstack/heat: Log translation hint for Heat.engine (part2)  https://review.openstack.org/12338703:36
openstackgerritA change was merged to openstack/heat: Log translation hint for Heat.engine (part3)  https://review.openstack.org/12338803:37
*** Goneri has quit IRC03:49
*** ramishra has quit IRC03:49
*** nosnos has quit IRC03:50
03:50 <tiantian> asalkeld: around?
03:50 <asalkeld> hi, yip
03:51 *** nosnos has joined #heat
03:51 <tiantian> https://bugs.launchpad.net/heat/+bug/1381298 please have a look
03:51 <uvirtbot> Launchpad bug 1381298 in heat "Autoscaling failed due to the roles" [Undecided,New]
03:51 *** achampion has quit IRC
03:51 <asalkeld> okie dokie
03:52 *** achampion has joined #heat
03:52 <asalkeld> tiantian, can I defer to shardy for that one?
03:53 <asalkeld> but sounds about right
03:53 <asalkeld> reminds me a little of another bug tho'
03:53 <tiantian> And I will commit a patch for it, but I'm not sure it's the right way
03:54 *** wpf has quit IRC
03:55 <tiantian> I found that when we create a trust context, we only give the "heat_stack_owner" role; I think we should inherit the roles
03:55 *** nosnos has quit IRC
04:01 <openstackgerrit> huangtianhua proposed a change to openstack/heat: Inherit roles for create_trust_context()  https://review.openstack.org/128509
04:02 <asalkeld> tiantian, https://bugs.launchpad.net/heat/+bug/1376562
04:02 <uvirtbot> Launchpad bug 1376562 in heat "Can't delegate optional roles" [High,Triaged]
04:04 *** swygue has joined #heat
04:06 <asalkeld> tiantian, would you be fixing both at the same time?
04:07 *** renlt has joined #heat
04:07 <tiantian> asalkeld: https://review.openstack.org/128509, is it ok to handle it in this way?
04:08 <asalkeld> tiantian, that looks good to me
04:09 <asalkeld> this seems to be shardy's suggestion in bug 1376562
04:09 <uvirtbot> Launchpad bug 1376562 in heat "Can't delegate optional roles" [High,Triaged] https://launchpad.net/bugs/1376562
04:09 <asalkeld> his "option 1"
04:10 <tiantian> yeah, I see it :)
04:10 <asalkeld> tiantian, i think reference that bug in your patch too
04:10 <asalkeld> so steven picks up on it
04:11 <tiantian> I will add shardy to review my patch, and let's see what he suggests
04:11 <asalkeld> tiantian, a test would be good too
04:12 <tiantian> tks, will add the test :)
04:15 <openstackgerrit> huangtianhua proposed a change to openstack/heat: Inherit roles for create_trust_context()  https://review.openstack.org/128509
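tiantian's patch replaces the hard-coded heat_stack_owner delegation with roles inherited from the caller. A hedged sketch of that policy (the function name and the trusts_delegated_roles option are used for illustration; this is not Heat's actual code):

```python
def roles_to_delegate(user_roles, trusts_delegated_roles=None):
    """Choose the roles to delegate when creating a Keystone trust.

    Illustrative sketch: if the operator configured an explicit list
    (as a heat.conf trusts_delegated_roles option would), delegate only
    those; otherwise inherit every role the requesting user holds,
    rather than hard-coding a single role like heat_stack_owner.
    """
    if trusts_delegated_roles:
        # Only delegate roles the user actually has.
        return [r for r in user_roles if r in trusts_delegated_roles]
    return list(user_roles)

print(roles_to_delegate(['admin', 'member']))              # inherit all
print(roles_to_delegate(['admin', 'member'], ['member']))  # configured subset
```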
*** ramishra has joined #heat04:20
*** dims has joined #heat04:25
*** ramishra has quit IRC04:26
*** achampion has quit IRC04:28
*** achampion has joined #heat04:29
*** dims has quit IRC04:29
*** nosnos has joined #heat04:30
*** andreaf has joined #heat04:30
*** ramishra has joined #heat04:32
*** wpf has joined #heat04:47
*** sanjayu has joined #heat04:47
*** achanda_ has quit IRC04:49
*** achanda_ has joined #heat04:52
*** rushiagr_away is now known as rushiagr04:59
*** sdake_ has quit IRC05:02
*** arunrajan has joined #heat05:03
*** Yanyan has joined #heat05:11
*** rushiagr is now known as rushiagr_away05:11
*** jyoti-ranjan has joined #heat05:14
openstackgerritMike Spreitzer proposed a change to openstack/heat: Implement AZ spanning for AWS ASGs  https://review.openstack.org/11613905:14
*** saju_m has joined #heat05:14
*** lazy_prince has quit IRC05:15
*** Yanyanhu has quit IRC05:15
openstackgerritMike Spreitzer proposed a change to openstack/heat: Implement AZ spanning for AWS ASGs  https://review.openstack.org/11613905:16
*** rakesh_hs has joined #heat05:19
*** harlowja_at_home has joined #heat05:20
openstackgerritMike Spreitzer proposed a change to openstack/heat: Implement AZ spanning for AWS ASGs  https://review.openstack.org/11613905:23
openstackgerritEthan Lynn proposed a change to openstack/heat: Add unicode support for resource name  https://review.openstack.org/12815705:30
*** elynn has joined #heat05:31
*** Qiming_ has joined #heat05:31
*** rushiagr_away is now known as rushiagr05:32
*** Qiming_ has quit IRC05:33
*** Qiming_ has joined #heat05:34
*** Qiming has quit IRC05:34
*** harlowja_at_home has quit IRC05:38
*** Qiming_ has quit IRC05:38
*** sileht_ is now known as sileht05:38
*** sileht has joined #heat05:40
*** jprovazn has joined #heat05:42
openstackgerritAdrian Vladu proposed a change to openstack/heat-templates: Added Puppet Agent HOT  https://review.openstack.org/12856105:42
*** ifarkas has joined #heat05:43
*** achanda has joined #heat05:45
openstackgerritAdrian Vladu proposed a change to openstack/heat-templates: Added Puppet Agent HOT  https://review.openstack.org/12856105:46
*** nkhare has joined #heat05:48
*** ckmvishnu has joined #heat05:48
*** achanda_ has quit IRC05:49
*** kopparam has joined #heat05:49
*** unmeshg has joined #heat05:53
*** Goneri has joined #heat05:53
*** killer_prince has joined #heat05:56
*** killer_prince is now known as lazy_prince05:57
*** cmyster has joined #heat06:00
*** cmyster has quit IRC06:00
*** cmyster has joined #heat06:00
openstackgerritOpenStack Proposal Bot proposed a change to openstack/heat: Imported Translations from Transifex  https://review.openstack.org/12818806:03
*** saju_m has quit IRC06:05
*** zigo has joined #heat06:08
*** julienvey has joined #heat06:08
*** metral has quit IRC06:09
*** ramishra has quit IRC06:09
*** metral has joined #heat06:09
*** dims has joined #heat06:13
openstackgerritA change was merged to openstack/heat: Log translation hint for Heat.engine (part1)  https://review.openstack.org/10951206:16
*** dims has quit IRC06:18
*** arunrajan has left #heat06:21
06:22 <skraynev> good morning
06:22 <skraynev> hi asalkeld
06:25 <asalkeld> hi skraynev
*** erkules_ is now known as erkules06:27
*** ananta has joined #heat06:34
*** asalkeld has quit IRC06:35
*** d0m0reg00dthing has joined #heat06:39
*** d0m0reg00dthing has quit IRC06:40
*** tomek_adamczewsk has joined #heat06:44
*** tspatzier has joined #heat06:45
*** achanda has quit IRC06:45
*** swygue has quit IRC06:46
*** saju_m has joined #heat06:47
*** andreaf has quit IRC06:50
*** asalkeld has joined #heat06:52
*** Adri2000_ is now known as Adri200006:54
*** jcoufal has joined #heat06:55
06:57 <cmyster> morning
06:57 *** robklg has joined #heat
06:57 <skraynev> hi cmyster
*** jstrachan has joined #heat07:00
*** Guest86578 is now known as d0ugal07:04
*** d0ugal has quit IRC07:05
*** d0ugal has joined #heat07:05
*** ccrouch has quit IRC07:08
*** andersonvom has joined #heat07:08
*** ccrouch has joined #heat07:09
*** pas-ha has joined #heat07:11
07:11 <pas-ha> morning all :)
07:12 *** tspatzier__ has joined #heat
07:12 <asalkeld> hi pas-ha
*** sabeen1 has joined #heat07:15
*** tspatzier has quit IRC07:15
*** sabeen3 has quit IRC07:16
*** pasquier-s has joined #heat07:24
07:24 <cmyster> hi
07:25 <skraynev> hey pas-ha
*** shardy has joined #heat07:25
*** Yanyan has quit IRC07:30
*** mcwoods has joined #heat07:36
07:40 <gooblja> Is somebody familiar with oslo messaging? I am trying to get heat notifications similar to how it is done in ceilometer. I have a qpid queue, and I am trying it with the code http://paste.openstack.org/show/121183/ but I get the error oslo.config.cfg.DuplicateOptError: duplicate option: rpc_backend :) maybe somebody has a clue?
07:42 *** Yanyanhu has joined #heat
07:44 <gooblja> the error is shown when the function get_transport is called :(
07:46 <asalkeld> gooblja, remove the config option
07:47 <asalkeld> and just hardcode (for now) the exchange
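gooblja's DuplicateOptError comes from registering an option (rpc_backend) that oslo.messaging already registers on the same config object; asalkeld's fix is simply not to register it again. The collision can be mimicked with a toy registry (this stand-in is illustrative; real oslo.config behaves analogously when a conflicting option is registered twice):

```python
class DuplicateOptError(Exception):
    """Toy stand-in for oslo.config.cfg.DuplicateOptError."""

class ConfRegistry:
    """Minimal option-registry sketch; not the real oslo.config API."""
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        # Registering the same option name twice raises, which is what
        # happens when a script re-registers rpc_backend after the
        # messaging library has already registered it.
        if name in self._opts:
            raise DuplicateOptError('duplicate option: %s' % name)
        self._opts[name] = default

conf = ConfRegistry()
conf.register_opt('rpc_backend', default='rabbit')  # library's registration

try:
    conf.register_opt('rpc_backend', default='qpid')  # the script's copy
except DuplicateOptError as exc:
    print(exc)  # duplicate option: rpc_backend
```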
*** achanda has joined #heat07:57
*** liusheng has joined #heat07:58
*** tomek_adamczewsk has quit IRC07:59
*** tomek_adamczewsk has joined #heat08:00
*** jistr has joined #heat08:02
*** achanda has quit IRC08:02
*** tomek_adamczewsk has quit IRC08:02
*** tomek_adamczewsk has joined #heat08:03
*** derekh has joined #heat08:06
*** ishant has joined #heat08:10
*** alexpilotti has joined #heat08:11
*** jamiehannaford has joined #heat08:12
openstackgerritTetiana Lashchova proposed a change to openstack/heat: Create security group rules in loose loop  https://review.openstack.org/12823408:15
*** tomek_adamczewsk has quit IRC08:20
*** stannie has joined #heat08:30
*** kopparam has quit IRC08:30
*** rwsu has quit IRC08:31
*** kopparam has joined #heat08:31
08:35 <shardy> tiantian: hey, do you have a moment to discuss https://review.openstack.org/#/c/128509/2?
08:36 <shardy> It looks basically good, but I'd like to figure out a way to make it solve bug #1376562
08:36 <uvirtbot> Launchpad bug 1376562 in heat "Can't delegate optional roles" [High,Triaged] https://launchpad.net/bugs/1376562
openstackgerritA change was merged to openstack/heat: Log translation hint for Heat.contrib  https://review.openstack.org/10948408:40
*** fayablazer has joined #heat08:44
*** my_openstack_use has joined #heat08:49
08:53 <shadower> shardy: thanks for the resourcegroup patches, I'll have a look at them later today
08:54 <shardy> shadower: np, please let me know if they do what you need :)
08:54 <shardy> I'll push another revision shortly with the interface tweaks requested by stevebaker
08:54 *** julienvey has quit IRC
08:54 <shadower> shardy: yep, will do. SpamapS will have a thing or two to say, too, I'm sure
08:55 <shardy> shadower: Yeah, he left a positive comment on the review, which seems a good start :)
08:56 *** rwsu has joined #heat
08:56 <shadower> oh cool
08:56 *** julienvey has joined #heat
08:56 *** mkerrin has quit IRC
08:56 <shadower> shardy: there's no chance the patches will get into Juno Heat though, is there?
*** Putns has joined #heat08:56
*** lazy_prince is now known as killer_prince08:57
*** tomek_adamczewsk has joined #heat08:58
*** killer_prince has quit IRC09:00
openstackgerritHang Liu proposed a change to openstack/heat: Change ThreadGroupManager.remove_event() to be compatible with greenthread callback  https://review.openstack.org/12858009:00
*** lazy_prince has joined #heat09:00
*** sorantis has joined #heat09:02
*** jstrachan has quit IRC09:05
*** sergmelikyan has joined #heat09:05
09:05 <shardy> shadower: unfortunately no, it's too late for Juno
09:05 <shadower> oh well
*** nikunj2512 has joined #heat09:07
*** tomek_adamczewsk has quit IRC09:10
*** tomek_adamczewsk has joined #heat09:11
*** rushiagr is now known as rushiagr_away09:19
*** mkerrin has joined #heat09:20
*** jyoti-ranjan has quit IRC09:24
*** sorantis has quit IRC09:26
*** mkerrin has quit IRC09:26
*** lsmola has quit IRC09:26
*** jcoufal has quit IRC09:29
*** jcoufal has joined #heat09:29
openstackgerritIshant Tyagi proposed a change to openstack/heat: Add DB model for resource graph and also persist the graph.  https://review.openstack.org/12859009:31
*** renlt has quit IRC09:34
*** judf has quit IRC09:35
*** rushiagr_away is now known as rushiagr09:36
*** jyoti-ranjan has joined #heat09:37
*** lazy_prince is now known as killer_prince09:37
*** killer_prince is now known as lazy_prince09:40
09:42 <cmyster> shardy: hey, is there a list of TBD tests like we have (had) for Tempest, but for heat acceptance?
09:42 *** lsmola has joined #heat
09:46 <shardy> cmyster: "heat acceptance"?
09:46 <shardy> hey, btw :)
09:46 *** andreaf has joined #heat
09:47 <shardy> tbh I've lost interest in tempest recently since it became clear it's an unbelievably unproductive time-sink
09:48 *** vijayagurug has joined #heat
09:49 <cmyster> shardy: we talked about it some time ago, writing heat acceptance tests inside heat
09:49 *** lazy_prince is now known as killer_prince
09:50 <shardy> cmyster: Do you mean the functional tests?
09:50 <shardy> The idea, IIRC, was to require (or strongly encourage) new features to have an associated functional test
09:50 <cmyster> yes,
09:50 <cmyster> a group of them
09:51 <shardy> that should be possible for kilo, thanks to the work stevebaker has been doing moving things from tempest to the heat tree
09:51 <shardy> cmyster: so the answer is not yet then, but we can recategorize all the tempest stuff in the heat launchpad to be tagged "functional test" or something
09:52 <cmyster> each new feature will have a set of tests that we can call acceptance tests for feature X, and then when it's passing validation we can say it's accepted as a new feature, blah blah blah
09:52 <shardy> I think we need to get the functional test job voting first, then push on getting new tests written
09:52 <cmyster> where's the job?
09:52 <shardy> cmyster: I've not used quite that terminology before, but basically yes, that sounds like what we'd like to happen
09:53 *** andreaf has quit IRC
09:53 <cmyster> shardy: it's a QA term
09:53 <shardy> Originally the plan was to require tempest patches to be posted for features, but for obvious reasons that's not worked out
09:53  * cmyster nods
*** alexheneveld has joined #heat09:56
*** mkerrin has joined #heat10:00
*** jstrachan has joined #heat10:02
*** amakarov_away is now known as amakarov10:02
openstackgerritVisnusaran Murugan proposed a change to openstack/heat: Validation to avoid duplicate stack names per tenant  https://review.openstack.org/12339710:02
*** cdent has joined #heat10:04
*** saju_m has quit IRC10:08
*** andreaf has joined #heat10:09
*** andreaf has quit IRC10:09
*** andreaf has joined #heat10:10
*** elynn has quit IRC10:13
*** Yanyanhu has quit IRC10:13
*** killer_prince is now known as lazy_prince10:16
*** dims has joined #heat10:18
*** jstrachan has quit IRC10:21
*** jstrachan has joined #heat10:22
*** ramishra has joined #heat10:29
*** kopparam has quit IRC10:42
10:42 <ramishra> shardy: Hello, I would like your feedback on https://review.openstack.org/#/c/128182/
10:43 *** kopparam has joined #heat
10:44 <ramishra> shardy: please review it when you have time
*** ckmvishnu has quit IRC10:44
*** andreaf has quit IRC10:47
*** kopparam has quit IRC10:47
*** andreaf has joined #heat10:47
*** swygue has joined #heat10:51
*** saju_m has joined #heat10:52
*** saju_m has quit IRC10:54
*** saju_m has joined #heat10:54
*** mkollaro has joined #heat10:54
*** nkhare_ has joined #heat10:55
*** saju_m has quit IRC10:55
*** saju_m has joined #heat10:56
*** saju_m has quit IRC10:57
*** saju_m has joined #heat10:57
*** nkhare has quit IRC10:58
10:58 <shardy> ramishra: Hi! sure, will do
*** andreaf has quit IRC11:00
openstackgerritunmesh-gurjar proposed a change to openstack/heat: Database models and apis for convergence  https://review.openstack.org/10901211:09
*** jprovazn has quit IRC11:09
*** nikunj2512 has quit IRC11:10
*** kopparam has joined #heat11:11
*** ckmvishnu has joined #heat11:13
*** zhiwei has quit IRC11:16
*** ramishra has quit IRC11:24
*** ramishra has joined #heat11:25
*** ramishra has quit IRC11:26
*** ramishra has joined #heat11:26
openstackgerritunmesh-gurjar proposed a change to openstack/heat: Added observed properties to resource show output  https://review.openstack.org/11283811:27
*** apporc has quit IRC11:31
*** my_openstack_use has quit IRC11:32
11:33 <asalkeld> unmeshg, we should probably have somewhere people can give feedback on that design
11:33 <asalkeld> maybe an etherpad?
11:33 <asalkeld> on the wiki
11:34 <asalkeld> doh, it's right there
11:37 *** radez_g0n3 is now known as radez
11:38 *** ramishra has quit IRC
11:41 <unmeshg> @asalkeld: yeah... it's in the wiki
11:41 <asalkeld> yes, thanks, busy there now
*** nikunj2512 has joined #heat11:41
*** pasquier-s has quit IRC11:42
*** swygue has quit IRC11:43
*** ramishra has joined #heat11:47
*** andersonvom has quit IRC11:50
*** Qiming has joined #heat11:51
*** cmyster has quit IRC11:52
*** andersonvom has joined #heat11:53
*** andersonvom has joined #heat11:54
*** nkhare_ has quit IRC11:55
*** lazy_prince is now known as killer_prince11:55
*** ananta has quit IRC11:55
*** mspreitz has joined #heat11:55
*** unmeshg has left #heat11:56
*** inc0 has joined #heat11:57
*** rpothier has joined #heat11:57
11:57 <inc0> hello
11:57 <asalkeld> hi
11:57 *** ananta has joined #heat
11:58 *** rakesh_hs has quit IRC
11:58 *** unmeshg_ has joined #heat
11:59 *** rakesh_hs has joined #heat
11:59 *** tspatzier__ is now known as tspatzier
12:00 <inc0> https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg37483.html I've proposed this to be part of convergence, any thoughts guys?
*** ramishra has quit IRC12:00
*** killer_prince is now known as lazy_prince12:00
*** ananta has quit IRC12:01
*** jdandrea has quit IRC12:09
*** jdandrea has joined #heat12:09
*** EricGonczer_ has joined #heat12:10
*** lazy_prince is now known as killer_prince12:10
*** ananta has joined #heat12:11
*** dims has quit IRC12:11
*** dims has joined #heat12:12
*** EricGonczer_ has quit IRC12:14
*** dims_ has joined #heat12:14
*** alexheneveld has quit IRC12:15
*** dims has quit IRC12:16
*** aweiteka has joined #heat12:16
*** killer_prince is now known as lazy_prince12:23
*** bdossant has joined #heat12:24
*** lazy_prince is now known as killer_prince12:26
*** jdob has joined #heat12:27
*** alexheneveld has joined #heat12:28
*** pasquier-s has joined #heat12:34
*** ishant2 has joined #heat12:37
*** ramishra has joined #heat12:38
*** kopparam has quit IRC12:40
*** kopparam has joined #heat12:41
*** spzala has joined #heat12:41
12:44 <therve> inc0, why don't you want it to happen?
12:45 <therve> (sarcasm)
12:45 *** kopparam has quit IRC
12:46 <inc0> therve, explain please? I'm a code developer, I'm oblivious to such basic communication issues
*** spzala has quit IRC12:49
*** andreaf has joined #heat12:52
*** sanjayu has quit IRC12:53
*** simonc has joined #heat12:59
*** andreaf has quit IRC13:00
mspreitzananta: are you here?13:00
anantamspreitz...yeah...13:00
anantasorry i lost some context13:01
asalkeldg'night it is 11pm here13:01
mspreitzI want to ask about the future of OS::Heat::HARestarter13:01
*** asalkeld has quit IRC13:01
Qimingasalkeld, bye13:01
mspreitzDo you agree that when the time comes...13:01
skraynevasalkeld: g' night13:02
inc0mspreitz, one thing, how do you want to autoheal instance that is supposed to be failed?13:02
mspreitzwe can replace the implementation of HARestarter?13:02
Qimingmspreitz, many ways, in addition to delete + create13:02
mspreitzinc0: note that the pattern I have been following is health maint. on a user-chosen assembly, not necessarily a single resrouce13:02
*** andreaf has joined #heat13:03
*** ckmvishnu has left #heat13:03
mspreitzananta: my concern is that some people have said that HARestarter is inconsistent with convergence13:03
mspreitzananta: but we need to give users at least a transition period of 1 release13:03
anantamspreitz: i need to look at HARestarter13:04
mspreitzananta: or no need to transition, if we can change the implementation behind the resource names that they write in their templates13:04
anantaBTW, how is it related to convergence?13:04
ryansbmspreitz: I feel like changing behavior of HARestarter wouldn't be great (ambiguity, since users don't necessarily know what Heat version their provider runs) I'd much rather have a new resource & deprecate HARestarter13:04
ryansbinstead of having users think it does X where it actually does Y in their version.13:04
mspreitzLook at the Heat chat two weeks ago, and at the patch for deprecating HARestarter...13:04
mspreitzyou will see remarks, and who made them, about HARestarter being inconsistent with convergence13:05
anantaok13:05
zanebspoiler: it was me13:05
mspreitzryansb: exactly.  My whole concern here is treating users right13:05
mspreitzzane: do you think now that we could simply change the implemenation behind the buggy name?13:05
mspreitzand, separately, fix the name bug13:06
mspreitz(fixing the name bug can obviously be done with a proper transition period)13:06
ryansbThat seems like it'd be more confusing for users than introducing a new resource & deprecating HARestarter13:07
zanebso... it'll be the same except for the name and the implementation?13:07
*** swygue has joined #heat13:07
*** tonisbones has joined #heat13:07
mspreitzryansb: when I speak of changing the implementation, I mean to NOT change the net result, only how it is accomplished.  To users it would be a non-change.13:08
mspreitzzaneb: the name change would be a separate change, done in a way that does NOT introduce confusion.13:09
ryansbah, I see what you're saying. I thought you meant change as in "fix bugs & work a bit differently to make more sense" not just "fix bugs w/ no user facing change"13:09
mspreitzzaneb: can we leave the name change out of it for now, and focus on the harder problem first?13:09
zanebthe implementation isn't the problem though. *everything* is the problem. it's completely broken by design13:09
mspreitzzaneb: I recall you saying things like that before.  Which is why I am dubious with what asalkeld said at the end of today's meeting.13:10
mspreitznamely, that we could just change the implementation without users noticing13:10
*** rpothier has quit IRC13:11
mspreitzanant: ^^^ is what I want your opinion on13:11
mspreitzananta: I mean13:11
mspreitzananta: are you still here?13:11
anantayup13:12
inc0allright...and why can't we make convergence work in exactly the same way? (we get alarm->contionous observer gets alarm->contious observer restarts an instance)13:12
*** ramishra has quit IRC13:12
anantai was reading about HARestarter =) ... sorry my limited understanding is not helping me13:12
mspreitzIf you want to reconvene on this topic later, I am OK with that13:13
anantathe way I see is that the HARestarter is another resource type...so it should be handled differently in convergence13:13
inc0ananta, that would produce us exponential number of resources. HARestartedInstance, HARestartedNetwork ....13:14
inc0I'll add EvacuatedOnSharedStorageInstance ;)13:14
inc0couldn't we add for example something like handle_autoheal and that method itself will be plugin based?13:16
mspreitzI did not understand the thinking behind either ananta or inc0 remarks13:16
*** saju_m has quit IRC13:16
unmeshg_does it sound correct that convergence should not observe resources which are in HARestarter ?13:16
mspreitzananta: did you make a typo?  I do not understand what you tried to say13:16
mspreitzunmeshg_: I an concerned about mission creep for convergence13:17
inc0unmeshg_, my understanding is that it will be actually ceilometer which will observe resoruce13:17
QimingIIUC, the problem lies in this line self.stack.restart_resource(victim.name), while 'restart' means different things for different resource types.13:17
mspreitzunmeshg_: I think convergence should keep thinks like the heat template says.  Making sure that those things are really working is a higher level issue.13:17
mspreitzs/thinks/things13:17
mspreitzQiming: there is certainly a name bug here...13:18
Qimingmaybe we can have resource.handle_restart() default to delete followed by create, but each subclass can have its own implementation13:18
mspreitzQiming: I propose to discuss that separately.  For now can we pretend that the name is "delete-and-recreate-thing-and-all-its-dependents" ?13:18
QimingI don't see a conflict with convergence work there, iiuc13:18
inc0mspreitz, question is, is instance, which works in incorrect way, actually working? or its state is "failed" and then convergence should do its magic?13:19
anantamspreitz: yup... the properties of resources as they are seen in the template13:19
*** EricGonczer_ has joined #heat13:19
mspreitzinc0: what ananta said13:19
unmeshg_IMO, handle_recreate would be more appropriate than handle_restart13:19
mspreitzinc0: I do not think we should get convergence into the business of evaluating more than what is said in the template13:20
anantamspreitz: I agree13:20
Qimingunmeshg_, +1, since all resources are 'created' rather than 'started'13:20
mspreitzQiming, unmeshg_: YES, YESS, YESSS the name is a bug!  Can we please discuss that separately?13:20
*** Drago has joined #heat13:20
anantai think HARestarter would be a special case to be handled13:21
mspreitzI mean, we can fix the name bug easily and orthogonally.  Let us not confuse the hard work by thinking about the name bug13:21
zanebI'm fine with special cases. Not fine with designing all of convergence around one special case that will go away a release later.13:22
unmeshg_@ananta: I agree13:22
anantazaneb: I agree13:22
unmeshg_@zaneb: eventually if we remove HARestarter, Convergence should start observing those resources as well13:22
unmeshg_am I missing something ?13:23
*** andersonvom has quit IRC13:23
mspreitzunmeshg_: convergence will certainly observe resources, but in a shallow way13:23
anantathe HARestarter interferes with what convergence does..but it can be handled as a special case...i am sure there could be more cases like that13:23
zanebI'm not saying convergence shouldn't observe those resources13:23
zanebthe tricky part is that HARestarter performs a weird operation on its containing stack13:23
zanebwhich no other resource does, and which no resource should ever do13:24
*** che-arne has joined #heat13:24
mspreitzI think convergence should not attempt to evaluate health in as high level a way as an OS::Neutron::HealthMonitor specifies13:24
inc0mspreitz, you think heat should do it at all?13:24
*** alexpilotti has quit IRC13:24
Qimingif we fix the HARestarter implementation, could we treat it as an extremely light-weight convergence engine?13:25
mspreitzAnd, as was noted in the meeting today and now here, the unit for recovery might not be an individual resource.  The high level health needs to be on a user-chosen assembly of resources13:25
*** ggiamarchi has joined #heat13:25
mspreitzinc0: those words could be interpreted in a few ways...13:25
jdandreaunmeshg_: +113:26
inc0mspreitz, what I mean, what will actually tell HARestarter to do HARestart?13:26
*** andreaf has quit IRC13:26
inc0what will monitor health? Will that be heat? neutron lbaas?13:26
inc0ceilometer?13:26
mspreitzinc0: it should be possible to get the job done through heat, but the user has to bring some intelligence to the party, through the template13:26
inc0mspreitz, thing is, health monitoring is a horribly complicated thing if we want to do this properly (vide nova servicegroup, still has lots of issues)13:27
mspreitzIf you look at the pattern I showed in my addition of optional health to ASG you see what I mean13:27
mspreitzinc0: I am not proposing to solve world peace here...13:27
*** andreaf has joined #heat13:28
mspreitzinc0: if we add the ability to maintain health as determined by LBaaS, that is a big step up from where we are today.  And...13:28
mspreitz.. if we also expose a hook that can be used by some more sophisticated system, then we have a properly open design13:28
mspreitz.. that could support a user doing as sophisticated health maint. as she likes13:28
inc0why not *just* expose a hook (RPC and convergence_continous_observer?) to mark resources as failed?13:29
mspreitzinc0: the unit of failure/recovery might not be a single resource.  Example: a volume13:29
mspreitz(zaneb: prepare,  this is going in your direction)13:30
inc0yes, thats other topic I'd love to mention, later -> configurable autohealing13:30
mspreitzinc0: OK, here comes a simplification I will defend:...13:30
mspreitzhealth maint. can and should be formulated as: test, and if it fails delete and re-create, a user-chosen assembly of resources13:31
mspreitzThis is a pattern the world understands and uses.13:31
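The pattern mspreitz is defending can be sketched in a few lines. A minimal sketch, assuming all four arguments are hypothetical hooks (none of these are Heat APIs):

```python
def maintain(assembly, check_health, delete_assembly, create_assembly):
    """One pass of the pattern: test, and if the test fails, delete and
    re-create the user-chosen assembly of resources."""
    if check_health(assembly):
        return assembly                   # healthy: nothing to do
    delete_assembly(assembly)             # most drastic recovery:
    return create_assembly(assembly)      # full delete and re-create
```

The "test" part stays pluggable (LBaaS, Ceilometer, or something more sophisticated), which is the extensibility point discussed below.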
inc0what I'm really concerned about is the "test" part and "delete and re-create"13:32
mspreitzzaneb: is that the pattern you say is broken by design?13:32
openstackgerritVijayaguru Guruchave proposed a change to openstack/python-heatclient: Pass auth_url if os_no_client_auth specified.  https://review.openstack.org/12798713:32
zanebmspreitz: not sure. maybe13:32
mspreitzinc0: so if we expose a hook that can be used to effect the delete and re-create then the test part is extensible (which is goodness)13:33
mspreitzinc0: as for the delete and re-create, that is something heat knows how to do13:33
*** jyoti-ranjan has quit IRC13:33
mspreitzzaneb: can you elaborate?13:33
*** EricGonczer_ has quit IRC13:33
inc0mspreitz, sure, but in my opinion that too is oversimplification, not all resources can be deleted and re-created13:33
inc0vide volume13:34
mspreitzinc0: I keep saying, do not think this has to be about an individual resource13:34
inc0but again, thats something we can tackle by introducing more or less plugin-based actions13:34
zanebhttp://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/instance.py#n63 <- this part right here is what's broken by design13:34
mspreitzI keep saying, the unit of health is a user-chosen assembly13:34
zanebhttp://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/instance.py#n103 <- also this part13:34
mspreitzzaneb: your concern with L63 is that one resource has an unholy relationship with a peer.  That is not the only way to get the pattern I want.13:36
Qimingzaneb, +113:36
inc0zaneb, why exactly is "find resource" wrong? I mean, if the convergence continuous observer gets a notification from nova that instance xxx has got ERROR status, shouldn't it do exactly the same?13:36
inc0second thing, again, configurable, plugin based handle_autoheal will do the job13:36
mspreitzzaneb: about L103 --- do you think it is reasonable for a Heat API to have an operation that takes a (stack ID, resource name) and does this: delete that resource, and re-create it according to the existing template?13:37
zanebinc0: resources that go digging around in the stack are evil13:37
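An illustration (not Heat code) of the anti-pattern zaneb is calling evil: a resource object that holds a reference to its containing stack and digs a sibling out of it, bypassing the dependency graph that a proper stack update would walk.

```python
class StackDiggingRestarter:
    """Hypothetical sketch of a resource with an unholy relationship
    to its containing stack, in the style of today's HARestarter."""

    def __init__(self, stack, victim_name):
        self.stack = stack              # dict of sibling resources
        self.victim_name = victim_name

    def handle_signal(self):
        # Reaches straight into the stack and mutates a sibling;
        # none of the victim's dependents are updated, because the
        # dependency graph was never consulted.
        victim = self.stack[self.victim_name]
        victim["action"] = "REPLACED"
        return victim
```

No other resource type manipulates its containing stack like this, which is the "broken by design" part.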
*** nikunj2512 has quit IRC13:38
mspreitzzaneb: ... and propagate any consequent changes in the attributes13:39
*** nikunj2512 has joined #heat13:39
zaneblike an update with force-replace?13:40
*** zz_gondoi is now known as gondoi13:41
*** charlesr has joined #heat13:41
Qimingmspreitz, can you explain the word 'assembly'? sorry I haven't read your patch #127884, maybe I have missed something?13:41
*** andreaf has quit IRC13:42
openstackgerritAndrea Rosa proposed a change to openstack/heat: Add validation constraints on config inputs  https://review.openstack.org/10549613:44
*** jasond has joined #heat13:44
mspreitzzaneb: yes, like an update with force-replace.  Except that the caller does not provide a resource snippet, he just says "delete and re-create, you already know what I want for the create part"13:44
mspreitzQiming: for "assembly" you can read "stack", that is not very different from what I mean13:45
zanebmspreitz: yeah, hopefully we will soon have an update PATCH api where you don't have to supply the template again13:45
mspreitzQiming: I said assembly because it does not logically have to exactly be a stack, the user requirement is only that it is a user-specified set of things (with relationships)13:46
*** ananta has left #heat13:46
zanebI could see something where we add a parameter listing the resources we want to force to be replaced13:46
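The shape of such a call might be something like the following sketch. Nothing here is a real heatclient signature: `update_stack`, its `existing` flag (echoing the PATCH-update idea), and the `replace` parameter are all hypothetical.

```python
def force_replace(client, stack_id, resource_names):
    """Hypothetical 'update with force-replace': no template is
    resupplied, just a list of resources to replace
    no-questions-asked via UpdateReplace."""
    return client.update_stack(
        stack_id,
        existing=True,                # reuse the stored template
        replace=list(resource_names)  # hypothetical parameter
    )
```

The point is only the interface: point at a stack, name the resources, and let the normal update workflow (and dependency graph) do the rest.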
Qimingmspreitz, I see13:46
*** unmeshg_ has quit IRC13:47
*** andreaf has joined #heat13:48
*** rakesh_hs has quit IRC13:48
*** rushiagr is now known as rushiagr_away13:48
mspreitzzaneb: so you would be comfortable with a resource type that takes a template as a parameter, instantiates that template as a nested stack, monitors health info about that nested stack from LBaaS, and does a delete and re-create when LBaaS says the nested stack is unhealthy13:48
mspreitzzaneb: have I got that right?13:48
mspreitz(I really should have said the parameter is a snippet, as in OS::Heat::ASG, not just a template)13:50
*** jmckind has joined #heat13:51
*** jmckind has quit IRC13:51
*** jmckind has joined #heat13:51
*** jmckind has joined #heat13:52
*** ishant has quit IRC13:52
*** jmckind has quit IRC13:52
*** ishant2 has quit IRC13:52
* mspreitz has another question, but will wait until zaneb answers the outstanding one13:53
*** jmckind has joined #heat13:53
inc0ok, so again, how does convergence with an exposed hook and a configurable autoheal action not solve this issue?13:53
inc0(basic configuration is destroy-recreate, but I will argue that it can't be hardcoded for same reason why "restart resource" broke HARestarter)13:54
mspreitzinc0: your parenthetic remark looks like a digression, but I have to follow it first..13:55
mspreitzinc0: what is your view of why "restart resource" broke HARestarter?13:55
inc0because not every resource should be restarted13:55
inc0if you have instance on host which blew up, restarting it won't really do anything13:56
*** tango|2 has joined #heat13:56
mspreitzinc0: remember that "restart" is a name bug.  What HARestarter actually does is delete and re-create.  Supposing Nova notices that the host is gone, the re-create will put the new server on a working host13:57
inc0mspreitz, or delete whole volume with database because volume controller has lost a ping for few seconds13:58
mspreitzAFK13:58
inc0my point is, there is no "one size fits all" approach to autohealing13:58
inc0which means, that should be configurable13:58
inc0per resource instance really in my opinion (not per resource type)13:59
*** sdake_ has joined #heat13:59
*** kopparam has joined #heat13:59
inc0evacuation is a form of autohealing, recreating is another form13:59
inc0vm with DB could be evacuated and vm with frontend could be recreated14:00
*** andreaf has quit IRC14:00
zanebmspreitz: I'd be more comfortable with that than I am with HARestarter14:01
inc0zaneb, and how about my idea?14:01
*** ramishra has joined #heat14:02
* mspreitz is back from AFK14:02
zanebinc0: convergence is a longer term thing. that's the whole reason for this discussion14:02
*** EricGonczer_ has joined #heat14:02
mspreitzinc0: storage volume or DB can not be healed on its own.  You can only heal a larger unit that includes the storage.14:03
mspreitzinc0: OTOH, in some cases you can restore health with a less drastic action.  E.g., for a VM a true restart might suffice in some cases.14:03
inc0mspreitz, my point...configurable14:04
inc0client should decide what is best way to restore health of resource14:04
*** vijendar has quit IRC14:04
mspreitzinc0: my point is that I just talked about two separate issues: scope and how to recover.  And for the latter, I am dubious about anything less drastic than delete and re-create14:04
*** aweiteka has quit IRC14:05
*** kopparam_ has joined #heat14:05
inc0I'm dubious about anything hardcoded14:05
inc0thats my point14:05
mspreitzReality is like this: in some cases a true restart of a VM is enough, but neither Heat nor LBaaS will be smart enough to know that. Better to always delete and re-create14:05
*** alexheneveld has quit IRC14:05
*** aweiteka has joined #heat14:06
inc0mspreitz, not if you can't afford losing ephemeral disk14:06
inc0evacuate is a good approach then14:06
mspreitzinc0: if you can not afford to lose an ephemeral disk then you have a big problem no matter what we do!14:06
inc0mspreitz, in reality, this is often the case14:06
mspreitzI assume the user builds a reliable distributed system14:06
*** ramishra has quit IRC14:06
inc0mspreitz, they don't always14:07
openstackgerritSteven Hardy proposed a change to openstack/heat: ResourceGroup update refactor  https://review.openstack.org/12836414:07
openstackgerritSteven Hardy proposed a change to openstack/heat: ResourceGroup add "force_remove" property  https://review.openstack.org/12836514:07
Qimingan (accidentally) stopped server can be started; a deleted server can be recreated; a crashed server can be rebooted/rebuilt; a server previously running on a failed host can be evacuated... heat just needs to call the existing Nova API based on the server's status.  It is not always about delete-recreate.14:08
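Qiming's point could be sketched as a status-to-action mapping. The status strings follow Nova's (except "CRASHED", which is illustrative), and the mapping itself is a sketch of the idea, not anything Heat implements:

```python
def recover_action(status, host_is_up=True):
    """Pick the cheapest Nova action that would restore a server,
    based on its current status. Illustrative mapping only."""
    if status == "SHUTOFF":
        return "start"        # accidentally stopped -> just start it
    if status == "DELETED":
        return "recreate"     # gone entirely -> rebuild from template
    if status in ("ERROR", "CRASHED"):
        if not host_is_up:
            return "evacuate" # host died -> rebuild on another host
        return "reboot"       # crashed in place -> hard reboot
    return "none"
```

This is the "configurable, not hardcoded" position inc0 is arguing for, expressed as code.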
inc0but still, I feel that we agree in general but we argue on something that is actually an implementation detail14:08
*** kopparam has quit IRC14:08
mspreitzinc0: in the real world there is always the possibility of a disk crash.  I think what you mean is that while the user can cope, it may be expensive and he would prefer not to pay the price if he does not have to.14:08
mspreitzQiming: inc0: I agree that if we want to try to get the cheaper recoveries when possible we have to have some choice.14:09
inc0mspreitz, in convergence (whenever it happens) we expose a hook for LBaaS to use (or anything else), and if something hits this hook, the convergence continuous observer marks the resource as failed and rebuilds it (however it can)14:10
mspreitzQiming: inc0: I have been focused on the most expensive one because we have trouble with something that simple14:10
inc0will that fit your case?14:10
mspreitzinc0: OK, now it is time for me to get back to the conversation with zaneb...14:11
mspreitzzaneb: would you be OK with a service that, for whatever reason, calls the operation on a (stack ID, resource name) that deletes and re-creates that resource?14:11
Qimingmspreitz, imho, LBaaS based failure detection is only one possibility, but I don't like the approach to hardcode it in the resource type implementation14:11
mspreitzQiming: I agree.  That is why I ask for an exposed ability to delete and re-create whenever something else desires it14:12
*** shakayumi has quit IRC14:12
*** vijayagurug has left #heat14:13
zanebmspreitz: as long as it follows the usual stack update workflow, and just does a no-questions-asked UpdateReplace on the specified resource14:13
mspreitzzaneb: I am not sure I understand your limitations there.  Can you elaborate?14:14
inc0anyway, I got to go, see you all14:14
mspreitzinc0: thanks14:15
*** sdake has joined #heat14:15
*** sdake has quit IRC14:15
*** sdake has joined #heat14:15
inc0mspreitz, will you be in Paris?14:15
mspreitzinc0: unfortunately, no.  I hope my colleague Bill Arnold will be there14:15
*** rpothier has joined #heat14:15
inc0I think there is design session for HARestarter in agenda14:16
zanebmspreitz: just going in and replacing one resource isn't enough, you have to respect the entire dependency graph, which means the mechanism needs to be a stack update14:16
mspreitzinc0: I wish I could go, but I have been told I can not14:16
mspreitzzaneb: OK, that makes sense to me.14:16
mspreitzzaneb: in fact, I think we should even respect hierarchy14:16
mspreitzzaneb: i.e., if updating a nested stack, and its outputs change, there is an update being done on the parent stack too14:17
*** nkhare_ has joined #heat14:17
mspreitzzaneb: but the interface is simple - just point and shoot at a single resource14:18
mspreitz(which might be a stack itself)14:18
zanebmspreitz: yeah, agree, I'm hoping that's something convergence will be able to do for us. I think it's unrealistic for now :/14:18
inc0zaneb, because of human resources?14:18
*** charlesr has quit IRC14:19
zanebno14:20
inc0whats blocking us then?14:20
*** ramishra has joined #heat14:20
goobljaHello everybody :) Have a question about heat.conf... what parameters have to be set to enable notifications on qpid mq... it seems that by default it doesn't send notifications... :(14:21
*** aweiteka has quit IRC14:21
thervegondoi, notification_driver=messaging in the configuration file should do it14:22
therveIf the rest is configured properly14:22
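What therve describes would look something like this in heat.conf (a sketch; the exact transport settings depend on the deployment, and the control_exchange value shown is an assumption about Heat's default):

```ini
[DEFAULT]
# emit notifications onto the configured message bus
notification_driver = messaging
# heat publishes on its own exchange, not nova's, so a consumer
# listening on the "nova" exchange will see nothing from heat
control_exchange = heat
```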
shardytherve: hey, did you see my question about the oslo liason the other day?14:23
mspreitzzaneb: I think you just said that HARestarter would be fine if it were a service instead of a resource type14:23
therveshardy, No sorry14:23
*** alexpilotti has joined #heat14:23
zanebmspreitz: I think I did14:23
shardyWe were looking for a volunteer last week so I said I'd do it for kilo, wanted to check that was cool with you, e.g that you weren't keen to do it14:24
shardy(you're welcome to if you do :)14:24
therveshardy, Awesome thanks14:24
mspreitzzaneb: and, being a service, there should be a resource type for it, right?14:24
zanebmspreitz: sure14:24
zanebin fact, I think that's exactly the problem with HARestarter14:24
shardytherve: Ok, cool, just wanted to check given that you did it for Juno14:25
mspreitzzaneb: so you think we could fix HARestarter without requiring any changes in user templates?14:25
zanebit's a resource that is a place to hang some code in Heat, instead of a representation of some other thing that has its own API and UUID14:25
therveshardy, Yeah I hope I did an okay job up until recently :)14:25
*** ramishra has quit IRC14:25
zanebthat's what I've been consistently trying to get rid of since at least Havana14:25
*** cody-somerville has joined #heat14:26
shardytherve: Yes absolutely!  I just figured it was about time for me to volunteer for something and you weren't around at the meeting ;)14:26
mspreitzzaneb: OK, I understand better what you have been trying to say14:26
*** cody-somerville has quit IRC14:26
*** cody-somerville has joined #heat14:26
*** charlesr has joined #heat14:27
jdandreazaneb: let Heat "orchestrate" and let other services "perform," is that the gist? (I've been trying to drive that point home in meetings here as well ... *notes stack-digging resources are evil*)14:27
*** Qiming has quit IRC14:27
zanebjdandrea: ++14:28
* mspreitz wonders if confusion between objects and operations is the problem here14:29
jdandreazaneb: There has been confusion on my end (about orchestrate vs perform). I resorted to a composer/conductor/orchestra metaphor.14:29
jdandreamspreitz: I think you may be on to something.14:29
*** Qiming has joined #heat14:29
zanebmspreitz: yep, I think that's part of it too14:29
mspreitzzaneb: let us consider an HARestarter service.  What does it do?  When I invoke its operation, it does nothing more or less than invoke the Heat operation that we discussed earlier14:29
*** alexpilotti has quit IRC14:29
zanebmspreitz: but it does it on a stack that you have passed in a reference to. it follows that the HARestarter resource itself cannot be in the same stack14:30
mspreitzzaneb: ah, right, today's HARestarter offers a subset of the functionality of our hypothetical service14:31
mspreitzzaneb: but I do not see subsetting as evil, just immaturity14:32
*** pas-ha has quit IRC14:32
mspreitzAha!  Here it is...14:32
*** kopparam_ has quit IRC14:33
zanebtoday's HARestarter offers a broken version of the functionality of our hypothetical service, while ignoring the central organising principle of Heat, which is the dependency graph14:33
mspreitzIf only Ceilometer could let me write a general Heat API invocation as an alarm action, we would all be happy14:33
* shardy wonders how we got from health-aware ASG's to an HARestarter *service*14:33
shardyit's like one of those bad zombie movies14:33
mspreitzshardy: it is because we want to open up to more general detection14:33
mspreitzshardy: so we need an exposed "recover" operation14:34
goobljatherve so i added notification_driver=messaging, but i don't see notification messages... to gather them I use this script http://paste.openstack.org/show/121266/ with this script I can gather nova notifications if I change exchange to nova but with heat it looks empty14:35
*** aweiteka has joined #heat14:35
mspreitzIf we agree that Heat could have a proper delete-and-recreate-this-resource operation, and a Ceilometer alarm or a general thing could invoke it, we would be done, right?14:35
*** kopparam has joined #heat14:35
zanebshardy: rofl14:35
thervegondoi, I would expect you to subscribe to a notification queue14:36
thervegooblja sorry14:36
thervegooblja, Maybe you configured nova differently?14:36
gondoi:D14:36
inc0check if there is anything consuming notifications...that might be an issue too14:37
mspreitzinc0: does my outline make sense to you?  (Again, focusing for now only on the most drastic recovery)14:37
inc0mspreitz, yes, but I think we shouldn't do another service...let's push convergence to that point asap14:38
mspreitzinc0: I did not say anything about another service14:38
mspreitzinc0: oh wait14:38
mspreitzinc0: for detection by LBaaS we do not need another service. We just need a proper delete-and-recreate in the Heat API and Ceilometer to be able to invoke it14:39
*** kopparam has quit IRC14:39
mspreitzinc0: for general detection you need some way to get that "general" part in there, that is where the possibility of another service arises14:39
*** nkhare_ has quit IRC14:39
thervemspreitz, That API could be the signal API, no?14:39
therveDon't know if we want something more dedicated than that14:40
shardymspreitz: I maintain the quickest way to achieve that is simply to use an autoscaling group (potentially configured with size one), combined with some additional data which allows specification of what unhealthy members should be removed from the group14:40
*** kopparam has joined #heat14:40
mspreitztherve: I think there should be a proper Heat API for this.  There could also be a signal based way to do it.14:40
thervemspreitz, Well signal is a proper Heat API :)14:40
inc0shardy, wouldn't that be convergence?14:41
mspreitztherve: I am not so sure of that.  signal requires synthetic users14:41
therveIf it still does it's a bug14:41
mspreitzshardy: so delete-and-recreate becomes workflow {set scaling group size to 0; set scaling group size to 1}, right?14:42
shardyinc0: I'm just saying that rather than overloading every problem around signal driven recovery, we could consider how to do it right now14:42
*** jergerber has joined #heat14:43
goobljatherve yeap http://paste.openstack.org/show/121268/ very interesting, for nova there is no notification_driver defined. I also tried to add rpc_backend = qpid but then my dashboard could not get stack_list and no notifications appeared :(14:43
zanebshardy: I think that's more or less what I suggested on mspreitz's spec review14:43
shardymspreitz: No, you'd leave the size at 1, but either do a stack update with a hint saying which unhealthy thing gets removed, or that hint is contained (or obtainable via subsequent introspection) in the alarm signal14:44
shardymspreitz: https://review.openstack.org/#/c/128365/14:44
shardythat is a first step towards that for ResourceGroup14:44
*** cdent has quit IRC14:45
thervegooblja, What's the version of Heat you're using?14:45
shardyzaneb: Ok, cool, I feel like I've been constantly saying it since this whole HARestarter thing kicked off14:45
*** kopparam has quit IRC14:45
goobljatherve :) how to check it?14:45
*** cdent has joined #heat14:45
mspreitzshardy: update w hint vs signal seem different enough I am going to have to discuss them separately14:45
thervegooblja, Depends on how you installed it14:46
shardymspreitz: the internal mechanism is the same14:46
mspreitzshardy: my concern is for the user...14:46
shardyautoscaling *is* a stack update, in response to the signal14:46
*** jasond has quit IRC14:46
mspreitzshardy: I want a user to be able to write a template that creates a system that maintains its own health14:47
goobljatherve there was a manual installation another man did it :) i don't really know how :(14:47
*** tango has joined #heat14:47
mspreitzshardy: if there were a way to have a template speak of that "update w hint", and prescribe that it happens when a certain condition is met, that would be an OK outline14:47
*** tango|2 has quit IRC14:48
shardymspreitz: well that's what I'm proposing, you define an update policy which determines what happens14:48
*** funzo_ is now known as funzo14:48
mspreitzshardy: can you elaborate?  On what thing is there an update policy?  Does that update policy speak of the reaction or the condition or both?  If not both, how is it connected to the other?14:49
shardymspreitz: If you look in the resource group patch, you'll see there's a "remove_policies"14:51
shardythat is something which could equally be applied to AutoScalingGroup, either directly via a template update and similar new property, or similar logic triggered via the signal and data gleaned from ceilometer14:52
shardy(both, probably)14:52
mspreitzshardy: OK.  How does a user write a template that causes such an update when LBaaS detects ill health?14:53
goobljatherve I found version I have heat 0.2.914:53
mspreitzyou may s/LBaaS detects ill health/a Ceilometer alarm fires/14:54
thervegooblja, That's the client version14:54
shardymspreitz: you need the ill health to trigger a ceilometer alarm, which then enables us to detect (either via the alarm signal payload or by querying ceilometer)14:54
shardythat some thing in the group needs to be added to the blacklist14:54
mspreitzshardy: Ceilometer alarm payload is not controllable by the user.14:55
mspreitzshardy: if not in ceilometer alarm payload, what would do the right Ceilometer query?14:55
shardymspreitz: probably the template author would define a query in the remove_hints map, which would enable heat to query things which match the criteria of ill health14:56
mspreitzshardy: oh, wait, I am off by one level...14:57
mspreitzshardy: we are talking about a general way to maintain the health of one thing...14:58
mspreitzshardy: so there may not be a need for a parameter flowing from Ceilometer to Heat, the thing is implicit14:58
*** achanda has joined #heat14:59
mspreitzshardy: or, maybe I was wrong about the "we"14:59
*** randallburt has joined #heat14:59
inc0ok, now I really gtg, cya, mspreitz we'll talk later15:00
*** randallburt has quit IRC15:00
*** randallburt has joined #heat15:00
mspreitzshardy: so I think I could maintain the health of one thing like this: create a ResourceGroup of count=1, resource=(the one thing), remove_policy=[0], and let it have a webhook that does an update; (redefine remove_policy so that it is ignored at create time)15:01
mspreitz(I think that redefinition gives a better definition anyway)15:01
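mspreitz's one-member outline might look roughly like this as a HOT fragment, expressed here as a Python dict. The "removal_policies" key follows the ResourceGroup proposal under review (https://review.openstack.org/128365) and its exact name and syntax are a guess:

```python
# Sketch of the one-member self-healing group. The removal_policies
# syntax is hypothetical; the rest follows standard HOT structure.
template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "one_thing": {
            "type": "OS::Heat::ResourceGroup",
            "properties": {
                "count": 1,
                # "the one thing" whose health is being maintained
                "resource_def": {
                    "type": "OS::Nova::Server",
                    "properties": {"image": "fedora", "flavor": "m1.small"},
                },
                # hypothetical: ignored at create time, honoured on the
                # update triggered by the health webhook
                "removal_policies": [{"resource_list": [0]}],
            },
        },
    },
}
```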
goobljatherve hmmm... can I somehow check heat version via command line.. i have openstack on CentOS 7... and i suppose it was installed by yum install15:02
mspreitzshardy: do you agree?15:02
shardymspreitz: s/ResourceGroup/AutoScalingGroup15:03
mspreitzwhy that substitution?15:03
mspreitzshardy: IIRC, in ASG, an "update policy" that specifies a change of 0 takes an early out15:04
*** inc0 has quit IRC15:05
mspreitzshardy: AutoScalingGroup.adjust takes an early out if the change is no change15:06
*** thedodd has joined #heat15:06
*** ramishra has joined #heat15:07
skraynevg'night all15:07
mspreitzskraynev: g'night15:07
shardymspreitz: ResourceGroup is a static template driven thing, what you want is signal driven group adjustments, which is ASG combined with ScalingPolicy15:07
*** achanda has quit IRC15:07
shardyMaybe the remove_hint becomes a property of ScalingPolicy, which then derives the unhealthy victims and passes the data to the ASG on adjustment15:09
mspreitzshardy: that sounds a little better...15:11
mspreitzshardy: actually it is more than enough if I am trying to maintain the health of one thing (i.e., ASG with size fixed at 1)15:12
mspreitzshardy: when the size is fixed at 1, we need no parameter to specify which member to delete15:12
mspreitzshardy: if we go up a level and try to have a scaling group that maintains the health of all its members, we still have trouble identifying the member15:13
mspreitzshardy: so, are you talking about how to maintain the health of one thing, or of all the members of a scaling group?15:13
mspreitz(of general size)15:14
shardymspreitz: I'm saying they can be the same15:14
mspreitzshardy: so I will continue on the general size discussion...15:14
mspreitzshardy: since a Ceilometer alarm is a concrete thing, not a pattern, there has to be an alarm per member of the scaling group, right?15:15
*** alexpilotti has joined #heat15:15
*** rushiagr_away is now known as rushiagr15:16
*** robklg has quit IRC15:16
shardymspreitz: I have to go to a meeting now, but no, IMO you'd have one alarm15:16
mspreitzshardy: then I am really puzzled.  Can you send or post a complete outline when you get a chance?15:17
shardymspreitz: sure, I will try to do that15:18
mspreitzshardy: Thanks!15:18
thervemspreitz, You can make a query after the alarm to know the specifics15:18
therveIIUC15:18
shardytherve: +115:18
mspreitztherve: "I" can not act; the user has to write a template15:18
thervemspreitz, Right, that's what Steven said, you would specify that query15:19
mspreitzthe user has to write a template that causes some particular thing to take action at the right time15:19
mspreitztherve: what agent can a template author cause to issue the right query?15:19
therveSorry I don't understand15:20
mspreitzwhere in the template does the query appear?15:20
jdandreaIs there a link to the list of environment variables I can get at during cloud-init (e.g., the resource ID for the server being instantiated)?15:20
therveIn the resource that get signaled by the alarm15:21
*** aweiteka has quit IRC15:21
mspreitztherve: Since Ceilometer does not allow user control over the HTTP request body in an alarm action, there has to be a distinct alarm per thing-whose-health-is-being-maintained, right?15:22
mspreitztherve: or are you proposing one query about the health of all the members of the group?15:23
thervemspreitz, Not if you query ceilometer after the alarm to get the proper information15:23
therveBut we're going in circles here15:23
mspreitztherve: I maybe just realized why we are going in a circle15:24
Qimingjdandrea, /var/lib/cloud/data/instance-id ?15:24
therveNo I mean we don't manage to understand each other :)15:24
jdandreaQiming: thank you!15:24
mspreitztherve: you are proposing that when a signal is received, the receiver queries about the health of all the members of the group; is that it?15:24
mspreitztherve: yes, I understood what you meant by circle15:24
thervemspreitz, Well not the health of all the members, but the id of the members with bad health15:25
mspreitztherve: we may have misunderstood each other because I was not even thinking of the possibility you are trying to suggest15:25
mspreitztherve: I need a clarification in your answer....15:26
mspreitztherve: this query you are speaking of, does the query (not its result) identify the members with bad health, or is it a query like "return me the list of members with bad health" ?15:26
*** nosnos has quit IRC15:27
thervemspreitz, The latter15:27
mspreitztherve: Ah, that is why I did not understand you.15:27
*** ramishra has quit IRC15:27
mspreitztherve: the cost of answering that query grows in proportion to the size of the group, right?15:28
thervemspreitz, Hopefully not.15:28
mspreitztherve: how could it not?  Oh wait, I just realized something else.  This query is not an operation in today's Ceilometer API, right?15:29
therveI think it is15:29
mspreitztherve: can you outline how the signal handler could use today's Ceilometer API to make the query you have in mind?15:30
Qimingg'night15:30
mspreitzQiming: g'night15:30
thervemspreitz, No15:30
*** kebray has joined #heat15:31
*** kebray has quit IRC15:31
*** fayablazer has quit IRC15:32
mspreitztherve: I can not see how the Ceilometer query API could be used to do such a query15:33
therveMaybe it's not possible then15:33
mspreitztherve: I think it is impossible.  So that is why I conclude there has to be an alarm per group member.15:34
mspreitzthat is among the reasons15:34
*** aweiteka has joined #heat15:34
*** Qiming has quit IRC15:35
mspreitzthe other is that there is no way for a ceilometer client to make any part of the HTTP request in an alarm action variable; the alarm action has to be a literal, and it can not have a user-specified request body or headers15:35
mspreitztherve: how could a ceilometer client specify an alarm action that identifies the right group member for a given alarm?15:36
mspreitztherve: by index is bad, the index of a member changes over the lifetime of a group15:37
mspreitztherve: the resource name or ID of the member would work15:37
mspreitz... if that name or ID were something that the user could write at the relevant position in a template15:37
*** inc0 has joined #heat15:38
mspreitztherve: but I do not see how a user can write the "resource" property of an OS::Heat::AutoScalingGroup to make it create an alarm with an action that contains the name or ID of the group member15:39
*** serg_melikyan has joined #heat15:40
inc0hehe you're still discussing?;)15:40
mspreitzinc0: yes15:40
mspreitztherve: I take that back, there is one thing I can think of...15:40
mspreitztherve: the member has to be a stack, and the template that specifies that stack can include a reference to the stack's ID15:41
mspreitztherve: however, the stack's ID is not the same as the ID of the Resource that is the group member15:41
mspreitztherve: so there is still a gap15:41
mspreitztherve: in a future with convergence, it would suffice to delete the stack15:42
mspreitz--- the stack that backs the group member15:42
mspreitztherve: but not today15:42
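[Editor's note: the stack-ID reference mspreitz describes can be written with Heat's OS::stack_id pseudo parameter. A hypothetical HOT fragment; the resource name is invented, and as mspreitz says this yields the *stack's* ID, not the ID of the group-member resource.]

```yaml
# A template instantiated per group member can read its own stack's ID
# via the OS::stack_id pseudo parameter.
resources:
  report_identity:
    type: OS::Heat::SoftwareConfig
    properties:
      config:
        str_replace:
          template: |
            #!/bin/sh
            echo "I am stack $stack_id"
          params:
            $stack_id: {get_param: "OS::stack_id"}
```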
mspreitztherve: so I do not see a way today to use shardy's remove_policies on a group with more than one member15:45
*** packet has joined #heat15:46
mspreitztherve: are you still there?15:47
*** JayJ has joined #heat15:47
thervemspreitz, ceilometer resource-list should be able to return the member with bad health15:48
*** viktors is now known as viktors|afk15:51
*** andrearosa has quit IRC15:52
*** otoolee has quit IRC15:53
mspreitztherve: you mean like this: `ceilometer resource-list -q "counter_name=network.services.lb.member"` ?15:53
therveWell with the counter value15:56
therveSorry got to go15:56
* jdandrea is still stuck trying to figure out if there's an environment variable specifying the resource ID during cloud-init.15:57
*** sabeen2 has joined #heat15:57
* jdandrea ... and is going to brute force it (for now).15:58
mspreitzjdandrea: you know about GETting http://169.254.169.254/ from inside the VM, right?15:58
jdandreamspreitz: I do, but I don't use it. Perhaps I should. :)15:58
jdandrea"The magic IP."15:58
*** sabeen1 has quit IRC15:59
mspreitzjdandrea: try it.  It returns an index to the head of a branch of magic files15:59
* jdandrea waves a wand15:59
jdandreaOk.15:59
mspreitzjdandrea: but remember that what you find is specific to the VM from which you are exploring15:59
mspreitzjdandrea: and it only works in VMs for which that is the relevant mechanism; for others it is a special mounted filesystem15:59
jdandreamspreitz: That's what I want. I think that, for SoftwareConfig cases, there *are* env vars. Just not sure if that's true for standard issue cloud-init.16:00
mspreitzjdandrea: at various times in the past I have found documentation for cloud-init...16:00
*** tspatzier has quit IRC16:00
mspreitzjdandrea: but I think that, in practice, what you actually get is somewhat vendor specific16:01
mspreitzjdandrea: and I do not remember what, if anything, that doc said about envars16:01
mspreitzright now I am hunting doc on Ceilometer query syntax.  Trying to figure out how to compare with number 0 instead of string 016:02
jdandreamspreitz: Hmm, good question.16:02
jdandreatype='string'? http://docs.openstack.org/developer/ceilometer/webapi/v2.html#Query16:03
jdandrea(maybe that's the default, and it can be changed to int)16:04
*** andrearosa has joined #heat16:04
shardyjdandrea: I think http://169.254.169.254/latest/meta-data/instance-id gives you what you want16:04
mspreitzjdandrea: I mean in the CLI, not the API16:04
shardyheat doesn't pass the resource_id in the cloud-init data so I don't think there's any magic variable you can reference16:05
*** aweiteka has quit IRC16:05
jdandreashardy: *nods* ... I was grabbing that from /var/lib/cloud/data/instance-id previously.16:05
jdandreashardy: oic, and I suspect I can't pass that resource id in either (picture a cat chasing its tail).16:05
shardyjdandrea: Ah, so cloud-init does get that data then16:05
jdandreashardy: *nods* I wasn't sure if there was some *other* set of data though, with the resource id.16:06
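[Editor's sketch of the two lookups discussed above: cloud-init's on-disk copy, falling back to the EC2-style metadata URL shardy cites. The fetcher is injected so the sketch runs without a real VM; both the path and the URL come from the conversation.]

```python
import os

def instance_id(path="/var/lib/cloud/data/instance-id", fetch=None):
    """Best-effort instance-id lookup: try cloud-init's on-disk copy
    first, then the metadata service via an injected fetcher."""
    if os.path.exists(path):
        with open(path) as f:
            return f.read().strip()
    if fetch is not None:
        url = "http://169.254.169.254/latest/meta-data/instance-id"
        return fetch(url).strip()
    return None
```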
mspreitzshardy: does Heat pass its resource ID for a Compute instance to Nova at all?16:06
shardyjdandrea: yeah, exactly16:06
shardymspreitz: no, it can't16:06
mspreitzshardy: why can't it?16:06
shardythe resource_id comes from nova16:06
mspreitzshardy: I thought Heat has its own ID for each Resource16:07
jdandreashardy: I ask because I'm creating a ceilometer sample from the VM (a heartbeat of sorts), and one of the fields required by a sample is "resource_id".16:07
shardymspreitz: no, the ID comes from each underlying service in nearly all cases16:07
jdandreashardy: Maybe it can be anything. I just imagine it should be somewhat unique (apart from the metadata with the stack ID, which I *can* pass in).16:08
jdandreashardy: So until that underlying service gives Heat an ID, you can't possibly know what it will be.16:08
jdandreaMakes sense.16:08
*** spzala has joined #heat16:09
openstackgerritSteven Hardy proposed a change to openstack/heat: Remove oslo request_id module  https://review.openstack.org/12826816:10
openstackgerritSteven Hardy proposed a change to openstack/heat: Remove oslo middleware.base module  https://review.openstack.org/12826916:10
openstackgerritSteven Hardy proposed a change to openstack/heat: Remove oslo sslutils  https://review.openstack.org/12827016:10
jdandreashardy: I wonder if I can pass in the resource name/path though? *thinking aloud here*16:10
jdandreaEhhh, maybe not the path.16:11
jdandreanm16:11
*** __TheDodd__ has joined #heat16:11
*** thedodd has quit IRC16:12
*** jistr has quit IRC16:14
*** otoolee has joined #heat16:15
*** aweiteka has joined #heat16:18
*** bdossant has quit IRC16:27
*** pasquier-s has quit IRC16:31
*** inc0 has quit IRC16:33
*** tomek_adamczewsk has quit IRC16:35
*** kebray has joined #heat16:37
jdandreaIn a ResourceGroup, IIUC, resource names are appended with numbers in sequence (server-1, server-2, etc.) so that the names are unique. Is there a way for me to use that number elsewhere within that resource's properties? (Trying to create a scheduler hint unique to that particular resource.)16:44
shardyjdandrea: maybe you could scale out nested stacks and reference the OS::stack_name intrinsic function in the hint?16:49
jdandreashardy: Welllll ... that's certainly one way to do it. Not preferable, but yeah.16:50
* jdandrea is hung up on things being neat and tidy, imagines stacks lying around like pizza boxes after a party. :-P16:51
jdandreaIt works in a pinch, mhm.16:51
*** derekh has quit IRC16:52
*** alexheneveld has joined #heat16:56
zanebjdandrea: doesn't the %index% replacement (in Juno) work for that?17:01
jdandreazaneb: *That's* what it was! (I think.) I was trying to locate it in the docs.17:01
jdandreazaneb: So if I use %index% in one of my properties, I can get at that value for a given resource instantiation (crossing fingers).17:02
zanebI think that's how it works17:02
jdandreazaneb: tyvm :)17:03
* jdandrea is giving it a try17:03
*** julienvey has quit IRC17:09
*** julienvey has joined #heat17:10
mspreitzzaneb: oh, that %index% thing would be handy.  Is it documented anywhere?17:10
zanebafaik17:11
*** jstrachan has quit IRC17:12
mspreitzzaneb: jdandrea: found doc at http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::ResourceGroup17:13
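[Editor's note: a hypothetical HOT fragment showing the %index% substitution from the doc above. %index% is ResourceGroup's default index_var; the image, flavor, and scheduler-hint key are placeholders.]

```yaml
# ResourceGroup substitutes %index% into each member's definition,
# giving every server a per-member name and scheduler hint.
resources:
  server_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          name: server-%index%
          image: my-image          # placeholder
          flavor: m1.small         # placeholder
          scheduler_hints:
            member_slot: "%index%" # made-up hint key, per-member value
```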
mspreitzright where I would look first!17:13
*** jprovazn has joined #heat17:14
*** julienvey has quit IRC17:14
*** aweiteka has quit IRC17:14
*** harlowja_away is now known as harlowja17:15
jdandreamspreitz: I *was* looking at that. Chalk it up to developer myopia. *head/desk*17:16
jdandreaStaring me in the face. ty17:17
*** vijayaguru has joined #heat17:17
mspreitzjdandrea: BTW, this solves a problem we were talking about earlier - how can a template author write a resource_def (a ResourceGroup property) that refers to the particular member when instantiated17:18
*** randallburt has quit IRC17:18
jdandreamspreitz: Indeed.17:18
zanebI don't know why it says there that it only works in the name. it's definitely applied to all properties. if somebody wants to submit a patch...17:18
*** vijendar has joined #heat17:18
jdandrea" Can be used, *for example*, to customize the name ..." - that's ok, no?17:19
*** tango has quit IRC17:23
*** amakarov is now known as amakarov_away17:23
*** rushiagr is now known as rushiagr_away17:25
*** aweiteka has joined #heat17:26
jdandrea$1M question coming up here: Does %index% (and INDEX_VAR and _handle_repl_val()) work for properties of any scaling resource? *checks the code* D'oh! Nope. Hmm. Maybe if I scale a resource group of size 1 ... ;)17:27
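[Editor's sketch: a toy model — not Heat's actual code — of the substitution jdandrea is checking (INDEX_VAR / _handle_repl_val): walk a resource_def-like structure and replace the index variable wherever it appears in a string.]

```python
INDEX_VAR = "%index%"  # ResourceGroup's default index_var

def substitute_index(value, index):
    """Recursively replace INDEX_VAR in strings inside a nested
    resource_def-like structure (toy model of per-member substitution)."""
    if isinstance(value, str):
        return value.replace(INDEX_VAR, str(index))
    if isinstance(value, dict):
        return {k: substitute_index(v, index) for k, v in value.items()}
    if isinstance(value, list):
        return [substitute_index(v, index) for v in value]
    return value

resource_def = {"type": "OS::Nova::Server",
                "properties": {"name": "server-%index%",
                               "scheduler_hints": {"slot": "%index%"}}}
member2 = substitute_index(resource_def, 2)
```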
jdandrea(Oh, the index_var would have to be per plugin anyway. Understandable.)17:27
mspreitzjdandrea: right.  That's why I am surprised shardy is talking about dealing directly with a group of general size17:28
mspreitzIf the group size is fixed at 1, you don't need no stinkin' variable to tell you what the index is!17:28
mspreitzjdandrea: well, actually, there are lots of reasons I am surprised17:29
*** andersonvom has joined #heat17:29
jdandreamspreitz: I want to understand and appreciate all the angles around that, for certain. I'll get there. :)17:30
shardyhttp://blog.russellbryant.net/2014/10/15/openstack-instance-ha-proposal/17:30
shardyOh inc0 has gone17:30
jdandreashardy: Ooh, ty17:30
* jdandrea goes off to read again17:30
shardyinteresting reading for those wanting automated evacuate17:31
*** vijendar has quit IRC17:31
*** Drago has quit IRC17:31
*** vijendar has joined #heat17:31
openstackgerritSteven Hardy proposed a change to openstack/heat: ResourceGroup update refactor  https://review.openstack.org/12836417:38
openstackgerritSteven Hardy proposed a change to openstack/heat: ResourceGroup add "force_remove" property  https://review.openstack.org/12836517:38
openstackgerritSteven Hardy proposed a change to openstack/heat: ResourceGroup add remove_policies property  https://review.openstack.org/12836517:40
openstackgerritVijayaguru Guruchave proposed a change to openstack/python-heatclient: Pass auth_url if os_no_client_auth specified  https://review.openstack.org/12798717:41
mspreitzshardy: thanks for the pointer.  Now I wonder how it is related to our earlier discussion of health maintenance that is not limited to infrastructure (i.e., hypervisor) failure.17:47
*** achanda has joined #heat17:49
*** tspatzier has joined #heat17:51
*** Drago has joined #heat17:52
mspreitzjdandrea: shardy: BTW, if you have the scrollback in #openstack-ceilometer, you see that ceilometer query abilities are surprisingly limited17:53
mspreitzat least for `ceilometer resource-list`17:53
jdandreaI can see it, yep. Will read, tx.17:53
mspreitz`ceilometer sample-list` is, hopefully, another story17:54
*** randallburt has joined #heat17:56
*** Drago1 has joined #heat17:58
*** tspatzier has quit IRC17:58
mspreitzshardy: does your blog post about automated evacuate have implications for the sort of health maintenance I have been discussing?17:59
*** tango has joined #heat18:01
*** jamiehannaford has quit IRC18:01
*** vijendar has quit IRC18:03
*** spzala has quit IRC18:04
*** kopparam has joined #heat18:05
mspreitzshardy: sorry, s/your/Russell's/18:05
*** charlesr has quit IRC18:06
*** __TheDodd__ has quit IRC18:07
*** thedodd has joined #heat18:09
*** vijayaguru has left #heat18:09
*** Drago1 has quit IRC18:11
*** Drago1 has joined #heat18:12
openstackgerritZane Bitter proposed a change to openstack/heat: Unit tests: remove dead code from Neutron Autoscaling test  https://review.openstack.org/12872818:15
openstackgerritZane Bitter proposed a change to openstack/heat: Ensure all Neutron LoadBalancer members are deleted  https://review.openstack.org/12872918:15
*** achanda has quit IRC18:16
shardymspreitz: possibly, it's certainly related to the use-case I've been discussing with inc018:17
shardyit only really solves the case where you want to restart a VM because a nova host went down though18:18
shardyso not the general purpose solution you seem to be seeking18:18
*** jergerber has quit IRC18:18
mspreitzshardy: right.  I want different detection, different fencing, different recovery18:19
*** mkollaro has quit IRC18:20
*** achanda has joined #heat18:20
*** charlesr has joined #heat18:21
*** thedodd has quit IRC18:23
mspreitzshardy: your remarks earlier in this chat today are about different fencing and recovery than Russell's blog post18:29
*** tomek_adamczewsk has joined #heat18:30
*** nikunj2512 has quit IRC18:31
*** spzala has joined #heat18:32
*** otoolee has quit IRC18:41
*** mspreitz has quit IRC18:42
*** thedodd has joined #heat18:47
*** spzala has quit IRC18:50
*** Drago1 has quit IRC18:50
*** tomek_adamczewsk has quit IRC18:53
*** kbyrne has quit IRC18:53
*** tomek_adamczewsk has joined #heat18:53
*** achampio1 has joined #heat18:55
*** packet has quit IRC18:55
*** cody-somerville has quit IRC18:55
*** jpeeler has quit IRC18:56
*** dteselkin has quit IRC18:56
*** packet has joined #heat18:57
*** gilliard has quit IRC18:57
*** kopparam has quit IRC18:57
*** achampion has quit IRC18:57
*** skraynev has quit IRC18:57
*** gilliard has joined #heat18:58
*** charlesr has quit IRC18:59
*** mspreitz has joined #heat19:01
*** vijendar has joined #heat19:02
*** kopparam has joined #heat19:02
*** Drago1 has joined #heat19:02
mspreitzshardy: Sorry, I was not multi-tasking enough, I just noticed and read your comment on https://review.openstack.org/#/c/124656/19:02
*** tomek_ad1 has joined #heat19:02
*** tomek_adamczewsk has quit IRC19:03
mspreitzI made the comments about HARestarter because (a) it is the primary motivation, (b) that motivation lends a sense of urgency, and (c) as an urgent diversion from a train-wreck there is less need to do something hugely general (which is a direction some reviewers wanted to go)19:04
mspreitzshardy: I will admit a second motivation, which is I think what is now proposed is much nicer to users than asking them to code up a bunch of mechanism afresh each time they want health maintenance19:05
*** tomek_adamczewsk has joined #heat19:05
mspreitzshardy: are you saying you would prefer to review primarily on the basis of that second motivation?19:05
*** dteselkin has joined #heat19:05
*** kbyrne has joined #heat19:06
*** Drago1 has quit IRC19:07
*** tomek_ad1 has quit IRC19:07
*** achanda has quit IRC19:07
*** Drago1 has joined #heat19:08
*** randallburt has quit IRC19:08
*** jpeeler has joined #heat19:09
*** skraynev has joined #heat19:11
*** charlesr has joined #heat19:12
*** adrienverge has joined #heat19:12
*** kopparam has quit IRC19:14
adrienvergeHi all19:16
adrienvergeThe cinder scheduler hints patch would appreciate reviews :)19:16
adrienvergehttps://review.openstack.org/#/c/126298/19:16
*** kebray has quit IRC19:20
*** achanda has joined #heat19:25
*** otoolee has joined #heat19:28
*** Drago1 has quit IRC19:28
*** cdent has quit IRC19:28
*** Drago1 has joined #heat19:28
*** Drago1 has quit IRC19:29
*** EricGonczer_ has quit IRC19:29
*** adrienverge has quit IRC19:31
*** Drago1 has joined #heat19:31
*** Drago1 has quit IRC19:35
*** Drago1 has joined #heat19:35
*** cdent has joined #heat19:48
*** EricGonczer_ has joined #heat19:49
*** kebray has joined #heat19:49
*** EricGonczer_ has quit IRC19:51
*** Drago1 has quit IRC19:52
*** cdent has quit IRC19:53
*** Drago1 has joined #heat19:53
*** charlesr has quit IRC19:55
*** ifarkas has quit IRC19:57
*** tonisbones has quit IRC19:58
*** vijendar has quit IRC19:59
*** vijendar has joined #heat20:06
*** EricGonczer_ has joined #heat20:12
*** jprovazn has quit IRC20:19
*** spzala has joined #heat20:21
*** vijendar has quit IRC20:25
*** vijendar has joined #heat20:25
*** EricGonczer_ has quit IRC20:30
*** achanda has quit IRC20:31
*** otoolee has quit IRC20:31
*** crose has joined #heat20:33
*** dteselkin has quit IRC20:36
openstackgerritMike Spreitzer proposed a change to openstack/heat-specs: Adding Optional Health to OS::Heat::AutoScalingGroup  https://review.openstack.org/12465620:37
*** EricGonczer_ has joined #heat20:38
*** dteselkin has joined #heat20:38
*** otoolee has joined #heat20:47
*** Drago1 has quit IRC20:48
*** sarob has joined #heat20:49
*** achanda has joined #heat20:49
*** EricGonczer_ has quit IRC20:49
*** Drago has quit IRC20:52
*** jdob has quit IRC20:55
*** julienvey has joined #heat21:03
*** fayablazer has joined #heat21:05
*** swygue has quit IRC21:07
*** dims_ has quit IRC21:14
*** dims_ has joined #heat21:15
*** rpothier has quit IRC21:20
*** JayJ has quit IRC21:20
*** JayJ has joined #heat21:20
*** andersonvom has quit IRC21:21
*** mspreitz has quit IRC21:23
*** arunrajan has joined #heat21:23
arunrajanbrint: Saw your response to https://jira.rax.io/browse/HEAT-761 We were in a team meeting and looking for that one but couldn't find it. Do you happen to have that Jira number handy?21:24
*** radez is now known as radez_g0n321:25
stevebakermorning21:27
*** __TheDodd__ has joined #heat21:28
*** thedodd has quit IRC21:29
shardystevebaker: hey, it's getting late so it's probably just me, but if you can spell out what you want for remove_policies that would be awesome & I'll revisit it in the morning21:30
shardyI thought I'd addressed your comments but evidently I'm missing some context from the plugpoint thing21:31
stevebakershardy: I did in the initial comment, just a list of maps instead of a top-level map21:31
stevebakershardy: dialing in?21:32
shardystevebaker: Ok cool, maybe I misinterpreted, somewhat sleep-deprived atm ;)21:32
stevebakershardy: I can't imagine why21:33
*** EricGonczer_ has joined #heat21:35
*** jmckind has quit IRC21:37
*** blomquisg has quit IRC21:39
*** miguelgrinberg has quit IRC21:39
*** aweiteka has quit IRC21:40
*** fayablazer has quit IRC21:41
*** gondoi is now known as zz_gondoi21:41
*** tomek_adamczewsk has quit IRC21:41
*** tomek_adamczewsk has joined #heat21:42
*** EricGonczer_ has quit IRC21:45
*** arunrajan has left #heat21:45
*** miguelgrinberg_ has joined #heat21:46
*** crose has quit IRC21:48
*** JayJ has quit IRC21:48
*** alexheneveld has quit IRC21:54
*** kebray has quit IRC21:57
*** packet has quit IRC21:58
*** miguelgrinberg_ has quit IRC22:00
*** sabeen1 has joined #heat22:01
*** julienvey has quit IRC22:01
*** julienvey has joined #heat22:02
*** sabeen2 has quit IRC22:04
*** alexheneveld has joined #heat22:05
*** andersonvom has joined #heat22:06
*** miguelgrinberg_ has joined #heat22:06
*** julienvey has quit IRC22:07
*** sarob has quit IRC22:08
*** andersonvom has quit IRC22:10
*** asalkeld has joined #heat22:12
asalkeldmorning22:12
*** jcoufal has quit IRC22:13
*** miguelgrinberg_ has quit IRC22:13
*** miguelgrinberg_ has joined #heat22:26
*** Drago has joined #heat22:29
*** miguelgrinberg_ has quit IRC22:30
*** julienvey has joined #heat22:32
*** vdreamarkitex has joined #heat22:32
*** alexpilotti has quit IRC22:32
*** __TheDodd__ has quit IRC22:33
*** julienvey has quit IRC22:36
*** achanda has quit IRC22:40
*** achanda has joined #heat22:44
*** sarob has joined #heat22:53
*** asalkeld has quit IRC22:56
*** Drago has quit IRC23:06
*** cody-somerville has joined #heat23:07
*** asalkeld has joined #heat23:13
*** cody-somerville has quit IRC23:18
*** sarob has quit IRC23:21
*** sarob has joined #heat23:22
*** sarob has quit IRC23:22
*** sarob has joined #heat23:23
*** che-arne has quit IRC23:23
*** sarob has quit IRC23:23
*** sarob has joined #heat23:24
*** sarob has quit IRC23:24
*** sarob has joined #heat23:24
*** che-arne has joined #heat23:24
*** sarob has quit IRC23:25
*** sarob has joined #heat23:25
*** sarob has quit IRC23:26
*** cody-somerville has joined #heat23:26
*** andersonvom has joined #heat23:31
openstackgerritOpenStack Proposal Bot proposed a change to openstack/heat: Updated from global requirements  https://review.openstack.org/12803823:41
*** aweiteka has joined #heat23:47
*** lifeless has quit IRC23:52
*** lifeless has joined #heat23:53

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!