08:01:31 #startmeeting heat
08:01:32 Meeting started Wed Jun 15 08:01:31 2016 UTC and is due to finish in 60 minutes. The chair is ricolin. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:01:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:01:35 The meeting name has been set to 'heat'
08:02:08 #topic Roll call
08:02:15 \o
08:02:16 hi
08:02:19 o/
08:02:23 o/
08:02:25 :)
08:02:51 #topic Adding items to agenda
08:03:06 hi
08:03:09 #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda#Agenda_.282016-06-15_0800_UTC.29
08:03:41 ricolin: could you add nested-depth event list
08:03:58 stevebaker: sure
08:05:05 #topic HOT generator
08:05:22 https://review.openstack.org/328822
08:05:42 team, I tried to put up the spec for the HOT generator ^^
08:06:27 this is mainly for generating HOT templates using a Python API. jdob and I tried to bring up a POC for the same
08:07:14 I would need your help reviewing it, and if possible I'm planning to push it for n-2 :)
08:07:47 Will give it a look :)
08:07:59 ricolin, sure. thanks.
08:09:01 Maybe it would also be interesting to build further features on top of this, like a stack template generator in the Horizon UI :)
08:09:52 ricolin, yes. that is a nice idea!
08:10:09 #topic nested-depth event list
08:10:33 hey, I have a spec-lite bug for this here https://bugs.launchpad.net/heat/+bug/1588561
08:10:33 Launchpad bug 1588561 in python-heatclient "event list REST API call should support nested stacks" [Medium,In progress] - Assigned to Steve Baker (steve-stevebaker)
08:10:46 and the heat and heatclient reviews are ready to go
08:11:15 https://review.openstack.org/#/q/topic:bug/1588561
08:11:22 o/
08:11:27 sorry I'm late
08:11:33 shardy: oh, perfect timing
08:11:57 shardy: I have a spec-lite bug for this here https://bugs.launchpad.net/heat/+bug/1588561
08:11:57 Launchpad bug 1588561 in python-heatclient "event list REST API call should support nested stacks" [Medium,In progress] - Assigned to Steve Baker (steve-stevebaker)
08:12:30 stevebaker: ah, yeah, I've been planning to pull all your optimization related patches locally and test w/tripleo
08:12:35 or have you already done that?
08:12:40 I'm getting 17s vs 1.7s for an event-list with ~800 events
08:13:20 shardy: I always test with tripleo, but I'm not creating big overclouds
08:13:45 stevebaker: ack - I'll re-test anyway, but I don't have access to huge resources either
08:13:58 shardy: that's a different series, but zaneb raised an objection https://review.openstack.org/#/c/317220/
08:13:59 stevebaker: will give it a try too.
08:14:14 I've been chatting to some folks that do tho, so we can potentially do some large scale tests at some point
08:14:54 I need to dig into how much we can get out of sqlalchemy for avoiding hitting the database, which is yak shaving into ripping out all our get-session logic :o
08:15:25 stevebaker: Maybe you can move https://review.openstack.org/#/c/323614/ out of the dependency chain
08:15:38 it's a good fix
08:15:57 ricolin: yeah, that will at least end up at the front of the series, or on its own
08:16:34 stevebaker: nice
08:17:00 stevebaker: Hmm, ok - I'll give it some thought and comment on the review
08:17:04 ta
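[For reference: a minimal sketch of how the nested event listing discussed above might be called from python-heatclient once the series lands. The nested_depth parameter name comes from the reviews linked above; the endpoint, credentials, and stack name are illustrative.]

    # Sketch: list events across nested stacks in one call, assuming the
    # in-review series adds a nested_depth parameter to events.list().
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from heatclient import client as heat_client

    auth = v3.Password(auth_url='http://controller:5000/v3',  # illustrative
                       username='admin', password='secret',
                       project_name='admin', user_domain_id='default',
                       project_domain_id='default')
    heat = heat_client.Client('1', session=session.Session(auth=auth))

    # One server-side call instead of the client recursing stack by stack,
    # which is where the 17s vs 1.7s difference mentioned above comes from.
    for event in heat.events.list(stack_id='overcloud', nested_depth=2):
        print(event.event_time, event.resource_name, event.resource_status)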
08:18:39 #topic external resource
08:18:41 https://review.openstack.org/#/c/135492/
08:18:57 I think after the team's review
08:19:27 hopefully this series can get merged
08:21:13 fine, move on!
08:21:20 ricolin: I raised the concern about doing deps for external resources in the last meeting, though I'm ok with it going in with the latest change.
08:21:36 though I still don't know why we need to do that.
08:21:46 ricolin: but if we waited that change could have its first birthday!
08:21:59 lol
08:22:25 ramishra: We can resolve all deps with external resources
08:23:31 not sure I understand, deps of an external resource have no meaning as we don't manage its lifecycle
08:23:32 but it will require some hard-coding in all the dep functions though
08:24:02 anyway, we should discuss it later.
08:24:09 sure
08:24:30 I'm ok with preventing deps :)
08:24:38 #topic convergence status
08:25:28 I hit https://bugs.launchpad.net/heat/+bug/1592374 yesterday
08:25:28 Launchpad bug 1592374 in heat "deleting in_progress stack with nested stacks fails with convergence enabled" [High,Confirmed]
08:25:30 We've not seen any major issues from other projects yet :)
08:25:34 shardy: since we already turned convergence on by default, is there any testing from TripleO on that?
08:25:52 shardy: is that when deleting just stalls? I'm hitting that
08:25:55 it may or may not be specific to convergence, but turning it off has definitely improved (or possibly fixed) it
08:26:02 stevebaker: yup
08:26:19 stevebaker: specifically it happens every time for me if you delete a stack containing nested stacks that are IN_PROGRESS
08:26:45 turning off convergence I hit a couple of suspect locking issues on delete, but they may be the other ones already reported that are specific to not-convergence
08:27:11 I hit this yesterday but I see my fix has landed already https://launchpad.net/bugs/1592243
08:27:11 Launchpad bug 1592243 in heat "Convergence: nested stacks are not populated with correct nested_depth" [High,Fix released] - Assigned to Steve Baker (steve-stevebaker)
08:27:23 stevebaker: I started a functional test, which appears to reproduce it locally, but it looks like there's a race I need to fix before it'll reproduce in the gate:
08:27:32 https://review.openstack.org/329460
08:27:47 one problem I've seen is with *very* large resource groups (over 500)
08:27:48 I think I have hit the same issue with using exponential backoff for retry https://review.openstack.org/#/c/328613/
08:28:32 create/update gets slower and slower as the sync point data grows, which can be mitigated with update policy, but that doesn't apply to delete
08:28:37 stevebaker: FWIW my deletes failed every time with a SoftwareDeploymentGroup of size 4
08:29:00 I'm hoping for ricolin's exp backoff to land soon to see if it helps
08:29:28 ricolin: I'm planning an experimental job for TripleO that enables convergence
08:29:37 ricolin: my local testing indicates it's not ready yet tho
08:29:54 stevebaker: I can reduce the syncpoint access even further
08:29:58 I suppose we can look at enabling the job anyway tho
08:30:12 shardy: yes, that would be good
08:30:13 stevebaker: but it might take more time to execute
08:30:43 shardy: cool
08:30:46 shardy: as tripleo ci does not use heat master and falls back to the last promoted commit, if the ci fails, can we not enable convergence and check the periodic jobs for issues?
08:31:31 ramishra: we've disabled convergence for all tripleo-ci jobs via puppet atm
08:31:48 yeah, I'm asking can we not just enable it?
08:31:52 ramishra: if one wanted to check with it enabled now, you could simply post a WIP patch to instack-undercloud reverting my patch that turned it off
08:32:23 basically the experimental job will do that via a conditional override of the hieradata inside instack-undercloud that sets convergence_engine
08:32:38 ramishra: No, because I've tested locally and it doesn't work
08:33:12 we've probably already promoted current-tripleo to beyond the commit which switched convergence on by default
08:33:21 so we can't rely on the periodic job to not promote
08:33:32 and even if we did, I suspect we'd be stuck with an increasingly old heat
08:33:59 shardy: ok, I was hoping that it's till behind the convergence switch ;)
08:34:11 s/till/still
08:34:29 ramishra: we'll want to keep picking up bugfixes (and new features), so a long-term heat pin won't work for us
08:34:43 I'll be happy to switch convergence on when we can prove the issues are fixed
08:34:56 but bear in mind, on the single-node undercloud we don't gain a lot of the benefits
08:35:08 shardy: yeah, I understand
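[For reference: the exponential backoff ricolin mentions (https://review.openstack.org/#/c/328613/) is a standard contention-reduction pattern. A generic sketch of the technique, not the code under review:]

    # Generic sketch of exponential backoff with jitter for retrying a
    # contended operation such as a convergence sync point update, so that
    # concurrent workers stop hammering the database in lockstep as the
    # sync point data grows. Illustrative only; not the actual patch.
    import random
    import time

    def retry_with_backoff(fn, max_attempts=10, base=0.3, cap=30.0):
        for attempt in range(max_attempts):
            try:
                return fn()
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                # Full jitter: sleep a random amount up to base * 2**attempt,
                # capped so the wait never grows unbounded.
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))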
08:36:20 #topic open discussion
08:36:40 I'd appreciate some eyes on these heatclient patches:
08:36:43 https://review.openstack.org/#/q/status:open+project:openstack/python-heatclient+branch:master+topic:bug/1590421
08:37:03 basically the osc commands to show template/environment mangle the output due to the cliff formatters
08:37:11 I've proposed what I think is a more sane format
08:37:18 e.g. something you can actually cut/paste from
08:37:54 I would like some reviews on https://review.openstack.org/#/c/294023/. It's been there for a long time. I was about to abandon it; Zane suggested keeping it in the queue for its fate ;)
08:38:06 shardy: I'll try them out
08:38:17 Similarly, if there are any cliff formatter experts, I could use some help with https://review.openstack.org/#/c/327205/
08:38:23 shardy: ramishra: try it out++
08:38:26 I can't figure out how to format a list of yaml files
08:38:38 I want the name of the file, then the pretty-printed yaml content
08:38:47 cliff won't let me do that atm
08:38:57 probably missing something, any help appreciated :)
08:39:49 shardy: you don't *have* to use a cliff formatter, you could just print it
08:39:54 stevebaker, ricolin: thanks!
08:40:05 stevebaker: aha, I wasn't sure if that was the done thing ;)
08:40:13 that would certainly be easier :)
08:40:14 we do it here and there
08:40:23 stevebaker: ack, maybe I'll do that then, thanks!
08:40:55 ramishra: is there any way to break that patch down in size?
08:41:08 shardy: also there is yaml markup to denote the beginning and end of a file in multi-file streams - check out the official spec
08:41:17 that's probably not helping wrt attracting reviews IMO
08:41:28 stevebaker: cool, thanks, will do
08:42:00 shardy: it's not a complex patch (though it looks big), just moved some tests around ;)
08:42:28 shardy: If breaking it down would help get it reviewed, I can spend time to do it :)
08:42:38 ramishra: ack, I'll try to check it out
08:43:17 Need some help and review on https://review.openstack.org/#/c/280201/
08:43:20 ramishra: I just think smaller patches tend to attract reviewers more easily, so if there's a logical way to split it, that may help
08:43:23 shardy: thanks
08:45:12 anything else? :)
08:45:25 going once
08:45:44 going twice
08:45:57 #endmeeting
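[For reference: the "just print it" approach stevebaker suggests for shardy's yaml file listing might look roughly like the sketch below in a cliff command, using the YAML document marker he points to for separating files. The FileList class and files_to_show() helper are hypothetical; only cliff's Command/take_action and self.app.stdout are existing API.]

    # Sketch: bypass the cliff table formatters and print each file name
    # followed by pretty-printed YAML, separated by the '---' document
    # start marker from the YAML spec.
    import yaml
    from cliff.command import Command

    def files_to_show():
        # Hypothetical stand-in for however the real command would fetch
        # its {name: content} mapping from the Heat API.
        return {'env.yaml': {'parameters': {'flavor': 'm1.small'}}}

    class FileList(Command):
        """Show a set of template/environment files as cut/paste-able YAML."""

        def take_action(self, parsed_args):
            for name, content in sorted(files_to_show().items()):
                self.app.stdout.write('---\n')          # YAML document start
                self.app.stdout.write('# %s\n' % name)  # file name as comment
                self.app.stdout.write(
                    yaml.safe_dump(content, default_flow_style=False))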