20:00:09 #startmeeting heat
20:00:11 Meeting started Wed Jan 14 20:00:09 2015 UTC and is due to finish in 60 minutes. The chair is asalkeld. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:15 The meeting name has been set to 'heat'
20:00:18 hello folks
20:00:22 o/
20:00:27 hi
20:00:33 hi
20:00:35 hi
20:00:41 o/
20:00:48 #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda
20:00:53 hi
20:01:58 #topic Review action items from last meeting
20:02:08 #link http://eavesdrop.openstack.org/meetings/heat/2014/heat.2014-12-17-20.00.html
20:02:19 \oo/
20:02:21 I don't think i did that :-O
20:02:42 #action asalkeld to write up an email to the ml re: meetup
20:02:52 oops, I'll get on that
20:03:17 #topic Adding items to the agenda
20:03:26 any new topics for today?
20:03:38 o/
20:04:10 hi
20:04:13 probably I had one small question about functional tests
20:04:20 ok
20:04:22 (slightly distracted)
20:04:50 maybe zaneb can give us an update on his convergence work
20:04:55 Are we still doing critical issues sync as an item?
20:05:04 we can
20:05:10 that won't take long :/
20:05:20 asalkeld: Ok, just wanted to draw attention to https://bugs.launchpad.net/heat/+bug/1384750
20:05:25 #topic critical issues sync as an item?
20:05:35 which has been confirmed today and isn't targeted to k2
20:05:46 #link https://bugs.launchpad.net/heat/+bug/1384750
20:05:47 I reported it but I don't have a fix atm
20:06:10 ok, maybe i can look at it
20:06:37 Ok, thanks, just wanted to get some eyes on it
20:07:04 #topic functional tests
20:07:07 skraynev_: ...
20:07:54 cool. I wonder what we plan to do for functional tests, i.e. do we want to have test create/delete for each resource?
20:08:50 skraynev_: not sure, i am adding more logic tests into functional
20:09:11 updates, nested with env, files
20:09:27 deeper tests that should not be in unit tests
20:09:44 yes, but it covers basic resource structure.
20:09:52 but you might be thinking of scenario tests
20:10:18 you can't really test all resources in isolation
20:10:19 scenario is not equal to functional?
20:10:36 agree
20:10:38 skraynev_: FWIW, I don't think we should duplicate stuff we can test in the unit tests, as there's more overhead in doing the functional testing
20:10:39 skraynev_: tests which have groups of resources interacting might be more useful
20:10:46 scenario tests link a bunch of services together
20:11:03 stevebaker: sure
20:11:19 yeah, and functional tests are about proving heat behaves in a specific way, testing multiple levels of our implementation
20:12:19 skraynev_: scenario is for autoscaling
20:12:32 ok, so you have no objections if it's in some bunches, e.g. for neutron, sahara, autoscaling
20:12:35 and other non-trivial use cases
20:12:43 so we'd better come up with IRL-like usage scenarios
20:12:48 skraynev_: that would be great
20:13:19 pas-ha: "IRL"?
20:13:26 in real life
20:14:08 common use cases which we use constantly on deployments ;)
20:14:10 dang, i am bad at acronyms
20:14:28 ok, let's move on
20:14:43 #topic zane update on convergence
20:15:09 zaneb: ..
20:15:13 still working on it
20:15:21 struggling to find time at the moment
20:15:36 i see
20:15:40 but close to putting something out there for discussion
20:15:56 that resolves the last issues about storing progress
20:16:06 please prioritise as much as possible
20:16:22 unfortunately by throwing away a lot of the efficiency gains I hoped to include :/
20:16:46 but it doesn't look like there is much of a choice
20:17:14 correct first
20:17:34 indeed
20:17:34 what efficiency gains are we talking about?
20:17:58 avoiding lots and lots and lots of writes to the database
20:18:26 so now it's going to be chatty
20:18:32 :(
20:18:39 basically to ensure we maintain all of the state we're going to be hammering the DB
20:19:20 ok
20:19:31 but if we don't, then it's impossible to correctly hand off from a previous update that was still in progress on a resource to the current update
20:19:55 sure
20:20:14 other than polling in a loop, which is (a) unpalatable, and (b) also hammers the DB anyway
20:21:02 zaneb: keep anat and co. in the loop, they are frustrated and need some idea of the progress
20:21:27 ok
20:21:32 didn't we have to optimize DB access due to scale limitations related to metadata/DB access previously?
20:22:01 stevebaker: did that
20:22:07 e.g. if we make the DB access much more high-bandwidth, will it bring back the problem we were trying to solve?
20:22:15 e.g. the scalability problem
20:22:18 yes, it is much better now
20:22:21 possibly
20:22:31 i think that was the guest polling
20:22:39 I suppose scaling the DB is more of a solved problem, but I'm wary of reintroducing those problems again..
20:22:40 although hopefully it won't get as O(n^2) as it was before stevebaker's improvements
20:22:49 it was the overhead of stack parsing
20:22:59 i see
20:23:13 Okay, cool, as long as we're not going back to square one in that regard :)
20:23:15 joined fetches helped a lot
20:23:43 let's move on
20:23:51 #topic kilo-2
20:24:00 #link https://launchpad.net/heat/+milestone/kilo-2
20:24:18 that's due on 5 feb
20:24:42 so if you have a bp that is not likely to make it, please tell me
20:25:09 i don't know about https://blueprints.launchpad.net/heat/+spec/support-cinderclient-v2
20:25:20 in "unknown state"
20:26:10 any questions about kilo-2 and/or specs?
20:26:25 I think we have some logic around cinder v2 already
20:26:51 now we create the client with the max version available
20:27:04 pas-ha: maybe that is already implemented?
20:27:08 and jump around in a couple of places as the APIs are not exactly the same
20:27:21 i'll check the git history
20:27:35 need to check with the author what exactly was meant by this bp
20:27:54 i did send an email, so far no response
20:28:55 it's implemented
20:29:02 Adrien Vergé
20:29:14 2014-09-23
20:29:17 nice
20:29:59 ok, we can move to open discussion now
20:30:07 #topic open discussion
20:30:21 or close early, I am easy
20:30:50 blueprint support-cinder-api-v2
20:31:06 the blueprint was wrong, and didn't update the spec
20:31:25 cool
20:31:49 I have a question about removing scheduler logic from handle*/check*complete methods
20:32:16 pas-ha: i think that's only complex stuff
20:32:24 zaneb: might remember
20:32:26 pas-ha: do it
20:32:52 I tried to get rid of them in the current architecture, but ended up with a simplified TaskRunner :(
20:33:25 pas-ha: FSMs are your friend
20:33:43 Does anyone know the status of the Mistral resources?
20:33:54 just pass the state as the parameter to check_*_complete
20:33:58 I've seen some updates to the spec lately, but interested how actively the code is being worked on
20:34:03 instead of passing a continuation
20:34:08 The client is ready,
20:34:17 resources are being worked on
20:34:33 shardy: yeah it's active
20:34:39 for starters there'd be workflow and crontimer
20:35:04 zaneb, thanks, will dig in that direction
20:35:17 tho' shardy wants to clarify
20:35:26 not sure what you are asking
20:35:31 asalkeld: I'd love it if folks could follow up on my application HA thread on openstack-dev with more concrete examples of how the mistral integration may help solve the problem
20:35:55 I know you and zaneb were in favour of that direction, but I'm fuzzy on the details tbh
20:36:08 e.g. exactly how the interaction between the workflow and orchestration would work
20:36:30 i have forgotten that thread :-O, need to find it
20:36:45 Mistral calling Heat would be a step in the workflow
20:36:59
http://lists.openstack.org/pipermail/openstack-dev/2014-December/053447.html
20:37:03 asalkeld: ^^ ;)
20:37:09 ta
20:37:41 so instead of type: OS::Heat::ScalingPolicy
20:37:45 you have a workflow
20:38:03 and the workflow does an update on the stack
20:38:11 to do whatever you want
20:38:24 yeah, exactly
20:38:31 (more flexible than continually adding new stuff to heat resources)
20:39:13 i feel like heat autoscaling is too inflexible for many
20:39:24 Ok, that sounds OK (although maybe s/update/signal ref my mail)
20:39:30 shardy: I may ask the folks who are working on it to give some examples of use cases (maybe in g_docs). Or maybe a demo recording.
20:39:56 but I'd really like more examples of what that workflow would look like, e.g. a Heat resource which defines a mistral workflow, which then calls heat
20:39:57 yeah, we need some concrete examples
20:40:30 in my defense, i did ask for that in the spec
20:40:35 Then, when the resources start getting posted, we can experiment with the HA use case I described and figure out if it'll work :)
20:40:39 asalkeld: :)
20:40:46 asalkeld: Yeah, so did I :)
20:41:26 https://review.openstack.org/#/c/143989/7/specs/kilo/mistral-resources.rst
20:41:31 my top comment
20:42:03 i like my use case 3 ;)
20:42:13 but you also understand that it's not so easy to demonstrate the whole of a use case briefly in a spec, IMO
20:43:04 skraynev_: sure, but blindly making resources that match the upstream api is not good either
20:43:19 we need to make sure that the template looks sensible
20:43:29 and it actually solves a problem
20:43:37 asalkeld: fully agree
20:43:58 asalkeld: +1
20:44:16 * asalkeld is amazed at how i am doing without coffee
20:44:19 :)
20:44:24 so there is a reason why we want to upload the first available code for the resources
20:44:44 it will give us a chance to touch it and play with it
20:45:00 skraynev_: code is great
20:45:21 we can do actual experimentation
20:45:47 i'd suggest mistral/heat integration is esp.
important and we need to get it right
20:46:01 Yeah, posting the code early would be good, a lot of us are interested in this :)
20:46:23 shardy: re: decouple-nested
20:46:30 * shardy hides
20:46:35 i am onto the update-policy tests
20:46:47 asalkeld: Sorry about the test nightmare I gave you... :(
20:46:47 not too many tests to fix after that
20:47:05 so i am hopeful of landing it in time
20:47:12 no worries
20:47:14 asalkeld: awesome! I owe you many beers at the next summit :)
20:47:32 hopefully it will be valuable to convergence too
20:47:51 I actually think it'll be a really good thing for scalability with big stacks w/ lots of provider resources etc
20:48:09 I don't have any numbers to back that up tho :)
20:48:11 btw, functional tests - looks like all instance group tests were silently skipped, and when enabled one of them fails - https://review.openstack.org/#/c/147136/
20:48:37 ooo, ok thanks pas-ha
20:49:01 ouch, nicely spotted pas-ha
20:49:53 I don't understand why our test framework allows us to have silent-skip-on-error
20:49:53 pas-ha: hopefully the lab in czech is working today
20:50:17 zaneb: yeah, we need to just fail
20:50:21 not skip
20:50:24 well, I use my beefy desktop box
20:50:38 that was not an error
20:50:44 of the test
20:50:50 same for me
20:50:51 but an error of the config
20:50:53 i need to get one, just have a laptop
20:50:59 pas-ha: isn't the keypair only for manually poking at failed tests?
20:51:34 stevebaker: maybe we just need to remove the use of it
20:51:48 stevebaker, https://github.com/openstack/heat/blob/master/heat_integrationtests/functional/test_instance_group.py#L116
20:51:53 asalkeld: I like being able to debug failed tests
20:51:59 and we have None by default in config
20:52:26 https://github.com/openstack/heat/blob/master/heat_integrationtests/functional/test_instance_group.py#L158
20:52:57 pas-ha: ah.
That test should be creating its own keypair if one is not configured
20:53:12 yep, that's what I pointed out in the review
20:53:36 does launchconfig require the keyname?
20:54:13 cool, not required
20:54:33 we can just remove references to it
20:54:33 nope, optional
20:54:54 but stevebaker likes to debug failed stuff
20:55:03 and me too, frankly
20:55:04 5 mins left
20:55:15 pas-ha: i am using a random string there
20:55:30 don't need a key to debug a random string
20:55:43 aha, ok
20:55:56 pas-ha: just do this https://github.com/openstack/heat/blob/master/heat_integrationtests/scenario/test_server_cfn_init.py#L31
20:56:23 pas-ha: that should probably be a common method
20:56:43 indeed, could be moved
20:56:44 can't test.py setup() do that?
20:57:23 whatever
20:57:40 some logic would be needed not to create too many keypairs
20:58:56 the use of it should be removed from the instance group test (as it's not needed there)
20:59:16 i'll add that to the review
20:59:27 that is all folks..
20:59:31 #endmeeting
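[Editor's note: the 20:32-20:34 exchange above, where zaneb suggests "just pass the state as the parameter to check_*_complete instead of passing a continuation", can be illustrated with a small sketch. Everything below is hypothetical: the class, method names beyond the handle_create/check_create_complete pattern, and the state labels are invented for illustration and are not Heat's actual plugin code. It only shows the FSM shape being proposed: the handle step returns a plain, serializable token, and the check step advances a finite-state machine from that token alone, so the scheduler carries no generator or closure.]

```python
class FakeServerResource:
    """Toy resource sketch (hypothetical, not Heat code).

    handle_create() returns only serializable state -- no generator,
    no continuation. check_create_complete() is then driven purely by
    that token, so any scheduler can re-poll it, even after the state
    has been round-tripped through a database.
    """

    def handle_create(self):
        # Start the FSM; in a real resource this would also kick off
        # the API call whose completion we are about to poll for.
        return {'state': 'SERVER_CREATING'}

    def check_create_complete(self, token):
        # Advance the FSM one step per poll. The states here are
        # made up; each transition stands in for "observed progress
        # via an API query".
        state = token['state']
        if state == 'SERVER_CREATING':
            token['state'] = 'ATTACHING_VOLUME'  # pretend server went ACTIVE
            return False
        if state == 'ATTACHING_VOLUME':
            token['state'] = 'DONE'              # pretend volume attached
            return False
        return state == 'DONE'


# A minimal polling loop standing in for the scheduler: it knows
# nothing about the resource except "call check with the token".
def poll_until_done(resource, token):
    polls = 0
    while not resource.check_create_complete(token):
        polls += 1
    return polls + 1
```

Because the token is plain data, this style also fits the convergence discussion earlier in the log: the in-progress state can be persisted and picked up by a different engine process mid-operation.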