20:00:40 #startmeeting heat
20:00:41 Meeting started Wed Jul 2 20:00:40 2014 UTC and is due to finish in 60 minutes. The chair is zaneb. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:45 The meeting name has been set to 'heat'
20:01:04 I haven't had any volunteers to chair yet this week
20:01:33 #topic who is here?
20:01:41 o/
20:01:43 hi
20:01:53 Hi
20:01:58 hey o/
20:01:58 Hi
20:02:00 hi
20:02:03 o/
20:02:06 \o
20:02:11 hi
20:02:57 shardy?
20:03:18 #topic Review action items from last meeting
20:03:21 ./
20:03:27 both for me
20:03:37 zaneb add mid-cycle meetup planning to Heat PTL guide on wiki
20:03:47 I'm way ahead of you guys
20:03:56 #link https://wiki.openstack.org/wiki/Heat/PTLGuide
20:04:07 down the bottom there
20:04:33 zaneb put link to PTL guide in Heat wiki page
20:05:12 tbh I don't think that's necessary. not every wiki page is accessible from the main Heat page, and this one is by definition only of interest to one person at a time
20:05:49 we have https://wiki.openstack.org/w/index.php?title=Special%3APrefixIndex&prefix=Heat&namespace=0 to list all wiki pages
20:06:00 that is linked from /wiki/Heat
20:06:20 #topic Adding items to the agenda
20:06:29 anybody?
20:06:53 I've already added mine
20:07:12 capital
20:07:40 #topic Mid-cycle meetup
20:08:02 I had one question, but I suppose it can be moved to the free discussion time :)
20:08:04 #info Mid-cycle meetup is happening in Raleigh on Aug 18-20
20:08:16 #link https://etherpad.openstack.org/p/heat-juno-midcycle-meetup
20:08:34 can people please sign up on that etherpad ^
20:09:04 everything is confirmed, so now is the time to be getting approval and booking
20:09:29 the TripleO experience with reserving a hotel block seems to suggest that we shouldn
20:09:36 shouldn't bother
20:10:18 #info no hotel block - just start booking
20:10:49 #topic reviewing client-plugins
20:10:53 stevebaker
20:10:54 In other words, don't let the lack of hotel blocks be a blocker.
20:10:56 is the venue the same as TripleO?
20:11:02 wirehead_: correct
20:11:15 #link https://review.openstack.org/#/q/status:open+project:openstack/heat+branch:master+topic:bp/client-plugins,n,z
20:11:16 pas-ha: yes, it's Red Hat Tower in downtown Raleigh
20:11:23 zaneb, thanks
20:11:30 AKA "the patch set from hell" ;)
20:11:48 randallburt: lol
20:12:02 I started blueprint client-plugins a month ago and it is now quite a long patch series which has so far seen very few reviews.
20:12:35 randallburt: I'd be surprised if I haven't posted worse ;)
20:12:44 I spend most of my time doing rebases with other changes that have landed, and I'd rather be doing something else ;)
20:12:47 zaneb: so would I :D
20:13:14 stevebaker: fwiw, I've pinged my folks to look at that set, but I'll crack more whips
20:13:33 so in the nicest possible way, I'd like to ask that this series get priority for review attention
20:13:55 if there is anything else I can do to make reviews more likely, please let me know
20:14:01 stevebaker: did you and asalkeld talk over stevedore? (I admit to not having looked at the patches in a while…)
20:14:16 stevebaker: can you build me a clone?
20:14:32 * randallburt would like a pony
20:14:33 stevebaker: I have blocked some time off tomorrow to look at those
20:15:39 randallburt, I've switched client plugins to using stevedore, and asalkeld has started a series which converts everything else. I think 80% of my series can land before me and Angus need to coordinate on the order that things need to merge
20:15:43 I recommend testing these patches with some templates if you have the ability to do so
20:16:04 stevebaker: awesome. I'll also block off some time tomorrow to review/test
20:16:21 #action review the client-plugins patch series
20:16:24 80% is about 27 changes that need reviews ;)
20:16:26 imma miss my weekly release next week I bet :(
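[Editor's sketch for reviewers new to the series: roughly how stevedore-based plugin loading works. The 'heat.clients' entry-point namespace and the helper name are illustrative, not necessarily the names used in the patches linked above.]

    # Minimal sketch of entry-point plugin discovery with stevedore,
    # assuming plugins are registered under a 'heat.clients' namespace
    # in each package's setup.cfg.
    from stevedore import extension

    def load_client_plugins(namespace='heat.clients'):
        # Discover every plugin registered under the namespace.
        # invoke_on_load=False: don't instantiate anything yet; plugins
        # can be constructed lazily, per-context, on first use.
        mgr = extension.ExtensionManager(namespace=namespace,
                                         invoke_on_load=False)
        # Map plugin name (e.g. 'nova') to the class it exposes.
        return dict((ext.name, ext.plugin) for ext in mgr)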
20:16:52 #topic AZ isolation in practice
20:17:00 Mike Spreitzer and I are wondering about assertions that AWS AZs map to OpenStack AZs. The specific Heat concern is that there should be a way to spread scaling groups across entities that correspond to AWS AZs, which would either be OpenStack regions or OpenStack AZs within a region.
20:17:16 this doesn't sound like a question that is in-scope for Heat to me
20:17:19 I've been looking for designs, or guidelines, or installer frameworks that would result in a separation of the stack components (heat, rest of openstack) which matches (roughly) the physical separation of OpenStack AZs.
20:17:20 oh, and I'm sitting on a beach from the middle of next week, so there is another reason for urgency on reviews
20:18:01 zaneb: it has a bearing on how scaling group spreading is implemented.
20:18:28 any pointers?
20:18:34 BillArnold: OpenStack provides AZs for the purpose of allowing operators to deploy something analogous to AWS AZs
20:18:45 whether they do or not is up to the operators
20:18:45 BillArnold: so we need a way to specify that in the scaling group, but isn't it up to Nova to handle the actual scheduling/placement?
20:19:07 stevebaker: ah, good to know. could you have another look at the action aware sw config spec before you leave. I added some comments based on yours and made some changes.
20:19:10 randallburt: about these patches, could you please look at the last comments on https://review.openstack.org/#/c/97975/16, possibly it's ok :)
20:19:19 skraynev: k
20:19:22 tspatzier, ok, will do
20:19:28 randallburt: thx
20:19:41 stevebaker: thanks. and I'll give prio to your changes tomorrow
20:19:48 BillArnold: AZs really only apply to Nova servers/EC2 instances + volumes
20:19:51 tspatzier, thanks
20:19:51 randallburt: instance groups already have a list of AZs as a property. Nothing acts on it though
20:20:07 zaneb: yes
20:21:01 BillArnold: k. so it's a matter of some sort of selection algorithm on that property value then?
20:21:14 as randallburt said, we can already pass an AZ for a server to Nova. now we just need to do it
20:21:20 and then pass that info on to the nova create command when we pop an instance?
20:21:24 SMOP, &c.
20:21:59 randallburt: yes, with a scaledown instance deletion policy close to what AWS has by default
20:22:44 so the answer is just to have an HA heat (and rest of openstack) for a region?
20:23:46 BillArnold: yes, you want your entire control plane highly available
20:24:01 zaneb: ok, we'll continue on the mailing list
20:24:32 but that has nothing to do with AZs. AZs are about which compute servers Nova schedules instances on
20:24:43 ok
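[Editor's sketch of the "just pass the AZ to Nova" idea above: round-robin spreading of group members across the group's AZ list, with Nova doing the actual placement within each zone. The helper and its arguments are illustrative, not an existing Heat API; the novaclient availability_zone parameter is real.]

    import itertools

    def spread_across_zones(nova, name_prefix, image, flavor, zones, count):
        # Cycle through the group's availability zones property,
        # e.g. ['az1', 'az2'], so successive instances alternate zones.
        zone_cycle = itertools.cycle(zones)
        return [nova.servers.create('%s-%d' % (name_prefix, i),
                                    image, flavor,
                                    availability_zone=next(zone_cycle))
                for i in range(count)]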
20:24:49 #topic Critical issues sync
20:25:02 do we have any outstanding critical issues?
20:25:15 https://review.openstack.org/#/c/103716/ is a proposed fix for a critical issue
20:26:27 it's looking like an artificial fix for an equally artificial 'problem'
20:26:43 stevebaker: why does it fix the problem?
20:27:03 I'm inclined to go for it, but it would also be nice to talk to the SQLAlchemy folks about the underlying issue
20:27:38 BillArnold: the problem occurs when we cancel a thread at a particular stage of performing a DB operation
20:28:03 BillArnold, best guess is that SQLAlchemy sometimes errors when its thread is killed. The error itself is harmless, except that in the gate we fail the job if there are any ERROR entries in the heat log
20:28:13 Can this patch really prevent that problem?
20:28:39 which turns out to be quite likely to hit when we are trying multiple times in quick succession to delete an empty stack
20:28:46 elynn: per comments and my review, not completely, no.
20:29:10 but it reduces it enough to keep the gates from blocking
20:29:10 elynn, yes. it prevents the issue when multiple deletes are requested on a stack which has no resources (so it will delete quickly)
20:29:41 it is also critical because it is blocking an important change from shardy
20:29:47 stevebaker: but it's not a 100% guarantee that the issue will never surface under some other conditions, yes?
20:29:59 not that I would hold up the train for that, considering
20:30:00 randallburt: correct
20:30:25 although it's vanishingly unlikely that you would hit this in real-world conditions
20:30:31 zaneb: agreed
20:30:36 randallburt, correct, it will not prevent the problem in production, but it is "harmless" ERROR logging
20:31:03 and, as far as we know, there's no impact if you do - but that is the part that I would like us to investigate further
20:31:22 zaneb: no problem there, but I +2 for the interim.
20:31:32 randallburt: agreed
20:31:57 so if this makes the gate happy, it's probably good to make the patch series from hell land ;-)
20:32:31 I think we already reverted some of shardy's performance improvements to make the gate happy ;)
20:33:13 zaneb: I remember, but this one seems more harmless
20:33:42 yep, it will be nice to get shardy's patch back in
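[Editor's sketch of the failure mode described above, not the actual fix in https://review.openstack.org/#/c/103716/: killing an eventlet greenthread raises GreenletExit inside whatever it is doing, and if that happens mid-way through a SQLAlchemy operation, a spurious ERROR can land in the log, which alone fails the gate job. One way to keep the log clean is to treat the cancellation as expected.]

    import logging

    from greenlet import GreenletExit

    LOG = logging.getLogger(__name__)

    def guarded_db_call(session, operation):
        try:
            return operation(session)
        except GreenletExit:
            # The thread was deliberately cancelled (e.g. a second
            # delete arrived for a nearly-empty stack). Harmless in
            # practice, so log below ERROR to satisfy the gate's
            # log check, then re-raise so the cancellation proceeds.
            LOG.info('DB operation interrupted by thread cancellation')
            raise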
20:33:48 #topic Open Discussion
20:33:55 skraynev: you had something?
20:33:57 there is another change before that btw https://review.openstack.org/#/c/103715/
20:34:44 yeah, something I ran into on review https://review.openstack.org/#/c/72336/
20:34:51 stevebaker: +A
20:35:28 wow, that's a long review
20:35:29 the question is about validation, when the validate method uses properties of dependent resources
20:36:55 AFAIK, currently during stack-create we do validation and then handle_create
20:36:59 zaneb, it is based on stevebaker's
20:37:11 pas-ha, \o/
20:37:27 stevebaker, as you asked :)
20:37:34 but in this case we have a reference to another resource
20:37:49 skraynev, you can raise validation errors in handle_create
20:38:41 * randallburt fights urge to ramble all over that resource
20:38:54 randallburt: ramble?
20:39:02 stevebaker: do you suggest moving this short validation part into handle_create?
20:39:10 rambleburt
20:39:14 randallburt, don't hold it inside, I'm all ears and eyes
20:39:15 ^^ :)
20:40:14 pas-ha: well, it wouldn't be fair since I need to do some research into sahara, but it seems like having the user specify the cluster template, when the service is able to tell you what image to use, makes the resource properties needlessly cumbersome.
20:40:24 skraynev, yes. I think validation that checks on other resources should happen in handle_create
20:40:57 stevebaker: I just thought about some tricky validation for such cases, but I like your idea :)
20:41:01 pas-ha: but I only passed my eyes over the code just now so I don't want to derail what's already been a long patch set.
20:41:01 stevebaker++
20:41:32 randallburt, no, mine is only 4 items long :)
20:41:52 re: validating other resource dependencies in create: it's the only way to be sure.
20:42:17 skraynev: how is the reference to another resource done? by means of get_attr?
20:42:20 pas-ha: oh, whoops :D
20:42:49 tspatzier, by doing API calls
20:43:07 stevebaker: ok, thanks
20:43:37 stevebaker: thx, I was stuck thinking that validation could not be moved out of validate
20:43:59 not really, the API call is to check that a particular combination of properties is met
20:44:03 tspatzier: no, through get_resource
20:44:13 skraynev, Server does resource validation in handle_create
20:44:20 I need to go now, school run
20:45:13 property validation is done as a pre-step of handle_create, as I found recently when working on a patch. so yeah, doing more validation in handle_create seems to make sense.
20:45:29 for stuff you can't know until runtime, handle_create is the place to do it
20:45:57 agree with zaneb and tspatzier
20:46:18 tspatzier: when we pass cluster-template-id (if it is an existing template it's ok, but when we create this template in the heat template, it fails)
20:47:14 the reason it failed is because of validation?
20:48:30 because when you use get_resource and then validate, the function cannot be resolved correctly, because the referenced resource is not created yet and has no id
20:48:36 for validation
20:48:44 elynn ^
20:49:31 elynn, on create, stack.validate is called first, which calls validate for all of its children, and those are not created yet
20:49:37 so one more reason to do it in handle_create, because this should not be executed until the other resource is in CREATE_COMPLETE
20:49:48 now I see the problem, will change
20:51:03 pas-ha: I added a comment as a reminder.
20:51:08 ok, if there's nothing else I'm going to end the meeting
20:51:12 another topic that I've run into several times -
20:51:20 So my question is resolved
20:51:21 doh! sooo close :D
20:51:25 * zaneb hovers
20:51:43 do we need to have a single place/function to understand whether heat runs on nova-net or neutron
20:51:47 ?
20:52:14 since several places already need this info for proper validation
20:52:30 pas-ha: hrm. "maybe"? That sounds like something to hash out on the ML IMO.
20:52:37 ok
20:52:47 zaneb: you're welcome ;)
20:52:52 lol
20:52:55 pas-ha: do you mean the case of checking services in keystone?
20:53:00 yes
20:53:24 could even be a config option
20:53:45 not sure if it is the best way (checking keystone)
20:54:07 but since it keeps cropping up, it looks like it had better be a single point of checking
20:54:33 ok, to the ML then :)
20:54:43 * zaneb looks forward to the day when neutron replaces nova-network
20:54:57 zaneb: +100500
20:55:05 cool, thanks everyone!
20:55:07 #endmeeting
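
[Editor's sketch of the validation pattern settled on in the Open Discussion above: static checks stay in validate(), while anything that needs a referenced resource to exist (a property filled in via get_resource) waits for handle_create(), by which point dependencies are CREATE_COMPLETE. The class, property, and client-call names are illustrative, not the real Sahara resource; self.client() stands in for whatever client plugin the resource uses.]

    from heat.common import exception
    from heat.engine import resource

    class ExampleCluster(resource.Resource):
        def validate(self):
            # Static checks only; resources referenced via get_resource
            # may not exist yet, so don't resolve or call out here.
            super(ExampleCluster, self).validate()

        def handle_create(self):
            # Dependencies are CREATE_COMPLETE by now, so the reference
            # resolves and an API call to check the property
            # combination is safe.
            template_id = self.properties['cluster_template_id']
            if self.client().cluster_templates.get(template_id) is None:
                raise exception.StackValidationFailed(
                    message='unknown cluster template %s' % template_id)
            # ... proceed with the actual create ...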
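
[Editor's sketch of one shape the nova-net/neutron check raised by pas-ha could take, as a starting point for the ML thread: consult the keystone service catalog, where neutron registers itself as the 'network' service. The helper name is hypothetical, and a config-option override, as suggested above, could sit in front of it.]

    def is_neutron_deployed(service_catalog):
        # The service catalog lists one entry per registered service;
        # nova-network deployments have no 'network' entry.
        return any(svc.get('type') == 'network' for svc in service_catalog)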