20:00:07 #startmeeting heat
20:00:08 Meeting started Wed Nov 19 20:00:07 2014 UTC and is due to finish in 60 minutes. The chair is asalkeld. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:12 The meeting name has been set to 'heat'
20:00:20 #topic rollcall
20:00:31 o/
20:00:36 o/
20:00:38 \o
20:00:45 \o
20:00:46 o//
20:00:50 hi
20:01:04 hello
20:01:19 o/
20:01:21 hello all
20:01:36 #link https://wiki.openstack.org/wiki/Meetings/HeatAgenda
20:02:20 #topic review last actions
20:02:43 that would be pre-summit actions
20:02:52 egh, it's been such a long time - not sure this is relevant
20:03:05 let's move on
20:03:23 #topic add items to the agenda
20:03:41 anything new?
20:04:08 the convergence syncup is scheduled after the meeting, right?
20:04:14 yip
20:04:15 asalkeld: RPC exceptions
20:04:25 ok
20:04:49 #topic Critical issues sync
20:04:56 stevebaker: Ah, also the plan re RPC versioning and versioned objects would be good to clarify
20:05:03 +1
20:05:11 sure shardy
20:05:17 inc0: the syncup will be on this channel?
20:05:30 skraynev_, I think #heat
20:05:39 skraynev_ #heat
20:05:48 no critical bugs?
20:06:03 got it, thx
20:06:22 o/
20:06:27 ok, moving on
20:06:45 #topic Organize a bug cleanup day
20:06:59 therve raised this at the summit
20:07:21 seems like a good idea
20:07:23 this is more about cleaning up old bugs that are no longer relevant
20:07:26 let me jump in a bit please - in glance there was a bug cleanup day
20:07:30 does cleanup == triage?
20:07:36 There's a huge number of old blueprints too
20:07:36 yeah
20:07:48 it's about tagging, triaging, confirming, setting priority
20:07:55 could we use this chance to abandon ancient changes too? with a nice message
20:08:10 sure
20:08:41 maybe make an etherpad and propose blueprints to die?
20:08:49 stevebaker: +1000
20:09:11 asalkeld, what about discussing it ad hoc here on irc?
20:09:24 sure
20:09:37 How about the 26th to do this? next week
20:09:38 if we're all focused on this, it should be pretty efficient
20:09:39 so is one day enough?
20:09:54 stevebaker, shardy: what would we want to abandon that isn't auto-abandoned already?
20:10:09 zaneb: there is no auto-abandon anymore AFAIK
20:10:10 one workday in every tz
20:10:10 zaneb: auto-abandon hasn't been a thing for many months
20:10:18 orly?
20:10:21 Thanksgiving is on the 27th, so many US folks will be travelling, FWIW
20:10:29 zaneb, we need to do auto-abandon's job now
20:10:30 so there's stuff sitting there for months with negative feedback and no updates from the owner
20:10:38 3rd Dec?
20:10:49 i'll be traveling
20:10:56 (3-6
20:11:09 zaneb: It's been discussed (again) on the ML recently, some folks like crufty old patches, apparently
20:11:20 2nd Dec?
20:11:29 weirdos
20:11:35 i am more in favor of doing this over a week - not sure it can be done in a day
20:11:45 or the 1st, Monday - everyone hates Mondays so we'll try to get it over with asap ;)
20:11:53 lol
20:11:59 inc0: that is my Sunday
20:12:10 Why not the 24th or 25th?
20:12:10 ok, let's make it the 2nd
20:12:15 inc0: oh, I mean Tuesday. as you were
20:12:16 and see how it goes
20:12:39 asalkeld: want to send out an announcement?
20:13:05 #action asalkeld to send out an announcement about the bug cleanup day
20:13:19 #topic https://wiki.openstack.org/wiki/CrossProjectLiaisons
20:13:33 please have a look ^
20:14:05 is randall about?
20:14:15 I still think about API :)
20:14:32 we need a stable branch liaison
20:14:36 asalkeld: I can take stable branch if you like
20:14:55 cool, just put your name there
20:14:57 Vulnerability management
20:15:12 any takers?
20:15:23 asalkeld: Why do we need a liaison for that when there's already a dedicated sub-team?
20:15:45 shardy: I suppose it's just a contact person
20:15:45 shardy, cos' it's on the page and someone is asking?
20:15:59 shardy, the liaison maintains the members of the $PROJECT-coresec team
20:16:03 someone involved in running a public heat endpoint would be ideal for vuln management
20:16:26 agree, anyone from rax here?
20:16:32 asalkeld: me
20:16:32 asalkeld: Sure, I'm just saying it doesn't make much sense, beyond the fashion for liaison-all-the-things ;)
20:16:42 jasond`, you keen?
20:17:03 shardy, you want to take an action to fight it?
20:17:07 asalkeld: let me talk to randall about that and get back to you
20:17:12 asalkeld: Not really
20:17:17 presumably the PTL is the default liaison for all the things
20:17:18 who are the current members of the VM team?
20:17:18 :)
20:17:23 asalkeld: he stepped away for a minute
20:17:29 iirc it's me, shardy and jasond`?
20:17:35 jasond`, no problem
20:17:43 zaneb: I thought I was on it too, can't recall
20:17:49 zaneb: I believe I'm on it
20:18:05 oh yeah, I think stevebaker also
20:18:17 we don't have enough vulnerabilities to have a process
20:18:21 that's right, i am already on it :)
20:18:25 stevebaker: \o/ ;)
20:18:33 cool
20:18:40 so I think it's just a case of one of us being the first contact point
20:19:12 jasond` said he would talk to randall about it
20:19:15 API working group
20:19:28 that might be a bit more work
20:19:46 lots of meetings
20:20:14 skraynev_?
20:20:36 ryansb, ?
20:20:37 you're not selling it
20:20:45 lol
20:20:52 Status! Influence! Parties!
20:20:52 you have to say "lots of meetings WITH BEER"
20:20:58 stevebaker: thumbs up!
20:21:01 ryansb, doh
20:21:04 but I'd be interested, thought I'm not core
20:21:11 *though
20:21:15 i don't think that is a problem
20:21:16 I assume that is not a requirement
20:21:38 ryansb, you are it
20:21:38 yeah, I don't see that as a big obstacle
20:21:43 The liaison should be a core reviewer for the project, but does not need to be the PTL
20:21:46 They say
20:21:46 I just wasn't sure how strong the "should" was in their description
20:22:07 ryansb is working hard at core - right?
20:22:13 ;)
20:22:22 :)
20:22:42 I'll put my name in, and see if they kick me out.
20:22:51 cool
20:22:52 ryansb has a direct line to like half the core reviewers, so he shouldn't have any trouble getting stuff landed ;)
20:23:33 ok, if anyone wants to do release management, that is now also up for grabs
20:23:47 asalkeld: nobody wants to do release management
20:23:57 but i assumed that ^
20:24:01 :)
20:24:06 That's the really "fun" part of being a PTL ;)
20:24:18 ok, let's move on
20:24:26 #topic project meetings
20:24:55 so regarding the CPLs - you are now encouraged to attend the project meeting too
20:25:08 CPLs == cross-project liaisons
20:25:19 https://wiki.openstack.org/wiki/Meetings/ProjectMeeting
20:25:20 asalkeld: ah, I didn't know that
20:25:25 that's new
20:25:37 optional, but encouraged
20:25:37 I often lurk anyway
20:25:59 moving on
20:26:05 #topic Mid cycle meetup planning
20:26:50 so the poll is very even
20:26:57 #link https://doodle.com/b9m4bf8hvm3mna97#table
20:27:13 maybe hangouts then?
20:27:14 which is just saying no one really wants to travel
20:27:22 inc0, yip
20:27:35 i propose we don't travel to a midcycle
20:27:36 we might have a hard time with the tz difference then
20:28:05 but we can schedule topic sessions at hours convenient for the interested people
20:28:05 inc0, i think the trick is to have topical discussions
20:28:08 I propose we don't use hangouts, because I don't have Google+
20:28:19 so we don't need everyone
20:28:31 russellb has some sort of WebRTC thing we could use I believe
20:28:33 zaneb: :)
20:28:40 zaneb, i am not stressed about what we use
20:28:47 we can set up a mumble server ;)
20:28:51 not really the point, as long as it works
20:29:15 are people ok with trying this?
20:29:22 or whatever works - we all work in huge corps... I think we can manage to get one telco bridge or external server ;)
20:29:47 we should give it a decent attempt
20:29:54 yip
20:29:57 +1
20:30:03 +1
20:30:06 +1
20:30:06 Yeah, a major plus of a remote midcycle is we can record
20:30:12 +1
20:30:14 +2
20:30:23 why not bluejeans?
20:30:25 +1
20:30:27 +1, travel was looking more or less impossible to get approved
20:30:30 #action asalkeld to announce the "remote" midcycle
20:30:32 +1
20:30:34 +1
20:30:35 stevebaker: Yeah I was thinking that might work
20:30:50 next we can bikeshed on the software to use
20:31:00 and the timezone to sync to
20:31:15 we could start getting ideas for sessions too
20:31:18 going nocturnal would be amusing
20:31:33 inc0, yeah
20:31:37 we should write oslo.webrtc ;)
20:31:49 i think if everyone attends it won't work
20:31:55 meetup as a service?
20:32:06 ryansb: as an additional plugin for Heat :)
20:32:15 we need to have topics that appeal to a focused group
20:32:26 etherpad?
20:32:41 inc0, sure
20:32:59 so this won't be until Feb?
20:33:10 stevebaker, i guess
20:33:37 but it's cheap, so we could do it whenever
20:33:52 i think we just need a process for setting up a discussion
20:34:07 we could do it more than once if required
20:34:15 sure
20:34:36 ok, let's move on - we had some other topics
20:35:01 #topic rpc versioning
20:35:06 shardy, ^
20:35:22 i hope the topic is what you wanted
20:35:28 So my question is, what needs to happen so we actually solve the versioning-on-upgrade problem?
20:35:56 stevebaker mentioned bumping versions on a review recently, then it later occurred to me that it doesn't actually solve anything, or does it?
20:36:28 RPC_API_VERSION looks like it meets a need. Rev it on API changes, then make calls with the requested version. You will end up with an engine that responds with that version
20:36:34 why don't you think it solves the problem?
20:37:16 one problem is: speaking to an older engine
20:37:18 stevebaker: Does something have the retry logic on the client side to keep trying different engines if it hits one with the wrong version?
20:37:18 stevebaker, what if no one answers? timeout?
20:37:19 I'm guessing it depends a lot on the nature of the change you're making
20:38:28 I think horizon has that in a config file
20:38:29 shardy: surely a newer engine will respond first time. Upgrade instructions will need to be: 0. upgrade the database, 1. upgrade some engines, 2. upgrade everything else in any order
20:38:34 can't we just mimic this approach?
20:38:58 stevebaker: that's backwards
20:39:04 (if I understand the problem correctly, which I may not)
20:39:12 stevebaker, we agreed at the summit we need to support TripleO upgrades
20:39:14 engine(s) have to be upgraded before the DB
20:39:32 inc0: afaik horizon doesn't do rpc calls
20:39:39 stevebaker: So in the transition you have a mixture of engine versions - what happens when an upgraded API hits a too-old engine?
20:39:51 upgrade all software, then the db
20:39:56 Is there code in oslo.messaging which round-robins until it gets a match?
20:39:58 shardy: it doesn't, because the calls request the version they need
20:40:08 and also TripleO is doing this awkward thing where they deploy an API and an engine together, so that they both get upgraded at the same time :/
20:40:16 shardy, that's on versioned objects
20:40:19 stevebaker: But each engine supports exactly one version, doesn't it?
20:40:24 how about getting a topic exchange on amqp
20:40:27 s/on/in
20:40:32 and having 2 queues for each version
20:40:35 for the transition?
20:40:56 and when calling rpc, we'll specify which queue it should land on?
20:40:56 shardy: a client requesting 1.0 can be serviced by a 1.1 engine
20:41:22 shardy: the vast majority of our API changes are backwards-compatible minor revs
20:41:23 stevebaker, so 1.x must be backward-compat?
20:41:25 inc0, that sounds messy for the operator
20:41:41 zaneb: upgrading in pairs should be fine
20:41:45 asalkeld, it might, but we can make that semi-automatic
20:42:16 stevebaker: Ok, so if all engine upgrades are backwards compatible, what does the version buy us over documenting that the engine must be upgraded first?
20:42:18 stevebaker: well, it's a PITA because it means you have to maintain both backwards and upwards compat
20:43:09 zaneb, +1 - if we made one mistake, we'd need to live with it forever and ever
20:43:22 shardy: because engine->engine calls need a new engine. engine 1 does a call requesting 1.2, a 1.2 engine responds.
20:43:30 shardy, can we go look and see what other projects do?
20:43:43 seems like this should be a solved problem
20:44:03 I think we are discussing the solved problem
20:44:24 asalkeld: Sure, and if it's solved then great
20:44:51 Is it? The Nova guys were discussing it at the summit too.
20:45:04 we just need to get better at revving RPC_API_VERSION and calling the version required, so keep an eye out for RPC API changes in reviews
20:45:04 I was confused because it's not clear how this should work looking at the code, and the discussion of versioned objects sounded like that was required to solve this properly
20:45:09 it's really hard to discuss this in general terms, outside the context of a specific change
20:45:18 because every change is potentially different
20:45:22 mdulko, yeah - theirs is based on versioned objects
20:45:22 mdulko, did they throw out any ideas?
20:45:39 zaneb, +1
20:45:42 inc0, asalkeld already replied
20:45:54 zaneb: Well, what triggered my interest is the recently merged decouple-nested RPC additions
20:46:01 mdulko: they are passing complex objects over RPC, so they need versioned objects. Currently we have only method arguments with simple types
20:46:15 zaneb: stevebaker asked for a version bump, but unfortunately the patches got merged anyway
20:46:40 shardy: can you submit a rev change as a follow-up?
20:46:41 I'm trying to figure out how big a problem that is, and how we handle RPC changes at review time in future
20:46:53 stevebaker: Sure, if that fixes a problem
20:47:01 shardy: so that is adding extra parameters, without which it presumably breaks something, so it needs a version bump
20:47:02 shardy: we can point others at this change - "-1, do this"
20:47:15 shardy, we probably need a javelin test
20:47:19 maybe we should bump the rpc version not with every little change, but "cumulatively", say on milestones?
20:47:41 pas-ha, but people are doing CD
20:47:41 shardy: or perhaps it just returns an error? in which case maybe it's OK (though not ideal, obvs)
20:47:43 pas-ha: there is lots of heat CD
20:47:51 asalkeld: Yes, if anyone wants to take on getting grenade/javelin working that would be awesome
20:48:08 asalkeld: I started but gave up on it as it's hard to get working locally
20:48:18 https://review.openstack.org/#/c/86978/
20:48:19 ok, that sucks
20:49:14 should we move on?
20:49:22 +1
20:49:42 stevebaker, what was your topic?
20:49:49 RPC exceptions
20:49:54 #topic rpc exceptions
20:50:38 so exceptions on api->engine calls are handled by the fault middleware, but we need some thought on what to do about engine->engine exceptions
20:51:27 stevebaker: could you give an example of when that may happen?
20:51:29 catch them?
20:51:39 ok, so serialising them
20:51:47 if a NotFound exception is raised, the result on the client side has the type heat.common.exception_Remote.NotFound_Remote, which can't be specified in a static except
20:52:55 stevebaker: so your point is that the RPC API doesn't behave enough like the REST API for the purposes of the client plugins?
20:52:56 so we need something in the rpc client to map it like the middleware does
20:53:19 which means we either need to change all our error paths to check for these made-up remote types, or do some wrapping in the RPC client to raise the real exceptions again. I'd like to discuss which is best
20:53:44 couldn't we move the fault middleware into the rpc client?
20:53:54 how many error paths are we talking about?
20:53:58 to have this stuff in one place
20:54:10 asalkeld: that is just formatting the error for the user, we need something quite different in the engine
20:54:24 what are we doing engine->engine RPC for other than shardy's nested stack changes?
20:54:39 zaneb, not much now
20:54:47 soon more
20:54:54 zaneb: my software-deployment changes. Not much now, but eventually all the things
20:55:07 zaneb: stevebaker's moved softwareconfig to that model, and in future, won't convergence work in a similar way?
20:55:08 zaneb, stop, signal
20:55:23 shardy: convergence will never get a reply at all
20:55:33 zaneb: anything which catches a HeatException or a subclass thereof
20:55:34 it's completely async
20:55:59 stevebaker: isn't that localised to a client plugin though? couldn't you do the translation there?
20:56:51 4 mins
20:57:53 zaneb: you mean heat.rpc.client.EngineClient? yes. I think it would need every method call to check for every known thrown exception to re-raise it though
20:58:21 anyway, I've raised the issue, we don't need to solve it now
20:58:39 #topic open discussion
20:58:45 (for 2 mins)
20:58:59 not very open-ended :-O
20:59:15 stevebaker: no, I mean a client plugin that talks to the RPC client instead of python-heatclient
20:59:31 open, but with a low ttl
21:00:04 zaneb: there is no such thing. resources are making rpc calls directly
21:00:28 #endmeeting
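
Below is a minimal sketch of the RPC_API_VERSION pattern stevebaker describes in the rpc versioning topic above (rev the version on API changes, make each call with the version it needs, cap the client during rolling upgrades), written against oslo.messaging's client-side versioning calls. The topic name, the show_stack method, the invented "1.2 adds resolve_outputs" change and the version_cap plumbing are illustrative assumptions, not Heat's actual RPC API.

```python
# Sketch only: rev RPC_API_VERSION on API changes and make calls with the
# version you need, falling back when a cap says older engines are present.
# Topic, method and the "1.2 adds resolve_outputs" rev are invented for
# illustration.  On Juno-era code the import is "from oslo import messaging".
import oslo_messaging as messaging


class EngineClient(object):
    """Client-side RPC API for a hypothetical engine service."""

    # Bump on every RPC API change.  Minor revs must stay backwards
    # compatible, so a call pinned to 1.0 can still be serviced by a
    # 1.1 or 1.2 engine during a rolling upgrade.
    BASE_RPC_API_VERSION = '1.0'

    def __init__(self, transport, version_cap=None):
        target = messaging.Target(topic='engine',
                                  version=self.BASE_RPC_API_VERSION)
        # version_cap would come from operator config during an upgrade,
        # limiting this client to what the oldest running engine supports.
        self._client = messaging.RPCClient(transport, target,
                                           version_cap=version_cap)

    def show_stack(self, ctxt, stack_identity, resolve_outputs=True):
        if self._client.can_send_version('1.2'):
            # 1.2 added resolve_outputs (invented backwards-compatible rev).
            cctxt = self._client.prepare(version='1.2')
            return cctxt.call(ctxt, 'show_stack',
                              stack_identity=stack_identity,
                              resolve_outputs=resolve_outputs)
        # Fall back to the 1.0 signature for not-yet-upgraded engines.
        cctxt = self._client.prepare(version='1.0')
        return cctxt.call(ctxt, 'show_stack', stack_identity=stack_identity)
```

On the server side each engine endpoint advertises its own messaging.Target version, and as I understand the oslo.messaging dispatcher, a call prepared at 1.2 that lands on a 1.0-only engine fails with an UnsupportedVersion error rather than being silently mishandled - which is why the "upgrade the engines before anything that uses new revs" ordering discussed above still matters.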
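
For the rpc exceptions topic, here is one possible shape for the "do some wrapping in the RPC client to raise the real exceptions again" option stevebaker raises, as opposed to teaching every error path about the made-up <Name>_Remote types. Everything in the sketch is an illustrative assumption - the local-class registry, the suffix handling and the simplified re-raise - and not something Heat has; in particular Heat's HeatException subclasses take format kwargs, so a real implementation would need to rebuild them more carefully.

```python
# Sketch only: centralise translation of remote exception types (Foo_Remote)
# back into the local classes callers expect to catch.  The registry and the
# plain-message re-raise are simplifying assumptions for illustration.
import functools


class NotFound(Exception):
    """Stand-in for a local exception (think heat.common.exception.NotFound)."""


# Map bare class names back to the local classes callers want to catch.
LOCAL_EXCEPTIONS = {cls.__name__: cls for cls in (NotFound,)}

REMOTE_SUFFIX = '_Remote'


def translate_remote_exceptions(func):
    """Decorator for RPC client methods: re-raise Foo_Remote as Foo."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            name = type(exc).__name__
            if name.endswith(REMOTE_SUFFIX):
                local_cls = LOCAL_EXCEPTIONS.get(name[:-len(REMOTE_SUFFIX)])
                if local_cls is not None:
                    # Keep the remote message; the remote traceback is
                    # lost either way.
                    raise local_cls(str(exc))
            raise
    return wrapper
```

Whether a wrapper like this would live in heat.rpc.client.EngineClient itself or in a client plugin that talks to the RPC client, as zaneb suggests at the end, is exactly the question the meeting left open.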