14:00:17 #startmeeting Glance
14:00:17 Meeting started Thu Jun 18 14:00:17 2015 UTC and is due to finish in 60 minutes. The chair is nikhil_k. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 o/
14:00:20 The meeting name has been set to 'glance'
14:00:26 o/
14:00:28 #topic agenda and roll call
14:00:32 o/
14:00:36 o/
14:00:37 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:47 o/
14:01:38 We have a small agenda today.
14:02:11 Maybe we can discuss a few things as part of open discussion
14:02:12 o/
14:02:19 Let's get started
14:02:27 #topic Updates
14:03:15 Thanks to harlowja, we've a patch up for the dashboard. I think kragniz also introduced that concept in the project last cycle.
14:03:24 #link https://review.openstack.org/#/c/191923/
14:03:28 Thanks guys!
14:04:06 Please take a look at the versioning changes:
14:04:13 o/
14:04:15 #link https://review.openstack.org/#/c/192401/
14:04:31 That's all for this week.
14:04:47 o/ sorry for being late, folks
14:04:52 Unless someone has more updates/announcements we can move on
14:05:30 nikhil_k: that version thing
14:05:38 where did that number 11 come from?
14:06:12 jokke_: more info in the ML
14:06:32 versioning is being done based on the number of cycles the project has been incubated
14:06:52 but I doubt that will hold in the future when backwards-compatible changes are requested
14:07:03 it's supposed to be semver style
14:07:08 jokke_: if you read any messages on the ML read Doug's and lifeless' because of how much they're working on fixing things this cycle
14:07:26 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067082.html
14:07:35 nikhil_k: right. We're changing the versioning scheme (again) but not the use of Semantic Versioning
14:08:01 found it
14:08:12 sigmavirus24: sorry, do you mean we are not using Semantic Versioning now?
14:08:26 *We're not changing the use of Semantic Versioning
14:08:44 ok
14:08:49 Thanks for the clarification
14:09:16 I need to read a few more messages from this morning, so my updates may be a bit stale.
14:09:24 Please refer to the thread for the latest
14:09:31 yeah so that will not work out before they remove the pbr functionality merged recently that demands the version bump to the next (bigger) version number
14:09:47 11 is one louder
14:09:48 #info thread sub: "[openstack-dev] [all] [stable] No longer doing stable point releases"
14:10:48 mclaren: nice one
14:10:57 oops, did I say one
14:11:26 #topic python-glanceclient + x-openstack-request-id
14:11:35 hi all, this is related to returning the request-id to the caller
14:11:45 #link https://review.openstack.org/#/c/156508/8/specs/log-request-id-mappings.rst
14:12:01 Blueprint: https://blueprints.launchpad.net/python-glanceclient/+spec/expose-get-x-openstack-request-id
14:12:24 we are discussing this with cross-projects as well as the LW group
14:12:39 right now, X-openstack-request-id is not returned by the client to the caller.
14:13:08 Doug has suggested using thread-local storage for storing the last request id as it will not break backward compatibility
14:13:08 abhishekk: I'm fully behind the aim of this, but I'm a bit concerned about merging any of the functionality before we get to an agreement on how we are going to do this request ID thing ...
14:13:50 jokke_: i understand, I just want the opinion of the community members on this approach
14:14:19 abhishekk: were there any other alternative methods for returning it proposed?
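The thread-local approach suggested above could be sketched roughly as follows. This is a hypothetical illustration, not actual python-glanceclient code; the names `_local`, `record_request_id`, and `get_last_request_id` are invented for the sketch:

```python
import threading

# Each thread gets its own slot, so concurrent callers don't
# clobber each other's last-seen request id.
_local = threading.local()

def record_request_id(response_headers):
    """Called internally by the client after every HTTP response."""
    _local.request_id = response_headers.get('x-openstack-request-id')

def get_last_request_id():
    """New public accessor; returns None before any request is made."""
    return getattr(_local, 'request_id', None)
```

Because existing calls keep their current return types and the request id is exposed through a new accessor on the side, callers that ignore request ids are unaffected, which is why this avoids the backward-compatibility problem.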
14:14:20 IMO, it is very useful for the end-user to have the request id
14:15:04 sigmavirus24: the cross-project spec has 3 solutions and now talk is going on about using osprofiler
14:15:22 abhishekk: I'm talking about the clients
14:15:53 sigmavirus24: returning a tuple (resp, body)
14:16:57 but it will not support backward compatibility
14:16:58 I guess #link https://blueprints.launchpad.net/python-glanceclient/+spec/expose-get-x-openstack-request-id has the design elaborated
14:17:14 # https://review.openstack.org/#/c/156508/1/specs/return-req-id-from-cinder-client.rst
14:17:15 abhishekk: what all is it breaking?
14:17:33 abhishekk: your example there (BP) says cinderclient
14:18:02 I meant, what's broken by the client change
14:18:03 nikhil_k: thanks, i will modify it
14:18:08 abhishekk: it looks like a neat approach, but I'll need to look into it a bit before having an opinion ;)
14:18:31 nikhil_k: the current design in the blueprint will not break anything
14:18:39 jokke_: yes please
14:19:10 ok
14:19:28 nikhil_k: returning a tuple (resp_header, body) will change the return type
14:19:46 nikhil_k: My concern is not really that we would be breaking anything, but that we would merge code that will never be used and be rewritten because the approach changes before the rest is done ;)
14:19:51 what does the sdk folks say?
14:19:55 do*
14:20:18 abhishekk: ^
14:20:23 nikhil_k: the sdk folks are ready to add this and they said to approach individual teams
14:20:26 nikhil_k: we're not that far yet ...
as said the x-project spec is still under bikeshedding
14:20:58 http://eavesdrop.openstack.org/meetings/python_openstacksdk/2015/python_openstacksdk.2015-06-16-19.00.log.html
14:21:01 jokke_: hah, before the rest is done :) Nice one
14:21:04 nikhil_k: ^^
14:21:07 * sigmavirus24 will be joining the bikeshedding efforts
14:21:18 you're welcome and I'm sorry
14:21:56 It will be great if I get early feedback about this from the glance members
14:22:07 sigmavirus24: there is plento of really poor and bad options on the air ;)
14:22:19 i am also talking with the cinder and neutron guys about this
14:22:40 abhishekk: I'll need to toy with the options before I can give you a firm bit of feedback from me personally
14:22:45 plenty even
14:22:48 Others may already have opinions though =P
14:23:40 sigmavirus24: sure :)
14:23:47 I haven't followed the ML thread after that wg suggestion
14:23:58 any link for the latest news on this?
14:24:36 nikhil_k: not yet, currently talk is about using osprofiler
14:25:05 abhishekk: yeah ... I do what I can to block that one ;)
14:25:15 abhishekk: what was the trade-off of appending the project id/tag vs. the tuple?
14:25:36 jokke_: thank you :)
14:25:51 nikhil_k: not sure
14:26:48 ok, I will try to follow up if time permits
14:26:50 abhishekk: I'm against either approach, putting it into the keystone middleware or into osprofiler, because we want to have the req IDs even if the deployment does not want to use either of those
14:27:21 jokke_: right
14:27:28 nikhil_k: thanks
14:27:57 agreed
14:27:58 I think it is useful because if X-openstack-request-id is available to the user, it will be easy for the service provider to trace any issue by looking at the log messages and respond to the user quickly with appropriate reasoning
14:27:58 maybe oslo context?
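To make the backward-compatibility concern with the tuple approach concrete: returning `(resp, body)` changes the return type of every client call, so any existing caller that indexes into the body breaks. A hypothetical before/after, where `get_image` is a stand-in and not a real client method:

```python
# Current style: the call returns the body directly.
def get_image_today(image_id):
    return {'id': image_id, 'status': 'active'}

# Tuple approach: the call now returns (headers, body) so the
# caller can read x-openstack-request-id from the headers.
def get_image_tuple(image_id):
    headers = {'x-openstack-request-id': 'req-42'}
    return headers, {'id': image_id, 'status': 'active'}

image = get_image_today('abc')
print(image['status'])  # existing callers index into the body

result = get_image_tuple('abc')
# result is now a tuple, so result['status'] raises
# TypeError: tuple indices must be integers or slices
```

This is why the thread-local alternative was attractive in the discussion: it leaves every existing return type alone.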
14:28:43 abhishekk: yes, I agree it will be super useful for debugging
14:29:12 mclaren: right, we just need to finalize the approach :)
14:29:12 jokke_: I'm with you
14:29:46 (off-topic: Is osprofiler even still maintained? Last I looked at it, the last commit was from November)
14:29:49 abhishekk: I am doubting the push back because of the raised necessity of including the project name in the x-req-id
14:29:57 mclaren: that has been on the table ;)
14:30:14 mclaren: then we just need to get all projects adopting that
14:30:29 the logs themselves are plenty useful to identify the project
14:30:36 == nikhil_k
14:30:38 anyway
14:30:43 this is a cross-project discussion, no?
14:30:46 @all: please provide your suggestions
14:30:53 I think so, sigmavirus24
14:31:08 I can ask Boris about osprofiler
14:31:14 yeah ... this is all still open on the X-Proj level
14:31:20 sigmavirus24: yes, but I need to approach individual teams as well
14:31:31 so I'd say we do not merge anything related before that level has been sorted
14:31:38 == jokke_
14:31:50 Cool, abhishekk any chance you can raise this for horizontal team discussion in Tuesday's CPL meeting?
14:31:54 but as abhishekk said, please chip in
14:32:07 mfedosin: https://github.com/stackforge/osprofiler (last commit to master was in October)
14:32:26 abhishekk: we see a good turnout and individual team feedback there. just make sure to ping the liaisons before Tuesday
14:32:51 nikhil_k: yes, because of the timezone difference tpatil will join the cpl meeting
14:32:56 sigmavirus24: I thought it had moved into the /openstack namespace, but I might be imagining that
14:33:04 kragniz: wasn't there
14:33:07 It's sad if osprofiler is dead
14:33:16 sure.
a heads up a day before to do some prep work will help with more feedback
14:33:24 https://review.openstack.org/#/c/135080/ is the latest review, which is failing tests hard and is only being reviewed by one person
14:33:25 nikhil_k: sure
14:33:36 which doesn't seem acceptable for a cross-project solution imo
14:34:03 thank you all, please leave your feedback on https://review.openstack.org/#/c/193035/
14:34:10 optional libraries should be avoided as much as possible, imho
14:34:35 sigmavirus24: That's a sad little patch
14:34:48 ok, moving on.
14:34:52 #topic client socket time out param
14:34:57 mfedosin: all yours
14:35:04 okay, thanks
14:35:08 #link https://review.openstack.org/#/c/192394/
14:35:21 as you know eventlet has an option called 'keepalive', which is True by default
14:35:29 http://eventlet.net/doc/modules/wsgi.html
14:35:43 It means that the server will never close connections and will wait for client actions after a response.
14:35:59 That's not good practice, because clients can create many connections and not close them afterwards.
14:36:08 It leads to a situation where many threads are active and consume cpu forever with socket polling.
14:36:23 The fix is very easy: set the 'keepalive' parameter in the glance config to False and then pass it to the server.
14:36:45 This param was added here https://review.openstack.org/#/c/130839/
14:36:50 this is socket level keepalive (not http keepalive)?
14:37:28 yes
14:37:39 But setting 'keepalive' to False only partially fixes the problem: performance may be reduced because of frequent reconnections and it won't prevent malefactors from attacking the cloud
14:37:46 client_socket_timeout = 900, is that in sec?
14:37:58 yes
14:38:17 That's plenty of time to perform the needed operations, so it won't affect performance, and it will guard the system from attackers.
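The fix being discussed amounts to a small config change. A hypothetical glance-api.conf fragment, assuming the option name matches the `client_socket_timeout` used in the cinder and keystone patches referenced in the discussion (check the merged glance patch for the exact option name and default):

```ini
[DEFAULT]
# Leave eventlet's connection reuse in place, but drop client
# connections that sit idle for longer than 900 seconds. This keeps
# performance (no forced reconnect per request) while bounding how
# long an idle or malicious client can pin a green thread.
client_socket_timeout = 900
```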
The same issues occurred in Keystone and Cinder and they just added this timeout too
14:38:20 nikhil_k: similar is used in cinder as well
14:38:31 so, this means the kernel timeout would be overloaded?
14:38:39 https://review.openstack.org/#/c/177670/ https://review.openstack.org/#/c/119365/
14:38:45 mfedosin: I've had a patch up since about 8 months ago to do the same thing?
14:39:27 https://review.openstack.org/#/c/119132/
14:39:54 that's where cinder/nova's patches originated from I think
14:40:07 oh... why hasn't it merged yet? :)
14:40:44 yeah, I made this patch by analogy with yours in cinder
14:40:47 good question :-)
14:40:49 I see plenty of reviews
14:40:50 hah
14:41:00 * sigmavirus24 updates review
14:41:02 sorry mclaren
14:41:12 * sigmavirus24 doesn't recall if he was a core reviewer by Apr 21
14:41:44 np. (My review stats are pretty terrible.)
14:41:45 mclaren: right
14:41:45 mclaren: no wonder the one from abhishekk looks like a copy of yours without the documentation
14:42:07 jokke_: yes :)
14:42:21 documentation? who needs documentation!
14:42:30 =P
14:42:49 we had this problem during the tests in our latest release. So I wrote a patch there to fix timeouts and it works
14:43:05 wow, nice to know it's actually useful!
14:43:06 * jokke_ kicks sigmavirus24 between the big toes
14:43:11 so I think we should merge Stuart's commit asap and then backport it to kilo
14:43:23 == mfedosin
14:43:30 mfedosin: ++
14:43:35 although jokke_ is stable-maint and can tell us if it's appropriate
14:43:41 so I can abandon mine then
14:43:48 shut up and take my money :P
14:43:53 s/documentation/backports/g
14:44:00 * sigmavirus24 takes jokke_'s money
14:44:01 * sigmavirus24 doesn't shut up
14:44:37 mclaren, what about the DocImpact flag there?
14:45:11 mfedosin: sure.
I probably need a rebase anyway
14:45:36 #topic Open Discussion
14:46:25 Hi everyone, I am trying to work on the tasks scrubber, so I need views from everyone
14:47:47 so first of all, I would like to know, what is the exact difference between the glance scrubber and the tasks scrubber
14:47:59 sigmavirus24: the backport is fine as long as we open a bug for that (which is warranted due to the behavior mfedosin described)
14:48:01 please shed some light on this
14:48:21 GB21: the glance scrubber has to deal with backend storage, too
14:48:32 GB21: the tasks scrubber cleans the DB only
14:49:12 the images scrubber looks at the status and the tasks scrubber will look at the expires_at field
14:49:32 GB21: what nikhil_k said, different scrubbing criteria
14:50:04 rosmaita and nikhil_k, ok
14:50:07 the glance scrubber removes delayed-deleted images for example (from memory, it's a while since I've looked at it)
14:50:17 rosmaita: also, maybe we need it to be a separate job to spread the scrubbing load differently for those two?
14:50:37 nikhil_k: i agree, best to have it be a separate job
14:50:38 mclaren: right
14:50:48 status=pending_delete
14:51:05 please explain how it has to be a separate job
14:51:20 create a separate process for it
14:51:46 keep it cron-like though
14:52:44 btw, guys we kinda need some of this for glance v1, v2 support in nova #link https://review.openstack.org/#/c/192926/
14:53:23 nicely written spec, thanks stevelle
14:53:42 thanks nikhil_k and all the others who talked with me about it
14:54:19 stevelle: you now own the v2 to v3 migration also ;-)
14:54:40 Uh, oh. Voluntold
14:54:55 heh
14:55:14 mclaren: good voluntelling
14:55:22 mclaren: you're in charge of voluntelling now
14:56:06 I've been voluntold to? Very meta :-)
14:56:20 ;)
14:59:17 if people aren't aware, the releases of libs and clients may become centralized soon(-ish)
15:00:09 Looks like we're out of time
15:00:16 Thanks all!
15:00:19 #endmeeting