14:00:17 <nikhil_k> #startmeeting Glance
14:00:17 <openstack> Meeting started Thu Jun 18 14:00:17 2015 UTC and is due to finish in 60 minutes.  The chair is nikhil_k. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 <mfedosin> o/
14:00:20 <openstack> The meeting name has been set to 'glance'
14:00:26 <bpoulos> o/
14:00:28 <nikhil_k> #topic agenda and roll call
14:00:32 <abhishekk> o/
14:00:36 <jokke_> o/
14:00:37 <nikhil_k> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:47 <cpallares> o/
14:01:38 <nikhil_k> We have a small agenda today.
14:02:11 <nikhil_k> Maybe we can discuss a few things as part of open discussion
14:02:12 <kragniz> o/
14:02:19 <nikhil_k> Let's get started
14:02:27 <nikhil_k> #topic Updates
14:03:15 <nikhil_k> Thanks to harlowja, we have a patch up for the dashboard. I think kragniz also introduced that concept in the project last cycle.
14:03:24 <nikhil_k> #link https://review.openstack.org/#/c/191923/
14:03:28 <nikhil_k> Thanks guys!
14:04:06 <nikhil_k> Please take a look at the versioning changes:
14:04:13 <stevelle> o/
14:04:15 <nikhil_k> #link https://review.openstack.org/#/c/192401/
14:04:31 <nikhil_k> That's all for this week.
14:04:47 <ativelkov> o/ sorry for being late, folks
14:04:52 <nikhil_k> Unless someone has more updates/announcements we can move on
14:05:30 <jokke_> nikhil_k: about that version thing
14:05:38 <jokke_> where did that number 11 come from?
14:06:12 <nikhil_k> jokke_: more info in the ML
14:06:32 <nikhil_k> versioning is being done based on the number of cycles the project has been around since incubation
14:06:52 <nikhil_k> but I doubt that will hold in the future, when back-compat changes are requested
14:07:03 <nikhil_k> it's supposed to be semver style
14:07:08 <sigmavirus24> jokke_: if you read any messages on the ML, read Doug's and lifeless's, because of how much they're working on fixing things this cycle
14:07:26 <nikhil_k> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067082.html
14:07:35 <sigmavirus24> nikhil_k: right. We're changing the versioning scheme (again) but not the use of Semantic Versioning
14:08:01 <jokke_> found it
14:08:12 <nikhil_k> sigmavirus24: sorry, do you mean we are not using Semantic Versioning now?
14:08:26 <sigmavirus24> *We're not changing the use of Semantic Versioning
14:08:44 <nikhil_k> ok
14:08:49 <nikhil_k> Thanks for clarification
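For anyone unfamiliar with the scheme: semver-style versions are MAJOR.MINOR.PATCH, and per the ML thread linked above, the Liberty major number reflects the project's cycle count (hence 11 for Glance). A purely illustrative sketch, not a pbr API:

```python
# Purely illustrative sketch of semver-style numbering (not a pbr API).
# Per the ML thread, the Liberty major version reflects the number of
# cycles the project has existed -- hence 11 for Glance.
major, minor, patch = 11, 0, 0

version = '{}.{}.{}'.format(major, minor, patch)
assert version == '11.0.0'

# Semantic Versioning bump rules:
#   breaking change               -> 12.0.0 (major)
#   backwards-compatible feature  -> 11.1.0 (minor)
#   bug fix only                  -> 11.0.1 (patch)
```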
14:09:16 <nikhil_k> I need to read a few more messages from this morning, so my updates may be a bit stale.
14:09:24 <nikhil_k> Please refer to the thread for the latest
14:09:31 <jokke_> yeah, so that will not work out before they remove the recently merged pbr functionality that demands the version bump to the next (bigger) version number
14:09:47 <mclaren> 11 is one louder
14:09:48 <nikhil_k> #info thread sub: "[openstack-dev] [all] [stable] No longer doing stable point releases"
14:10:48 <nikhil_k> mclaren: nice one
14:10:57 <nikhil_k> oops, did I say one
14:11:26 <nikhil_k> #topic python-glanceclient + x-openstack-request-id
14:11:35 <abhishekk> hi all, this is related to returning request-id to the caller
14:11:45 <abhishekk> #link https://review.openstack.org/#/c/156508/8/specs/log-request-id-mappings.rst
14:12:01 <abhishekk> Blueprint: https://blueprints.launchpad.net/python-glanceclient/+spec/expose-get-x-openstack-request-id
14:12:24 <abhishekk> we are discussing this with the cross-project team as well as the LW group
14:12:39 <abhishekk> right now, X-openstack-request-id is not returned by the client to the caller.
14:13:08 <abhishekk> Doug has suggested using thread-local storage for storing the last request id, as it will not break backward compatibility
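As a rough sketch of that thread-local idea, using requests as a stand-in transport — get_last_request_id and the header handling here are illustrative, not the actual python-glanceclient API:

```python
# Rough sketch of the thread-local-storage idea; `request` and
# `get_last_request_id` are hypothetical helpers, not the real
# python-glanceclient API.
import threading

import requests

_local = threading.local()

def request(method, url, **kwargs):
    """Issue a request and stash its request id in thread-local storage."""
    resp = requests.request(method, url, **kwargs)
    # Storing the id out-of-band leaves the return type untouched,
    # which is what preserves backward compatibility for callers.
    _local.request_id = resp.headers.get('x-openstack-request-id')
    return resp

def get_last_request_id():
    """Return the request id of the most recent call made on this thread."""
    return getattr(_local, 'request_id', None)
```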
14:13:08 <jokke_> abhishekk: I'm fully behind the aim of this, but I'm a bit concerned about merging any of the functionality before we reach agreement on how we are going to do this request ID thing ...
14:13:50 <abhishekk> jokke_: I understand, I just want the opinion of the community members on this approach
14:14:19 <sigmavirus24> abhishekk: were there any other alternative methods for returning it proposed?
14:14:20 <abhishekk> IMO, it is very useful for the end user to have the request id
14:15:04 <abhishekk> sigmavirus24: the cross-project spec has 3 solutions, and now there's talk of using osprofiler
14:15:22 <sigmavirus24> abhishekk: I'm talking about the clients
14:15:53 <abhishekk> sigmavirus24: returning tuple (resp, body)
14:16:57 <abhishekk> but it is not backwards compatible
14:16:58 <nikhil_k> I guess #link https://blueprints.launchpad.net/python-glanceclient/+spec/expose-get-x-openstack-request-id has the design elaborated
14:17:14 <abhishekk> #link https://review.openstack.org/#/c/156508/1/specs/return-req-id-from-cinder-client.rst
14:17:15 <nikhil_k> abhishekk: what all is it breaking?
14:17:33 <nikhil_k> abhishekk: your example there (BP) says cinderclient
14:18:02 <nikhil_k> I meant, what's broken by the client change
14:18:03 <abhishekk> nikhil_k: thanks, i will modify it
14:18:08 <jokke_> abhishekk: it looks like a neat approach, but I'll need to look into it a bit before having an opinion ;)
14:18:31 <abhishekk> nikhil_k: the current design in the blueprint will not break anything
14:18:39 <abhishekk> jokke_: yes please
14:19:10 <nikhil_k> ok
14:19:28 <abhishekk> nikhil_k: returning tuple(resp_header, body) will change the return type
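A toy illustration of why the tuple return type is the backwards-incompatible option — the functions here are hypothetical stand-ins, not the real client:

```python
# Toy illustration (not the real glanceclient API) of why changing the
# return type to a (resp, body) tuple breaks existing callers.

def get_image_old(image_id):
    return {'id': image_id, 'name': 'cirros'}            # body only

def get_image_new(image_id):
    headers = {'x-openstack-request-id': 'req-abc123'}
    return headers, {'id': image_id, 'name': 'cirros'}   # (headers, body)

image = get_image_old('1')
print(image['name'])                  # works today

image = get_image_new('1')
try:
    print(image['name'])              # old calling code breaks
except TypeError:
    print('callers must be rewritten to unpack (resp, body)')
```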
14:19:46 <jokke_> nikhil_k: My concern is not really that we would be breaking anything, but that we would merge code that will never be used and gets rewritten because the approach changes before the rest is done ;)
14:19:51 <nikhil_k> what do the sdk folks say?
14:20:18 <nikhil_k> abhishekk: ^
14:20:23 <abhishekk> nikhil_k: the sdk folks are ready to add this, and they said to approach the individual teams
14:20:26 <jokke_> nikhil_k: we're not that far yet  ... as said the x-project spec is still under bikeshedding
14:20:58 <abhishekk> http://eavesdrop.openstack.org/meetings/python_openstacksdk/2015/python_openstacksdk.2015-06-16-19.00.log.html
14:21:01 <nikhil_k> jokke_: hah, before the rest is done :) Nice one
14:21:04 <abhishekk> nikhil_k: ^^
14:21:07 * sigmavirus24 will be joining the bikeshedding efforts
14:21:18 <sigmavirus24> you're welcome and I'm sorry
14:21:56 <abhishekk> It will be great if I get early feedback about this from the glance members
14:22:07 <jokke_> sigmavirus24: there are plenty of really poor and bad options in the air ;)
14:22:19 <abhishekk> I am also talking with the cinder and neutron guys about this
14:22:40 <sigmavirus24> abhishekk: I'll need to toy with the options before I can give you a firm bit of feedback from me personally
14:22:48 <sigmavirus24> Others may already have opinions though =P
14:23:40 <abhishekk> sigmavirus24: sure :)
14:23:47 <nikhil_k> I haven't followed the ML thread after that wg suggestion
14:23:58 <nikhil_k> any link for the latest news on this?
14:24:36 <abhishekk> nikhil_k: not yet; currently the talk is about using osprofiler
14:25:05 <jokke_> abhishekk: yeah ... I do what I can to block that one ;)
14:25:15 <nikhil_k> abhishekk: what was the trade-off of appending the project id/tag vs. the tuple?
14:25:36 <abhishekk> jokke_: thank you :)
14:25:51 <abhishekk> nikhil_k: not sure
14:26:48 <nikhil_k> ok, I will try to follow up if time permits
14:26:50 <jokke_> abhishekk: I'm against both approaches, putting it into the keystone middleware or into osprofiler, because we still want to have the request IDs even if the deployment does not want to use either of those
14:27:21 <abhishekk> jokke_: right
14:27:28 <abhishekk> nikhil_k: thanks
14:27:57 <nikhil_k> agreed
14:27:58 <abhishekk> I think it is useful because if X-openstack-request-id is available to the user, it will be easy for the service provider to trace any issue by looking at the log messages and respond to the user quickly with appropriate reasoning
14:27:58 <mclaren> maybe oslo context?
14:28:43 <mclaren> abhishekk: yes, I agree it will be super useful for debugging
14:29:12 <abhishekk> mclaren: right, we just need to finalize the approach :)
14:29:12 <sigmavirus24> jokke_: I'm with you
14:29:46 <sigmavirus24> (off-topic: Is osprofiler even still maintained? Last I looked at it, the last commit was from November)
14:29:49 <nikhil_k> abhishekk: I suspect the push back is because of the raised necessity of including the project name in the x-req-id
14:29:57 <jokke_> mclaren: that has been on the table  ;)
14:30:14 <jokke_> mclaren: then we just need to get all projects to adopt that
14:30:29 <nikhil_k> the logs themselves are plenty useful for identifying the project
14:30:36 <sigmavirus24> == nikhil_k
14:30:38 <sigmavirus24> anyway
14:30:43 <sigmavirus24> this is cross-project discussion, no?
14:30:46 <abhishekk> @all: please provide your suggestions
14:30:53 <nikhil_k> I think so, sigmavirus24
14:31:08 <mfedosin> I can ask Boris about osprofiler
14:31:14 <jokke_> yeah ... this is all still open at the cross-project level
14:31:20 <abhishekk> sigmavirus24: yes, but I need to approach individual teams as well
14:31:31 <jokke_> so I'd say we do not merge anything related before that level has been  sorted
14:31:38 <sigmavirus24> == jokke_
14:31:50 <nikhil_k> Cool, abhishekk: any chance you can raise this at Tuesday's CPL meeting for horizontal team discussion?
14:31:54 <jokke_> but as abhishekk said, please chip in
14:32:07 <sigmavirus24> mfedosin: https://github.com/stackforge/osprofiler (last commit to master was in October)
14:32:26 <nikhil_k> abhishekk: we see a good turnout and individual team feedback there; just make sure to ping the liaisons before Tuesday
14:32:51 <abhishekk> nikhil_k: yes, because of the timezone difference tpatil will join the CPL meeting
14:32:56 <kragniz> sigmavirus24: I thought it had moved into the /openstack namespace, but I might be imagining that
14:33:04 <sigmavirus24> kragniz: wasn't there
14:33:07 <mfedosin> It's sad if osprofiler is dead
14:33:16 <nikhil_k> sure. A heads up a day before, to allow some prep work, will help get more feedback
14:33:24 <sigmavirus24> https://review.openstack.org/#/c/135080/ is the latest review, which is failing tests hard and is only being reviewed by one person
14:33:25 <abhishekk> nikhil_k: sure
14:33:36 <sigmavirus24> which doesn't seem acceptable for a cross-project solution imo
14:34:03 <abhishekk> thank you all, please leave your feedback on https://review.openstack.org/#/c/193035/
14:34:10 <nikhil_k> optional libraries should be avoided as much as possible, imho
14:34:35 <cpallares> sigmavirus24: That's a sad little patch
14:34:48 <nikhil_k> ok, moving on.
14:34:52 <nikhil_k> #topic client socket time out param
14:34:57 <nikhil_k> mfedosin: all yours
14:35:04 <mfedosin> okay, thanks
14:35:08 <nikhil_k> #link https://review.openstack.org/#/c/192394/
14:35:21 <mfedosin> as you know, eventlet has an option called 'keepalive', which is True by default
14:35:29 <mfedosin> http://eventlet.net/doc/modules/wsgi.html
14:35:43 <mfedosin> It means the server will never close connections; it keeps waiting for client actions after the response.
14:35:59 <mfedosin> It's not good practice, because clients can create many connections and not close them afterwards.
14:36:08 <mfedosin> This leads to a situation where many threads stay active and consume CPU forever polling sockets.
14:36:23 <mfedosin> The fix is very easy: set the 'keepalive' parameter in the glance config to False and then pass it to the server.
14:36:45 <mfedosin> This param was added here https://review.openstack.org/#/c/130839/
14:36:50 <mclaren> this is socket-level keepalive (not HTTP keepalive)?
14:37:28 <mfedosin> yes
14:37:39 <mfedosin> But setting 'keepalive' to False only partially fixes the problem: performance may be reduced because of frequent reconnections, and it won't prevent malefactors from attacking the cloud
14:37:46 <nikhil_k> client_socket_timeout = 900, is that in seconds?
14:37:58 <mfedosin> yes
14:38:17 <mfedosin> That's plenty of time to perform the needed operations, so it won't affect performance, and it will guard the system against attackers. The same issue occurred in Keystone and Cinder, and they just added this timeout too
14:38:20 <abhishekk> nikhil_k: a similar one is used in cinder as well
14:38:31 <nikhil_k> so, this means the kernel-level timeout would be overridden?
14:38:39 <mfedosin> https://review.openstack.org/#/c/177670/ https://review.openstack.org/#/c/119365/
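For reference, a minimal sketch of the two eventlet settings under discussion, with a trivial WSGI app standing in for the real Glance API pipeline (eventlet's wsgi.server accepts both keyword arguments in recent releases):

```python
# Minimal sketch of the two eventlet knobs under discussion; a trivial
# WSGI app stands in for the real Glance API pipeline.
import eventlet
from eventlet import wsgi

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']

sock = eventlet.listen(('0.0.0.0', 9292))
wsgi.server(
    sock,
    app,
    keepalive=False,     # close the connection after each response
    socket_timeout=900,  # drop idle client sockets after 900 seconds
)
```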
14:38:45 <mclaren> mfedosin: I've had a patch up for about 8 months to do the same thing
14:39:27 <mclaren> https://review.openstack.org/#/c/119132/
14:39:54 <mclaren> that's where cinder/nova's patches originated from I think
14:40:07 <mfedosin> oh... why hasn't it merged yet? :)
14:40:44 <mfedosin> yeah, I made this patch by analogy with yours in cinder
14:40:47 <mclaren> good question :-)
14:40:49 <nikhil_k> I see plenty of reviews
14:40:50 <sigmavirus24> hah
14:41:00 * sigmavirus24 updates review
14:41:02 <sigmavirus24> sorry mclaren
14:41:12 * sigmavirus24 doesn't recall if he was a core reviewer by Apr 21
14:41:44 <mclaren> np. (My review stats are pretty terrible.)
14:41:45 <abhishekk> mclaren: right
14:41:45 <jokke_> mclaren: no wonder the one from abhishekk looks like a copy of yours without the documentation
14:42:07 <abhishekk> jokke_: yes :)
14:42:21 <sigmavirus24> documentation? who needs documentation!
14:42:30 <sigmavirus24> =P
14:42:49 <mfedosin> we had this problem during tests in our latest release, so I wrote a patch there to fix the timeouts, and it works
14:43:05 <mclaren> wow, nice to know it's actually useful!
14:43:06 * jokke_ kicks sigmavirus24 between the big toes
14:43:11 <mfedosin> so I think we should merge Stuart's commit asap and then backport it to kilo
14:43:23 <sigmavirus24> == mfedosin
14:43:30 <abhishekk> mfedosin: ++
14:43:35 <sigmavirus24> although jokke_ is stable-maint and can tell us if it's appropriate
14:43:41 <mfedosin> so I can abandon mine then
14:43:48 <jokke_> shut up and take my money  :P
14:43:53 <nikhil_k> s/documentation/backports/g
14:44:00 * sigmavirus24 takes jokke_'s money
14:44:01 * sigmavirus24 doesn't shut up
14:44:37 <mfedosin> mclaren, what about the DocImpact flag there?
14:45:11 <mclaren> mfedosin: sure. I probably need a rebase anyway
14:45:36 <nikhil_k> #topic Open Discussion
14:46:25 <GB21> Hi everyone, I am trying to work on the tasks scrubber, so I need views from everyone
14:47:47 <GB21> so first of all, I would like to know: what is the exact difference between the glance scrubber and the tasks scrubber?
14:47:59 <jokke_> sigmavirus24: the backport is fine as long as we open a bug for it (which is warranted due to the behavior mfedosin described)
14:48:01 <GB21> please shed some light on this
14:48:21 <rosmaita> GB21: the glance scrubber has to deal with backend storage, too
14:48:32 <rosmaita> GB21: the tasks scrubber cleans the DB only
14:49:12 <nikhil_k> images scrubber looks at the status and tasks scrubber will look at the expires_at field
14:49:32 <rosmaita> GB21: what nikhil_k said, different scrubbing criteria
14:50:04 <GB21> rosmaita and nikhil_k, ohk
14:50:07 <mclaren> the glance scrubber removes delayed-delete images, for example (from memory, it's a while since I've looked at it)
14:50:17 <nikhil_k> rosmaita: also, maybe we need it to be a separate job, to spread the scrubbing load differently for those two?
14:50:37 <rosmaita> nikhil_k: i agree, best to have it be a separate job
14:50:38 <nikhil_k> mclaren: right
14:50:48 <nikhil_k> status=pending_delete
14:51:05 <GB21> please explain how it would be a separate job
14:51:20 <nikhil_k> create a separate process for it
14:51:46 <nikhil_k> keep it cron like though
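A rough sketch of what a cron-like tasks-scrubber pass might look like, assuming a tasks table with an expires_at column as described above (the schedule loop stands in for a real cron entry or periodic-task framework):

```python
# Rough sketch of a cron-like tasks-scrubber pass; assumes a `tasks` table
# with an `expires_at` column, per the discussion above. Unlike the image
# scrubber, this cleans the DB only -- no backend storage involved.
import datetime
import time

import sqlalchemy as sa

engine = sa.create_engine('sqlite:///glance.db')

def scrub_expired_tasks():
    """Delete tasks whose expires_at has passed."""
    now = datetime.datetime.utcnow()
    with engine.begin() as conn:
        conn.execute(
            sa.text('DELETE FROM tasks '
                    'WHERE expires_at IS NOT NULL AND expires_at < :now'),
            {'now': now},
        )

if __name__ == '__main__':
    while True:                 # cron-like: run a pass, sleep, repeat
        scrub_expired_tasks()
        time.sleep(3600)
```

Running it as its own process, separate from the image scrubber, spreads the two scrubbing loads independently, which is the design point raised above.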
14:52:44 <nikhil_k> btw guys, we kinda need some of this for glance v1/v2 support in nova #link https://review.openstack.org/#/c/192926/
14:53:23 <nikhil_k> nicely written spec, thanks stevelle
14:53:42 <stevelle> thanks nikhil_k and all the others who talked with me about it
14:54:19 <mclaren> stevelle: you now own v2 to v3 migration also ;-)
14:54:40 <stevelle> Uh, oh. Voluntold
14:54:55 <mclaren> heh
14:55:14 <sigmavirus24> mclaren: good voluntelling
14:55:22 <sigmavirus24> mclaren: you're in charge of voluntelling now
14:56:06 <mclaren> I've been voluntold to? Very meta :-)
14:56:20 <sigmavirus24> ;)
14:59:17 <nikhil_k> if people aren't aware, the releases of libs and clients may become centralized soon(-ish)
15:00:09 <nikhil_k> Looks like we're out of time
15:00:16 <nikhil_k> Thanks all!
15:00:19 <nikhil_k> #endmeeting