20:01:59 <markwash> #startmeeting glance
20:02:00 <openstack> Meeting started Thu Apr  4 20:01:59 2013 UTC.  The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:03 <bcwaldon> any rackers around?
20:02:04 <openstack> The meeting name has been set to 'glance'
20:02:17 <markwash> unfortunately I scheduled this over top of a racksburg product meeting
20:02:23 <markwash> so nobody from rackspace can make it
20:02:23 <kgriffs> bcwaldon: here
20:02:32 <bcwaldon> ok - next week
20:02:32 <markwash> well, racksburg, that is :-)
20:02:42 <kgriffs> (for the moment - all hands mtg starting soon)
20:02:49 <bcwaldon> kgriffs: I should have been more specific (looking for iccha , brianr, and ameade)
20:03:02 <markwash> so this is our first meeting
20:03:04 <markwash> exciting I know
20:03:05 <bcwaldon> kgriffs: but please stick around :)
20:03:15 <kgriffs> no worries. I'm just eavesdropping :D
20:03:28 <markwash> #topic Glance Blueprints https://blueprints.launchpad.net/glance
20:03:50 <markwash> I've been working on cleaning up the blueprint list lately
20:04:03 <markwash> and I'd love to make this meeting the place where we keep up to speed with that
20:04:16 <jbresnah> That would be great IMO
20:04:19 <flaper87> +1
20:04:35 <markwash> I've got a list of items to discuss, I'll just start going 1 by 1
20:04:57 <markwash> with blueprint public-glance
20:05:13 <markwash> https://blueprints.launchpad.net/glance/+spec/public-glance
20:05:24 <markwash> feels like we maybe already support that feature?
20:05:36 <bcwaldon> how so?
20:06:18 <bcwaldon> the use case came from canonical wanting to publish cloud images through a public glance service
20:06:22 <bcwaldon> a readonly service
20:06:24 <markwash> isn't there anonymous auth?
20:06:35 <bcwaldon> the catch is how do you use it through nova
20:06:48 <bcwaldon> and yes, there is anonymous access
20:07:08 <bcwaldon> I will admit we are most of the way there, but the last mile is the catch
20:07:17 <markwash> so, is the feature then that we need nova to support talking anonymously to an arbitrary glance server?
20:07:26 <bcwaldon> I think so
20:07:37 <bcwaldon> now that we have readonly access to glance
20:07:51 <bcwaldon> but this BP was written before that - really it feels like a nova feature
20:07:57 <markwash> so, I'd like for us to investigate to ensure that's the only step left
20:07:57 <bcwaldon> OR
20:08:02 <markwash> and then move this bp to nova
20:08:19 <bcwaldon> sure - the alternative would be to create a glance driver for glance that you can point at that public glance service
20:08:36 <markwash> maybe we can touch base with smoser and see what he still wants
20:08:42 <bcwaldon> but maybe you could just use the http backend...
20:08:45 <markwash> anybody wanna take that action?
20:08:48 <bcwaldon> yes, thats probably good
20:08:55 <bcwaldon> we could just summon him here
20:09:12 <markwash> nah, lots to go through, there is riper fruit
20:09:23 <bcwaldon> ok
20:09:30 <bcwaldon> #action bcwaldon to ping smoser on public-glance bp
20:09:31 <markwash> #action markwash investigate remaining steps on bp public-glance and touch base with smoser
20:09:34 <markwash> darnit
20:09:34 <bcwaldon> !
20:09:44 <markwash> team work
20:09:50 <bcwaldon> ok, lets move on
20:09:53 <markwash> glance-basic-quotas, transfer-rate-limiting
20:09:59 <markwash> https://blueprints.launchpad.net/glance/+spec/glance-basic-quotas https://blueprints.launchpad.net/glance/+spec/transfer-rate-limiting
20:10:27 <markwash> These interact closely with a proposed summit session: http://summit.openstack.org/cfp/details/225
20:10:51 <flaper87> Good point, quotas. I think we should get that going for havana.
20:11:15 <markwash> I'd like for iccha_ to put together a blueprint for that umbrella topic
20:11:31 <markwash> and, mark those bps as deps
20:11:45 <markwash> and then put everything in discussion for the summit
* flaper87 won't be at the summit :(
20:12:08 <markwash> ah, hmm
20:12:15 <markwash> flaper87: any notes for us now?
20:12:34 <jbresnah> I think that the transfer rate limiting might be a special case
20:12:45 <jbresnah> it needs much lower latency than connection throttling
20:13:03 <markwash> hmm, interesting, how do you mean?
20:13:06 <jbresnah> ie: it cannot call out to a separate service to determine the current rate
20:13:12 <flaper87> markwash: yep. I've put some thoughts there and I think it would be good to get the quota code used for nova (same code for cinder) and make it generic enough to live in oslo and use that as a base for the implementation
20:13:23 <jbresnah> well... say a tenant has a limit of 1Gb/s total
20:13:35 <markwash> jbresnah: ah I see
20:13:46 <markwash> jbresnah: one thought I had is that most people run glance api workers in a cluster
20:13:54 <jbresnah> so the limit can be taken up front from a 3rd party service
20:13:56 <markwash> so we'd need a way to share usages across workers
20:14:00 <jbresnah> but enforcement has to be done locally
20:14:02 <jbresnah> if that makes sense
20:14:07 <jbresnah> that too
20:14:11 <jbresnah> tho that is harder
20:14:16 <jbresnah> so, in the case i was pointing out
20:14:28 <jbresnah> if they have 1 gb/s
20:14:36 <jbresnah> and they do 1 transfer and it is going at 900mb/s
20:14:39 <jbresnah> they have 100 left
20:14:46 <jbresnah> but if the first drops down to 500...
20:14:48 <jbresnah> then they have 500
20:14:57 <jbresnah> that is all nice, but how do you efficiently do it
20:15:04 <markwash> sounds complicated
20:15:07 <flaper87> markwash: nova's implementation uses either a local cache or a remote cache (say memcached)
20:15:13 <jbresnah> you cannot reasonably make calls to a third party service every time you wish to send a buffer
20:15:20 <jbresnah> if you do, all transfers will be quite slow
20:15:36 <jbresnah> a solution to that problem would be complicated
20:15:46 <jbresnah> i propose that the limit be set at the beginning of the transfer
20:15:54 <markwash> my take is, it sounds complicated and non-glancy, but if it were solved it would be useful for lots of openstack projects
20:15:58 <jbresnah> so the bandwidth is 'checked out' from the quota service
20:16:01 <jbresnah> enforced locally
20:16:05 <jbresnah> and then checked back in when done
20:16:22 <jbresnah> that approach is pretty simple i think
20:16:35 <jbresnah> and in the short term, it can just be a global conf setting for all users
20:16:54 <jbresnah> so an admin can say 'no user may transfer faster than 500mb/s', or some such
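(Editor's note: jbresnah's checkout/enforce-locally/check-in scheme could be sketched roughly as below. All names here — `quota_service`, `checkout`, `checkin` — are hypothetical illustrations, not actual glance or nova code.)

```python
import time


class BandwidthLease:
    """Sketch of the proposal: a byte rate is 'checked out' from a quota
    service once per transfer, enforced locally on every buffer, and
    checked back in when the transfer completes. No per-buffer remote
    calls are made."""

    def __init__(self, quota_service, tenant, requested_bps):
        # One remote call up front to reserve bandwidth for this transfer.
        self.rate_bps = quota_service.checkout(tenant, requested_bps)
        self.quota_service = quota_service
        self.tenant = tenant
        self.sent = 0
        self.start = time.monotonic()

    def throttle(self, nbytes):
        """Sleep just long enough to keep the average rate under the lease."""
        self.sent += nbytes
        expected = self.sent / self.rate_bps  # seconds this many bytes should take
        elapsed = time.monotonic() - self.start
        if expected > elapsed:
            time.sleep(expected - elapsed)

    def release(self):
        # Return the reserved bandwidth to the quota service when done.
        self.quota_service.checkin(self.tenant, self.rate_bps)
```

In the short term, as suggested above, `requested_bps` could simply come from a global conf setting rather than a per-tenant quota service.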
20:17:07 <markwash> in any case, its something that folks have seen as relevant to making glance a top-level service
20:17:23 <markwash> and we only have 5 slots, so I'd like to frame transfer-rate-limiting as part of that discussion
20:17:54 <markwash> so does that framing sound acceptable?
20:18:18 <jbresnah> i am parsing the meaning of:  making glance a top-level service
20:18:23 <markwash> :-)
20:18:50 <jbresnah> but i am ok with making it a subtopic for sure.
20:18:56 <markwash> Rackspace currently doesn't expose glance directly, only through nova
20:19:02 <jbresnah> ah
20:19:05 <markwash> there are some features they want in order to expose it directly
20:19:20 * markwash stands in for rackspace since they aren't here atm
20:19:25 <jbresnah> cool, that makes sense
20:19:45 <flaper87> makes sense
20:19:49 <markwash> #agreed discuss quotas and rate-limiting as part of making glance a public service at the design summit
20:20:13 <markwash> We have a number of blueprints related to image caching
20:20:30 <markwash> glance-cache-path, glance-cache-service, refactoring-move-caching-out-of-middleware
20:20:53 <markwash> I have some questions about these
20:21:24 <markwash> 1) re glance-cache-service, do we need another service? or do we want to expose cache management just at another endpoint in the glance-api process
20:21:42 <markwash> or do we want to expose cache management in terms of locations, as with glance-cache-path?
20:21:43 <jbresnah> i vote for the latter
20:22:32 <markwash> flaper87: thoughts?
20:22:37 <flaper87> I'm not sure. If it'll listen in another port I would prefer to keep them isolated and as such more "controllable" by the user
20:22:40 <jbresnah> i made some comments on that somewhere but i cannot find them...
20:22:50 <flaper87> what if I want to stop the cache service?
20:23:11 <flaper87> I bet most deployments have HA on top of N glance-api's
20:23:37 <bcwaldon> do you think those using the cache depend on the remote management aspect?
20:23:37 <flaper87> so, stopping 1 at a time won't be an issue but, it doesn't feel right to run 2 services under the same process / name
20:24:08 <markwash> I could see it going either way. . I like the management flexibility of having a separate process, but I think it could cause problems to add more moving parts and more network latency
20:24:28 <markwash> bcwaldon: I'm not sure actually
20:24:37 <jbresnah> how is the multiple locations feature supposed to work?
20:24:48 <jbresnah> to me a cached image is just another location
20:24:54 <markwash> bcwaldon: is there a way to manage the cache without using the v1 api? i.e. does glance-cache-manage talk directly to the local cache?
20:25:05 <jbresnah> it would be nice if when cached it could be registered via that same mechanisms and any other location
20:25:13 <jbresnah> and when cleaned up, removed the same way
20:25:24 <jbresnah> the cache management aspect would then be outside of this scope
20:25:24 <flaper87> would it? I mean, I don't see that much latency, TBH. I'm thinking about clients pointing directly to the cache holding the cached image
20:25:28 <flaper87> or something like that
20:25:57 <bcwaldon> markwash: cache management always talks to the cache local to each glance-api node, and it is only accessible using the /v1 namespace
20:26:16 <markwash> bcwaldon: so even if the api is down, I can manage the cache, right?
20:26:24 <bcwaldon> what does that mean?
20:26:44 <markwash> like, I can ssh to the box and run shell commands to manage the cache. . .
20:26:53 <bcwaldon> no - it uses the public API
20:27:00 <markwash> gotcha, okay
20:27:08 <bcwaldon> ...someone check me on that
20:27:16 <flaper87> it uses the public api
20:27:31 <flaper87> AFAIK!
20:27:38 <markwash> So, its probably easy to move from a separate port on the glance-api process, to a separate process
20:27:38 <flaper87> yeah, the registry client
20:27:46 <flaper87> i guess, or something like that
20:28:02 <bcwaldon> its not even a separate port, markwash
20:28:12 <markwash> right, I'm proposing that it would be exposed on a separate port
20:28:20 <bcwaldon> ah - current vs proposed
20:28:21 <bcwaldon> om
20:28:22 <bcwaldon> ok
20:29:01 <markwash> from the api's perspective, it's quite a change to treat the cache as a nearby service rather than a local file resource
20:29:12 <flaper87> I think it would be cleaner to have a separate project! Easier to debug, easier to maintain and easier to distribute
20:29:25 <flaper87> s/project/service
20:29:28 <flaper87> sorry
20:29:33 <bcwaldon> you scared me
20:29:36 <flaper87> hahahaha
20:29:36 <markwash> :-)
20:29:44 <bcwaldon> I agree
20:29:47 <jbresnah> I do not yet understand the need for a separate interface.
20:30:05 <jbresnah> why not register it as another location
20:30:14 <flaper87> we already have a separate file glance-cache.conf
20:30:23 <jbresnah> and use the interfaces in place for multiple locations?
20:30:36 <flaper87> jbresnah: It would be treated like that, (as I imagine it)
20:30:52 <markwash> jbresnah: I think you're right. . its just, to me, there is no place for cache management in image api v2 unless its a special case of locations
20:30:56 <jbresnah> flaper87: can you explain that a bit?
20:31:09 <flaper87> yep
20:31:20 <flaper87> so, I imagine that service like this:
20:32:27 <jbresnah> markwash: i would think that when a user went to download an image, the service could check all registered locations, if there is a local one, it could send that.  if not it could cache that and then add that location to its list
20:32:44 <flaper87> 1) it caches a specific image 2) When a request gets to glance-api it checks if that image is cached in some of the cache services. 3) if it is then it points the client to that server for downloading the image
20:32:49 <jbresnah> any outside admin calls could be done around existing API and access to that store (the filesystem)
20:32:58 <flaper87> that's one scenario
20:33:11 <flaper87> what I wanted to say is that it would be, somehow, another location for an image
20:33:26 <jbresnah> flaper87: of course.  but in that case, why not give the client a list of locations and let it pick what it can handle?
20:33:38 <jbresnah> flaper87: swift:// file:/// http:// etc
20:33:56 <flaper87> jbresnah: sure but the client will have to query glance-api anyway in order to obtain that info
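(Editor's note: flaper87's three-step scenario could be sketched as below. The interfaces — `cache_services`, `has`, `url_for` — are hypothetical illustrations, not actual glance code.)

```python
# glance-api checks its known cache services for the requested image;
# if one holds it, the client is redirected there, otherwise the image
# is served from the backing store as usual.

def locate_image(image_id, cache_services, store_url):
    """Return (status, url): a 302 redirect to a cache service that
    holds the image, or a 200 pointing at the backing store."""
    for cache in cache_services:
        if cache.has(image_id):
            return 302, cache.url_for(image_id)
    return 200, store_url
```

This matches flaper87's point that the client still queries glance-api first and never needs to know whether the image is cached.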
20:33:57 <jbresnah> the special case aspect + another process makes me concerned that this is an unneeded complication
20:34:02 <jbresnah> but i can back down
20:34:06 <markwash> flaper87: I'm not sure in that scenario how we populate an image to the cache. . right now you can do it manually, but you can also use it as an MRU cache that autopopulates as we stream data through the server
20:34:09 <flaper87> so, the client doesn't know if it's cached or not
20:34:28 <jbresnah> flaper87: ?.  in that case i do not understand your original scenario
20:35:50 <flaper87> mmh, sorry. So, the cache service would serve cached images, right?
20:35:53 <markwash> we might need to postpone this discussion for a while
20:35:55 <jbresnah> my last point is this: to me this part of glance is a registry replica service.  it ideally should be able to handle transient/short term replica registrations without it being a special case
20:35:59 <jbresnah> and it seems that it is close to that
20:36:21 <jbresnah> but i do not want to derail work at hand either
20:36:45 <markwash> I don't know that we have enough consensus at this point to really move forward on this front
20:36:53 <jbresnah> code in the hand is worth 2 in the ...bush?
20:37:17 <markwash> true, so I think if folks have patches they want to submit for directional review that would be great
20:37:43 <jbresnah> cool
20:37:48 <markwash> but I'm not comfortable enough to just say "lets +2 the first solution that passes pep8" either :-)
20:37:59 <flaper87> hahaha
20:38:23 * markwash is struggling for an action item out of this cache stuff
20:38:55 <bcwaldon> markwash: let's get a better overview of the options - I think I can weigh more effectively if I see that
20:39:00 <jbresnah> i could better document up my thoughts and submit them for review?
20:39:00 <bcwaldon> and we can have a more directed discussion
20:39:06 <flaper87> I'd say we should discuss this a bit further. I mean, I'd agree either with a separate service or with everything embedded in glance-api somehow. What I don't like that much is for this service to listen on another port within the glance-api process
20:39:16 <jbresnah> informally submit i mean, like email them
20:39:23 <markwash> flaper87: okay, good to know
20:39:29 <markwash> jbresnah: sounds good
20:39:56 <flaper87> jbresnah: we could create a pad and review both scenarios together
20:40:01 <flaper87> and see which one makes more sense
20:40:14 <flaper87> and then review that either in the summit or the next meeting
20:40:18 <markwash> #action jbresnah, flaper87 to offer more detailed proposals for future cache management
20:40:40 <markwash> one more big issue to deal with that I know of
20:40:49 <markwash> iscsi-backend-store, glance-cinder-driver, image-transfer-service
20:41:11 <bcwaldon> I really don't want to open that can of worms on IRC - this is a big discussion to have at the summit
20:41:16 <markwash> all of these feel oriented towards leveraging more efficient image transfers
20:41:24 <markwash> my proposal is more limited
20:41:45 <markwash> I would like to roll these together for the summit, in as much as they are all oriented towards bandwidth efficiency
20:41:50 <jbresnah> in that case, i'll keep my worms in the can until the summit ;-)
20:42:07 <lifeless> oooh bandwidth efficient transfers.
20:42:13 * lifeless has toys in that department
20:42:15 <flaper87> jbresnah: I'll email mine so you can throw them all together
20:43:04 <markwash> I also think the goals of the image-transfer service border on some of the goals of exposing image locations directly
20:43:12 <jbresnah> markwash: i think they may expand into areas beyond BW efficiency, but i am good with putting them all into a topic limited to that
20:43:23 <jbresnah> markwash: i agree
20:43:42 <markwash> cool
20:44:04 <markwash> I'll double check with john griffith and zhi yan liu about what their goals are exactly
20:44:16 <markwash> because it is still possible the conversations are divergent
20:44:57 <markwash> does anybody have any other blueprints they would like to discuss?
20:45:14 <markwash> I have a few more items, but lower importance and less risk of worm-cans
20:46:09 <jbresnah> I have one, but I feel like i am dominating too much of this time already so it can wait
20:46:09 <bcwaldon> open 'em up
20:46:29 <markwash> jbresnah: go for it, not a ton to discuss besides blueprints. . we've been hitting summit sessions on the way I think
20:48:04 <jbresnah> direct-url-meta-data
20:48:21 <jbresnah> it actually has more to do with multiple-locations i think
20:48:28 <jbresnah> and cross cuts a little to the caching thing...
20:48:28 <markwash> yeah, I think so too
20:48:41 <bcwaldon> man, someone should finish that multiple-locations bp
20:48:47 <markwash> though probably we need to update the image locations spec before we could mark it as superseded
20:49:00 <bcwaldon> #action bcwaldon to expose multiple image locations in v2 API
20:49:11 <flaper87> hahahaha
20:49:11 <jbresnah> basically the thought is that if you are exposing information from a driver, you may also need to expose more information than a url for it to be useful
20:49:32 <jbresnah> (i <3 the multiple image locations BP)
20:49:36 <markwash> agreed
20:49:41 <jbresnah> for example, a file url
20:49:47 <markwash> I think locations should become json objects, rather than strings
20:49:48 <jbresnah> that is basically useless
20:49:52 <jbresnah> yeah
20:50:00 <bcwaldon> yep - that's the plan
20:50:10 <jbresnah> with a definition determined by the url scheme
20:50:14 <jbresnah> ok excellent
20:50:20 <flaper87> +1
20:50:26 <bcwaldon> we'll establish the API then we can figure out how to internally add that metadata and bubble it out
20:50:35 <jbresnah> then i suppose that blueprint can go away, or be part of the multiplelocations bp
20:50:51 <bcwaldon> no, it should stay - just make it dependent on multiple-locations
20:51:00 <jbresnah> cool
20:51:20 <markwash> hmm, I'd rather mark it as superseded and add some more detail to the multi-locations bp
20:51:26 <markwash> just to reduce the number of bps
20:51:30 <flaper87> markwash: agreed
20:51:42 <markwash> bcwaldon: would you be okay with me doing that?
20:51:44 <bcwaldon> markwash: that's a silly thing to strive for
20:51:51 * markwash strives for many silly things
20:51:55 <bcwaldon> let's make one blueprint called 'features'
20:51:59 <jbresnah> heh
20:52:02 <flaper87> hahaha
20:52:06 <markwash> :-)
20:52:15 <bcwaldon> we can chat about it - it's a small detail
20:52:30 <bcwaldon> it's a logically different thing to me
20:52:43 <markwash> yeah, in this case, its just that I sort of want object-like urls to appear in the api fully formed like athena from zeus' head
20:52:43 <bcwaldon> and I want to be able to call multiple locations done once we are exposing multiple locations
20:52:49 <flaper87> bcwaldon: but isn't direct-url-meta-data covered by what will be exposed in m-image-l ?
20:53:07 <markwash> and not have two api revisions for the whole ordeal, just one
20:53:17 <bcwaldon> from my point of view, multiple-image-locations has value completely disregarding backend metadata
20:53:24 <bcwaldon> so we're defining a very specific feature
20:53:27 <markwash> I see
20:53:29 <flaper87> ok, sounds good
20:53:47 <markwash> I think that's true. . lets still make sure the multi-locations bp has the details we need to be forward compatible with useful metadata
20:53:49 <flaper87> should we define an action to keep markwash hands off of those blueprints ?
20:54:02 <bcwaldon> only if he actions himself
20:54:09 <bcwaldon> and yes, markwash, we definitely should
20:54:15 <markwash> and then we can rally around some usecases to motivate the metadata aspect
20:54:27 <markwash> #action markwash keep messing with blueprints
20:54:36 <bcwaldon> yeah - I'm interested in the other usecases of that metadata
20:54:46 <bcwaldon> jbresnah: any easy examples in mind?
20:54:58 <jbresnah> the file url
20:55:27 <bcwaldon> well - one could argue that you shouldn't use that store
20:55:28 <jbresnah> so in nova there is a feature that will do a system cp if the direct_url to an image is a file url
20:55:29 <markwash> this could be useful for glance and nova compute workers that share a filesystem like gluster
20:55:38 <bcwaldon> ...oh
20:55:49 <bcwaldon> well I didnt think that through
20:55:49 <jbresnah> but this is sort of a useless feature
20:56:03 <bcwaldon> I get the distributed FS use case
20:56:03 <jbresnah> because you have to assume that nova-compute and glance mount the same fs in the same way
20:56:19 <bcwaldon> true
20:56:25 <jbresnah> so info like NFS exported host, or some generic namespace token would be good
20:56:31 <markwash> jbresnah: that does make me wonder if we need some new fs driver that is a "shared fs" driver
20:56:37 <jbresnah> so that the services could be preconfigured with some meaning
20:56:53 <jbresnah> markwash: maybe, i haven't really thought about how that would help yet
20:56:56 <markwash> or maybe its just optional metadata that goes straight from local configuration of the fs store to the location metadata
20:57:12 <flaper87> markwash: I would say the latter
20:57:25 <flaper87> sounds like something up to the way the whole thing is configured
20:57:27 <markwash> jbresnah: were you thinking along the lines of the latter as well?
20:57:31 <flaper87> and not the implementation itself
20:57:37 <jbresnah> nod
20:57:43 <markwash> okay cool, I like that
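(Editor's note: the shared-filesystem use case discussed above could look roughly like the sketch below, assuming the location-as-object format from the multiple-locations discussion. The keys `metadata`, `share_id` and the function are hypothetical illustrations; the real format is up to the bp.)

```python
# A location as a json-like object rather than a bare url string, with
# scheme-specific metadata filled in from the fs store's local config:
location = {
    "url": "file:///var/lib/glance/images/71c675ab",
    "metadata": {
        "share_id": "gluster-vol-1",  # which exported share holds the file
    },
}


def usable_file_location(loc, local_shares):
    """nova-compute-side check (sketch): a file:// url is only worth a
    local cp if both services mount the same share."""
    if not loc["url"].startswith("file://"):
        return False
    return loc["metadata"].get("share_id") in local_shares
```

With a bare url string, nova-compute has no way to tell whether the file path means anything on its own filesystem; the extra metadata token gives preconfigured services a shared meaning, which is the point jbresnah raises about NFS exported hosts or a generic namespace token.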
20:58:02 <markwash> so that's a great use case, we should find some more b/c I think they may be out there
20:58:10 <markwash> we're about out of time, any last minute items?
20:58:20 <flaper87> yep
20:58:22 <bcwaldon> only to say thank you for organizing this, markwash
20:58:31 <flaper87> what about doing bug squashing days from time to time ?
20:58:44 <markwash> flaper87: could be great
20:58:50 <flaper87> markwash: indeed, thanks! It's really useful to have these meetings
20:58:59 <bcwaldon> my only reservation would be our low volume of bugs
20:59:14 <bcwaldon> relative to larger projects that have BS days
20:59:26 <markwash> Do we want to have another meeting next week before the summit?
20:59:27 <flaper87> yep, that's why I was thinking of it like something we do once a month
20:59:30 <flaper87> or something like that
20:59:31 <jbresnah> yeah this was great, thanks!
20:59:36 <bcwaldon> ok
20:59:39 <bcwaldon> markwash: yes please
20:59:43 <flaper87> markwash: +1
20:59:45 <markwash> #action markwash to look at 1/month bugsquash days
21:00:09 <markwash> #action markwash to schedule an extra glance meeting before the summit
21:00:13 <markwash> thanks guys, we're out of time
21:00:15 <markwash> #endmeeting