20:01:59 #startmeeting glance
20:02:00 Meeting started Thu Apr 4 20:01:59 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:03 any rackers around?
20:02:04 The meeting name has been set to 'glance'
20:02:17 unfortunately I scheduled this over top of a racksburg product meeting
20:02:23 so nobody from rackspace can make it
20:02:23 bcwaldon: here
20:02:32 ok - next week
20:02:32 well, racksburg, that is :-)
20:02:42 (for the moment - all hands mtg starting soon)
20:02:49 kgriffs: I should have been more specific (looking for iccha, brianr, and ameade)
20:03:02 so this is our first meeting
20:03:04 exciting I know
20:03:05 kgriffs: but please stick around :)
20:03:15 no worries. I'm just eavesdropping :D
20:03:28 #topic Glance Blueprints https://blueprints.launchpad.net/glance
20:03:50 I've been working on cleaning up the blueprint list lately
20:04:03 and I'd love to make this meeting the place where we keep up to speed with that
20:04:16 That would be great IMO
20:04:19 +1
20:04:35 I've got a list of items to discuss, I'll just start going 1 by 1
20:04:57 with blueprint public-glance
20:05:13 https://blueprints.launchpad.net/glance/+spec/public-glance
20:05:24 feels like we maybe already support that feature?
20:05:36 how so?
20:06:18 the use case came from canonical wanting to publish cloud images through a public glance service
20:06:22 a readonly service
20:06:24 isn't there anonymous auth?
20:06:35 the catch is how do you use it through nova
20:06:48 and yes, there is anonymous access
20:07:08 I will admit we are most of the way there, but the last mile is the catch
20:07:17 so, is the feature then that we need nova to support talking anonymously to an arbitrary glance server?
20:07:26 I think so
20:07:37 now that we have readonly access to glance
20:07:51 but this BP was written before that - really it feels like a nova feature
20:07:57 so, I'd like for us to investigate to ensure that's the only step left
20:07:57 OR
20:08:02 and then move this bp to nova
20:08:19 sure - the alternative would be to create a glance driver for glance that you can point to that public glance service with
20:08:36 maybe we can touch base with smoser and see what he still wants
20:08:42 but maybe you could just use the http backend...
20:08:45 anybody wanna take that action?
20:08:48 yes, thats probably good
20:08:55 we could just summon him here
20:09:12 nah, lots to go through, there is riper fruit
20:09:23 ok
20:09:30 #action bcwaldon to ping smoser on public-glance bp
20:09:31 #action markwash investigate remaining steps on bp public-glance and touch base with smoser
20:09:34 darnit
20:09:34 !
20:09:44 team work
20:09:50 ok, lets move on
20:09:53 glance-basic-quotas, transfer-rate-limiting
20:09:59 https://blueprints.launchpad.net/glance/+spec/glance-basic-quotas https://blueprints.launchpad.net/glance/+spec/transfer-rate-limiting
20:10:27 These interact closely with a proposed summit session: http://summit.openstack.org/cfp/details/225
20:10:51 Good point, quotas. I think we should get that going for havana.
20:11:15 I'd like for iccha_ to put together a blueprint for that umbrella topic
20:11:31 and, mark those bps as deps
20:11:45 and then put everything in discussion for the summit
20:11:56 * flaper87 wont be at the summit :(
20:12:08 ah, hmm
20:12:15 flaper87: any notes for us now?
20:12:34 I think that the transfer rate limiting might be a special case
20:12:45 it needs much lower latency than connection throttling
20:13:03 hmm, interesting, how do you mean?
20:13:06 ie: it cannot call out to a separate service to determine the current rate
20:13:12 markwash: yep. I've put some thoughts there and I think it would be good to get the quota code used for nova (same code for cinder) and make it generic enough to live in oslo and use that as a base for the implementation
20:13:23 well... say a tenant has a limit of 1Gb/s total
20:13:35 jbresnah: ah I see
20:13:46 jbresnah: one thought I had is that most people run glance api workers in a cluster
20:13:54 so the limit can be taken up front from a 3rd party service
20:13:56 so we'd need a way to share usages across workers
20:14:00 but enforcement has to be done locally
20:14:02 if that makes sense
20:14:07 that too
20:14:11 tho that is harder
20:14:16 so, in the case i was pointing out
20:14:28 if they have 1Gb/s
20:14:36 and they do 1 transfer and it is going at 900Mb/s
20:14:39 they have 100 left
20:14:46 but if the first drops down to 500...
20:14:48 then they have 500
20:14:57 that is all nice, but how do you efficiently do it
20:15:04 sounds complicated
20:15:07 markwash: nova's implementation uses either a local cache or a remote cache (say memcached)
20:15:13 you cannot reasonably make calls to a third party service every time you wish to send a buffer
20:15:20 if you do, all transfers will be quite slow
20:15:36 a solution to that problem would be complicated
20:15:46 i propose that the limit be set at the beginning of the transfer
20:15:54 my take is, it sounds complicated and non-glancy, but if it were solved it would be useful for lots of openstack projects
20:15:58 so the bandwidth is 'checked out' from the quota service
20:16:01 enforced locally
20:16:05 and then checked back in when done
20:16:22 that approach is pretty simple i think
20:16:35 and in the short term, it can just be a global conf setting for all users
20:16:54 so an admin can say 'no user may transfer faster than 500Mb/s', or some such
20:17:07 in any case, its something that folks have seen as relevant to making glance a top-level service
20:17:23 and we only have 5 slots, so I'd like to frame transfer-rate-limiting as part of that discussion
20:17:54 so does that framing sound acceptable?
20:18:18 i am parsing the meaning of: making glance a top-level service
20:18:23 :-)
20:18:50 but i am ok with making it a subtopic for sure.
20:18:56 Rackspace currently doesn't expose glance directly, only through nova
20:19:02 ah
20:19:05 there are some features they want in order to expose it directly
20:19:20 * markwash stands in for rackspace since they aren't here atm
20:19:25 cool, that makes sense
20:19:45 makes sense
20:19:49 #agreed discuss quotas and rate-limiting as part of making glance a public service at the design summit
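[Editor's note: a minimal sketch of the scheme jbresnah outlines above: bandwidth is "checked out" from a central quota service once at the start of a transfer, enforced locally while streaming, and checked back in when done. quota_client and its methods are hypothetical; only the shape of the idea comes from the discussion, none of this is real Glance code.]

    import time

    class LocalRateLimiter(object):
        """Pace a stream in-process; no per-buffer calls to a remote service."""

        def __init__(self, bytes_per_sec):
            self.bytes_per_sec = bytes_per_sec
            self.sent = 0
            self.start = time.time()

        def throttle(self, nbytes):
            self.sent += nbytes
            # how far ahead of the allotted rate we are, in seconds
            ahead = self.sent / float(self.bytes_per_sec) - (time.time() - self.start)
            if ahead > 0:
                time.sleep(ahead)

    def rate_limited_download(image_iter, write, quota_client, tenant_id):
        # one remote call at the beginning of the transfer, not one per buffer
        allotment = quota_client.check_out_bandwidth(tenant_id)  # bytes/sec
        limiter = LocalRateLimiter(allotment)
        try:
            for chunk in image_iter:
                write(chunk)
                limiter.throttle(len(chunk))
        finally:
            # return the checked-out bandwidth when the transfer completes
            quota_client.check_in_bandwidth(tenant_id, allotment)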
20:20:13 We have a number of blueprints related to image caching
20:20:30 glance-cache-path, glance-cache-service, refactoring-move-caching-out-of-middleware
20:20:53 I have some questions about these
20:21:24 1) re glance-cache-service, do we need another service? or do we want to expose cache management just at another endpoint in the glance-api process
20:21:42 or do we want to expose cache management in terms of locations, as with glance-cache-path?
20:21:43 i vote for the latter
20:22:32 flaper87: thoughts?
20:22:37 I'm not sure. If it'll listen on another port I would prefer to keep them isolated and as such more "controllable" by the user
20:22:40 i made some comments on that somewhere but i cannot find them...
20:22:50 what if I would like to stop the cache service?
20:23:11 I bet most deployments have HA on top of N glance-api's
20:23:37 do you think those using the cache depend on the remote management aspect?
20:23:37 so, stopping 1 at a time won't be an issue but, it doesn't feel right to run 2 services under the same process / name
20:24:08 I could see it going either way. . I like the management flexibility of having a separate process, but I think it could cause problems to add more moving parts and more network latency
20:24:28 bcwaldon: I'm not sure actually
20:24:37 how is the multiple locations feature supposed to work?
20:24:48 to me a cached image is just another location
20:24:54 bcwaldon: is there a way to manage the cache without using the v1 api? i.e. does glance-cache-manage talk directly to the local cache?
20:25:05 it would be nice if, when cached, it could be registered via the same mechanisms as any other location
20:25:13 and when cleaned up, removed the same way
20:25:24 the cache management aspect would then be outside of this scope
20:25:24 would it? I mean, I don't see that much latency, TBH. I'm thinking about clients pointing directly to the cache holding the cached image
20:25:28 or something like that
20:25:57 markwash: cache management always talks to the cache local to each glance-api node, and it is only accessible using the /v1 namespace
20:26:16 bcwaldon: so even if the api is down, I can manage the cache, right?
20:26:24 what does that mean?
20:26:44 like, I can ssh to the box and run shell commands to manage the cache. . .
20:26:53 no - it uses the public API
20:27:00 gotcha, okay
20:27:08 ...someone check me on that
20:27:16 it uses the public api
20:27:31 AFAIK!
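[Editor's note: for context, this is the management path being described: glance-cache-manage goes through each glance-api node's public /v1 API rather than touching the cache directory on disk, so a given node's cache is managed by pointing the tool at that node. A couple of invocations, assuming the stock Grizzly-era syntax:]

    # list what one specific glance-api node has cached
    glance-cache-manage --host=glance-node-1 list-cached
    # queue an image for caching on that node, or evict a cached copy
    glance-cache-manage --host=glance-node-1 queue-image <IMAGE_ID>
    glance-cache-manage --host=glance-node-1 delete-cached-image <IMAGE_ID>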
20:27:38 So, its probably easy to move from a separate port on the glance-api process, to a separate process
20:27:46 i guess, or something like that
20:28:02 its not even a separate port, markwash
20:28:12 right, I'm proposing that it would be exposed on a separate port
20:28:20 ah - current vs proposed
20:28:21 om
20:28:22 ok
20:29:01 its quite a change to, from the api's perspective, treat the cache as a nearby service, rather than a local file resource
20:29:12 I think it would be cleaner to have a separate project! Easier to debug, easier to maintain and easier to distribute
20:29:25 s/project/service
20:29:28 sorry
20:29:33 you scared me
20:29:36 hahahaha
20:29:36 :-)
20:29:44 I agree
20:29:47 I do not yet understand the need for a separate interface.
20:30:05 why not register it as another location
20:30:14 we already have a separate file glance-cache.conf
20:30:23 and use the interfaces in place for multiple locations?
20:30:36 jbresnah: It would be treated like that, (as I imagine it)
20:30:52 jbresnah: I think you're right. . its just, to me, there is no place for cache management in image api v2 unless its a special case of locations
20:30:56 flaper87: can you explain that a bit?
20:31:09 yep
20:31:20 so, I imagine that service like this:
20:32:27 markwash: i would think that when a user went to download an image, the service could check all registered locations; if there is a local one, it could send that. if not it could cache that and then add that location to its list
20:32:44 1) it caches a specific image 2) When a request gets to glance-api it checks if that image is cached in some of the cache services. 3) if it is then it points the client to that server for downloading the iamge
20:32:47 iamge
20:32:49 any outside admin calls could be done around existing API and access to that store (the filesystem)
20:32:50 image
20:32:58 that's one scenario
20:33:11 what I wanted to say is that it would be, somehow, another location for an image
20:33:26 flaper87: of course. but in that case, why not give the client a list of locations and let it pick what it can handle?
20:33:38 flaper87: swift:// file:/// http:// etc
20:33:56 jbresnah: sure but the client will have to query glance-api anyway in order to obtain that info
20:33:57 the special case aspect + another process makes me concerned that this is an unneeded complication
20:34:02 but i can back down
20:34:06 flaper87: I'm not sure in that scenario how we populate an image to the cache. . right now you can do it manually, but you can also use it as an MRU cache that autopopulates as we stream data through the server
20:34:09 so, the client doesn't know if it's cached or not
20:34:28 flaper87: ? in that case i do not understand your original scenario
20:35:50 mmh, sorry. So, the cache service would serve cached images, right?
20:35:53 we might need to postpone this discussion for a while
20:35:55 my last point is this: to me this part of glance is a registry replica service. it ideally should be able to handle transient/short-term replica registrations without it being a special case
20:35:59 and it seems that it is close to that
20:36:21 but i do not want to derail work at hand either
20:36:45 I don't know that we have enough consensus at this point to really move forward on this front
20:36:53 code in the hand is worth 2 in the ...bush?
20:37:17 true, so I think if folks have patches they want to submit for directional review that would be great
20:37:43 cool
20:37:48 but I'm not comfortable enough to just say "lets +2 the first solution that passes pep8" either :-)
20:37:59 hahaha
20:38:23 * markwash is struggling for an action item out of this cache stuff
20:38:55 markwash: let's get a better overview of the options - I think I can weigh in more effectively if I see that
20:39:00 i could better document up my thoughts and submit them for review?
20:39:00 and we can have a more directed discussion
20:39:06 I'd say we should discuss this a bit further. I mean, I'd agree either with a separate service or with everything embedded in glance-api somehow. What I don't like that much is for this service to listen on another port within the glance-api process
20:39:16 informally submit i mean, like email them
20:39:23 flaper87: okay, good to know
20:39:29 jbresnah: sounds good
20:39:56 jbresnah: we could create a pad and review both scenarios together
20:40:01 and see which one makes more sense
20:40:14 and then review that either at the summit or the next meeting
20:40:18 #action jbresnah, flaper87 to offer more detail proposals for futuer cache management
20:40:29 typos :-(
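[Editor's note: a rough sketch of the two designs debated above, with hypothetical names throughout. In flaper87's scenario glance-api redirects clients to a standalone cache service that already holds the image; jbresnah's alternative treats a cached copy as just another registered location, so the existing multiple-locations machinery handles it with no special case.]

    def choose_download_location(image, cache_services):
        # flaper87's scenario: if some cache service already holds the image,
        # point the client at it for the actual download
        for cache in cache_services:
            if cache.has(image.id):
                return cache.url_for(image.id)  # e.g. http://cache-1:9191/<id>
        return image.default_location          # e.g. swift://..., file:///...

    # jbresnah's alternative: a node that caches an image registers the local
    # copy as one more image location, and removes it the same way on cleanup:
    #     image.locations.append('file:///var/lib/glance/cache/' + image.id)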
20:40:40 one more big issue to deal with that I know of
20:40:49 iscsi-backend-store, glance-cinder-driver, image-transfer-service
20:41:11 I really don't want to open that can of worms on IRC - this is a big discussion to have at the summit
20:41:16 all of these feel oriented towards leveraging more efficient image transfers
20:41:24 my proposal is more limited
20:41:45 I would like to roll these together for the summit, in as much as they are all oriented towards bandwidth efficienty
20:41:50 in that case, i'll keep my worms in the can until the summit ;-)
20:41:54 s/efficienty/efficiency/
20:42:07 oooh bandwidth efficient transfers.
20:42:13 * lifeless has toys in that department
20:42:15 jbresnah: I'll email mine so you can throw them all together
20:43:04 I also think the goals of the image-transfer service border on some of the goals of exposing image locations directly
20:43:12 markwash: i think they may expand into areas beyond BW efficiency, but i am good with putting them all into a topic limited to that
20:43:23 markwash: i agree
20:43:42 cool
20:44:04 I'll double check with john griffith and zhi yan liu about what their goals are exactly
20:44:16 because it is still possible the conversations are divergent
20:44:57 does anybody have any other blueprints they would like to discuss?
20:45:14 I have a few more items, but lower importance and less risk of worm-cans
20:46:09 I have one, but I feel like i am dominating too much of this time already so it can wait
20:46:09 open 'em up
20:46:29 jbresnah: go for it, not a ton to discuss besides blueprints. . we've been hitting summit sessions on the way I think
20:48:04 direct-url-meta-data
20:48:21 it actually has more to do with multiple-locations i think
20:48:28 and cross-cuts a little to the caching thing...
20:48:28 yeah, I think so too
20:48:41 man, someone should finish that multiple-locations bp
20:48:47 though probably we need to update the image locations spec before we could mark it as superseded
20:49:00 #action bcwaldon to expose multiple image locations in v2 API
20:49:11 hahahaha
20:49:11 basically the thought is that if you are exposing information from a driver then you may also need to expose more information than a url for it to be useful
20:49:32 (i <3 the multiple image locations BP)
20:49:36 agreed
20:49:41 for example, a file url
20:49:47 I think locations should become json objects, rather than strings
20:49:48 that is basically useless
20:49:52 yeah
20:50:00 yep - that's the plan
20:50:10 with a definition defined by the url scheme
20:50:14 ok excellent
20:50:20 +1
20:50:26 we'll establish the API then we can figure out how to internally add that metadata and bubble it out
20:50:35 then i suppose that blueprint can go away, or be part of the multiple-locations bp
20:50:51 no, it should stay - just make it dependent on multiple-locations
20:51:00 cool
20:51:20 hmm, I'd rather mark it as superseded and add some more detail to the multi-locations bp
20:51:26 just to reduce the number of bps
20:51:30 markwash: agreed
20:51:42 bcwaldon: would you be okay with me doing that?
20:51:44 markwash: that's a silly thing to strive for
20:51:51 * markwash strives for many silly things
20:51:55 let's make one blueprint called 'features'
20:51:59 heh
20:52:02 hahaha
20:52:06 :-)
20:52:15 we can chat about it - it's a small detail
20:52:30 it's a logically different thing to me
20:52:43 yeah, in this case, its just that I sort of want object-like urls to appear in the api fully formed, like athena from zeus' head
20:52:43 and I want to be able to call multiple locations done once we are exposing multiple locations
20:52:49 bcwaldon: but isn't direct-url-meta-data covered by what will be exposed in m-image-l?
20:53:07 and not have two api revisions for the whole ordeal, just one
20:53:17 from my point of view, multiple-image-locations has value completely disregarding backend metadata
20:53:24 so we're defining a very specific feature
20:53:27 I see
20:53:29 ok, sounds good
20:53:47 I think that's true. . lets still make sure the multi-locations bp has the details we need to be forward compatible with useful metadata
20:53:49 should we define an action to keep markwash's hands off of those blueprints?
20:54:02 only if he actions himself
20:54:09 and yes, markwash, we definitely should
20:54:15 and then we can rally around some usecases to motivate the metadata aspect
20:54:27 #action markwash keep messing with blueprints
20:54:36 yeah - I'm interested in the other usecases of that metadata
20:54:46 jbresnah: any easy examples in mind?
20:54:58 the file url
20:55:27 well - one could argue that you shouldn't use that store
20:55:28 so in nova there is a feature that will do a system cp if the direct_url to an image is a file url
20:55:29 this could be useful for glance and nova compute workers that share a filesystem like gluster
20:55:38 ...oh
20:55:49 well I didn't think that through
20:55:49 but this is sort of a useless feature
20:56:03 I get the distributed FS use case
20:56:03 because you have to assume that nova-compute and glance mount the same fs in the same way
20:56:19 true
20:56:25 so info like the NFS exported host, or some generic namespace token, would be good
20:56:31 jbresnah: that does make me wonder if we need some new fs driver that is a "shared fs" driver
20:56:37 so that the services could be preconfigured with some meaning
20:56:53 markwash: maybe, i haven't really thought about how that would help yet
20:56:56 or maybe its just optional metadata that goes straight from local configuration of the fs store to the metadata location
20:57:11 s/metadata location/location metadata/
20:57:12 markwash: I would say the latter
20:57:25 sounds like something up to the way the whole thing is configured
20:57:27 jbresnah: were you thinking along the lines of the latter as well?
20:57:31 and not the implementation itself
20:57:37 nod
20:57:43 okay cool, I like that
20:58:02 so that's a great use case, we should find some more b/c I think they may be out there
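[Editor's note: a sketch of what "locations as json objects" plus the shared-filesystem metadata discussed here might look like. The url-plus-metadata shape follows the plan stated above; the metadata keys (share_id, mountpoint) are purely illustrative, populated from the fs store's local configuration so a nova-compute node on the same gluster/NFS mount can recognize the path.]

    image['locations'] = [
        {'url': 'swift://cloud.example.com/images/<IMAGE_ID>',
         'metadata': {}},
        {'url': 'file:///var/lib/glance/images/<IMAGE_ID>',
         'metadata': {'share_id': 'gluster-vol-1',
                      'mountpoint': '/var/lib/glance/images'}},
    ]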
20:58:10 we're about out of time, any last minute items?
20:58:20 yep
20:58:22 only to say thank you for organizing this, markwash
20:58:31 what about doing bug squashing days from time to time?
20:58:44 flaper87: could be great
20:58:50 markwash: indeed, thanks! It's really useful to have these meetings
20:58:59 my only reservation would be our low volume of bugs
20:59:14 relative to larger projects that have BS days
20:59:26 Do we want to have another meeting next week before the summit?
20:59:27 yep, that's why I was thinking of it like something we do 1 per month
20:59:30 or something like that
20:59:31 yeah this was great, thanks!
20:59:36 ok
20:59:39 markwash: yes please
20:59:43 markwash: +1
20:59:45 #action markwash to look at 1/month bugsquash days
21:00:09 #action markwash to schedule an extra glance meeting before the summit
21:00:13 thanks guys, we're out of time
21:00:15 #endmeeting