20:04:34 #startmeeting glance
20:04:34 Meeting started Thu Jun 13 20:04:34 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:04:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:04:37 The meeting name has been set to 'glance'
20:04:39 howdy glance folks
20:04:42 \o/
20:04:47 o/
20:05:20 \o/
20:05:25 haha
20:05:26 just to make some more noise
20:05:28 :D
20:05:35 so, maybe a lighter meeting today
20:05:35 \o/
20:05:46 #topic open discussion
20:05:58 esheffield: did you have notes for us about v1 vs v2 documentation?
20:06:30 markwash: hi, did you check https://review.openstack.org/#/c/31591/5 ? did you see my comments?
20:06:58 markwash: not so much new on the documentation front, more just v1 / v2 work in general
20:07:22 esheffield: ah, okay, sorry I misunderstood
20:07:23 markwash: you saw the email between waldon and me?
20:07:34 yes
20:07:52 was basically going to just touch on that if anyone else had interest / was working on it
20:08:38 esheffield: so you're asking about interest in this blueprint https://blueprints.launchpad.net/python-glanceclient/+spec/glance-client-v2 and
20:08:45 https://blueprints.launchpad.net/python-glanceclient/+spec/glance-client-v2
20:08:48 maybe just that second one
20:09:30 right - waldon had been working on it, but hasn't for a while so we're going to pick it up
20:09:38 sounds great to me
20:09:45 but wanted to see if anyone else in the group here might be doing related work already as well
20:09:46 I know waldon has been distracted from that work for some time
20:10:07 flaper87: I know you've been following the v2 glanceclient work
20:10:22 esheffield: but I'm sure you all are welcome to finish off that blueprint
20:10:38 I think you guys might be best positioned to give it the attention it deserves, esheffield
20:11:06 markwash: sounds good - I know we're going to need that functionality pretty soon
20:11:08 yeah, I've followed it but I'm more than happy to see folks signing up for it
20:11:17 zhiyan: so I took a look at that review. . I'm not totally sure how I feel
20:11:45 zhiyan: there are two issues I think
20:11:53 markwash: I just mean maybe there should be a check for location adding
20:12:09 1) when you add locations to the image, and then delete the image, should glance delete the underlying location
20:12:36 2) when you add a location to the image, (how) should glance verify the validity of that location and its underlying data
20:13:03 agree on #2, that is what my comments mean as well.
20:13:25 since the initial target for adding locations is as a protected (or admin-only) action, we might want to be more permissive than otherwise
20:13:30 I'm not sure I agree with #2
20:13:44 for #1, IMO we should just delete the location records but not delete the image data from those locations..
20:14:16 How would glance verify that? Wouldn't that take us back to the discussion about whether glance should verify tenant ids or not?
20:14:44 flaper87: use the store drivers to check it IMHO
20:15:08 flaper87: such as: check the scheme is well-known and valid
20:15:21 flaper87: I am tending towards the idea that we should not treat directly added locations as different from the typical location case
20:15:36 well, but that doesn't verify the underlying data
20:15:40 zhiyan: ^
20:15:45 flaper87: but it would be easy to load the location into the store and at least run "get_size" to see if it theoretically works
20:16:15 markwash: I think we should verify the image location is supported and that's it
20:16:25 markwash: probably it's a good method
20:16:48 I'd like to think a bit more about it, and then take my feedback to the review
20:16:57 I owe jbresnah some serious review time in any case
20:17:03 me too
20:17:22 flaper87: yes, just like my comments :)
20:17:29 thank you
20:17:38 zhiyan: is there a specific reason why you don't want glance to delete the image location in the cinder volume case?
20:17:43 zhiyan: sorry, haven't read them, will do tomorrow :(
20:18:48 markwash: sorry, do you mean https://review.openstack.org/#/c/32864/ ? I just want to paste it here...
20:19:17 zhiyan, thanks
20:20:08 flaper87: async stuff?
20:20:14 markwash: yup
20:20:17 I've got a full-time upstream week coming up
20:20:32 and i'd like to take care of as much of that as possible (unless I'm stepping on toes!)
20:20:44 awesome, sounds really great
20:21:15 so I think the real weakness in the poc I have up now
20:21:28 https://review.openstack.org/#/c/31874/
20:21:30 so, I think I'll give up on the idea of using Oslo's messaging code for the async work bp
20:21:53 is that it doesn't really show anything about how to implement the actual asynchronous work
20:22:12 flaper87: oh?
20:22:26 markwash: ok, because I checked the 'buyer-beware' principle with jbresnahan before; under this principle the user should manually handle the image data... and also under the current implementation cinder-store does not create a volume when the user registers an image (makes sense, right? they just register an external volume as an image), so when the user deletes the image record the store should not delete the volume either....
20:22:36 right but, I saw that Heat has something similar that we could use in glance as well
20:22:51 which seemed pretty solid
20:23:26 we could also take some ideas from Oslo's messaging but not use it directly (unless we want to have distributed async workers)
20:23:40 flaper87: are Task and TaskRunner ready?
20:23:50 flaper87: I really want to support distributed async workers, but not have the distributed nature of it reflected in the interface
20:24:04 flaper87: like, all implementations of that interface are async
20:24:23 flaper87: but some of them, based only on their internal implementation with no interface changes, are also distributed across other hosts
20:24:42 markwash: If we want to support async workers then I would say we should use Oslo's messaging and work along those lines
20:24:44 zhiyan: I wanna circle back around to the buyer-beware part, sorry for a brief delay
20:25:11 markwash: sorry, I meant distributed async workers
20:25:13 :D
20:25:33 flaper87: I actually can't tell if we're agreeing or disagreeing :-)
20:25:47 hehehe, sorry, let's start again
20:25:53 flaper87: seems like a glance-worker service?
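
A rough sketch of the validation idea discussed above (20:15:08-20:16:15): check that the scheme of a directly added location is one glance knows about, then ask the matching store driver for the object's size ("get_size") to see if the location theoretically works. This is only an illustration under assumed names; get_store_from_uri and get_size here are stand-ins for whatever the store layer actually exposes, not confirmed glance.store APIs.

    KNOWN_SCHEMES = ('file', 'http', 'swift', 'rbd', 'cinder')


    class BadStoreLocation(Exception):
        """Raised when a directly added location cannot be validated."""


    def validate_added_location(store_api, context, location_uri):
        """Cheaply validate a location added through the locations API.

        1) reject URIs whose scheme no configured store understands
        2) ask the store driver for the object's size; a readable,
           non-empty object is treated as "theoretically works"
        """
        scheme = location_uri.split(':', 1)[0]
        if scheme not in KNOWN_SCHEMES:
            raise BadStoreLocation("unsupported scheme: %s" % scheme)
        try:
            # hypothetical helpers, injected so they are easy to stub in tests
            store = store_api.get_store_from_uri(context, location_uri)
            size = store.get_size(location_uri)
        except Exception as exc:
            raise BadStoreLocation("location is not readable: %s" % exc)
        if not size:
            raise BadStoreLocation("location reports zero size")
        return size
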
20:26:01 markwash: I agree we should support distributed async workers
20:26:31 and IMHO, the easiest way to do that is relying on Oslo's messaging library
20:26:42 because it's ready
20:26:46 cool
20:27:05 markwash: I mean, the rpc one, we're still working on the new one
20:27:08 so then I just need to look more closely at what using oslo messaging would mean in the code
20:27:13 right
20:27:31 it's not so different and I think it integrates better
20:27:39 zhiyan: mmh, could be, yup!
20:27:40 it still seems like what we might be disagreeing on is where rpc.cast gets called
20:27:57 markwash: right
20:28:09 i.e., is it called in a driver for the async processing, or is it called in something more like "core" glance
20:28:55 in your POC it's called here: https://review.openstack.org/#/c/31874/2/glance/api/v2/images.py
20:29:41 zhiyan: flaper87: i just think, for glance-cinder-store, when the user uses a volume as an image and the volume is stored in a backend over FC, maybe only some hosts have an FC HBA card (hardware), so glance-worker could run on those nodes...
20:29:44 so if I were using rpc.cast, it would go in those functions ?
20:30:15 zhiyan: that's a great use case for distributed!
20:30:20 ty
20:30:36 yeah, I think so
20:30:49 because if we use rpc.cast we wouldn't need to create an Async domain
20:30:51 markwash: yes, in cinder there's a service planned for that: cinder-io-worker... not started yet
20:31:03 we could easily integrate it in the existing domains
20:32:31 I'm guessing there must be something a bit awkward about representing asynchronous tasks within the domain model that I'm not noticing
20:32:47 zhiyan: first step, i think we can put the async message handler in glance-api, and when we need to, we can split it out into a new distributed service...
20:33:47 markwash: mmh, where would you put it instead?
20:34:01 Do you think a separate domain makes more sense?
20:34:14 flaper87: no no, I think I misspoke
20:34:27 domain.py ? right, flaper87
20:34:54 I guess I'm just a bit prejudiced against putting rpc.cast directly in api calls
20:35:12 rpc feels kind of "external" and I like to hide that away from the core of the project as much as possible
20:35:28 but I confess I haven't dug deep enough to know the actual tradeoffs well
20:35:54 well, I guess we could take some time to think about where to put it a bit further
20:35:54 like, when I'm unit testing api calls, I don't want to have to stub out rpc
20:36:22 markwash: mmh, i see what you mean
20:36:25 flaper87: that will probably be the first thing I do, I'll keep it out in the open so people can see where I'm headed and offer more of this good critique
20:36:58 markwash: would it make sense to enable / disable async support ?
20:37:02 zhiyan: I'm sure we can support both local and distributed in either case, and that's definitely on the roadmap. . so I'm glad you and I are on the same page
20:37:26 flaper87: hmm, I dunno.. . I hadn't thought you would be able to *disable* asynchronous processing
20:37:30 :), cool!
20:38:09 zhiyan, re buyer beware. . I think you're right about the user being surprised if glance deletes their volume for example
20:38:23 zhiyan: but maybe this isn't a "user" oriented use case yet
20:38:39 markwash: yes, i agree
20:39:13 markwash, flaper87: shall we talk about glance-cinder-store now?
20:39:38 i have posted a draft patch for your review, https://review.openstack.org/#/c/32864/
20:39:41 zhiyan: sure, I saw your review, I still need to dig into it
20:39:52 markwash: thanks.
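
To make the "where does rpc.cast go" question above concrete: here is a hedged sketch, not the proposal from the review, of the arrangement markwash hints at, where the API layer depends on a small task-executor interface and only one implementation of it knows about rpc, so unit tests for api calls can stub the executor instead of rpc. The class names, the topic, and the assumed cast(context, topic, message) shape are illustrative assumptions.

    class TaskExecutor(object):
        """Interface the api/domain layer depends on; always asynchronous."""

        def run(self, context, task_type, task_input):
            raise NotImplementedError


    class LocalTaskExecutor(TaskExecutor):
        """Runs the work in-process (e.g. in a greenthread)."""

        def __init__(self, handlers, spawn):
            self._handlers = handlers   # mapping: task_type -> callable
            self._spawn = spawn         # e.g. eventlet.spawn_n

        def run(self, context, task_type, task_input):
            self._spawn(self._handlers[task_type], context, task_input)


    class RpcTaskExecutor(TaskExecutor):
        """Casts the work to remote glance workers over the message bus."""

        def __init__(self, cast, topic='glance-tasks'):
            self._cast = cast           # assumed shape: cast(context, topic, msg)
            self._topic = topic

        def run(self, context, task_type, task_input):
            self._cast(context, self._topic,
                       {'method': 'run_task',
                        'args': {'task_type': task_type,
                                 'task_input': task_input}})

With something like this, a call site in images.py only does executor.run(context, 'import', task_input), and whether workers are local or distributed becomes a wiring/deployment choice, which lines up with the "local and distributed in either case" point above.
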
20:39:58 * flaper87 hasn't reviewed that one yet
20:40:06 zhiyan: it's in my to-review list
20:40:18 oh you posted that today
20:40:25 cool
20:40:27 flaper87: :), yes
20:40:33 thank you all :)
20:40:48 zhiyan: you'll have to give some pop-tarts away
20:40:53 :D
20:40:58 * flaper87 is always hungry
20:41:04 btw, for nova-side multiple-location image preparation support, I have created a bp https://blueprints.launchpad.net/nova/+spec/image-multiple-location
20:41:22 zhiyan: cool
20:42:04 when the glance side is ready, i will propose it again, and i will do the high-level design for it today
20:42:21 cool, thanks for that
20:42:25 zhiyan: great. . thanks for getting on top of that. . it will be good to see if we get any feedback from nova folks
20:42:54 can we talk a little bit about testing now, and then maybe we're done for the day. . .
20:43:01 sure
20:43:17 #link https://etherpad.openstack.org/glance-improving-test-cycle-times
20:43:32 #link https://review.openstack.org/#/c/30512/
20:43:39 we talked about that draft plan in the glance irc channel... when we finished the last weekly meeting
20:44:03 i will keep it that way.
20:44:08 ok. next.
20:44:08 so I put together a review (that needs a lot of fixup, thanks for the notes flaper87 ) that tries to speed up glance testing
20:44:19 zhiyan, sorry didn't mean to rush you!
20:44:48 and when I was talking about it with some folks earlier, it seemed like we needed a high-level strategy for what testing in glance should look like
20:45:08 I had some spare coffee-fueled moments on the train and put together the etherpad I linked above
20:45:11 markwash: no, just for image-multiple-location... not testing
20:45:17 markwash: the etherpad looks good, great work.
20:45:35 I'm going to start working through the specific proposals at the bottom
20:46:01 and it's least invasive first, so I think folks should have a lot of time to respond and redirect the work as we go
20:46:16 markwash: about #1
20:47:03 we have *something* like that now, but it might only be in ./run_tests.sh
20:47:04 markwash: we were just discussing how to separate / organize tests in Marconi earlier today. There's a pattern I've seen / used in the past that might work for glance as well. (by pattern I mean module structure)
20:47:12 oh cool
20:47:47 I think it would be fantastic if most os projects could adopt similar (but good) conventions for where tests live
20:48:00 Inside glance.test we should have the base classes that will be imported / used by the actual tests, and the tests should live outside the glance/ package
20:48:08 lemme add that to your pad
20:49:04 cool
20:49:36 I think that about covers testing for now
20:49:48 other open topics? iccha?
20:49:53 thanks markwash will look at etherpad
20:50:01 ya markwash , so i noticed https://blueprints.launchpad.net/glance/+spec/remove-sensitive-data-from-locations
20:50:17 should we be looking more closely at this as we are inching towards public glance?
20:50:18 iccha: ++ for resurrecting that
20:50:23 iccha: funny you should mention that. . I saw that recently and realized I had completely forgotten about it
20:50:43 iccha: +1 yes
20:51:09 hehe yeah sorry i jumped in at the last min on the meeting, turns out my flight was cancelled and i could join the meeting after all :p
20:51:10 should we use metadata_encryption ?
20:51:17 markwash: so do we target that for havana
20:51:32 iccha yeah, if somebody wants to work on it
20:51:36 did anyone do any ground work on it or have any ideas?
if yes the whiteboard is a great place
20:51:40 I'd like to see that happen in Havanna
20:51:42 Havana
20:51:59 iccha: I'm a bit hesitant myself, only because I don't have a big ready deployment at my fingertips to help me figure out all the possible gotchas
20:52:27 markwash: i think we would be interested
20:52:51 as far as I can tell, the thought was "just remove it", but do something like moving the related per store/container/etc authn creds to config
20:53:17 so you might have a glance.store.swift.user_id config setting
20:53:37 or maybe even a mapping of container -> user_id / password (again still thinking of swift)
20:53:39 for each user?
20:53:54 can we extract that sensitive data from the location string, encrypt it with metadata_encryption, and save it to the location's metadata (https://blueprints.launchpad.net/glance/+spec/direct-url-meta-data) ?
20:54:03 not quite, I think the users would share the same ones
20:54:18 zhiyan: I think the hope is to get rid of the need for metadata encryption
20:54:34 zhiyan: basically, we would remove sensitive stuff completely from glance's database
20:54:42 iccha: what about writing ideas on an etherpad? I'm interested in that as well
20:54:48 yeah, we should etherpad that out
20:54:56 +1 flaper87
20:54:57 instead of me yammering unintelligibly :-)
20:55:02 ok, +1
20:55:17 * flaper87 is with technology as his sister is with shoes. He's always interested
20:55:26 https://etherpad.openstack.org/remove-sensitive-location-info-glance
20:55:42 iccha: +1
20:55:51 it's blank for now cause i removed all the sensitive info from the etherpad :p
20:55:59 okay cool, anything else for the last 5 minutes? or extra free time for all?
20:56:04 we don't even have to tell our bosses
20:56:06 cool, I can help with that blueprint!
20:56:30 not from me
20:56:39 not from me now :)
20:56:47 nothing else here :)
20:56:49 #endmeeting
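
As a reference for the remove-sensitive-data-from-locations discussion near the end of the meeting: a hedged sketch of the config-based direction markwash describes ("you might have a glance.store.swift.user_id config setting"), where per-deployment swift credentials live in configuration and the stored location URI carries no secrets, so nothing in glance's database needs metadata_encryption. The option names and the helper below are illustrative assumptions, not the blueprint's actual design.

    from oslo.config import cfg

    swift_store_opts = [
        cfg.StrOpt('swift_store_user',
                   help='tenant:user used to access the swift store'),
        cfg.StrOpt('swift_store_key', secret=True,
                   help='auth key/password for the swift store user'),
        cfg.StrOpt('swift_store_auth_address',
                   help='auth endpoint used for the swift store'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(swift_store_opts)


    def swift_connection_params(container, obj):
        """Combine config-held credentials with a credential-free location.

        With creds in config, the database only has to hold something like
        swift://<container>/<object>, so there is no sensitive data left to
        encrypt or to scrub from image locations.
        """
        return {
            'user': CONF.swift_store_user,
            'key': CONF.swift_store_key,
            'authurl': CONF.swift_store_auth_address,
            'container': container,
            'obj': obj,
        }
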