20:02:43 #startmeeting glance
20:02:44 Meeting started Thu May 23 20:02:43 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:48 The meeting name has been set to 'glance'
20:03:06 Agenda for today
20:03:09 #link https://wiki.openstack.org/wiki/Meetings/Glance
20:03:44 #topic new blueprints
20:04:18 first thing I wanted to mention is a sheepdog store
20:04:35 I don't know if folks are familiar, but sheepdog is a block/object store that some folks want to use for storing images
20:04:49 I was looking around for a blueprint we could approve, but couldn't find one
20:05:05 here is the code so far https://review.openstack.org/#/c/29961/
20:05:08 i'm here now
20:05:08 worth a look
20:05:32 maybe we should ask Liu to make a bp, or maybe it's not a big deal
20:05:47 it's good to have ppl interested in contributing and writing code for new stores, but +1 on bp required
20:05:49 anybody see any reason not to add a sheepdog store? the code in the review looks like it is progressing nicely
20:06:10 i'm cool with it, don't even care if there is a bp
20:06:39 one thing I keep thinking about is
20:06:58 in the long run, should we offer better support for drivers living outside of the code base?
20:07:15 yes
20:07:15 it seems good to me
20:07:17 markwash: ideally
20:07:18 I was about to say something along those lines
20:07:21 one thing tho...
20:07:25 these backends we're getting are great, but glance core doesn't necessarily have the resources to test them
20:07:31 there is always the debate about having plugins part of the distro or not
20:07:43 like, it gets contributed and then the expertise goes away
20:07:50 and maintaining it is hard
20:08:18 probably good to have an easy path to include your backend store without it being part of glance proper.
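[Editor's aside on the "easy path to include your backend store" point above: out-of-tree drivers imply a stable contract to code against, plus the driver-vetting test framework mentioned later in the meeting. The sketch below is hypothetical — the class and method names are illustrative, loosely modeled on the shape of an image-store interface, not Glance's actual API — with a toy in-memory backend of the kind a conformance suite could exercise.]

```python
# Hypothetical sketch of a pluggable image-store contract; names are
# illustrative, NOT Glance's real driver API.
import abc
import hashlib
import io


class BaseStore(abc.ABC):
    """Contract an out-of-tree backend (e.g. sheepdog) would implement."""

    @abc.abstractmethod
    def add(self, image_id, data):
        """Store image bytes; return (location, size, checksum)."""

    @abc.abstractmethod
    def get(self, location):
        """Return a file-like object for the stored bytes."""

    @abc.abstractmethod
    def delete(self, location):
        """Remove the stored bytes."""


class InMemoryStore(BaseStore):
    """Toy backend used to exercise the contract in tests."""

    def __init__(self):
        self._blobs = {}

    def add(self, image_id, data):
        payload = data.read()
        location = "mem://%s" % image_id
        self._blobs[location] = payload
        return location, len(payload), hashlib.md5(payload).hexdigest()

    def get(self, location):
        return io.BytesIO(self._blobs[location])

    def delete(self, location):
        del self._blobs[location]


if __name__ == "__main__":
    store = InMemoryStore()
    loc, size, checksum = store.add("abc-123", io.BytesIO(b"image-bytes"))
    assert store.get(loc).read() == b"image-bytes"
    print(loc, size, checksum)
```

A shared test suite could run the same add/get/delete assertions against any backend implementing the contract, which is one way to "vet a driver" without glance core maintaining each backend's internals.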
20:08:41 maybe openstack needs a "user contrib" area for collecting things like this that people can find, but that aren't part of the official distro
20:08:49 yeah something like that would be good
20:08:54 I think we maybe could have Glance provide a testing framework for what-have-you backends
20:09:03 what are the exact benefits and is it worth our time?
20:09:37 so this might be a concern folks should start to chatter about
20:09:39 that would be good. to vet a driver
20:09:44 I don't see it in our really near-term future
20:09:46 nod
20:10:08 cool, I've already asked for a blueprint for sheepdog, and I'll mark it approved when I see one (as long as it's not somehow crazy)
20:10:41 next item. . .
20:11:14 https://blueprints.launchpad.net/glance/+spec/image-error-state-management
20:11:17 error state management
20:11:26 this one was recently added by our good friend ameade
20:11:44 and triggered some interest from folks who are tired of being confused when their snapshot fails
20:12:02 not trying to dig into the details, here, yet
20:12:13 but I think it might make sense for more folks to take a look offline
20:12:41 I'm inclined to accept what we can to improve the experience with failed uploads, while maintaining backwards compatibility
20:13:10 one more pair of items for new blueprints
20:13:14 markwash: a comment on your reply to the bp ..
20:13:24 nikhil: go for it
20:13:25 can we just check the checksum?
20:13:51 no?
20:14:02 maybe in some cases, however, I think sometimes the image gets deleted immediately on a failure
20:14:09 so there wouldn't be anything to check
20:14:10 might be easier in case of copy_from (raw data), might get complicated
20:14:34 in case of import on that (just completing thought)
20:14:55 markwash: ah k
20:15:05 sure. .
I think there's some possible synergy with the import export blueprints, which are next
20:15:23 #link https://blueprints.launchpad.net/glance/+spec/new-download-workflow
20:15:33 #link https://blueprints.launchpad.net/glance/+spec/new-upload-workflow
20:15:58 markwash: really in this bp I care about the use cases and not the proposed solution
20:16:08 does it make sense to change the bp to not propose a solution?
20:16:15 or does that make it a really fancy bug?
20:16:35 tricky
20:16:58 ameade, might be a good idea, but I'm not worried
20:17:36 rosmaita sent out an email about the upload and download workflows (some might say import and export)
20:17:36 i'm assuming we will touch on when we want to solve this in the upload/download discussion?
20:17:38 #link http://lists.openstack.org/pipermail/openstack-dev/2013-May/009385.html
20:18:08 ameade: it's possible we can help out with the solution separately from upload/download changes
20:18:24 I guess it's kind of up to whoever writes the code and reviewers :-)
20:18:33 markwash: that's true lol
20:18:34 I just want to make sure we preserve backwards compat :-)
20:18:35 markwash: well, i was under the impression both were diff (where import was upload+convert)
20:19:04 nikhil: I was under the same impression, but I think if you read into the specs it's clear that import/export are what's being discussed
20:19:17 nikhil: but I wouldn't be terribly surprised if I was wrong about that and missed something :-D
20:19:43 :)
20:19:47 which specs are you talking about?
20:19:52 up/down?
20:19:55 yup
20:20:08 anything works, as long as we've some consensus on how we're handling the image standards
20:20:23 i mean vhd qcow2 etc
20:20:25 they don't require conversion, but they leave open an easy way to include that
20:20:39 for import, as part of the "verification" step
20:20:57 for export, whatever we want to call the "scrubbing"
20:21:00 oh, yes
20:21:02 sorry
20:21:09 okay, maybe I'm on the same page as rosmaita, and not nikhil
20:21:14 and was just confused a moment ago
20:21:21 np, not sure how clear that was in the spec
20:21:30 import -> bits & metadata can change
20:21:52 so our plan is to let the idea soak in and see what else ppl have to say on the mailing list?
20:21:53 for whatever purpose (verification, conversion, something else)
20:22:08 well, for me, I love these spec
20:22:12 specs
20:22:27 and I think it makes sense to start breaking ground
20:22:54 I am interested to take a poll here in this meeting to see if folks have concerns about the specs, or if they want more time to consider things
20:23:36 i have one more ... the image cloning one
20:23:59 not complete though, but i use the "image-action" resource
20:24:04 I am good with those specs, but so much of it depends on async workers (in my mind) and that does not yet seem worked out
20:24:25 rosmaita: I was wondering, do we get cloning for free with image import?
20:24:31 (has anyone got a chance to look at the etherpad for async workers?)
20:24:40 pls ignore if this is not the topic of discussion
20:24:46 not for free, i don't think
20:24:52 rosmaita: if we define glance in another region as the source of the import
20:25:11 perhaps I was wrong
20:25:25 markwash: i have to think, i think it requires more glance-to-glance coordination
20:25:36 nikhil: you and jbresnah bring up a good point.
it seems like the first step here is still async workers
20:25:50 markwash: we're just worried about the gruesome implementation details for cloning being incorporated as copy_from
20:26:09 nikhil: that's a good concern to have
20:26:14 :)
20:26:33 nikhil: I think we're free to redefine how copy_from looks in the api for v2 (since it does not exist there yet TMK)
20:26:51 sounds good
20:27:07 so do you wanna start with that, or with workers for that specifically?
20:27:23 well, it sounds like we don't have any surprises here about upload/download, so I think we can move on
20:27:32 darn the stmnt looks complicated
20:27:35 I'd appreciate it if any folks here want to chime in on the ML
20:27:56 just to keep the late-game surprises to a minimum
20:28:26 let's talk about bugs for a minute
20:28:30 #topic bugs!
20:28:47 folks, I have not been paying enough attention to bugs
20:29:05 I think we might be out of the post-summit bp triage glut
20:29:14 so maybe we can start taking on bugs as a group?
20:29:30 * jbresnah has been slacking on bugs lately
20:29:34 +1
20:29:37 +1
20:29:53 you mean triaging as a group?
20:30:27 ameade: just bringing up the difficult ones
20:30:41 I think we can make our way through most as a matter of just personal attention
20:30:58 with that in mind, does anyone have any bugs they're worried about right now?
20:31:30 I have https://bugs.launchpad.net/glance/+bug/1155389
20:31:31 https://bugs.launchpad.net/glance/+bug/1069979
20:31:41 is the one I posted still true?
20:31:49 ameade: I'm worried about that one too
20:31:57 ameade: I think we need to double check to verify
20:32:35 ameade: I assume that I originally marked it "invalid" for a good reason
20:32:40 but that assumption may be wrong. .
20:32:44 * markwash marksmash
20:32:49 hehe
20:33:10 we need a bugbot in this channel
20:33:39 ameade: can you look into that bug some more?
20:33:48 sure, you can action it to me
20:34:03 #action ameade try to reproduce https://bugs.launchpad.net/glance/+bug/1069979
20:34:06 ameade: thanks!
20:34:07 i'll confirm it if it's true
20:34:16 looks nasty if it is
20:34:29 and we could theoretically get in a quick fix before havana 1 is released
20:34:47 hmm i think esheffield tried it recently and it worked, but it would definitely be worth checking through
20:35:00 #action markwash try to do some bug triage for a change
20:35:00 I've been doing a lot of experimenting the last couple of days with image creates / uploads and they were working
20:35:04 i think taking a look at the root issue described would be good
20:35:12 once I got the headers etc. right
20:35:19 but didn't see that specific error
20:35:26 esheffield: we might need to ask him about what transport he's using for notifications, too
20:35:27 to make sure it doesn't sneak back in somehow
20:35:55 yeah, that was just a plain vanilla devstack setup I was using
20:36:16 any other bugs people want to draw our collective attention to?
20:37:03 cool, let's move on to tracking some existing work
20:37:11 #topic blueprints in progress
20:37:24 there has been a lot of discussion of https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection
20:37:39 continuing in the etherpad
20:37:43 * markwash searches for link
20:37:49 https://etherpad.openstack.org/public-glance-protected-props
20:38:02 markwash: ^^
20:38:36 not sure if we're getting closer to breaking ground, but if not any delays are probably due to me being a jerk
20:39:18 the main issues right now are how to deal with collisions between protected properties and existing free-form properties
20:39:31 I think we'll probably have an update again next week
20:40:23 the challenge being that the v2 api advocates a flat hierarchy for properties
20:40:38 which means, lots of collisions!
20:40:42 :-(
20:40:56 nikhil: do you want to talk about async workers?
20:41:26 nikhil: and can you remind me of any links
20:41:35 ameade: ^^ I think I randomly assigned you to the async workers bp
20:41:37 there are some requirements on etherpad: https://etherpad.openstack.org/havana-glance-requirements
20:42:09 sure
20:42:14 jbresnah: thanks
20:42:19 np
20:42:38 markwash: was wondering if you got a chance to browse through and get a picture
20:42:53 tho i might have been a bit late in getting everything there
20:43:08 nikhil: I don't think I quite follow what you are asking me
20:43:22 also, haven't added the granular opinions on the requirements
20:43:32 man this async worker has a lot of responsibilities
20:43:33 i was talking about https://etherpad.openstack.org/havana-glance-requirements
20:43:36 markwash: ^^
20:43:43 ameade: yeah
20:43:49 do we have a list of must haves vs nice to haves?
20:43:52 nikhil: oh, okay
20:44:00 i thought flavio was doing some kinda poc
20:44:01 nikhil what I'm seeing looks good to me
20:44:23 markwash: ah k, though I feel like adding more details
20:44:29 iccha: must haves would be good
20:44:35 * ameade wishes we had an implemented workflow service
20:44:38 just don't want to step on people's toes on what ideas are already out there
20:44:39 I think some simple POCs would be great things for making the next step
20:44:52 jbresnah: i agree that may help us focus
20:45:04 * markwash vaguely recalls promising some sort of POC of his own. . .
20:45:11 did we reach any agreement about threads vs separate process..
20:45:12 there seems to be a lot of chatter related to workers in other projects as well
20:45:25 wondering if openstack needs a general worker service of some sort
20:45:39 * ameade switches to openstack-meeting to ask if they are done with it yet
20:45:42 iccha: i think we didn't get that far into impl
20:45:49 feels like there may be a lot of "wheel reinventing" about to happen
20:46:05 can threads handle some of these heavyweight usecases?
20:46:20 iccha: I still feel great about making it a deployment config option (greenthreads vs other processes)
20:46:34 markwash: +1
20:46:36 esheffield: this wheel might not be as old or universal
20:46:47 but maybe it is and I just don't get it yet
20:47:15 markwash: maybe even services (besides threads and processes)
20:47:23 esheffield: though I guess it does all feel kind of like celery
20:47:27 * nikhil waits for a hurray from jbresnah
20:47:42 nod celery
20:47:44 markwash: yeah, I've been looking more at celery
20:47:50 but i would like to be able to have a simple impl too
20:47:51 do we have any takers on a POC / straw man code?
20:47:59 or should I just follow up with flaper87?
20:48:03 it is not worth forcing everything through amqp messaging
20:48:21 and nikhil
20:48:29 markwash: i would like to try that, if we're not focusing on getting it before h1
20:48:45 jbresnah: agree, especially when it can save deployers the trouble of making a change if they don't particularly care about the new async features
20:48:54 celery does seem to support a "webhooks" approach as well, so maybe not everything thru amqp
20:48:55 I would still like to see the first thing be an interface definition for a worker
20:49:13 nikhil: cool, go for it. . it won't hurt to have more than one if flaper87 is already working on one too
20:49:27 esheffield: but i would like to even support threads, fork, multiprocessing potentially
20:49:33 cool, i'll follow up with him too
20:49:36 #action nikhil work on a proof of concept for processing
20:49:46 I may look into that POC as well
20:49:50 #action nikhil what mark said
20:49:51 no promises
20:49:56 nikhil: can the first step be an interface definition for a service?
20:50:02 :)
20:50:04 jbresnah: +1
20:50:13 * markwash <3s interfaces
20:50:18 10 minutes
20:50:26 iccha makes a good point
20:50:27 oh yeah
20:50:40 i want to bring up something that's why :p
20:50:58 iccha for open discussion?
20:51:01 yeah
20:51:06 cool
20:51:10 #topic open discussion
20:51:29 iccha you go first
20:51:32 I have something too, but I yield to the representative from rackspace
20:51:48 we have off and on talked about soft deletes, where we want to keep them; it would be great to take a look at alternative approaches to the existing one
20:51:59 *whether
20:52:01 +1
20:52:04 +1
20:52:26 it seems that glance developers are on the same page there
20:52:31 but not so much the users
20:52:33 from where I'm sitting, I'd love to see us fix the admin uuid reuse issue first
20:52:47 and then draw up a deprecation plan for getting rid of soft deletes
20:52:53 +1 uuid
20:52:56 someone please describe the use cases that caused soft deletes to happen
20:53:03 in the first place
20:53:08 ameade: agreed
20:53:25 ameade: we tried to tease it out on the mailing lists but pretty much failed
20:53:27 ameade: the use case is "as a glance developer, I want to copy and paste nova's (oslo's) db models, to save work"
20:53:28 rosmaita had good use cases for why we need soft deletes
20:53:45 * markwash turns his sarcasm back off :-)
20:53:46 so we can look at alternative approaches if he manages to convince us :)
20:53:51 heh
20:54:00 markwash: i figured that was why
20:54:06 :P
20:54:08 well, suppose i make an image and put a lot of package info into the metadata
20:54:11 markwash: but if that is the reason it exists it is very helpful to know
20:54:31 and i want to find out what that was, even though the image was deleted in the mean tim
20:54:36 *time
20:54:47 rosmaita: i think that is a misuse tho
20:54:53 why?
20:55:01 rosmaita: because glance only knows what the metadata is now
20:55:15 rosmaita: it can change throughout the lifetime
20:55:29 looking up meta for a running instance gives no guarantee of consistency
20:55:31 rosmaita: seems like a reasonable use case to me
20:55:45 markwash: all that will give you is the metadata at the time of delete
20:55:49 rosmaita: we probably just need to figure out the right way to do that in the api, or through notifications or more structured logging
20:55:56 markwash: and that is only if updates are not allowed after delete
20:56:05 jbresnah: good points as well
20:56:17 ok, another use case would be if glance does a "changes-since" list like nova has
20:56:24 that is sort of the same use case that was talked about on the mailing list
20:56:37 and i think it is trying to use glance as a hammer when it was designed to be a wrench
20:57:01 * jbresnah is not familiar with changes-since
20:57:11 (3 minutes)
20:57:22 lets you see your servers from a particular time, even deleted ones
20:57:23 changes-since is still auditing right?
20:57:27 here is another way to look at it: rather than getting rid of info, say we err on the side of keeping it and have a better way to soft delete. what do we lose?
20:57:49 rosmaita: so it takes all the changes and lets you get a view from any particular time?
20:57:59 right
20:58:06 can we take this discussion offline and hit a few more quick items?
20:58:08 rosmaita: sort of like version control (like git or whatever) would do?
20:58:13 go for it markwash
20:58:15 nod
20:58:23 next week's meeting time
20:58:26 jbresnah: not as fine grained
20:58:55 crap I forgot my suggested time options
20:59:08 7 pm EST
20:59:09 time buddy?
20:59:14 while mark is thinking, also, check out the spec for the cloning BP: https://wiki.openstack.org/wiki/Glance-image-cloning
20:59:19 4 am EST :p
20:59:25 oh yeah, that one
20:59:39 looks funky rosmaita
21:00:07 I think 7 pm EST would be good for folks other than those in europe
21:00:12 but flaper tends to stay up late anyway
21:00:12 *EDT
21:00:20 well, I'll send out an email
21:00:24 since we're running out of time
21:00:30 haha, prolly we'd talk in UTC
21:00:33 any quick thoughts on another bug / review squash day?
21:00:44 +1 ?
21:00:45 fridays work well for me for such things
21:00:46 +1
21:00:47 +1
21:00:58 +1, hope to help this time
21:01:09 +1
21:01:16 +1
21:01:21 let's try for friday, unless it conflicts with the havana-1 release in a silly way
21:01:38 thanks everybody, in order not to anger the gods of timeliness, I'm going to close it out
21:01:40 is anything due for h1?
21:01:52 . . . nothing critical
21:02:01 we should probably start pushing harder on h2 deadlines though
21:02:08 as much as possible given the constraints of open source
21:02:29 Thanks everybody!
21:02:31 #endmeeting
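[Editor's addendum on the async-workers thread above: jbresnah asked that "the first step be an interface definition for a service," and markwash favored "a deployment config option (greenthreads vs other processes)." A minimal sketch of how those two ideas combine — all names here are hypothetical, not Glance code, and ordinary threads stand in for greenthreads:]

```python
# Hypothetical sketch of an async-worker interface with the executor
# backend chosen by deployment config; names are illustrative only.
import abc
import concurrent.futures


class TaskExecutor(abc.ABC):
    """Runs deferred work (e.g. import verification, format conversion)."""

    @abc.abstractmethod
    def submit(self, fn, *args):
        """Queue fn(*args); return a future-like handle."""


class ThreadExecutor(TaskExecutor):
    """In-process pool; stand-in for the greenthread option."""

    def __init__(self, workers=4):
        self._pool = concurrent.futures.ThreadPoolExecutor(workers)

    def submit(self, fn, *args):
        return self._pool.submit(fn, *args)


class ProcessExecutor(TaskExecutor):
    """Separate processes for heavyweight CPU-bound tasks."""

    def __init__(self, workers=4):
        self._pool = concurrent.futures.ProcessPoolExecutor(workers)

    def submit(self, fn, *args):
        return self._pool.submit(fn, *args)


_EXECUTORS = {"threads": ThreadExecutor, "processes": ProcessExecutor}


def executor_from_config(conf):
    """Deployers pick the backend; callers only ever see TaskExecutor."""
    return _EXECUTORS[conf.get("task_executor", "threads")]()
```

The point of agreeing on the abstract interface first is that thread, fork, multiprocessing, or even celery-backed implementations can later be swapped in behind it without touching callers, and deployers who don't care about the new async features can keep the simple in-process default.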