20:04:16 #startmeeting glance
20:04:17 Meeting started Thu Jul 11 20:04:16 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:04:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:04:20 The meeting name has been set to 'glance'
20:04:40 hi folks~
20:04:41 #link https://launchpad.net/glance/+milestone/havana-2
20:04:44 yo yo
20:05:11 markwash: yes, i have 3
20:05:24 all our blueprints are in "needs code review" which is good, we have about a week left to get them all moved through
20:05:35 cool
20:05:38 hey
20:05:46 hi iccha
20:05:47 here now
20:05:50 o/
20:05:51 markwash: well, I still need to submit another one
20:05:53 during this meeting, if there are enough folks around, I'd like to cement our plans for getting those in
20:05:58 flaper87: true
20:06:00 Gate isn't helping today
20:06:11 This one is ready to be reviewed: https://review.openstack.org/#/c/36218/
20:06:18 #topic registry database driver
20:06:22 flaper87: NOD
20:06:57 flaper87: that review looks straightforward
20:06:58 So, the registry database driver is almost ready. I still need to submit a new review, which should be the last one
20:07:06 and it's passing the gate at this point
20:07:31 36218 seems unlikely to conflict with other changes
20:07:53 markwash: yeah, that review changes the db_api for properties deletion, though
20:07:53 how likely do you think the other patch is to have conflicts with the other bps that are currently in review?
20:08:11 markwash: the other patch shouldn't conflict at all
20:08:21 it is an isolated package under glance/db/registry
20:08:28 and it has its own tests
20:08:55 flaper87: when do you think your next patch should be ready?
20:09:23 so, unless I find something else that blocks that patch, I think it should be ready tomorrow / Monday
20:09:33 okay great, that should be plenty of time
20:09:46 anyone else have questions about the registry db driver?
20:10:00 and btw, sorry for the delay, I was away the last 2 weeks
20:11:00 #topic multiple locations
20:11:16 both zhiyan and jbresnah have been very active on this one lately
20:11:29 zhiyan, is there just one review left for you? or two?
20:11:45 markwash: for the multiple-locations BP, I have one
20:12:19 markwash: I have prepared an overall status for you/the team, let me paste it here:
20:12:30 I have 4 patches there for 4 BPs: multiple-image-locations (PATCH API part), multiple-locations-downloading, locations-policy and glance-cinder-driver
20:12:38 Needs review:
20:12:39 [1] https://review.openstack.org/#/c/35134/
20:12:43 Ready for review (close to merge, to me):
20:12:44 [2] https://blueprints.launchpad.net/glance/+spec/multiple-locations-downloading (no dependency)
20:12:44 [3] https://blueprints.launchpad.net/glance/+spec/locations-policy (depends on [1] above)
20:12:44 [4] https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver (depends on [3] above)
20:13:35 btw, thanks folks for helping review my patches.
20:13:46 zhiyan: I also was thinking https://review.openstack.org/#/c/35741/ was necessary, do you agree?
20:14:26 markwash: yes, yes, it's [3]
20:14:30 also, with a simple-seeming rebase, that seems like a patch that could go through fast
20:15:05 zhiyan: okay, cool. I think maybe multiple locations depends on locations-policy
20:15:05 [2] = https://review.openstack.org/#/c/35734/
20:15:14 [3] = https://review.openstack.org/#/c/35741/
20:15:21 [4] = https://review.openstack.org/#/c/32864/
20:15:28 I'll circle back to those when we talk about glance-cinder-driver, okay?
20:15:47 ok
20:16:17 so, just to regather all the reviews we think are necessary for multiple locations to be "Implemented"
20:16:19 so, for the multiple-image-locations part, i think we are just reviewing https://review.openstack.org/#/c/35134/
20:16:28 https://review.openstack.org/#/c/35134/
20:16:36 https://review.openstack.org/#/c/35741/
20:16:42 and jbresnah's, which is...
20:16:52 https://review.openstack.org/#/c/34492/
20:16:53 https://review.openstack.org/#/c/34492/
20:16:56 yes
20:17:25 unfortunately jbresnah is out, so I'll have to check back with him
20:17:40 it looks like https://review.openstack.org/#/c/34492/ needs a rebase to resolve merge conflicts
20:17:55 i have a question on https://review.openstack.org/#/c/35134/ , i saw jbresnah (maybe you also) had concerns about the image status management/update logic
20:17:59 zhiyan: you had the last -1 on that branch, do you think it's getting close to getting your +1?
20:18:52 zhiyan: you said those issues in your -1 were easy to fix, so I figure a rebase is pretty much all that is needed
20:18:53 i have no -1 on #34492
20:19:26 I see the -1 for PS 18 is gone
20:19:55 markwash: yes, since PS18 had 3 minor problems...
20:19:55 #action jbresnah rebase https://review.openstack.org/#/c/34492/ and we'll review and land it asap
20:20:14 so, the next one in this series is https://review.openstack.org/#/c/35134/
20:20:15 markwash: can we talk about #35134?
20:20:18 yup
20:20:30 the state issues jbresnah pointed out
20:20:35 I think ultimately this is my fault
20:20:51 I didn't understand, when writing the domain model, the right place to put state management and certain other details
20:20:55 so, i personally think it is ok, and we need a dedicated BP to cover image status management
20:21:18 currently, there is only one piece of state management in the controllers, and that is setting image.status = 'saving' when you start to upload data
20:21:29 this patch adds several more locations
20:21:47 zhiyan: I tend to agree that we should have a more comprehensive refactoring of the state management code location
20:21:56 so the blocker here is just to make sure we're doing the *right* state management
20:22:21 the way I'm imagining it, basically, if you add a location to an image that is 'queued', it becomes 'active'
20:22:35 and if you delete the last location from an image that is 'active', it becomes 'queued'
20:22:39 does that sound right?
20:22:51 markwash: yup. so, if you have concerns about the current status-changing logic in #35134, can we just put a limitation on it at the current stage?
20:23:04 markwash: +1
20:23:15 I guess it's a bit unclear what the queued state means
20:23:22 flaper87: it is confusing
20:23:34 'queued' is the state we take on when the image record is created, but before data is uploaded
20:23:42 there might also be a 'pending' state?
20:24:02 markwash: the current implementation in PS12, i think it is just like you said
20:24:22 zhiyan: okay cool, then all I think is needed is more review on my part there
20:24:40 markwash: Ok, we're on the same page there but, if we put the image back to queued when the last location is deleted, queued will mean something else
20:24:51 #action markwash re-review state management logic in https://review.openstack.org/#/c/35134/
20:24:54 flaper87: hmm
20:25:24 markwash: i mean, if you think that logic is not stable enough for status management, i think we can put a limitation there, for example, only allow clients to call the patch API on images in 'active' status..
20:25:25 I mean, it can be misleading. Users won't know whether the image was created and no data was uploaded
20:25:39 or if the image's locations were deleted
20:26:23 I don't think adding a new state will help but, what about renaming it?
20:26:38 'inactive'?
20:26:44 (in a future patch and tracked by a separate blueprint)
20:26:48 markwash: that makes more sense
20:27:15 flaper87: okay, I buy that. essentially accept 'queued' for now, but move forward with a state management refactoring in the future?
20:27:28 markwash: agreed!
20:27:33 are there any other items of state management that are confusing that might be worth mentioning in the initial bp?
20:27:35 what's the difference between 'queued' vs. 'inactive'?
20:27:56 zhiyan: to answer your earlier question, I think we need to allow locations to be added to 'queued' images
20:28:17 markwash: for 'replace'?
20:28:35 markwash: I'll take a deeper look into Glance's states and see if I find something that might be worth considering
20:28:39 zhiyan: both replace and add /locations/-
20:29:04 'replace' has two: replace to empty, and non-empty
20:29:24 markwash: i think you mean replace with a non-empty list, right?
20:30:07 I'm getting a bit bogged down at this point, zhiyan, can you and I discuss this later after I take a quick look at the state management in your patch again? I'll be around for your normal work hours
20:30:14 replacing the locations list with an empty list means removing; replacing it with a non-empty list means adding..
20:30:45 markwash: of course
20:30:54 thanks
20:31:04 ok, moving on to the next
20:31:06 #action markwash create a blueprint for centralizing state management / refactoring
20:31:13 the last one in this series is easy I think
20:31:31 https://review.openstack.org/#/c/35741/1
20:31:45 yes
20:31:50 zhiyan: pretty much just needs a rebase to an up-to-date version of the patch we were just discussing
20:31:53 right?
20:32:31 markwash: it needs a rebase, and no code changes; is that ok for you/the team? right
20:32:45 it seems okay to me, I like it
20:33:04 #action zhiyan rebase https://review.openstack.org/#/c/35741 off a more recent patch for review
20:33:05 cool
20:33:13 glance-cinder-driver is next
20:33:19 #topic glance-cinder-driver
20:33:23 yep
20:33:39 https://review.openstack.org/#/c/35734/ is first I think
20:33:58 https://review.openstack.org/#/c/32864/ is next
20:34:12 ok
20:34:15 yes
20:34:54 I think the first one is pretty straightforward
20:35:10 wait, maybe I reversed those
20:35:27 https://review.openstack.org/#/c/32864/ is the one that seems easiest to me
20:35:43 yikes, my browser stopped acting normal
20:35:55 okay, got the order right the first time
20:36:08 so #35734 just needs some approvals I think
20:36:17 :)
20:36:21 and I have honestly not gotten around to #32864 yet
20:36:54 sorry i am late!
20:37:23 jbresnah: we talked about you in your absence
20:37:26 mostly good thing!
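[Editor's note] The status rules agreed above (adding a location to a 'queued' image makes it 'active'; removing the last location from an 'active' image puts it back to 'queued') and the PATCH semantics just summarized ('add /locations/-' appends one location; 'replace' with an empty list removes, with a non-empty list adds) can be sketched roughly as below. This is an illustrative sketch only, not Glance's actual code: the dict-based image shape and the helper name are assumptions made for the example.

```python
# Illustrative sketch of the locations/status coupling discussed above.
# NOT Glance's implementation: image is modeled as a plain dict with
# 'status' and 'locations' keys for the sake of the example.

def apply_locations_patch(image, op, value=None):
    """Apply a simplified locations PATCH op and keep status in sync.

    op='add'     -> append one location (like 'add /locations/-')
    op='replace' -> replace the whole list; [] means removing them all
    """
    if op == "add":
        image["locations"].append(value)
    elif op == "replace":
        image["locations"] = list(value or [])
    else:
        raise ValueError("unsupported op: %s" % op)

    # State rules from the discussion: a 'queued' image with locations
    # becomes 'active'; an 'active' image with none goes back to 'queued'.
    if image["status"] == "queued" and image["locations"]:
        image["status"] = "active"
    elif image["status"] == "active" and not image["locations"]:
        image["status"] = "queued"
    return image
```

Under these rules, adding a first location to a 'queued' image flips it to 'active', and replacing the list with [] on an 'active' image re-queues it, which is exactly the ambiguity flaper87 raises next.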
20:37:33 s/thing/things/
20:37:36 heh
20:37:41 I still need to get through those reviews as well
20:37:45 sorry, i was totally wrapped up in the rebase for the last patch
20:37:50 I think I already reviewed the first one
20:38:15 zhiyan: just for context, how terrible would it be if #32864 missed h-2 and showed up early in h-3?
20:38:33 zhiyan: end of the world? end of the universe?
20:38:46 * jbresnah has not yet checked out 32864
20:38:51 markwash: oh, no, i need it... can we try to address it in h2?
20:39:21 zhiyan: I'm only worried because it's last in the list, I think there is likely to be enough time
20:39:27 you know, i prepared the above patches just to provide cinder-driver support
20:39:58 all right, good to know
20:40:38 markwash: so a little sad about it... doing some supporting things but losing the key one...
20:40:47 the stuff you've been doing has been a great help to getting multiple locations off the ground, and I was quite happy when I realized that our "solution" for getting the cinder driver working with multiple locations was exactly what the previous ptl had intended
20:40:58 i'm pretty sure #32864 is very easy..
20:41:06 thanks to you (and everyone) for all the hard work!
20:41:46 for #32864, the key change is just here: https://review.openstack.org/#/c/32864/6/glance/store/cinder.py
20:41:52 yeah, I suppose it looks pretty straightforward
20:42:10 well, I think we're good
20:42:17 jbresnah, let me circle back to you now
20:42:34 we talked a bit earlier about https://review.openstack.org/#/c/34492/
20:42:51 I think that just needs a rebase for merge conflicts
20:42:54 here comes patchset 20!
20:43:21 yeah
20:43:23 heh
20:43:27 it has been an adventure
20:43:45 hopefully i will send it in for review as soon as tox passes in the window right there --->
20:43:56 it would be amazing if that merged today
20:44:14 i would make a sizable donation to the charity of the approver's choice
20:44:27 zhiyan, jbresnah, flaper87: are all of you going to be with us next week? or planning some time off?
20:44:41 * jbresnah is entirely out of vacation time
20:44:53 markwash: i have some day-job stuff to do but i will be around
20:45:01 markwash: zhiyan will be here
20:45:17 jbresnah: i was just about to hit +1
20:45:18 :)
20:45:34 sorry i have been gone so much
20:45:42 i will try to make up for it with timely reviews
20:45:59 well, I think that covers everything I'm worried about for the project updates
20:46:01 markwash: I'll be on-line and working
20:46:04 #topic open discussion
20:46:34 ugh, i know i have things to talk about but my brain is failing me
20:46:47 In two weeks, I want us to reevaluate our h-3 targeted blueprints
20:46:53 * flaper87 gives jbresnah a Red Bull
20:47:01 markwash: +1
20:47:01 i am trying to figure out the state of quality of service in openstack
20:47:19 from what i can tell, there is basically nothing for that in glance, is that a fair statement?
20:47:19 I've also been kind of neglecting non-h2 reviews lately, so I'm hoping to correct that soon
20:47:39 there are workers, but is there a simultaneous connection limit that is enforced by glance proper?
20:47:42 jbresnah: I think that's correct
20:47:49 markwash: cool, thanks
20:47:53 +1 h3 re-eval
20:48:20 markwash: what do you think about doing some serious BP clean-up?
20:48:30 ugh, probably needed again :-)
20:48:36 * markwash hasn't been looking
20:48:50 anything that is not relevant for H and for which the submitters cannot be contacted gets cleaned
20:49:02 is that a reasonable thing to do?
20:49:04 the great thing is that we now have "ongoing" and "future" milestones
20:49:14 jbresnah: sounds reasonable
20:49:28 jbresnah: +1
20:49:46 cool
20:50:00 i had contact with one but failed to give him the time for the meeting today :-(
20:50:08 i will correct that this afternoon, i hope
20:50:33 and https://review.openstack.org/#/c/34492/
20:50:36 PS20 is in
20:50:41 any early notice on that would be appreciated, b/c otherwise I might hog all the meeting time with my own concerns
20:50:59 markwash: cool, maybe i will set up a side time for that
20:51:46 okay, looks like the well might have run dry
20:52:11 thanks to everybody for all the hard work, dealing with the awkward timezone issues, and all the diligent reviews
20:52:23 +1 for everyone
20:52:25 +1
20:52:26 yeah, thanks!
20:52:45 as a TZ problem child I am really grateful for, and impressed with, the energy glance has lately!
20:53:41 meeting next week will be the early timeslot
20:53:43 1400 UTC
20:53:50 thanks all for the reviews and support!
20:54:07 markwash: cool, does that get updated in the ical?
20:54:27 flaper87: I *think* it's supposed to already be up to date there, but I had trouble updating my client when I tried to verify
20:55:18 totally unrelated note: i have a question :) let me know whenever there is a window :)
20:55:24 markwash: oke-doke! Will check that and let you know
20:55:28 markwash: oh, it is updated
20:55:30 cool
20:56:23 iccha: now's your chance!
20:56:37 4 mins left
20:57:13 markwash: do we have a stance on soft deletes? as glance. i have seen emails on the openstack mailing list about shadow tables for a bunch of projects incl. glance
20:57:34 also seeing the oslo db patch up in review. it has a lot of great things to offer but we can always pick and choose
20:58:21 iccha: IMHO, we shouldn't merge those blindly. We should pick the features we want to have in Glance first and then elaborate more on the others
20:58:21 I'm open to whatever on shadow tables, they seem better than soft deletes, but I do also worry that it's solving the problem at the wrong layer
20:58:43 I am against soft deletes too, ftr
20:58:54 markwash: agreed!
20:58:55 just wanted to make sure we keep thinking about it :)
20:58:55 b/c for example, each driver would need to solve it differently, which maybe is fine, maybe suboptimal though
20:59:30 we need to think more about that
20:59:54 I've been working with somebody at my company to pursue different drivers
21:00:09 and the conclusion is pretty clear -- the db api needs to go, it is abstracting the db at way too high a layer
21:00:23 "go" means "become legacy, only used by v1"
21:00:25 fwiw
21:00:46 +1
21:00:48 not to invalidate any of the registry db driver stuff, which is invaluable
21:01:03 +1
21:01:14 and which i think could be easily ported to a different db abstraction
21:01:19 okay, we're out of time!
21:01:21 #endmeeting