20:07:39 #startmeeting glance
20:07:40 Meeting started Thu Sep 4 20:07:39 2014 UTC and is due to finish in 60 minutes. The chair is markwash__. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:07:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:07:45 The meeting name has been set to 'glance'
20:07:45 * markwash__ is out of practice
20:08:05 agenda at https://etherpad.openstack.org/p/glance-team-meeting-agenda
20:08:11 however I think first we need a j3 update
20:08:27 so we're in the process of trying to ram through the last patches needed to tag j3
20:08:57 AFAICT, those patches are https://review.openstack.org/#/c/44355/ and https://review.openstack.org/#/c/111483/
20:09:03 does that look right to folks?
20:09:15 yes, watching the patches in zuul has been like watching a baby bunny trying to cross a busy street filled with drunk drivers
20:09:29 yes
20:09:42 good thing there's lots more bunnies where that came from
20:09:49 https://review.openstack.org/118627
20:09:56 strength in numbers :)
20:10:02 https://review.openstack.org/105231
20:10:09 client is fine...
20:10:11 those also need to get through
20:10:49 * nikhil_k realizes.. maybe we need 'em for horizon
20:10:58 TravT: is it possible to wait until tomorrow to really put the accelerator on those glanceclient patches?
20:11:13 or would such a delay hurt horizon progress?
20:11:17 * markwash__ thinks of the gate
20:11:37 well, I haven't talked with david lyle about it
20:11:43 keep hoping to see them through
20:12:03 well, the bunnies each have a run ahead of them at this point
20:12:21 so good luck to them
20:12:31 * TravT covers his eyes
20:12:37 :)
20:12:40 we also have a tempest test patch (https://review.openstack.org/113632/) which is dependent on glance-client (https://review.openstack.org/105231). Can the tempest patch merge after today?
20:12:41 and the important detail about those other waiting patches (for the server project)
20:12:44 is
20:13:10 If they don't land by ttx's morning tomorrow, they are going to be auto-deferred to juno rc-1
20:13:14 just b/c the gate is crazy
20:13:32 they will be auto-granted FFEs as well
20:13:40 so I'm not really sure what the distinction is exactly
20:13:46 it would be really nice to get the SEED bumped up https://review.openstack.org/111483. It actually passed all the gate tests and then got hit by a package download failure for cinder
20:13:55 and now seems to just be cycling in check
20:14:14 lakshmiS: I'm not actually sure about the tempest test changes, how those factor in
20:14:29 I would assume that adding tests is not blocked by the FF, since they're not really features
20:14:38 the j-3 to j-rc1 period is intended for testing
20:14:52 lakshmiS: but I think you'll have to ask tempest folks about that
20:15:03 markwash__: ok
20:15:23 any other questions about the j-3 tagging/release process?
20:15:24 if the client patches fail, it will be a really hard sell for horizon.
20:16:20 TravT -- fail as in "don't work"? or fail as in "don't land"?
20:16:35 don't land
20:17:00 TravT, do any of those patches need more review still? or are they just waiting on the gate?
20:17:20 david has bumped 2 of our 4 horizon patches to high priority and agreed to ask for an FFE if the client gets released
20:17:26 no, don't need review.
20:17:44 just need some sort of rain dance or something to keep zuul from whacking them
20:18:06 TravT: released == in master? or released == up on pypi?
20:18:36 hmm... I think released == up on pypi so that we can update the horizon requirements file
20:19:04 okay
20:19:07 markwash__: on pypi and move minimum openstack/requirements version to that release
20:19:43 I'm a bit worried about the openstack/requirements change
20:19:52 unless the feature is discoverable via the API, I'd have to re-look at the patch
20:19:55 I feel like those are freezed relatively early
20:20:12 freezed -> frozen I suppose
20:20:31 I'll ping ttx in the morning, but neutron is in the same boat
20:20:58 TravT: it looks like you may end up needing to request a Dep Freeze Exception
20:21:06 and that's after bribing Flavio to release the client
20:21:11 thanks david-lyle
20:21:12 and that's after the gate calms down to actually land it
20:21:22 okay, any other j-3 questions?
20:21:47 I'd like to get out of the way of the regularly-scheduled agenda if possible :-)
20:22:21 Frodo's quest seems tiny now
20:22:33 :-)
20:22:51 #topic kilo specs
20:23:12 let's merge https://review.openstack.org/#/c/115807/ after tomorrow
20:23:21 (again, #godsavethegate)
20:23:27 any other thoughts about kilo specs?
20:23:32 agreed
20:23:43 i do have a question.
20:23:54 we bumped tags out of the current spec.
20:24:08 we're going to submit a new spec for kilo
20:24:13 Should the specs which have not been approved yet be moved to the kilo folder?
20:24:22 ativelkov: yes please
20:24:33 should I update the juno metadefs spec to reflect reality
20:24:41 if a spec is good apart from being in the wrong folder, anyone should feel free to move it
20:25:20 TravT, I would love to see the juno spec reflect what has been merged in juno (if possible)
20:25:27 TravT: it seems nice
20:25:35 Ok. I'll do that.
20:25:36 would just a quick disclaimer "this feature did not land in Juno" work?
20:25:47 do the specs have to go through the gate?
20:25:52 or do they take a shortcut
20:26:12 or I suppose if things can just be removed from the spec that is fine too
20:26:20 I didn't get any reviews on the artifacts spec for quite a long time. It does not block any work being done, but I want to clear its fate a bit :)
20:26:22 TravT: I'm not sure
20:26:28 ok,
20:26:40 I don't think you have to wait a long time for a spec TravT
20:26:40 well i'll update and then #godsavethegate, you guys decide when to merge
20:27:33 TravT: I asked in openstack-dev, if I hear back we can maybe merge them sooner
20:27:46 okay, next up (I'm going in PTL order)
20:27:58 #topic Metadefs bug handling
20:28:10 Do you want a lot of little bugs with one or two lines, or a bug per reviewer comments made pre-merge?
20:28:36 I don't understand the second alternative quite clearly
20:28:39 sorry..
20:29:18 as people went through the files, they made comments. We could create one big bug for DB and just start putting all fixes into it
20:29:43 or we could just address individual fixes
20:29:47 say one file at a time
20:29:47 I think it would be better to structure the fixes into groups of related changes I guess?
20:29:59 it's nice to have a small patch to review
20:29:59 just whatever the reviewers want to see
20:30:02 So individual bugs or one dumping ground?
20:30:12 not one dumping ground
20:30:21 +1
20:30:27 individual as long as it's not too granular, group together as you see fit
20:30:35 remember that reviewers like a moderate number of small patches best
20:30:45 sounds good to me
20:30:49 ok, we'll try to keep it reasonable
20:31:00 huzzah
20:31:03 next up. . .
20:31:16 #topic Bug Day
20:31:30 \\o \o/ o// o/7
20:31:59 whose topic is this? :-)
20:32:13 not mine this time
20:32:49 but I do like a clean bug list
20:32:54 so it seems someone is proposing we have another bug day
20:33:00 Mine
20:33:02 this time one day only so something would get done as well
20:33:03 maybe wrong etherpad... we don't have bugs in glance :)
20:33:12 cpallares: hi there!
20:33:19 hi there markwash__
20:33:45 bug days are great things for the rc1 timeframe
20:33:57 arnaud: +1, just undocumented features
20:34:07 so is there sometime next week or the week after that works for some of us? and who can sign up?
20:34:08 exactly jokke_ ;)
20:34:08 It doesn't have to be just one day.
20:34:11 and who can sign up to remind me?
20:34:33 cpallares: based on our last two-day one, yes it does ;)
20:34:49 haha alright
20:34:53 I'm up for it next week ... the following I'm out from Wed
20:35:02 yeah I think next is best
20:35:21 markwash__: I can take the poking stick
20:35:22 since we're supposed to have RC1 by around the 25th of sept
20:35:25 jokke_: thanks!
20:36:01 So Thursday?
20:36:02 how does the 11th seem to folks here?
20:36:06 i missed the last one. is the point to get fixes in on a bug day or for everybody to grab them and triage them?
20:36:09 +1
20:36:16 TravT: both
20:36:29 TravT: You can also close them :P
20:36:32 triage is especially great because there may be some release blockers hiding there
20:36:36 TravT: We have bugs from 2012
20:36:37 I will be there
20:36:50 and we probably have a huge list of ones to just close
20:37:03 cpallares: at least I do not have powers to mark them for example "Won't fix"
20:37:20 okay I really like Thursday, bc I can just start my day with the team meeting and stay on bugs the whole day
20:37:36 markwash__: That sounds good :)
20:37:45 markwash__: there are 10+ that are "propose-close" tagged :P
20:37:54 No-one just seems to touch them :P
20:38:00 okay, so let's get together an etherpad for this particular bug day
20:38:09 ok. 11th works for me... but our guys in Poland have been wanting to participate as well.
20:38:10 with links to the tag scheme we wanna use
20:38:48 cpallares: can you kick off that etherpad and then email us? arnaud/jokke can probably help with some details if there are any that are missing
20:39:04 yes, sounds good
20:39:15 markwash__: Sounds good
20:39:18 +1
20:39:23 cpallares: and by email us, I just mean throw it out on the ML tagged [Glance]
20:39:28 cpallares: thanks!
20:39:31 yay bug day
20:39:31 ok
20:40:03 #topic Documentation
20:40:12 TravT: it looks like you had a question about doc updates?
20:40:35 yeah, is there anything special we need to know about getting docs updated?
20:40:46 we want to make sure that metadefs are properly documented
20:40:55 I have to throw that question to the group, the docs are an area where I've been struggling
20:41:16 rosmaita sometimes knows a lot about our docs :-)
20:41:23 markwash__: Do we write our own docs?
20:41:32 cpallares: some of them I think
20:41:46 and contribute to some elsewhere I believe
20:41:51 but I'm way out of touch
20:41:58 so you could tell me the docs are on mars and I'd have to take your word for it
20:41:59 we can write as many docs as we like
20:42:07 Sounds like this needs a shout-out to the ML as well
20:42:07 that's where specs should help (I guess)
20:42:19 rosmaita: where should we dump those?
20:42:47 https://wiki.openstack.org/wiki/Glance-where-are-the-docs
20:42:56 gives you a run-down of what goes where
20:43:04 though i just noticed it is a bit out of date
20:43:05 Oh cool!!
20:43:11 ooohhhh
20:43:18 a real wiki!
20:43:26 thanks rosmaita!
20:43:27 #link https://wiki.openstack.org/wiki/Glance-where-are-the-docs
20:43:33 i will try to update it tomorrow
20:43:44 we need some SEO for that wiki page
20:43:56 rosmaita: can you shoot us a message when the update is done?
20:44:00 so google brings it up for me when I search anything
20:44:16 maybe i can put some porn on it or something
20:44:20 well okay
20:44:23 +1
20:44:29 #topic Glance common service framework
20:44:51 i will fix the page & send out to the ML when it's updated
20:44:55 I deferred this one to last, b/c I think we resolved to wait until some of the oslo-incubator changes sort themselves out
20:45:11 rosmaita: ty!
20:45:34 but I'd like to point out that mclaren has some thoughts about SIGHUPs and reloading configs at https://etherpad.openstack.org/p/sighup-conf-reload
20:45:34 markwash__: I think the outcome from last night's discussion was to revisit the plans for how we do it in Kilo
20:46:09 jokke_: +1
20:46:10 markwash__: mclaren also has a view, and he would like an approach similar to what nginx does
20:46:18 yeah I was just reading that
20:46:36 I'm glad to have a little survey of what is done because I think we might be in a slightly more complicated position than some other openstack services
20:46:52 given our mix of long- and short-lived connections
20:47:04 which I think does not solve the fundamental issues we have, but those need to be attacked when we get the spec out there
20:47:40 I was pretty happy with the spec for adopting the ProcessLauncher until it surfaced that it causes these problems with reloading
20:47:50 so I think we'll just need a new, more comprehensive kilo spec
20:47:55 which specifically addresses the reconfiguration aspects
20:48:10 guided by some of the considerations mclaren has put forward
20:48:19 jokke_: does that sound like a reasonable summary / restatement?
20:48:57 markwash__: sounds good to me ... the spec comments are a good place to put the discussion of the concerns
20:49:06 okay cool
20:49:09 and that way we get them documented
20:49:14 yeah
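As background for the reload discussion above, here is a minimal sketch of the nginx-style pattern mclaren's etherpad alludes to: a parent process that re-reads its configuration on SIGHUP and recycles its workers. This is illustrative only; the file path, worker body, and process layout are assumptions rather than Glance's actual service code, and a real graceful reload would let old workers finish in-flight requests before exiting.

```python
# Sketch only (assumed layout, not Glance's real service code): a parent
# process reloads configuration on SIGHUP and restarts its workers.
import configparser
import multiprocessing
import signal
import time

CONF_FILE = "/etc/example/api.conf"  # hypothetical path


def load_conf(path=CONF_FILE):
    parser = configparser.ConfigParser()
    parser.read(path)  # a missing file just yields an empty config
    return {name: dict(parser[name]) for name in parser.sections()}


def worker(conf):
    # Stand-in for the real request-serving loop.
    while True:
        time.sleep(1)


def spawn_workers(conf, count=2):
    procs = []
    for _ in range(count):
        p = multiprocessing.Process(target=worker, args=(conf,))
        p.start()
        procs.append(p)
    return procs


def main():
    hup_received = {"flag": False}

    def on_sighup(signum, frame):
        hup_received["flag"] = True

    signal.signal(signal.SIGHUP, on_sighup)

    conf = load_conf()
    workers = spawn_workers(conf)
    while True:
        signal.pause()  # sleep until a signal arrives
        if hup_received["flag"]:
            hup_received["flag"] = False
            conf = load_conf()  # re-read the configuration
            # Simplified recycle: terminate and respawn. An nginx-style
            # graceful reload would drain in-flight requests first.
            for p in workers:
                p.terminate()
                p.join()
            workers = spawn_workers(conf)


if __name__ == "__main__":
    main()
```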
20:49:49 let's open up the floor
20:49:51 If we have time I would like to add a couple of fast topics outside the list (forgot to add them there)
20:49:52 #topic open discussion
20:50:19 https://bugs.launchpad.net/glance/+bug/1316234
20:50:20 Launchpad bug 1316234 in glance "Image delete calls slow due to synchronous wait on backend store delete" [Undecided,In progress]
20:50:49 There was some discussion about how this should be done. I have a proposal for a path forward and was hoping to get some input on it.
20:51:29 Basically, deletes can take a long time and in a public cloud, at least, there has been some discussion around an asynchronous method aside from the scrubber model.
20:51:39 The FFE request from flaper ... I commented already on the api change, but would like to bring it to the table here. The API change is doing exactly the opposite of what the bp is stating, yet claims to implement it. /me dislikes
20:51:44 sorry jcook
20:51:51 np
20:52:05 let's go with yours first ... that was a missed enter
20:52:06 jcook: fixing the scrubber would be kind of amazing
20:52:30 the scrubber seemed to be desirable for the private cloud use case but not the public cloud
20:52:53 there was some mention of doing a more distributed scrubbing, but also, discussion about doing a background delete
20:53:03 jcook: oh? I think it was added by public cloud folks originally as a way to keep data around a little longer in case a delete was accidental
20:53:03 agree
20:53:07 I'm proposing adding background deletes as a third configurable option
20:53:16 Isn't the scrubber responsible also for the delayed delete?
20:53:36 markwash__: problem with the scrubber is too many deletes at once overwhelming the backend storage
20:53:37 yes, but it does it all at once
20:53:42 markwash__: that's what I was thinking as well
20:53:51 ah I see
20:53:59 well, we could fix it to not do deletes all at once :-)
20:54:16 jcook, rosmaita: so how about fixing the scrubber to have FE configurable throttle?
20:54:16 we could, but a background delete seems more intuitive and simpler
20:54:23 jcook: but I think such a proposal sounds very welcome
20:54:26 so go for it
20:54:31 rosmaita, not sure if the approach proposed by jcook would solve this problem
20:54:48 jcook: is there already a spec or mail about this or is it forthcoming?
20:55:09 jcook: what would this background delete be doing? Delete just kicking off a task and the image hopefully disappears at some point?
20:55:20 one question in regard to a background delete. Is there opposition to a new state of delete_in_progress in the event of failure of a background delete?
20:55:34 yeah
20:55:37 no new states! :-)
20:55:40 +1
20:55:55 It would behave similarly to pending_delete except that it would start the delete on the image immediately in a thread or subprocess
20:56:23 it could be marked as deleted, but I think it would be more correct to go into delete_in_progress and then to deleted when the background delete process finishes
20:56:40 there was some talk of making pending_delete recoverable, did that ever get implemented?
20:56:56 jcook: users don't care about this state machine (though ops do), let's keep it out of the main api
20:56:57 i think zhi was working on it at some point
20:57:05 rosmaita: I think that still needs manual labor
20:57:06 we can just create another resource to track such things
20:57:25 +1
20:57:29 but anyway, I guess it would be best to do this all in spec review :-)
20:57:35 +1
20:57:47 we have only a few minutes left
20:57:48 markwash__: not sure I grok
20:57:51 kk
20:58:29 jcook: if the idea is good, make a spec so we can mock it together publicly :P
20:58:43 ACK
20:58:54 +1 jokke_
20:59:17 :P
21:00:09 all right, that's the bell
21:00:11 thanks everybody!
21:00:16 thanks!
21:00:16 good luck with all your bunnies
21:00:24 #endmeeting
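For reference on the background-delete idea discussed above (bug 1316234), here is a rough sketch of the shape jcook describes: the API-facing call marks the image and hands the slow backend deletion to a worker thread, reusing the existing pending_delete status rather than a new delete_in_progress state, in line with the "no new states" sentiment in the meeting. The store_api and registry objects are hypothetical stand-ins, not Glance's actual interfaces; a subprocess or a task could replace the thread with the same overall shape.

```python
# Sketch only: a background delete that returns to the caller immediately.
# store_api and registry are hypothetical stand-ins for the real backends.
import logging
import threading

LOG = logging.getLogger(__name__)


def background_delete(image, store_api, registry):
    """Mark the image, then delete its data in a daemon thread."""

    def _do_delete():
        try:
            store_api.delete(image["location"])  # the slow backend call
            registry.update(image["id"], status="deleted")
        except Exception:
            LOG.exception("background delete of %s failed", image["id"])
            # The image stays in pending_delete so an operator or a
            # periodic job can retry the delete later.

    # Reuse the existing status so users stop seeing the image right away,
    # without introducing a new state.
    registry.update(image["id"], status="pending_delete")
    thread = threading.Thread(target=_do_delete, daemon=True)
    thread.start()
    return thread
```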