20:07:39 <markwash__> #startmeeting glance
20:07:40 <openstack> Meeting started Thu Sep  4 20:07:39 2014 UTC and is due to finish in 60 minutes.  The chair is markwash__. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:07:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:07:45 <openstack> The meeting name has been set to 'glance'
20:07:45 * markwash__ is out of practice
20:08:05 <markwash__> agenda at https://etherpad.openstack.org/p/glance-team-meeting-agenda
20:08:11 <markwash__> however I think first we need a j3 update
20:08:27 <markwash__> so we're in the process of trying to ram through the last patches needed to tag j3
20:08:57 <markwash__> AFAICT, those patches are https://review.openstack.org/#/c/44355/ and https://review.openstack.org/#/c/111483/
20:09:03 <markwash__> does that look right to folks?
20:09:15 <TravT> yes, watching the patches in zuul has been like watching a baby bunny trying to cross a busy street filled with drunk drivers
20:09:29 <nikhil_k> yes
20:09:42 <markwash__> good thing there's lots more bunnies where that came from
20:09:49 <TravT> https://review.openstack.org/118627
20:09:56 <lakshmiS> strength in numbers :)
20:10:02 <TravT> https://review.openstack.org/105231
20:10:09 <nikhil_k> client is fine...
20:10:11 <TravT> those also need to get through
20:10:49 * nikhil_k realizes.. maybe we need 'em for horizon
20:10:58 <markwash__> TravT: is it possible to wait until tomorrow to really put the accelerator on those glanceclient patches?
20:11:13 <markwash__> or would such a delay hurt horizon progress?
20:11:17 * markwash__ thinks of the gate
20:11:37 <TravT> well, I haven't talked with david lyle about it
20:11:43 <TravT> keep hoping to see them through
20:12:03 <markwash__> well, the bunnies each have a run ahead of them at this point
20:12:21 <markwash__> so good luck to them
20:12:31 * TravT covers his eyes
20:12:37 <arnaud> :)
20:12:40 <lakshmiS> we also have a tempest test patch (https://review.openstack.org/113632/) which is dependent on glanceclient (https://review.openstack.org/105231). Can the tempest patch merge after today?
20:12:41 <markwash__> and the important detail about those other waiting patches (for the server project)
20:12:44 <markwash__> is
20:13:10 <markwash__> If they don't land by ttx's morning tomorrow, they are going to be auto-deferred to juno rc-1
20:13:14 <markwash__> just b/c the gate is crazy
20:13:32 <markwash__> they will be auto-granted FFEs as well
20:13:40 <markwash__> so I'm not really sure what the distinction is exactly
20:13:46 <TravT> it would be really nice to get the SEED bumped up https://review.openstack.org/111483.  It actually passed all the gate tests and then got hit by a package download failure for cinder
20:13:55 <TravT> and now seems to just be cycling in check
20:14:14 <markwash__> lakshmiS: I'm not actually sure about the tempest test changes, how those factor in
20:14:29 <markwash__> I would assume that adding tests is not blocked by the FF, since they're not really features
20:14:38 <markwash__> the j-3 to j-rc1 period is intended for testing
20:14:52 <markwash__> lakshmiS: but I think you'll have to ask tempest folks about that
20:15:03 <lakshmiS> markwash__: ok
20:15:23 <markwash__> any other questions about the j-3 tagging/release process?
20:15:24 <TravT> if the client patches fail, it will be a really hard sell for horizon.
20:16:20 <markwash__> TravT -- fail as in "don't work" ? or fail as in "don't land" ?
20:16:35 <TravT> don't land
20:17:00 <markwash__> TravT, do any of those patches need more review still? or are they just waiting on the gate?
20:17:20 <TravT> david has bumped 2 of our 4 horizon patches to high priority and agreed to ask for FFE if the client gets released
20:17:26 <TravT> no, don't need review.
20:17:44 <TravT> just need some sort of rain dance or something to keep zuul from whacking them
20:18:06 <markwash__> TravT: released == in master? or released == up on pypi?
20:18:36 <TravT> hmm... I think released == up on pypi so that we can update the horizon requirements file
20:19:04 <markwash__> okay
20:19:07 <david-lyle> markwash__: on pypi and move minimum openstack/requirements version to that release
20:19:43 <markwash__> I'm a bit worried about the openstack/requirements change
20:19:52 <david-lyle> unless the feature is discoverable via the API, I'd have to re-look at the patch
20:19:55 <markwash__> I feel like those are freezed relatively early
20:20:12 <markwash__> freezed -> frozen I suppose
20:20:31 <david-lyle> I'll ping ttx in the morning, but neutron is in the same boat
20:20:58 <markwash__> TravT: it looks like you may end up needing to request a Dep Freeze Exception
20:21:06 <markwash__> and that's after bribing Flavio to release the client
20:21:11 <TravT> thanks david-lyle
20:21:12 <markwash__> and that's after the gate calms down to actually land it
20:21:22 <markwash__> okay, any other j-3 questions?
20:21:47 <markwash__> I'd like to get out of the way of the regularly-scheduled agenda if possible :-)
20:22:21 <nikhil_k> Frodo's quest seems tiny now
20:22:33 <markwash__> :-)
20:22:51 <markwash__> #topic kilo specs
20:23:12 <markwash__> let's merge https://review.openstack.org/#/c/115807/ after tomorrow
20:23:21 <markwash__> (again, #godsavethegate)
20:23:27 <markwash__> any other thoughts about kilo specs?
20:23:32 <arnaud> agreed
20:23:43 <TravT> i do have a question.
20:23:54 <TravT> we bumped tags out of the current spec.
20:24:08 <TravT> we're going to submit a new spec for kilo
20:24:13 <ativelkov> Should the specs which have not been approved yet be moved to the kilo folder?
20:24:22 <markwash__> ativelkov: yes please
20:24:33 <TravT> should I update the juno metadefs spec to reflect reality
20:24:41 <markwash__> if a spec is good apart from being in the wrong folder, anyone should feel free to move it
20:25:20 <arnaud> TravT, I would love to see the juno spec reflect what has been merged in juno (if possible)
20:25:27 <markwash__> TravT: it seems nice
20:25:35 <TravT> Ok.  I'll do that.
20:25:36 <markwash__> would just a quick disclaimer "this feature did not land in Juno" work?
20:25:47 <TravT> do the specs have to go through the gate?
20:25:52 <TravT> or do they take a shortcut
20:26:12 <markwash__> or I suppose if things can just be removed from the spec that is fine too
20:26:20 <ativelkov> I didn't get any reviews on the artifacts spec for quite a long time. It does not block any work being done, but I want to clarify its fate a bit :)
20:26:22 <markwash__> TravT: I'm not sure
20:26:28 <TravT> ok,
20:26:40 <arnaud> I don't think you have to wait a long time for a spec TravT
20:26:40 <TravT> well I'll update, and then #godsavethegate; you guys decide when to merge
20:27:33 <markwash__> TravT: I asked in openstack-dev, if I hear back we can maybe merge them sooner
20:27:46 <markwash__> okay, next up (I'm going in PTL order)
20:27:58 <markwash__> #topic Metadefs bug handling
20:28:10 <TravT> Do you want a lot of little bugs with one or two lines, or a bug per reviewer comments made pre-merge?
20:28:36 <markwash__> I don't understand the second alternative quite clearly
20:28:39 <TravT> sorry..
20:29:18 <TravT> as people went through the files, they made comments.  We could create one big bug for DB and just start putting all fixes into it
20:29:43 <TravT> or we could just address individual fixes
20:29:47 <TravT> say one file at a time
20:29:47 <markwash__> I think it would be better to structure the fixes into groups of related changes I guess?
20:29:59 <markwash__> it's nice to have a small patch to review
20:29:59 <TravT> just whatever the reviewers want to see
20:30:02 <jokke_> So individual bugs or one dumping ground?
20:30:12 <markwash__> not one dumping ground
20:30:21 <jokke_> +1
20:30:27 <markwash__> individual as long as it's not too granular, group together as you see fit
20:30:35 <markwash__> remember that reviewers like a moderate number of small patches best
20:30:45 <jokke_> sounds good to me
20:30:49 <TravT> ok, we'll try to keep it reasonable
20:31:00 <markwash__> huzzah
20:31:03 <markwash__> next up. . .
20:31:16 <markwash__> #topic Bug Day
20:31:30 <jokke_> \\o \o/ o// o/7
20:31:59 <markwash__> whose topic is this? :-)
20:32:13 <jokke_> not mine this time
20:32:49 <jokke_> but I do like clean bug list
20:32:54 <markwash__> so it seems someone is proposing we have another bug day
20:33:00 <cpallares> Mine
20:33:02 <jokke_> this time one day only so something would get done as well
20:33:03 <arnaud> maybe wrong etherpad... we don't have bugs in glance :)
20:33:12 <markwash__> cpallares: hi there!
20:33:19 <cpallares> hi there markwash__
20:33:45 <markwash__> bug days are great things for the rc1 timeframe
20:33:57 <jokke_> arnaud: +1, just undocumented features
20:34:07 <markwash__> so is there sometime next week or the week after that works for some of us? and who can sign up?
20:34:08 <arnaud> exactly jokke_ ;)
20:34:08 <cpallares> It doesn't have to be just one day.
20:34:11 <markwash__> and who can sign up to remind me?
20:34:33 <jokke_> cpallares: based on our last two-day one, yes it does ;)
20:34:49 <cpallares> haha alright
20:34:53 <jokke_> I'm up for it next week ... the following I'm out from Wed
20:35:02 <markwash__> yeah I think next is best
20:35:21 <jokke_> markwash__: I can take the poking stick
20:35:22 <markwash__> since we're supposed to have RC1 by around the 25th of sept
20:35:25 <markwash__> jokke_: thanks!
20:36:01 <jokke_> So Thursday?
20:36:02 <markwash__> how does the 11th seem to folks here?
20:36:06 <TravT> i missed the last one.  is the point to get fixes in a bug day or for everybody to grab them and triage them?
20:36:09 <jokke_> +1
20:36:16 <jokke_> TravT: both
20:36:29 <cpallares> TravT: You can also close them :P
20:36:32 <markwash__> triage is especially great because there may be some release blockers hiding there
20:36:36 <cpallares> TravT: We have bugs from 2012
20:36:37 <arnaud> I will be there
20:36:50 <markwash__> and we probably have a huge list of ones to just close
20:37:03 <jokke_> cpallares: at least I do not have powers to mark them for example "Won't fix"
20:37:20 <markwash__> okay I really like thursday, bc I can just start my day with the team meeting and stay on bugs the whole day
20:37:36 <cpallares> markwash__: That sounds good :)
20:37:45 <jokke_> markwash__: there is 10+ that are "propose-close" tagged :P
20:37:54 <jokke_> No-one just seems to touch them :P
20:38:00 <markwash__> okay, so let's get together an etherpad for this particular bug day
20:38:09 <TravT> ok. 11th works for me... but our guys in poland have been wanting to participate as well.
20:38:10 <markwash__> with links to the tag scheme we wanna use
20:38:48 <markwash__> cpallares: can you kick off that etherpad and then email us? arnaud/jokke can probably help with some details if there are any that are missing
20:39:04 <arnaud> yes, sounds good
20:39:15 <cpallares> markwash__: Sounds good
20:39:18 <lakshmiS> +1
20:39:23 <markwash__> cpallares: and by email us, I just mean throw it out on the ML tagged [Glance]
20:39:28 <markwash__> cpallares: thanks!
20:39:31 <markwash__> yay bug day
20:39:31 <cpallares> ok
20:40:03 <markwash__> #topic Documentation
20:40:12 <markwash__> TravT: it looks like you had a question about doc updates?
20:40:35 <TravT> yeah, is there anything special we need to know about getting docs updated?
20:40:46 <TravT> we want to make sure that metadefs are properly documented
20:40:55 <markwash__> I have to throw that question to the group, the docs are an area where I've been struggling
20:41:16 <markwash__> rosmaita sometimes knows a lot about our docs :-)
20:41:23 <cpallares> markwash__: Do we write our own docs?
20:41:32 <markwash__> cpallares: some of them I think
20:41:46 <markwash__> and contribute to some elsewhere I believe
20:41:51 <markwash__> but I'm way out of touch
20:41:58 <markwash__> so you could tell me the docs are on mars and I'd have to take your word for it
20:41:59 <rosmaita> we can write as many docs as we like
20:42:07 <jokke_> Sounds like this needs a shout-out to the ML as well
20:42:07 <arnaud> that's where specs should help (I guess)
20:42:19 <jokke_> rosmaita: where we should dump those?
20:42:47 <rosmaita> https://wiki.openstack.org/wiki/Glance-where-are-the-docs
20:42:56 <rosmaita> gives you a run-down of what goes where
20:43:04 <rosmaita> though i just noticed it is a bit out of date
20:43:05 <jokke_> Oh cool!!
20:43:11 <TravT> ooohhhh
20:43:18 <TravT> a real wiki!
20:43:26 <TravT> thanks rosmaita!
20:43:27 <jokke_> #link https://wiki.openstack.org/wiki/Glance-where-are-the-docs
20:43:33 <rosmaita> i will try to update it tomorrow
20:43:44 <markwash__> we need some SEO for that wiki page
20:43:56 <TravT> rosmaita: can you shoot us a message when the update is done?
20:44:00 <markwash__> so google brings it up for me when I search anything
20:44:16 <rosmaita> maybe i can put some porn on it or something
20:44:20 <markwash__> well okay
20:44:23 <jokke_> +1
20:44:29 <markwash__> #topic Glance common service framework
20:44:51 <rosmaita> i will fix page & send out to ML when it's updated
20:44:55 <markwash__> I deferred this one to last, b/c I think we resolved to wait until some of the oslo-incubator changes sort themselves out
20:45:11 <TravT> rosmaita: ty!
20:45:34 <markwash__> but I'd like to point out that mclaren has some thoughts about SIGHUPS and reloading configs at https://etherpad.openstack.org/p/sighup-conf-reload
20:45:34 <jokke_> markwash__: I think the outcome from last night's discussion was to revisit the plans for how we do it in Kilo
20:46:09 <markwash__> jokke_: +1
20:46:10 <jokke_> markwash__: mclaren also has a view, and he would like an approach similar to what nginx does
20:46:18 <markwash__> yeah I was just reading that
20:46:36 <markwash__> I'm glad to have a little survey of what is done because I think we might be in a slightly more complicated position than some other openstack services
20:46:52 <markwash__> given our mix of long- and short-lived connections
20:47:04 <jokke_> which I think does not solve the fundamental issues we have, but those need to be attacked when we get the spec out there
20:47:40 <markwash__> I was pretty happy with the spec for adopting the ProcessLauncher until it surfaced that it causes these problems with reloading
20:47:50 <markwash__> so I think we'll just need a new, more comprehensive kilo spec
20:47:55 <markwash__> which specifically addresses the reconfiguration aspects
20:48:10 <markwash__> guided by some of the considerations mclaren has put forward
20:48:19 <markwash__> jokke_: does that sound like a reasonable summary / restatement ?
20:48:57 <jokke_> markwash__: sounds good to me ... the spec comments are good place to put the discussion of the concerns
20:49:06 <markwash__> okay cool
20:49:09 <jokke_> and that way we get them documented
20:49:14 <markwash__> yeah
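The SIGHUP-driven config reload discussed above (see mclaren's etherpad) can be sketched in a few lines. This is an illustrative sketch only: the `ReloadableConfig` class and dict-based loader are hypothetical, not Glance code, which would go through oslo.config and the ProcessLauncher machinery the spec covers.

```python
import signal
import threading

class ReloadableConfig:
    """Holds config values and re-reads them when SIGHUP arrives.

    `loader` is any callable returning a fresh config dict (in a real
    service it would re-parse the config files from disk).
    """

    def __init__(self, loader):
        self._loader = loader
        self._lock = threading.Lock()
        self._values = loader()
        # Register the handler so SIGHUP triggers a reload instead of
        # terminating the process (the default disposition).
        signal.signal(signal.SIGHUP, self._on_sighup)

    def _on_sighup(self, signum, frame):
        # Keep handler work minimal: build the new dict, then swap it in
        # atomically under the lock.
        fresh = self._loader()
        with self._lock:
            self._values = fresh

    def get(self, key, default=None):
        with self._lock:
            return self._values.get(key, default)
```

An operator would then `kill -HUP <pid>` to pick up config changes without dropping the long-lived connections markwash__ mentions, which is the part a plain restart can't do.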
20:49:49 <markwash__> let's open up the floor
20:49:51 <jokke_> If we have time I would like to add couple of fast topics out of the list (forgot to add them there)
20:49:52 <markwash__> #topic open discussion
20:50:19 <jcook> https://bugs.launchpad.net/glance/+bug/1316234
20:50:20 <uvirtbot> Launchpad bug 1316234 in glance "Image delete calls slow due to synchronous wait on backend store delete" [Undecided,In progress]
20:50:49 <jcook> There was some discussion about how this should be done. I have a proposal for a path forward and was hoping to get some input on it.
20:51:29 <jcook> Basically, deletes can take a long time and in a public cloud, at least, there has been some discussion around an asynchronous method aside from the scrubber model.
20:51:39 <jokke_> The FFE request from flaper ... I commented already on the API change, but would like to bring it to the table here. The API change is doing exactly the opposite of what the bp states while claiming to implement it. /me dislikes
20:51:44 <jokke_> sorry jcook
20:51:51 <jcook> np
20:52:05 <jokke_> lets go with yours first ... that was missed enter
20:52:06 <markwash__> jcook: fixing the scrubber would be kind of amazing
20:52:30 <jcook> the scrubber seemed to be desirable for the private cloud use case but not the public cloud
20:52:53 <jcook> there was some mention of doing a more distributed scrubbing, but also, discussion about doing a background delete
20:53:03 <markwash__> jcook: oh? I think it was added by public cloud folks originally as a way to keep data around a little longer in case a delete was accidental
20:53:03 <nikhil_k> agree
20:53:07 <jcook> I'm proposing adding background deletes as a third configurable option
20:53:16 <jokke_> Isn't the scrubber also responsible for the delayed delete?
20:53:36 <rosmaita> markwash__: problem with scrubber is too many deletes at once overwhelming the backend storage
20:53:37 <jcook> yes, but it does all at once
20:53:42 <jokke_> markwash__: that's what I was thinking as well
20:53:51 <markwash__> ah I see
20:53:59 <markwash__> well, we could fix it to not do deletes all at once :-)
20:54:16 <jokke_> jcook, rosmaita: so how about fixing the scrubber to have FE configurable throttle?
20:54:16 <jcook> we could, but a background delete seems more intuitive and simpler
20:54:23 <markwash__> jcook: but I think such a proposal sounds very welcome
20:54:26 <markwash__> so go for it
20:54:31 <arnaud> rosmaita, not sure if the approach proposed by jcook would solve this problem
20:54:48 <markwash__> jcook: is there already a spec or mail about this or is it forthcoming?
20:55:09 <jokke_> jcook: what would this background delete be doing? Delete just kicks off a task and the image hopefully disappears at some point?
20:55:20 <jcook> one question in regards to a background delete. Is there opposition to a new state of delete_in_progress in the event of failure of background delete
20:55:34 <markwash__> yeah
20:55:37 <markwash__> no new states! :-)
20:55:40 <arnaud> +1
20:55:55 <jcook> It would behave similarly to pending_delete except that it would start delete on image immediately in thread or subprocess
20:56:23 <jcook> it could be marked as deleted, but I think it would be more correct to go into delete_in_progress and then delete when the background delete process finishes
20:56:40 <rosmaita> there was some talk of making pending_delete recoverable, did that ever get implemented?
20:56:56 <markwash__> jcook: users don't care about this state machine (though ops do), let's keep it out of the main api
20:56:57 <rosmaita> i think zhi was working on it at some point
20:57:05 <jokke_> rosmaita: I think that still needs manual labor
20:57:06 <markwash__> we can just create another resource to track such things
20:57:25 <nikhil_k> +1
20:57:29 <markwash__> but anyway, I guess it would be best to do this all in spec review :-)
20:57:35 <jokke_> +1
20:57:47 <markwash__> we have only a few minutes left
20:57:48 <jcook> markwash__: not sure I grok
20:57:51 <jcook> kk
20:58:29 <jokke_> jcook: if the idea is good, make a spec so we can mock it together publicly :P
20:58:43 <jcook> ACK
20:58:54 <TravT> +1 jokke_
20:59:17 <TravT> :P
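jcook's background-delete idea, with markwash__'s "no new states" constraint folded in (progress lives in a separate tracker resource rather than a new image state), might look roughly like the following. Everything here (`ImageRepo`, the `tasks` dict, `store_delete`) is hypothetical illustration for the forthcoming spec, not Glance's actual API.

```python
import threading

class ImageRepo:
    """Toy stand-in for the image registry; tracks only a status string."""

    def __init__(self):
        self._status = {}
        self._lock = threading.Lock()

    def set_status(self, image_id, status):
        with self._lock:
            self._status[image_id] = status

    def get_status(self, image_id):
        with self._lock:
            return self._status.get(image_id)

def delete_image(repo, tasks, image_id, store_delete):
    """Return to the API caller immediately; scrub backend data in a thread.

    The user-visible image state machine is untouched (the image simply
    becomes 'deleted'); backend-removal progress is recorded in the
    separate `tasks` resource instead of a new delete_in_progress state.
    """
    repo.set_status(image_id, "deleted")
    tasks[image_id] = "in_progress"

    def worker():
        try:
            store_delete(image_id)      # potentially slow backend call
            tasks[image_id] = "done"
        except Exception:
            tasks[image_id] = "failed"  # operator tooling can retry later

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

Compared with the scrubber's batch model, this starts each removal at delete time while still leaving failures queryable for retry, which is the distinction the discussion above turns on.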
21:00:09 <markwash__> all right, that's the bell
21:00:11 <markwash__> thanks everybody!
21:00:16 <arnaud> thanks!
21:00:16 <markwash__> good luck with all your bunnies
21:00:24 <markwash__> #endmeeting