14:00:08 #startmeeting glance
14:00:09 Meeting started Thu May 5 14:00:08 2016 UTC and is due to finish in 60 minutes. The chair is nikhil. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:12 The meeting name has been set to 'glance'
14:00:18 o/
14:00:20 #topic roll call
14:00:20 \o
14:00:34 hey guys
14:00:47 o/
14:01:13 let's wait another min
14:01:27 o/
14:01:37 o/
14:01:38 o/
14:01:44 o/
14:01:50 let's get started
14:01:53 #topic agenda
14:01:56 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:02:18 I've set a few items. Shouldn't take the full hour, so we can have some open discussion.
14:02:28 #topic Announcements
14:02:46 #info Updates, upcoming events and more awareness http://lists.openstack.org/pipermail/openstack-dev/2016-May/093703.html
14:03:04 That's the ML thread describing our checkpoints through the first part of the cycle
14:03:21 I wanted to give people a chance to raise concerns, if any
14:03:49 calling 1
14:03:54 calling 2
14:03:59 calling 3
14:04:04 ok, none
14:04:16 So, let's think about our spec freeze
14:04:53 I wanted to try out a spec-soft-freeze on R-16 (midcycle)
14:05:15 basically at this point we will filter out the specs that are likely going to be completed in newton
14:05:29 then a spec-hard-freeze on R-10
14:05:58 o/
14:05:59 that will require any outstanding specs to request an FFE, and we will likely have none, but sometimes one or two
14:06:26 it should give 5 weeks for reviewing the code, bug fixing, release stuff
14:06:30 what's r10?
14:06:31 any concerns on that?
14:06:38 sounds reasonable
14:06:40 #link http://releases.openstack.org/newton/schedule.html
14:06:56 r10 is jul 25-29
14:07:00 the weeks are listed in that schedule using a number 'n' against release weeks
14:07:46 again, the code can still get in till newton-3, where we will have a code-hard-freeze
14:07:54 as is standard openstack practice
14:08:12 then use RCs as a way to strengthen the release
14:08:30 * sigmavirus24 apologizes for being late
14:08:36 if nothing, I will move on and get those dates listed on the release schedule
14:09:03 one more thing:
14:09:37 please let me know soon about your mid-cycle plans, as the logistical, travel, etc. planning is not trivial
14:10:11 moving on
14:10:27 #topic Finalize process for lite-specs
14:10:36 our current process is not documented
14:10:52 also, people are proposing against the lite-specs repo
14:11:10 and we currently have no way to refer back to those lite specs
14:11:18 I wanted to propose this:
14:11:37 1. move lite specs under their own folder in the glance-specs repo
14:11:45 2. each lite spec has its own file
14:12:06 3. code patches for those specs refer to the lite-spec URL optimistically
14:12:34 for example
14:12:58 when someone is proposing a lite-spec they will choose the file name for it
14:13:09 so they can anticipate where the spec is going to land
14:13:37 in this case the proposal was "buffered-reader-for-swift-driver"
14:13:49 so the spec showed up on http://specs.openstack.org/openstack/glance-specs/specs/newton/approved/glance_store/buffered-reader-for-swift-driver.html
14:13:58 objections?
14:14:21 (need a few minutes to think)
14:14:29 sure
14:14:35 So what makes this any different to a full spec?
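The convention in point 3 above — the proposer picks the file name up front, so the published URL is predictable before the spec merges — can be made concrete with a minimal sketch. The cycle, project folder, and file name below are the exact example from the discussion; the helper function itself is purely illustrative:

```python
# A minimal sketch of the proposed lite-spec URL convention: the chosen
# file name determines where the spec will be published once merged.
BASE = "http://specs.openstack.org/openstack/glance-specs/specs"

def lite_spec_url(cycle, project, filename):
    """Predict the published URL for a lite-spec file in glance-specs."""
    return "{}/{}/approved/{}/{}.html".format(BASE, cycle, project, filename)

print(lite_spec_url("newton", "glance_store", "buffered-reader-for-swift-driver"))
# -> http://specs.openstack.org/openstack/glance-specs/specs/newton/approved/glance_store/buffered-reader-for-swift-driver.html
```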
14:14:43 just the content I believe
14:14:50 yep
14:15:07 having a separate file removes merge conflicts
14:15:11 I like it better than the current process
14:15:16 I'm kinda -1 on lite-specs in general (but I realise that's not what's being asked)
14:15:35 mclaren: 😃
14:15:48 mclaren: say some more
14:16:01 mclaren: how about we try to make the process faster (with a few experiments)
14:16:16 nikhil: my charset can't display whatever symbol you typed
14:16:30 (unless it was a question mark inside a diamond)
14:16:35 ok, yeah. I'd ask for info rather than presume based on implicit input!
14:16:55 rosmaita: it was a smiley
14:16:57 :)
14:17:00 we're going to end up with release notes, commit messages, docs and lite-specs, which will probably have a fair amount of duplication
14:17:14 (but with an extra semi-colon)
14:17:47 release notes are a bit independent from the lite-specs
14:17:54 but I'm happy to trial whatever folks want, I don't mind
14:18:11 but I agree there is a likelihood of duplication
14:18:22 Just to get this clear: if I have a spec I want to propose, there would be no difference except the amount of boxes I have to fill in?
14:18:43 speeding things up was one of the ideas behind a drivers' meeting, but I guess that did not work (ETOOMANYMEETINGS)
14:19:26 bunting: are you anticipating that everyone will file lite specs from now on?
14:19:53 bunting: for a lite-spec you do not need a BP, you maybe need feedback from just 2 cores, whereas the specs are about feedback from as many cores as possible and then +W from the PTL/liaison
14:20:00 It just seems the process is basically the same, why don't we just make more options optional on a regular spec?
14:20:39 we have already relaxed the process for specs
14:21:06 at this point only API impact, security oriented changes and new functionality are required to have specs
14:22:25 lite-specs are about making sure the service behavior is consistent with its constructs and original use cases (so that stakeholders in production are not affected), and to help those who don't have access to a large scale infra for testing, CI or staging/prod environments where things behave tentatively
14:23:30 bottom line -- lite specs are knowledge sharing, awareness across the openstack ecosystem, and the ability to understand the direction of a program without diving into commit messages
14:23:52 ok, we've spent enough time on this topic
14:24:05 we can continue during open discussion or after the meeting on #openstack-glance
14:24:13 nikhil: sure
14:24:20 moving on
14:24:27 #topic What is this config option intended for https://github.com/openstack/glance/blob/master/glance/scrubber.py#L36-L38 ? (hemanthm)
14:24:42 yep, that's me
14:25:06 I was working on config opts and I just wanted to get a better feel for why that opt was introduced
14:25:13 does anyone remember/know?
14:25:20 my guess
14:25:46 operators being able to recover image data for customers who delete images accidentally is one I could think of
14:25:47 is the amount of time which needs to elapse before an image in pending_delete is deleted
14:26:30 mclaren: +1 but do you know what purpose it is serving?
14:26:34 mclaren: was someone actually using it?
14:26:44 yeah, we switched it on in our cloud
14:26:57 I think everything is probably deleted by this point :-)
14:27:37 simply put, as an operator why would I want to use this config opt?
14:27:37 ah, what would be an ideal/average delay you would say?
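For reference, the option under discussion appears to be scrub_time, which pairs with delayed_delete. A minimal sketch of the relevant definitions, with the help strings paraphrased and the sample conf values purely hypothetical:

```python
# Sketch of the scrubber knobs being discussed, assuming the option at
# scrubber.py L36-38 is scrub_time (help text paraphrased).
from oslo_config import cfg

scrubber_opts = [
    cfg.BoolOpt('delayed_delete', default=False,
                help='Turn on/off delayed delete: deleted images move to '
                     'pending_delete instead of being removed immediately.'),
    cfg.IntOpt('scrub_time', default=0,
               help='Seconds an image must sit in pending_delete before '
                    'the scrubber may remove its data.'),
]

# An operator wanting a 24-hour recovery window would set, in glance-api.conf:
#   [DEFAULT]
#   delayed_delete = True
#   scrub_time = 86400
```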
14:27:47 we did have an idea that we could recover mistakenly deleted images
14:27:59 I'm not sure we ever actually did such a recovery
14:28:06 I see
14:28:08 also, isn't the image record deleted at this point, i.e., all the additionalProperties are non-recoverable?
14:28:25 i mean by normal API means
14:28:37 my guess was that if you had a volume attached or an NFS system you could do routine maintenance on the system to preserve the images
14:28:52 the stuff is in the DB, but you have to muck around in there to resurrect the full image record
14:29:22 mclaren: should we assume that people are not using it?
14:29:27 right, so the data and metadata experience would be equivalent
14:29:44 nikhil: we use it too
14:29:55 oh ok
14:30:22 I don't know how many folks are actually using the scrubber
14:30:47 ok, do we have all the answers?
14:31:10 hemanthm: ?
14:31:10 hmmm, not sure
14:31:29 we're running short of time, btw
14:31:38 ok, we can take it offline
14:31:44 hemanthm: maybe propose to deprecate that option in your patch
14:31:57 hemanthm: if we have specific questions, I think that would help
14:32:00 guess that should be a different patch, though
14:32:01 what's the motivation to deprecate?
14:32:10 no use
14:32:10 well, it seems pretty useless
14:32:16 we should remove it, or add more support
14:32:17 simply put, as an operator why would I want to use this config opt?
14:32:25 nikhil: ^ that was my question
14:33:00 hemanthm: ok. so, we need to take that to the operators' ML. we are likely not to get answers here.
14:33:13 i think fei long floated the idea of an undelete admin call a few cycles ago, but it was voted down
14:33:42 hemanthm I can ask IBM folks, next week
14:33:47 thanks nikhil
14:33:51 ok, moving on
14:33:58 #topic Image upload to specific backend https://review.openstack.org/#/c/310970/ (wxy or nikhil)
14:34:29 so, we decided during the summit that we are not going to expose "which glance stores are configured for a glance deployment"
14:34:30 if someone hacks your db to set pending_delete it may be useful to not delete everything straight away?
14:34:55 mclaren: you already have the scrubber interval
14:35:03 this is in addition to that
14:35:28 ok, but this is per image? the scrubber interval could fire straight away
14:35:34 ++ mclaren
14:35:38 i really think zhi added this for the undelete call that didn't happen
14:36:05 guys we need to take this offline
14:36:17 (the topic has changed btw)
14:36:27 sorry
14:37:04 so, as per that agreement: how will I know which stores (are configured and thus) to upload the image to?
14:37:30 also, the store configuration is supposed to be opaque from the users' perspective
14:37:44 and that could potentially change non-deterministically
14:38:16 the feedback asks for an admin-only way
14:38:26 what's the feeling here?
14:38:37 is there a specific use-case for uploading to a particular store?
14:39:11 there seemed to be interest at the summit
14:39:13 I asked for it on the lite spec
14:39:25 mclaren: from ops?
14:39:26 hemanthm: maybe snapshots to a cheap slow store, public images to a fast store
14:39:32 I'm a bit uncomfortable with the leaky abstraction side of this
14:39:45 mclaren: +1
14:39:53 Does swift have something like storage policies?
14:39:56 mclaren: +1M
14:39:59 ++ mclaren
14:40:24 which expose the idea of different backends, but without exposing the 'internals'?
14:40:38 good point
14:41:03 mclaren: do you mind starting discussion on that thread on the lite-spec?
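For readers unfamiliar with the comparison being drawn: Swift's storage policies let a deployer expose named tiers without leaking backend internals — a container is pinned to a policy at creation time via the X-Storage-Policy header. A minimal sketch; the endpoint, token, and the policy name "gold" are placeholders:

```python
# Creating a Swift container bound to a named storage policy: clients
# pick a deployer-defined policy name, never a concrete backend.
import requests

resp = requests.put(
    "https://swift.example.com/v1/AUTH_demo/images",
    headers={
        "X-Auth-Token": "TOKEN",          # placeholder auth token
        "X-Storage-Policy": "gold",        # deployer-defined policy name
    },
)
resp.raise_for_status()
```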
14:41:29 (as I do not see wxy here today)
14:41:51 (or he's hiding)
14:42:12 nikhil: Hello
14:42:34 djkonro: hello
14:43:05 aight, let's take this to the spec. (no movement on the question here)
14:43:10 I think it's possible I may not have any upstream cycles for the foreseeable future
14:43:22 wat!
14:43:26 I'm here. But for my poor English, I don't want to waste you guys' time. Just as nikhil said, point out this question and we can't discuss it through the lite-spec. :)
14:43:48 s/ can't/can
14:44:00 some internal stuff going on
14:44:06 wxy: ah hey, yeah let's chat more on the lite-spec. we're a bit short on time too.
14:44:41 mclaren: that's a bummer. we need to chat offline about this!
14:44:47 sure
14:45:06 but for wxy's concern, I can start that discussion on the lite spec as a proxy for mclaren
14:45:11 moving on
14:45:25 #topic community_images_newton.__init__(assigned=tsymancz1k, cores_involved=[rosmaita, nikhil])
14:45:37 tsymanczyk: rosmaita: the floor is yours
14:45:56 tsymanczyk has been working on updating the spec
14:45:59 at this point i'm working on incorporating the existing feedback into the spec.
14:46:14 i'm afraid i don't have an overall understanding that's ... sufficient to have opinions of my own. yet.
14:46:16 i guess my main question is whether the hierarchical stuff affects what we do here
14:46:51 I see
14:47:23 My main design point on the hierarchical stuff was visibility
14:47:37 so, the answer is 'maybe'
14:47:37 does the hierarchical stuff affect community sharing more that the existing peer-to-peer sharing?
14:47:47 that/than
14:48:12 mclaren: i think so, in that we introduce these new visibility values
14:48:19 I doubt that, but it depends on how keen we are and in what way we plan to clean up the visibility semantics
14:49:18 the problem i see is that you could have public, private, protected
14:49:22 rosmaita: I see. so the refactor around visibility is something that may affect this.
14:49:45 (rosmaita: please keep going)
14:50:00 i may need to write this up in an etherpad
14:50:13 i don't think i have a one-minute explanation of what's bothering me
14:50:26 ok :)
14:50:48 i will take an action item to do that
14:51:11 tsymanczyk: we (all of us) can chat offline on the architecture stuff
14:51:16 rosmaita: ok, thanks.
14:51:21 ok
14:51:58 #action rosmaita to open an etherpad to discuss changes for visibility flags
14:52:13 moving on
14:52:21 #topic image import: implementation feedback ( mclaren )
14:52:34 mclaren: we are short of time, so we can go with your topic first
14:52:39 ok, thanks
14:53:00 https://review.openstack.org/#/c/312653/
14:53:07 I've a bunch of questions on that patch
14:53:37 it might be a bit moot, but if folks can have a look and comment that would be appreciated
14:53:51 thanks for breaking them down!
14:54:45 np, it's the easy stuff so far...
14:54:55 so my main worry with re-using the existing task stuff is that the admin API and import API overlap at some point
14:55:39 what's the admin api?
14:55:45 basically, an can start running a task for an image (say stored in a user's container)
14:56:05 mclaren: the task api is the admin (task) API
14:56:23 as it used to be the (former) import API, I want to avoid overloading that term
14:56:26 mclaren: the default is admins only use it in mitaka, i believe
14:56:48 correct
14:57:22 s/an can/ an admin can/
14:57:41 3 mins to go
14:57:48 I'm done
14:57:52 thanks!
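For context on the overlap mclaren raises: the existing task API (effectively admin-only by default in mitaka) drives imports via POST /v2/tasks. A minimal sketch of such a call; the endpoint, token, and source image URL are placeholders:

```python
# Kicking off an import via the glance v2 task API: a task of type
# "import" pulls image data from an external location.
import requests

task = {
    "type": "import",
    "input": {
        "import_from": "http://example.com/images/cirros.qcow2",
        "import_from_format": "qcow2",
        "image_properties": {
            "name": "cirros",
            "disk_format": "qcow2",
            "container_format": "bare",
        },
    },
}
resp = requests.post(
    "https://glance.example.com/v2/tasks",
    json=task,
    headers={"X-Auth-Token": "TOKEN"},  # placeholder auth token
)
print(resp.json())  # task id and status, to be polled until success/failure
```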
14:57:56 #topic open discussion
14:58:21 I had a chat with flwang yesterday about
14:58:22 https://bugs.launchpad.net/glance/+bug/1528453
14:58:24 Launchpad bug 1528453 in Glance "Provide a ranking mechanism for glance-api to order locations" [Wishlist,Triaged] - Assigned to Jake Yip (waipengyip)
14:58:49 I need some input on: do people think this is a good idea or a bad one? is someone else using it in production?
14:59:05 my feeling was that glance shouldn't bother with the values of the metadata
14:59:22 just as it does not bother with the values of the build_flags or licensing info
14:59:28 that should be done on the user side
14:59:50 i thought there was already some kind of strategy for this?
14:59:51 the patch associated with it is linked in the bug in my comment
14:59:58 that you can configure?
15:00:30 rosmaita: good point, I do not know if there's overlap though
15:00:39 worth investigating
15:00:44 and we're out of time
15:00:50 thanks all for joining!
15:00:55 #endmeeting
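The configurable strategy rosmaita recalls is likely glance's existing location_strategy mechanism, which can already order an image's locations by store type. A sketch of the relevant knobs, with option names as best recollection and the preference values hypothetical:

```python
# Sketch of glance's existing location ordering knobs (option names from
# memory; values are illustrative only).
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.StrOpt('location_strategy', default='location_order',
               choices=['location_order', 'store_type'],
               help="Strategy used to order an image's locations."),
])
CONF.register_opts([
    cfg.ListOpt('store_type_preference', default=[],
                help='Ordered list of preferred store types.'),
], group='store_type_location_strategy')

# The equivalent operator-facing settings in glance-api.conf would be:
#   [DEFAULT]
#   location_strategy = store_type
#   [store_type_location_strategy]
#   store_type_preference = rbd, swift
```

Whether this overlaps with the per-image ranking proposed in bug 1528453 is exactly the open question left at the end of the meeting.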