14:03:08 #startmeeting glance 14:03:08 Meeting started Thu May 26 14:03:08 2016 UTC and is due to finish in 60 minutes. The chair is nikhil. Information about MeetBot at http://wiki.debian.org/MeetBot. 14:03:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 14:03:11 The meeting name has been set to 'glance' 14:03:21 o/ 14:03:25 \o 14:03:37 o/ 14:03:42 o/ 14:03:45 o/ 14:03:46 Courtesy meeting reminder: ativelkov, cpallares, flaper87, flwang1, hemanthm, jokke_, kragniz, lakshmiS, mclaren, mfedosin, nikhil_k, Nikolay_St, Olena, pennerc, rosmaita, sigmavirus24, sabari, TravT, ajayaa, GB21, bpoulos, harshs, abhishek, bunting, dshakhray, wxy, dhellmann, kairat 14:03:53 o/ 14:03:55 o/ 14:03:56 o/ 14:03:56 \o 14:03:57 for some reason my previous message did not go through 14:04:12 * nikhil had to restart irc 14:04:18 ah 14:04:42 #topic agenda 14:04:43 let's begin then 14:04:46 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda 14:04:49 o/ 14:04:57 Small agenda today 14:05:05 some people haven't read my mails 14:05:17 Let's get to that later. 14:05:33 #topic Updates 14:05:47 #info Glare updates ( mfedosin , nikhil ) 14:05:53 o/ 14:06:13 we finished the spec and are waiting for reviews :) 14:06:35 o/ 14:06:44 #link https://review.openstack.org/#/c/283136/ 14:07:23 there are a lot of use cases and examples that should help you (in theory) to understand how the API works 14:07:48 so if you have time leave your comments there 14:07:53 thanks in advance :) 14:08:31 also we are working on tests for Glare 14:09:17 I think we will present code for review on the 6th of June 14:09:51 if you have any questions I'm happy to answer 14:10:23 I have a few things to update/add 14:10:36 nikhil: shoot :) 14:10:37 that I've mentioned to mfedosin 14:10:47 #1. Let's work on only a POC 14:11:41 #2. All things upstream to help engage community in discussion and code.
So, no internal discussions after a POC and OpenStack's usual channels-based communication and development. 14:12:25 #3. Dissolving the ownership quotient -- since this is not a feature but an entire service, we need everyone on the same page and the ability to contribute. Let's provide a platform for that. 14:12:39 nikhil: also we decided that the API will be marked as 'experimental' initially 14:13:11 #4. Newton priority for Glare should be bare-bones code that will be used by a couple of early adopters that will help us evolve the API. By newton-3 we need to target a stable API. 14:13:56 #5. Stop all discussions on extra features and work on building the community right after a POC is ready for review. (something not a WIP, set up in small patches that are easier to review and contribute to, as applicable) 14:14:29 #6. All bugs and stabilization should be done using LP and Mike was going to help set up an etherpad for that to enable picking up tasks/bugs to contribute. 14:14:53 mfedosin: right, the 'experimental' within newton but Current by newton-3 14:14:56 mfedosin: anything else? 14:15:14 #7. Glare should be awesome 14:15:27 ++ 14:15:28 ^ it's the basic requirement :) 14:15:47 It will be -- all awesome people work in Glance. 14:15:59 moving on 14:16:04 thanks nikhil 14:16:10 #info Nova v1, v2 (mfedosin, sudipto) 14:16:29 okay, I added unit tests there 14:16:42 #link https://review.openstack.org/#/c/321551/ 14:17:06 now there are 1750 LOC :) 14:17:47 today I'll upload code for XEN 14:18:15 then we have to wait for reviews 14:18:51 yesterday multinode gates were broken and Jenkins put -1 for that reason 14:19:09 * nikhil can be added to the reviews when they are posted/ready 14:19:47 nikhil: sure, in a couple of hours 14:20:03 np, that's in general 14:20:53 ok, let's move on then 14:20:56 Thanks mfedosin 14:21:13 I'll add Matt, Sean and other Nova team members then 14:21:26 well, you should ask them before you add 14:21:29 as reviewers 14:21:49 but I have to mention...
14:22:02 it's the final time I rewrite this code :) 14:22:03 my changes for devstack are still pending review 14:22:14 I'll probably start pinging ppl aggressively 14:22:14 i think that's reasonable. 14:22:19 flaper87: give us the link 14:22:41 mfedosin: not blaming you for that 14:22:42 flaper87: ++ 14:22:45 #link https://review.openstack.org/#/c/315190/ 14:23:04 mfedosin: it's a devstack review, I'll start pinging devstack folks 14:23:07 :D 14:23:11 mfedosin: you're fine 14:23:13 :) 14:23:20 ... for now 14:23:22 :P 14:23:42 flaper87: you give us the link, we keep it at the top of the list with nagging comments :D 14:24:00 I don't want to rewrite it from scratch again :( 14:24:31 ok, let's move on here 14:24:34 #topic Releases (nikhil) 14:24:47 #info newton-1 next Tuesday May 31st 14:25:22 So, if anyone sees anything outstanding that needs to go in newton-1 let me know _today_ 14:25:37 there's no release liaison in newton, so I am handling releases 14:26:05 Also, I think we can take the opportunity to propose a release for the client and store next week around Thursday 14:26:21 that way glance reviewers will have focus on knocking out release specific stuff 14:26:53 the release itself may not go through until later (as and when the release team gets time to get those released) 14:27:03 moving on 14:27:09 #topic Announcements (nikhil) 14:27:25 I sent a Best practices for meetings email 14:27:39 nikhil: does Newton-1 (supposedly) work with the latest release of store? 14:27:47 * jokke_ hasn't checked 14:28:32 jokke_: me neither. but that's a detail. we can time the releases to make stuff work (whether to release store before or after newton-1) 14:28:49 on my note about meetings: 14:28:55 #link http://lists.openstack.org/pipermail/openstack-dev/2016-May/095599.html 14:28:59 please read that carefully 14:29:11 try to be faster in updates (and precise) 14:29:33 any doubts or concerns?
14:29:52 Please make sure you add your discussion topic at least 24 hours before the scheduled meeting. 14:30:08 Can we make an exception for review requests? 14:30:14 nope 14:30:30 I'd actually like to avoid using the meeting to request reviews 14:30:37 unless the review requires some actual discussion 14:30:37 ++ 14:30:52 ++ 14:30:56 So do we have an etherpad where we can track that 14:31:02 Maybe I missed that 14:31:04 what? 14:31:15 kairat: you are asking for tracking reviews? 14:31:18 my advice would be to reach out to the people you need reviews from if really needed. 14:31:34 or ping out at #openstack-glance 14:31:35 Also, keep in mind that pinging/requesting reviews so frequently is not really nice 14:31:40 but that might be just me 14:31:54 flaper87: yes and no 14:32:10 flaper87: sometimes people want/need reminders if they get busy with internal stuff 14:32:21 so, please check with individuals you are asking for reviews 14:32:29 nikhil: note that I said "so frequently" ;) 14:32:38 flaper87: ok, thanks :-) 14:32:40 I didn't generalize 14:32:42 Ok, I was talking about an etherpad with priority reviews for that week, like for example Heat does 14:32:49 flaper87: perfect 14:33:12 kairat: ok, thanks for that clarification 14:33:23 kairat: +1 , more work for nikhil but would be really helpful 14:34:00 I used to do that before but many times people didn't pay attention 14:34:11 I'm fine without one extra etherpad I forget to follow ;) 14:34:13 I can look into that process a bit more. It's a bit of maintenance by itself.
14:34:17 Unless people make checking the etherpad a habit, I believe it's not really useful 14:34:35 yeah, people have their own preferences 14:34:37 The dashboard helped a lot during Mitaka (IMHO) and for the cases it didn't, I used to chase people down 14:34:39 :P 14:34:40 nikhil: or maybe a demo of good gerrit practices -- i have mostly "folk knowledge" of gerrit, i'm sure i don't use it efficiently 14:34:59 rosmaita: noted 14:35:12 like, i guess i should look at flaper87 's dashboard more often 14:35:20 can we do something on the dashboard to reflect the importance? 14:35:28 ++ to hemanthm 14:35:35 hemanthm: yes and no 14:35:44 rosmaita: add the repos you need as followed under your settings, never go past page 2 unless you're bored and wanna clean up ;) 14:35:54 I mean, it can be done but it's a bit hacked in as there's no support for proper hashtags in our version of gerrit 14:36:10 What I did in mitaka is tag reviews with "Mitaka Priority" or something like that 14:36:20 and have the dashboard run a regex there and group those 14:36:23 This looks like too long a discussion 14:36:39 we can discuss that in e-mail I guess 14:36:42 ok, let's see if we can come up with something offline 14:36:44 topics would work but they are overwritten on every new ps 14:36:50 * flaper87 stfu 14:36:57 thanks flaper87 14:37:05 flaper87: mind taking an action item for what all you did in mitaka? 14:37:30 sure, I can write all that down if that helps 14:37:32 :D 14:37:41 thanks, that's what we were looking for 14:37:47 we can share knowledge on this 14:38:14 on the meetings' etherpad best practices section 14:38:34 look at our agenda pad starting line 21 right now 14:38:43 I can take questions offline 14:39:35 #topic Deleting images that are being used by Nova: do we want the same behaviour for raw and qcow2 images?
(croelandt) 14:39:46 croelandt: floor is yours 14:39:47 so :) 14:40:02 At Red Hat, we have a customer that complains about the following issue: 14:40:07 1) They create an image in glance 14:40:17 2) they boot a VM in Nova using the image created in 1) 14:40:24 3) they try to delete the image in glance 14:40:33 Now, depending on the format of the image, it may or may not be deleted 14:40:48 if it is a raw image, it is considered "in use" and cannot be deleted 14:40:54 if it is a qcow image, it can be deleted 14:41:07 We were wondering whether the behaviour should become more "consistent" from a user point of view 14:41:14 So, lemme try to expand on this a bit more, if you don't mind. 14:41:20 do they delete the image with Nova? 14:41:21 Even though we might have to use some tricks to make it happen in glance 14:41:24 flaper87: sure 14:41:33 mfedosin: no, they use "glance image-delete" iirc 14:41:45 == flaper87 14:41:46 The customer is using ceph and therefore, on raw images, ceph is doing a COW and basically marking the image as in-use 14:42:12 * sigmavirus24 was catching up sorry 14:42:14 I think this is a ceph specific case and I'm not sure we should expose it through the API. 14:42:42 I don't think this happens with other stores, at least. 14:42:51 flaper87: correct 14:42:54 flaper87: ++ 14:43:01 So, is the question: We should forbid deleting glance images if there's an instance running?
14:43:16 well, I failed to ask that with proper English 14:43:18 but you got it 14:43:22 but it's a big security issue 14:43:24 ceph puts a lock on the file and prevents the deletion on that level 14:43:34 I'd say no as I don't think we should have that sort of dependency 14:43:41 if an image is public and it cannot be deleted 14:43:49 if somebody uses it 14:43:55 mfedosin: exactly, plus a bunch of other things 14:44:07 that would imply adding support for "force" deletes and who knows what else 14:44:13 mfedosin: that means that someone has failed on the deployment as glance clearly does not own the image data 14:45:02 flaper87: frankly speaking we have a bunch of those issues from our customers as well with ceph 14:45:06 I'd have a chat with nova folks on this ceph case 14:45:17 mfedosin: that's fine but I think it's a ceph specific case 14:45:23 I don't think it's Glance's problem, TBH 14:45:32 nikhil: flaper87 ++ 14:45:38 the reason they want to support this out-of-band on-line usage of the image data in the ceph pool is that it's a popular way 14:46:09 Dealing with this on the glance side means glance now has to keep track of image usage, which really shouldn't be a concern for Glance. 14:46:10 and they have some in-tree workarounds to keep consistency with glance constructs 14:46:35 hemanthm: right and we should also consider multi-clouds 14:46:36 ++ to hemanthm 14:47:14 I mean, in general, I don't think Glance should get in the business of tracking this 14:47:18 so, like flaper87 and hemanthm are saying it's an edge case and should not be a top level feature in glance. 14:47:33 +1 14:47:43 Now, we do want to provide a better story for this, though. Is the scrubber the answer for this? Just have delayed deletes enabled and deal with this on the scrubber side ?
14:48:09 jokke_ mentioned the scrubber is buggy but that's something we can work on to fix if it's 14:48:22 flaper87: in Glare we're going to use shadow copy + delayed delete 14:48:27 We'd have to know exactly *how* it is buggy, first :) 14:48:32 I am not sure atm how I feel about scrubber 14:48:38 croelandt: ++ :) 14:48:44 it works for the general case from what I have seen 14:48:52 croelandt sure, that's why I'm pinging jokke_. He probably has more info 14:49:16 however, I don't know how scrubber can ensure there is no image usage anymore 14:49:35 we need to move on 14:49:37 hemanthm: it doesn't. It just tries to delete the data 14:49:45 croelandt: did you get a direction? 14:49:49 hemanthm: it'll keep retrying until it manages to delete it 14:50:00 prolly not the right solution, though 14:50:04 hemanthm: I think the idea behind that was to get the user off the synchronous delete and let scrubber fail as long as storage reports back busy 14:50:10 flaper87: it feels like a hack 14:50:10 just an idea, that's the best I can come up with right now 14:50:15 nikhil: it is a hack 14:50:17 :D 14:50:25 one can write a cron job 14:50:31 but, the question is: Can we make it not a hack? 14:50:39 it's a decent workaround 14:50:41 that runs a script in periodic intervals doing "image-delete" 14:50:43 nikhil: well, I guess I could discuss this with jokke_ and flaper87 14:50:48 Can we support this use case through the scrubber ?
14:50:51 and re-submit our findings for another meeting 14:50:53 cron* 14:50:54 please no 14:50:59 * flaper87 stfu and lets the meeting move on 14:50:59 jokke_: yes, makes sense 14:51:02 for the support 14:51:14 let's discuss offline 14:51:22 #topic open discussion 14:51:22 I really feel like that's a can of worms we really don't want to open 14:51:34 jokke_: I didn't mean support of the dependency use case but the delete retries (I'm done, I swear) 14:51:47 Alex asks for review https://review.openstack.org/#/c/320912/ 14:51:59 so, kairat you posted a late request 14:52:09 So oslo.log folks are going to remove the verbose option 14:52:17 we need to be prepared for that 14:52:21 nikhil, yep, I know 14:52:32 That's why I asked about clarification 14:52:33 (but I guess we can ignore that for this week as you had questions) 14:52:37 cool 14:53:08 https://review.openstack.org/#/c/317579, https://review.openstack.org/#/c/317556 - please review those; oslo folks asked to remove that a month ago 14:53:32 flaper87, jokke_ , hemanthm nikhil rosmaita ^ 14:54:01 thanks kairat , good heads up. guess we can try to get that in by n-1 14:54:10 same is the case with rosmaita (late request) 14:54:26 same request as last meeting 14:54:36 kairat: not against the change itself nor its reasoning ... I'm sure oslo folks will follow standard deprecation so it's not a huge rush ... but what I'd like to have is a reference to that deprecation in the commit messages 14:54:42 See "[openstack-dev] [oslo][all] oslo.log `verbose` and $your project" for more information 14:54:48 would like to get the v1 api-ref in-tree soon, will make it easier to make corrections 14:54:57 jokke_, ok, will do 14:55:02 ++ jokke_ 14:55:52 rosmaita: perfect 14:55:53 rosmaita: can we not introduce that as we're deprecating the api 14:55:56 * jokke_ ducks 14:56:02 lol 14:56:27 jokke_: unfortunately, at the moment, it's "supported" 14:56:31 jokke_: I hope that was not a serious question?
14:56:34 which includes providing docs 14:56:44 rosmaita: that is quickly fixed 14:56:49 :) 14:56:52 I think flaper87 has a patch waiting 14:57:05 yes, one way to get people to not use it is to hide the docs! 14:57:05 well, we need to have an API ref 14:57:14 doesn't matter if it's current, supported or deprecated 14:57:24 no, we do not want to do that 14:57:28 we want to increase the docs 14:57:29 I actually do 14:57:31 :P 14:57:46 last thing we need is another set of angry customers who do not have a reference 14:57:51 I don't see the point of hand-holding on how to use something we don't want anyone to use 14:57:51 even for the sake of migration 14:58:01 ;) 14:58:02 so, docs docs docs! 14:58:07 flaper87: trade you a review of your slide deck for a review of https://review.openstack.org/#/c/312259/ ! 14:58:16 rosmaita: deal 14:58:26 excellent! 14:58:44 looks like those are the things for today? 14:58:49 let's close this early to help set up the virtual sync 14:58:50 i will revise my timeframe and look at the deck this afternoon 14:59:00 * flaper87 is ready for the virtual meeting 14:59:07 * rosmaita is not 14:59:10 Thanks all. 14:59:15 #endmeeting
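[Editor's note] For context on the delayed-delete idea discussed in the "Deleting images that are being used by Nova" topic — the scrubber (or a periodic job) retrying the backend delete and tolerating failures as long as the store reports the data as busy, e.g. while a ceph COW clone still exists — here is a minimal sketch of that retry loop. This is not Glance's actual scrubber code; `StoreBusyError`, `FakeStore`, and `retry_delete` are hypothetical names invented for illustration.

```python
import time


class StoreBusyError(Exception):
    """Raised by the (stubbed) backend while the image data is still in use."""


def retry_delete(delete_fn, max_attempts=5, wait=0.0):
    """Retry a backend delete, scrubber-style, while the store reports busy.

    Returns the number of attempts it took; re-raises after max_attempts.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            delete_fn()
            return attempt
        except StoreBusyError:
            if attempt == max_attempts:
                raise
            time.sleep(wait)  # back off before the next pass


class FakeStore:
    """Stub backend that stays 'busy' (e.g. a COW clone exists) for N calls."""

    def __init__(self, busy_for):
        self.busy_for = busy_for
        self.calls = 0
        self.deleted = False

    def delete(self):
        self.calls += 1
        if self.calls <= self.busy_for:
            raise StoreBusyError("image data still in use")
        self.deleted = True


store = FakeStore(busy_for=2)
attempts = retry_delete(store.delete, max_attempts=5)
print(attempts, store.deleted)  # 3 True
```

As noted in the meeting, this only bounds how long the delete is retried; it cannot tell whether the image is genuinely unused, which is why the consensus was that usage tracking should not become Glance's responsibility.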