13:59:48 #startmeeting Glance
13:59:49 Meeting started Thu Jan 28 13:59:48 2016 UTC and is due to finish in 60 minutes. The chair is flaper87. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59:51 #topic Agenda
13:59:52 The meeting name has been set to 'glance'
13:59:54 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:06 o/
14:00:13 o/
14:00:24 o/
14:00:29 o/
14:00:36 o/
14:00:48 o/
14:00:50 Hello, folks!
14:01:01 o/
14:01:06 * flaper87 will implement "Courtesy SMS"
14:01:19 o/
14:01:24 o/
14:01:34 #chair rosmaita nikhil
14:01:35 Current chairs: flaper87 nikhil rosmaita
14:01:40 or whatsapp will do :)
14:01:50 o/
14:01:55 * flaper87 won't ever trust his internet connection, ever ever ever ever again
14:02:08 cool
14:02:11 let's start
14:02:18 #topic Glare Updates
14:02:31 oh, it's me :)
14:02:58 so, we had a small conversation with Nikhil
14:03:19 about how we see glare
14:03:30 and the API refactor, right? :D
14:03:45 first of all I think we have to rewrite Alex's specs
14:04:03 public and private apis are not good :)
14:04:33 :o
14:04:41 I'm writing a small text called "What is Glare?"
14:05:05 #link https://review.openstack.org/#/c/254710/
14:05:05 there I'm going to express my feelings about it
14:05:16 so, it's safe to say that we'll reject the spec freeze exception for that
14:05:24 ++
14:05:28 I think yes
14:05:29 and this will be pushed back to Newton
14:05:31 ok
14:05:36 I'll comment after the meeting
14:05:39 sad but true
14:05:40 I think it's the right call
14:05:47 also, sigmavirus24_awa had some good comments there
14:06:01 yes, I don't like 'all'
14:06:11 and I think those were valid enough to block the spec
14:06:17 ok, glad to hear there's consensus
14:06:24 we will work with nikhil to prepare a good api for glare
14:06:32 Meantime, we do need to move v3 to glare-api
14:06:44 yep
14:06:46 BTW, having glare-api and glance-api feels weird
14:06:53 I'm wondering if we should have glance-glare
14:06:55 instead
14:06:58 and it seems I have to look at this patch
14:07:03 nova services are nova-*
14:07:06 same cinder
14:07:10 and neutron
14:07:11 and everything else
14:07:22 glare-api is just misleading, IMHO
14:07:29 https://review.openstack.org/#/c/255274/
14:07:30 let's worry about the naming after we figure out what/when/how we do the separation in the first place
14:07:52 jokke_: well, the spec for the separation landed already
14:07:59 and the v3 -> glare stuff is happening
14:08:03 there's a patch for it already
14:08:15 I'll amend the spec that landed and we can discuss this there
14:08:18 we must merge it in Mitaka
14:08:39 oh ...
14:09:01 yup
14:09:06 mfedosin: anything else?
14:09:31 sorry ... I read your line above as "meantime we do _not_ need to move", my bad
14:09:36 #link https://review.openstack.org/#/c/254163/
14:09:39 do we need a separate virtual meet for glance?
14:09:51 or will the one you are gonna propose suffice?
14:10:05 I think that should be enough
14:10:11 Planning to include everything
14:10:22 if not, we can always organize a new one
14:10:24 :D
14:10:25 cool
14:10:36 ok, moving on
14:10:42 #topic Updates Drivers
14:10:52 #link http://lists.openstack.org/pipermail/openstack-dev/2016-January/084983.html
14:11:10 So, the news is that we dismissed the drivers team
14:11:13 Drivers is dead, long live Drivers!
14:11:17 it's gone
14:11:19 we're all one team again
14:11:24 hooray
14:11:33 which makes more sense, I think
14:11:37 one for all, and all for fun
14:11:37 That brings some changes
14:11:43 lemme summarize them:
14:11:53 1) Specs are reviewed by cores too. It's a volunteer's job
14:12:12 2) PTL will bring some specs to the meeting and ask for volunteers to drive the review
14:12:22 I don't think we need the volunteer to be core, TBH
14:12:35 o/
14:12:43 I'd like to encourage everyone to help drive specs from a review standpoint
14:12:52 but it'll need cores' votes anyway
14:12:57 3) PTL approves specs
14:13:13 Unless the patch is a trivial fix, typo, whatever, it's better to let the PTL approve the spec
14:13:30 Ok, that's it from the *glance-specs* repo side
14:13:37 Now, there's one more thing that needs to happen
14:13:54 Since we're merging both meetings, the glance meeting will have more content (I hope)
14:14:05 that means the time available is not the same we used to have
14:14:22 So, we need to get to a point, in the week, where the agenda is frozen
14:14:33 I think I'll start freezing the agenda on Wednesdays
14:14:47 Unless the agenda is empty, we won't be adding new topics to it
14:15:06 There's no need to add bullets to the Open Discussion section. It's open discussion and you can bring anything
14:15:12 Ditto for review requests
14:15:23 ok, I think that's it.
14:15:26 Did I miss something?
14:16:02 I'll take that as a no
14:16:11 #topic Updates V1 -> V2
14:16:15 mfedosin: ^
14:16:39 mfedosin: ?
14:16:40 so, I updated all patches per johnthetubaguy's requirements
14:16:46 w000h000
14:17:11 no comments at all there
14:17:19 about xen
14:17:39 I prepared a patch that removes custom headers from the request
14:17:46 awesome
14:18:10 also I have working code, but I need to set up some env with python 2.4
14:18:22 ...
14:18:39 because unfortunately code that works in 2.7 may not work in 2.4 :(
14:18:54 why is this a thing, btw?
14:19:02 why do we need to support 2.4 just for the xen plugin?
14:19:06 just have no time with all the latest events
14:19:30 ask Bob Ball :)
14:19:35 this is nonsense, TBH.
14:19:36 I will
14:19:53 can't believe some parts of openstack are being forced to support 2.4
14:19:58 anyway
14:20:00 moving on
14:20:12 I'll upload the patch very soon
14:20:17 for the xen plugin
14:20:18 #topic (re)think spec lite process.. Again ;) (flaper87)
14:20:20 mfedosin: thanks
14:20:23 so
14:20:24 spec lites
14:20:48 \\o \o/ o//
14:20:54 LP didn't work. It was a good/bad experiment. It was useful because we didn't have time to spend on re-structuring the specs repo
14:20:56 but now we do
14:21:01 Mitaka specs are frozen
14:21:12 Newton started and I (someone) can dedicate time to this
14:21:19 back then jokke_ proposed using glance-specs for this
14:21:29 and now that we're back to being a single team, I believe it makes sense
14:21:40 we'll have more bandwidth in the -specs repo
14:21:46 and cores can approve spec lites
14:21:52 Here's the little I've thought so far:
14:22:04 1) We create a template that requires just a description and proposed change
14:22:04 and how about not dumping 100 10-line spec-lite files in there, but tracking spec lites in a single file?
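[Editor's illustration of the Python 2.4 constraint mfedosin raised above for the Xen plugin: "code that works in 2.7 may not work in 2.4". A minimal, hypothetical example (not the actual plugin code) is the conditional expression, which only arrived in Python 2.5 and is a SyntaxError on 2.4:]

```python
def image_status(ready):
    # The idiomatic 2.5+ form would be:
    #     return 'active' if ready else 'queued'
    # but that syntax does not exist on the Python 2.4 interpreter the
    # Xen plugin targets, so an explicit if/else is required instead.
    if ready:
        return 'active'
    return 'queued'

print(image_status(True))
```

[Other common 2.5+/2.6+ features that break on 2.4 include the `with` statement and `try/except/finally` in a single block, which is why a dedicated 2.4 test env is needed.]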
14:22:04 I was/am on the same team as johnthetubaguy
14:22:07 oops
14:22:11 jokke_: ^
14:22:36 2) We follow the same process as for patches: two +2s, then +A
14:22:50 I'm happy to let cores approve spec lites
14:22:54 I think we need to separate store, client, api, glare specs into different files
14:23:23 I think we should keep them separate
14:23:31 pretty much like we do for release notes
14:23:46 it doesn't have to be big
14:23:53 it can be rendered in the same page if we want
14:24:03 but repo-wise, they can stay separate
14:24:32 jokke_: would you be able to help with that?
14:24:41 setting up the repo, updating the contributing guidelines, etc
14:25:26 * flaper87 wonders if his messages are getting to destination
14:25:29 !ping
14:25:30 pong
14:25:33 w00h00
14:25:34 that worked
14:25:36 ok
14:25:38 yeah
14:25:43 jokke_: is ignor....
14:25:46 jokke_: HELLO
14:25:48 :D
14:25:50 awesome
14:25:51 ^^
14:25:52 thanks a bunch
14:26:03 #action jokke_ to get spec-lite right
14:26:19 any objections? thoughts?
14:27:05 ok
14:27:08 moving on
14:27:09 #topic Glance Mini (Virtual) Summit (A bit late, I know) (flaper87)
14:27:17 ok, it's not really *THAT* late
14:27:23 most mid-cycles are happening this week
14:27:36 So, how would folks feel about meeting next week?
14:28:08 i could do that
14:28:10 agree
14:28:17 I don't mind substituting the weekly meeting with video call(s)
14:28:21 I was honestly thinking of allocating 2 half days
14:28:29 or just one
14:28:31 depending on the agenda
14:28:49 Trying to think how we could make this work for most TZs
14:29:23 ok, I'll send a doodle to the ML
14:29:35 please, look for it *today* and vote
14:29:37 :D
14:29:44 or tomorrow for folks in other TZs
14:30:01 cool
14:30:29 I'll put together a short agenda that I hope other folks will contribute to
14:30:38 ok ...
one or 2 half days just might work
14:30:52 not gonna be able to do anything sparser than that
14:31:02 gotcha
14:31:10 me neither
14:31:23 +1
14:31:27 * flaper87 writes this down
14:31:31 oh wait, it's logged
14:32:00 #action flaper87 to send doodle for glance mini virtual summit
14:32:07 ok, anything else?
14:32:37 agenda pls
14:32:48 yup, just mentioned it above
14:32:51 so that we can pick and choose
14:33:29 oooook
14:33:32 moving on
14:33:37 #topic Open Discussion
14:33:41 This was a short meeting
14:33:43 :D
14:33:51 Shoot all your Open Discussion items
14:33:55 hi
14:34:07 on spec-lite, what about existing unapproved ones?
14:34:28 should they be re-proposed in specs?
14:34:28 +1 to sabari
14:34:31 https://review.openstack.org/#/c/259963/4 about backend exceptions from glance
14:34:53 I'd keep the LP process until N and from Newton we start tracking them in a repo
14:35:02 let's not expect anyone doing double work for mitaka
14:35:05 sabari: yeah, we'll need to ask ppl to move them
14:35:10 I don't feel good about that
14:35:17 but I don't have time to do it myself
14:35:19 should I create a blueprint for adding subclasses to return different error codes, as mentioned on the patch?
14:35:23 nor the knowledge on every spec-lite
14:35:28 and I'm not the author :P
14:35:51 jokke_: I'd start using the new process for new spec-lites
14:35:59 the ones that exist can be left as-is
14:36:12 but new ones should be filed under glance-specs
14:36:22 abhishekk: looking
14:36:35 flaper87: ok
14:36:44 k so the ones being discussed can stay on LP. Thanks.
14:37:19 abhishekk: you already filed a bug for it
14:37:23 abhishekk: I think it's fine
14:37:30 will read in more detail soon
14:37:45 flaper87: ok thank you
14:38:01 anything else?
14:38:03 anyone?
14:38:09 i got something
14:38:10 all members please review https://review.openstack.org/#/c/261288/2 (Provide a list of request ids to the caller)
14:38:11 one question
14:38:12 I got one update
14:38:22 I have one)
14:38:24 (i will go last)
14:38:27 Darja found a strange behaviour in v2
14:38:37 when we delete images
14:38:44 PPL, feel free to shout all your updates
14:38:47 this is Open Discussion
14:38:53 mfedosin: link
14:38:57 https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L285-L295
14:39:28 First we update all image properties and if there are no exceptions then we delete it
14:39:42 yup
14:39:47 best effort
14:39:55 she found that these exceptions never happen
14:39:55 I'm still working with ninag and nikhil on the hierarchical quota spec
14:40:14 and it's okay to remove the 'update' code
14:40:21 what do you think?
14:40:24 wait
14:40:30 the code was written in 2012
14:40:39 the exceptions can occur if you lose the conn to the DB, right?
14:41:09 or during parallel deletes.
14:41:13 dshakhray: ^
14:41:16 ah yes
14:41:31 Yes)
14:41:35 DB upgrades too then
14:41:37 yeah, if you can't delete the image record, you don't want to leave the image 'active' and delete the data
14:43:10 btw, image data was removed in the location layer
14:43:28 so it's already gone
14:43:52 yes
14:44:24 so maybe we can remove https://github.com/openstack/glance/blob/master/glance/db/__init__.py#L286-L292 ?
14:44:28 hmm, I think there's a catch to this
14:44:48 mfedosin: probably propose a review and we can all jump on it :)
14:44:57 ok, nvm.
if someone proposes a review we can discuss it there
14:45:03 nikhil +1
14:45:06 sabari: like minds :)
14:45:12 dashakhray_: please make a patch for this
14:45:24 my update: "The OpenStack bug smash days", formerly "openstack hackathon", in NYC is getting closer to reality
14:45:37 sorry about the delay, this is an expensive, busy and challenging location
14:45:52 getting the space and sponsorship took a bit of time
14:45:56 #link https://etherpad.openstack.org/p/OpenStack-Hackathon
14:46:12 for those who can make it there
14:46:21 for everyone else I will try to get a remote conn
14:46:33 mfedosin, i am here) I'm doing this
14:46:50 (courtesy invite from the organizing team): please try to attend at least locally
14:46:50 nikhil: +1 on remote conn
14:47:00 dashakhray_: thank you :)
14:47:36 nikhil: nice! will take a look
14:47:37 I'll be at the Moscow location :)
14:47:50 remote conn ++
14:48:06 I will have the NYC etherpad set up by marketing personnel early next week
14:48:24 once all approvals are final
14:48:51 Ok, I'll proceed
14:48:51 Are we going to release glance_store this week? If yes then we need to merge this (https://review.openstack.org/#/c/272990/) or revert the patch where we deleted the auth module: https://review.openstack.org/#/c/250857/. Otherwise, glance_store for the multi-tenant swift driver will be broken.
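[Editor's illustration of the v2 delete ordering mfedosin, dshakhray and others debated above: mark the record deleted first, and only destroy it if that update succeeds, so a DB failure or parallel delete never leaves an 'active' record whose backing data is gone. FakeDB and the method names below are hypothetical stand-ins, not Glance's actual DB API:]

```python
# A minimal sketch of the "update, then delete" pattern under discussion.
# If image_update raises (lost DB connection, parallel delete, DB upgrade),
# we never reach image_destroy, so no data is orphaned.
class FakeDB:
    def __init__(self):
        self.images = {'img-1': {'status': 'active'}}
        self.calls = []

    def image_update(self, image_id, values):
        if image_id not in self.images:
            # stands in for a not-found error from a parallel delete
            raise KeyError(image_id)
        self.images[image_id].update(values)
        self.calls.append('update')

    def image_destroy(self, image_id):
        del self.images[image_id]
        self.calls.append('destroy')


def delete_image(db, image_id):
    db.image_update(image_id, {'status': 'deleted'})  # may raise
    db.image_destroy(image_id)  # only runs once the update succeeded


db = FakeDB()
delete_image(db, 'img-1')
print(db.calls)
```

[The "catch" flaper87 hints at is exactly this ordering guarantee: the seemingly redundant update is what keeps the failure modes safe, which is why the removal deserves its own review.]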
14:48:53 Thanks
14:49:09 kairat: not sure if it'll be this week but soonish
14:49:13 likely next week
14:49:16 Ok, got it
14:49:31 last week i mentioned a glance spec from an ironic dev who wants to add 'tar' as a disk_format
14:49:34 #link https://bugs.launchpad.net/glance/+bug/1535900
14:49:35 Launchpad bug 1535900 in Glance "Add tar as a disk_format to glance" [Wishlist,In progress] - Assigned to Arun S A G (sagarun)
14:49:37 kairat: thanks, duly noted :)
14:49:37 So please review the patch, perhaps I should have raised the bug priority also
14:49:40 i sent a message to the ML asking what people thought
14:49:41 #link http://lists.openstack.org/pipermail/openstack-dev/2016-January/084779.html
14:49:41 Btw, folks, we've invited kairat and rosmaita to the core team. They have both accepted and there haven't been objections on the ML
14:49:52 rosmaita: kairat: congrats and thanks for joining
14:49:58 ty!
14:49:58 congrats!
14:49:59 I don't see us releasing it today and Fri is always a bad day for a release
14:50:04 +1 congrats kairat!
14:50:05 It'll be effective next week but I just can't keep my mouth shut
14:50:07 kairat, thanks, guys!
14:50:07 I need to officially reply with +1s
14:50:14 me too
14:50:37 anyway, about the disk_format thing ...
14:50:45 the discussion sort of got off track into a discussion about whether a tarball should be used at all
14:50:45 rosmaita: shoot
14:50:48 (which isn't really our concern, ironic is going to use it no matter what)
14:50:55 i will re-send asking people to focus on what the identifier should be (for example, 'tar' or 'os-tarball' or 'dangerous-format-image')
14:51:05 please join the discussion if you have an opinion
14:51:20 that's all, i had a question but think i will file & fix a bug instead
14:51:36 I've mixed feelings but as jokke_ pointed out, we already have OVA so...
adding tar should be ok
14:51:47 I also think we could add it as a container and as a disk
14:51:49 :P
14:52:02 but if it makes sense as a container-format, why can't it be raw for disk-format :)
14:52:03 i'm not so sure about container
14:52:12 where would the metadata be?
14:52:19 for 'docker' and 'ova' you know where to look
14:52:23 'tar' seems too generic
14:52:35 that's why i think maybe they should use 'os-tarball' for disk_format
14:52:40 and 'bare' for container
14:52:51 they don't have any metadata, is that the criterion for container-format?
14:53:00 'tar' is an archiving file format, not a disk image file format. Qemu has no way to boot directly from a 'tar' file, for example.
14:53:12 yes, but ironic does
14:53:23 but OVA is basically a tar
14:53:30 it just has a file w/ metadata in it
14:53:34 not basically, it is
14:53:36 but it's effectively a tar
14:53:40 jokke_: english
14:53:43 :D
14:53:57 hmm, I don't see it that way though
14:54:11 nikhil: elaborate?
14:54:19 from a threat point of view there is no difference between an ova and an honest (as in not claiming to be something else) tarball
14:54:40 OVA is an image format though, even if it is just a tar with metadata files in it
14:54:42 feels like it leverages all the tar-ing capabilities but the wrapping around it can be advanced
14:55:01 avarner: ova is really a package format
14:55:06 yes
14:55:09 exactly
14:55:11 ova = metadata + a valid disk format within it
14:55:19 ok
14:55:22 it's an indicator of all that can exist under the layer
14:55:42 vs. tar is explicit in terms of what is being wrapped
14:55:52 I just feel someone will propose using tar as a container format in the future
14:55:56 happy to not add it now
14:56:02 but I'd bet that will happen
14:56:02 the implementation can simply be seen as meta + disk
14:56:09 but the use of it isn't
14:56:42 rosmaita: ok, you'll send an email updating the thread, right?
14:56:46 yep
14:56:50 avarner: if it makes your life any easier, it's one change in the glance config files, so feel free to document it for the ironic use case
14:57:18 I just want us all to keep this in mind as it *will* bite us as soon as the new import process is in place
14:57:30 well, that actually was the question i had
14:57:43 we don't have a way to enforce disk_format, container_format identifiers
14:57:52 as jokke_ says, you just slap what you want in the config
14:58:06 of course, it doesn't make sense to do that unless there's a consumer
14:58:13 who knows what the identifier means
14:58:29 so if ironic looks for 'tar' and i have 'os-tarball', it won't work
14:58:50 so maybe this is actually self-enforcing
14:59:04 folks, reaching the bottom of the hour. We need to wrap the meeting up
14:59:12 We can keep the discussion on the ML
14:59:16 rosmaita: thanks for driving it
14:59:20 np
14:59:28 thank you all for joining
14:59:33 thanks
14:59:37 thanks
14:59:37 thx
14:59:42 bye
14:59:43 tty next week and see you at the virtual meetup
14:59:46 #endmeeting
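[Editor's illustration of the "one change in the glance config files" point from the disk_format discussion: the allowed identifiers are plain operator-edited lists, with no semantics attached by Glance itself. The `[image_format]` group and option names below mirror Glance's configuration, but treat the exact stanza as illustrative rather than canonical:]

```python
# Sketch (stdlib configparser) of how the accepted format identifiers are
# just config values; adding 'tar' to disk_formats is the whole change.
# Glance attaches no meaning to the string -- the consumer (here, ironic)
# must know what the identifier means, which is the "self-enforcing" part.
import configparser

GLANCE_API_CONF = """
[image_format]
disk_formats = ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso,tar
container_formats = ami,ari,aki,bare,ovf,ova,docker
"""

parser = configparser.ConfigParser()
parser.read_string(GLANCE_API_CONF)

disk_formats = parser.get('image_format', 'disk_formats').split(',')
print('tar' in disk_formats)
```

[If the deployer lists 'os-tarball' instead while ironic looks for 'tar', the lookup simply fails, which is rosmaita's self-enforcement argument.]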