14:00:50 #startmeeting glance
14:00:55 Meeting started Thu Nov 22 14:00:50 2018 UTC and is due to finish in 60 minutes. The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:56 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:58 The meeting name has been set to 'glance'
14:01:02 #topic roll-call
14:01:09 o/
14:01:14 o/
14:01:15 o/
14:01:43 o/
14:01:43 (i am not well, i may quit early)
14:01:51 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:57 abhishekk: ok :(
14:02:02 take care of yourself
14:02:15 Yes, thank you
14:02:15 take care
14:02:29 let's get started then, this might be a quick one
14:02:30 Yep, ty
14:02:33 #topic updates
14:02:50 Just a quick hello from the Summit
14:03:16 There were lots of active discussions around Edge and the different use cases and needs
14:03:49 some topic about starlingx, right?
14:03:54 image handling is pretty much top of the list in every one of those, so please keep an eye on them and provide any feedback you might have
14:04:44 LiangFang: there were some discussions with StarlingX as well, but I'm referring more to the general edge discussions and OpenStack rather than StarlingX specifically
14:05:03 OK
14:05:37 just wanted to bring up that it's currently the hot topic and we seem to be pretty much in the center of it :)
14:05:58 yes
14:06:07 starlingx is based on openstack
14:06:24 and glance is an important module of it
14:06:59 Another thing: now that the Summit is over we have spec freeze approaching next week. If you have any specs still in flight/not submitted ... let's get those worked out
14:08:38 ok
14:08:38 this does not affect spec-lites, as before, but we want to lock down any bigger feature planning/API changes at this point
14:09:11 when will it open again?
14:10:08 I mean, when can we submit specs
14:10:36 after next week
14:10:45 It will be for the Train cycle. We normally hold off until towards the end of the cycle (somewhere around milestone 3) just to give people time to actually work on the stuff agreed for the cycle instead of focusing on future specs
14:11:15 ok
14:11:45 end of Stein cycle is March 2019, right?
14:11:58 so feel free to draft your Train specs already, but don't expect too much feedback etc. before towards the end of the cycle
14:12:43 10th of April is currently the expected release date
14:12:50 ok
14:13:13 ok, moving on
14:13:26 #topic Combined mailing lists discussion
14:13:39 #link http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
14:14:26 So it seems that people are not getting spammed enough by having a single dev list yet and there is an urge to move everything into a single list
14:15:07 reminder: the new list went live on the 19th
14:15:22 so redo your filters and subscribe to the new list
14:15:44 Done
14:16:16 #topic multi-store adding locations does not default to the default store
14:16:56 I saw some discussion about this but it seems that the locations API bit us again. abhishekk, any light you can shed on this?
14:17:39 Yeah, Brian found this issue while testing the locations stuff
14:18:34 With multiple backends, if we try to add a location without the backend specified in the location metadata, it fails with an unknown scheme error
14:19:24 imacdonn has done some PoC and submitted patches for the same
14:19:42 I met this issue last week
14:19:47 also
14:19:59 location metadata must be specified
14:20:16 I have added some description on the bug
14:21:10 jokke_: kindly have a look at that
14:21:33 It looks good to me and Brian also finds it ok
14:21:56 So to not break the API again ... I think we should utilize the default backend _if_ the URI scheme is valid for it (and not require the backend to be specified, for backwards compatibility); any location that is not for the default should require the backend to be specified
14:22:29 btw this will need at least Nova work as well and we need to coordinate how this is handled
14:23:37 any thoughts on that?
14:24:09 I need to check from the Nova perspective
14:24:31 Meanwhile, could you add your thoughts on the patch?
14:24:38 Yeah, so the Nova side where this will break is snapshotting when Ceph is used
14:24:49 yeah, I'll comment on the bug as well
14:25:40 Thanks, I will check the snapshot case once back to work
14:25:45 especially if we have multiple Ceph backends, Nova needs to look at where the parent image is stored and use that backend id when creating the location for the snapshot
14:26:25 Ack
14:27:07 it's great that we found it now instead of finding it when people have deployed their edge sites with local Ceph clusters and all the snapshots are fecked
14:27:50 Right
14:27:57 _This_ is one of the reasons why rolling new features out as experimental is a good idea :D
14:28:11 Agree
14:28:57 #topic Bug squad meeting to resume from 26th of Nov
14:29:10 This is just an announcement
14:29:41 cool, ty
14:29:44 We are resuming this bi-weekly meeting from 26 November
14:29:51 moving on then
14:30:06 Yes
14:30:08 #topic FK constraint violation
14:30:21 hi
14:30:50 this is a bug where a foreign key error happens when doing a db purge
14:30:51 because
14:30:52 (i will quit now, will go through logs later)
14:30:58 #link https://review.openstack.org/#/c/617889/
14:31:01 ok
14:31:04 abhishekk: kk, take care buddy
14:31:18 jokke_: ty
14:32:27 the bug is that the table task_info has the records but
14:32:32 sorry
14:32:36 LiangFang: so I really really suck with databases.
14:32:38 bug is https://bugs.launchpad.net/glance/+bug/1803643
14:32:39 Launchpad bug 1803643 in Glance "task_info FK error when running "glance-manage db purge"" [Medium,Confirmed] - Assigned to Liang Fang (liangfang)
14:32:52 Mind explaining it like to a 5 year old? :P
14:33:48 where do we actually break? I was looking at the bug and I can see the failure in logic/functionality but no errors in the bug description
14:35:27 when there are "deleted" records in the tasks table, and we try to delete those records from tasks
14:36:04 an exception happens, like this: DBError detected when purging from tasks
14:36:33 because the task_info table still has records that depend on the tasks table
14:36:59 ok, that makes sense. Does that fail the whole purge or does it just affect the tasks part of it?
14:37:01 so we need to purge task_info first
14:37:14 and/or does it barf something else as well?
14:37:59 it will interrupt the db purge cmd
14:38:08 kk
14:38:19 so the db cannot be purged as expected
14:38:26 :)
14:38:34 sorry for my bad english
14:38:37 well in that case let's get it fixed.
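A minimal sketch of the purge order being discussed, assuming SQLAlchemy 1.4-style Core and the schema named in the bug report (task_info.task_id referencing tasks.id); the function name and the age/row limits are illustrative, not Glance's actual glance-manage code:

    import datetime
    import sqlalchemy as sa

    def purge_deleted_tasks(engine, age_in_days=30, max_rows=100):
        """Purge soft-deleted task rows, removing child task_info rows first."""
        expiry = datetime.datetime.utcnow() - datetime.timedelta(days=age_in_days)
        meta = sa.MetaData()
        tasks = sa.Table('tasks', meta, autoload_with=engine)
        task_info = sa.Table('task_info', meta, autoload_with=engine)

        with engine.begin() as conn:
            # ids of soft-deleted tasks that are old enough to purge
            ids = [row.id for row in conn.execute(
                sa.select(tasks.c.id)
                .where(tasks.c.deleted.is_(True), tasks.c.deleted_at < expiry)
                .limit(max_rows))]
            if not ids:
                return
            # Children first, then parents, so the FK on task_info.task_id
            # is never violated mid-purge.
            conn.execute(task_info.delete().where(task_info.c.task_id.in_(ids)))
            conn.execute(tasks.delete().where(tasks.c.id.in_(ids)))

Doing both deletes in one transaction keeps the purge atomic per batch: the task_info rows only disappear together with the tasks rows they reference.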
The patch seems to be quite straightforward for it
14:39:01 yes
14:39:07 thanks for looking into it
14:39:29 thanks
14:40:06 that's all for this patch
14:40:07 #topic allowing to add location only when location does not exist
14:40:13 #link https://review.openstack.org/#/c/617891/
14:40:34 yes, rosmaita has given comments on this patch
14:40:46 It seems to make sense
14:41:02 yes, he has a -2 there and his reasoning is very valid
14:41:11 yes
14:41:26 I agree with him, and am about to abandon this patch
14:41:44 but waiting for confirmation from the co-author
14:42:13 I'm working on the patches that need to go upstream
14:42:40 but the original author is from Wind River
14:42:44 cool, yeah, I definitely share his concerns about that patch and it would break the API quite horribly (in the sense that it makes the API work very, very unpredictably)
14:43:38 #topic Don't percent-encode colon in HTTP headers
14:43:48 could you please comment your concerns on this patch? thanks
14:44:53 So there was some discussion about these before and we seem to be doing some quite horrible stuff when encoding the headers. We basically encode stuff that doesn't need to be encoded and don't properly decode it on the other end
14:45:03 LiangFang: yeah, I'll add my comment there
14:45:08 thanks
14:45:39 I think the comma is just another example of the behavior and the fix likely doesn't cover it fully
14:45:53 but it would be a decent band-aid for the issue seen now
14:46:59 I'll check the backport and see it through
14:47:04 I have no knowledge of this issue ...
14:48:02 basically we started percent-quoting request headers to be able to follow the RFC for HTTP headers
14:49:00 the problem is that the lib we use for that is not exactly meant to be doing that, so it quotes certain characters in the <127 ASCII code range, which in turn breaks tons of stuff on the receiving end
14:49:25 and it seems like the only way to indicate safe chars to it is providing it a list of them
14:49:51 so every now and then we discover new characters it's quoting that happen to break some parts of our API
14:50:41 not great, but at least we're breaking only those requests instead of the majority of our headers as we used to
14:50:55 #link https://review.openstack.org/#/c/618362/
14:51:17 #topic Open Discussion
14:51:38 there was a pointer here. The Community Goals discussion is going on again
14:52:02 ack
14:52:06 #link http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000055.html
14:52:23 please follow up and do provide feedback.
14:52:39 ok
14:53:02 another thing left on the agenda is something quota-related; was this from you LiangFang as well?
14:53:20 not from me
14:53:32 ok ... who added this?
14:53:56 it seems to have been added when the meeting was opened
14:55:46 well, just to answer the question as I understand it: no, there are no quotas for sharing images, and no, we do not want one; this is the exact reason why image sharing/accepting happens via out-of-band communication, so that it does not spam anyone's image lists
14:57:04 so whoever added the question there and hopefully is reading the meeting logs: if that did not answer the question or you have something to add/need clarification, please get back to us with it
14:57:12 Anything else?
14:57:35 nothing from me
14:58:16 ok ... going 1st
14:58:28 hi, all. I have a question about sharing images with each other. Do we have a quota limit for requests to share an image with each other? For example, if someone keeps sharing images with me without stop, how do we deal with it?
14:58:51 rambo: see my answer 3 lines up ;)
14:59:06 might be 5 even
15:00:08 And we're out of time. Thanks everyone, and #openstack-glance serves if something pops up in between
15:00:15 #endmeeting
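A minimal sketch of the safe-character approach described under the percent-encoding topic above, assuming Python's urllib.parse.quote; the safe list, function name, and sample value are illustrative, not Glance's actual code or its complete safe set:

    from urllib.parse import quote

    # quote() targets URLs, so by default it escapes printable ASCII such as
    # ':' and ',' that are perfectly legal in HTTP header values; every
    # character found to break a consumer has to be added to the safe list.
    HEADER_SAFE_CHARS = "/:,!#$%&'*+-.^_`|~ "

    def encode_header_value(value):
        """Percent-encode only what genuinely needs escaping (e.g. non-ASCII)."""
        return quote(value, safe=HEADER_SAFE_CHARS)

    print(encode_header_value("rbd://pool/image:snap"))
    # -> rbd://pool/image:snap  (the colon stays put instead of becoming %3A)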