14:00:50 <jokke_> #startmeeting glance
14:00:55 <openstack> Meeting started Thu Nov 22 14:00:50 2018 UTC and is due to finish in 60 minutes.  The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:58 <openstack> The meeting name has been set to 'glance'
14:01:02 <jokke_> #topic roll-call
14:01:09 <jokke_> o/
14:01:14 <LiangFang> o/
14:01:15 <Luzi> o/
14:01:43 <abhishekk> o/
14:01:43 <abhishekk> (i am not well, i may quit early)
14:01:51 <jokke_> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:57 <jokke_> abhishekk: ok :(
14:02:02 <jokke_> take care of yourself
14:02:15 <abhishekk> Yes, thank you
14:02:15 <LiangFang> take care
14:02:29 <jokke_> let's get started then, this might be a quick one
14:02:30 <abhishekk> Yep, ty
14:02:33 <jokke_> #topic updates
14:02:50 <jokke_> Just quick hello from the summit
14:03:16 <jokke_> There was lots of active discussions around Edge and different use cases and needs
14:03:49 <LiangFang> some topic about starlingx, right?
14:03:54 <jokke_> image handling is pretty much top of the list in every one of those, so please keep an eye on and provide any feedback you might have
14:04:44 <jokke_> LiangFang: there was some discussions with StarlingX as well but I'm more referring to the general edge discussions and OpenStack rather than StarlingX specifically
14:05:03 <LiangFang> OK
14:05:37 <jokke_> just wanted to bring up that it's currently the hot topic and we seem to be pretty much in the center of it :)
14:05:58 <LiangFang> yes
14:06:07 <LiangFang> starlingx is based on openstack
14:06:24 <LiangFang> and glance is an important module of it
14:06:59 <jokke_> Another thing, now when the Summit is over we have spec freeze approaching next week. If you have any specs still in flight/not submitted ... lets get those worked out
14:08:38 <LiangFang> ok
14:08:38 <jokke_> this does not affect spec-lites as before but we want to lock down any bigger feature planning/API changes at this point
14:09:11 <LiangFang> when will it open again?
14:10:08 <LiangFang> I mean when can we submit spec
14:10:36 <LiangFang> after next week
14:10:45 <jokke_> It will be for the Train cycle. We normally hold off until towards the end of the cycle (somewhere around milestone 3) just to give the time for people to actually work on the stuff agreed for the cycle instead of focusing on future specs
14:11:15 <LiangFang> ok
14:11:45 <LiangFang> end of stein cycle is march 2019, right?
14:11:58 <jokke_> so feel free to draft your Train specs already but don't expect too much feedback etc. before towards the end of the cycle
14:12:43 <jokke_> 10th of April is currently the expected release date
14:12:50 <LiangFang> ok
14:13:13 <jokke_> ok moving on
14:13:26 <jokke_> #topic Combined mailing lists discussion
14:13:39 <jokke_> #link http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
14:14:26 <jokke_> So it seems that people are not getting spammed enough by having a single dev list yet and there is an urge to move everything into a single list
14:15:07 <jokke_> reminder: the new list went live on the 19th
14:15:22 <jokke_> so redo your filters and subscribe to the new list
14:15:44 <abhishekk> Done
14:16:16 <jokke_> #topic multi-store adding locations does not default to the default store
14:16:56 <jokke_> I saw some discussion about this but it seems that the locations API bit us again. abhishekk, do you have some light you can shed on this?
14:17:39 <abhishekk> Yeah, Brian has found this issue while testing locations stuff
14:18:34 <abhishekk> With multiple backends, if we try to add a location without the backend specified in the location metadata, it fails with an unknown scheme error
14:19:24 <abhishekk> imacdonn has done some PoC work and submitted patches for the same
14:19:42 <LiangFang> I met this issue last week
14:19:47 <LiangFang> also
14:19:59 <LiangFang> location metadata must be specified
14:20:16 <abhishekk> I have added some description on the bug
14:21:10 <abhishekk> jokke_: kindly have a look at that
14:21:33 <abhishekk> It looks good to me and Brian also finds it ok
14:21:56 <jokke_> So, to not break the API again ... I think we should utilize the default backend _if_ the URI scheme is valid for it (and not require the backend to be specified, for backwards compatibility); any location that is not for the default should require the backend to be specified
14:22:29 <jokke_> btw this will need at least Nova work as well and we need to coordinate how this is handled
14:23:37 <jokke_> any thoughts on that?
14:24:09 <abhishekk> I need to check from the Nova perspective
14:24:31 <abhishekk> Meanwhile could you add your thoughts on the patch?
14:24:38 <jokke_> Yeah so the nova side where this will break is snapshotting when ceph is used
14:24:49 <jokke_> yeah, I'll comment on the bug as well
14:25:40 <abhishekk> Thanks, i will check the snapshot case once back to work
14:25:45 <jokke_> especially if we have multiple ceph backends, nova needs to look at where the parent image is stored and use that backend id when creating the location for the snapshot
14:26:25 <abhishekk> Ack
14:27:07 <jokke_> it's great that we found it now instead of finding it when people have deployed their edge sites with local ceph clusters and all the snapshots are fecked
14:27:50 <abhishekk> Right
14:27:57 <jokke_> _This_ is one of the reasons why rolling new features as experimental is good idea :D
14:28:11 <abhishekk> Agree
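[editor's note] The fallback jokke_ proposes above could look roughly like the following minimal Python sketch. The names (`resolve_backend`, `StoreError`, the `BACKENDS` registry) are illustrative only, not the real glance_store API:

```python
# Hypothetical sketch of the backend fallback discussed above: a location
# added without a backend in its metadata falls back to the configured
# default backend, but only if the URI scheme is valid for it; any other
# location must name its backend explicitly. All names here are
# illustrative, not the real glance_store API.
from urllib.parse import urlparse

BACKENDS = {"ceph-central": "rbd", "swift-edge": "swift"}  # backend id -> scheme
DEFAULT_BACKEND = "ceph-central"


class StoreError(Exception):
    pass


def resolve_backend(location_uri, backend=None):
    """Pick the backend id to record for a new image location."""
    scheme = urlparse(location_uri).scheme
    if backend is not None:
        if BACKENDS.get(backend) != scheme:
            raise StoreError("scheme %r not valid for backend %r"
                             % (scheme, backend))
        return backend
    # Backwards compatibility: unqualified locations go to the default
    # backend, but only when the scheme actually matches it.
    if BACKENDS[DEFAULT_BACKEND] == scheme:
        return DEFAULT_BACKEND
    raise StoreError("no backend given and scheme %r does not match the "
                     "default backend" % scheme)
```

Under this scheme an `rbd://` location with no backend resolves to the default ceph backend, while a `swift://` location would be rejected until its backend is named, matching the backwards-compatibility concern above.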
14:28:57 <jokke_> #topic Bug squad meeting to resume from 26th of Nov
14:29:10 <abhishekk> This is just announcement
14:29:41 <jokke_> cool ty
14:29:44 <abhishekk> We are resuming this bi-weekly meeting from 26 November
14:29:51 <jokke_> moving on then
14:30:06 <abhishekk> Yes
14:30:08 <jokke_> #topic FK constrain violation
14:30:21 <LiangFang> hi
14:30:50 <LiangFang> this is a bug that when doing db purge, foreign key error will happen
14:30:51 <LiangFang> because
14:30:52 <abhishekk> (i will quit now, will go through logs later)
14:30:58 <jokke_> #link https://review.openstack.org/#/c/617889/
14:31:01 <LiangFang> ok
14:31:04 <jokke_> abhishekk: kk, take care buddy
14:31:18 <abhishekk> jokke_: ty
14:32:27 <LiangFang> the bug is the table task_info has the records but
14:32:32 <LiangFang> sorry
14:32:36 <jokke_> LiangFang: so I really really suck with databases.
14:32:38 <LiangFang> bug is https://bugs.launchpad.net/glance/+bug/1803643
14:32:39 <openstack> Launchpad bug 1803643 in Glance "task_info FK error when running "glance-manage db purge"" [Medium,Confirmed] - Assigned to Liang Fang (liangfang)
14:32:52 <jokke_> Mind explaining it like I'm a 5-year-old? :P
14:33:48 <jokke_> where do we actually break? I was looking at the bug and I can see failure in logic/functionality but no errors in the bug description
14:35:27 <LiangFang> when there are "deleted" records in the tasks table and we try to delete those records from tasks
14:36:04 <LiangFang> an exception will happen, like this: DBError detected when purging from tasks
14:36:33 <LiangFang> because the task_info table still has records that depend on the tasks table
14:36:59 <jokke_> ok, that makes sense. Does that fail the whole purge or does it just affect the tasks part of it?
14:37:01 <LiangFang> so we need to purge task_info first
14:37:14 <jokke_> and/or does it barf something else as well?
14:37:59 <LiangFang> it will interrupt the db purge cmd
14:38:08 <jokke_> kk
14:38:19 <LiangFang> so the db cannot be purged as expected
14:38:26 <LiangFang> :)
14:38:34 <LiangFang> sorry for my bad english
14:38:37 <jokke_> well in that case let's get it fixed. The patch seems to be quite straightforward
14:39:01 <LiangFang> yes
14:39:07 <jokke_> thanks for looking into it
14:39:29 <LiangFang> thanks
14:40:06 <LiangFang> that's all for this patch
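[editor's note] The ordering fix discussed above can be demonstrated with a standalone sketch (not glance-manage's actual code): delete the dependent task_info rows before the soft-deleted tasks rows they reference, so the purge never violates the foreign key constraint.

```python
# Standalone sketch (not glance-manage's actual code) of the fix: delete
# dependent task_info rows before the soft-deleted tasks rows they
# reference, so the purge never violates the foreign key constraint.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, deleted INTEGER)")
conn.execute("CREATE TABLE task_info "
             "(task_id INTEGER REFERENCES tasks(id))")
conn.execute("INSERT INTO tasks VALUES (1, 1)")   # soft-deleted task
conn.execute("INSERT INTO task_info VALUES (1)")  # row depending on it

# Deleting from tasks first would raise an IntegrityError here because
# task_info still references row 1; purging the child table first works.
conn.execute("DELETE FROM task_info WHERE task_id IN "
             "(SELECT id FROM tasks WHERE deleted = 1)")
conn.execute("DELETE FROM tasks WHERE deleted = 1")
conn.commit()
```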
14:40:07 <jokke_> #topic allowing to add location only when location does not exist
14:40:13 <jokke_> #link https://review.openstack.org/#/c/617891/
14:40:34 <LiangFang> yes, rosmaita has given comments on this patch
14:40:46 <LiangFang> They seem to make sense
14:41:02 <jokke_> yes he seems to have -2 there and his reasoning is very valid
14:41:11 <LiangFang> yes
14:41:26 <LiangFang> I agree with him, and I'm about to abandon this patch
14:41:44 <LiangFang> but I'm waiting for confirmation from the co-author
14:42:13 <LiangFang> I'm working on the patches that need to go upstream
14:42:40 <LiangFang> but the original author is from wind river
14:42:44 <jokke_> cool, yeah I definitely share his concerns about that patch and it would break the API quite horribly (in a sense that it makes the API to work very very unpredictably)
14:43:38 <jokke_> #topic Don't percent-encode colon in HTTP headers
14:43:48 <LiangFang> could you please comment your concern on this patch? thanks
14:44:53 <jokke_> So there was some discussion about these before and we seem to be doing some quite horrible stuff when encoding the headers. We basically encode stuff that doesn't need to be encoded and don't properly decode it on the other end
14:45:03 <jokke_> LiangFang: yeah, I'll add my comment there
14:45:08 <LiangFang> thanks
14:45:39 <jokke_> I think the comma is just another example of the behavior and the fix is likely not fully covering
14:45:53 <jokke_> but it would be decent bandaid for the issue seen now
14:46:59 <jokke_> I'll check the backport and see it through
14:47:04 <LiangFang> I have no knowledge on this issue ...
14:48:02 <jokke_> basically we started percent-quoting request headers to be able to follow the RFC for HTTP headers
14:49:00 <jokke_> problem is that the lib we use for that is not exactly meant to be doing that, so it quotes certain characters in the <127 ASCII code range, which in turn breaks tons of stuff on the receiving end
14:49:25 <jokke_> and seems like only way to indicate safe chars to it is providing it a list of them
14:49:51 <jokke_> so every now and then we discover new characters it's quoting that happens to break some parts of our api
14:50:41 <jokke_> not great, but at least we're breaking only those requests instead of the majority of our headers as we used to
14:50:55 <jokke_> #link https://review.openstack.org/#/c/618362/
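[editor's note] The over-quoting described above can be reproduced with Python's `urllib.parse.quote` (this is an illustration, not glance's actual code; the header value is made up): printable ASCII such as ':' and ',' gets percent-encoded unless each character is explicitly whitelisted via `safe`, which is why new breaking characters keep being discovered one by one.

```python
# Illustration of the over-quoting described above (not glance's code):
# quote() escapes printable ASCII such as ':' and ',' unless each
# character is explicitly listed as safe.
from urllib.parse import quote

header = "rbd://pool/img:snap, backend=ceph"  # made-up header value

over_quoted = quote(header, safe="")       # ':' '/' ',' '=' all become %XX
preserved = quote(header, safe="/:,= ")    # explicit safe list keeps them
```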
14:51:17 <jokke_> #topic Open Discussion
14:51:38 <jokke_> there was pointer here. The Community Goals discussion is going on again
14:52:02 <LiangFang> ack
14:52:06 <jokke_> #link http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000055.html
14:52:23 <jokke_> please follow up and do provide feedback.
14:52:39 <LiangFang> ok
14:53:02 <jokke_> another thing left on the agenda is something quota related, was this from you as well, LiangFang?
14:53:20 <LiangFang> not from me
14:53:32 <jokke_> ok ... who added this?
14:53:56 <LiangFang> it seems to have been added while the meeting was open
14:55:46 <jokke_> well just to answer the question as I understand it: no, there are no quotas for sharing images, and no, we do not want one. This is the exact reason why image sharing/accepting happens via out-of-band communication, so that it does not spam anyone's image lists
14:57:04 <jokke_> so whoever added the question there and hopefully is reading the meeting logs: if that did not answer the question or you have something to add/needing clarification, please get back to us with it
14:57:12 <jokke_> Anything else?
14:57:35 <LiangFang> none from me
14:58:16 <jokke_> ok ... going 1st
14:58:28 <rambo> hi, all. I  have a question about use shared image with each other, Do we have the quota limit for requests to share the image to each other? For example, someone shares the image with me without stop, how do we deal with it?
14:58:51 <jokke_> rambo: see my answer 3 lines up ;)
14:59:06 <jokke_> might be 5 even
15:00:08 <jokke_> And we're out of time, thanks everyone and #openstack-glance serves if something pops in between
15:00:15 <jokke_> #endmeeting