14:00:19 #startmeeting glance
14:00:20 Meeting started Thu May 13 14:00:19 2021 UTC and is due to finish in 60 minutes. The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:24 The meeting name has been set to 'glance'
14:00:24 #topic roll call
14:00:28 o/
14:00:30 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:32 o/
14:00:33 o/
14:00:52 cool, let's wait a couple of minutes more
14:01:29 o/
14:01:59 let's start
14:02:22 #topic release/periodic jobs update
14:02:28 M1 is two weeks away
14:02:39 we have two topics to look after
14:02:48 1. Cache API work (jokke)
14:02:52 I think I have some API changes to write ;)
14:03:12 2. Review and approve quota related specs
14:03:30 jokke, cool, looking forward to it :P
14:03:51 is there a spec for the cache stuff?
14:04:18 periodic jobs are all green except one post failure and one retry
14:04:23 dansmith, yes there is
14:04:40 #link https://review.opendev.org/#/c/665258/
14:05:00 ah, was looking for owner:jokke, got it
14:05:10 ack
14:05:37 * dansmith assumes that needs updating
14:05:47 moving ahead, we will discuss this in open discussion if you have any comments
14:06:33 dansmith, right, me or jokke will update it with the latest understanding
14:06:38 moving ahead
14:06:39 ack
14:06:54 #topic Native Image Encryption (rosmaita)
14:07:06 this is rosmaita
14:07:23 yeah, basically, it's what i said on the agenda
14:07:29 (i'm in a concurrent meeting, sorry)
14:07:53 it's looking like the barbican Consumer API may not be ready until late xena, or early Y
14:08:05 ack
14:08:27 so i proposed that we go ahead and implement the basic stuff anyway, including some CI
14:08:43 +1 from me
14:08:54 at first i was thinking it could be an experimental feature
14:09:14 but i think actually the feature doesn't really need the consumer api
14:09:44 I propose to have a separate spec/lite-spec mentioning this on top of the current approved one
14:10:17 that makes sense to me
14:10:36 ok, i'll bring that back to Luzi and the pop-up team
14:10:39 #link https://review.opendev.org/#/c/609667/
14:10:49 cool, thank you for taking care of this
14:11:03 moving ahead
14:11:17 #topic Implement glance-unified-quotas
14:11:24 #link https://review.opendev.org/c/openstack/glance-specs/+/788037
14:11:32 So we do have a couple of reviews on the spec
14:12:13 and I am OK with the current proposal, with having usage APIs as a follow-up to this effort
14:12:25 I think there's a question about whether or not to split the current count into an active count and a staged count,
14:12:36 but other than that the comments from rosmaita were mostly clarifications
14:12:46 if people want to see the count split, I can update it with that
14:13:00 If anyone has any concerns/suggestions kindly raise them on the spec before next meeting
14:13:18 I think it's better to have the split count
14:13:42 hmm-m I had a quick chat about it with abhishekk last week but forgot to write the review. Will do shortly.
I'm quite surprised/worried how racy the proposal is
14:13:43 okay
14:14:39 jokke: happy to discuss here,
14:14:45 just the fact that with the soft limits and no enforcement, as long as there is any quota left we basically allow a no-
14:15:10 limits situation where someone can just start a bunch of imports and go terabytes over their quota pretty easily
14:15:37 well, that's kinda how soft limits work
14:15:53 they're "stop the bleeding" and not "enforce the last byte of usage"
14:16:25 the way oslo limit works, we will hit keystone each time we check the limit, so if we do that on each chunk read we will generate a ton of traffic back to keystone while doing so
14:17:22 I'd suggest we put a stopgap there once one gets close, say 10% of quota left and you get one concurrent operation or something along those lines
14:17:50 well, rosmaita's suggestion will let you limit to N in-progress imports,
14:18:01 which would let you limit it for imports at least
14:18:09 just to make sure that one does not go "Oh I have only a gig of quota left, I better do all these imports and copy operations right now"
14:18:17 for upload we'd have to do something else, like a quota on images in queued state or whatever
14:19:22 as noted, oslo limit doesn't really provide us a way to determine what our limit is, we tell it what our usage is and what we're trying to consume, and it tells us if we're over the limit or not
14:19:46 so doing a bunch of math (which I think is likely to be confusing to the user if they can't see it) with all the numbers is difficult
14:19:55 Also once we have the basic mechanism in place we can always enhance it as per our use cases
14:20:06 Yeah, that's the other thing I noticed about the proposal. Due to the approach taken, the staging quota is pretty coarse too. It does not take into account, for example, copy jobs
14:20:47 abhishekk: right, and nova used to have a quota system that obsessed over hard limits, and it was ripped out years ago in favor of a counting approach like this which doesn't get out of sync and need maintenance,
14:20:54 so we're pretty in line with that
14:21:09 ack
14:21:29 I think other projects know the size of large things before/when they enforce quota.. since glance doesn't on upload...
14:22:04 jokke: meaning the staging space used when we're doing a copy later?
14:22:15 jokke: as noted on that code patch, I'm gonna do that, I just haven't had time this week
14:22:22 and web-download as well
14:22:47 dansmith: doing staging quota only based on the image state does not account for staging usage during web-download or copy-image
14:22:56 ah, kk
14:23:11 yep, that could be addressed in implementation, but if you want we can also mention staging use during web-download or copy operations in the spec
14:23:46 yeah, I'm kinda skeptical about how transparent that will be, because the space used in that scenario is more internalized,
14:23:47 and the user doesn't really know or understand that we're doing that in the same way they do when they stage/import,
14:24:15 but I can see the mismatch so I'm fine with doing it that way
14:24:16 So I will suggest adding concerns on the spec and getting them addressed as early as possible
14:24:30 ack
14:24:44 yeah, I'll get my comments into the spec review
14:24:45 anything else jokke ?
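To make the oslo.limit behaviour discussed above concrete, here is a minimal sketch of the counting/callback pattern dansmith describes: the service reports its current usage plus the delta it wants to consume, and oslo.limit checks that against the limit registered in keystone on every call (which is why enforcing per chunk would be expensive). This is an illustrative assumption, not glance's actual implementation: the resource name 'image_size_total', the _count_usage helper, and the error handling are made up, and a working setup also needs the [oslo_limit] keystone configuration in place.

    from oslo_limit import exception as limit_exceptions
    from oslo_limit import limit


    def usage_callback(project_id, resource_names):
        # oslo.limit asks the service for its current usage of each resource;
        # _count_usage is a hypothetical helper that would count e.g. total
        # bytes of active (or staged) images owned by the project.
        return {name: _count_usage(project_id, name) for name in resource_names}


    def check_quota(project_id, requested_bytes):
        # Each enforce() call consults keystone unified limits for the
        # registered limit, then compares it against usage + delta.
        enforcer = limit.Enforcer(usage_callback)
        try:
            # Hypothetical resource name; raises if the delta would exceed the limit.
            enforcer.enforce(project_id, {'image_size_total': requested_bytes})
        except limit_exceptions.ProjectOverLimit:
            # Over quota; the API layer would translate this into a 4xx response.
            raise

The point of the pattern is that glance never needs to track a quota counter of its own: it only counts what exists and lets keystone hold the limit, which is the "doesn't get out of sync" property mentioned above.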
14:24:49 cool
14:24:58 moving ahead
14:25:08 #topic Bi-weekly Bug discussion
14:25:28 Cyril is not around due to holidy
14:25:35 *holiday
14:25:47 We have 3 bugs to discuss today
14:25:51 let's start
14:25:59 #link https://bugs.launchpad.net/glance/+bug/1920936
14:25:59 Launchpad bug 1920936 in Glance "Unrelated stack trace when image doesn't exist in store" [Undecided,New]
14:26:34 This bug was reported last cycle while we were working on glance-cinder multistore support
14:26:40 I've been poking the pile of swift bugs this past week, just fyi, the driver is surprisingly broken, to the stage it amazes me it works at all
14:27:07 :o
14:27:18 we can discuss it later
14:27:58 So the above bug is, if the image is not present in any of the configured stores then it raises 503
14:28:34 Rajat has reported it; since it is difficult to reproduce we need to do some tricks for reproducing the same
14:28:59 Either way I guess 503 is not acceptable and we should have a meaningful response
14:29:23 503 is a perfectly valid response
14:29:51 well, the stack trace isn't,
14:29:58 not necessarily correct for the exact situation, but there is nothing wrong with throwing 503 ;)
14:30:02 sorry, 500
14:30:08 and it looks like a bug in trying to call ImageProxy
14:30:50 it's not clear from the bug,
14:30:55 Yep, I will try to have some steps to reproduce this issue and then will work on a fix
14:31:10 but it looks like maybe if you delete the thing on the cinder side then you're stuck and can't even GET or DELETE the image anymore?
14:31:32 I guess so
14:32:22 * abhishekk will work with rajat on this
14:32:27 moving ahead
14:32:32 #link https://bugs.launchpad.net/glance/+bug/1874458
14:32:32 Launchpad bug 1874458 in Glance "Glance doesn't take into account swift_store_cacert" [Undecided,New]
14:32:34 DELETE _should_ work
14:33:09 jokke, as you are working with the swift driver could you also note this down ?
14:33:16 But looks like it might not be Cinder specific.
Looks like we don't handle it well if we have locations but none of them has data for the image
14:33:54 hmm, will try to use this way to reproduce the same
14:34:14 abhishekk: ack, I think it isn't in my current work set
14:34:39 nope it's not, but it will be great if you add it
14:34:56 yeah, I'll have a look
14:35:14 also the bug doesn't show how to configure it, so it does not have much information here
14:35:30 cool, kindly update your findings on the bug
14:35:42 moving ahead
14:35:47 #link https://bugs.launchpad.net/glance/+bug/1924612
14:35:47 Launchpad bug 1924612 in Glance "Can't list "killed" images using the CLI" [Undecided,In progress] - Assigned to Abhishek Kekane (abhishek-kekane)
14:36:39 So our image-list command has the option but it does not list the killed images
14:36:46 I have a WIP patch
14:37:07 but again it showed me that it has another issue as well
14:37:14 glance image-list --property-filter status=killed --property-filter deleted=1
14:37:39 if I pass --property-filter deleted= true|false it does not recognize the value
14:38:04 it only works with an integer and not with a boolean
14:38:19 So maybe that might be an issue in our glanceclient
14:39:11 For the main bug I will write some unit/functional tests so that the patch will be open for review
14:39:20 either in the client or the request deserializer
14:39:27 hmm
14:39:49 I will report this second bug once I find the root cause of it
14:40:35 but I'd say that is not a crazy critical bug, as in there is exactly one way an image ever gets killed and that is if you use a signed image and the signature verification fails upon create
14:41:01 sorry, on upload
14:41:35 jokke, right, but as we have filter support it should work for all the image states
14:41:39 do they show up when you list all, including deleted?
14:41:47 nope
14:42:00 oh, ok ... that's interesting
14:43:06 Other than this, we have pointed out some bugs which are reported against v1 and the registry, so we are going to mark them invalid/close them
14:43:35 that's it from me for today
14:43:47 #topic Open Discussion
14:44:39 Just an update, I will not be around tomorrow
14:45:00 abhishekk: how is your next week looking?
14:45:35 will be there the whole week
14:46:11 ok, shall we, assuming you're well by Monday, sync about the caching API stuff and get it moving from the start of the week?
14:46:26 yep
14:46:42 finish it by the end of the week as well :D
14:46:49 14:00 utc works for you? or do you prefer earlier?
14:47:03 1400 works well for me
14:47:28 kk, I'll have an invite in your calendar by the time you log on Mon for 14:00
14:47:38 great, thank you
14:47:49 anything else
14:48:27 Yeah, so lots of the swift issues are related to keystoneauth having had backwards incompatible breaking changes like a couple of years ago
14:48:53 and we missed them ... and have been blissfully ignorant about the whole thing
14:49:04 do you mean while doing multi store changes ?
14:49:13 so that needs refactoring to the current API and we should have the majority of those issues solved
14:49:31 no I mean keystone changed the keystoneauth api
14:49:34 I guess on Monday we should sync on this as well
14:49:53 and we never caught on to that
14:50:02 ack
14:50:43 that's all from me
14:50:54 ack
14:51:01 let's wrap up for today
14:51:05 thank you all
14:51:10 thanks all!
14:51:11 have a great week ahead
14:51:31 abhishekk: enjoy your time off, hopefully you're feeling better
14:51:47 thank you, fingers crossed
14:51:57 #endmeeting
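As context for the swift_store_cacert bug and the keystoneauth refactoring mentioned in open discussion above, here is a minimal sketch of the session-based keystoneauth1 pattern the swift driver would be moving toward. The endpoint, credentials, and CA bundle path are illustrative values only, and this is not glance_store's actual code or option wiring.

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Illustrative credentials and URLs, not real configuration.
    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='glance',
                       password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')

    # 'verify' accepts a CA bundle path, which is where an option such as
    # swift_store_cacert would need to be plumbed through for TLS verification.
    sess = session.Session(auth=auth, verify='/etc/glance/swift-cacert.pem')

    # The session would then be handed to the swift client, e.g.:
    #   swiftclient.client.Connection(session=sess)

Centralising auth in a single keystoneauth1 Session like this keeps token handling, endpoint discovery, and TLS settings in one place instead of being repeated per call.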