14:01:06 #startmeeting glance
14:01:07 Meeting started Thu Jan 28 14:01:06 2021 UTC and is due to finish in 60 minutes. The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:11 The meeting name has been set to 'glance'
14:01:12 #topic roll call
14:01:17 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:23 o/
14:01:41 o/
14:02:00 lets wait a couple of minutes for others
14:03:20 sorry i'm late
14:03:28 no worries
14:03:43 I guess dansmith will be late as well due to daylight saving
14:03:46 lets start
14:04:02 #topic release/periodic jobs update
14:04:07 Final release of non-client libraries - 4 weeks
14:04:21 We don't have major changes in glance-store this cycle
14:04:28 so we should be good on this front
14:04:36 Milestone 3 - 5 weeks
14:05:00 that is coming up soon!
14:05:11 We have a couple of specs to review/approve and need to focus on the RBAC stuff in this milestone
14:05:43 yeah, lots of work to do :/
14:05:51 Periodic jobs - green this week
14:05:57 Moving ahead
14:05:59 nice
14:06:16 #topic Important specs reviews
14:06:35 Distributed image import
14:06:47 #link https://review.opendev.org/c/openstack/glance-specs/+/763574
14:07:06 So there are two approaches for this
14:07:09 Lets lead the elephant into the store right away.
14:07:17 :D
14:07:35 Abhishek described my motivations pretty well in his comment earlier this week
14:07:52 1. Use the locations table to store staging information and reuse most of the code
14:08:18 2. Use a reserved property to store the staging host
14:08:31 I have added my findings on both approaches on the spec
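
(For reference, a rough sketch of the two approaches under discussion. This is illustrative only, not code from the spec or either PoC; the names worker_self_url and os_glance_stage_host and the exact location fields are hypothetical.)

    # Approach 1: track the staged data as an entry in the image's
    # locations list, keeping the worker that holds it in the location
    # metadata (hypothetical shape).
    image.locations.append({
        'url': 'file:///tmp/staging/%s' % image.image_id,
        'metadata': {'store': 'staging', 'worker': worker_self_url},
        'status': 'queued',
    })

    # Approach 2: record only which worker holds the staged data, in a
    # reserved (not user-settable) image property.
    image.extra_properties['os_glance_stage_host'] = worker_self_url
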
14:08:38 The way I see it, we have pretty much three options for how to move on:
14:09:21 go ahead
14:09:25 1) we utilize the locations as outlined in the spec, which will allow us to clean up lots of technical debt we've had hanging around since we introduced the Interoperable Image Import
14:11:08 2) We acknowledge that we have absolutely no intention of clearing the debt we have been accumulating when the opportunity comes, continue with Dan's quick-and-easy proposal and remove all the TODO remarks etc. And we stop merging MVPs in the future, expecting that when a feature is worked on it needs to be complete, as we will never finish it anyway.
14:12:04 3) We agree that we haven't come to an agreement on how to handle this by the spec deadline, and include this work to be bikeshed again in Xena.
14:12:39 well, #3 is not a good option
14:12:57 For option 1, we don't have much time in hand
14:13:15 And I'm by no means blaming Dan or anyone in specific for us accumulating that tech debt, I'm the first to admit being as guilty as any. It's just not sustainable when we never get back to the items we agree to postpone
14:14:03 I would like to go with option 2 but in a good manner. We should go with the reserved property and come back in Xena to use locations
14:14:27 I will be volunteering to do this work in Xena
14:14:36 abhishekk: you know as well as I do that it will never happen
14:15:21 And we're doing the work multiple times just because we've been bikeshedding and not having the conversation of why for months
14:15:38 sadly it's true, but I will try my best to do it
14:15:41 so if we decide to tackle this in Xena, fine by me, lets do it in Xena
14:16:19 rosmaita, do you have any suggestions?
14:16:36 when you say "tackle this" you mean the refactoring, or the whole spec?
14:16:42 Or lets get the spec agreed upon and I can get to work to make it happen. I'm more than happy to take Dan's PoC and refactor it to clean up the mess we left behind earlier
14:17:14 ok, that's the key question ... do you think there's time to do that in W?
14:17:23 rosmaita: I'm saying if we decide to finish this in Xena, lets push this work to Xena and stop wasting our Wallaby putting any of it in.
14:18:49 rosmaita: yeah, the changes, especially as abhishekk pointed out that we could use the locations metadata instead of having a dedicated column, are fairly trivial. It touches a few places like staging, import, delete, but it's not weeks of work taking Dan's work as the basis
14:19:33 jokke, if possible could you push your PoC on top of dan's work?
14:19:42 i would be against pushing this to Xena (where "this" == "Distributed image import")
14:19:49 Most of it is deleting 20 lines of hacky code and replacing it with a single location call.
14:20:08 in all of those places
14:21:04 quick question ... what do image locations look like now? I mean a JSON-ish sketch
14:21:24 like, is is_active its own field?
14:21:32 or is it in the location metadata?
14:21:45 iirc it is its own field
14:21:47 Ok, so if jokke manages to push his PoC around/before next week's meeting, then we will take a call accordingly in next week's meeting
14:22:12 is_active is a flag in code which tells the image whether to set itself active or not
14:22:48 ohh, that's why it's fying in the code on it's own rather than part of the metadtaa
14:22:59 flying ... metadata
14:23:09 kind of
14:23:27 I'd definitely like to see the 20 lines it removes and replaces with one, because maybe that will help me understand
14:23:32 I looked at the store code last night,
14:23:51 trying to figure out how I could fix the current bug of not cleaning up stage residue after a stage->delete
14:24:14 and it looks to me like it will require new store methods and support, to be able to list items in the store so we can compare against deleted images
14:24:28 but basically when staging we want to have something like (data, is_active=False, staging=True), and if staging == True: skip hashing and set the staging host in the metadata
14:24:31 it definitely does not seem like a flexible interface for something that can never be more than a local file on disk to me
14:25:42 so we're going to use the store metadata (but create an entry first) instead of the image metadata which is already there, and protected (by my other series)?
14:25:49 dansmith: that's why if we have a location record for the data, when we iterate through that, we can just fire a delete-from-store call to the host listed in the metadata and get that cleaned. Without doing it that way we would need pretty much a new api for it
14:26:27 we don't have to add a new api for it, my series proxies delete
14:26:33 dansmith: the location metadata is all internal anyways. It's not something the user would be able to set in the first place
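
(A JSON-ish sketch of a single image location record as it exists today, answering the question above; note that status is a field of the record itself, while is_active is only a flag in code. Values are placeholders.)

    {
        "url": "rbd://<cluster-id>/<pool>/<image-id>/snap",
        "metadata": {},
        "status": "active"
    }

(Under the locations approach, a staged copy would get its own record whose metadata names the worker holding the data; the metadata keys here are hypothetical, since that shape is exactly what is being debated.)

    {
        "url": "file:///tmp/staging/<image-id>",
        "metadata": {"store": "staging", "worker": "<worker-url>"},
        "status": "queued"
    }
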
14:26:38 which has to happen anyway
14:27:29 I think we should wait for the PoC, which will help all of us get a better understanding
14:27:40 like dansmith's
14:27:46 sure, I'd definitely like to see a PoC
14:28:04 and if it's really less work or complexity, a delta patch on top should make it super easy
14:28:07 Based on the progress I will take a call in the next meeting
14:28:19 but I haven't even gotten any real review on my series yet,
14:28:26 it's tempest-tested, devstack-supported,
14:28:48 and given that rate, I'd be super concerned about landing this in this cycle with any more confusion or discussion of approach
14:28:56 sorry for that, I was away most of the time, will try to put some time in this week
14:29:15 so it surely seems like if what we care about is fixing this, we should plan to land this, with or without a delta on top to see the locations as a better way
14:29:37 abhishekk: yeah I know, it's just frustrating
14:29:39 dansmith: I'm more than happy to -2 that series with note of waiting ofr the psec to be agreed upon if it makes you to feel any better ;)
14:29:49 for .. spec
14:30:18 jokke: do you have a new keyboard? more typos than usual
14:30:25 jokke: if you think that's the best way...
14:30:40 Lets conclude, revisit this next week
14:30:49 rosmaita: kind of, on the laptop kb atm.
14:30:56 next spec is Task show API
14:31:05 #link https://review.opendev.org/c/openstack/glance-specs/+/763740
14:31:08 i wonder if we should have a quick videoconf tomorrow maybe
14:31:15 i hate to see this pushed off another week
14:31:16 I think I have covered all the suggestions now
14:32:03 rosmaita, ack, we need to justify both of the approaches, which will be cleared up around next week
14:32:08 abhishekk: my +1 on that is implied given that I submitted recently, but happy to +1 again if you want
14:32:48 dansmith, thanks, I will start on the implementation if I get a nod from all the reviewers here
14:33:08 It should take a couple of days for me to finish, will start on it anyway
14:33:36 rosmaita, jokke please have a look
14:34:01 abhishekk: sorry, i will review after my next meeting
14:34:02 moving ahead
14:34:10 rosmaita, thanks
14:34:14 #topic Secure RBAC status
14:34:39 If distributed image import is the elephant in the room then this is the hippo standing in the same room
14:34:56 We have a PoC here, https://review.opendev.org/q/project:openstack/glance+topic:secure-rbac
14:35:06 this room is getting pretty crowded
14:35:16 and I am testing it and recording results here, https://etherpad.opendev.org/p/glance-rbac-testing
14:35:41 I would like to have other eyes on this as well
14:36:05 also https://review.opendev.org/c/openstack/glance/+/764074
14:36:21 the nova-ceph-multistore job is failing continuously on this patch
14:36:59 I didn't get time to dig into it, but we also need to get this in to satisfy the upstream goal for W
14:37:19 abhishekk: several patches fixed some things in the multistore job earlier this week,
14:37:22 have you rechecked lately?
14:37:32 all cinder-related things, interestingly :)
14:37:45 yes, yesterday
14:37:51 okay, I'll have a look
14:38:00 thank you
14:38:25 Moving on to Open discussion
14:38:31 #topic Open discussion
14:38:46 yeah, most of those rbac patches have been on my waiting list until the tests are fixed
14:39:01 jokke, I will be looking into the tests
14:39:10 I will sync with lbragstad for that
14:39:25 ++
14:39:55 I went through last week's meeting logs and found that the Bug scrub was scheduled on 26th January
14:40:06 oops
14:40:16 which was a public holiday for me, and I was on PTO on the 25th as well
14:40:27 That's what we agreed at the beginning of the cycle, the Tuesday after milestone week ;)
14:40:49 yeah, had no idea it hit on the Indian public holiday
14:40:59 so did anyone do any scrubbing?
14:41:06 So we should have this bug scrub on the coming Tuesday?
14:41:28 yeah, I failed to notify about that :(
14:42:32 i won't be able to join, i have too much going on in cinder
14:42:36 abhishekk: I don't mind the day, we just really should take a day to look through and see what's in there and how/if we should tackle and prioritize them
14:43:00 The reason we agreed on not-Monday was that basically most Irish public holidays hit on Mondays
14:43:08 ack
14:43:16 but the next one is a couple of months away
14:43:37 So lets have the Bug scrub on the coming Tuesday
14:43:55 I will create an etherpad and share it on the #openstack-glance channel
14:44:10 i am not anti-bug-scrub, but i'd prefer to see us do something that will cause us to come to a consensus on the distributed image import work
14:44:33 because right now, i don't see how we will have any new info about that at the next glance meeting
14:45:04 we will have it
14:45:22 abhishekk: we will have what?
14:45:41 the PoC with the locations approach
14:46:09 okay, and as a delta on top of mine?
14:46:14 Next week we will finalize the approach and move ahead
14:46:16 dansmith, yes
14:46:22 oh, ok, so if that's available by Tuesday, we can discuss
14:46:32 that sounds good to me
14:46:36 abhishekk: okay, are you doing that or jokke? I didn't see anyone say they would actually do it
14:47:26 dansmith, it will be jokke, that's what we discussed earlier
14:47:30 would probably be good for jokke to do it, he has a strong opinion on what needs to change to harden the locations code
14:47:34 ok, cool
14:47:43 okay, great
14:47:58 anything else for discussion?
14:48:01 yes,
14:48:14 go ahead
14:48:17 this is pretty trivial and I think everyone agrees, can we please get it reviewed? https://review.opendev.org/c/openstack/glance/+/771990
14:48:26 Yeah, I have a plan ready, I have just been waiting for us to stop arguing and make the go/no-go call for me to start working on it
14:48:42 this is also tested by tempest now: https://review.opendev.org/c/openstack/tempest/+/771071
14:49:08 rosmaita, jokke kindly review
14:50:00 ack
14:50:44 cool, anything else?
14:51:06 shall I just abandon the ceph spec as well?
14:51:31 or do you guys want to keep bikeshedding on that too?
14:51:38 I didn't have time to go through it, will look and comment on the spec
14:52:09 are you talking about this? https://review.opendev.org/c/openstack/glance-specs/+/740980
14:52:27 yes
14:52:31 I haven't seen responses to the outstanding -1 comments
14:52:33 yup
14:53:20 will comment on the spec
14:54:04 Hi, i upgraded a few regions to the Victoria release and i see the below snippets in the logs (this spec was introduced in the Stein release). Is this an issue? Do i need to enable any flags (node_staging_uri) to fix it?
14:54:04 https://opendev.org/openstack/glance/commit/149ea050cc58f39eaf9b4660bb8f0271b99d03da
14:54:38 2021-01-28 07:11:53,935.935 32 WARNING glance.api.v2.images [req-07c8d1ec-f23a-4842-8d92-2e9b3a6aa8b1 4c8fc9f2c67c41e195ddebef32d4fa68 8eba81a5654c4bb2a86fde93ccc33cab - default 049d584cc5064a34ad7defa7b48b7903] After upload to backend, deletion of staged image data has failed because it cannot be found at
14:54:39 /tmp/staging//d21fcb46-27d1-4385-b20f-c2d0acd2d1dc
14:55:16 rajivmucheli, try setting node_staging_uri in your glance-api.conf under the default section
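
(A minimal glance-api.conf sketch for the single-store case discussed here; the directory is illustrative. The default is file:///tmp/staging/, which is why the warning above shows a /tmp/staging// path.)

    [DEFAULT]
    # Stage import data somewhere persistent instead of the /tmp
    # default, so it is still present when import cleans it up.
    node_staging_uri = file:///var/lib/glance/staging
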
14:56:23 sure!
14:56:29 last 4 minutes
14:57:03 rajivmucheli: yeah, the node_staging_uri is needed for the image import 'glance-direct' and copy methods
14:57:21 actually all of them I think
14:57:31 web-download picks it up down there as well
14:57:50 copy methods are available only if multiple stores are configured
14:58:03 but i have only swift enabled,
14:58:12 I think rajivmucheli is still using a single store and that's why node_staging_uri is required
14:59:59 so the flag needs to be set for single or multiple store? i use swift as backend, so it's a single store, right?
15:00:11 rajivmucheli, that flag is required for single store
15:00:22 rajivmucheli: it depends how your stores are configured in your config files
15:00:35 it should start with file:///
15:00:47 okay, would this be related if these flags are enabled: swift_buffer_on_upload = True, swift_upload_buffer_dir = /upload ?
15:00:54 if you have old-style configs that is needed; if the new multi-store approach, then you need to configure the staging store
15:01:12 that has nothing to do with swift
15:01:14 out of time
15:01:22 lets move to the glance channel
15:01:24 thank you all
15:01:29 Thanks all
15:01:30 #endmeeting
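
(For completeness, a sketch of the "new multi-store approach" mentioned at 15:00:54: when enabled_backends is set, staging is configured via the reserved os_glance_staging_store section instead of node_staging_uri. The store name and path are illustrative.)

    [DEFAULT]
    enabled_backends = swift:swift

    [glance_store]
    default_backend = swift

    # Reserved store used for the import staging area.
    [os_glance_staging_store]
    filesystem_store_datadir = /var/lib/glance/staging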