14:00:14 <abhishekk> #startmeeting glance
14:00:16 <openstack> Meeting started Thu Nov 14 14:00:14 2019 UTC and is due to finish in 60 minutes.  The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:18 <abhishekk> #topic roll call
14:00:20 <openstack> The meeting name has been set to 'glance'
14:00:30 <davee_> o/
14:00:33 <abhishekk> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:41 <abhishekk> o/
14:00:54 <rosmaita> o/
14:01:21 <smcginnis> o/
14:01:43 <jokke_> o/
14:01:47 <yebinama_> o/
14:01:54 <abhishekk> all are here, let's start
14:02:15 <abhishekk> #topic Updates
14:02:35 <abhishekk> Last week we had a good summit and PTG
14:02:50 <abhishekk> Below is a summary of what happened at the summit and PTG
14:02:59 <abhishekk> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-November/010752.html
14:03:16 <smcginnis> Thanks for the nice recap. I appreciated being able to catch up with that.
14:03:33 <abhishekk> smcginnis, ty
14:04:06 <abhishekk> also, we have come up with planning and priorities, which are captured at the end of this etherpad
14:04:16 <abhishekk> #link https://etherpad.openstack.org/p/Glance-Ussuri-PTG-planning
14:04:44 <abhishekk> I will submit a patch for ussuri priorities this week for the same
14:05:02 <abhishekk> Lets move to next topic
14:05:05 <rosmaita> question: have we removed v2 from the glanceclient?
14:05:35 <jokke_> rosmaita: ehh, nope, not that I know of
14:05:43 <abhishekk> rosmaita, nope
14:05:47 <rosmaita> yeah, i didn't think so
14:05:54 <rosmaita> the priorities look pretty full
14:06:09 <abhishekk> rosmaita, yes :D
14:06:10 <rosmaita> but you might want to slap that in there just in case we get someone else
14:06:32 <jokke_> rosmaita: why on earth would we want to remove v2 from the client?
14:06:35 * smcginnis loves removing code :)
14:06:35 <abhishekk> rosmaita, v2 or v1?
14:06:43 <rosmaita> no bugs
14:06:48 <rosmaita> i meant v1
14:06:53 <abhishekk> :D
14:06:54 <rosmaita> (was thinking of cinder)
14:07:19 <davee_> rm -rf *.*  should do it
14:07:26 <jokke_> ;)
14:07:30 <rosmaita> i may have to stop attending these meetings, it is too confusing
14:07:38 <abhishekk> I will include this in the priorities and report a bug with the low-hanging-fruit tag for the same
14:07:48 <rosmaita> sounds good
14:07:56 <abhishekk> Cool
14:08:15 <abhishekk> #action abhishekk to include this in priorities and report a bug with the low-hanging-fruit tag for the same
14:08:33 <abhishekk> moving ahead
14:08:43 <abhishekk> #topic release/periodic jobs update
14:08:48 <jokke_> yeah, I'm at least much more interested in getting rid of the registry atm vs the v1 client. It doesn't hurt much there and hasn't been causing a maintenance burden
14:09:23 <abhishekk> jokke_, right
14:09:24 <jokke_> and there's still some very old deployments out there
14:10:15 <abhishekk> As we speak, the Ussuri milestone is just 3 weeks away
14:10:43 <jokke_> and we have a lot on the U1 target list
14:10:46 <abhishekk> it will be around 09-13 December and we need to do lots of stuff in this milestone
14:11:26 <abhishekk> I have contacted greg, who submitted the multiple imports spec, and he is still willing to work on it
14:11:51 <yebinama_> Yes I confirm, I'm here :)
14:12:04 <abhishekk> yebinama_, great, welcome
14:12:11 <jokke_> yebinama_: \o/
14:12:18 <yebinama_> Thanks
14:12:36 <yebinama_> I had some questions regarding the PTG discussion on this topic
14:12:54 <abhishekk> yebinama_, yes, we can discuss this during Open discussion
14:12:57 <yebinama_> Can I ask now or wait for the open discussion?
14:13:02 <yebinama_> ok thanks
14:13:31 <abhishekk> Similarly, I will write a spec for copying an existing image into multiple stores early next week
14:13:44 <yebinama_> great
14:14:18 <abhishekk> moving ahead, periodic jobs are all green except two transition-related failures and a post failure
14:14:53 <abhishekk> jokke_, as discussed at the PTG I have tried to analyze the code around the fixture, but I am not able to reproduce this issue locally any more
14:15:18 <abhishekk> earlier it was failing in my local environment every time but suddenly it has stopped
14:15:40 <abhishekk> so I am not able to debug and analyze it
14:15:52 <abhishekk> so I will keep this aside for some time
14:16:34 <abhishekk> moving ahead
14:16:50 <abhishekk> #topic Open discussion
14:16:56 <abhishekk> yebinama_, stage is yours
14:17:18 <yebinama_> :)
14:17:41 <tosky> (after that and if there is still some time, I have some questions, so please don't quickly close the meeting :)
14:17:49 <yebinama_> So I had some questions to clarify some points
14:17:55 <abhishekk> tosky, ack
14:18:07 <abhishekk> go ahead
14:18:33 <yebinama_> Regarding the flag to bypass the image activation on location
14:18:42 <yebinama_> What is this about?
14:20:32 <abhishekk> yebinama_, if we specify multiple stores in the request then we should set the image to active only when it is imported/uploaded to all stores
14:21:41 <abhishekk> so when we set the image location it becomes active; to avoid this and maintain backward compatibility we should set a flag and restrict the image from becoming active unless it is uploaded to all stores
14:22:50 <yebinama_> Why not simply update locations at once when all uploads have been treated?
14:23:05 <jokke_> yebinama_: I haven't looked into it too much on the code level yet, but like abhishekk said, currently we do activate the image when the first location is added to it. We need to be able to postpone that to the point where we know whether we're failing the import or not
14:24:28 <jokke_> yebinama_: that's one option, but it will cause issues if we lose the glance node during the import process. We end up with image data in stores that we have no record of, as the locations are not in the database. Which means that if someone deletes that image to try again, that ghost data will stay lingering in the store
14:24:46 <jokke_> if we do have the locations updated, we will delete the data from the store upon image delete
14:25:10 <abhishekk> right
14:25:16 <yebinama_> ok fine
14:25:53 <abhishekk> yebinama_, also we need to find out how delete works when import is in progress
14:26:17 <yebinama_> Should we accept a delete request if import is on progress?
14:26:28 <yebinama_> Or send a 409?
14:27:11 <rosmaita> i believe we accept a delete while an image is in 'saving' status now, so should probably accept the delete
14:27:13 <abhishekk> yebinama_, we don't have a locking mechanism or a way to find out whether any operation is still in progress on that particular image id
14:28:23 <abhishekk> agree, we should accept the delete request and raise HTTPGone from the current request if the staging data is not available any more
14:28:27 <jokke_> rosmaita: right, and I think when we are importing into multiple stores, we need to check before starting a new upload that the image is not "deleted"
14:28:49 <rosmaita> yes, that would be smart
14:29:03 <jokke_> so we can signal the taskflow to stop instead of keep uploading the data
14:29:09 <abhishekk> ++
14:29:44 <smcginnis> Agree we need a way to cancel that. Would hate to have to upload a large image that we know will just be deleted.
14:30:29 <abhishekk> yebinama_, anything else?
14:30:31 <jokke_> smcginnis: the bigger problem is that if we process the delete and keep uploading, we won't delete from those stores so we end up again with untracked ghost data
14:30:51 <smcginnis> That would be very bad too!
14:31:08 <jokke_> it might cause a revert if the location add fails though
14:31:22 <jokke_> so we might already have a failsafe for that
14:31:24 <abhishekk> jokke_, that will not happen, as on delete it deletes the data from the staging area too
14:31:27 <jokke_> but still
14:31:57 <jokke_> abhishekk: yeah, but the data is really not deleted from staging as long as we have a file descriptor open to it.
14:32:12 <jokke_> we just need to be careful
14:32:16 <abhishekk> ack
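To make the abort-on-delete idea above concrete, here is a minimal runnable sketch. All names here (FakeImageRepo, upload_to_store, the status strings) are hypothetical stand-ins, not actual glance internals; the real implementation would live in a taskflow task:

```python
# Illustrative sketch only: the repo and uploader are placeholders,
# not real glance code.

class FakeImageRepo:
    """Stands in for a DB-backed image repository."""
    def __init__(self, status="importing"):
        self.status = status

    def get_status(self, image_id):
        return self.status


def import_to_stores(repo, image_id, stores, upload_to_store):
    """Upload image data to each store, aborting if the image is deleted.

    Re-checks the image status before every per-store upload so a
    concurrent delete stops the flow instead of leaving untracked
    ghost data in the remaining stores. Returns the stores uploaded to.
    """
    done = []
    for store in stores:
        if repo.get_status(image_id) == "deleted":
            break
        upload_to_store(image_id, store)
        done.append(store)
    return done
```

With a delete simulated after the first store, the loop stops instead of pushing data to the remaining stores.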
14:32:24 <yebinama_> ok
14:32:44 <yebinama_> and last one
14:32:49 <jokke_> yebinama_: other than that it should be pretty straight forward
14:33:00 <jokke_> feel free to ping me with any questions
14:33:00 <yebinama_> About " Is there need for internal plugin (only available on create-image-via-import, on /v2/import)"
14:33:22 <yebinama_> jokke_: ok thanks
14:33:41 <yebinama_> what do you mean by internal plugin?
14:34:04 <jokke_> yebinama_: yes, so we were discussing whether it actually makes sense to create a new import method rather than just add the functionality to the existing ones
14:34:52 <yebinama_> What I've already implemented (based on the first spec)
14:34:55 <jokke_> and we came to the conclusion that it makes more sense to add it to the current ones (so we just need to implement a task that does the looping in the taskflow if the store list is in the import request)
14:35:10 <jokke_> yebinama_: which way did you do it?
14:35:16 <yebinama_> I have kept the same method
14:35:25 <jokke_> yebinama_: great
14:35:31 <yebinama_> and call 2 new methods in it
14:35:34 <abhishekk> jokke_, he has PoC reference in the specs
14:35:46 <yebinama_> one for multistore and one for old behaviour
14:35:48 <jokke_> I will need to take a look on that
14:36:06 <abhishekk> #link https://review.opendev.org/#/c/667132/
14:36:12 <yebinama_> yep i've already uploaded some code
14:36:22 <jokke_> yebinama_: cool. As we don't want to end up with "glance-direct-to-multistore" and "web-download-to-multistore" :P
14:36:39 <rosmaita> jokke_: ++
14:36:49 <yebinama_> Agree :))
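For context, the shape being discussed keeps the existing import call and just adds a store list to the request body. This is an illustrative sketch based on the spec under review; the field name "stores" and the store identifiers are assumptions, not the final API:

```
POST /v2/images/{image_id}/import
{
    "method": {"name": "glance-direct"},
    "stores": ["ceph-fast", "swift-archive"]
}
```

The same body with "method": {"name": "web-download"} would cover the other existing method, which is the point of not adding "-to-multistore" variants.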
14:37:03 <abhishekk> jokke_, we also need to figure out how we trigger the taskflow from upload in case nova and cinder send us a base image id
14:37:56 <jokke_> abhishekk: that is outside of the scope for this for now. we need to do quite a bit of piping for that feature to become reality
14:38:12 <abhishekk> jokke_, yes
14:38:54 <yebinama_> last one
14:39:08 <yebinama_> Regarding policies
14:39:23 <yebinama_> I don't think we need to add a new one
14:39:27 * jokke_ runs!
14:39:52 <abhishekk> :d
14:39:53 <yebinama_> :)
14:39:57 <yebinama_> Since we can already choose a store to upload to
14:40:15 <jokke_> yebinama_: I do agree, for this specific feature there is no need to do policy work
14:40:28 <yebinama_> great :)
14:40:52 <yebinama_> that's all for me
14:41:06 <yebinama_> thanks for the feedback
14:41:17 <jokke_> we got some ideas/requests to create policies to control who is able to create images in specific stores, but I totally agree, it should not be a prerequisite for this feature to land
14:41:18 <abhishekk> cool, feel free if you have any more questions
14:41:27 <jokke_> it will affect it later when implemented
14:41:46 <abhishekk> yebinama_, which timezone are you in?
14:41:55 <yebinama_> abhishekk: ok thanks
14:41:55 <abhishekk> tosky, your turn now
14:42:02 <yebinama_> GMT+1 Paris
14:42:15 <tosky> I will be very quick
14:42:24 <abhishekk> yebinama_, cool
14:42:27 <jokke_> tosky: no worries, you have 18min :P
14:42:28 <tosky> I proposed 3 test-related backports for stein
14:42:58 <tosky> the base one is https://review.opendev.org/#/c/691308/, rosmaita already commented; I think it was simply a slip because this change was really intended for stein
14:43:43 <tosky> the other two depend on it; once you remove py35, the other jobs should be fixed too: https://review.opendev.org/#/c/691313/ (this is not a backport, but it's a branch-specific fix) and https://review.opendev.org/#/c/691367/
14:43:45 <rosmaita> abhishekk can approve that now, he was just added to the stable-maint team
14:44:16 <tosky> and that's it, it was just a small ping for them, so that the functional tox test environment works out of the box with python 3.6 as expected
14:44:46 <abhishekk> ty rosmaita
14:45:11 <abhishekk> tosky, will have a look at them asap
14:45:20 <rosmaita> abhishekk: don't know if you saw matt r's email, make sure you read through https://docs.openstack.org/project-team-guide/stable-branches.html
14:45:22 <jokke_> we should not merge that
14:46:03 <abhishekk> rosmaita, ack, I have seen it and also aware of these guidelines
14:46:06 <jokke_> Just commenting on the patch, but if py35 is not present and we're fixing those envlists we should not leave the py35 env there ;)
14:47:10 <rosmaita> they don't hurt anything
14:47:14 <tosky> uhm, that's true; at least it's not superharmful, I was trying to not touch too many things
14:47:15 <rosmaita> the gate doesn't run 'tox'
14:47:29 <rosmaita> it runs for particular envs
14:48:07 <rosmaita> i imagine centos still has 3.5
14:48:26 <abhishekk> is it?
14:48:30 <jokke_> and funny enough, looking at the check job list, openstack-tox-py35 runs and passes
14:50:08 <jokke_> so either that is named wrong and is actually running py36 (wouldn't be the first time) or the py35 interpreter is still available somewhere :P
14:50:31 <abhishekk> agree with jokke_ here, if the py35 environment is not available it should not be included
14:50:55 <tosky> another patch then?
14:51:31 <jokke_> tosky: if you just remove the 35 envs from the list and definitions we can push that through today I assume
14:51:42 <jokke_> tosky: it looks otherwise very reasonable
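The cleanup being asked for would look roughly like this in tox.ini (illustrative before/after fragment; the real stein envlist and testenv sections contain more entries):

```ini
# before: py35 still listed even though no 3.5 interpreter is expected
[tox]
envlist = py35,py36,py37,pep8

# after: py35 dropped from envlist (any py35-specific [testenv] removed too)
[tox]
envlist = py36,py37,pep8
```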
14:52:26 <rosmaita> slightly related, we need to keep an eye on https://review.opendev.org/#/c/693951/
14:52:40 <rosmaita> we aren't installing constraints and requirements correctly
14:52:42 <tosky> jokke_: so changing the first patch in the stack?
14:52:55 <jokke_> or we can merge that cherry-pick and put another patch out, as it seems that abhishekk's original patch already left them behind in Train
14:53:01 <jokke_> tosky: wait a sec
14:53:06 <jokke_> because of ^^
14:53:28 <tosky> oh, sure, if they are still in train they should be removed first there (and backported)
14:53:33 <jokke_> abhishekk: rosmaita: what do you think?
14:53:59 <tosky> there is another patch that I could backport
14:54:05 <jokke_> https://review.opendev.org/#/c/648614
14:54:11 <abhishekk> +1
14:54:54 <jokke_> ok, I can approve this one now and we can do the cleanup as follow-up for backport clarity
14:54:58 <tosky> this one: https://review.opendev.org/#/c/664475/
14:55:03 <jokke_> thanks tosky
14:55:43 <abhishekk> we have 5 minutes left
14:57:21 <abhishekk> anything else to discuss?
14:58:23 <abhishekk> Thank you all, see you in next meeting
14:58:37 <rosmaita> bye!
14:58:38 <yebinama> Bye
14:58:55 <abhishekk> #endmeeting