14:00:25 <jokke_> #startmeeting glance
14:00:26 <openstack> Meeting started Thu Mar 14 14:00:25 2019 UTC and is due to finish in 60 minutes.  The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:29 <openstack> The meeting name has been set to 'glance'
14:00:31 <jokke_> #topic roll-call
14:00:34 <jokke_> o/
14:00:56 <jokke_> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:19 <rosmaita> o/
14:01:43 <abhishekk> o/
14:02:20 <lpetrut> o/
14:02:40 <jokke_> hey all
14:03:02 <jokke_> #topic updates
14:03:34 <jokke_> So we're on FF now, after the extended feature-merge window we agreed on last week.
14:04:04 <abhishekk> #link https://etherpad.openstack.org/p/glance-stein-milestone-3
14:04:12 <abhishekk> current status ^^
14:04:30 <jokke_> I think lpetrut's work on the Windows compatibility is the last thing that is not purely bugfix that we will consider
14:04:51 <abhishekk> yes
14:05:00 <jokke_> I don't think we have any FFE requests on anything else. We have some bugfixes pending but in general we're looking pretty good
14:05:08 <abhishekk> also do we need FF for visibility?
14:05:17 <rosmaita> was just going to ask
14:05:25 <rosmaita> that may need to be postponed to train
14:05:30 <jokke_> We need to drop Tempest gating for that
14:05:32 <abhishekk> agree
14:05:38 <lpetrut> thanks a lot for your help on this. some updates on this matter: we're almost done with the CI
14:06:06 <abhishekk> cool
14:06:33 <jokke_> Which is yet again a topic on my PTG list.
14:06:49 <rosmaita> i will help with that, we need a strategy
14:07:14 <jokke_> But lets move on and we can discuss the more pressing current matters under their slots
14:07:23 <jokke_> #topic release updates
14:07:47 <abhishekk> We need to release glance milestone 3 today as agreed last week
14:07:59 <jokke_> Client got released last week on time and we did not release the milestone 3
14:08:17 <abhishekk> python-glanceclient  --> Version 2.16.0 released last week
14:08:56 <rosmaita> \o/
14:09:14 <abhishekk> for milestone 3, we need releasenotes patch, then need to regenerate sample config, and release patch
14:09:16 <jokke_> I think the big question is do we have something we need the milestone 3 tag for?
14:09:21 <jokke_> RC1 is next week
14:10:13 <jokke_> tags are cheap, I'm not saying we definitely should not do that, but do we have any benefit for tagging it?
14:10:25 <abhishekk> is it possible that we can release RC1 directly?
14:10:42 <rosmaita> that's a question for sean, but i believe we can
14:10:49 <jokke_> abhishekk: yes, as of this cycle forward there is no need to tag milestones
14:11:06 <jokke_> only reason we did need the milestone tagging was for the db migrations to work
14:11:35 <abhishekk> cool, then I guess apart from cache-manage utility we don't have any major feature in this release
14:11:58 <jokke_> As for the release model, "Integrated release with milestones" changed to kind of "Integrated release with possible milestones"
14:12:11 <abhishekk> yeah, then we need to submit the patch to open the migration for Train release
14:12:31 <jokke_> yes, that will be around Train-1
14:12:39 <abhishekk> ack
14:13:03 <rosmaita> or when that test breaks the gate
14:13:12 <rosmaita> which i forget what exactly it looks for
14:13:15 <jokke_> or that ^^ ;D
14:13:28 <abhishekk> :D
14:13:43 <jokke_> I think we did merge the instructions into the release liaison document
14:14:05 <abhishekk> jokke_, it was already there
14:14:28 <jokke_> but it definitely should not blow up this week so lets move on unless there is something else about releases?
14:14:50 <jokke_> Ohh, how are the periodic gates looking? Still just sporadic timeouts?
14:15:05 <abhishekk> yes, 6 failures in last week
14:15:15 <abhishekk> all are timeouts
14:15:29 <abhishekk> I have added this topic in PTG discussion etherpad
14:15:37 <jokke_> hmm-m that's like double what we had the week before iirc
14:15:55 <jokke_> sounds like the gate load plays a role in there as well
14:15:58 <abhishekk> we might ask the infra guys for help?
14:16:27 <abhishekk> that's it from release updates topic
14:16:41 <abhishekk> are we going to tag m3 today or RC1 directly
14:16:50 <jokke_> I think it's at least worth discussing. We definitely have the data to show now that we have had a stable situation for a while
14:17:33 <abhishekk> right
14:17:43 <jokke_> Like said I don't mind either way. Does anyone have any compelling arguments for or against tagging m-3?
14:18:56 <jokke_> I guess not. Lets skip it and focus on getting a good RC1 out next week then!
14:18:58 <abhishekk> sorry to say, suddenly there are 8-10 failures for periodic jobs
14:18:59 <abhishekk> functional-py35 create: /home/zuul/src/git.openstack.org/openstack/glance/.tox/functional-py35
14:18:59 <abhishekk> ERROR: InterpreterNotFound: python3.5
14:18:59 <abhishekk> ___________________________________ summary ____________________________________
14:18:59 <abhishekk> ERROR:  functional-py35: InterpreterNotFound: python3.5
14:19:19 <jokke_> ^^ looks definitely like Infra issue
14:19:30 <rosmaita> they changed some images or something recently
14:19:40 <abhishekk> #link http://zuul.openstack.org/builds?pipeline=periodic&project=openstack%2Fglance
14:19:47 <abhishekk> yes
14:19:48 <jokke_> thanks abhishekk
14:19:53 <rosmaita> thierry said something about it on the ML, though not that exact issue
14:20:24 <rosmaita> forget that, it was a gpg key issue
14:20:29 <jokke_> ok, moving on
14:20:39 <jokke_> #topic feature Freeze
14:21:07 <jokke_> So we have those couple of patches lpetrut had to rebase there still hanging
14:21:16 <jokke_> How is it looking to get those reviewed?
14:21:44 <lpetrut> those have entered the gate
14:22:05 <abhishekk> one patch needs to be reviewed where he has added time.sleep in tests
14:22:19 <abhishekk> #link https://review.openstack.org/#/c/643336/1
14:22:27 <lpetrut> yep, I've marked that as WIP as there's still one case that I'll need to fix
14:22:43 <lpetrut> so this got me wondering, did you have such problems before? I mean, due to the timestamps
14:23:04 <abhishekk> AFAIK, never
14:23:12 <jokke_> lpetrut: I can't recall at least. The good thing about that patch is that it's touching tests only
14:23:15 <rosmaita> no, usually the functional tests are slow enough
14:23:18 <lpetrut> interesting, good to know
14:23:56 <rosmaita> what i mean is, we haven't had to worry about instantaneous updates in the functional tests
14:24:07 <jokke_> ^^
14:24:43 <lpetrut> yeah, I'm not sure if it's just very fast or the clock is inaccurate :)
14:25:13 <jokke_> At this point we need to put full focus on nailing the bugs down. Stuff like the test hardenings are great for this time of the cycle
14:25:24 <lpetrut> and sorry to ask, but do gate tests run on bare metal?
14:25:40 <jokke_> Shall we have bug smash around the weekend, Monday maybe?
14:25:55 <rosmaita> lpetrut: i don't think the tests run on bare metal
14:26:01 <jokke_> lpetrut: no they mostly don't
14:26:01 <rosmaita> though, maybe occasionally
14:26:15 <rosmaita> i think in general they are running on "donated" VMs
14:26:34 <abhishekk> yes
14:26:35 <jokke_> there is some baremetal testing but like rosmaita said mainly zuul respins VMs
14:27:15 <lpetrut> got it, thanks
14:27:32 <jokke_> abhishekk: was your yes for lpetrut or bug smash? Sorry I mixed up the discussion here a bit
14:27:48 <abhishekk> bug smash
14:28:04 <jokke_> cool. And I hope you're feeling better
14:28:05 <rosmaita> quick question about lpetrut's patch: how do we feel about the sleeps? just let them through?
14:28:20 <abhishekk> yes, better and better
14:28:23 <rosmaita> they are very small sleeps
14:28:44 <lpetrut> I thought that a few ms should be negligible. mocking would complicate the code and cause other issues
14:29:02 <lpetrut> I can make them optional (e.g. have a separate method for that, something like .add_delay)
14:29:07 <rosmaita> i agree, don't want to mock too much stuff in functional tests
14:29:37 <rosmaita> maybe let's just leave them in so we can get the rest merged and people can start testing it
14:29:46 <rosmaita> do the optimizations as a followup patch
14:29:55 <jokke_> As long as the delays are not in places where we would expect to catch real issues like race conditions I'm fine having them in tests
14:30:31 <rosmaita> well, they are db updates, so race condition city
14:31:24 <jokke_> ok, lets review them carefully and see if we should perhaps make sure the tests reflect real-life scenarios closely enough.
14:31:38 <abhishekk> ok
14:32:02 <jokke_> if it's testing issue I'm fine with delaying them, if it's something we should fix to make sure it's atomic change, lets fix it :D
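The timestamp problem behind those test sleeps can be sketched roughly like this; `touch` and `record` are purely illustrative names, not glance's actual test code:

```python
import time
from datetime import datetime, timezone

def touch(record):
    """Simulate a DB update that stamps updated_at with the current time."""
    record["updated_at"] = datetime.now(timezone.utc)
    return record["updated_at"]

record = {}
first = touch(record)
# On a fast machine (or one with a coarse clock) two back-to-back
# updates can land on the same timestamp, making a "second > first"
# assertion flaky; a few ms of sleep guarantees distinct stamps.
time.sleep(0.005)
second = touch(record)
assert second > first
```

Keeping the sleep in the test (rather than mocking the clock) matches the "don't mock too much in functional tests" preference voiced above.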
14:32:10 <abhishekk> Does lpetrut need to send an FFE mail as per standard?
14:32:47 <jokke_> abhishekk: like said earlier these are testing hardening/bug fixing. Perfect for this time. I don't see new features introduced here
14:33:06 <abhishekk> cool
14:33:18 <rosmaita> yeah, but we need these test fixes for the other stuff to merge, right?
14:33:55 <jokke_> rosmaita: I think they are not a prerequisite for the stuff gating atm, but they are needed for the windows ci to run
14:34:02 <jokke_> lpetrut: correct me if I'm wrong
14:35:25 <jokke_> ok, moving on
14:35:32 <rosmaita> in that case, it would be cool if lpetrut did the optimization of only sleeping if running on windows
14:35:37 <jokke_> #topic py3 glance_store issue
14:35:50 <jokke_> rosmaita: this was yours
14:36:23 <rosmaita> #link https://review.openstack.org/#/c/620234/
14:36:26 <abhishekk> rosmaita, I will try to have a look at test patch today (or may be tomorrow)
14:36:41 <rosmaita> abhishekk: don't, i have another idea on that
14:36:54 <abhishekk> great
14:36:58 <rosmaita> i will be changing the patch a bit, had an idea while at the dentist this morning
14:37:14 <abhishekk> :D
14:37:15 <rosmaita> anyway, i just want to push the above
14:37:25 <jokke_> rosmaita: ok what you had in mind?
14:37:41 <rosmaita> well, mainly i need two +2s
14:37:44 <rosmaita> :)
14:37:57 <rosmaita> or need to find out that this is unacceptable
14:38:04 <rosmaita> i would like to close this out ASAP
14:38:33 <jokke_> Just a bit of background. We had a discussion with rosmaita about this patch and what Tim said about monitoring the result returned from that read
14:39:32 <rosmaita> yeah, the basic idea is that i want to keep this patch as small as possible, that is, don't want it to mask any other problems
14:39:45 <jokke_> So Tim's proposal was, instead of faking the zero read, to check every response we get from the reads and, if it's anything but actual data coming in, replace it
14:39:55 <lpetrut> such a bad time to run out of battery :) without 634007, we cannot run the tests on Windows (currently in the gate), while 643336 fixes unit tests. sorry for going off topic
14:40:36 <jokke_> and that's exactly why we came to the conclusion that faking the zero read, instead of replacing something that executed code returns to us, would be the better way to go
14:41:00 <jokke_> That way we will catch possible future issues while we try to find out what is the actual root cause
14:41:08 <rosmaita> yes, so we just need sean or abhishek to agree with jokke_
14:41:28 <abhishekk> yes makes sense to me
14:41:48 <rosmaita> in the meantime, i am trying to get functional tests that catch this
14:42:13 <jokke_> and we left the bug open, so the commit message is pointing to related-bug not closes-bug
14:42:21 <abhishekk> and the note added by rosmaita makes it easier to understand
14:43:05 <jokke_> so we have a reminder that we actually need to figure out what's tripping us up in the first place. This change will just fix this specific issue so we can release and indeed work with py3
14:43:25 <rosmaita> i'll add some comments to the bug
14:43:43 <jokke_> great, I think this is beaten, lets move on
14:43:43 <abhishekk> great
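The "fake the zero read" idea discussed above can be illustrated as follows; this is only a sketch of the approach, not the actual code in review 620234, and `safe_chunk_iter` is a hypothetical helper:

```python
import io

def safe_chunk_iter(read_fn, chunk_size=65536):
    """Yield data chunks from read_fn, treating anything that is not
    non-empty bytes (e.g. an unexpected None) as end-of-stream, i.e.
    faking a final zero-length read instead of passing garbage on."""
    while True:
        chunk = read_fn(chunk_size)
        if not isinstance(chunk, bytes) or not chunk:
            return
        yield chunk

src = io.BytesIO(b"abcdef")
assert b"".join(safe_chunk_iter(src.read, 2)) == b"abcdef"
```

The point made in the meeting is that stopping on anything that isn't real data (rather than silently substituting a value) still surfaces future anomalies while the actual root cause is tracked under the related bug.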
14:44:02 <jokke_> #topic data remains in staging
14:44:13 <jokke_> abhishekk: this is something you've been working on
14:44:22 <abhishekk> jokke_, yes
14:44:24 <rosmaita> i put it on the agenda for abhishekk
14:44:39 <rosmaita> i want to make sure we are all ok with the way the problem is handled
14:44:43 <jokke_> And I think rosmaita pointed out very well that the staging should not be relying on the deployment's store enablement
14:44:44 <rosmaita> i think it's fine
14:44:45 <abhishekk> yesterday rosmaita and I had a discussion about how to fix this
14:44:59 <rosmaita> and we will be refactoring this for Train
14:45:03 <jokke_> ok
14:45:06 <abhishekk> ++
14:45:13 <rosmaita> but this fix i think needs to go back to rocky
14:45:28 <jokke_> so we go as it is for now and keep it in the agenda of PTG discussions how we do this finally?
14:45:32 <abhishekk> I have tested this for single store and multiple stores and it is working
14:45:38 <rosmaita> so i do have a question for abhishek, though
14:45:53 <abhishekk> please shoot
14:46:11 <rosmaita> you found in testing that using _build_store() to create the store to delete from modified the "real" store
14:46:23 <rosmaita> but why does it work in the api in the /stage call?
14:47:07 <abhishekk> just give me a minute
14:47:14 <rosmaita> sure, or we can discuss later
14:47:30 <rosmaita> just wanted to determine whether we need to change the code in /stage also for now
14:47:41 <rosmaita> i don't think so
14:47:48 <rosmaita> but it is kind of weird, i am missing something
14:48:20 <jokke_> The staging code at least used to work in a way that it created a new store object and overwrote the config on that, so the config change stayed local to that object
14:48:52 <rosmaita> right
14:48:54 <abhishekk> in staging code we are creating file store instance and calling add method of file store directly
14:49:16 <jokke_> it is the worst kind of black magic wizardry you can do in OOP
14:50:07 <jokke_> abhishekk: and I think that was rosmaita's point in the comment, we should do the same on the cleanup. Just create the object with overwritten config and call its delete directly
14:50:38 <abhishekk> to call delete method of filestore we need to pass location object to delete method
14:50:49 <jokke_> instead of relying on the store library to figure out how to get to the path
14:50:56 <jokke_> abhishekk: ohhh
14:51:08 <abhishekk> and it is so weird to create a location object on the fly
14:51:12 <rosmaita> this is related, i think this will be a problem: https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L76
14:51:16 <jokke_> so we would need to do even more hackstery to create that bloody object
14:51:45 <jokke_> ok, then I agree. Lets keep this simple for now and do it properly in Train
14:51:46 <rosmaita> i think get_location_from_uri requires the scheme to be "registered"
14:51:55 <abhishekk> yes, so instead of that I have kept it simple
14:52:42 <rosmaita> i think we need to do the unlink in that _unstage function
14:52:42 <jokke_> abhishekk: would you mind writing me a mail about this whole scenario and the reasons why it sucks? ;) I'd like to have this explained in the known issues section of the release notes
14:52:49 <abhishekk> rosmaita, right, and if I register it and there is a filestore in the 'stores' conf option then all images will be stored in the staging area :D as it will override
14:52:58 <abhishekk> jokke_, sure
14:53:10 <jokke_> great
14:53:48 <jokke_> ok, so lets stick with the current and explain in documentation.
14:53:51 <abhishekk> I will draft a mail within an hour after the meeting
14:54:23 <jokke_> this should get easier once we get to utilize those reserved stores on multistore for it
14:54:35 <abhishekk> yes, way easier
14:54:45 <jokke_> abhishekk: no rush, I need it like before the final release :D
14:54:50 <rosmaita> ok, so i get it ... the problem is having to use the location for delete
14:55:01 <rosmaita> have i mentioned that i really hate image locations recently?
14:55:13 <abhishekk> #MeToo
14:55:14 <abhishekk> :D
14:55:23 <rosmaita> meaning "recently mentioned" not "recently hated"
14:55:25 <jokke_> rosmaita: and not only the location, which we know, but the store delete expecting a properly formed location object
14:55:37 <rosmaita> i have hated them for a very long time
14:55:45 <jokke_> yeah same
14:55:52 <rosmaita> ok, so to summarize:
14:56:00 <rosmaita> creating the store in /stage is fine
14:56:27 <rosmaita> probably a problem with the _unstage method in the image_data controller for the delete, though
14:56:47 <rosmaita> in the task, abhishek's code is doing the right thing to just manually delete the stuff from the staging area
14:56:55 <rosmaita> the end
14:57:05 <jokke_> yeah 3 min for open discussion
14:57:11 <jokke_> #topic open discussion
14:57:57 <jokke_> rosmaita: so just to clarify, the glance_store needs the store type to be enabled to be able to form the location object from the uri to pass to the delete ... in a nutshell that's the current problem :D
14:58:24 <abhishekk> correct
14:58:26 <rosmaita> right
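The staging-cleanup workaround summarized above amounts to unlinking the staged file directly instead of going through glance_store's delete(); a minimal sketch, where `unstage` and the paths are hypothetical illustrations, not the actual glance code:

```python
import os
import tempfile

def unstage(staging_dir, image_id):
    """Remove staged image data directly from the filesystem.

    glance_store's delete() expects a properly formed location object
    for a *registered* scheme, which would drag store configuration in
    just to remove one local file, so the fix unlinks the path itself.
    """
    path = os.path.join(staging_dir, image_id)
    if os.path.exists(path):
        os.unlink(path)

staging = tempfile.mkdtemp()
with open(os.path.join(staging, "img-1"), "wb") as f:
    f.write(b"staged data")
unstage(staging, "img-1")
assert not os.path.exists(os.path.join(staging, "img-1"))
```

This keeps the cleanup independent of which stores the deployment has enabled, which is the concern raised earlier in the meeting.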
14:58:38 <jokke_> anything else?
14:58:44 <abhishekk> nope
14:58:46 <jokke_> oh yeah
14:59:04 <abhishekk> You need to send mail for PTL candidacy
14:59:16 <jokke_> #agreed Bug scrub day at Monday! Lets go through our open bugs and make sure we haven't missed anything critical
14:59:32 <rosmaita> jokke_: what abhishekk said!!!
14:59:37 <abhishekk> just to update, I have submitted specs for nova and cinder to use multiple backends of glance
14:59:40 <jokke_> abhishekk: I already submitted my candidacy to the elections repo ;)
14:59:48 <jokke_> abhishekk: Amazing!
14:59:59 <abhishekk> jokke_, great
15:00:36 <jokke_> ok, time ... we can continue on #os-glance
15:00:39 <jokke_> thanks all
15:00:40 <abhishekk> thank you all
15:00:43 <jokke_> #endmeeting