14:04:30 <jokke_> #startmeeting glance
14:04:31 <openstack> Meeting started Thu May 16 14:04:30 2019 UTC and is due to finish in 60 minutes.  The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:35 <openstack> The meeting name has been set to 'glance'
14:04:37 <smcginnis> o/
14:04:38 <jokke_> #topic roll-call
14:04:40 <jokke_> o/
14:05:04 <jokke_> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:05:29 <rosmaita> o/
14:05:46 <jokke_> ok, I was late enough so lets get started ;)
14:05:52 <jokke_> #topic updates
14:06:19 <jokke_> so first recap of PTG plans
14:06:22 <jokke_> #link  https://etherpad.openstack.org/p/Glance-Train-PTG-planning
14:06:57 <jokke_> we took the cinder way and, instead of having separate etherpads for each topic, recorded the notes under each topic on that one etherpad
14:07:18 <jokke_> I have to say I like it ... much easier to navigate than 20 tabs of spread etherpads
14:07:57 <rosmaita> yes, there is something to be said for that
14:08:19 <jokke_> shoot
14:08:29 <rosmaita> looks like abhishekk had to restart his system due to connectivity problems, we should probably wait a few min
14:08:36 <jokke_> kk
14:10:26 <jokke_> I have filtered out all joins, parts and nick changes to keep my irc clean so if he dropped I missed it
14:11:17 <abhishekk> o/
14:11:17 <smcginnis> He's back! :)
14:11:22 <rosmaita> hooray!
14:11:34 <abhishekk> apology, had some network issues
14:12:06 <jokke_> np
14:12:38 <jokke_> so rosmaita you don't like having everything in one Etherpad?
14:12:58 <rosmaita> no, it's fine
14:13:35 <smcginnis> We've also done it by day instead of by topic. Fewer etherpads, but still not everything in one if there end up being a lot of notes.
14:13:45 <rosmaita> it's like everything ... occasionally there's probably a topic worth having on its own, but in general this is a good way to do it
14:14:03 <abhishekk> +1
14:14:26 <jokke_> yeah, I'd love to have an etherpad with permalinks so you could have a TOC there
14:14:36 <rosmaita> that's called a wiki
14:14:38 <rosmaita> :)
14:14:50 <jokke_> :P
14:15:01 <smcginnis> :D
14:15:09 <jokke_> ok, shall we move on?
14:15:17 <abhishekk> yep
14:15:30 <smcginnis> Not sure you want to go through the effort, but Jay has been doing nice cleaned up recaps here: https://wiki.openstack.org/wiki/Cinder#PTG_and_Summit_Meeting_Summaries
14:15:58 <jokke_> thanks smcginnis will have a look
14:16:01 <smcginnis> I've found it useful with being able to find the bolded "Action (smcginnis)" parts.
14:16:15 <smcginnis> Slightly higher chance I will actually do something I signed up for. ;)
14:16:20 <jokke_> cool :D
14:17:05 <jokke_> Another thing I wanted to highlight is the Image Encryption weekly meeting. Noticed you guys had put your input into the doodle already
14:17:08 <jokke_> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005861.html
14:17:33 <jokke_> for anyone who missed the discussion in the ML
14:18:12 <jokke_> by the looks of it I finally got my filters fixed and I seem to be able to somewhat keep track of what's going on again
14:18:32 <jokke_> moving on
14:18:42 <jokke_> #topic release updates/periodic jobs
14:19:05 <abhishekk> that would be me
14:19:28 <abhishekk> so we are in R-22 week
14:19:46 <abhishekk> we have the first milestone release in the week of Jun 03 - Jun 07, which is two weeks away
14:20:05 <rosmaita> wow that is coming up fast
14:20:09 <jokke_> yup
14:20:12 <abhishekk> the target is to make multi-store stable
14:20:25 <abhishekk> we are on track at the moment
14:20:45 <abhishekk> ** Need timely reviews
14:20:47 <rosmaita> ping me directly for reviews
14:20:50 <rosmaita> :)
14:20:53 <abhishekk> cool, thanks
14:21:14 <abhishekk> Regarding periodic jobs, we have only one failure in the last three weeks
14:21:27 <abhishekk> unfortunately that is a timeout due to a parser error
14:22:03 <rosmaita> curse you, subparser!
14:22:17 <jokke_> indeed, but one failure in 3 weeks is great news
14:22:17 <abhishekk> In my free time, I am planning to look into whether we can get rid of deprecation warnings
14:23:03 <jokke_> we're really getting there from multiple failures a week to failure in multiple weeks :D
14:23:05 <abhishekk> (One thing on my mind is to comment out the multi-store functional and unit tests and recheck multiple times)
14:24:09 <abhishekk> somehow I am convinced that the timeout might be because we are loading the http server for 3-4 tests, which might be failing
14:24:38 <jokke_> hmm-m, why is that failing?
14:25:13 <abhishekk> Might be socket timeout (not conclusive evidence in the logs though)
14:25:20 <jokke_> kk
14:25:31 <abhishekk> But to make sure, I will run some tests
14:26:15 <abhishekk> that's it from my side, we are good to move ahead unless someone has something to say :D
14:26:38 <jokke_> sounds good to me
14:26:59 <jokke_> thanks for keeping track of those and thinking of ideas about where it might be coming from
14:27:35 <jokke_> #topic QA wants to split tempest jobs
14:27:53 <rosmaita> just an awareness thing
14:28:03 <rosmaita> the email asked "Thoughts on this approach?"
14:28:07 <jokke_> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005871.html
14:28:16 <rosmaita> i didn't have any, but maybe someone else will
14:28:23 <jokke_> So I did reply to the chain
14:28:41 <rosmaita> i missed that
14:28:50 <abhishekk> i saw it
14:28:54 <rosmaita> ok, that's all from me
14:29:00 <jokke_> I have a little trouble understanding the proposed split, but maybe I'll get some responses that shed light on it
14:29:39 <jokke_> I just hope they take at least my wish into account and we start by splitting the jobs up and running the split jobs in check, instead of check and gate, at the beginning
14:30:38 <jokke_> but yeah, as long as we don't have QA here to discuss it with us... thanks rosmaita for bringing this up
14:31:01 <jokke_> #topic requests u-c bump or not
14:31:30 <jokke_> another good heads up from rosmaita
14:31:42 <rosmaita> yeah, this may amount to nothing
14:31:48 <jokke_> #link  http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005956.html
14:32:14 <jokke_> we might want to look what the failure is and if there is easy way to work around it
14:32:34 <jokke_> I'm very very happy about the discussion and the direction it's taking on ML
14:33:21 <rosmaita> yeah, i am not sure what's up with stable/queens
14:33:31 <jokke_> this really isn't the right thing to do with u-c but it was nice to get the heads up and see if work is needed for updating the vulnerable package
14:33:48 <jokke_> me neither, haven't had time to look into why it breaks yet
14:33:58 <smcginnis> In the requirements meeting yesterday it was decided that any backports of those will be held until someone writes up a decent plan for how things should be handled if we actually do want to keep maintaining u-c for these.
14:34:11 <rosmaita> good to know
14:34:15 <jokke_> smcginnis: great
14:34:24 <abhishekk> cool
14:35:11 <jokke_> smcginnis: yeah, updating u-c would send the wrong message that we're actually keeping track of CVEs for all of our dependencies and all their supported versions
14:35:36 <jokke_> happy to hear no-one signed the community up for it but rather we keep the expectations realistic
14:35:37 <smcginnis> Yeah, that's the big concern.
14:36:25 <jokke_> ok, moving on
14:36:41 <jokke_> #topic multi-store for locations API
14:36:47 <jokke_> abhishekk: go go go!
14:36:52 <jokke_> ;)
14:36:54 <abhishekk> :D
14:37:16 <abhishekk> jokke_ and I just had a 30-minute discussion about it right before the meeting
14:37:50 <abhishekk> We have decided to have a method in each store which will return the url prefix to match against the location url
14:38:18 <abhishekk> to decide the actual store for that location
14:38:40 <jokke_> so to give a little meat around the bones on this: the problem is the lazy update of the locations table with the store ID
14:39:11 <abhishekk> I will work on PoC and jokke_ will help me in case I have any doubts :D
14:39:17 <jokke_> we need a way to determine which store a given location actually belongs to, and the current store map does not directly give that to us
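(A minimal sketch in Python of the prefix-matching idea discussed above; the get_url_prefix() method name and the store_map structure are illustrative assumptions, not the actual glance_store API.)

    # Hypothetical sketch: resolve which configured store a location URL belongs
    # to by asking each store driver for the URL prefix it would have produced.
    def find_store_for_location(location_url, store_map):
        for store_id, store in store_map.items():
            prefix = store.get_url_prefix()  # assumed per-driver method
            if location_url.startswith(prefix):
                return store_id
        # No prefix matched: caller falls back to the slow crawl through all
        # stores of the matching scheme (the original idea mentioned below).
        return None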
14:39:29 <rosmaita> i have never liked the store map
14:40:21 <abhishekk> neither do I, I will definitely work on refactoring it in the next cycle
14:40:25 <rosmaita> ok, will be interesting to see what you come up with
14:40:31 <jokke_> rosmaita: why, and do you have better ideas to handle it? It's still experimental, so speak up now while we can still fix it if there are better ways to handle them
14:41:00 <rosmaita> well, the map is ambiguous when you have stores using the same scheme
14:41:25 <rosmaita> it seemed to be designed on the assumption that that would never happen
14:42:17 <abhishekk> right
14:42:27 <jokke_> I think abhishekk's original idea was not to care and just crawl through all stores of a specific type until we can find the correct one
14:42:52 <abhishekk> yes
14:42:59 <rosmaita> right, but if we have 1000 edge stores, that could be a problem
14:43:29 <jokke_> So this might be something to think about right _now_ and fix it before we move the design out of experimental and need to worry about the upgrade path to it
14:43:50 <rosmaita> agreed
14:44:00 <rosmaita> i don't have anything off the top of my head
14:44:02 <davee_> +1
14:44:12 <jokke_> rosmaita: yes, that's why I was reluctant about it from the beginning and have insisted we record the store ids with the location from the very beginning ;)
14:44:19 <rosmaita> i think the locations metadata was supposed to help with this
14:45:25 <abhishekk> do we need to care about adding the store as metadata to the locations api?
14:45:51 <jokke_> rosmaita: yes, so that's why we're trying to figure out the recognition pattern now and include it in the store map, so we can update the locations metadata lazily when the image is accessed, so we have the possibly slow recognition process done only once, not every time
14:46:04 <jokke_> and can still avoid heavy migration job during upgrade
14:46:08 <rosmaita> ok, that makes sense
14:46:44 <rosmaita> and the edge sites will all be new, so won't require migration if we design this right
14:46:59 <jokke_> when we have the store id recorded to the location, we can then just request correct store object right away
14:47:14 <abhishekk> Yeah
14:47:27 <rosmaita> what will you use for store id?
14:47:37 <jokke_> rosmaita: indeed, any multi-store image will have that id and will just skip the recognition in the first place
14:48:29 <rosmaita> i'm asking because we may want to have a store uuid that never changes, and a store name that can change
14:48:30 <jokke_> rosmaita: so the location has the store id in the metadata, so when the image is requested we can take the correct store id and the correct store object to access the data, instead of walking through all of them
14:48:44 <rosmaita> because you know that some operator will decide to rename the stores
14:49:01 <rosmaita> right now the store_id does double duty
14:49:09 <rosmaita> as name in the info/stores response
14:49:14 <abhishekk> we don't have anything like store_uuid
14:49:25 <rosmaita> yeah, that's why i'm mentioning it
14:49:26 <jokke_> rosmaita: yup, and then they will get lovely slow access when we need to go through the recognition process again
14:50:19 <jokke_> but that will be sped up by having the map able to provide the url prefix to match against, instead of walking through all stores, trying to access the image and grabbing the first one
14:50:46 <jokke_> especially if that image is in 2300 different ceph stores and someone decides to rename them all
14:51:08 <davee_> what about having a default store id and override it as needed?
14:52:17 <abhishekk> we already have one default store, currently known as default_backend
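(For reference, the multi-store configuration this discussion revolves around looks roughly like the following glance-api.conf snippet; the store names "fast" and "cheap" are illustrative.)

    [DEFAULT]
    enabled_backends = fast:rbd, cheap:file

    [glance_store]
    default_backend = fast

    [fast]
    rbd_store_pool = images
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    [cheap]
    filesystem_store_datadir = /var/lib/glance/images/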
14:52:22 <jokke_> So the logic needs to be something like: does it have a store id, does that store id still exist in the map, does that store have the image; if not, re-recognize the locations
14:52:43 <abhishekk> jokke_, ++
14:53:41 <jokke_> that way we get optimized access as long as no-one fucks it up and we get slow access eventually if it gets fucked up but we break nothing
14:53:55 <jokke_> as long as the store is in config
14:54:03 <abhishekk> right
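(A minimal sketch of that lookup-then-re-recognize flow, building on the find_store_for_location() sketch earlier; location.metadata, store_map, and the exists() check are illustrative assumptions, not actual Glance code.)

    def get_store_for_image_location(location, store_map):
        # 1. Does the location already carry a store id from a previous lazy update?
        store_id = location.metadata.get('store')
        # 2. Does that store id still exist in the currently configured store map,
        #    and does that store actually have the image?
        if store_id in store_map and store_map[store_id].exists(location.url):
            return store_map[store_id]
        # 3. Otherwise re-run the (slower) recognition and record the result lazily,
        #    so the cost is paid only once rather than on every access.
        store_id = find_store_for_location(location.url, store_map)
        if store_id is None:
            raise RuntimeError("no configured store matches this location")
        location.metadata['store'] = store_id  # lazy update persisted with the image
        return store_map[store_id]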
14:54:15 <abhishekk> last 6 minutes to go
14:55:06 <jokke_> yeah, we can continue this on os-glance if needed
14:55:22 <jokke_> #topic open discussion
14:55:35 <jokke_> the priorities topic was a late addition
14:56:12 <abhishekk> we have priorities outlined at the end of PTG planning etherpad
14:56:22 <jokke_> but just to highlight: multi-store is our current priority. The sooner in the cycle we get that stable, the longer we have to keep testing it and address any bugs
14:57:03 <jokke_> let's get that done, clean the deprecations from glance_store and get 1.0.0 out the door
14:57:36 <jokke_> after that we can focus on caching and all the new stuff we have outlined for the cycle
14:58:00 <abhishekk> ack
14:59:32 <jokke_> so like I said, any improvement other than having the url prefix available to make the store map better would be great to get out there under discussion asap
14:59:55 <jokke_> especially if it needs poking on the glance_store side
15:00:06 <jokke_> and time
15:00:09 <jokke_> Thanks all!
15:00:10 <rosmaita> sorry, got pulled into another meeting, will read through the scrollback
15:00:22 <jokke_> #endmeeting