14:04:30 #startmeeting glance
14:04:31 Meeting started Thu May 16 14:04:30 2019 UTC and is due to finish in 60 minutes. The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:35 The meeting name has been set to 'glance'
14:04:37 o/
14:04:38 #topic roll-call
14:04:40 o/
14:05:04 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:05:29 o/
14:05:46 ok, I was late enough so let's get started ;)
14:05:52 #topic updates
14:06:19 so first a recap of PTG plans
14:06:22 #link https://etherpad.openstack.org/p/Glance-Train-PTG-planning
14:06:57 we took the cinder way and instead of having separate etherpads for each topic, recorded the notes under the topic on that one etherpad
14:07:18 I have to say I like it ... much easier to navigate than 20 tabs of spread-out etherpads
14:07:57 yes, there is something to be said for that
14:08:19 shoot
14:08:29 looks like abhishekk had to restart his system due to connectivity problems, we should probably wait a few min
14:08:36 kk
14:10:26 I have filtered out all joins, parts and nick changes to keep my irc clean, so if he dropped I missed it
14:11:17 o/
14:11:17 He's back! :)
14:11:22 hooray!
14:11:34 apologies, had some network issues
14:12:06 np
14:12:38 so rosmaita, you don't like having everything in one etherpad?
14:12:58 no, it's fine
14:13:35 We've also done it by day instead of by topic. Fewer etherpads, but still not a lot all in one if there ends up being a lot of notes.
14:13:45 it's like everything ... occasionally there's probably a topic worth having on its own, but in general this is a good way to do it
14:14:03 +1
14:14:26 yeah, I'd love to have an etherpad with permalinks so you could have a TOC there
14:14:36 that's called a wiki
14:14:38 :)
14:14:50 :P
14:15:01 :D
14:15:09 ok, shall we move on?
14:15:17 yep
14:15:30 Not sure you want to go through the effort, but Jay has been doing nicely cleaned-up recaps here: https://wiki.openstack.org/wiki/Cinder#PTG_and_Summit_Meeting_Summaries
14:15:58 thanks smcginnis, will have a look
14:16:01 I've found it useful for being able to find the bolded "Action (smcginnis)" parts.
14:16:15 Slightly higher chance I will actually do something I signed up for. ;)
14:16:20 cool :D
14:17:05 Another thing I wanted to highlight is the Image Encryption weekly meeting. Noticed you guys had put your input into the doodle already
14:17:08 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005861.html
14:17:33 for anyone who missed the discussion on the ML
14:18:12 by the looks of it I finally got my filters fixed and I seem to be able to keep somewhat better track of what's going on again
14:18:32 moving on
14:18:42 #topic release updates/periodic jobs
14:19:05 that would be me
14:19:28 so we are in R-22 week
14:19:46 we have the first milestone release in the week of Jun 03 - Jun 07, which is two weeks away
14:20:05 wow, that is coming up fast
14:20:09 yup
14:20:12 the target is to make multi-store stable
14:20:25 we are on track at the moment
14:20:45 ** Need timely reviews
14:20:47 ping me directly for reviews
14:20:50 :)
14:20:53 cool, thanks
14:21:14 Regarding periodic jobs, we have had only one failure in the last three weeks
14:21:27 unfortunately that is a timeout due to a parser error
14:22:03 curse you, subparser!
14:22:17 indeed, but one failure in 3 weeks is great news
14:22:20 In my free time, I am planning to look at whether we can get rid of the deprecation warnings
14:23:03 we're really getting there, from multiple failures a week to a failure in multiple weeks :D
14:23:05 (One thing on my mind is to comment out the multi-store functional and unit tests and add recheck multiple times)
14:24:09 somehow I am convinced that the timeout might be because we are loading the http server for 3-4 tests, which might be failing
14:24:38 hmm-m, why is that failing?
14:25:13 Might be a socket timeout (no conclusive evidence in the logs though)
14:25:20 kk
14:25:31 But to make sure, I will run some tests
14:26:15 that's it from my side, we are good to move ahead unless someone has something to say :D
14:26:38 sounds good to me
14:26:59 thanks for keeping track of those and thinking of ideas where it might be coming from
14:27:35 #topic QA wants to split tempest jobs
14:27:53 just an awareness thing
14:28:03 the email asked "Thoughts on this approach?"
14:28:07 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005871.html
14:28:16 i didn't have any, but maybe someone else will
14:28:23 So I did reply to the thread
14:28:41 i missed that
14:28:50 i saw it
14:28:54 ok, that's all from me
14:29:00 I have a little trouble understanding the proposed split, but maybe I will get some response and light shed on it
14:29:39 I just hope they take at least my wish into account and we start by splitting the jobs up and running the split jobs in check instead of check and gate at the beginning
14:30:38 but yeah, as long as we don't have QA here to discuss it with us, thanks rosmaita for bringing this up
14:31:01 #topic requests u-c bump or not
14:31:30 another good heads up from rosmaita
14:31:42 yeah, this may amount to nothing
14:31:48 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005956.html
14:32:14 we might want to look at what the failure is and whether there is an easy way to work around it
14:32:34 I'm very very happy about the discussion and the direction it's taking on the ML
14:33:21 yeah, i am not sure what's up with stable/queens
14:33:31 this really isn't the right thing to do with u-c, but it was nice to get the heads up and see if work is needed for updating the vulnerable package
14:33:48 me neither, haven't had time to look into why it breaks yet
14:33:58 In the requirements meeting yesterday it was decided any backports of those will be held until someone writes up a decent plan on how things should be handled if we actually do want to keep maintaining u-c for these.
14:34:11 good to know
14:34:15 smcginnis: great
14:34:24 cool
14:35:11 smcginnis: yeah, updating u-c would send the wrong message that we're actually keeping track of CVEs for all of our dependencies and all their supported versions
14:35:36 happy to hear no one signed the community up for it but rather we keep the expectations realistic
14:35:37 Yeah, that's the big concern.
14:36:25 ok, moving on
14:36:41 #topic multi-store for locations API
14:36:47 abhishekk: go go go!
14:36:52 ;)
14:36:54 :D
14:37:16 jokke_ and I just had a 30-minute discussion about it right before the meeting
14:37:50 We have decided to have a method in each store which will return the URL prefix to match against the location URL
14:38:18 to decide the actual store for that location
14:38:40 so to give a little meat around the bones on this, the problem is the lazy update of the locations table with the store ID
14:39:11 I will work on a PoC and jokke_ will help me in case I have any doubts :D
14:39:17 we need a way to determine which store a given location actually belongs to, and the current store map does not directly give that to us
14:39:29 i have never liked the store map
14:40:21 neither do I, I will definitely work on refactoring it next cycle
14:40:25 ok, will be interesting to see what you come up with
14:40:31 rosmaita: why, and do you have better ideas to handle it? It's still experimental, so speak up now while we can still fix it if there are better ways to handle them
14:41:00 well, the map is ambiguous when you have stores using the same scheme
14:41:25 it seemed to be designed on the assumption that that would never happen
14:42:17 right
14:42:27 I think abhishekk's original idea was not to care and just crawl through all stores of a specific type until we find the correct one
14:42:52 yes
14:42:59 right, but if we have 1000 edge stores, that could be a problem
14:43:29 So this might be something to think about right _now_ and fix before we move the design out of experimental and need to worry about the upgrade path to it
14:43:50 agreed
14:44:00 i don't have anything off the top of my head
14:44:02 +1
14:44:12 rosmaita: yes, that's why I was reluctant about it from the beginning and have insted us to record the store ids with the location from the very beginning ;)
14:44:19 i think the locations metadata was supposed to help with this
14:44:27 insisted
14:45:25 do we need to care about adding a store as metadata to the location api?
14:45:51 rosmaita: yes, so that's why we're trying to figure out the recognition pattern now and include that in the store map, so we can update the locations metadata lazily when the image is accessed, so we do the possibly slow recognition process only once, not every time
14:46:04 and can still avoid a heavy migration job during upgrade
14:46:08 ok, that makes sense
14:46:44 and the edge sites will all be new, so won't require migration if we design this right
14:46:59 when we have the store id recorded in the location, we can then just request the correct store object right away
14:47:14 Yeah
14:47:27 what will you use for store id?
14:47:37 rosmaita: indeed, any multi-store image will have that id and will just skip the recognition in the first place
14:48:29 i'm asking because we may want to have a store uuid that never changes, and a store name that can change
14:48:30 rosmaita: so the location has the store id in its metadata, so when the image is requested, we can take the correct store id and the correct store object to access the data, instead of walking through all of them
14:48:44 because you know that some operator will decide to rename the stores
14:49:01 right now the store_id does double duty
14:49:09 as name in the info/stores response
14:49:14 we don't have anything like store_uuid
14:49:25 yeah, that's why i'm mentioning it
14:49:26 rosmaita: yup, and then they will get lovely slow access when we need to go through the recognition process again
14:50:19 but that will be sped up by having the map able to provide the url prefix to match against, instead of walking through all stores, trying to access the image and grabbing the first one
14:50:46 especially if that image is in 2300 different ceph stores and someone decides to rename them all
14:51:08 what about having a default store id and overriding it as needed?
14:52:17 we already have one default store, currently known as default_backend
14:52:22 So the logic needs to be something like "does it have a store id, does that store id still exist in the map, does that store have the image", and if not, re-recognize the locations
14:52:43 jokke_, ++
14:53:41 that way we get optimized access as long as no one fucks it up, and we get slow access eventually if it does get fucked up, but we break nothing
14:53:55 as long as the store is in the config
14:54:03 right
14:54:15 last 6 minutes to go
14:55:06 yeah, we can continue this on os-glance if needed
14:55:17 'topic open discussion
14:55:22 #topic open discussion
14:55:35 the priorities topic was a late addition
14:56:12 we have priorities outlined at the end of the PTG planning etherpad
14:56:22 but just to highlight, multi-store is our current priority. The sooner in the cycle we get it stable, the longer we have to keep testing it and address any bugs
14:57:03 let's get that done, clean the deprecations from glance_store and get 1.0.0 out of the door
14:57:36 after that we can focus on caching and all the new stuff we have outlined for the cycle
14:58:00 ack
14:59:32 so like I said, any improvement other than having the url prefix available to make the store map better would be great to get out there under discussion asap
14:59:55 especially if it needs poking on the glance_store side
15:00:06 and time
15:00:09 Thanks all!
15:00:10 sorry, got pulled into another meeting, will read through the scrollback
15:00:22 #endmeeting
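
For reference, below is a minimal sketch of the store-recognition flow discussed in the locations-API topic above ("does it have a store id, does that store id still exist in the map, does the location match a store"). It only illustrates the idea; names such as store_map, get_store_prefix(), and the location attributes are hypothetical and not actual glance or glance_store interfaces.

def resolve_store(location, store_map):
    """Return the store id owning this location, recognizing it lazily.

    Sketch only: real glance_store code would use its own location and
    store objects rather than these hypothetical attributes.
    """
    store_id = location.metadata.get('store')
    # Fast path: the location already records a store id that is still configured.
    if store_id and store_id in store_map:
        return store_id
    # Slow path: re-recognize by matching the location URL against each
    # configured store's URL prefix, instead of probing every store for the
    # image data (which does not scale to hundreds of edge stores).
    for candidate_id, store in store_map.items():
        prefix = store.get_store_prefix()  # hypothetical per-store method
        if location.url.startswith(prefix):
            # Record the result lazily so the recognition cost is paid only
            # once per image, not on every access.
            location.metadata['store'] = candidate_id
            return candidate_id
    # No prefix match: leave it to the caller to fall back or raise.
    return None

Writing the resolved id back into the location metadata is what lets the design avoid a heavy one-shot migration at upgrade time while still giving fast lookups after the first access, as discussed in the meeting.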