14:01:06 #startmeeting glance
14:01:07 Meeting started Thu Jun 18 14:01:06 2020 UTC and is due to finish in 60 minutes. The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:08 #topic roll call
14:01:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:11 The meeting name has been set to 'glance'
14:01:13 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:15 o/
14:01:23 o/
14:01:24 o/
14:01:30 short agenda today
14:01:39 let's start
14:01:54 #topic release/periodic jobs update
14:02:03 V1 milestone this week
14:02:25 We don't have anything merged, so should we skip this milestone release?
14:02:37 +1
14:02:41 ++ no need for it
14:02:42 or should we release glance, as we have modified the registry code?
14:03:20 same goes for store and python-glanceclient as well
14:03:30 shouldn't make any difference
14:03:45 i think sean has a patch up for the glanceclient release
14:04:10 oh, will have a look and comment on it
14:04:11 milestone releases are not needed anymore, and even though tags are cheap, I see no reason for doing one just for the sake of it
14:04:26 +1
14:04:58 cool, so let's skip this one and focus on milestone 2
14:05:28 In the next meeting we will finalize M2 priorities
14:06:04 Regarding periodic jobs, one functional-py36 job is failing
14:06:24 I spent a little time analyzing the failure
14:06:46 One functional test around revert logic is failing due to a race condition
14:07:12 I will spend some time next week to rectify it
14:07:24 hmm
14:07:27 interesting
14:07:28 As I am the one who wrote that test :D
14:07:34 :P
14:07:38 :D
14:08:08 will ping jokke_ if something is needed
14:08:17 * jokke_ ducks
14:08:26 :P
14:08:31 * rosmaita laughs
14:08:31 moving ahead
14:08:41 #topic devstack registry
14:08:54 registry removal devstack patch merged \o/
14:09:03 \\o \o/ o// o/7
14:09:37 thanks to jokke_ for the glance patch and dansmith for taking it ahead with the devstack team
14:09:54 I will continue with the cleanup proper
14:10:03 Yay, finally.
14:10:03 awesome, thank you
14:10:18 this means we have one less config file as well :D
14:10:43 yes, smcginnis, thanks for your push as well
14:11:02 Let's move ahead
14:11:16 #topic Specs review
14:11:23 We need to get on top of this
14:11:40 Because our milestone 2 is dependent on these reviews
14:11:50 sparse image upload - https://review.opendev.org/733157
14:11:50 Unified limits - https://review.opendev.org/729187
14:11:51 Image encryption - https://review.opendev.org/609667
14:11:51 Cinder store multiple stores support - https://review.opendev.org/695152
14:12:03 sorry, i started reviewing the sparse file upload and got sidetracked looking at sparse files
14:12:05 These are some of the top-priority specs which need reviews
14:12:18 * smcginnis gets some tabs open
14:13:01 rosmaita, no worries, eyes from you and smcginnis will be an additional benefit for us
14:13:30 ok, i will wait for the author to revise that spec as you requested
14:13:57 then there is one new spec related to duplicate downloads which would be good to get reviews on as well
14:14:08 About that, I had a very fruitful discussion with one of the Ceph devs last week in this timeslot
14:14:21 re sparse upload: my thought is that the title is misleading, the action as i understand it would take place after the full image has been staged
14:15:00 that should be sparse image import?
14:15:03 Really feel like I understand the traffic much better.
14:15:23 jokke_, what was your discussion about?
14:15:24 Sparse upload is when glance is uploading, not when the client is
14:16:20 So i think the topic is accurate; if it were a glanceclient spec it would point to the step before :P
14:17:18 you had a discussion with the Ceph devs related to sparse upload?
14:18:13 * abhishekk am I disconnected?
14:18:47 But yeah, there are a couple of ways we can do the sparse upload and save the bandwidth. If the admin wants to fat provision the image but not send all the zeros over the wire, we can do something like a buffered write again, which sends, say, a 4kB sample over the wire and then just tells Ceph to write that 200k times. Or we can do thin provisioned images by seeking ahead and writing only the data.
14:18:53 abhishekk: nope, can see you
14:18:58 ack
14:19:49 I think we should look at both, and have the thin provisioning on/off configurable
14:20:20 sounds good to me
14:20:38 what about filestore?
14:20:38 So rather than having the config option we talked about at the PTG to turn sparse writes on or off, just flip which way we do the write into Ceph
14:21:18 his proposal talks about both rbd and filestore sparse upload support
14:21:21 I think the same applies, we can call the config option "thin provisioned" and use the sparse writing there
14:21:54 ok
14:22:18 could you please add this suggestion on the spec, we should get it rolling
14:22:22 I think the biggest change is really whether the admin wants to thin provision or not
14:22:25 sure
14:22:27 will do that
14:22:47 thanks
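To make the thin-provisioned path discussed above concrete: it boils down to reading the staged file in chunks and writing only the non-zero ones, since RBD images are thin provisioned and the skipped ranges still read back as zeros without taking cluster space. Below is a minimal sketch, not the glance_store code; it assumes the python-rbd Image.write(data, offset) call, an image already created at its final size, and an illustrative function name and chunk size. The fat-provisioned "buffered zero rewrite" variant would instead need a server-side zero-fill call where the sketch skips the chunk, which is omitted here.

```python
import rbd  # python-rbd bindings shipped with Ceph

CHUNK = 8 * 1024 * 1024        # illustrative read size
ZEROS = b'\0' * CHUNK

def thin_write(image, staged):
    """Stream a staged image file into an already-sized rbd.Image,
    writing only the chunks that actually contain data; the skipped
    all-zero ranges stay unallocated and still read back as zeros."""
    offset = 0
    while True:
        chunk = staged.read(CHUNK)
        if not chunk:
            break
        if chunk != ZEROS[:len(chunk)]:   # skip chunks that are all zeros
            image.write(chunk, offset)
        offset += len(chunk)
    return offset                         # bytes consumed from the source
```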
14:23:05 Got quite a bit of clarification and better understanding/good pointers on other things as well, like how librados handles the I/O
14:23:38 cool
14:24:05 but we can discuss them separately
14:24:19 yes
14:24:20 I'll try to get a bit of a refactoring spec together
14:24:34 that will be great
14:25:33 I am working on the cinder multiple stores support PoC
14:26:46 Nice
14:28:06 Ok, please spend some time on specs review this week
14:28:14 Moving on to Open discussion now
14:28:14 yup, will do
14:28:26 #topic Open discussion
14:28:57 Just a couple of quick things around rbd, so we have it on record as well.
14:29:04 I have uploaded our PTG recordings to Google Drive and shared the link in the PTG etherpad
14:29:17 thanks abhishekk!!!
14:30:05 #link https://etherpad.opendev.org/p/glance-victoria-ptg
14:31:08 So first of all, multithreading per se is not a thing. What people are referring to with that is async writes. Now, my biggest fear of eating all the sockets is happening already: while the rbd client instance is running, it maintains sockets for all the OSDs it accesses, with a timeout of 900 sec
14:32:10 oh
14:32:33 For our usage pattern it also does not make sense to start pooling those rbd clients. The main concern there is how long the auth and handshakes take, but as we're not dealing with thousands of small objects, it's actually not relevant for our usage pattern, and we would need to maintain that pool
14:33:37 We could have a pool of async write slots that we could scale based on how many concurrent transfers we have ongoing. That would lessen the impact of high-latency links
14:34:46 that would be a big change imo
14:34:46 but I think the biggest performance improvements we can achieve are by changing the way we write zeros, to either that buffered rewrite or the sparse seek, and by changing how we allocate the size when we don't know the image size we're writing
14:35:38 The advice I got was to take the same approach as the Ceph client does in such a situation and just double the size every time a resize is needed, then trim at the end
14:35:44 totally safe
14:36:16 the same as we decided earlier for chunked upload?
14:37:44 abhishekk: so instead of growing the size by 1GB, the advice was to just double the size. So start at, say, 100MB, then 200, 400, 800 ... etc
14:38:06 ack, got it now
14:38:26 but conceptually the same idea, just don't worry about reserving too much and trim when finished
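Spelled out, the strategy described above (start small, double the allocation whenever the next chunk would overflow it, trim back once the stream ends) is only a few lines. This is again a sketch under stated assumptions, not the actual bugfix: it relies on python-rbd's Image.write and Image.resize calls, and the function name and sizes are illustrative.

```python
def write_unknown_size(image, staged, chunk_size=8 * 1024 * 1024,
                       initial_size=100 * 1024 * 1024):
    """Stream data of unknown total length into an rbd.Image, doubling
    the allocation whenever the next chunk would not fit, then trimming
    back to the number of bytes actually written."""
    capacity = initial_size
    image.resize(capacity)
    offset = 0
    while True:
        chunk = staged.read(chunk_size)
        if not chunk:
            break
        while offset + len(chunk) > capacity:
            capacity *= 2            # 100MB -> 200 -> 400 -> 800 ...
            image.resize(capacity)
        image.write(chunk, offset)
        offset += len(chunk)
    image.resize(offset)             # trim the over-allocation at the end
    return offset
```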
14:38:42 so this will be part of the rbd refactor
14:38:52 yep
14:39:05 small change, should be super easy to do actually
14:39:21 I might just do it as a bugfix before looking into the riskier things
14:40:00 cool, I think it will be good if it's as easy as it sounds :D
14:40:08 yup
14:40:47 great, so we need to take the multiple rbd thing out of our priorities
14:41:10 much more comfortable doing any changes there now that I know how it actually works, and happy to share with anyone interested
14:41:23 ME
14:41:30 :)
14:41:59 we will take this down next week :D
14:42:23 that's all from me. Just wanted to share a quick recap so we have it recorded somewhere :D
14:42:39 thanks
14:42:55 anything else guys?
14:43:49 cool, let's wrap up early
14:43:53 thank you all
14:43:59 Thanks all!
14:44:00 have a nice weekend
14:44:09 #endmeeting