12:02:11 #startmeeting glare
12:02:12 Meeting started Tue Jul 4 12:02:11 2017 UTC and is due to finish in 60 minutes. The chair is mfedosin. Information about MeetBot at http://wiki.debian.org/MeetBot.
12:02:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
12:02:16 The meeting name has been set to 'glare'
12:23:12 idanaroz: hi :)
12:23:18 finally you're here
12:23:44 Hi :)
12:23:59 #topic agenda
12:24:08 #link https://etherpad.openstack.org/p/glare-meeting-agenda
12:24:38 I wanted to discuss a lot of things today
12:24:57 but let's begin with updates
12:25:02 #topic updates
12:25:04 Yes, sure
12:25:17 I dedicated last week to a massive refactoring of the glare code
12:25:29 but it's a different topic
12:25:46 also I proposed an application under the big tent
12:26:12 It will be considered this week
12:26:31 but again, we have a special topic for this too :)
12:26:54 from your side I remember that you proposed a patch for max folder size
12:27:09 but there is an issue with it...
12:27:18 Yes, and you found a bug there, if we use an external location
12:27:19 and I'm thinking how to deal with it right
12:27:38 idanaroz: there is also another issue :)
12:28:02 Your patch for this bug was merged
12:28:07 when we have several simultaneous uploads in one folder
12:28:59 I knew about that issue and I had several proposals how to implement it right
12:29:25 I'll be glad to hear more about it
12:29:57 so, the problem: when a user performs several simultaneous uploads in one folder, he can avoid the max_folder_size limitation
12:30:35 generally speaking it's also related to quotas and I want to combine these two things in one fix
12:30:56 mmmm
12:30:56 solution #1 (easy and radical):
12:31:22 reject simultaneous uploads from one user
12:31:53 sounds not ideal
12:32:02 I asked Pavan about this and he said that they use simultaneous uploads
12:32:31 so, yeah, it is easy to implement, but it's not very convenient
12:33:20 solution #2: synchronize uploaded data size in the db
12:34:49 i.e. when one process has uploaded some amount of data (1 megabyte, for instance), it inserts this information in the db
12:35:20 another process reads this information and updates how much data it can upload
12:36:01 did I put it clearly?
12:36:25 so, there will be a synchronization through the database
12:36:43 Yes, but I am not sure regarding the execution of it
12:36:57 it works, but unfortunately it's very resource consuming
12:37:19 during upload there will be a lot of db inserts and reads
12:37:39 yes...
12:37:45 so, it's almost unacceptable in practice
12:37:55 and finally we have a 3rd solution
12:38:23 solution #3: request the Content-Length header
12:39:03 if a user wants to perform several simultaneous uploads he has to specify how much data he wants to upload
12:39:36 otherwise we fall back to one upload per user at a time
12:40:03 it's relatively easy to implement
12:40:18 yes, that sounds like a good idea
12:40:37 yes! and it should work
12:40:51 generally speaking we need two changes:
12:41:12 in the client we need to implement sending the header
12:41:57 yes, and in the server, to use this info...
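For illustration, a minimal client-side sketch of the change discussed above, assuming a plain requests-based upload rather than glareclient's real API (the function name, URL, and parameters are hypothetical):

    # Hypothetical client-side sketch: declare the blob size up front so the
    # server can reserve it before the data starts streaming.
    import os
    import requests

    def upload_blob(blob_url, path, token):
        size = os.path.getsize(path)
        headers = {
            'X-Auth-Token': token,
            'Content-Type': 'application/octet-stream',
            # Sending Content-Length lets the server enforce max_folder_size
            # even when several uploads run in parallel.
            'Content-Length': str(size),
        }
        with open(path, 'rb') as f:
            resp = requests.put(blob_url, data=f, headers=headers)
        resp.raise_for_status()
        return resp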
12:41:58 and in the server, when a blob instance is created, instead of blob['size'] = None
12:42:30 we should do blob['size'] = req.headers.get('Content-Length')
12:42:53 i would be glad to take this task
12:43:03 it will reserve the requested amount of data before starting the upload
12:43:23 you can take the client side and I'll take the server one
12:43:46 cool
12:44:00 okay, we can continue the discussion in #openstack-glare
12:44:15 and now I want to say a couple of things about the refactoring
12:44:29 #topic massive refactoring
12:44:43 we don't have too much time left
12:44:59 and I just want to state the purpose of this initiative
12:45:19 yes
12:45:28 I have been planning it for a long time...
12:45:46 there are two main reasons
12:45:55 1. we have too much code duplication
12:46:12 a lot of checks are applied several times
12:47:22 one group of checks is applied in one place and a second in another
12:47:33 and it leads to inconsistencies
12:47:54 for example, status and visibility are mutable fields by design
12:48:03 the user can change them after activation
12:48:15 but they are declared as immutable
12:48:36 and nevertheless the user *can* change their values
12:48:58 to prevent these inconsistencies I proposed the refactoring
12:49:26 i.e. combine all checks in one place and prevent code separation and duplication
12:49:56 purpose number two is to follow the json-patch standard strictly
12:50:23 important reasons ...
12:50:44 because the standard states that all the changes should be applied sequentially
12:50:50 in a specific order
12:51:04 but in glare it's not true
12:51:17 the refactoring fixes it too
12:51:48 I think these are the main benefits
12:52:09 also I want to mention that the patch contains a lot of unrelated changes
12:52:40 that can be separated into standalone patches
12:53:31 a good example of this is that we shouldn't set the MaxStrLen validator if AllowedValues are specified
12:54:03 so, probably I'll try to move all unrelated code (as much as I can) to different patches
12:54:29 I understand
12:54:47 That is very valuable
12:54:55 the main challenge is that even with those changes the patch will be really big
12:55:03 and hard to review
12:55:48 thus you'll have to dedicate some time and nerves to review it
12:56:39 Sure, i see the importance of it
12:56:51 idanaroz: thank you :)
12:57:06 this simplification will allow us to move forward faster
12:57:24 more than 300 loc were removed
12:57:39 loc? lines?
12:57:46 lines of code
12:57:55 wow
12:58:25 #link https://review.openstack.org/#/c/479511/
12:58:35 +482, -813
12:58:46 big one, indeed
12:59:00 okay, we're almost out of time...
12:59:18 #topic big-tent
12:59:38 the application is under review
12:59:42 #link https://review.openstack.org/#/c/479285/
12:59:52 and the voting started today
13:00:07 okay, we have no more time left...
13:00:19 thanks idanaroz!
13:00:25 #endmeeting
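For illustration, a minimal sketch of the server-side idea agreed on during the upload discussion, assuming Content-Length is used to reserve the blob size before data is streamed (the function and variable names are hypothetical, not Glare's actual code):

    # Hypothetical server-side sketch: reserve the declared size before the
    # upload starts, or fall back to the single-upload-per-user behaviour.
    def reserve_blob_size(headers, blob, folder_used, max_folder_size):
        content_length = headers.get('Content-Length')
        if content_length is None:
            # No declared size: the folder quota cannot be reserved safely,
            # so the caller should allow only one upload per user at a time.
            blob['size'] = None
            return blob
        declared = int(content_length)
        if folder_used + declared > max_folder_size:
            raise ValueError('upload would exceed max_folder_size')
        # Reserve the declared amount instead of leaving the size as None.
        blob['size'] = declared
        return blob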