14:01:08 #startmeeting glance
14:01:09 Meeting started Thu Dec 5 14:01:08 2019 UTC and is due to finish in 60 minutes. The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:12 The meeting name has been set to 'glance'
14:01:18 o/
14:01:21 #topic roll call
14:01:30 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:01:38 o/
14:02:03 o/
14:02:25 rosmaita is away, so let's start the meeting
14:02:26 o/
14:02:35 smcginnis, o/
14:02:52 #topic Updates
14:03:21 I have pushed the Ussuri priorities patch, kindly review it
14:03:29 #link https://review.opendev.org/#/c/696017/
14:03:41 #topic release/periodic jobs update
14:03:57 So this is M1 release week
14:04:17 and we don't have any concrete work merged in glance
14:04:29 So can we skip this M1 release?
14:04:50 should be fine
14:05:11 jokke_, I remember we also skipped Train M1
14:05:16 Yeah, now milestones are mostly up to the team if they want to do a release or not.
14:05:18 I don't think we have any migrations waiting either
14:05:31 yes
14:05:35 I actually discourage most unless they know a downstream testing group or packaging wants to pick them up and do something with them.
14:05:46 mhm
14:06:15 I think we should just get rid of milestones :P
14:06:18 too much work
14:06:34 smcginnis, makes sense, but we don't have anything merged after the T release
14:06:40 jokke_, :P
14:06:52 Now they are really kind of just checkpoints on the way.
14:07:05 abhishekk: We should do something about that. :D
14:07:24 So, in the priorities patch I have shifted M1 targets to M2
14:07:44 smcginnis, ack, definitely will do something about this
14:08:23 We are lagging behind on reviewing and merging specs,
14:08:47 smcginnis, could use your help there as a couple of specs are in good shape
14:09:01 especially Import image to multiple stores
14:09:18 OK, I'll try to take a look at those today.
14:09:32 as Glance has a policy of not merging specs before having +2 from all the cores
14:09:40 smcginnis, thank you
14:10:13 Regarding periodic jobs, all green; surprisingly we didn't hit the parser error this time during the milestone release
14:10:38 Moving ahead
14:10:52 #topic python3 classifier applied to Glance
14:10:54 I guess no-one else was panicking about the milestone either
14:11:30 This topic is Brian's, but he is not available
14:11:56 We should probably discuss it though.
14:12:01 this is the original discussion back in May
14:12:03 #link http://eavesdrop.openstack.org/meetings/glance/2019/glance.2019-09-05-14.01.log.html#l-112
14:12:08 We are supposed to drop py2 classifiers and move to py3 only.
14:12:25 But we still have some known issues (under certain configurations) running py3-only.
14:12:38 We don't have full support for python3 and now we are going to drop py2
14:12:43 Sooo ... the plan was:
14:13:01 I remember we had a discussion with the QA guys during the PTG that we will keep py2 as non-voting
14:13:04 1) In U we drop py27 support
14:13:16 2) remove the SSL termination from glance-api
14:13:31 3) nothing blocking us claiming py3 support
14:13:40 4) profit at U release
14:14:01 Running behind Apache is really the documented way to run now, so I think this is really a non-issue.
14:14:11 Do we have a volunteer to work on removing SSL termination from g-api?
14:14:19 yes, I already -1 voted the first of those removing-py27-testing patches I saw, as we agreed to keep those gate jobs around
14:14:23 Document that running SSL termination through glance-api is not supported and move on. :)
14:14:38 smcginnis: we should not be running behind apache
14:14:48 smcginnis: what are you talking about?
14:14:56 Reality is though that we will not be able to keep testing py2.
14:15:19 Services are already removing py2 support, so things are already starting to break.
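The classifier change discussed in this topic is, in general, an edit to the project's packaging metadata. This is a hypothetical sketch of what dropping the py2 classifiers and claiming py3-only support looks like in a pbr-style setup.cfg; the exact classifier and version lists are illustrative, not Glance's actual file:

```ini
# setup.cfg -- illustrative fragment only, not Glance's actual metadata
[metadata]
name = glance
# Dropping py2 means removing the "Programming Language :: Python :: 2.x"
# classifiers and keeping only the py3 ones:
classifier =
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7
# Prevent pip from installing the package on py2 interpreters at all:
python-requires = >=3.6
```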
14:15:50 jokke_: I thought the issue was only when we were *not* running behind apache and trying to serve requests directly the old-school way. Did I have that wrong?
14:16:31 Sorry, I know the issue has been described multiple times. I have a bad memory though. :]
14:16:59 smcginnis, We have documented the workaround for this
14:17:14 smcginnis: we've never got glance-api running acceptably behind apache due to the fact that apache breaks a) chunked encoding transfers (well, does not support them) and b) wants to save the request to /tmp/ first before passing it to the app behind it
14:17:36 so running glance behind apache is a horrible idea for any production as far as performance goes
14:18:03 Ah, OK. I had that wrong in my head then.
14:18:18 So we have no option other than removing the SSL termination from glance-api
14:18:34 what you really want to do is take some non-breaking reverse proxy like HAProxy and run your glance-api as a proper eventlet service behind it, terminating the ssl/tls on that rev. proxy
14:19:38 jokke_, I guess this is what we have documented as well
14:19:55 so we have a totally acceptable and normal way to do it, just don't use a shitty and bloaty protocol-breaking web server as your rev proxy :P
14:20:10 abhishekk: I'm pretty sure it was indeed what we documented
14:20:19 jokke_, yes
14:21:10 Now instead of fully dropping py2 testing we are going to keep it as non-voting (as discussed with the QA team)
14:21:23 OK, then if we have it documented that that is the requirement for SSL, then we are OK to move ahead.
14:21:34 I really don't think we can continue py2 testing.
14:21:35 +1
14:21:43 It's just going to be red all the time.
14:21:50 That boat has sailed.
14:21:53 so yes, if we drop py27 support now, we definitely can drop the ssl stuff as well and be fully py3 compatible and claim the tag before the release
14:22:44 jokke_, only thing is we don't have bandwidth to work on this :(
14:23:09 abhishekk: I can look into it.
It's pretty important for everyone
14:23:21 jokke_, that will be great
14:23:35 especially now that you decided to clean my plate for the beginning of the cycle
14:24:48 ack
14:25:14 The plan will be still the same as explained by jokke_
14:25:23 1) In U we drop py27 support
14:25:31 2) remove the SSL termination from glance-api
14:25:38 3) Claim py3 support
14:26:04 any suggestions/questions?
14:26:04 4) profit :P
14:26:10 right :D
14:26:11 1 technically should come after 3, but this will all need to be done within weeks really, so it doesn't matter too much.
14:26:46 smcginnis: we won't be releasing anything between 1-3
14:27:01 yes
14:27:03 No, but we will be broken by 2 if we don't take care of it.
14:27:09 Likely before.
14:27:22 ?
14:27:38 Sorry, I mean milestones.
14:27:56 till then we will mark py27 non-voting?
14:28:35 I would recommend just adding the py3 markers along with dropping py2 jobs completely.
14:28:38 abhishekk: smcginnis: I see no reason why our unit/functional tests should be broken by that
14:28:39 Same time.
14:29:09 jokke_, I will have a look at it as well; I need your expertise on specs and code reviews as well
14:29:40 smcginnis: well, the whole point of not having them in Train was that we agreed not to do it as long as we have broken ssl in the code
14:30:05 Why do we want to continue testing py2 non-voting if it's just going away? Seems a better use of our time to clean it up than caring about something that isn't going to be around long.
14:30:13 and we didn't want to drop it in Train because it works just fine in py27, which we still supported and never deprecated
14:31:04 smcginnis: so we don't break it unnecessarily; anything we need to backport _must_ work on py27 anyway, so the faster we break py27 the faster we're screwed when we need to backport anything
14:31:30 Not really. Minor tweaks may be needed when doing backports, but not too common.
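The deployment jokke_ recommends in the discussion above (TLS terminated on a reverse proxy such as HAProxy, with glance-api running as a plain eventlet service behind it) can be sketched with a minimal, hypothetical HAProxy fragment; the addresses, ports, and certificate path are illustrative placeholders, not documented Glance configuration:

```
# haproxy.cfg -- illustrative sketch only, all paths/addresses are placeholders
frontend glance_tls
    # Terminate TLS here on the reverse proxy
    bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/glance.pem
    default_backend glance_api

backend glance_api
    # Plain HTTP to the eventlet-based glance-api worker; HAProxy streams
    # the request body rather than spooling it to disk, so chunked image
    # uploads are not broken the way they are behind Apache
    server glance1 127.0.0.1:9292 check
```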
14:31:34 +1, that is what we discussed with the QA team
14:32:01 it's not like we're going to tell anyone to run Ussuri on py27; it's just pointless to break it for the sake of breaking it if we can avoid it
14:32:37 We're not talking about removing six though. Just dropping the testing and updating the package metadata to only include py3.
14:32:45 Anyway, carry on with the plan.
14:33:07 Just trying to warn that things are going to be broken very soon and I don't recommend spending time fixing things that are going away.
14:33:33 Definitely by milestone 2 when oslo libs start dropping py2.
14:33:52 smcginnis, agree, we should fix this before M2
14:34:33 anything else on this?
14:35:01 We will stick with the plan
14:35:25 moving ahead
14:35:33 #topic Open discussion
14:35:43 sooo, related to the py2-dropping discussion: can you please approve https://review.opendev.org/#/c/659477/? It replaces my own https://review.opendev.org/697050 (I haven't checked for other reviews)
14:36:36 oh, that one. I don't think we should do it if we're not supporting py27 anymore :P
14:36:36 tosky, ack, will look after the meeting
14:36:52 why should we cap something on a version we're not supporting
14:37:40 We still need to do that to match global requirements.
14:37:49 agree
14:38:29 yebinama, do you want to discuss the multiple stores import work?
14:38:30 but I thought there was no need to sync them anymore; that's why the bot doing that was removed, right?
14:38:48 jokke_: it's still needed right now
14:38:53 abhishekk: yes, I have just one question
14:39:00 jokke_: No, there is still validation that is run that makes sure the requirements entries match.
14:39:23 It just doesn't automatically propose setting specific versions. Upper-constraints are raised for that now.
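The kind of py27 cap being debated here is typically expressed as a PEP 508 environment marker in requirements.txt, which the requirements validation job then checks against the global requirements list. A generic, hypothetical fragment (the package name and versions are purely illustrative) looks like:

```
# requirements.txt -- illustrative fragment only; "somelib" is hypothetical
# Cap on py27 to the last release line that still supports it:
somelib>=1.2.0,<2.0.0;python_version=='2.7'
# Uncapped floor on py3; the effective pin comes from upper-constraints.txt:
somelib>=1.2.0;python_version>='3.0'
```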
14:39:37 ok, saw tosky's comment as well on the patch; that makes sense
14:39:49 yebinama, after this (soon)
14:39:55 it's an unfortunate temporary technical issue, but that's life
14:40:38 tosky: yeah, it happens, and makes sense now
14:40:42 rosmaita, o/
14:40:50 \o
14:40:56 yebinama, the floor is yours
14:41:00 I was wondering why we should care, but yeah, it's not like these changes happen overnight on all projects
14:41:42 Regarding properties
14:42:00 do we add it to the spec?
14:42:25 For all cases or only when allow_failure is true?
14:42:30 I guess we should add it to the spec as well
14:43:09 I have a couple of pointers as well I think we haven't thought of, but let's go one by one.
14:43:22 yebinama: which properties are you worried about?
14:43:32 os_glance_failed_imports
14:43:44 and os_glance_successful_imports
14:44:13 you suggested them so that a normal user should not have to bug the admin/operator to know what happened to his request
14:44:36 I think successful_import is not needed when allow_failure is false
14:44:51 as we already update locations each time an import is successful
14:44:52 oh, so it's actually related to what I had in mind. And no, I did not suggest successful_imports.
14:45:10 yeah, that was me :D
14:45:15 so "successful imports" will show up in the stores list anyway
14:45:21 yep
14:45:32 same for "failed_imports"
14:45:36 the point of that non-failed list was to indicate stores we are still working on
14:46:03 means remaining stores?
14:46:24 let me describe a flow really quick:
14:46:54 cool
14:47:18 1) user requests copy/import to stores store1,store2,store3,store4; we accept the request and add that list to os_glance_importing_to_stores
14:47:49 2) we finish uploading to store1, we update the property and remove store1 from the list
14:48:40 3) store2 fails, we add that to os_glance_failed_import, remove it from os_glance_importing_to_stores, and move on
14:48:51 jokke_, got it
14:48:59 Seems reasonable.
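The three-step flow jokke_ describes can be sketched in Python. The property names (os_glance_importing_to_stores, os_glance_failed_import) come from the discussion, but the ImageImporter class, its methods, and the store names are hypothetical illustration, not Glance code:

```python
# Minimal sketch of the image-property bookkeeping described above.
# Only the two property names come from the meeting; everything else
# is an assumed, simplified stand-in for the real import machinery.

class ImageImporter:
    def __init__(self, stores):
        # 1) accept the request: record the full target-store list
        self.properties = {
            "os_glance_importing_to_stores": list(stores),
            "os_glance_failed_import": [],
        }

    def store_succeeded(self, store):
        # 2) upload finished: drop the store from the in-progress list
        #    (the store also shows up in the image's locations/stores list)
        self.properties["os_glance_importing_to_stores"].remove(store)

    def store_failed(self, store):
        # 3) upload failed: move the store to the failed list and move on
        self.properties["os_glance_importing_to_stores"].remove(store)
        self.properties["os_glance_failed_import"].append(store)


importer = ImageImporter(["store1", "store2", "store3", "store4"])
importer.store_succeeded("store1")
importer.store_failed("store2")
# store3/store4 are still pending, store2 is recorded as failed
```

Once the in-progress list empties out, the user knows the request is done, and anything left in os_glance_failed_import tells them which stores failed, matching the 14:50:24 summary.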
14:49:10 need schema modification for this
14:50:01 isn't a single property enough?
14:50:24 once we're done with the whole process, we have removed os_glance_import_to_store (an indication to the user that we're done) and there is still os_glance_failed_import=store2 telling the user that the import to that specific store failed
14:50:49 These are not sequential, right?
14:51:04 smcginnis: yes they are
14:51:13 yes they are
14:51:18 when will this property be emptied?
14:51:25 the failed one
14:51:29 yebinama: the failed one?
14:51:31 ok
14:51:38 OK, that makes it a little safer for updating the os_glance_failed_import value then.
14:52:09 so that can either be cleared by the user, or it will be cleared when we kick off a new copy-image import job against that image to retry that failed store
14:52:50 what about only using remaining_stores?
14:53:19 if we set allow_failure to true, it is enough as we fail as soon as an upload fails
14:53:24 it's just that as long as the tasks api is not available for regular users, we have no other means to keep the user updated on the process than relaying the messages via image properties
14:54:04 if we have allow_failure set to false, the user can know which store has failed by looking at the locations
14:54:19 well, even with allow_failure=false it's actually nice to have some indication to the user of what failed
14:54:29 sorry, I have inverted the value of allow_failure
14:54:53 last 5 minutes
14:54:54 as the revert will go and wipe them all, right?
14:55:31 yep
14:55:34 jokke_, yes
14:56:29 also related to this, the one question I was going to ask was: if we have allow_failure=true, should we go and activate the image once we have the first usable location, or wait until we're done with the request?
With allow_failure=false we must wait, but with true we don't
14:57:43 and in big environments this would be something which might improve user experience a lot, when they don't need to wait a day for the image to be uploaded everywhere before finally being bootable
14:57:47 jokke_, if we don't wait and someone requests that image and our process fails, then we might end up cleaning all the data from all available locations
14:58:15 abhishekk: with allow failure we shouldn't
14:58:34 jokke_, right, confused with the values
14:58:41 jokke_: yes, but if the stores are in different areas
14:58:54 with no direct access to the stores
14:59:10 last two minutes, then we will shift to openstack-glance
14:59:19 yebinama: then all the more there should not be reason to prevent using the image where it's already ready
14:59:21 the images will be downloaded by nova
14:59:32 and the boot of the vm will take more time
15:00:09 Let's continue on the openstack-glance channel
15:00:12 thank you all
15:00:18 TY
15:00:24 #endmeeting