14:00:05 #startmeeting glance
14:00:06 #topic roll call
14:00:06 Meeting started Thu Mar 12 14:00:05 2020 UTC and is due to finish in 60 minutes. The chair is abhishekk. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:09 The meeting name has been set to 'glance'
14:00:25 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:29 o/
14:00:52 o/
14:01:05 let's wait for 2-3 minutes
14:01:20 o/
14:02:02 o/
14:02:13 let's start
14:02:28 #topic release/periodic jobs update
14:03:05 We are in the last month of the Ussuri cycle; some of the important dates are:
14:03:07 Final release for non-client libraries - 2 weeks
14:03:14 Milestone 3 - 3 weeks
14:03:33 So guys please review all remaining and important patches
14:03:50 Surprisingly, periodic jobs are all green
14:03:57 for the last couple of days
14:04:20 Makes me wonder if it's actually working. :D
14:04:22 sorry, lost track of time
14:04:33 I have a local patch to suppress some deprecation warning messages
14:04:43 will submit it tomorrow
14:04:54 smcginnis, likewise :D
14:04:58 rosmaita, NP
14:05:12 moving ahead;
14:05:28 #topic S3 support for glance_store
14:05:56 This one is an important change and we agreed to get it in at the start of the cycle
14:06:24 the spec and patch are both in good shape, and we also have an etherpad which tells us how to test this change
14:06:39 spec, #link https://review.opendev.org/687390
14:06:57 Implementation, #link https://review.opendev.org/695844
14:07:09 nao-shark, do you want to share anything on this?
14:07:43 yes
14:07:51 please carry on
14:08:18 i also summarized how to configure the s3 driver on the etherpad
14:08:41 and i know the deadline for glance_store's final release is approaching
14:08:51 I'll try to find time to review that
14:08:56 so kindly review the above patches
14:09:02 thanks.
14:09:35 nao-shark: thanks for the very good documentation on how to utilize this, and sorry for neglecting it for this long
14:09:35 jokke_, thank you
14:10:00 moving ahead
14:10:17 #topic Delete image from single store
14:10:32 #link https://review.opendev.org/#/c/698049
14:11:08 Need review on this as it is an important change for us
14:11:18 rosmaita, jokke_ smcginnis kindly review
14:11:31 I have added my findings on the patch and made changes accordingly
14:11:42 ok
14:11:55 I hope we will find a middle way here (fingers crossed)
14:12:05 rosmaita, thank you
14:12:09 moving ahead
14:12:24 #topic Transition Rocky to EM
14:12:36 #link https://review.opendev.org/#/c/709888/2
14:12:48 #link https://etherpad.openstack.org/p/glance-stable-rocky-em
14:13:12 this etherpad lists some patches that are pending to be merged against stable/rocky
14:13:36 but IMO those are not important and not in line with the backport standards
14:13:57 I have given my approval for it, kindly suggest if you think otherwise
14:15:01 Yeah, looks like maybe some useful stuff in openstack/glance, but nothing too critical.
14:15:02 abhishekk: if those backports are not appropriate by the standards, are they blocked?
14:15:23 jokke_, no, not as of now
14:15:23 And we can still continue to merge fixes after we transition, just can't do an official community release.
14:15:25 as in -2'd or -W'd
14:15:38 ok, we might want to do that and abandon them after
14:15:42 jokke_, not yet
14:15:50 jokke_, ack
14:15:50 kk
14:15:58 will do this later tonight
14:16:08 also if you have time, kindly have a look
14:16:21 smcginnis, makes sense
14:17:02 Cool, moving ahead;
14:17:14 #topic Open discussion
14:17:45 So as per the community goal, we were missing some pieces in the contributors guide
14:17:49 I have added the same
14:17:54 kindly have a look;
14:18:02 #link https://review.opendev.org/712236
14:18:16 thank you to rosmaita for this :P
14:18:57 what did i do?
14:19:23 you submitted the same for cinder, I took it as a reference
14:19:33 ok, ok, as long as it was something good
14:19:44 thought maybe you were being sarcastic
14:19:56 haha
14:20:00 Reload is broken under py3
14:20:08 #link https://bugs.launchpad.net/glance/+bug/1855708
14:20:09 Launchpad bug 1855708 in Glance "Reload tests broken in Py3" [Critical,Triaged] - Assigned to Khuong Luu (organic-doge)
14:20:13 We are in serious trouble
14:20:20 jokke_, the stage is yours
14:21:31 ok, so this turned from a "flaky test" into an actually quite nasty bug
14:22:57 This was originally bundled into the same pile with our broken ssl test, and at least I thought it was a test issue with python 3. feshfood took a proper spin on it and we looked at his findings yesterday, as he couldn't figure out why the parent process disappeared under him in the middle of the test
14:24:00 well, it looks like the py3 multithreading changes actually properly broke our reload. So the current situation is that if you try to HUP the glance-api service to pick up config changes on the fly, you end up losing the parent process and leaking orphaned child processes
14:24:47 So a big hand to Khuong, who spent time on this and provided me enough details to draw initial conclusions, but this is nasty and we really need to figure it out
14:25:40 yes, this is very critical, or else we will be missing out on the big reload feature
14:25:42 and very easy to reproduce: just run glance-api with a bunch of workers, sighup it (even without config changes) and you see the outcome
14:27:01 Yes, so one thing which makes this super critical and nasty is that we have been marketing the multistore feature as making it very easy to add new stores without downtime: you can just add the store in the configs and reload, and the api should pick them up. Well, not exactly
14:27:05 i think our code was done before oslo.config supported reloadable config
14:27:19 but now it does
14:27:42 rosmaita, yes
14:27:43 but i think you have to mark the reloadable items explicitly or something
14:29:09 I can't quite recollect it, but there were some limitations; some config parameters cannot be reloaded
14:29:46 we do allow only a subset of configs too. what looks like the problem is that, iiuc, py3 multithreading works in a way where signal handlers are persistent and only processed in the main thread, and we overload different handlers onto child and parent, which might actually cause us to gracefully die on the parent instead of doing any of the reload logic
14:30:17 The only issue is where we store off the config item rather than reading it from oslo.config. So "self.value = CONF.value" during init, then just always referencing "self.value".
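As an illustration of the pattern smcginnis describes, here is a minimal, hypothetical oslo.config sketch (not Glance code): one class copies the value at init time and never sees a reload, the other reads it from CONF on each access. The option name `example_workers` and the class names are made up for illustration.

```python
# Hypothetical illustration (not Glance code) of caching a config value at
# init time versus reading it from oslo.config on every access.
from oslo_config import cfg

CONF = cfg.CONF
# mutable=True is what allows an option to change on a config reload.
CONF.register_opts([cfg.IntOpt('example_workers', default=4, mutable=True)])


class CachedWorkers(object):
    """Copies the value once; a later config reload is never noticed."""

    def __init__(self):
        self.workers = CONF.example_workers  # frozen at init time

    def count(self):
        return self.workers


class LiveWorkers(object):
    """Reads the option on each access, so a reload is picked up."""

    def count(self):
        return CONF.example_workers
```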
14:30:35 as the child hup handler is just "finish requests in flight and die"
14:31:20 smcginnis: thanks for that clarification
14:31:46 I don't know how the oslo_config reload works, but in most cases we actually maintain the socket and just respawn all the workers with the new config
14:31:59 or that's how it worked in py27
14:32:00 You do also have to explicitly mark opts as mutable or they won't change on a config reload.
14:32:40 If there's code you need to run on a reload, there's a hook for that too.
14:32:43 rosmaita, we don't use oslo.service, so the oslo.config reload will not work for us
14:33:20 glance even used to support changing the bind address on the fly, where we released the socket and rebound it if that was changed, but indeed that's a fully separate topic, as abhishekk said ^^
14:33:38 bnemec, AFAIK, this will work if and only if you are using oslo.service, right?
14:34:00 that's what I thought as well
14:35:34 You get it for essentially free in oslo.service, but it should be possible to mutate config without it.
14:35:52 i guess the question is how much re-architecting we will have to do, and if so, what direction it should take
14:36:28 yeah, I do not know yet. I'm still trying to find the actual rot cause; I know the symptoms and likely origin for them
14:36:33 root
14:36:47 Rot may be applicable too. :)
14:36:50 I will also spend some time on this
14:36:55 haha
14:37:04 mhm
14:37:29 I'm just happy that we found out now, not when someone had production failing under them due to this
14:37:37 yes indeed
14:37:40 +1
14:37:42 but this needs some cycles sooner rather than later
14:38:06 yeah, because with this release it's no longer possible to run under py27
14:38:20 indeed
14:38:24 yes
14:38:28 and we claim py3 support
14:38:32 If you're using cotyledon, it looks like that supports mutable config too, FYI.
14:39:06 cotyledon?
14:39:10 need to roll up our sleeves for this
14:39:15 so, we may need to send something to the ML saying that it's not a good idea to run train, etc. under py3 if you intend to use the reload-config-via-sighup feature
14:39:28 rosmaita: yeah, will do that
14:39:28 It was a non-eventlet replacement for oslo.service.
14:39:45 bnemec: ah, well we're running eventlet+oslo_config :P
14:40:19 I think we're on some modified incubator stage of oslo.service before it became oslo.service
14:40:32 yes
14:40:37 like we've been on lots of those incubator thingies
14:40:59 Ah, yuck. :-/
14:41:12 I remember I migrated glance to oslo.service once, but that was rejected at the time :P
14:41:41 it was craploads of refactoring and broke about a dozen core things :P
14:42:28 I remember you looking into it at the time
14:42:44 yeah
14:43:49 but that's it from me; at this point it's just awareness and a shout-out to be careful and have a look if you have spare cycles
14:44:15 I think jokke_ and I should coordinate with each other to share our findings
14:44:23 I'll keep my focus on those couple of critical reviews, this and the uncompress plugin for now
14:44:53 ack, I will try to spend some time on it as I don't have anything big atm
14:45:12 apart from the delete-from-store thing, which is already in good shape
14:45:30 it was in good shape 3 months ago :P
14:46:13 haha
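For reference, a minimal sketch of the oslo.config pieces bnemec describes above: options marked mutable, a SIGHUP handler that re-reads the config files, and a mutate hook, without oslo.service. This is not the Glance implementation; the names `example_opt`, `on_mutate`, and `example.conf` are placeholders, and the hook here just logs whatever it receives.

```python
# Hypothetical sketch (not the Glance implementation) of mutating config on
# SIGHUP without oslo.service.
import signal

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    # Only options registered with mutable=True are updated when the config
    # files are re-read; everything else keeps its old value.
    cfg.StrOpt('example_opt', default='old', mutable=True),
])


def on_mutate(conf, fresh):
    # Hook run after a successful mutation; 'fresh' describes what changed.
    print('config mutated: %s' % (fresh,))


def handle_hup(signum, frame):
    # In Python 3 the handler stays installed and always runs in the main
    # thread; here it simply re-reads the config files.
    CONF.mutate_config_files()


if __name__ == '__main__':
    # 'example.conf' is a placeholder config file assumed to exist.
    CONF(args=[], default_config_files=['example.conf'])
    CONF.register_mutate_hook(on_mutate)
    signal.signal(signal.SIGHUP, handle_hup)
    signal.pause()  # wait for SIGHUP; kill -HUP <pid> to trigger a reload
```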
14:46:49 cyril ran into something yesterday ... looks like that gate job cinderclient was using to make sure it still worked with v1 API wasn't actually testing patches until *after* they were merged to master already
14:47:13 we didn't notice because we never broke v1 compatibility, apparently
14:47:46 nice
14:47:59 rosmaita, yes, he backported some patches to stable/train yesterday
14:48:38 yeah, so my point is, don't use that zuul config as a model if you need to do something like that in the future!
14:49:04 ;)
14:49:13 good point, thnx
14:50:58 last 10 minutes
14:52:43 I'm good
14:52:58 rosmaita, smcginnis ?
14:53:09 nothing from me
14:54:09 cool, wrapping it up for today
14:54:21 guys keep reviewing important patches
14:54:25 Thank you all
14:54:33 Thanks all!
14:55:00 #endmeeting