14:00:35 #startmeeting glance
14:00:36 Meeting started Thu Nov 29 14:00:35 2018 UTC and is due to finish in 60 minutes. The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:40 The meeting name has been set to 'glance'
14:00:41 #topic roll-call
14:00:47 o/
14:00:48 o/
14:00:53 o/
14:01:01 o/
14:01:08 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:02:03 I think we have quorum here, so let's get started
14:02:11 #topic updates
14:02:55 First of all, I have a change up in project-config to get a Review-Priority vote for Glance, similar to what Designate is doing
14:02:58 #link https://review.openstack.org/#/c/620904/
14:03:59 you can see an example here:
14:04:04 #link https://review.openstack.org/#/c/616238/
14:04:08 the gate has gotten really slow, that's been sitting for an hour
14:04:17 yeah, I know
14:04:51 it's probably my fault ... i have a one-line cinder change that i've had to recheck 4 or 5 times
14:05:00 :P
14:05:08 :D
14:05:21 i like the review priority idea
14:05:45 +1
14:05:52 So the reason for this column is twofold ... first, it should help us communicate our priority reviews across the TZs a bit more easily
14:06:08 especially for Brian and Sean, who are not doing this full time
14:06:55 the second benefit is that we no longer need to use Code-Review -2 during any of the freezes ... so now we have the freeze merge block separate from our code review
14:07:10 nice
14:07:48 which brings us to the next important topic of today
14:07:52 Spec Freeze
14:08:37 I think we have gone through pretty much everything in flight, and whatever the core team has been on board with should be merged. If that's not the case, please let us know asap
14:09:27 As usual, spec lites can still be accepted on a case-by-case basis, but today marks the deadline for us to know what API/major changes are potentially coming this cycle
14:10:05 any questions/comments?
14:10:20 what's the status of the edge spec? are we trying to get that approved for Stein?
14:10:28 (i haven't had time to look at it yet)
14:10:39 rosmaita: you mean the caching one?
14:10:49 it is very vast
14:11:08 https://review.openstack.org/#/c/619638/
14:11:10 I am not sure it will be completed during Stein
14:11:18 yes, it's the caching enhancements spec
14:12:35 greg updated it yesterday afternoon
14:12:54 I need to have a look; Greg uploaded a new version of it, but I have to agree with Abhishek, I'm not convinced we have time to review it thoroughly and give meaningful feedback. If the team as a whole thinks this is something super important and we're all behind the proposal, I'm willing to consider an exception for it, but I don't want to rush it through today
14:13:20 +1
14:13:59 sounds good to me, we can always approve a spec freeze exception
14:14:13 i don't want to reject something if there's someone available to work on it
14:14:27 but on the other hand, don't want to rush through anything that will bite us later
14:14:29 I have one spec open for specifying image size during upload; solution 1 is a spec, solution 2 may be a spec-lite. But we need to discuss this more, and may not make the deadline now
14:14:56 I'm especially worried about the security aspects of the proposed split brain, and after we've fought for years to plug the bad behavior from our API, I really don't want to introduce a new Pandora's box without even a latch on it
14:15:00 can we ask for a PoC?
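For reference, the Review-Priority label mentioned at the top of the meeting is defined as a Gerrit ACL stanza in project-config. A sketch modeled on Designate's setup follows; the value descriptions are illustrative assumptions, not copied from the actual configuration. The -1 value is what replaces Code-Review -2 as the merge block during freezes:

```ini
# Sketch of a Gerrit ACL stanza (project-config, acls/*.config).
# "NoBlock" means the label is advisory except where a submit rule
# checks it; the value labels below are illustrative.
[label "Review-Priority"]
  function = NoBlock
  defaultValue = 0
  value = -1 Branch Freeze
  value = 0 No Priority
  value = +1 Review Priority
  value = +2 Urgent / Gate Blocker
```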
14:15:21 jokke_: ++
14:15:38 it would be good to get our policies situation fixed before proceeding
14:15:52 those could help
14:15:52 agree
14:16:13 that said, please take the time to review the spec and let's keep the discussion going
14:16:33 LiangFang: ok, so that has slipped from the last pass
14:16:42 could you link that again for the record
14:17:21 https://review.openstack.org/#/c/609997/
14:17:44 https://review.openstack.org/#/c/609994/
14:17:49 this is solution 2
14:17:52 #link https://review.openstack.org/#/c/609997/
14:18:05 ok ... so please let's get eyes on that
14:18:28 https://review.openstack.org/#/c/608400/
14:18:30 moving on
14:18:33 this is solution 1
14:18:36 thanks
14:18:38 LiangFang, the spec is in merge conflict
14:18:43 OK
14:18:44 #topic Release/Periodic jobs
14:18:59 We are in R-19 week; Stein milestone 2 is in the week of 7-11 January, which is about a month away
14:19:53 spec freeze is today, so if you want an exception please send an FFE mail and get approval from reviewers
14:20:26 Periodic jobs are all green, except one job that fails due to a timeout issue
14:20:56 Not much help from the logs, but I will continue to monitor
14:21:04 that's it from me
14:21:08 Is that the same job that has been sporadically failing for quite some time?
14:21:26 yes
14:21:40 glance-tox-functional-py35-glance_store-tips
14:21:41 ok ... so it's definitely a persistent condition then
14:22:12 http://zuul.openstack.org/builds?project=openstack%2Fglance&pipeline=periodic
14:23:14 If anyone has free cycles to look into it and perhaps get in touch with QA/Infra for some insight, that would be great
14:23:14 moving on
14:23:14 #topic running glance in devstack
14:23:14 rosmaita: your go
14:23:40 Just want to highlight the creeping urgency on this. It's coming up more and more weekly
14:23:48 #link http://lists.openstack.org/pipermail/openstack/2018-November/047195.html
14:23:59 yeah, people want to test out import and can't do it
14:24:17 i promised in that email to follow up on this
14:24:33 at some point we had a path forward, anyone remember what it was?
14:24:50 probably all new PTLs at this point
14:25:16 rosmaita: I think you have a patch up for it. And I've been talking with gmann about this.
14:25:32 right, my patch is still working
14:25:43 one of the CI jobs is failing on that
14:25:53 So we need to change it back to be deployed on its own and work around grenade to accept the fact
14:26:36 i think it's the grenade part that needs work
14:27:05 I think we've been hanging on this for long enough by now that we should be able to just force the change to grenade and get through it in one pass (grenade would not be going back and forth anymore)
14:27:55 makes sense
14:27:59 ok, i will try to find some time to see what i need to do to grenade and report back next week
14:28:04 as that was one issue with getting it through without grenade seeing two endpoint changes within one run over (I think 3) branches
14:28:28 rosmaita, if you are busy, pass it to me
14:28:38 rosmaita: ty, if needed let's find time to meet with gmann and get his insight on it
14:29:04 abhishekk: if i can't get answers by next meeting, i will pass it off to you
14:29:08 based on our conversation he has a pretty good idea what needs to happen to get it through
14:29:26 rosmaita, ack
14:29:32 jokke_: maybe we can set something up for monday or tuesday next week?
14:29:35 he would also know if something has changed and we should take another approach to it
14:29:50 yes, that would save time
14:30:06 rosmaita: I'll try to get in touch with him and see what we can plan
14:30:21 ok, great. it would be good to have this fixed by S-2
14:30:33 I would like you to be there, as you have the best idea about that patch and what your thinking was with it
14:30:37 yes
14:30:43 the sooner the better
14:30:53 moving on
14:31:11 #topic deprecated config options
14:31:17 that's me again
14:31:18 rosmaita: you can keep going ;)
14:31:24 #link https://etherpad.openstack.org/p/glance-stein-deprecations
14:31:39 as promised a few meetings back, here's the list of deprecated options
14:31:57 a bunch of them will disappear when the registry is removed
14:32:08 not sure what our timeline is on that
14:32:42 IIRC, based on the deprecation policy we can get rid of it at the beginning of Train
14:34:14 I think we slipped too late with the v2 registry deprecation to remove it this cycle
14:34:57 please do correct me if I'm wrong; I'm more than happy to make the time to do it
14:35:13 that's a good question for sean
14:35:21 looks like it was deprecated in Queens
14:35:37 I guess it will be better if we do it early in T
14:35:37 so we left it in for Rocky
14:35:51 so yeah, 2 cycles would mean we can do it in T
14:35:56 ok
14:36:20 i think it would be good to get rid of owner_is_tenant before someone actually uses it in a non-default setting
14:36:21 so ... how bad is our deprecation debt beyond the registry?
14:36:44 But we still have some options which were deprecated before Queens
14:36:49 i think mainly glance_store
14:37:16 those were deprecated in Rocky though?
14:37:35 abhishekk: we have a few that we deprecated ages ago, but which haven't been removed
14:37:46 around the locations
14:37:50 update capabilities says removal in Stein
14:38:00 oh yeah, that one!
14:38:11 woppee ...
14:38:23 and the swift ones can all go
14:38:32 #action jokke to get rid of "update capabilities"
14:38:48 ++
14:39:18 if someone is happy to have a look at the swift ones, feel free to take that on
14:40:23 anything else?
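For context on "the swift ones": the options in question are the in-file Swift credential settings in glance-api.conf, deprecated in favour of a separate reference file. A sketch of the replacement pattern follows; all values are placeholders, and the exact option set Glance deprecated may differ:

```ini
# glance-api.conf (sketch) -- instead of the deprecated in-file
# credentials (swift_store_user, swift_store_key, ...), point the
# Swift store at a separate reference file:
[glance_store]
stores = file,http,swift
default_store = swift
swift_store_config_file = /etc/glance/glance-swift.conf
default_swift_reference = ref1

# /etc/glance/glance-swift.conf (sketch)
[ref1]
user = service:glance
key = SECRET
auth_version = 3
auth_address = http://keystone.example.com:5000/v3
```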
14:40:32 show_multiple_locations showed up on the ML today
14:40:45 i think we need to update our statement about it
14:40:53 oh?
14:41:00 #link http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000335.html
14:41:15 are we still stating it's deprecated for removal, or was it just flagged as deprecated?
14:41:21 jokke_: abhishekk: please read over my reply
14:41:32 (to make sure it's accurate)
14:41:53 says pike or later since ocata
14:42:01 had originally been sooner
14:42:18 rosmaita, ack
14:42:25 but we are way past pike at this point
14:43:04 we really can't remove it until the locations policies are refactored
14:43:29 yes, it's reported as deprecated for removal; I think we just need to reword that to indicate that it will definitely be around until the policies are refactored to the shape where we actually can remove it
14:43:54 best workaround at the moment
14:44:10 i will put up a patch for that
14:44:17 ty
14:44:23 (updating the deprecation, not actually removing!)
14:44:37 rosmaita: a big hand for taking the time to go through these, thanks a lot
14:44:44 np
14:44:46 +1
14:45:13 can we move on? we have a bit of a time check here.
14:45:22 all done
14:45:28 yes
14:46:02 #topic ForeignKey constraint violation
14:46:13 ok
14:46:27 LiangFang: thanks for taking the time on this.
14:46:31 https://review.openstack.org/#/c/617889/
14:46:35 np
14:46:43 the issue is:
14:47:18 when doing glance-manage db purge, trying to delete a record in table task
14:47:18 rosmaita: abhishekk: we had a pretty well-covering discussion about this at last week's meeting.
14:47:40 jokke_, I went through the logs
14:47:47 LiangFang: I'd like to refer everyone to the meeting logs on this and to review your changes
14:47:51 but there are some records in table task_info
14:48:11 OK
14:48:12 thanks
14:48:25 that way you don't need to explain it all again ;D
14:48:37 yes, :)
14:48:38 ok, i will have to read through
14:48:48 thanks rosmaita
14:49:12 don't thank me yet, i actually have to do it!
14:49:24 I describe it in the bug also: https://bugs.launchpad.net/glance/+bug/1803643
14:49:25 Launchpad bug 1803643 in Glance "task_info FK error when running "glance-manage db purge"" [Medium,Confirmed] - Assigned to Liang Fang (liangfang)
14:49:34 #action everyone, let's get this fixed; while it's "only" breaking the purge, it's a pretty nasty issue which has a quite straightforward fix from what I can tell
14:49:41 it's weird that we haven't seen it before
14:49:57 might be we haven't used tasks
14:50:11 and me not being the DB guy, I didn't want to take a rushed decision on this
14:50:44 let's move on, last 10 minutes
14:50:50 yeah
14:50:58 #topic percent encoding
14:51:10 so this is carried over from last week as well
14:51:11 imacdonn is not around; he has a request for backport approval
14:51:14 request to backport to rocky
14:51:19 what abhishekk said
14:51:43 did we have some concerns or need for a better solution on this, or did we agree that this should be good as is for now?
14:52:07 i used to hope for a better solution, but i think we should do this small fix now
14:52:15 I remember us discussing this, rosmaita, but can't recall what we came up with
14:52:31 ok ... let's get it fixed and released then!
14:52:32 yeah, i am supposed to look into what it will take for a good solution
14:52:37 haven't done that yet
14:52:40 kk
14:53:11 #topic multiple back-ends default store on locations api
14:53:15 i think the request is also for a release?
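The purge failure discussed above (bug 1803643) can be reproduced in miniature: task_info rows hold a foreign key to tasks, so purging the parent table first violates the constraint, and the fix is to delete the child rows first. The schema below is a simplified sketch, not Glance's real one:

```python
import sqlite3

# Simplified stand-ins for Glance's tasks / task_info tables.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, deleted INTEGER)")
conn.execute("CREATE TABLE task_info "
             "(task_id TEXT NOT NULL REFERENCES tasks(id))")
conn.execute("INSERT INTO tasks VALUES ('t1', 1)")
conn.execute("INSERT INTO task_info VALUES ('t1')")

# Deleting the parent rows first trips the FK, as db purge does:
try:
    conn.execute("DELETE FROM tasks WHERE deleted = 1")
    purge_failed = False
except sqlite3.IntegrityError:
    purge_failed = True

# The straightforward fix: purge the child rows before the parents.
conn.execute("DELETE FROM task_info WHERE task_id IN "
             "(SELECT id FROM tasks WHERE deleted = 1)")
conn.execute("DELETE FROM tasks WHERE deleted = 1")
```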
14:53:30 this is from imacdonn as well
14:53:43 as expected, the locations API is biting us again
14:53:57 jokke_, last time you suggested we should test nova snapshot as well
14:54:32 I have a setup with glance using multiple ceph back ends; I have tested snapshot and it is working
14:54:46 do I also need to configure ceph for nova?
14:54:47 abhishekk: yeah, I think we have to a) get the default working on the locations b) work with nova to actually provide the store id for us
14:55:19 the problem will definitely be if glance and nova are using a non-default ceph store
14:55:39 so this needs documentation and a long-term fix
14:56:11 I am still confused here :(
14:56:15 so the situation is that to use the ceph fast snapshotting, the snapshot must be in the same ceph?
14:56:33 abhishekk: let's take this offline and I'll walk you through it
14:56:40 jokke_, sure
14:56:54 we have a couple of minutes for open discussion
14:56:59 #topic open discussion
14:57:13 jokke_: it would be good for you to communicate with iain about this
14:57:34 forgot from updates ... the default visibility survey is up and linked on the mailing list
14:57:41 otherwise, i wind up talking to him because of the TZ, and i don't have your ceph knowledge
14:57:49 great!
14:58:02 #action anyone, please fill in the default visibility survey
14:58:12 vote yes!!!
14:58:36 sorry, where is the survey
14:58:59 LiangFang: there is a link on the mailing list ... it's google forms
14:59:17 ok
14:59:35 jokke_: which ml did you send it to?
14:59:53 #link https://goo.gl/forms/gLDHArolvyoYX1LI2
15:00:00 I haven't received it (not sure)
15:00:01 rosmaita: openstack-discuss
15:00:20 ok, time, Thanks all!
15:00:24 #endmeeting
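For context on the multiple back-ends discussion above, a multi-store Glance setup with two Ceph stores is configured roughly like this; the store names, pools, and paths are placeholder assumptions:

```ini
# glance-api.conf (sketch): two RBD stores, one marked default.
# The locations-API problem discussed above arises when an image
# location is added without a store id and the default must apply.
[DEFAULT]
enabled_backends = fast:rbd, slow:rbd

[glance_store]
default_backend = fast

[fast]
rbd_store_pool = images-ssd
rbd_store_ceph_conf = /etc/ceph/ceph.conf

[slow]
rbd_store_pool = images-hdd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```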