14:01:27 #startmeeting glance
14:01:28 Meeting started Thu Dec 20 14:01:27 2018 UTC and is due to finish in 60 minutes. The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:31 The meeting name has been set to 'glance'
14:01:34 #topic roll-call
14:01:37 hello o/
14:01:37 o/
14:01:40 o/
14:01:44 o/
14:01:45 o/
14:02:54 sorry, we had a company EMEA call that ran till exactly the top of the hour, gimme a minute here
14:03:37 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:03:40 there we go
14:04:21 #topic updates
14:05:13 So as agreed we will not have a meeting next week on the 27th due to the EMEA and US holiday season
14:05:44 \o/
14:05:46 ack
14:05:55 yey
14:06:07 As we have M-2 coming up on the 10th, I will be in for the meeting on the 3rd of Jan
14:06:22 me as well
14:07:55 although, please note I will be travelling between New Year and mid-Jan, so at the start of January I will definitely not be available for most US timezones
14:08:10 ack
14:08:43 me neither, but for different reasons :)
14:09:05 ack :)
14:09:26 then to the technical side
14:10:27 The gating failures we saw were traced back to an oslo.policy change that was merged and released on pretty much the same day, so we did not get advance warning from our tips tests
14:11:37 I guess it started to fail on the same day the new oslo.policy version was released
14:11:58 did we ever add back the glance functional tests to the requirements gate?
14:12:13 this change started to make deepcopies of the target object for its internal processing, and due to our onion model and the fact that our policies are not at the API layer, it hit one of our nested proxy objects that were definitely not copyable
14:12:42 causing an infinite loop in the gate
14:12:53 #link https://review.openstack.org/#/c/625086/
14:13:30 #link https://review.openstack.org/#/c/625114/
14:13:52 we definitely should work through the latter before logging off for the holidays
14:14:17 although I will still be around for parts of next week
14:14:47 i will be around for this too
14:14:56 kind of important to get it fixed correctly
14:15:05 (item 6 on the agenda)
14:15:35 this was fun to debug, as it started to put so much output in the gate that it also broke the logging there, and the amount of detail we got from the gate job was next to none
14:16:15 huge hand to cdent and mriedemann for helping us out with this!
14:16:25 ++
14:16:26 +1
14:16:46 LiangFang has also tried from his end
14:18:17 absolutely, and my point was not to undermine any of the glance team's effort, but those two guys are not actively working on glance, yet they took the time and stepped in when understanding of gating etc. was needed
14:19:08 which was extremely helpful to filter out what was just jitter or side effects happening in the gate vs. what was actually breaking
14:19:59 ok ... let's move on so we do not run out of time
14:20:11 #topic release updates
14:20:18 abhishekk: stage is yours
14:20:31 yes
14:20:45 Stein Milestone 2 is 3 weeks away
14:21:05 Please push/review patches as much as possible
14:21:23 Periodic tips jobs are all green (finally)
14:21:34 Thanks to erno and the nova guys again
14:21:57 did the review priority stuff merge into gerrit?
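The deepcopy failure described at the start of this topic, where copying a layered proxy object that wraps non-copyable runtime state blows up, can be sketched roughly as follows. Note this is a minimal illustration, not glance's actual fix: `ImageProxy` and its fields are hypothetical stand-ins for the real proxy classes.

```python
import copy
import threading


class ImageProxy:
    """Hypothetical stand-in for one layer of glance's proxy 'onion'.

    It wraps plain image data together with runtime state (here a
    lock) that cannot be deep-copied; a caller such as oslo.policy
    that deepcopies its target would fail on an object like this
    unless the proxy handles copying itself.
    """

    def __init__(self, image):
        self.image = image
        self._lock = threading.Lock()  # deepcopy of a lock raises TypeError

    def __deepcopy__(self, memo):
        # Copy only the plain data and recreate the runtime state,
        # so external deepcopies of the policy target stay safe.
        return ImageProxy(copy.deepcopy(self.image, memo))


proxy = ImageProxy({"id": "abc-123", "name": "cirros"})
clone = copy.deepcopy(proxy)  # works only because of __deepcopy__
assert clone.image == proxy.image and clone.image is not proxy.image
assert clone._lock is not proxy._lock
```

Without the `__deepcopy__` method, `copy.deepcopy` would try to copy the lock and raise; with deeper cycles of mutually referencing proxies the same introspection can also recurse without bound, which matches the runaway gate output described above.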
14:22:00 That's it from me on this topic
14:22:20 not yet I guess
14:22:22 rosmaita: not yet, zuul doesn't agree with me on something there
14:22:44 jokke_: grab sean to help, he got it merged for cinder
14:23:27 just shoot him an email and ask him to take over the patch
14:23:36 nice ... I will, it's on my todo list for today, I'd like to get it in so we can utilize it already during the holidays when we're running a skeleton crew
14:23:44 exactly
14:24:25 he also set up a dashboard with a tiny url, ask him to do that too
14:25:17 Now I'd also like to highlight here that we do not do milestone releases anymore
14:25:37 but with M-2 the lib freeze is approaching very quickly
14:25:44 yes
14:25:59 so if there is anything major that needs to be done in glance-store, let's get that going
14:26:09 ack
14:26:27 same with our client patches, even though I think they freeze a bit later
14:26:30 is there anything other than finalizing the multiple back end support?
14:26:38 for glance_store, i mean
14:26:50 nothing as of now
14:27:14 * smcginnis sneaks in the back
14:27:20 I have one glance_store patch
14:27:38 abhishekk: and rosmaita already reviewed it
14:27:41 smcginnis: read the scrollback, i volunteered you for some stuff
14:27:43 I don't think so. The majority of the back-end handling changes we will likely need are on the glance end and in how the store is interfaced, rather than actual store changes
14:27:50 rosmaita: Hah, will do.
14:27:54 smcginnis: welcome welcome
14:28:20 ok, can we move on?
14:28:24 yes
14:28:45 one minute
14:29:03 there are some deprecated config opts in glance_store due for removal
14:29:38 #link https://etherpad.openstack.org/p/glance-stein-deprecations
14:29:43 ok, if they can be safely done, we should get those done
14:30:13 asap so we can get those gating for a while before hitting the deadline
14:30:35 take a quick look at the etherpad
14:30:57 'stores' and 'default_store' say subject to removal in stein
14:31:00 is that true?
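For context on the two options under discussion: `stores` and `default_store` configure the old single-config store model, while the newer multi-store model (discussed just below) replaces them with `enabled_backends` and per-backend sections. A rough sketch of the two styles, based on the glance documentation of that era (exact option names and sections may differ by release):

```ini
# Old style (deprecated, but still the stable mechanism in Stein):
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images

# New multi-store style (EXPERIMENTAL at the time of this meeting):
[DEFAULT]
enabled_backends = fast:rbd, cheap:file

[glance_store]
default_backend = fast

[fast]
rbd_store_pool = images

[cheap]
filesystem_store_datadir = /var/lib/glance/images
```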
14:31:09 we are not removing the stores and default_store options yet, so that needs to be changed
14:31:46 the swift ones should at least be ok to go
14:31:56 yeah
14:31:58 rosmaita: I don't think we can
14:32:00 Earlier (when the spec was approved) it was decided to remove them in S, but later we decided to keep them for another cycle or two
14:32:16 ok, i will put up a patch to amend the deprecation text for those two
14:32:33 #action rosmaita glance_store patch updating dep notice for 'stores' and 'default_store'
14:32:36 thanks
14:32:41 yeah, I think it was a bit hastily decided to be ready to get rid of the old style in Stein and just move to the multistore model
14:33:00 ok
14:33:38 any volunteers for the swift options?
14:33:44 basically the multi-store is still experimental, so we cannot expect production clouds to migrate to it for their upgrades to Stein to be clean if we remove those
14:34:15 right
14:34:37 reminder: we have the principle that N -> N+1 should work with the same config file
14:35:30 Let's move ahead
14:35:52 I can't promise I will be able to take those swift ones before the break, so if someone else has time to tackle them, feel free
14:36:00 ok moving on
14:36:13 #topic updating show_multiple_locations deprecation notice
14:36:22 that's me
14:36:23 rosmaita: this is yours I think
14:37:00 it's come up in the glance channel that some operators have taken the deprecation notice seriously and tried to set it to False and use policies
14:37:07 which of course we know cannot work
14:37:13 and they are quite annoyed
14:37:21 as you would be
14:37:26 anyway, i figure we should update the deprecation notice to be accurate
14:37:39 so please look at the patch and let me know what you think
14:38:05 #link https://review.openstack.org/625702
14:38:07 looks straightforward to me
14:38:22 i just wanted to make sure what i say there is accurate
14:38:37 yes, unfortunately that deprecation notice was done based on the promise of us having people to work on the
policy refactoring, which never happened, and the change slipped from us
14:38:46 agreed
14:38:52 so let's make that change _and_ backport it to stable
14:39:01 it's my fault for letting the notice go unchanged for so long
14:39:14 i kept thinking we were about to start work on it
14:39:26 rosmaita: well, I have been equally ignorant about that
14:39:39 how far back do we want to backport it?
14:39:56 should be pretty clean, just a deprecation note change and a release note
14:40:10 thus, let's push it to stable as well so we get those notices corrected for people who are running stable and trying to work with them
14:40:32 rosmaita: Rocky and Queens I'd guess
14:40:56 we still have ocata and pike in stable/
14:41:00 and once that is done, me or you can write to the mailing list about it
14:41:15 rosmaita: yes, but those two are not our responsibility
14:41:40 cool
14:42:21 ok, let's move on, thanks for bringing this up
14:42:36 np
14:42:39 #topic lower constraints under py36
14:42:48 me again
14:43:10 there is something very weird about the lower constraints job
14:43:22 cdent left a note on the test patch i put up
14:43:34 first of all, pardon my ignorance, I didn't realize we even had 3.6 jobs, I thought the community was focusing on 3.5 and 3.7
14:43:53 i looked at what he mentioned, and it does indeed seem like the job as currently configured is behaving properly
14:44:20 jokke_: i must admit, i lost track of what is happening with py3 efforts
14:44:26 3.6 is the current targeted runtime for stein.
14:44:47 The policy has been to go with whatever is the default on the LTS releases of our supported platforms at the start of the cycle.
14:44:56 what smcginnis said, so all the infra stuff is switching to run on py36
14:44:59 That means 3.5 is no longer the target and 3.6 is.
14:45:10 We need to be ready for 3.7 as that's likely what we will end up with for Train.
14:45:15 ohh
14:45:32 We've just been lucky that we've had a period there where there wasn't much change, so we got used to just sticking with the same version.
14:46:00 anyway, i have lost focus on what the lower-constraints are supposed to do, so i would like someone to take over the WIP patch i put up with the changes cdent suggested
14:46:19 there's a comment on the patch with a link to 2 pastes showing the different pip freeze outputs you get
14:46:31 #link https://review.openstack.org/626008
14:47:11 so the lower constraints, IIUC, are the minimum versions of dependencies we test and promise to be working
14:47:11 i don't know that this is a high priority
14:47:23 jokke_: Correct.
14:47:29 rosmaita, I will take this (but around the 26th)
14:47:49 so in that light this is fairly important
14:47:50 abhishekk: ty
14:47:54 thanks abhishekk
14:48:03 Looks like it's passing with that patch, so that's good.
14:48:08 no issues
14:48:41 yeah, i think we may need to adjust the l-c.txt based on this, but there doesn't seem to be any major problem (yet)
14:48:58 so what I'm a bit confused about is why we have version numbers in our requirements when we have upper and lower constraints, but in general that's the case ;)
14:50:02 also note that project based blacklisting does not work in the requirements
14:50:30 so us changing our requirements files has no effect on the gate, all those changes must be done in the requirements repo
14:50:43 exactly
14:51:02 this came to my knowledge when we were troubleshooting the gating issues discussed earlier
14:51:38 I tried to blacklist oslo.policy to see if it indeed was the cause of our issues, and the gate still installed the blacklisted version and used it
14:51:59 so again I'm not sure why we have those files in our repo in the first place
14:52:57 should we change tox.ini to use the blacklist?
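The relationship jokke_ describes (lower constraints as the tested minimum versions) can be sketched like this; the package name and version numbers are illustrative only, not the project's actual floors:

```
# requirements.txt -- declares the floor the project claims to support:
oslo.policy>=1.30.0

# lower-constraints.txt -- pins everything exactly at that floor, so the
# lower-constraints tox job installs and tests the declared minimums
# instead of whatever latest versions pip would otherwise resolve:
oslo.policy==1.30.0
```

If the job passes with these pins, the stated minimum is known to work; if code starts relying on newer library behaviour, the job fails until the floor in both files is raised.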
14:52:57 we have limited time and GregWaines has something to say in open discussion
14:53:05 i'm done
14:53:13 :D
14:53:51 #topic catch rbd NoSpace exception
14:54:00 LiangFang: you had this one
14:54:05 it's mine, thanks
14:54:42 i want to confirm with rosmaita, should I change image name to image id in this patch or in a followup patch?
14:55:06 LiangFang, it's good to fix it in the same patch I guess
14:55:15 OK
14:55:37 i wasn't sure if there was a good reason for rbd to use name instead of id?
14:55:40 we should always log the id
14:56:05 that was my thought, but i wasn't sure, since it's name everywhere else in that file
14:56:15 so if we have previously been logging the name somewhere, that is a bug
14:56:36 yeah, so maybe a separate patch would be better
14:56:38 as an image doesn't need to have a name at all, so that would just lead to the string being omitted
14:57:03 ok
14:57:10 rosmaita: for fixing the existing ones, yes. Anything that is introduced now should be fixed before merging
14:57:43 Last 3 minutes, next topic is already discussed
14:57:44 ok, cool. so LiangFang, use id on this patch in the code you touch, file a new bug, and then change the rest
14:57:45 we should not introduce buggy behavior even for consistency
14:57:54 ++
14:57:56 ok
14:58:00 ty
14:58:01 moving on
14:58:16 yeah, skip 6, though look at my comment on the patch, please
14:58:25 #topic Open discussion
14:58:29 I wanted to check in on the status of https://review.openstack.org/#/c/619638/
14:58:29 ( Image Caching Enhancements for Edge Cloud )
14:58:36 I have made updates based on the first round of comments from Erno and Abhishek.
14:58:36 The major change was to simplify the proposal by removing the capability of the Edge Cloud to create local images … this was causing issues and was not absolutely required.
14:58:40 the proxy topic was pretty well covered at the beginning of the meeting
14:58:45 Just wanted to check in on when the next round of reviews could be done?
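The logging guidance agreed in the rbd NoSpace topic above (always log the image id, never the optional name) might look like this in a store driver. This is a sketch only: `NoSpaceError`, `StorageFull`, and the store interface are hypothetical stand-ins, not the actual glance_store rbd driver code.

```python
import logging

LOG = logging.getLogger(__name__)


class NoSpaceError(Exception):
    """Stand-in for the RBD backend's out-of-space error (ENOSPC)."""


class StorageFull(Exception):
    """Stand-in for the exception glance_store raises to its caller."""


def add_image(store, image_id, data):
    """Write image data, translating the backend's no-space error.

    The log message uses the image *id*: an image is not required to
    have a name, so logging the name could emit an empty string.
    """
    try:
        store.write(data)
    except NoSpaceError:
        LOG.error("RBD store ran out of space while adding image %s",
                  image_id)
        raise StorageFull()
```

Catching the backend error close to the write and re-raising a store-level exception keeps the caller's error handling backend-agnostic, which is the shape of the fix discussed in this topic.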
14:59:20 GregWaines, I will review it in the next couple of days
14:59:30 do you have a PoC for this?
14:59:37 excellent ... thanks very much.
14:59:46 GregWaines: i will also commit to review asap
14:59:51 We do have a PoC for this ... although it is based on Pike
15:00:03 GregWaines: thanks for addressing those concerns. As you might have noticed if you were following the meeting, we've had a bit of excitement lately. I will try to do a review on that before going on my travels
15:00:06 great
15:00:09 GregWaines: is there any code implementation done?
15:00:26 Time's up.
15:00:33 indeed
15:00:35 thank you all
15:00:36 Thanks all!
15:00:38 happy holidays
15:00:40 Thanks!
15:00:41 GregWaines: and also, you are doing the right thing by showing up and asking for reviews!
15:00:41 #endmeeting