15:00:45 #startmeeting manila
15:00:46 Meeting started Thu Aug 10 15:00:45 2017 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:49 The meeting name has been set to 'manila'
15:00:53 hello o/
15:00:56 hello all
15:01:00 \o
15:01:01 hello
15:01:03 hi
15:01:04 hello
15:01:06 hello
15:01:08 hi
15:01:10 Hi
15:01:39 tbarron cknight toabctl: courtesy ping
15:01:41 @!
15:01:41 <_pewp_> jungleboyj ( ´ ▽ ` )ノ
15:01:45 hey
15:01:48 Hi
15:02:04 #topic announcements
15:02:14 The RC1 target date is today!
15:02:26 I think we're close to hitting it
15:02:50 we've had some fixes merge, and I booted out some low-priority bugs without fixes
15:03:10 in fact RC1 is the only topic on the agenda, so let's get started
15:03:21 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:26 #topic Bugs / RC1 Status
15:03:35 #link https://launchpad.net/manila/+milestone/pike-rc1
15:04:08 so first let's discuss the pair of quota bugs
15:04:16 #link https://bugs.launchpad.net/manila/+bug/1707378
15:04:17 Launchpad bug 1707378 in Manila "Quota usage value error in batch create share" [High,Incomplete] - Assigned to zhongjun (jun-zhongjun)
15:04:21 #link https://bugs.launchpad.net/manila/+bug/1707379
15:04:23 Launchpad bug 1707379 in Manila "Quota usage value error in batch delete" [High,Incomplete] - Assigned to zhongjun (jun-zhongjun)
15:04:44 and this proposed fix:
15:04:47 #link https://review.openstack.org/#/c/489501/
15:05:12 it sounds like we still have race conditions in the quota code
15:05:30 tbarron brought up this issue in Austin
15:05:41 yes, we still have many race conditions in the quota code
15:05:49 it appears tbarron is not around this week though
15:06:14 zhongjun: can you describe the approach for fixing this?
15:06:50 bswartz: We record the wrong quota usage (shares, gigabytes) value
15:06:51 when we delete one share many times at the same time.
15:06:51 Because we reserve -1 shares quota before we delete the
15:06:51 share record in the db, then we can continue to get share info
15:06:51 from the db and reserve quota -1 many times.
15:07:14 bswartz: Changed to move the quota reservation to the point
15:07:14 after the share record has been deleted in the db.
15:07:17 now
15:07:20 is it a race condition or a cleanup error?
15:07:47 looks like the latter
15:08:04 protection against multiple deletes should come from the DB state update from 'available' to 'deleting'
15:08:06 we should either roll back the reservation or commit it
15:08:40 why would we change the quota before updating the object state?
15:08:41 cleanup error for deleting
15:09:36 bswartz: Because one share has many share instances
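[Illustrative sketch of the ordering issue zhongjun describes above. This is not manila's actual code: the FakeDB class, the method names, and the sleep that widens the race window are invented for the example. It shows how reserving the -1 share quota before the share row is deleted lets two concurrent delete calls both decrement usage, while reserving only after the row is actually removed keeps the count correct.]

    import threading
    import time

    class FakeDB:
        """Toy stand-in for the shares table and the quota usage counter."""
        def __init__(self):
            self.lock = threading.Lock()
            self.shares = {"share-1": {"status": "available"}}
            self.quota_in_use = 1  # one share currently counted against the project

        def delete_share_row(self, share_id):
            # Returns True only for the caller that actually removed the row.
            with self.lock:
                return self.shares.pop(share_id, None) is not None

        def quota_reserve_and_commit(self, delta):
            with self.lock:
                self.quota_in_use += delta

    def buggy_delete(db, share_id):
        # Ordering before the fix: reserve -1 first, delete the row afterwards.
        # Both racing callers still see the share in the db and both reserve.
        if share_id in db.shares:
            time.sleep(0.05)               # widen the race window for the demo
            db.quota_reserve_and_commit(-1)
            db.delete_share_row(share_id)

    def fixed_delete(db, share_id):
        # Ordering after the proposed fix: only the caller that actually
        # deleted the row adjusts the quota usage.
        if db.delete_share_row(share_id):
            db.quota_reserve_and_commit(-1)

    def run_concurrent_deletes(delete_fn):
        db = FakeDB()
        threads = [threading.Thread(target=delete_fn, args=(db, "share-1"))
                   for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return db.quota_in_use

    print("buggy ordering:", run_concurrent_deletes(buggy_delete))   # typically -1
    print("fixed ordering:", run_concurrent_deletes(fixed_delete))   # always 0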
15:10:03 anyway, bug exists and should be fixed, what exactly should we decide in this meeting?
15:10:24 rc1 or next release ?
15:10:33 vponomaryov: We have two ways to fix it
15:10:39 vponomaryov: given that we're at the RC target date we have to decide whether this bug stays in Pike or gets punted to Queens
15:11:18 if we keep these bugs targeted at pike we need to fix them before we can release RC1
15:11:36 and I'm not sure there's a low-risk fix here for both issues
15:12:06 given that quotas are a longstanding problem area, pike would be no worse than ocata if we deferred this bug
15:12:40 but I hate just kicking the can down the road and zhongjun has done some work here to try to fix the issue
15:13:13 I guess the question is, can we merge the current fix as is, or is more work needed?
15:13:46 bswartz, need to review it more attentively first
15:13:52 we don't have to decide in this meeting -- we can handle the issue through code review
15:13:58 bswartz: need more review for it
15:14:01 but here we have a chance to discuss the patch
15:14:31 bswartz zhongjun: can this be backported to stable/pike? it doesn't seem like a ship stopper to me..
15:14:38 bswartz: I tested it before by myself, but only one guy took a look at it
15:14:45 however if the code reviews don't turn out positive this afternoon we'll have no choice but to punt it
15:15:14 anything else about these bugs before we move on?
15:15:28 gouthamr: It could be backported to stable/pike if we don't make many changes in the code
15:15:29 yeah, how many hours do we have before afternoon?
15:15:32 gouthamr: that's an option but it has downsides
15:15:57 vponomaryov: the moment we have resolution of the 3 remaining bugs I will push a tag
15:16:12 the gate infra has been extra cranky this week so I'd rather not wait
15:16:30 but we have at least 3 hours to review it
15:17:02 I have it downloaded in my dev environment and I'm testing it
15:17:12 but I don't have a multibackend setup
15:17:50 alright next bug
15:17:53 bswartz: It just needs multiple manila-api services
15:18:15 zhongjun: is it possible to do that w/ devstack?
15:18:42 I run with m-api under apache
15:18:52 bswartz: requires either manual change of screen configs or the manila devstack plugin
15:19:13 I could probably hack something up
15:19:15 bswartz: I don't think so
15:19:16 but not in 3 hours
15:19:22 bswartz: on a single node you will need to change ports for each subsequent instance of the API service
15:19:26 okay moving on
15:19:46 #link https://bugs.launchpad.net/manila/+bug/1659023
15:19:47 Launchpad bug 1659023 in Manila "Consistent Snapshots are broken in the NetApp cDOT driver" [Medium,In progress] - Assigned to Ben Swartzlander (bswartz)
15:19:55 #link https://review.openstack.org/491877
15:20:01 ^ looking at it
15:20:30 been wanting to fix this since late ocata -- when we killed the CGs code
15:20:55 thanks to vponomaryov for getting the share-groups stuff wrapped up in pike
15:21:19 this just needs reviews today
15:21:33 if anyone finds problems I'm happy to push more patchsets today
15:21:47 but if we find a serious issue we might need to punt this one too
15:22:04 * bswartz hopes we don't find a serious issue
15:22:59 bswartz: better to say "hope we don't HAVE serious issue" ))
15:23:07 true
15:23:17 i can review this.. the change looks sane in theory.. will try running the tests that exist for consistent_snapshot_support
15:23:34 if you already haven't, bswartz
15:24:27 gouthamr: I did manual testing -- I wasn't able to get the whitebox tests to run thanks to netapp's gerrit rejecting my SSH key
15:24:39 oh, that's a solvable problem
15:24:40 :)
15:25:17 okay that's it for my agenda
15:25:20 #topic open discussion
15:25:25 anything else for today?
15:25:31 #link https://review.openstack.org/#/c/492358/
15:26:03 We're missing a little parameter in manilaclient
15:26:08 doh
15:26:20 it will be easy to merge that fix in queens
15:26:33 how bad is it to not have it in pike?
15:26:52 not bad at all
15:26:54 just bad CLI user experience, i think..
15:26:57 oh wait
15:27:00 I think I ran into this bug
15:27:07 create a share-group-type and then add extra-specs later
15:27:07 and I assumed it was something I was doing wrong
15:27:11 We cannot create a share group with group specs
15:27:19 yeah that's exactly what I do gouthamr
15:27:19 in cli
15:27:32 okay this can easily wait until queens
15:27:38 thanks zhongjun
15:27:46 we can merge it though, can't we?
15:27:55 just not tag a release right away?
15:27:58 IMO, yes
15:28:01 gouthamr: yes, we still have another way to do it
15:28:08 stable/pike branched a while back for the client
15:28:30 ah yes..
15:28:34 10 days ago
15:28:47 so it's frozen now
15:28:57 but master should be open for queens-related fixes
15:29:21 okay just a reminder -- we have 3 weeks until the Pike release
15:29:29 there are still a lot of docs issues that need addressing
15:29:46 please use the next few weeks to update any docs related to changes in pike
15:29:56 and we have the docs migration to deal with too
15:30:35 Manila PTG will be the week after the Denver event
15:30:54 which is 4.5 weeks away I think
15:31:27 alright we're done for today
15:31:31 thanks for reviewing these last 2 changes
15:31:36 #endmeeting