15:01:19 #startmeeting manila
15:01:25 Meeting started Thu Apr 19 15:01:19 2018 UTC and is due to finish in 60 minutes. The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:29 The meeting name has been set to 'manila'
15:01:35 hi
15:01:36 hi
15:01:37 hello o/
15:01:40 hello
15:01:41 hi
15:01:47 manila courtesy ping: gouthamr zhongjun xyang markstur vponomaryov cknight toabctl bswartz ganso
15:01:53 \o
15:02:01 \o.
15:02:10 Hello all!
15:02:22 bswartz is now *officially* present.
15:02:34 Indeed
15:02:38 \o/
15:02:39 \o
15:02:39 #topic announcements
15:03:04 TC campaigns are underway. Read the ML (archives). Vote.
15:03:17 ^^ #1
15:03:46 #2) Spec freeze is today. Specs are on the agenda, so we'll talk about the freeze and possible extensions in a moment.
15:04:20 #3) I plan to cut Rocky milestone #1 today for manila, python-manilaclient, and manila-ui.
15:04:31 #link http://lists.openstack.org/pipermail/openstack-dev/2018-April/129542.html
15:04:36 Today is the deadline.
15:04:44 Any concerns ^^ ?
15:05:07 tbarron: are you familiar with the mechanics of cutting tags and everything?
15:05:12 lmk if you need any he;[
15:05:14 help even
15:05:20 bswartz: thanks!
15:05:31 * tbarron has bswartz on speed-dial
15:05:45 Any other announcements?
15:06:01 ok
15:06:06 #agenda https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:06:22 "end extensions" :)
15:06:23 #topic Spec Reviews
15:07:03 #link https://etherpad.openstack.org/p/manila-rocky-specs
15:07:16 gouthamr: I take it your remark pertains to these?
15:07:32 tbarron: yep, i see two not ready for merge
15:07:39 ok, sec
15:07:43 * vkmc sneaks in late
15:07:45 hi o/
15:08:08 \o
15:08:17 vkmc has been fighting noble battles
15:08:33 because horizon badly broke manila-ui
15:08:36 manila-ui battles?
15:08:42 amito: yes
15:08:52 * amito salutes vkmc
15:09:00 Back on the specs ....
15:09:13 The JSON schema validation spec merged.
15:09:18 #link https://review.openstack.org/#/c/553662/
15:09:45 How ready is the access-rule metadata spec?
15:09:59 #link https://review.openstack.org/#/c/551106/
15:10:03 tbarron: it has just one thing to discuss
15:10:04 It left one question
15:10:10 ganso: is your concern addressed yet?
15:10:20 I did not see it discussed in the comments
15:10:35 +1, I think we should discuss this a bit
15:10:37 ganso: there was a reply inline
15:10:44 tbarron: yes, I saw the reply
15:11:04 What is the outstanding issue?
15:11:06 it shouldn't hold the spec back, but it's worth discussing imo
15:11:12 I asked about changing the endpoint for creating access rules to share-access-rules using a PUT method
15:11:18 I would like to know what everyone thinks about this
15:11:44 zhongjun_ is right that it is probably too much to do in a metadata implementation; we could do it later
15:11:45 PUT instead of the current POST?
15:11:51 Do we have to change all the current access APIs to new APIs?
15:12:00 PUT overwrites an object
15:12:11 so, the old access APIs are actions on the share resource
15:12:12 tbarron: no, he's asking to change the API endpoint for creating access rules
15:12:18 If that's the semantics of what it does, then PUT is more appropriate
15:12:28 tbarron: The question actually is whether to use the new API instead of all the old access rule APIs
15:12:56 the metadata API creates a new resource and endpoints. This new resource is a share access CRUD, but it is lacking the create and delete functionality, which is still available through the old way
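A minimal sketch of the two API shapes under discussion, in Python with the requests library: the action-style call manila exposes today, next to the kind of dedicated share-access-rules endpoint being proposed. The base URL, token, project ID, and the dedicated endpoint path are illustrative assumptions, not an agreed design.

    import requests

    # Illustrative values only; these are not real endpoints or credentials.
    BASE = "http://controller:8786/v2/PROJECT_ID"
    HEADERS = {
        "X-Auth-Token": "TOKEN",
        "X-OpenStack-Manila-API-Version": "2.42",
    }
    share_id = "SHARE_ID"

    # Today: creating an access rule is an "action" POSTed to the share resource.
    requests.post(
        f"{BASE}/shares/{share_id}/action",
        headers=HEADERS,
        json={"allow_access": {"access_type": "ip",
                               "access_to": "10.0.0.0/24",
                               "access_level": "rw"}},
    )

    # Proposed direction (not agreed in this meeting): a first-class
    # share-access-rules resource with its own endpoint; whether it should be
    # POST or PUT was part of the debate.
    requests.post(
        f"{BASE}/share-access-rules",
        headers=HEADERS,
        json={"share_id": share_id,
              "access_type": "ip",
              "access_to": "10.0.0.0/24",
              "access_level": "rw"},
    )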
15:13:47 ganso: yes, it will be too much to do in a metadata implementation to use the new APIs instead of the old APIs
15:13:49 to create an access rule today you use POST /shares/{share-id}/action
15:14:03 and to delete one you use POST /shares/{share-id}/action
15:14:36 access rule is being treated as an attribute of the share resource
15:15:07 tbarron: yes
15:15:23 so, by the end of Rocky, if implemented as proposed in the spec, the access rules resources will have their functionality split between 2 resource endpoints. It looks a bit messy. It is ok for me if we all agree we would like to improve that later and deprecate the way of creating/deleting access rules through POST actions on share resources
15:16:01 zhongjun_: what do you think about that idea?
15:16:09 ganso: i disagree that it is going to be a bit messy, the latter part of your statement was my original intent
15:16:21 I am not saying it needs to be done in the access rules metadata work, I believe it shouldn't
15:16:44 gouthamr: well it looks messy to me :P
15:16:46 tbarron: I would like to keep the old API
15:17:35 the reason zhongjun_ wants to not deprecate the old API is because of users. That's my #3 item in the inline response
15:17:36 ganso: i'm suggesting a multi-step evolution for the access rules API - let's start with the GET and the new update
15:18:09 gouthamr: yes, a multi-step approach is the correct way to do this. That is, if we all agree that's what we want.
15:18:21 to achieve that, we would need to discuss that #3 concern
15:18:27 tbarron: We have used the current common access rule APIs for a long time; all users know them and have used them. Could we not change their habit?
15:18:56 zhongjun_: even multi-step with microversions?
15:19:34 microversions already avoid breaking applications. zhongjun_ seems concerned with the habit
15:19:45 tbarron: If we change this common API, it will break user software when they simply want to upgrade manila to the latest version. They would have to change their software code for the access rule API part.
15:19:50 zhongjun_: note that I'm not saying that *you* are responsible for the subsequent steps
15:20:12 tbarron: more concerned with the habit and keeping the change small for users
15:20:26 zhongjun_: if it is microversioned, it shouldn't break any software using the old API
15:20:36 Are you folks thinking of having a period where both are accepted and the old way is deprecated?
15:21:17 tbarron: yes, the old API would have an upper limit microversion, and would go away when we deprecate v2
15:21:17 tbarron: :) yeah, we are just discussing whether the API resource should be changed
15:21:18 but that would be doing it wrong.. what's the advantage of supporting two endpoints on a given version of the API?
15:21:33 gouthamr: the old version will not have the new features
15:21:59 ganso: deprecate v2?
15:22:02 :O
15:22:07 :O
15:22:09 lol
15:22:13 eventually
15:23:06 Is there general agreement that zhongjun_'s current spec and work don't have to solve the future-steps problem, just mention it as an outstanding issue?
15:23:17 deprecation is no big deal
15:23:22 Or do you folks think we can drive this to a good conclusion now?
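As a rough, self-contained illustration of the multi-step, microversioned evolution being discussed: the old handler stays available up to an upper-bound microversion and the new endpoint takes over from the next one, so existing clients pinned to an older microversion keep working. Manila's real dispatch lives in its WSGI layer; the handler names and the 2.42/2.43 boundary below are assumptions for the sake of the example.

    # Simplified stand-in for microversion gating; not manila's actual code.
    def parse(version):
        major, minor = version.split(".")
        return int(major), int(minor)

    # (handler name, minimum microversion, maximum microversion or None).
    HANDLERS = [
        ("old POST /shares/{id}/action access handler", "2.0", "2.42"),
        ("new dedicated access-rules handler", "2.43", None),
    ]

    def pick_handler(requested):
        for name, low, high in HANDLERS:
            if parse(low) <= parse(requested) and (
                    high is None or parse(requested) <= parse(high)):
                return name
        raise ValueError("no handler serves API version %s" % requested)

    print(pick_handler("2.40"))  # old clients keep getting the old behavior
    print(pick_handler("2.45"))  # clients opting into newer microversions get the new API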
15:23:29 It's the eventual removal of the old behavior where the problems start
15:23:40 The sooner you deprecate "bad" APIs the better
15:24:05 bswartz: +1
15:24:26 * tbarron remembers to route all customer cases pertaining to this to gouthamr
15:24:35 :-D
15:24:48 hahaha
15:25:16 I'm not against doing the right thing, but someone would need to write it up and I can keep it in our backlog.
15:25:16 tbarron: regarding the spec, I don't think there is anything to decide here, but some of us don't like the idea of fixing this future problem, and they don't see it as a problem, so it could maybe never be fixed
15:25:51 ganso: the question is whether you want to "hold this spec hostage" to resolution of the future problem
15:26:03 I don't
15:26:12 I just kept the -1 to have this discussion
15:26:16 and then either way, who is going to write up the future problem and drive it to resolution
15:26:17 If you are a client, would you like to change your habit? Changing all your old API calls to the new API?
15:26:19 ganso: ack
15:27:00 tbarron: yep, it is probably not going to be customer driven, so hardly anyone will pick this up, ever
15:27:02 OK, sounds like ganso will write another remark in the spec for the record and remove his -1, maybe vote +2
15:27:03 tbarron: the spec does say that one of the APIs will not be supported on API version >=2.43 (~) where zhongjun introduces the new API
15:27:06 +2
15:27:22 Have I got that right?
15:27:46 Can we merge this one today then?
15:27:46 gouthamr: yep, the GET API will stop working, so it will kinda break users' habit right there, right zhongjun_?
15:27:53 tbarron: we will merge it today
15:28:22 correction, the GET API will stop working >= 2.43
15:28:35 OK, those who care about the future-steps issue should write up a draft spec on the matter and we can backlog it.
15:28:37 ganso: yes, I also want to bring the GET API back if we get agreement
15:29:19 * gouthamr the "GET" API which is in reality POST /shares/{share-id}/action with body {'access_list': null}
15:29:31 zhongjun_: hmm, are you putting the consensus at risk at this point?
15:29:39 gouthamr: yes, the old GET
15:29:40 grrr, /action
15:30:23 tbarron: :)
15:30:45 zhongjun_, tbarron: I am not sure we will get agreement on restoring the old GET API
15:31:05 zhongjun_: we will be marking it with an upper microversion limit and we should not go back
15:31:24 zhongjun_: you need to spend a couple of minutes white-boarding with bswartz, i'm sure you won't feel the same about your concerns
15:31:35 heh
15:32:11 let's approve the spec as is. zhongjun_: if you persuade ganso and gouthamr offline we can always revise the spec.
15:32:22 gouthamr: hah
15:32:25 tbarron: yup, as is it is fine
15:32:40 +2'ed
15:32:40 #link https://review.openstack.org/#/c/546301 correct scenario test spec
15:32:57 This one has only procedural issues outstanding, right?
15:33:17 If so, then let's reach out to Nir and get them resolved but not worry about the deadline.
15:33:18 tbarron: yes, i can help with this one if the original author's missing
15:33:28 It's a spec correction.
15:33:32 Kind of a bug.
15:33:32 although, does it *have* to merge today?
15:33:41 I can help with that one as well
15:33:44 gouthamr: no, that's what I'm saying.
15:34:06 Let's get it done real soon now though, even if we push the git change ourselves.
15:34:30 #link https://review.openstack.org/#/c/552935 access rules priority
15:34:37 I'm the troublemaker on this one.
15:34:52 But I don't think we've talked it through all the way.
15:35:39 please read the latest comments and explain what's wrong with the "most specific match first" resolution of the ambiguity issue
15:35:51 we can do it offline and take another week or so
15:36:15 I was ok with the proposal last time I reviewed it but the latest discussion got me confused
15:36:30 access rules have been a painful part of our history and I don't want to rush a solution until we get a good consensus (if possible)
15:36:32 I think the point is whether the driver itself supports "most specific rule wins" now
15:36:33 I think it's just details we need to sort out
15:36:38 ganso: it is possible that I'm confused
15:36:46 bswartz: well, I don't consider not presenting priorities to the end user
15:37:05 which is what I propose
15:37:10 a "detail"
15:37:24 Well we have to decide what to specify in the case of a tie
15:37:28 bswartz: now it's possible that I'm just nuts on this one
15:37:41 bswartz: with my proposal there are no ties
15:37:53 Oh sorry I missed that
15:37:56 we could return an error in case of ties
15:38:02 bswartz: the latest comments on it are about not allowing user-defined priorities, so no API changes
15:38:04 why have ties ever?
15:38:04 That will make upgrades a bit harder
15:38:23 we only have conflicts in the api
15:38:30 tbarron: for backwards compatibility
15:38:33 gouthamr: how would the user choose which priority matters then?
15:38:47 ganso: they don't, take a look at the alternative
15:38:47 and deterministic, no-tie semantics in the access rule engines
15:39:11 bswartz: so I agree that compatibility would need to be thought through BUT
15:39:25 bswartz: +1 about upgrade
15:39:29 we don't know what the old behavior actually is except for our favorite back ends
15:39:46 gouthamr: I am not sure I understood, but I'd disagree, because the manager sorting stuff and deciding is deterministic, but it could end up in a way opposed to what the user desires
15:39:54 So that's why I want to think about this one a bit.
15:40:11 My suggestion is to allow ties and to leave how ties are handled as unspecified (but equal to legacy behavior)
15:40:13 gouthamr: and I didn't see a way to circumvent that and achieve what the user wants
15:40:31 That makes upgrades and backwards compatibility as painless as possible
15:40:32 The only thing we don't allow the user to do with my proposal is "mask" out more specific rules with more general ones.
15:40:58 tbarron: But the user may want that
15:41:07 I'm not convinced that that "use case" isn't a recipe for confusion and trouble cases with customers anyway, and
15:41:21 we don't know that all back ends can do it
15:41:39 so there are pluses and minuses here I think
15:41:50 For backends that can't, drivers can easily compensate
15:42:12 I really don't see the value of the so-called use case.
15:42:13 The drivers can choose not to send down rules which they know will be shadowed by more general rules
15:42:29 I imagine it would be used in transient situations, not on an ongoing basis
15:42:30 It's quite contrived.
15:42:51 bswartz: I am concerned by how to implement such logic, because there are far too many combinations, and the user may want to intentionally do that
15:43:01 For example, if I want to force all my shares to read-only for a short period while I fix something, but I don't want to have to delete all my rules out of manila to achieve that
15:43:34 bswartz: you are creative :)
15:43:39 bswartz: +1
15:44:26 Let's continue this discussion in the review for one more week. I promise not to be a blocker if
15:44:44 I can't persuade the community.
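To make the no-ties, "most specific rule wins" idea concrete, here is a small self-contained Python sketch for IP rules: of all rules matching a client, the one with the longest prefix applies, so a host rule always beats a broader subnet rule and two distinct matching rules cannot tie. This only illustrates the proposal being debated; it is not the spec's algorithm, it ignores user/cert rule types, and it sidesteps the backward-compatibility questions raised above.

    import ipaddress

    # Example rules: a broad read-write subnet rule and a more specific
    # read-only host rule. Values are illustrative.
    rules = [
        {"access_to": "10.0.0.0/16", "access_level": "rw"},
        {"access_to": "10.0.1.5", "access_level": "ro"},
    ]

    def effective_level(client_ip, rules):
        client = ipaddress.ip_address(client_ip)
        matches = []
        for rule in rules:
            network = ipaddress.ip_network(rule["access_to"], strict=False)
            if client in network:
                matches.append((network.prefixlen, rule["access_level"]))
        if not matches:
            return None  # no rule matches: access denied
        # Longest prefix is the most specific rule; distinct matching
        # networks never share a prefix length, so there are no ties.
        return max(matches)[1]

    print(effective_level("10.0.1.5", rules))  # 'ro' -- the host rule wins
    print(effective_level("10.0.2.7", rules))  # 'rw' -- only the /16 matches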
15:44:46 that is a great use case that needs to be mentioned on the spec ^
15:44:48 My bigger concern is how we implement backwards compatibility
15:45:07 And for that, we need to allow ties with legacy behavior
15:45:11 But right now I really like the simplicity and clarity of the shortest-match semantics.
15:45:15 but enumerating all rules, denying them and adding them back is easy too :)
15:45:39 gouthamr: not if you're a UI-clicker
15:45:44 I agree that the compatibility issue is important and promise to think about it.
15:46:21 bswartz: maybe we can fix that, bulk export 'export-rules' and bulk import 'export rules' via the UI :)
15:46:36 The most-specific-match proposal is listed in the spec as an alternative but there are no reasons given not to do it, so please fill
15:46:46 those out if we're going to reject it.
15:46:51 tbarron: I also agree about compatibility; I think bswartz's way is a good way to solve it
15:47:10 zhongjun_: so please explain that in the spec
15:47:15 I missed it in my spec
15:47:35 OK, any problem with taking one more week on this one?
15:47:38 tbarron: sure
15:47:58 #topic Bugs
15:48:07 dustins: ?
15:48:20 #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:48:23 I've got a few from last week, yeah
15:48:39 We'll likely run out of time, but we can get through a few!
15:48:57 https://bugs.launchpad.net/manila/+bug/1762900
15:48:58 Launchpad bug 1762900 in Manila "tearDownClass manila_tempest_tests.tests.api.test_share_networks_negative.ShareNetworksNegativeTest Failed" [Undecided,New]
15:48:58 * gouthamr we don't have bugs, 12 minutes is plenty
15:49:17 gouthamr: If only that were true
15:50:31 dustins: any flavor here?
15:50:36 is this another tempest resource cleanup issue?
15:50:44 bswartz: not really
15:50:44 interesting
15:51:12 tbarron: yes, it seems like another thing where our tests are not cleaning up after themselves
15:51:13 It does feel like cleanups are possibly being done in the wrong order
15:51:14 this is a share-server cleanup issue
15:51:47 it was found in newton (osp10)
15:52:10 i.e., the share server still exists past the test and we're trying to tear down the dynamic credentials used to run the test class
15:52:23 * tbarron jokes: close with "please upgrade" and reopen if you hit it again
15:52:48 tbarron: looks like someone's trying to run through RH-OSP certification here
15:52:57 poor devil
15:53:36 well we should see if we can fix this one
15:53:42 Totally
15:54:08 * dustins thanks whoever added the summaries
15:54:19 Any volunteers for looking at this one?
15:54:19 tempest is branchless, so unless this was fixed in a later commit we still have the issue
15:54:25 share.create_networks_when_multitenancy_enabled = False
15:55:01 interesting, i think the fix for this was proposed by Yogesh a while ago
15:55:22 gouthamr: is the patch still around?
15:55:34 tbarron: i may be wrong, but this one: https://review.openstack.org/#/c/493962/
15:55:48 it hasn't made it to ocata and newton
15:55:52 was fixed in queens
15:56:33 ok, we can put it in stable/ocata and tell the victim to submit a downstream BZ for osp10
15:56:49 Heh, victim
15:56:54 stable/ocata is being maintained now
15:57:05 2018-04-03 09:14:55.131 1036768 DEBUG tempest [-] share.share_network_id = 20439dd1-77b9-4f11-8ee9-ea944101dbdf
15:57:14 from the tempest log
15:57:48 i think there's a correlation here - they're pre-configuring the share-network
15:57:57 so they're only ever going to have 1 share server
15:58:15 throughout the test suite
15:58:46 do we expect tempest to pass with that configuration?
15:58:57 tbarron: no, hence the bug fix
15:59:01 right
15:59:13 gouthamr: ok, will you update the bug?
15:59:22 tbarron: yep, i'll ask some questions
15:59:24 I'll review the patch.
15:59:45 Just add reviewers to the patch as appropriate.
15:59:48 * bswartz peeks at the clock
15:59:51 We're out of time.
15:59:54 ;-(
15:59:57 Thanks everyone!!!
15:59:58 tbarron: although Yogesh'll probably be helpful here if he has time away from kubernetes testing
16:00:11 old clouds, new clouds
16:00:14 #endmeeting
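For context on the tempest failure triaged above, the reporter appears to be running with a pre-created share network, roughly the tempest.conf fragment below. The option names and the UUID come from the log quoted in the meeting; the rest of the file is omitted and this is not a recommended setup. With network creation disabled, every test class reuses one share network (and therefore one share server), which is the situation gouthamr links to the fix at https://review.openstack.org/#/c/493962/.

    [share]
    # Reconstructed from the options quoted in the meeting log above.
    create_networks_when_multitenancy_enabled = False
    share_network_id = 20439dd1-77b9-4f11-8ee9-ea944101dbdf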