15:02:03 <bswartz> #startmeeting manila
15:02:04 <openstack> Meeting started Thu Jun 16 15:02:03 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:06 <dustins> m3m0: no worries!
15:02:08 <bswartz> hello all
15:02:09 <openstack> The meeting name has been set to 'manila'
15:02:09 <cknight1> Hi
15:02:10 <markstur> hello
15:02:11 <vponomaryov> hello
15:02:11 <ganso> hello o/
15:02:14 <dustins> \o
15:02:17 <aovchinnikov> hi
15:02:18 <tpsilva> hello
15:02:19 <zhongjun_> hello
15:02:31 <tbarron> hi
15:02:45 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:56 <bswartz> #topic announcements
15:03:12 <bswartz> hopefully you all saw the ML post about the midcycle etherpad
15:03:32 <bswartz> #link https://etherpad.openstack.org/p/newton-manila-midcycle
15:03:35 <bswartz> suggest topics there
15:04:11 <bswartz> also, regarding manila-stable-maint: we will NOT be adding the whole core reviewer team to that group
15:04:40 <bswartz> reviewers interested in joining the stable-maint team need to nominate themselves and we'll add individuals to the team
15:05:27 <bswartz> I know of several of you who want to be on that team, if anyone else does please let me know
15:05:57 <bswartz> the main requirement to be on that team is that you have to understand the stable-maint rules and enforce them ruthlessly
15:06:07 <bswartz> :-)
15:06:13 <gouthamr> enforce them ruthlessly, sounds interesting
15:06:29 <bswartz> #topic A share support multiple protocols
15:06:47 <vkmc> o/
15:06:54 <zhongjun_> link: https://review.openstack.org/329392
15:07:07 <zhongjun_> There are two methods to implement the multiple-protocol share feature.
15:07:25 <zhongjun_> Please see which one is suitable for manila, or which one is definitely not suitable for manila. Thanks
15:08:10 <bswartz> zhongjun_: will you summarize here for those who haven't read the spec?
15:08:44 <zhongjun_> Use case: A customer may want to use one share from his Linux system, and also use the same share from his Windows system or something else. The customer
15:08:45 <zhongjun_> can operate on the same share data from different operating systems.
15:09:06 <zhongjun_> Method 1: we could let a share have multiple protocols.
15:09:06 <zhongjun_> Method 2: we could let multiple shares be connected to one.
15:09:38 <ganso> ^ "to one instance"
15:09:50 <bswartz> I dislike method 2 because it would in strange API semantics
15:10:06 <bswartz> what if I resize a share -- the other share magically resizes too?
15:10:15 <zhongjun_> bswartz: of course
15:10:26 <bswartz> it would result* in strange API semantics
15:10:33 <vponomaryov> as said in review for spec, variant 2 is not even discussable, IMHO
15:10:53 <gouthamr> multiple shares 1 instance, 1 share multiple instances.. this could get real confusing
15:10:53 <bswartz> vponomaryov: +1
15:10:57 <cknight1> bswartz: +1  Method 2 breaks the semantics for share instances completely
15:11:11 <cknight1> vponomaryov: +1
15:11:22 <bswartz> method 1 is worth discussing though
15:11:27 <tbarron> +2
15:11:36 <tbarron> +1 i meant :)
15:11:41 <zhongjun_> but method 2 will not change the allow/deny access API
15:11:54 <bswartz> so this has come up a number of times before
15:12:31 <bswartz> Originally it was dual CIFS+NFS access, which NetApp has always supported (but not in Manila) and which the generic driver could easily support if we decided to add it
15:12:32 <zhongjun_> In method 1, resizing a share is no problem.
15:12:59 <bswartz> Then the Ceph guys proposed CephFS+NFS access
15:13:16 <bswartz> zhongjun_: which specific use case did you have in mind?
15:13:30 <bswartz> as vponomaryov pointed out, the spec lacks a problem statement
15:13:44 <tbarron> yeah, having an NFS or CIFS gateway in front of a share that is natively another protocol is i think a valid use case
15:14:10 <zhongjun_> bswartz:  CIFS+NFS access
15:14:54 <zhongjun_> bswartz: yes, I will add the problem statement later.
15:15:11 <bswartz> so one thing we would need to decide, if we allow this, is whether to only allow specific combinations of protocols, or to allow arbitrary combinations of protocols
15:15:51 <bswartz> if CIFS+NFS and CephFS+NFS are the only 2 use cases, then we could special case those 2 and reduce the complexity of solving this I think
15:16:22 <vponomaryov> bswartz: any-native + NFS is possible
15:16:32 <ganso> bswartz: are you assuming this is not something worth implementing in a generic way?
15:16:38 <tbarron> vponomaryov: +1
15:17:00 <bswartz> ganso: I'm just thinking ahead to the specific technical problems this will introduce
15:17:08 <bswartz> and trying to limit the scope of what we have to deal with
15:17:29 <tbarron> eventually it would need to be solved generally, but maybe short term could be done more narrowly scoped
15:17:53 <tbarron> the narrowly scoped solutions should be removable in favor of more general solution though if we go that way
15:18:35 <zhongjun_> bswartz: the original idea is to allow arbitrary combinations of protocols
15:19:08 <bswartz> so where this gets ugly is with access allow, and with file metadata
15:19:31 <bswartz> access allow is currently designed to use different things for different protocols
15:19:46 <bswartz> when we have dual protocol we might need to specify for which protocol the access is being modified
15:19:56 <tbarron> yeah
15:20:24 <ganso> the backend would also need to be aware that adding an IP rule may require adding it for more than one export in the backend; usually the exports store the access rules
15:20:25 <vponomaryov> bswartz: it is proposed in spec
15:20:56 <bswartz> yes -- I think that will work
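The per-protocol access-rule idea the spec proposes (and bswartz endorses above) can be sketched as follows. This is an editorial illustration only: the names `ALLOWED_TYPES` and `validate_access_rule` are invented here and are not Manila code; the sketch assumes NFS access is IP-based and CIFS access is user-based, which matches how Manila access types are typically used per protocol.

```python
# Illustrative sketch: per-protocol access rules for a dual-protocol
# share. Names and structure are hypothetical, not Manila internals.

ALLOWED_TYPES = {
    "nfs": {"ip"},      # NFS exports are typically gated by client IP
    "cifs": {"user"},   # CIFS shares are typically gated by user identity
}

def validate_access_rule(protocol, access_type):
    """Reject rule types that make no sense for the given protocol."""
    allowed = ALLOWED_TYPES.get(protocol.lower())
    if allowed is None:
        raise ValueError("unsupported protocol: %s" % protocol)
    if access_type not in allowed:
        raise ValueError(
            "access type %r not valid for %s" % (access_type, protocol))

# A CIFS+NFS share would then carry rules tagged with the protocol
# they apply to, so allow/deny can name its target protocol:
rules = [
    {"protocol": "nfs", "access_type": "ip", "access_to": "10.0.0.0/24"},
    {"protocol": "cifs", "access_type": "user", "access_to": "alice"},
]
for rule in rules:
    validate_access_rule(rule["protocol"], rule["access_type"])
```

Tagging each rule with its protocol is what lets the API "specify for which protocol the access is being modified" without changing the shape of the existing allow/deny calls.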
15:21:13 <bswartz> and the issue of file metadata -- that's the reason we decided not to do this originally
15:21:49 <bswartz> I think it's inevitable that a NetApp CIFS+NFS share will exhibit different behavior from a Generic CIFS+NFS share when it comes to metadata handling
15:21:58 <tbarron> and in general users don't expect metadata translation on a share
15:22:01 <bswartz> or any other backend for that matter
15:22:06 <tbarron> in general
15:22:15 <bswartz> tbarron: don't they?
15:22:24 <tbarron> between CIFS and NFS for instance
15:22:35 <tbarron> not full
15:22:51 <bswartz> If I request a CIFS+NFS share, presumably it's because I want to actually use it for both CIFS and NFS, and in that case I need my mode bits and ACLs to be sane across protocols
15:23:30 <bswartz> It's no good if I create a file on my CIFS share owned by bswartz, and then I access it through NFS and it's owned by "nobody"
15:23:44 <tbarron> some stuff has to map, but not everything will.  "sane" doesn't mean it all maps.
15:24:01 <tbarron> backend responsibility
15:24:08 <ganso> maybe we should document that there will be restrictions
15:24:24 <ganso> it relies a lot on security service configuration
15:24:29 <bswartz> the point is that users will care about the behavior to a certain degree, and if we can't guarantee that behavior then it's a useless feature
15:25:11 <zhongjun_> ganso: yes, the security service needs to be changed in method 1
15:25:49 <bswartz> it will boil down to "you get what you get" and nobody will be able to write applications without knowing what backend is behind manila (which defeats the purpose of manila)
15:26:44 <bswartz> if we can find a way to guarantee some minimum standard then I'm more interested
15:27:13 <bswartz> but I know from experience that this is wickedly complicated
15:27:18 <cknight1> bswartz: +1
15:27:23 <tbarron> is there a common minimum mapping for NFS and CIFS?
15:27:32 <bswartz> NFSv3 and NFSv4 don't even play particularly well together
15:27:35 <cknight1> bswartz:  It's gotta just work and be consistent
15:27:56 <tbarron> I think we could document what to expect for NFS + ceph
15:28:22 <tbarron> so as you point out we already have the issue within "NFS"
15:28:29 <bswartz> tbarron: CephFS has the advantage of only having a single implementation, so whatever it does is automatically standard
15:29:04 <bswartz> as long as cephfs doesn't change what it does between versions I suppose
15:29:15 <tbarron> but there could conceivably be different NFS gateway implementations, so I agree that there should be some documentation of what to expect
15:29:35 <bswartz> in the case of CIFS+NFS, you need some external concept of users and mappings of those users between protocols
15:30:22 <tbarron> or NFSv3 and NFSv4 really
15:30:40 <zhongjun_> +1  I will add it.
15:30:46 * tbarron is getting a headache
15:30:46 <bswartz> "bswartz" has a UID for NFSv3, a name for NFSv4, and a SID for CIFS
15:31:21 <tbarron> so we have an implicit external representation of bswartz and the guy who has those three things
15:31:22 <bswartz> those have to be defined somewhere -- currently we rely on the security service for those definitions
15:31:35 <tbarron> as the guy
15:32:15 <tbarron> who may also live in NIS, LDAP, or a spreadsheet somewhere
15:32:18 <bswartz> however nowhere do we have the information that bswartz UID 12699 is the same as bswartz@netapp.com is the same as SID S-1-5-1234090579835-234582349857892345-234582739485
15:32:53 <tbarron> or just in the same cube, using those three things
15:33:44 <bswartz> so a NetApp controller stores the above information in a mapping table on the device itself
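The cross-protocol identity mapping bswartz describes (one human, three protocol-specific identities) can be sketched as a small lookup table. Everything below is made up for illustration: `IDENTITY_MAP` and `resolve` are hypothetical, the SID is a placeholder, and no such structure exists in Manila today — which is exactly the gap being discussed.

```python
# Illustrative only: the kind of identity map a backend (or a future
# Manila API) would need so the same person resolves consistently
# across NFSv3, NFSv4, and CIFS. All values are placeholders.

IDENTITY_MAP = [
    {
        "nfsv3_uid": 12699,                  # numeric UID seen by NFSv3
        "nfsv4_name": "alice@example.com",   # principal seen by NFSv4
        "cifs_sid": "S-1-5-21-0-0-0-1001",   # Windows SID seen by CIFS
    },
]

_KEYS = {"nfsv3": "nfsv3_uid", "nfsv4": "nfsv4_name", "cifs": "cifs_sid"}

def resolve(protocol, ident):
    """Return the canonical record for a protocol-specific identity,
    or None if no mapping is known (the 'owned by nobody' case)."""
    key = _KEYS[protocol]
    for entry in IDENTITY_MAP:
        if entry[key] == ident:
            return entry
    return None
```

Today this mapping lives (implicitly) in NIS, LDAP, AD, or a vendor device's own table; the open question above is whether tenants could define it through Manila itself in DHSS=true mode.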
15:34:52 <bswartz> with the generic driver, we would have to configure Samba very carefully to do the right thing
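For the generic driver, "configuring Samba very carefully" largely means pinning down winbind's identity mapping so SID-to-UID translation is deterministic across reprovisioning. A hedged `smb.conf` fragment along those lines — the domain name and ID ranges are placeholders, and this is a sketch of the approach, not a tested configuration:

```ini
[global]
    security = ads
    workgroup = EXAMPLE
    # Deterministic SID -> UID/GID mapping, so a user's files show the
    # same Unix ownership over NFS as over CIFS.
    idmap config * : backend = tdb
    idmap config * : range = 3000-7999
    idmap config EXAMPLE : backend = rid
    idmap config EXAMPLE : range = 10000-999999
    winbind use default domain = yes
```

An algorithmic backend like `rid` avoids the nondeterminism of allocation-on-first-sight, which is one of the subtle per-deployment differences that feeds the "not knowing what you'll get" problem raised below.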
15:35:14 <markstur> user mapping would be up to the admin
15:35:28 <markstur> vendor support or lack-of would be a capability or a requirement
15:35:57 <vponomaryov> markstur: I already imagine all the sufferings of admins
15:36:07 <markstur> yep
15:36:10 <ganso> if this needs admin intervention I believe this can no longer make sense for DHSS=True mode
15:36:37 <bswartz> markstur: that's not possible with security services
15:36:45 <bswartz> ganso: +999
15:36:46 <markstur> in dhss=true the security service is in play, but still mapping is needed somewhere
15:37:05 <markstur> to allow cross-protocol
15:37:24 <bswartz> the whole point of security services is that, as a tenant, I can run my own AD controller and my own Kerberos/LDAP domains
15:37:29 <markstur> unless there is a cross-protocol w/ mangled acls as a feature
15:37:53 <vponomaryov> built a generic driver with two protocols? Congratulations! You just achieved "God-like" rank.
15:38:07 <dustins> hahaha
15:38:24 <markstur> So I'd think the cross-protocol use case we want would be only the one where we have a sane user mapping
15:39:01 <markstur> but I think that is dhss=false and too much responsibility on the admin to do stuff outside of manila
15:39:17 <bswartz> The only way this can make sense for DHSS=true drivers with security services is to provide a mechanism for tenants to define their own user mappings...
15:39:37 <markstur> ok
15:39:56 <bswartz> and I'm nervous about undertaking that effort
15:39:58 <vponomaryov> it requires very experienced users
15:39:59 <markstur> but ultimately it is the subtle differences from one vendor to another that would be the hard part
15:40:13 <markstur> the not-knowing-what-you'll-get problem
15:40:38 <bswartz> markstur: precisely
15:41:04 <bswartz> I'm sure we could implement this and get it to "mostly" work
15:42:07 <markstur> Yep. For single-vendor environment probably worth a try, but the multi-vendor inconsistency is a bad problem unless we have something reasonable like 2 well-known behaviors we can deal with
15:42:10 <bswartz> but there would be weird behaviors that would creep in and any user wanting to consume dual protocol would be left wondering how compatible their application really was with other backends (other than what they tested on)
15:42:47 <ganso> bswartz: either way, users should not need to worry about backends
15:43:00 <bswartz> yeah that's part of our mission as an openstack project
15:43:16 <bswartz> everything you need to know you can get through the manila API
15:43:34 <bswartz> there's no secret out of band information needed to consume the manila API
15:44:43 <bswartz> anyways, we have another topic on the agenda, so I want to table this
15:45:01 <markstur> is there a # command for tabled?
15:45:03 <bswartz> It's still a very interesting use case with obvious value if we can only figure out how to provide it in a common way
15:45:20 <bswartz> markstur: midcycle etherpad!
15:45:52 <bswartz> we need to discuss this more and I hope people can come up with ideas to define a minimum standard that can be depended on
15:45:53 <zhongjun_> bswartz: ok
15:46:14 <bswartz> and the user mapping problem with DHSS=true is something we need to sort out too
15:46:28 <bswartz> so let's keep this discussion going
15:46:30 <tbarron> how is it that uid 12699 is the same as bswartz@netapp.com not secret out-of-band info today?
15:46:58 <tbarron> agree that we can table, and agree that it's a real issue
15:47:01 <bswartz> tbarron: today you can't use both in the same context, so it doesn't matter
15:47:25 <bswartz> oh wait
15:47:37 <bswartz> no you're talking about NFSv3+NFSv4
15:47:45 <tbarron> yeah
15:47:48 <bswartz> yeah that's a hairy mess that I'm not happy with
15:48:12 * bswartz shakes his fist at the NFSv4 designers
15:48:13 <tbarron> so i agree that we should think more before going further down that road
15:49:00 <bswartz> tbarron: in theory if you use NIS you can set up a security service where the NIS server provides that mapping
15:49:10 <tbarron> right
15:49:27 <bswartz> we need to move on however
15:49:40 <bswartz> #topic Drivers with minimum size for shares
15:49:49 <tpsilva> thanks bswartz
15:49:51 <bswartz> tpsilva: you're up
15:50:03 <bswartz> openstack: ping
15:50:28 <bswartz> oh the bot is on the fritz
15:50:33 <tpsilva> so I'm implementing a driver for one of our backends and it has a minimum share size limitation
15:50:44 <bswartz> how big is the minimum?
15:51:00 <tpsilva> I know that one of the EMC drivers for cinder had an issue like this, but they solved it by rounding it in the driver
15:51:02 <ganso> bswartz: 128 gb
15:51:03 <tpsilva> bswartz: 128GB
15:51:07 <vponomaryov> and how small is the maximum? ))
15:51:30 <xyang1> The EMC one has limit on multiple of 8gb
15:51:33 <tpsilva> vponomaryov: some PBs... don't remember it
15:51:34 <bswartz> yeah I was present for the cinder discussions and EXTREMELY unhappy with the outcome
15:51:56 <bswartz> so what is the granularity if you exceed the minimum?
15:52:07 <bswartz> can the backend support 129GB?
15:52:09 <jseiler__> vponomaryov, +1
15:52:12 <tpsilva> yes
15:52:14 <ganso> bswartz: yes
15:52:21 <tpsilva> i mean, you cannot create shares with less than 128 GB
15:52:32 <tpsilva> but anything bigger than that is ok
15:52:45 <bswartz> is it possible to create a 128GB share, but to limit usage of the share to some smaller amount using quotas?
15:53:06 <tpsilva> no, unfortunately
15:53:11 <vponomaryov> tpsilva: and your backend does not support thin provisioning?
15:53:21 <tbarron> note that if you "solve" this in the driver your quotas will still be off
15:53:34 <bswartz> it seems like this backend is broken!
15:53:38 <tpsilva> vponomaryov: it does, but with a 128 GB default minimum, a share created with 1 GB will have a real size of 128 GB
15:53:49 <tbarron> even if you return a model update that corrects the usage in the database
15:53:56 <dustins> Hmm, creates an interesting problem for Tempest tests as well, but that could be fixed...
15:54:03 <ganso> tbarron: we do not allow such model update
15:54:06 <dustins> Just have an option for minimum share size
15:54:10 <tpsilva> what I would ideally want is some kind of filter in the scheduler
15:54:14 <xyang1> in tempest you can set size
15:54:23 <bswartz> no -- I am opposed to playing games with the share size in manila
15:54:31 <vponomaryov> tpsilva: you map share to volume directly?
15:54:36 <tbarron> ganso: nor IMO should we, but I lost the cinder argument
15:54:51 <tbarron> ganso: either way your quotas are wrong
15:54:53 <bswartz> it seems less bad to silently give the user more space than they asked for
15:55:00 <tpsilva> vponomaryov: share to filesystem in this particular backend
15:55:16 <xyang1> ganso: you have the same issue in cinder?
15:55:35 <tpsilva> bswartz: that's one way to solve it
15:55:38 <ganso> tbarron: yes, the quotas need to be correct, if a 128gb share in this backend is created, the user will be able to reach 128gb, and this needs to reflect on his manila quotas
15:55:49 <cknight1> bswartz, tpsilva: What do you propose?  A ShareSizeFilter?
15:55:59 <ganso> xyang1: yes, we have the same problem for cinder
15:56:22 <vponomaryov> cknight1: share size filter is really good solution for it
15:56:22 <tpsilva> cknight1: Ideally, yes... but apparently, bswartz is against it
15:56:25 <bswartz> ganso: the only way I can see that happening is if we add a "minimum share size" option to manila itself so the admin can force users to ask for shares of no less than 128 GB
15:56:53 <vponomaryov> bswartz: just filter it out on scheduler step as cknight1 proposed
15:56:54 <dustins> And that would also carry over into the suggestion for Tempest as well
15:57:01 <ganso> bswartz: that looks confusing for users
15:57:06 <bswartz> if it's an error to ask for a 127GB share, then we should surface that error all the way back to the user
15:57:27 <bswartz> I strongly dislike giving users something other than what they ask for
15:57:33 <markstur> would this backend be happier with an "unlimited" share feature?
15:57:41 <bswartz> if we can't give them what they ask for, it should be an error and we help them ask for the right thing
15:57:46 <tbarron> bswartz: +1, and the driver should document their "feature"
15:57:51 <ganso> if users request 127, it should land on a backend that supports a 127gb size share. If user requests 128, then it may land on this backend
15:57:58 <gouthamr> markstur: petabytes != unlimited
15:58:00 <cknight1> bswartz: it's hard to do this check in the API layer, but a user message could say why the request didn't pass the ShareSizeFilter
15:58:12 <ganso> markstur: that would not solve the problem
15:58:14 <cknight1> bswartz: and the default minimum share size could be 1
15:58:14 <tpsilva> markstur: not sure if that's possible
15:58:26 <markstur> well nothing is really unlimited.  but unlimited has been proposed in the past
15:58:34 <ganso> markstur: I am assuming you mean something like size = null, so unlimited
15:58:37 <xyang1> the scheduler does not provide a nice user message though
15:58:37 <gouthamr> oh you mean no size..
15:58:48 <ganso> since this backend maps to a filesystem, the filesystem needs a size
15:58:51 <xyang1> it says no host found
15:58:52 <bswartz> unlimited is not infinite
15:59:00 <gouthamr> ameade's here, user messages are still a possibility :)
15:59:17 <vponomaryov> time check
15:59:27 <bswartz> unlimited means we don't enforce any limit -- but the laws of physics will inevitably enforce some limit
15:59:34 <tpsilva> the idea of just letting the user know that this is an error is also good, but how?
15:59:41 <markstur> so filter seems like the best thing for now -- but it is unfortunate the user won't get a message that says "try asking for a little more space please"
15:59:54 <bswartz> let's take this discussion to the channel
15:59:59 <tpsilva> k
15:59:59 <cknight1> filter + user message seems reasonable
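The ShareSizeFilter converged on here could look roughly like the sketch below. It mimics the shape of Manila scheduler filters (a `host_passes(host_state, filter_properties)` predicate), but is written standalone, and the capability names `min_share_size`/`max_share_size` are assumptions made for illustration, not an agreed interface.

```python
# Hedged sketch of the proposed ShareSizeFilter. In Manila proper this
# would subclass the scheduler's base host filter; here it stands alone
# so the logic is visible. Capability names are assumptions.

class ShareSizeFilter(object):
    """Filter out backends that cannot serve the requested share size."""

    def host_passes(self, host_state, filter_properties):
        size = filter_properties.get("size", 0)
        caps = getattr(host_state, "capabilities", None) or {}
        min_size = caps.get("min_share_size", 1)
        max_size = caps.get("max_share_size", float("inf"))
        # A 127 GB request is steered away from a backend reporting
        # min_share_size=128 at scheduling time, instead of the driver
        # silently rounding up (which would break quotas).
        return min_size <= size <= max_size
```

Paired with user messages, as suggested above, the API could then report *why* no host passed rather than a bare "no valid host found".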
16:00:05 <gouthamr> cknight1 +1
16:00:06 <bswartz> #endmeeting