15:02:07 <bswartz> #startmeeting manila
15:02:10 <openstack> Meeting started Thu Jan 23 15:02:07 2014 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:13 <openstack> The meeting name has been set to 'manila'
15:02:37 <bswartz> hello folks
15:02:41 <bswartz> who do we have?
15:02:43 <vponomaryov> hi
15:02:47 <xyang1> hi
15:02:47 <gregsfortytwo1> hi
15:02:48 <bill_az> Hi
15:02:51 <vbellur> hello
15:02:51 <achirko> hello
15:02:53 <rraja> hi
15:02:58 <caitlin56> hi
15:03:14 <bswartz> wow we've got everyone it looks like
15:03:56 <bswartz> once again I don't have anything special for today's agenda
15:04:11 <bswartz> I know some drivers are getting close to being done
15:04:26 <bswartz> so let's do the status update first
15:04:33 <bswartz> #topic dev status
15:04:42 <vponomaryov> I will update about it
15:04:54 <vponomaryov> Dev status:
15:04:55 <bswartz> vponomaryov: ty
15:05:01 <vponomaryov> 1) network-api
15:05:01 <vponomaryov> Successfully merged, it includes: security-services entities, share-networks and its linking.
15:05:01 <vponomaryov> Not documented for now;
15:05:01 <vponomaryov> documentation is expected to be added here: https://wiki.openstack.org/wiki/Manila/API
15:05:31 <vponomaryov> 2) Multitenant Generic driver (work in progress) - https://review.openstack.org/#/c/67182/
15:05:31 <vponomaryov> For now, it:
15:05:39 <vponomaryov> a) creates service vm for manila in service tenant (vm of nova + volumes of cinder)
15:05:48 <vponomaryov> b) uses an ssh connection via network namespaces to connect the service vm and the user vm (neutron approach, no floating ip).
15:05:58 <vponomaryov> c) allows creating shares and snapshots
15:06:09 <vponomaryov> work left for generic driver:
15:06:17 <vponomaryov> d) Implement handling of rules (NFS and CIFS)
15:06:17 <vponomaryov> e) Implement proper handling of the service VM after a manila restart, for each VM state
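As an aside, the SSH-via-namespaces approach mentioned above can be sketched as building an `ip netns exec ... ssh ...` command line. This is a hedged illustration only: the namespace name, user, address, and remote command below are invented examples, not Manila's actual code.

```python
# Sketch of reaching a service VM through a Neutron router namespace
# instead of a floating IP. All names and addresses are hypothetical.

def netns_ssh_cmd(namespace, user, host, remote_cmd):
    """Build an 'ip netns exec ... ssh ...' argument list."""
    return [
        "sudo", "ip", "netns", "exec", namespace,
        "ssh", "%s@%s" % (user, host), remote_cmd,
    ]

cmd = netns_ssh_cmd("qrouter-1234", "manila", "10.254.0.3", "exportfs -ra")
print(" ".join(cmd))
```

In practice the driver would pass such a list to a process-execution helper; the point is that the SSH session is launched inside the tenant router's namespace, so no floating IP is needed.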
15:06:33 <vponomaryov> 3) NetApp Cmode multitenant driver - https://review.openstack.org/#/c/59100/
15:06:43 <vponomaryov> Main work is done, left:
15:06:49 <vponomaryov> a) implement proper removal of errored shares
15:06:49 <vponomaryov> b) test setting of all provided security services data (depends on environment)
15:07:11 <vponomaryov> The NetApp Cmode driver and the generic driver are multi-tenant; they depend on the share-network entity, which currently has only a neutron plugin and cannot work without neutron.
15:07:40 <vponomaryov> Other already existing merged drivers are single-tenant and don't depend on share-network entity:
15:07:40 <vponomaryov> - LVM driver (supports NFS and CIFS, but SID rules are not implemented)
15:07:40 <vponomaryov> - glusterfs driver (supports only NFS)
15:07:40 <vponomaryov> - netapp 7mode driver
15:07:51 <bswartz> and the "generic" driver cannot work without nova or cinder either
15:08:04 <vponomaryov> bswartz: right
15:08:13 <bswartz> so manila will start out being heavily dependent on the rest of openstack
15:08:13 <vponomaryov> that's all for status
15:08:24 <bswartz> we might want to think about whether that's a good thing or a bad thing
15:08:55 <vponomaryov> generic driver isn't production-oriented
15:09:00 <bswartz> I personally think that modularity is important, and dependencies are generally bad
15:09:12 <vbellur> bswartz: +1
15:09:20 <bswartz> however if removing dependencies creates a bunch of complexity then that's not good either
15:09:31 <bswartz> that can be a future topic
15:09:37 <bswartz> maybe something to discuss in atlanta
15:09:46 <bswartz> for now I'm more interested in getting something production ready
15:10:01 <bswartz> thanks for the comprehensive update vponomaryov!
15:10:19 <caitlin56> For any given site the set of manila drivers would be very static, so some setup complexity would be quite tolerable.
15:10:23 <bswartz> just to jump back to the generic driver and the SSH communication -- where is that code?
15:11:00 <bswartz> caitlin56: complexity and tolerable should not appear in the same sentence!
15:11:15 <vbellur> vponomaryov: what are the reasons behind generic driver not being production ready?
15:11:16 <bswartz> caitlin56: that way of thinking is a slippery slope
15:11:17 <xyang1> vponomaryov: are you using a custom image for the generic driver when you launch a VM from nova?
15:11:46 <vponomaryov> bswartz: that code in gerrit topic https://review.openstack.org/#/c/67182/
15:12:10 <achirko> bswartz: https://review.openstack.org/#/q/status:open+project:stackforge/manila+branch:master+topic:bp/generic-driver,n,z
15:12:29 <bswartz> vbellur: I think the point is that the top priority for the generic driver is to function as a software only implementation for the purpose of POCs and automated testing
15:12:30 <vponomaryov> vbellur: for now, it is pretty slow, and it will work much faster with a real manila backend driver
15:12:44 <bswartz> I would like to see the generic driver become production quality
15:13:04 <vponomaryov> xyang1: yes, we are using an ubuntu cloud image with NFS and Samba servers installed
15:13:06 <vbellur> bswartz, vponomaryov: got it, thanks.
15:13:40 <xyang1> vponomaryov: thanks.  so this image has to be checked in, right
15:14:08 <bswartz> vponomaryov, xyang1: We need to make sure that administrators can run an image of their choosing in the service VM for the generic driver
15:14:19 <bswartz> xyang1: no!
15:14:53 <xyang1> bswartz: admin has to create their own custom image to use the generic driver?
15:15:00 <bswartz> What we need is to document the requirements for the service VM, so that it's possible to build an ubuntu-based service VM or a redhat-based service VM, or a SLES-based service VM
15:15:02 <vponomaryov> xyang1: the image dependency should be left to the administrator or devstack
15:15:46 <bswartz> as long as it is a linux image with SSHD, NFSD, and Samba, it should be okay
15:16:16 <xyang1> vponomaryov: is this image you are using downloadable already, or did you install the nfs and samba servers and then create a image yourself?
15:16:25 <vbellur> vponomaryov: +1
15:16:26 <bswartz> maybe someone will take up the task of building a minimal service VM image for manila -- something like Cirros but with file share capabilities
15:17:17 <vponomaryov> xyang1: we have downloaded ubuntu cloud image, installed nfs and samba, and enabled passwords for ssh, thats all
15:17:22 <bswartz> I think the task for the manila team is to simply document the requirements that the generic driver will rely on and leave it to the administrator to find or build something suitable
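The requirement list above (SSHD, NFSD, Samba reachable on the service VM) could be validated with a trivial port check. This is a minimal sketch of the idea, not anything in Manila; the service-to-port mapping uses the standard well-known ports, and real validation would of course probe a live host.

```python
# Required services for a generic-driver service VM, per the discussion
# above, mapped to their standard ports. Purely illustrative.
REQUIRED = {"sshd": 22, "nfsd": 2049, "smbd": 445}

def missing_services(open_ports):
    """Return the required services whose ports are not in open_ports."""
    return sorted(name for name, port in REQUIRED.items()
                  if port not in open_ports)

# An image with only SSH reachable is missing both file services:
print(missing_services({22}))  # -> ['nfsd', 'smbd']
```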
15:17:38 <xyang1> vponomaryov: ok, thanks
15:17:47 <csaba> bswartz: we have a PoC Cirros hack that goes as far as possible toward a usable nfsv3 client; it could be enhanced along said requirements
15:17:59 <bswartz> xyang1: I don't think so
15:18:10 <bswartz> xyang1: oops nm
15:18:47 <bswartz> csaba: that sounds exciting
15:19:05 <bswartz> jumping back up though
15:19:13 <vponomaryov> yeah, cirros with nfs and samba is pretty interesting thing, but we need both protocols
15:19:36 <bswartz> vponomaryov: do you have an estimate for when the generic driver will be complete?
15:19:48 <vponomaryov> we hope, next week
15:20:02 <bswartz> okay that's what I expected
15:20:16 <bswartz> and the NetApp driver too?
15:20:29 <bswartz> any issues with NetApp cmode other than LDAP support?
15:21:02 <vponomaryov> we need to test it with all variants of environment
15:21:14 <vponomaryov> active directory, ldap, kerberos
15:21:54 <bswartz> okay
15:21:56 <vponomaryov> if we are successful, next week too
15:22:24 <bswartz> okay let's hope for no problems
15:22:27 * bswartz crosses his fingers
15:22:43 <bswartz> #topic generic driver
15:23:06 <bswartz> okay what's preventing the generic driver from being production ready?
15:23:14 <bswartz> is it mostly performance limitations?
15:23:34 <aostapenko> yes
15:24:00 <bswartz> is running NFSD in a VM inherently slower than running it on bare metal?
15:24:06 <caitlin56> poor performance even with a single vserver?
15:24:20 <bswartz> is it a question of the backend performance w/ Cinder?
15:24:53 <vponomaryov> also, if we restart manila, it should validate service vm
15:25:02 <vponomaryov> each share one by one
15:25:12 <vponomaryov> and it takes a while
15:25:28 <bswartz> I realize that a whitebox file server running open source software will not really compete with custom hardware/software, but I'd like the generic driver to be at least as good as other pure OSS approaches on comparable hardware
15:25:29 <vponomaryov> if we try to create something during this period, it fails
15:25:37 <vbellur> how bad is the performance with the generic driver at the moment?
15:26:17 <aostapenko> if we find a good lightweight image for the service vm it will be much better
15:26:18 <achirko> Because we use cinder to provide volumes and nova for vservers, we introduce an additional layer (abstraction and resource utilization) compared to a purpose-built back-end
15:27:00 <bswartz> Maybe during Juno we can undertake a manila-generic-performance-enhancement project
15:27:04 <csaba> how far down the road is a container-based service vm from becoming feasible?
15:27:04 <vponomaryov> vbellur: after using manila-compute for this, it is endless waiting =)
15:27:29 <vbellur> vponomaryov: ok, I feel the performance impact now :)
15:27:42 <bswartz> achirko: there are cinder backends that can serve local storage though -- those should dramatically reduce bottlenecks related to remote backends
15:28:03 <vbellur> csaba: that is a good point., might help address the restart case
15:28:57 <achirko> csaba: as far as I know containers do not support run-time volume attachment, so that can be a blocker
15:29:34 <vponomaryov> achirko: also, no NFS server for containers in the repositories
15:29:59 <bswartz> well there's no kernel NFS but userspace NFS is not looking so bad
15:30:03 <aostapenko> no nfs kernel server for containers
15:30:28 <vponomaryov> to be precise, the nfs kernel server won't work with containers
15:30:33 <bswartz> In fact I'd like to talk about userspace NFS and gateways
15:30:38 <vbellur> yeah, we could try with Ganesha or Gluster NFS within a container
15:30:46 <bswartz> #topic gateways
15:31:22 <bswartz> okay so there are a number of backends which can't natively support multitenancy with segmented VLANs
15:31:28 <vponomaryov> vbellur: if we get cirros with userspace nfs and samba, it will be a breakthrough
15:31:39 <xyang1> vbellur: has anyone tried Ganesha?  Seems very buggy to me.
15:32:12 <bswartz> xyang1: it's under active development -- are you sure you're running the latest stable branch?
15:32:28 <vbellur> xyang1: we have been playing around with it (both v1.5 and v2)
15:32:32 <xyang1> bswartz: I think so.
15:32:37 <vponomaryov> xyang1: +1, it works on CentOS and similar distros, not on debian, in my experience
15:33:00 <xyang1> vponomaryov: I tried on debian
15:33:01 <bswartz> my current idea for gateways looks very similar to the generic driver
15:33:13 <vbellur> xyang1: and we have been able to run reasonable tests
15:33:46 <xyang1> vbellur: ok, good to know.
15:34:23 <bswartz> the server that will be visible to the tenant will be something like ganesha-nfs, either running in a VM or somewhere that has a network interface on the tenant's VLAN
15:34:48 <bswartz> ganesha-nfs will in turn be a client of the real backend
15:35:10 <vbellur> bswartz: sounds feasible to me
15:35:20 <bswartz> so if you have a giant clustered filesystem, ganesha will re-export some subtree of the namespace
15:35:56 <vbellur> bswartz: right..
15:36:18 <xyang1> vbellur: are you running ganesha-nfs in a container or VM?
15:36:26 <bswartz> the gateway could in principle be a full HVM, or a container-based VM, or just a process running on a hypervisor node -- the key would be that it would need dual network connectivity -- to the tenant VLAN on one side and to the backend storage network on the other side
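The gateway scheme described above (ganesha-nfs re-exporting a subtree of a clustered filesystem to one tenant's network) could be pictured as generating a per-tenant export stanza. This sketch is modeled loosely on Ganesha-style EXPORT blocks; the export id, subtree path, and tenant CIDR are made-up examples, and a real configuration would carry more options.

```python
# Hedged sketch: render one export stanza that re-exports a subtree of
# the backend namespace to a single tenant's network. Loosely modeled
# on Ganesha's EXPORT block syntax; all values are hypothetical.

def export_stanza(export_id, subtree, tenant_cidr):
    """Render a gateway export for one tenant's subtree."""
    return (
        "EXPORT {\n"
        "    Export_Id = %d;\n"
        "    Path = %s;\n"
        "    Pseudo = %s;\n"
        "    CLIENT { Clients = %s; Access_Type = RW; }\n"
        "}\n" % (export_id, subtree, subtree, tenant_cidr)
    )

print(export_stanza(101, "/cluster/tenant-a/share1", "10.0.1.0/24"))
```

The key design point from the discussion is visible here: the backend sees one big namespace, while each tenant only ever sees its own subtree, reachable from its own network.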
15:36:35 <vbellur> xyang1: VM and bare metal too
15:37:06 <xyang1> vbellur: I tried in a container, maybe that's why I had problems.
15:37:34 <bswartz> there's potential for reusing a bunch of the code that's been written for the generic driver
15:37:36 <vbellur> xyang1: worth a try in a VM
15:37:50 <xyang1> vbellur: sure
15:38:07 <bswartz> as we get some of the drivers wrapped up I'd like to move onto prototyping the gateway-based multitenancy
15:38:11 <bswartz> probably a few weeks from now
15:38:23 <bswartz> so anyone who is interested get ready to participate
15:38:29 <vbellur> bswartz: we are exploring Ganesha on similar lines
15:38:43 <bswartz> or go ahead and start whipping up some prototypes so we'll have something to look at in a few weeks
15:38:50 <vbellur> csaba did open a blueprint but I guess it is titled hypervisor mediated ...
15:39:01 <bswartz> vbellur: oh I need to review the current BPs
15:39:27 <vbellur> on nomenclature - what should we call this scheme? gateway or hypervisor? gateway seems more generic to me.
15:39:48 <bswartz> I like gateway, since it doesn't preclude any approach that I can think of
15:39:53 <caitlin56> +1 to gateway. It is more general, and the hypervisor is a gateway.
15:40:08 <bswartz> +1
15:40:17 <vbellur> ok, we will rename the blueprint.
15:40:29 <bswartz> vbellur: ty
15:40:43 <bswartz> #topic incubation
15:41:05 <bswartz> okay so many of you will remember that the TC felt manila was not ready for incubation last november
15:41:26 <bswartz> Given how much work we've needed to do on multitenancy I think they were correct
15:41:47 <vbellur> bswartz: agree
15:41:52 <bswartz> but I'm now hopeful that we'll have working multitenancy by Icehouse
15:42:04 <bswartz> and there is another hurdle that the TC threw in front of us
15:42:33 <bswartz> they want to see that Manila is production-ready -- which implies that someone somewhere is actually running it in production
15:42:57 <bswartz> so I wanted to ask the community if there are any organizations interested in being early deployers
15:43:42 <bswartz> NetApp definitely is interested, but hardware and people are needed and I'm not sure that NetApp is best equipped to do this quickly
15:44:12 <vbellur> bswartz: there was some interest expressed in the openstack ML, we can try reaching out to those folks too.
15:44:34 <bswartz> yeah I don't think this group is the right group to ask -- but I wanted to put the word out that we're looking for a "beta" site, so to speak
15:44:45 <vponomaryov> vbellur: which mailing list group are you talking about?
15:45:13 <bswartz> we need someone who wants the services that Manila offers and is willing to deal with the pain of deploying brand new code
15:45:22 <vbellur> vponomaryov: openstack@lists.openstack.org - the user ML
15:46:09 <bswartz> after real drivers are available this will matter more
15:46:24 <vbellur> bswartz: yeah, we can possibly seek out testing help from the community. that would be a natural way of converting those test deployments to real ones.
15:46:28 <bswartz> it's not urgent, but I wanted everyone to be aware that the TC considers this a condition for incubation
15:46:48 <vbellur> not sure how many noticed the barbican incubation request on openstack-dev ML
15:46:59 <bswartz> I did not
15:47:05 <bswartz> anything interesting?
15:47:17 <vbellur> quite a bit, let me try to post a link to that thread
15:47:36 <vbellur> here it is - http://lists.openstack.org/pipermail/openstack-dev/2013-December/020924.html
15:47:59 <bswartz> vbellur: ty
15:48:16 <vbellur> does give a good idea on the factors that TC considers for incubation
15:48:47 <bswartz> #topic open discussion
15:48:58 <bswartz> alright we have some time for anything else
15:49:22 <csaba> let me ask a silly question
15:49:41 <csaba> how do we exactly define multi-tenancy in this context?
15:49:59 <bswartz> csaba: that's not a silly question! that's a good question
15:50:01 <vponomaryov> csaba: separated resources for each tenant
15:50:03 <csaba> i.e. in what way are gluster, lvm, etc. considered only single-tenant capable
15:50:27 <csaba> when they can serve multiple tenants as subnets
15:51:07 <bswartz> when we talk about multitenancy, we're referring to an environment where different tenants have compute resources in segmented networks so they're securely isolated from each other
15:51:58 <bswartz> multitenancy is about making sure that different users cannot perceive each other's existence and cannot interfere with each other intentionally or unintentionally
15:52:09 <vponomaryov> also isolated ip rules for NFS
15:52:28 <bswartz> VLANs are the typical approach for isolating the network, but there are other perfectly valid approaches
15:52:54 <bswartz> such as GRE tunnels or VXLANs
15:53:45 <bill_az> bswartz:  do you consider vlan isolation sufficient for multi-tenancy?
15:53:53 <bswartz> the main limitation with some shared filesystems (GlusterFS, CephFS, GPFS, and Isilon NFS are the ones I know about) is that they only support a single security domain
15:53:55 <csaba> oh, so simply IP ranges (in the same network) with fw/export rules to control access to resources is too poor a separation to call multi-tenancy?
15:54:04 <caitlin56> The latter are becoming more common; you run out of VLANs way too quickly.
15:55:08 <bswartz> csaba: yes access rules aren't sufficient because they assume a single IP routing domain and a common set of usernames/passwords
15:55:21 <caitlin56> csaba: the real question is whether you have mount control and prevention of stolen mount handles. If you could do that without network separation, that *would* work.
15:55:32 <bswartz> it's possible for tenant A to have a VM on 10.10.10.10 and for tenant B to also have a VM on 10.10.10.10, and for those IP addresses to be completely different
15:55:35 <caitlin56> separate logical networks is a lot simpler.
15:55:52 <bswartz> it's also possible for tenant A to have a user called "bswartz" and tenant B to have a user called "bswartz" who is totally different
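The collision point made just above can be shown in a toy form: identity is only meaningful per tenant, so any access check has to be keyed by (tenant, address) or (tenant, user), never by address or user alone. The tenant and share names below are invented for illustration.

```python
# Toy illustration: the same client IP appears in two tenants and maps
# to two different shares. Keying rules by IP alone would conflate them.

access_rules = {
    ("tenant-a", "10.10.10.10"): "share-a",
    ("tenant-b", "10.10.10.10"): "share-b",  # same IP, different tenant
}

def share_for(tenant, client_ip):
    """Look up the share a client may mount, scoped to its tenant."""
    return access_rules.get((tenant, client_ip))

print(share_for("tenant-a", "10.10.10.10"))  # -> share-a
print(share_for("tenant-b", "10.10.10.10"))  # -> share-b
```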
15:58:00 <csaba> is there an advantage to allowing IP collisions?
15:58:05 <bswartz> note that there are cases when less-secure tenant isolation would be acceptable
15:58:32 <bswartz> csaba: there's no advantage -- it's just unavoidable at scale due to the small space of IPv4 addresses
15:58:43 <caitlin56> csaba: you cannot tell tenant A that they cannot use IP address X because tenant B is using it.
15:59:12 <vbellur> csaba: a tenant would be unaware of resource utilization by other tenants
15:59:21 <csaba> ah I get it
15:59:52 <bswartz> okay our time is up
16:00:00 <bswartz> that was a good question csaba
16:00:08 <vbellur> quick qn - should we do a 0.1 release of manila after generic driver is in?
16:00:11 <csaba> thx for the answers
16:00:17 <bswartz> see you all next week when we hope to have the drivers merged!
16:00:27 <vbellur> thanks
16:00:32 <bswartz> #endmeeting