15:02:07 #startmeeting manila
15:02:10 Meeting started Thu Jan 23 15:02:07 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:02:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:02:13 The meeting name has been set to 'manila'
15:02:37 hello folks
15:02:41 who do we have?
15:02:43 hi
15:02:47 hi
15:02:47 hi
15:02:48 Hi
15:02:51 hello
15:02:51 hello
15:02:53 hi
15:02:58 hi
15:03:14 wow, it looks like we've got everyone
15:03:56 once again I don't have anything special for today's agenda
15:04:11 I know some drivers are getting close to being done
15:04:26 so let's do the status update first
15:04:33 #topic dev status
15:04:42 I will update about it
15:04:54 Dev status:
15:04:55 vponomaryov: ty
15:05:01 1) network-api
15:05:01 Successfully merged; it includes the security-services entities, share-networks, and their linking.
15:05:01 Not documented for now; documentation is expected to be added here: https://wiki.openstack.org/wiki/Manila/API
15:05:31 2) Multitenant generic driver (work in progress) - https://review.openstack.org/#/c/67182/
15:05:31 For now, it:
15:05:39 a) creates a service VM for Manila in the service tenant (a Nova VM plus Cinder volumes)
15:05:48 b) uses an SSH connection via network namespaces to connect the service VM and the user VM (the Neutron approach, no floating IP)
15:05:58 c) allows creating shares and snapshots
15:06:09 work left for the generic driver:
15:06:17 d) implement handling of access rules (NFS and CIFS)
15:06:17 e) implement proper handling of the service VM after a Manila restart, for each VM state
15:06:33 3) NetApp cmode multitenant driver - https://review.openstack.org/#/c/59100/
15:06:43 Main work is done; what's left:
15:06:49 a) implement proper removal of errored shares
15:06:49 b) test setting all provided security-services data (depends on environment)
15:07:11 The NetApp cmode driver and the generic driver are multi-tenant; they depend on the share-network entity, which has only a Neutron plugin, so they cannot work without Neutron.
15:07:40 The other already-merged drivers are single-tenant and don't depend on the share-network entity:
15:07:40 - LVM driver (supports NFS and CIFS, but SID rules are not implemented)
15:07:40 - GlusterFS driver (supports only NFS)
15:07:40 - NetApp 7-mode driver
15:07:51 and the "generic" driver cannot work without Nova or Cinder either
15:08:04 bswartz: right
15:08:13 so Manila will start out being heavily dependent on the rest of OpenStack
15:08:13 that's all for the status
15:08:24 we might want to think about whether that's a good thing or a bad thing
15:08:55 the generic driver isn't production-oriented
15:09:00 I personally think that modularity is important, and dependencies are generally bad
15:09:12 bswartz: +1
15:09:20 however, if removing dependencies creates a bunch of complexity, then that's not good either
15:09:31 that can be a future topic
15:09:37 maybe something to discuss in Atlanta
15:09:46 for now I'm more interested in getting something production ready
15:10:01 thanks for the comprehensive update vponomaryov!
15:10:19 For any given site the set of Manila drivers would be very static, so some setup complexity would be quite tolerable.
15:10:23 just to jump back to the generic driver and the SSH communication -- where is that code?
15:11:00 caitlin56: complexity and tolerable should not appear in the same sentence!
15:11:15 vponomaryov: what are the reasons behind the generic driver not being production ready?
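The "SSH connection via namespaces" mechanism mentioned in the generic-driver status above can be sketched roughly as below. This is a hypothetical illustration, not the actual code under review at https://review.openstack.org/#/c/67182/ — the helper name, user, key path, and namespace are all assumptions.

```python
# Minimal sketch, assuming the manila node can run `ip netns exec` for the
# Neutron namespace that fronts the service tenant's network. Names are
# illustrative, not Manila's actual generic-driver API.
import subprocess

def ssh_in_namespace(namespace, vm_ip, command,
                     user="manila", key_path="/etc/manila/ssh/id_rsa"):
    """Run `command` on the service VM by ssh-ing from inside the Neutron
    namespace, so no floating IP is needed to reach the tenant network."""
    cmd = [
        "sudo", "ip", "netns", "exec", namespace,
        "ssh", "-i", key_path,
        "-o", "StrictHostKeyChecking=no",
        "%s@%s" % (user, vm_ip),
        command,
    ]
    return subprocess.check_output(cmd).decode()

# Hypothetical usage: export a new share from the service VM.
# print(ssh_in_namespace("qrouter-abc123", "10.254.0.4",
#                        "sudo exportfs -o rw *:/shares/share-01"))
```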
15:11:16 caitlin56: that way of thinking is a slippery slope
15:11:17 vponomaryov: are you using a custom image for the generic driver when you launch a VM from Nova?
15:11:46 bswartz: that code is in the gerrit topic https://review.openstack.org/#/c/67182/
15:12:10 bswartz: https://review.openstack.org/#/q/status:open+project:stackforge/manila+branch:master+topic:bp/generic-driver,n,z
15:12:29 vbellur: I think the point is that the top priority for the generic driver is to function as a software-only implementation for the purpose of POCs and automated testing
15:12:30 vbellur: for now it is pretty slow; it will work much faster with a real Manila backend driver
15:12:44 I would like to see the generic driver become production quality
15:13:04 xyang1: for now, yes, we are using an Ubuntu cloud image with NFS and Samba servers installed
15:13:06 bswartz, vponomaryov: got it, thanks.
15:13:40 vponomaryov: thanks. so this image has to be checked in, right?
15:14:08 vponomaryov, xyang1: we need to make sure that administrators can run an image of their choosing in the service VM for the generic driver
15:14:19 xyang1: no!
15:14:53 bswartz: the admin has to create their own custom image to use the generic driver?
15:15:00 what we need is to document the requirements for the service VM, so that it's possible to build an Ubuntu-based service VM, or a Red Hat-based service VM, or a SLES-based service VM
15:15:02 xyang1: the image dependency should be left to the administrator or devstack
15:15:46 as long as the image contains a Linux system with SSHD, NFSD, and Samba, it should be okay
15:16:16 vponomaryov: is the image you are using downloadable already, or did you install the NFS and Samba servers and then create an image yourself?
15:16:25 vponomaryov: +1
15:16:26 maybe someone will take up the task of building a minimal service VM image for Manila -- something like CirrOS but with file share capabilities
15:17:17 xyang1: we downloaded an Ubuntu cloud image, installed NFS and Samba, and enabled passwords for SSH; that's all
15:17:22 I think the task for the Manila team is to simply document the requirements that the generic driver will rely on and leave it to the administrator to find or build something suitable
15:17:38 vponomaryov: ok, thanks
15:17:47 bswartz: we have a PoC CirrOS hack that is, as far as possible, able to act as an NFSv3 client; it could be enhanced along the lines of said requirements
15:17:59 xyang1: I don't think so
15:18:10 xyang1: oops, nm
15:18:47 csaba: that sounds exciting
15:19:05 jumping back up though
15:19:13 yeah, CirrOS with NFS and Samba would be a pretty interesting thing, but we need both protocols
15:19:36 vponomaryov: do you have an estimate for when the generic driver will be complete?
15:19:48 we hope next week
15:20:02 okay, that's what I expected
15:20:16 and the NetApp driver too?
15:20:29 any issues with NetApp cmode other than LDAP support?
15:21:02 we need to test it with all variants of environment
15:21:14 active directory, LDAP, kerberos
15:21:54 okay
15:21:56 if we are successful, next week too
15:22:24 okay, let's hope for no problems
15:22:27 * bswartz crosses his fingers
15:22:43 #topic generic driver
15:23:06 okay, what's preventing the generic driver from being production ready?
15:23:14 is it mostly performance limitations?
15:23:34 yes
15:24:00 is running NFSD in a VM inherently slower than running it on bare metal?
15:24:06 poor performance even with a single vserver?
15:24:20 is it a question of the backend performance with Cinder?
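The service-VM image requirements discussed above (a Linux image with SSHD, NFSD, and Samba) could be smoke-tested against a booted candidate image with a small script along these lines. The port list and helper name are assumptions for illustration, not part of Manila.

```python
# Minimal sketch: probe the TCP ports of the services the generic driver
# is said to need, against a booted candidate service VM.
import socket

REQUIRED_SERVICES = {
    "sshd": 22,      # remote management by the generic driver
    "nfsd": 2049,    # NFS exports
    "smbd": 445,     # CIFS/SMB shares
}

def check_service_vm(host, timeout=3.0):
    """Return a dict mapping service name -> True if its TCP port
    accepts connections on the candidate image."""
    results = {}
    for name, port in REQUIRED_SERVICES.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((host, port))
            results[name] = True
        except (socket.timeout, socket.error):
            results[name] = False
        finally:
            sock.close()
    return results

# Hypothetical usage:
# check_service_vm("10.254.0.4")  # -> {'sshd': True, 'nfsd': True, 'smbd': True}
```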
15:24:53 also, if we restart Manila, it has to validate the service VMs
15:25:02 each share, one by one
15:25:12 and that takes a while
15:25:28 I realize that a whitebox file server running open source software will not really compete with custom hardware/software, but I'd like the generic driver to be at least as good as other pure OSS approaches on comparable hardware
15:25:29 if we try to create something during this period, it fails
15:25:37 how bad is the performance with the generic driver at the moment?
15:26:17 if we find a good, light image for the service VM, it will be much better
15:26:18 Because we use Cinder to provide volumes and Nova for vservers, we introduce an additional layer (abstraction and resource utilization) compared to some cool backend
15:27:00 maybe during Juno we can undertake a manila-generic-performance-enhancement project
15:27:04 how far down the road is it before a container-based service VM becomes feasible?
15:27:04 vbellur: after using manila-compute for this, it is endless waiting =)
15:27:29 vponomaryov: ok, I feel the performance impact now :)
15:27:42 achirko: there are Cinder backends that can serve local storage though -- those should dramatically reduce bottlenecks related to remote backends
15:28:03 csaba: that is a good point, might help address the restart case
15:28:57 csaba: as far as I know, containers do not support run-time volume attachment, so that could be a blocker
15:29:34 achirko: also no NFS for containers, in the repositories
15:29:59 well, there's no kernel NFS, but userspace NFS is not looking so bad
15:30:03 no NFS kernel server for containers
15:30:28 that's right, the NFS kernel server won't work with containers
15:30:33 in fact, I'd like to talk about userspace NFS and gateways
15:30:38 yeah, we could try with Ganesha or Gluster NFS within a container
15:30:46 #topic gateways
15:31:22 okay, so there are a number of backends which can't natively support multitenancy with segmented VLANs
15:31:28 vbellur: if we get CirrOS with userspace NFS and Samba, that will be a breakthrough
15:31:39 vbellur: has anyone tried Ganesha? It seems very buggy to me.
15:32:12 xyang1: it's under active development -- are you sure you're running the latest stable branch?
15:32:28 xyang1: we have been playing around with it (both v1.5 and v2)
15:32:32 bswartz: I think so.
15:32:37 xyang1: +1, it works on CentOS and similar distros, but not on Debian, in my experience
15:33:00 vponomaryov: I tried on Debian
15:33:01 my current idea for gateways looks very similar to the generic driver
15:33:13 xyang1: and we have been able to run reasonable tests
15:33:46 vbellur: ok, good to know.
15:34:23 the server that will be visible to the tenant will be something like ganesha-nfs, either running in a VM or somewhere that has a network interface on the tenant's VLAN
15:34:48 ganesha-nfs will in turn be a client of the real backend
15:35:10 bswartz: sounds feasible to me
15:35:20 so if you have a giant clustered filesystem, Ganesha will re-export some subtree of the namespace
15:35:56 bswartz: right..
15:36:18 vbellur: are you running ganesha-nfs in a container or a VM?
15:36:26 the gateway could in principle be a full HVM, or a container-based VM, or just a process running on a hypervisor node -- the key is that it would need dual network connectivity: to the tenant VLAN on one side and to the backend storage network on the other
15:36:35 xyang1: VM and bare metal too
15:37:06 vbellur: I tried in a container; maybe that's why I had problems.
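A rough sketch of the gateway idea discussed above: a per-tenant Ganesha instance re-exporting one subtree of the clustered filesystem's namespace. The template follows NFS-Ganesha's EXPORT block syntax, but the FSAL choice, paths, and helper name are assumptions for illustration only, not an actual Manila gateway driver.

```python
# Minimal sketch: render the Ganesha EXPORT block a gateway instance
# (VM, container, or hypervisor-side process) would load to expose one
# tenant's subtree on the tenant-facing network.
EXPORT_TEMPLATE = """
EXPORT {{
    Export_Id = {export_id};
    Path = {backend_subtree};      # subtree of the clustered FS namespace
    Pseudo = {pseudo_path};        # path the tenant actually mounts
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {{
        Name = VFS;                # or GLUSTER/CEPH, depending on backend
    }}
}}
"""

def render_export(export_id, backend_subtree, pseudo_path):
    """Build the config fragment for one tenant subtree."""
    return EXPORT_TEMPLATE.format(export_id=export_id,
                                  backend_subtree=backend_subtree,
                                  pseudo_path=pseudo_path)

# Hypothetical usage:
# print(render_export(101, "/cluster/tenants/tenant-a/share1", "/share1"))
```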
15:37:34 there's potential for reusing a bunch of the code that's been written for the generic driver
15:37:36 xyang1: worth a try in a VM
15:37:50 vbellur: sure
15:38:07 as we get some of the drivers wrapped up, I'd like to move on to prototyping the gateway-based multitenancy
15:38:11 probably a few weeks from now
15:38:23 so anyone who is interested, get ready to participate
15:38:29 bswartz: we are exploring Ganesha along similar lines
15:38:43 or go ahead and start whipping up some prototypes so we'll have something to look at in a few weeks
15:38:50 csaba did open a blueprint, but I guess it is titled hypervisor mediated ...
15:39:01 vbellur: oh, I need to review the current BPs
15:39:27 on nomenclature - what should we call this scheme? gateway or hypervisor? gateway seems more generic to me.
15:39:48 I like gateway, since it doesn't preclude any approach that I can think of
15:39:53 +1 to gateway. It is more general, and the hypervisor is a gateway.
15:40:08 +1
15:40:17 ok, we will rename the blueprint.
15:40:29 vbellur: ty
15:40:43 #topic incubation
15:41:05 okay, so many of you will remember that the TC felt Manila was not ready for incubation last November
15:41:26 given how much work we've needed to do on multitenancy, I think they were correct
15:41:47 bswartz: agree
15:41:52 but I'm now hopeful that we'll have working multitenancy by Icehouse
15:42:04 and there is another hurdle that the TC threw in front of us
15:42:33 they want to see that Manila is production-ready -- which implies that someone somewhere is actually running it in production
15:42:57 so I wanted to ask the community if there are any organizations interested in being early deployers
15:43:42 NetApp definitely is interested, but hardware and people are needed, and I'm not sure that NetApp is best equipped to do this quickly
15:44:12 bswartz: there was some interest expressed on the openstack ML; we can try reaching out to those folks too.
15:44:34 yeah, I don't think this group is the right group to ask -- but I wanted to put the word out that we're looking for a "beta" site, so to speak
15:44:45 vbellur: which mailing list are you talking about?
15:45:13 we need someone who wants the services that Manila offers and is willing to deal with the pain of deploying brand new code
15:45:22 vponomaryov: openstack@lists.openstack.org - the user ML
15:46:09 after real drivers are available this will matter more
15:46:24 bswartz: yeah, we can possibly seek out testing help from the community. that would be a natural way of converting those test deployments into real ones.
15:46:28 it's not urgent, but I wanted everyone to be aware that the TC considers this a condition for incubation
15:46:48 not sure how many noticed the Barbican incubation request on the openstack-dev ML
15:46:59 I did not
15:47:05 anything interesting?
15:47:17 quite a bit; let me try to post a link to that thread
15:47:36 here it is - http://lists.openstack.org/pipermail/openstack-dev/2013-December/020924.html
15:47:59 vbellur: ty
15:48:16 it does give a good idea of the factors that the TC considers for incubation
15:48:47 #topic open discussion
15:48:58 alright, we have some time for anything else
15:49:22 let me ask a silly question
15:49:41 how do we exactly define multi-tenancy in this context?
15:49:59 csaba: that's not a silly question! that's a good question
15:50:01 csaba: separated resources for each tenant
15:50:03 i.e., in what way are gluster, lvm, etc. considered single-tenant capable
15:50:27 when they can serve multiple tenants via subnets
15:51:07 when we talk about multitenancy, we're referring to an environment where different tenants have compute resources in segmented networks, so they're securely isolated from each other
15:51:58 multitenancy is about making sure that different users cannot perceive each other's existence and cannot interfere with each other, intentionally or unintentionally
15:52:09 also isolated IP rules for NFS
15:52:28 VLANs are the typical approach for isolating the network, but there are other perfectly valid approaches
15:52:54 such as GRE tunnels or VXLANs
15:53:45 bswartz: do you consider VLAN isolation sufficient isolation for multi-tenancy?
15:53:53 the main limitation with some shared filesystems (GlusterFS, CephFS, GPFS, and Isilon NFS are the ones I know about) is that they only support a single security domain
15:53:55 oh, so simply IP ranges (in the same network) with firewall/export rules to control access to resources is too weak a separation to call multi-tenancy?
15:54:04 The latter are becoming more common; you run out of VLANs way too quickly.
15:55:08 csaba: yes, access rules aren't sufficient because they assume a single IP routing domain and a common set of usernames/passwords
15:55:21 csaba: the real question is whether you have mount control and prevention of stealing mount handles. If you could do that without network separation, that *would* work.
15:55:32 it's possible for tenant A to have a VM on 10.10.10.10 and for tenant B to also have a VM on 10.10.10.10, with those IP addresses referring to completely different machines
15:55:35 separate logical networks are a lot simpler.
15:55:52 it's also possible for tenant A to have a user called "bswartz" and for tenant B to have a totally different user also called "bswartz"
15:58:00 is there an advantage to IP collisions?
15:58:05 note that there are cases where less-secure tenant isolation would be acceptable
15:58:32 csaba: there's no advantage -- it's just unavoidable at scale due to the small space of IPv4 addresses
15:58:43 csaba: you cannot tell tenant A that they cannot use IP address X because tenant B is using it.
15:59:12 csaba: a tenant would be unaware of resource utilization by other tenants
15:59:21 ah, I get it
15:59:52 okay, our time is up
16:00:00 that was a good question, csaba
16:00:08 quick question - should we do a 0.1 release of Manila after the generic driver is in?
16:00:11 thanks for the answers
16:00:17 see you all next week, when we hope to have the drivers merged!
16:00:27 thanks
16:00:32 #endmeeting
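The IP-collision point from the discussion above can be made concrete with a small sketch: access rules keyed on the client IP alone become ambiguous once tenant address spaces overlap, while rules qualified by an isolated network do not. All names here are illustrative.

```python
# Naive export-rule table keyed only by client IP: tenant B's VM at
# 10.10.10.10 silently clobbers tenant A's access rule.
naive_rules = {}
naive_rules["10.10.10.10"] = "rw:/shares/tenant-a-share"   # tenant A
naive_rules["10.10.10.10"] = "rw:/shares/tenant-b-share"   # tenant B: collision!

# Keyed by (isolated network, IP) -- what VLAN/GRE/VXLAN segmentation
# effectively provides -- the same address stays unambiguous.
segmented_rules = {}
segmented_rules[("net-tenant-a", "10.10.10.10")] = "rw:/shares/tenant-a-share"
segmented_rules[("net-tenant-b", "10.10.10.10")] = "rw:/shares/tenant-b-share"

assert len(naive_rules) == 1       # only one entry survived; access is ambiguous
assert len(segmented_rules) == 2   # both tenants keep distinct access
```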