15:01:09 #startmeeting manila
15:01:10 Meeting started Thu Feb 6 15:01:09 2014 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:13 The meeting name has been set to 'manila'
15:01:15 garyk, ah yes
15:01:28 hello guys
15:01:35 Hi
15:01:41 well those are all easy to fix so no big deal
15:01:43 Hello
15:01:44 Hello
15:01:46 hi
15:01:46 hello
15:01:47 Hi..
15:01:47 Hi
15:01:57 hi
15:02:10 hello
15:02:21 wow lots of people here today
15:02:24 that's good
15:02:39 and I'm glad freenode got over the DDoS attack from last weekend -- that was annoying
15:03:35 I don't suppose we have rraja here today
15:03:48 did all of you see his email?
15:03:57 yes
15:04:10 I'd like to spend some time talking about that
15:04:12 #link http://thread.gmane.org/gmane.comp.cloud.openstack.devel/15983
15:04:23 csaba: ty!
15:05:08 I'd also like to revisit the neutron/nova/service VM networking stuff
15:05:39 it will matter even more for the gateway-mediated stuff than for the generic driver, I think
15:05:43 but first
15:05:51 #topic dev status
15:06:07 vponomaryov: do you have updates like usual?
15:06:13 yes
15:06:18 Dev status:
15:06:27 1) Bugfixing.
15:06:27 Main forces were directed to bugfixing this week. Bugs for share networks are available on Launchpad.
15:06:37 2) BP https://blueprints.launchpad.net/manila/+spec/share-network-activation-api
15:06:37 gerrit: https://review.openstack.org/#/c/71497/ (client)
15:06:37 TODO: server-side implementation
15:07:01 Generic driver - https://review.openstack.org/#/c/67182/
15:07:01 Some improvements. It now works much faster, using the python paramiko module for SSH instead of a venv with an ssh client...
15:07:23 let's talk about (2) briefly
15:08:18 the idea behind the share network activation API is that under the original design, we didn't actually create a vserver until the first share was created
15:08:19 ok, anyone have questions about bp https://blueprints.launchpad.net/manila/+spec/share-network-activation-api ?
15:08:58 this API allows us to create it early -- which is good for stuff like validating the parameters that were passed in when the share network was created
15:09:23 Can I assume everyone thinks that's a good thing?
15:09:47 everyone?
15:09:59 okay, silence is consent
15:10:38 anyone who wants to provide review input to the generic driver, now is the time
15:10:45 I want to merge this in the next week
15:11:03 bswartz: we too
15:11:27 note that we may still modify it in the future -- in particular we may do some of the things rraja suggests
15:11:47 unit tests are in progress, and there will be some minor changes
15:11:58 but with I-3 coming I want to have feature completeness for at least a few drivers
15:12:04 also, I want to remind everyone that the generic driver still requires a lightweight image with NFS and Samba services
15:12:27 vponomaryov: where is that list of requirements documented?
15:12:47 vponomaryov: I'm doing some work in that direction
15:13:04 bswartz: I think that we should merge what we have now, and then make some changes
15:13:05 bswartz: we haven't documented such stuff
15:13:13 hopefully I can present next week
15:13:23 csaba: thanks
15:13:26 let's write down a list of all of the things the generic driver will depend on from the glance image
15:13:43 obviously an SSH server is required, as well as nfs-kernel-server and samba
15:14:03 does it matter if it's samba3 or samba4?
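
(Aside on the dev status item above: the generic driver's switch from a venv-based ssh client to the paramiko module boils down to calls like the following. This is a minimal sketch only; the host, user, key path, and command are illustrative, not Manila's actual code.)

# Minimal sketch of running a command on the service VM over SSH with
# paramiko, as the generic driver now does instead of shelling out to an
# ssh client in a venv. Host, user, key path, and command are illustrative.
import paramiko

def run_on_service_vm(host, user, key_path, command):
    key = paramiko.RSAKey.from_private_key_file(key_path)
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, pkey=key)
    try:
        _stdin, stdout, stderr = client.exec_command(command)
        return stdout.read(), stderr.read()
    finally:
        client.close()

# e.g. run_on_service_vm('10.254.0.2', 'manila', '/etc/manila/ssh_key',
#                        'sudo exportfs -a')
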
15:14:28 are there any other subtle requirements? does the image need cloud-init?
15:15:01 the image should have server and client sides for NFS and Samba
15:15:19 why would the image need NFS/samba clients?
15:15:26 because this image can be used as a client VM image for mounting shares
15:15:42 what shares would it mount?
15:15:42 unified for both purposes
15:15:50 manila's shares
15:15:57 oh you mean rraja's ideas?
15:15:57 it requires cloud-init for key injection, but we have alternate auth through password
15:16:31 let's hold off on the gateway stuff -- I just want to document what's required for the generic driver
15:16:35 we do not need samba/nfs clients
15:16:38 bswartz: I mean the use case, not only creation of a share, but using it too, on the client's VM
15:16:47 there will be additional requirements if we also use these images as gateways
15:17:06 yes, but the client VMs will be some other glance image
15:17:19 and those images will be tenant-owned
15:17:23 this image will be admin-owned
15:17:27 bswartz: yes, it is on the wishlist
15:18:26 okay
15:18:42 #topic networking
15:19:43 so we were able to make the generic driver work with separate networks last week
15:20:15 each service VM gets its own service network, and the network is joined to the tenant network with a virtual router
15:20:43 I'm pretty satisfied that this approach works, but the downside is that we use up 8 IPs (a /29 CIDR) for every service VM
15:21:02 bswartz: we could even put clusters in the service subnet in the future
15:21:08 also there's a small chance that the IP we choose for any given service VM conflicts with something in the tenant's universe
15:22:14 I'm still interested in putting the service VMs directly on the tenant networks if/when we can solve the issues currently preventing that
15:22:30 scottda: did you discover anything new since last week?
15:22:39 Nothing earth-shattering...
15:22:54 scottda: do you just want to share with the team what we discussed last week?
15:22:59 The Neutron people have the idea of a Distributed Virtual Router (DVR)
15:23:10 https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
15:23:12 bswartz: not only the tenant universe, the cloud universe, because tenants' networks can be connected
15:23:38 They are actively working on this, but don't expect it to be in Icehouse. Probably will be this summer.
15:24:03 aostapenko: yes, but openstack already manages IPs for the whole cloud -- what it doesn't control is what the tenant connects in from the outside world
15:24:19 It will do VM-to-VM intra-tenant routing, but for inter-tenant it will still go out to a network node. It will have slightly better performance, but not quite what manila wants.
15:24:58 performance is one motivation
15:25:25 With the proper champion to write the code and push the blueprint, the DVR can, and probably some day will, be enhanced to have a VM-to-VM intra-tenant connectivity option. But that is in the future.
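
(Aside on the networking approach above -- a dedicated /29 service network per service VM, joined to the tenant network with a virtual router -- this roughly corresponds to neutron calls like the following. A minimal sketch assuming python-neutronclient's v2.0 API; the credentials, names, CIDR, and router id are illustrative, not Manila's actual implementation.)

# Minimal sketch, assuming python-neutronclient's v2.0 API: create a
# dedicated /29 service network for one service VM and join it to the
# tenant's router. Credentials, names, the CIDR, and the router id are
# illustrative only.
from neutronclient.v2_0 import client as neutron_client


def plumb_service_subnet(neutron, tenant_router_id, cidr='10.254.0.0/29'):
    # A /29 is 8 addresses total (network + broadcast + 6 usable hosts),
    # which is the "8 IPs per service VM" cost mentioned above.
    net = neutron.create_network(
        {'network': {'name': 'manila-service-net'}})['network']
    subnet = neutron.create_subnet(
        {'subnet': {'network_id': net['id'],
                    'ip_version': 4,
                    'cidr': cidr,
                    'name': 'manila-service-subnet'}})['subnet']
    # Joining the service subnet to the tenant's router is what lets the
    # service VM reach the tenant network.
    neutron.add_interface_router(tenant_router_id,
                                 {'subnet_id': subnet['id']})
    return net, subnet


# Example wiring (all values illustrative):
# neutron = neutron_client.Client(username='admin', password='secret',
#                                 tenant_name='service',
#                                 auth_url='http://keystone:5000/v2.0')
# plumb_service_subnet(neutron, '<tenant-router-uuid>')
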
15:26:05 but for me the main thing is that hardware-based drivers will actually be able to directly join tenant networks, and it seems better for the generic driver to have the same behavior -- if only for consistency and common testing
15:26:46 That is the synopsis
15:27:00 thanks scottda
15:27:50 so the plan here is -- we're going to continue with aostapenko's approach of creating a /29 for each instance of the generic driver
15:28:06 however we're going to monitor neutron to see if they give us a better alternative in the future
15:28:58 people in Neutron are also working on Service VM infrastructure - https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
15:29:05 #topic gateway-mediated
15:29:05 bswartz: we could even extend to a /28 if we want to launch clusters of service VMs
15:29:47 achirko: that's also very interesting to me
15:29:59 I should probably join some neutron meetings and +1000 that BP
15:30:04 we could get some feedback from them on our approach, but it would probably slow down generic driver delivery
15:30:31 achirko: we don't need to slow down anything -- we can go forward with the current approach
15:30:35 bswartz: all of us could probably +1000 that :)
15:30:37 achirko: if this BP happens we can go back and update
15:30:59 okay, we're on a new topic though
15:31:08 and since we still don't have rraja, I'll drive
15:31:35 so I'll repost the link
15:31:35 #link http://thread.gmane.org/gmane.comp.cloud.openstack.devel/15983
15:31:44 I can talk on behalf of rraja
15:31:50 oh okay
15:32:00 csaba: take it away!
15:32:43 well, the basic idea is that if we think of various storage backends
15:33:27 i.e. there is Cinder as with the generic driver, and there could be lvm, ganesha, gluster-nfs, whatnot...
15:34:03 which are implemented or WIP as single-tenant drivers
15:34:18 running on the hypervisor
15:34:37 now, what they would do in a generic-driver-like architecture
15:34:40 is not much different
15:35:01 just their activity would have to be lifted to the service VM
15:36:02 so what we thought: if the architecture could be split into a backend exporter and a network plumbing component...
15:36:24 okay so let me summarize and see if I'm off base
15:36:38 then it would be easy to leverage those other efforts and use them in a multi-tenant way
15:36:52 we could implement gateway-mediated access with the following:
15:37:12 1) add a network connection from the generic driver's service VM to the backend storage network
15:37:41 2) add filesystem clients to the service VM
15:38:19 3) implement the backend to just serve filesystems to a single storage network
15:38:52 4) bridge the backend filesystem onto the tenant network using an NFS server in the service VM, either ganesha-nfs or nfs-kernel-server
15:39:37 is that it?
15:40:35 what do you mean by 3)?
15:40:52 "single storage network"?
15:40:57 that sounds right to me
15:41:03 I think the same thing you meant by (10:33:59 AM) csaba: which are implemented or WIP as single-tenant drivers
15:41:09 ah OK
15:41:27 fine
15:41:37 the backends for the gateway-mediated mode wouldn't need to understand tenants, really
15:41:46 because the VMs would do that translation
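
(Aside on the split csaba and bswartz describe above -- a backend "exporter" that only serves filesystems to the single, admin-owned storage network, and a gateway/plumbing piece that re-exports them to tenant networks from the service VM. The sketch below is purely illustrative; class and method names are hypothetical, not Manila's actual driver interface.)

# Purely illustrative: one possible split between a backend exporter and
# the network-plumbing/gateway side, per the discussion above. Names are
# hypothetical, not Manila's real driver API.
import abc


class BackendExporter(abc.ABC):
    """Serves a filesystem on the single, admin-owned storage network."""

    @abc.abstractmethod
    def create_backend_export(self, share):
        """Return an export location reachable from the service VM."""


class GatewayPlumbing(abc.ABC):
    """Runs against the service VM and bridges exports to tenant networks."""

    @abc.abstractmethod
    def attach_to_tenant_network(self, share_network):
        """Plug the service VM into the tenant network (e.g. via a router)."""

    @abc.abstractmethod
    def reexport(self, backend_export, access_rules):
        """Mount backend_export in the service VM and re-export it over NFS
        (ganesha-nfs or nfs-kernel-server) to the tenant network."""
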
15:42:28 okay, so I'd like to have some discussion on the difference between ganesha-nfs and nfs-kernel-server in this context
15:42:48 does redhat have a preference?
15:43:13 +1 to flexibility in choice of NFS server
15:43:24 well, the point of ganesha is having pluggable storage backends
15:44:01 but then kernel nfs is more mature / better known... so it's really good to allow a choice
15:44:50 hmm
15:45:04 I kind of don't like giving users an option here
15:45:16 it seems like supporting both will double the testing effort and the chances for bugs
15:45:31 it would be better to agree on one and implement that, I think
15:45:52 ofc that wouldn't stop someone from also adding a patch to support the other -- but I feel like we should have a recommended approach
15:46:00 well, we don't need to support both
15:46:34 one can be chosen as supported... down the road
15:46:57 so what I'm asking is, do you have a preference at this time?
15:46:59 bswartz: how do we have nfs-kernel-server reach out to various storage backends?
15:47:11 the point is to ease development efforts... for various multi-tenant PoCs
15:47:44 vbellur: I think nfs-kernel-server layers on top of the VFS inside the Linux kernel, so any filesystem that has a kernel-mode driver will work underneath it
15:48:20 if you count FUSE as a kernel-mode driver then I think literally anything will work
15:48:45 bswartz: gluster does not have a kernel-mode driver, and the performance implications of a FUSE mount being an export would be pretty bad
15:48:52 ah
15:49:06 well, that's a fairly good argument for preferring ganesha-nfs then
15:49:17 esp. if redhat wants to be the first working implementation of this mode
15:49:49 anyone see a serious downside to ganesha-nfs?
15:49:50 bswartz: the very reason we implemented our own NFS server was due to severe performance implications that we experienced with exporting a FUSE mount point.
15:49:51 an additional +1 for ganesha could be the possibility to use it with Linux containers
15:50:06 vponomaryov: yes, I was going to mention that too
15:51:18 so can we have nfs-ganesha as the default for v1?
15:52:19 vbellur: does ganesha have stable releases for most distros?
15:52:20 if we use LXC provided by nova, we could not launch other hypervisors' VMs in the case of a single-node installation
15:52:28 vbellur: yeah, that works for me -- although that approach diverges a little more from the generic driver than nfs-kernel-server would
15:53:38 vponomaryov: we have packages for CentOS & Fedora. We can also work with the Ganesha community for other distros.
15:53:39 aostapenko: I don't understand your comment
15:55:00 bswartz: he meant that if we use LXC VMs, we cannot use VMs with other hypervisors
15:55:07 regarding packages and distros, I see it as an administrator's job to provide the glance image that will become the service VM -- if RH-based distros are better suited to running gluster and ganesha, then administrators will probably choose those
15:55:43 yeah, LXC has various downsides which we discussed a few months ago
15:55:46 bswartz: but we should be able to launch other VMs in the cloud, not just service VMs
15:56:14 but LXC has some very interesting advantages too -- I want to come back and look at LXC sometime
15:57:15 #topic open discussion
15:57:22 okay, anything else before our time is up?
15:57:23 bswartz: I've had many exciting nights with lxc; it would be great for performance if we use it
15:58:09 aostapenko: lol
15:58:34 aostapenko: I know it has much lower overhead, which is good for scalability in low-load situations
15:59:33 okay, thanks everyone
15:59:38 bswartz: thanks!
15:59:41 thanks, bye
15:59:41 thanks
15:59:42 see you next week
15:59:49 bye
15:59:56 bye all
15:59:57 #endmeeting
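
(Closing aside on the ganesha-nfs discussion above: the service VM gateway would manage an NFS-Ganesha EXPORT block for each share it re-exports. A minimal sketch of rendering such a block in Python; the directive names follow Ganesha's export config format as understood here, but the values, FSAL choice, and helper function are illustrative only, not Manila code.)

# Illustrative sketch only: render a minimal NFS-Ganesha EXPORT block that
# the service VM gateway could append to ganesha.conf to re-export a backend
# filesystem to a tenant. Exact FSAL parameters depend on the chosen backend
# and Ganesha version; everything below is an assumption for illustration.
EXPORT_TEMPLATE = """
EXPORT {{
    Export_Id = {export_id};
    Path = "{path}";
    Pseudo = "{pseudo}";
    Access_Type = RW;
    FSAL {{
        Name = {fsal_name};
    }}
}}
"""


def render_export(export_id, path, fsal_name):
    # Pseudo path mirrors the real path here for simplicity.
    return EXPORT_TEMPLATE.format(export_id=export_id, path=path,
                                  pseudo=path, fsal_name=fsal_name)


# e.g. append render_export(101, "/shares/share-1234", "VFS") to
# /etc/ganesha/ganesha.conf on the service VM and reload ganesha.
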