15:03:00 <bswartz> #startmeeting manila
15:03:01 <openstack> Meeting started Thu Dec  5 15:03:00 2013 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:04 <openstack> The meeting name has been set to 'manila'
15:03:08 <aostapenko_alt> Hi all!
15:03:13 <bswartz> hello
15:03:19 <akerr> hello
15:03:19 <achirko> hi
15:03:19 <Dinny_> hi
15:03:21 <bill_az_> Hi
15:03:23 <vponomaryov> hi
15:03:25 <kdurazzo> hi
15:03:29 <yportnova> hi
15:03:35 <bswartz> cool we have a lot of people
15:03:38 <gregsfortytwo1> hi
15:03:48 <bswartz> unfortunately I'm not as prepared as I should be
15:03:53 <bswartz> heh
15:04:00 <bswartz> #link https://wiki.openstack.org/wiki/ManilaMeetings
15:04:01 <rraja> hi
15:04:09 <vbellur> hi
15:04:18 <bswartz> so it looks like I set up the agenda already
15:04:37 <bill_az_> bswartz: before we get started, I want to introduce Marc Eshel.
15:04:38 <bill_az_> He will be working with me on IBM/GPFS Manila drivers.
15:05:04 <bswartz> bill_az_: does he have an IRC handle?
15:05:15 <bill_az_> Marc has years of experience with NFS, in particular with open source ganesha project
15:05:24 <bswartz> ah, excellent
15:05:26 <bill_az_> His handle is eshel - but I don't think he's on just yet
15:05:33 <csaba> hi
15:05:42 <bswartz> eshel: welcome!
15:05:56 <anands> Hi guys
15:06:00 <caitlin56> hi
15:06:18 <anands> Vijay Bellur is not able to make today's meeting, I will be filling in from the redhat side
15:06:20 <bswartz> okay so bill_az_ and vbellur and I were supposed to get together to discuss gateway-mediated attachments
15:06:33 <bswartz> anands: thank you
15:07:17 <bswartz> what I would really like is for someone to volunteer to prototype a gateway-mediated attachment scheme
15:07:39 <bswartz> I could do it if I had like 100x more spare time
15:07:55 <bswartz> I'm interested in NFS-over-vsock and I'm also interested in VirtFS
15:08:04 <anands> bswartz: Working with vbellur, ramana, and others, we have a draft of the architecture for the hypervisor-mediated model for multi-tenancy - we can discuss when others are ready - https://wiki.openstack.org/wiki/Manila_Networking/Hypervisor_mediated#References:
15:08:21 <bswartz> I guess I can offer a brief background for people not familiar with the issue
15:09:07 <bswartz> Manila is going to need 2 different mechanisms for attaching guests to shared storage:
15:09:15 <eshel> hi I am on, I was on the manila tab
15:10:22 <bswartz> 1) For backends that support it, manila will create virtual servers on the backends and connect them directly to the tenant network so guests can use native protocols to talk directly to the backends. This mostly includes hardware backends with features like NetApp vservers -- I know that Nexenta has a comparable feature, and I think EMC has a comparable feature too
15:11:45 <bswartz> 2) For backends that only understand how to serve data to one network segment, manila will provide virtualized access to the shared storage with a bridge of some kind at the hypervisor. Possible bridges include things like VirtFS, or NFS gateways layered on top of a clustered filesystem
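(A minimal sketch of how these two models might surface as driver flavors. This is purely illustrative -- none of the class or method names below exist in Manila; they are hypothetical, assuming a create_share()-style driver entry point.)

    # Hypothetical sketch only -- not actual Manila code.
    class ShareDriver(object):
        """Simplified common driver interface."""
        def create_share(self, context, share):
            raise NotImplementedError()

    class VirtualServerDriver(ShareDriver):
        """Model (1): the backend spins up a virtual server (e.g. a NetApp
        vserver) on the tenant network; guests mount it natively."""
        def create_share(self, context, share):
            vserver = self._ensure_virtual_server(share['tenant_network'])
            return vserver.create_export(share['name'])

    class GatewayMediatedDriver(ShareDriver):
        """Model (2): the backend serves a single network segment; a
        per-hypervisor gateway (VirtFS, or an NFS server layered on a
        clustered filesystem) re-exports it to the guests."""
        def create_share(self, context, share):
            backend_path = self._allocate_backend_path(share)
            return self._gateway_export(backend_path, share['project_id'])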
15:11:54 <bswartz> eshel: hi
15:12:16 <bswartz> we are making good progress on (1), but we need to do some prototyping for (2)
15:12:41 <anands> We are ready to do that if everyone agrees to the arch outlined in the wiki link for (2)
15:12:52 <bswartz> vbellur, bill_az, and I have volunteered to run a separate working group to work on (2), so please contact us if you want to be included
15:13:40 <bswartz> however we haven't actually started up that working group yet due to the US holidays
15:14:00 <bswartz> bill_az I'll contact you offline
15:14:01 <bill_az_> bswartz:  the description in the Hypervisor Mediated storage link is similar to what we are thinking
15:14:11 <bswartz> anands: let vbellur know that we need to connect
15:14:17 <bill_az_> we'd like to be included in discussions of that
15:14:35 <anands> bswartz: sure, was working with him all along wrt the wiki arch
15:14:40 <gregsfortytwo1> anands: what drove you to a per-hypervisor instead of per-tenant ganesha instance?
15:14:42 <bswartz> okay cool
15:15:33 <anands> gregsfortytwo1: we thought a single ganesha on the hypervisor could scale well enough; a per-tenant approach seemed like overkill
15:15:57 <bswartz> anands: what's the benefit of ganesha over nfs-kernel-server?
15:16:23 <vponomaryov> bswartz: userspace nfs
15:16:41 <gregsfortytwo1> bswartz: note the listing of FSALs it already supports? :)
15:16:47 <bswartz> I know it's in userspace, but that shouldn't matter in the context of a hypervisor
15:16:59 <anands> bswartz: it's a user-space NFS server implementation; it can scale up based on its cache-inode layer, which can grow arbitrarily large, and a crash will not necessarily reboot your hypervisor ;)
15:17:20 <gregsfortytwo1> anands: hrm, that makes sense, just thinking that per-tenant would have better cache properties
15:17:25 <bswartz> anands: oh I wasn't aware of these FSALs
15:17:25 <anands> gregsfortytwo1: :)
15:17:33 <bswartz> I need to read up on ganesha it would seem
15:17:37 <gregsfortytwo1> and would make it easier to move to a model where we don't rely on the nfs server for doing all the security
15:17:55 <bswartz> hold on there about security
15:18:06 <bswartz> security is one of the thorniest problems with shared filesystems
15:18:13 <gregsfortytwo1> sorry, that comment was about a per-tenant versus per-hypervisor model
15:18:21 <anands> the wiki also refers to some of the security aspects, we could take that topic up separately I guess
15:18:28 <anands> gregsfortytwo1: ah ok
15:18:58 <bswartz> My personal feeling is that in the hypervisor mediated model, security is actually a lot simpler, because we can rely on the hypervisor and the gateway to enforce it
15:19:04 <bswartz> the backend doesn't need to do anything
15:19:32 <anands> bswartz: yes true, but if we further want more fine grained access control we have a way to enforce it
15:19:53 <bswartz> I would actually prefer to make the tunneling protocol something stupid like NFSv3 with UNIX security (aka No Fscking Security)
15:20:25 <caitlin56> tenant network mediated offers truer security. hypervisor mediated requires the storage admin to trust the hypervisors.
15:20:29 <anands> um...sure, we could think on those lines
15:20:39 <gregsfortytwo1> why prefer that security model?
15:20:47 <caitlin56> Not that this makes hypervisor mediated totally invalid, but that is one of its limitations.
15:21:19 <bswartz> so with any kind of mediated access -- the point is that the mediator limits the tenants view to a subset of the actual backend filesystems
15:21:34 <bswartz> however, within that subset the tenant should be able to do whatever he wants
15:21:49 <bswartz> because tenants are all "root" within their guest OSes
15:21:55 <anands> yes
15:22:20 <bswartz> now if tenants want to further subdivide access to multiple non-root users within their guests, that's their business
15:22:25 <caitlin56> bswartz: with the vlan method I can configure my file server to squash root.
15:23:35 <caitlin56> But to be clear, I think both models are valid.
15:23:51 <bswartz> caitlin56: I would argue that squashing root is unneeded -- indeed many users may want root access to their shared filesystem
15:24:32 <bswartz> the security model needs to be about which filesystems you can see, or which parts of a larger filesystem you can see -- all of that is implemented at layers above the actual file-sharing procotol
15:24:34 <caitlin56> If hypervisor mediation is acceptable then you do not need root squashing, agreed.
15:25:33 <bswartz> okay so we can talk and talk about this, but I don't want to take up the whole meeting
15:25:57 <bswartz> as I said before, we'll form a working group to drive this to the prototype stage, then we'll present it and solicit feedback
15:26:04 <bswartz> #topic dev status
15:26:44 <bswartz> the more critical tasks are to continue driving method (1) to completion, so we can demonstrate a fully working multitenant implementation
15:26:52 <caitlin56> Nexenta has some interest in this, but resources for hypervisor-mediation would probably be unavailable for at least 2 months.
15:27:13 <bswartz> we're a lot closer on (1) than we are on (2)
15:27:33 <bswartz> btw I've seen a lot of changes go into gerrit and they're awaiting review
15:27:42 <eshel> are we developing all 4 options?
15:27:47 <bswartz> I'm behind on reviewing and could use whatever help I can get
15:28:09 <bswartz> eshel: sort of
15:28:10 * caitlin56 will do some reviewing this week.
15:28:29 <csaba> bswartz: https://review.openstack.org/59124, we are looking for getting reviews
15:28:42 <bswartz> eshel: mentally I've collapsed all the mediated methods down to one method with flavors
15:28:53 <csaba> (glusterfs driver resubmitted along lines that we last discussed)
15:29:05 <bswartz> eshel: and the flat network model is just a degenerate case of (1)
15:29:49 <bswartz> csaba: yes I saw it -- I apologize to everyone waiting for reviews, I've been busy since the holidays
15:30:00 <eshel> but to get approval we need them all to be implemented?
15:30:06 <bswartz> I will be out of town this afternoon and tomorrow in Westford MA
15:30:16 <bill_az_> bswartz: csaba  I'll take a look at gluster driver today and comment
15:30:37 <bswartz> eshel: no! the intent is that any given backend only needs to implement 1
15:31:02 <bswartz> I'll try to get some reviewing done in my downtime while travelling
15:31:09 <csaba> bill_az_: kthx
15:31:42 <bill_az_> eshel:  the question about approval is regarding getting into incubation status?
15:31:44 <anands> bill_az_: thx, look forward to your comments
15:32:04 <bswartz> yportnova/vponomaryov: do you want to let the group know what's new this week on your end?
15:33:31 <eshel> so is multi-tenancy a requirement?
15:33:32 <bill_az_> bswartz:  is support for multiple backends implemented?  I don't find it
15:34:06 <vponomaryov> the network API commits were released
15:34:10 <bswartz> eshel: not right now -- I'm happy to have single tenant drivers until we get the major multitenant work done
15:34:11 <yportnova> bswartz: we are working on implementation of this bp https://blueprints.launchpad.net/manila/+spec/join-tenant-network
15:34:42 <bswartz> bill_az_: it should mostly be there -- I wouldn't be shocked if there are bugs though -- it's not something we've done testing on
15:34:43 <vponomaryov> https://review.openstack.org/#/c/60241/ - service
15:34:43 <vponomaryov> https://review.openstack.org/#/c/59466/ - client
15:35:27 <bswartz> anyone with review bandwidth: https://review.openstack.org/#/q/status:open+manila,n,z
15:36:11 <csaba> just to tell, aostapenko_alt's
15:36:28 <bswartz> csaba: ?
15:36:28 <csaba> commit https://review.openstack.org/#/c/59082/
15:36:45 <csaba> -- sorry bogus newline --
15:36:55 <csaba> caused a bit of a hiccup
15:37:07 <bswartz> okay
15:37:29 <csaba> as a migration was provided for the schema change but the manage.py script is not provided
15:37:50 <bswartz> and a version of the NetApp driver w/ multitenant support is coming along -- I'm hoping it will be upstream soon
15:37:58 <csaba> I resolved it on my end so I can volunteer to submit a usable manage.py
15:38:19 <bswartz> the NetApp driver will probably serve as a reference driver for other hardware vendors because I expect the generic driver to actually be fairly complex
15:38:43 <bswartz> although both will be available during icehouse
15:39:07 <bswartz> if there are folks out there trying to work on backends and getting stuck, please reach out to me
15:39:20 <eshel> where are the requirements for multitenant support documented?
15:39:30 <bswartz> I want to know what we can do better (other than speeding up our current work)
15:39:41 <vponomaryov> csaba: if you found a bug, please post it in launchpad; I have tested share metadata on lvm
15:39:49 <bswartz> eshel: they're not -- it's all been getting coded up over the last month and this month
15:40:12 <bswartz> eshel: developer docs / backend docs are something we know we need
15:40:22 <aostapenko_alt> csaba: you can manage migrations with the 'manila-manage db' commands
15:41:25 <bswartz> #topic open discussion
15:41:27 <csaba> aostapenko_alt: thanks, I did not know; just worked my way through the migration with googling...
15:41:30 <vponomaryov> aostapenko_alt: yes, there is a need to sync the db
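(The workflow being described, assuming manila-manage mirrors cinder-manage's sqlalchemy-migrate subcommands:)

    manila-manage db sync       # apply pending schema migrations
    manila-manage db version    # report the current schema version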
15:41:51 <bswartz> I don't have anything else on my agenda
15:42:06 <bill_az_> general question - let's say I have created a share and given access to several vm instances
15:42:45 <bill_az_> is there (or is there planned) a way to mount the share on several instances automatically?
15:43:14 <bill_az_> I can envision how this can be done w/ user data when booting a new instance
15:43:34 <bswartz> bill_az_: yes I consider that a desirable feature to have inside manila
15:43:40 <bill_az_> but not sure about instances that are already running
15:43:47 <caitlin56> It should be just as automatable as it is without virtualization.
15:43:57 <bswartz> bill_az_: however it's not on the critical path to getting the service working
15:44:04 <anands> caitlin56: yes, agree
15:44:33 <bill_az_> caitlin56:  true - my question is whether we have ideas on how to do that
15:44:45 <bswartz> I want to consider offering agents that can be installed on guests to automate the mount process -- we need to standardize a protocol so manila can at least try to automate the mount
15:44:50 <bill_az_> bswartz:  I agree it's not highest priority
15:44:57 <caitlin56> bill_az: why not keep the mechanisms that work?
15:45:01 <bswartz> whether the agent is a boot time only thing or is always running -- perhaps we offer both
15:45:23 <caitlin56> If you belong to a tenant network, you mount your file shares on that network the same way you would have done on a corporate intranet.
15:45:24 <bswartz> and interfacing with cloud_init seems like an obvious starting point
15:45:47 <bswartz> the key is that whatever we do it will be optional for tenants
15:45:55 <caitlin56> the tenant shouldn't be able to tell them apart.
15:47:30 <bswartz> if that's all for today I'll see you next week
15:47:38 <kdurazzo> the automation point actually lends itself well to the hypervisor model
15:47:51 <kdurazzo> instead of per tenant
15:47:59 <bswartz> kdurazzo: perhaps
15:48:07 <caitlin56> kdurazzo: why?
15:48:19 <bswartz> kdurazzo: I'd like to see some prototypes here -- it's something that can be worked on in parallel by whoever is interested
15:48:27 <kdurazzo> the HV could handle the init upon bringup
15:48:40 <caitlin56> the tenant network approach can look *exactly* like the corporate intranet. client machines know how to deal with that.
15:48:42 <kdurazzo> ok
15:49:01 <kdurazzo> the ok was for bswartz :-)
15:49:13 <caitlin56> the limitation on the tenant-network method is that it is harder on the storage servers, which have to offer virtualized interfaces. But it is definitely simpler for the clients.
15:49:29 <bill_az_> bswartz:  +1 on cloud-init
15:49:58 <kdurazzo> we should probably discuss that more, I am not sure it is *simpler*, would like to understand your point on that
15:50:45 <Dinny_> bswartz : what is the cloud_init you refer to here? I could not follow that
15:50:48 <caitlin56> kdurazzo: with a proper dhcp-agent the network-mediated approach can leave the guest os with the illusion that they are on a corporate network.
15:51:13 <bswartz> Dinny_: https://help.ubuntu.com/community/CloudInit
15:51:27 <kdurazzo> caitlin56: I believe the same could be accomplished with the HV init approach as well
15:51:32 <caitlin56> kdurazzo: I posted one comment on this to the blueprint, the current dhcp-agent does not do as much as many corporate intranet DHCP servers do. That is the only gap.
15:51:41 <Dinny_> bswartz: thanks for the link :)
15:51:59 <kdurazzo> caitlin56: will look at your comments and review the BP again
15:52:14 <caitlin56> kdurazzo: that depends on whether the hypervisor-mediated approach is using VirtFS or is transparently proxying NAS services.
15:52:45 <caitlin56> virtfs is a great solution, but guest OSs don't support it today.
15:53:17 <bswartz> caitlin56: I fear the proxying won't ever be transparent in the mediated approaches -- that's not the intent
15:53:37 <caitlin56> transparent proxying is indeed complex to set up.
15:54:16 <gregsfortytwo1> anands: just looked at it again because of caitlin56's comments and noticed the draft doesn't specify how clients will connect to the local ganesha server; what are your plans for plumbing that?
15:54:22 <bswartz> the intent of the mediated approaches is to provide a filesystem to the tenant, where the tenant's view of the filesystem is whatever the gateway wants it to be, and is independent of the real backend filesystem
15:54:42 <bswartz> indeed the real backend filesystem should be unknown to the tenant when a mediated approach is used
15:55:13 <anands> gregsfortytwo1: there are some details to be ironed out completely; main idea is that sub-dirs can be exported as tenant shares - each directly mountable by a tenant
15:55:21 <caitlin56> bswartz: isn't the simplest way to do hypervisor mediated to require the tenant to install a virtfs library?
15:55:44 <bswartz> caitlin56: that's one approach, but we need to be more flexible
15:56:10 <caitlin56> bswartz: agreed, because existing VMs cannot use a library.
15:56:10 <bswartz> caitlin56: until someone writes a virtfs library for windows that's not an option for some users
15:56:11 <gregsfortytwo1> anands: right, but it'll still need to do the network plumbing somehow (I'm not sure what the options are when it's all on one machine)
15:57:09 <bswartz> okay so before we dive back down the mediated multitenancy rathole I'm going to end the meeting
15:57:16 <anands> gregsfortytwo1: vsock interface is something vbellur has mentioned, can sync up with him and get back on that. Maybe the answer?
15:57:17 <caitlin56> Effectively "hypervisor mediated with proxy" is the same as "network mediated" -- the difference being where the "proxying" is done.
15:57:28 <caitlin56> With "network mediated" the "proxying" is done on the storage server itself.
15:57:32 <bswartz> please feel free to continue the discussion in #openstack-manila
15:57:34 <gregsfortytwo1> k, just curious
15:57:46 <bswartz> I'll be over there
15:57:53 <bswartz> #endmeeting