15:01:08 <bswartz> #startmeeting manila
15:01:09 <openstack> Meeting started Thu Dec 12 15:01:08 2013 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 <openstack> The meeting name has been set to 'manila'
15:01:19 <bswartz> hey, is anyone here today?
15:01:39 <xyang1> hi
15:01:42 <vponomaryov> hi
15:01:42 <bill_az> Hi
15:01:49 <zhaoqin__> hello
15:01:52 <bswartz> ah, so I'm not alone
15:01:56 <csaba> hi
15:01:59 <rraja_> hi
15:02:04 <gregsfortytwo> hi
15:02:43 <bswartz> I don't have a specific agenda for this week -- unfortunately I have another meeting right before this one that tends to leave me with no time to prepare :-(
15:02:57 <bswartz> I'll work on solving that issue though
15:03:15 <bswartz> I think I want to cover the same issues we did last week, because I believe we've made some progress on all of them
15:03:24 <aostapenko> hi
15:03:37 <bswartz> #topic gateway-mediated multitenancy
15:04:16 <bswartz> so first of all, the wiki document is out of date now, due to the many new ideas that have come up in the last 6 weeks or so
15:04:47 <bswartz> I plan to update the document but first I'll offer a preview and see if anyone thinks this is crazy
15:05:06 <bswartz> My thinking is that the manila-share service itself will only understand 2 types of attach calls:
15:05:48 <bswartz> 1) Attach directly to the tenant network, including support for VLANs, full network connectivity, a virtualized server, etc.
15:06:33 <bswartz> and 2) Attach to a flat network, just like the existing drivers, where any multitenancy support will be handled externally, either in nova or some kind of manila agent
15:07:07 <bswartz> All of the gateway-mediated multitenancy support could be built on top of (2), I believe
15:07:30 <bswartz> and all of the VLAN-based multitenancy could be built using (1), which is pretty close to being ready
15:08:16 <bswartz> I need to draw a picture of how this will work and go through all of the use cases and demonstrate how each will be handled
15:08:53 <bswartz> I think that this should make backend design for things like ceph/gluster/gpfs relatively easy, and the hard work will be done outside the manila-share service
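[Editor's note: a minimal sketch of how the two attach styles described above might look as driver entry points. This is not from the meeting or the Manila code; the class and method names are hypothetical and purely illustrative.]

```python
# Hypothetical sketch only -- not the actual Manila driver interface.

class ShareDriverBase(object):
    """Illustrates the two attach styles discussed above."""

    def attach_to_tenant_network(self, context, share, network_info):
        """(1) The backend virtualizes a server and joins the tenant
        network itself (VLANs, full network connectivity)."""
        raise NotImplementedError()

    def attach_to_flat_network(self, context, share, client_ip):
        """(2) The backend only grants access on a flat network; any
        multitenancy is layered on top externally (nova or a manila agent)."""
        raise NotImplementedError()


class ExampleFlatDriver(ShareDriverBase):
    def attach_to_flat_network(self, context, share, client_ip):
        # For a flat-network backend this reduces to a single export rule:
        # "grant access to <share path> for <client_ip>".
        self._grant_export(share['export_path'], client_ip)

    def _grant_export(self, path, client_ip):
        # Backend-specific: exportfs, GPFS CLI, Gluster volume options, etc.
        pass
```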
15:09:13 <bswartz> Does anyone think I'm crazy?
15:09:46 <bswartz> oh caitlin56 isn't here, she would mention something I'm sure
15:10:11 <shamail> Gateways etc fall in 1 as well?
15:10:42 <bswartz> no, in (1) there is no gateway -- the backend is responsible for virtualizing the server and connecting directly to a tenant network
15:11:09 <bswartz> that method provides more functionality, and is preferred for those backends that can support it
15:11:19 <hagarth> bswartz: any thoughts on how to handle multi-tenancy support externally for (2) ?
15:11:28 <shamail> Thanks
15:11:32 <bswartz> hagarth: absolutely
15:12:21 <bswartz> The approach will be more or less the same as the current wiki, but the new thing I'm proposing is that the manila backend doesn't really need to be aware of most of it
15:13:05 <bswartz> The main thing I realized is that whether the model is "flat"/single tenant, or multitenant with various forms of gateways, the interaction with the actual storage server is pretty much the same
15:13:38 <bswartz> in the (2) case, when the attach call comes in, the backend just has to share a directory with a client IP, that's it
15:14:00 <bswartz> implementing only that will allow us to build everything else in a generic and reusable way, I think
15:15:03 <bswartz> then for multitenant situations, there needs to be code on the hypervisor (either manila agent or nova extensions) which mounts the share and re-exports it into the tenant using one of many approaches
15:15:24 <bill_az> bswartz: for 2), I would say "attach to network" - the driver may choose to do different network plumbing depending on requirements
15:15:45 <bswartz> In particular I'm looking for the gpfs and gluster people to tell me why this is crazy
15:16:27 <bswartz> bill_az: based on our meeting this week, I understood that gpfs operates in a flat network
15:17:22 <bill_az> yep - but we may want to use vlan connections from tenant guests to a specific cluster node
15:17:39 <bswartz> bill_az: I realize that gpfs has many different networking options, but semantically I think they all support the concept of "grant access to /a/b/c to host 10.20.30.40"
15:18:54 <bill_az> yes
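[Editor's note: a minimal sketch of what the "grant access to /a/b/c to host 10.20.30.40" semantics could look like for a kernel-NFS flat-network backend. The helper names are hypothetical; the exportfs invocations are standard.]

```python
# Sketch assuming a kernel-NFS-based flat-network driver.
import subprocess


def allow_access(export_path, client_ip):
    """Grant an NFS client read/write access to a directory,
    e.g. allow_access('/a/b/c', '10.20.30.40')."""
    subprocess.check_call([
        'exportfs', '-o', 'rw,no_root_squash',
        '%s:%s' % (client_ip, export_path),
    ])


def deny_access(export_path, client_ip):
    """Revoke the export again when the share is detached."""
    subprocess.check_call([
        'exportfs', '-u', '%s:%s' % (client_ip, export_path),
    ])
```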
15:19:10 <vbellur> bill_az: maybe appropriate drivers can override the attach_share action?
15:19:13 <bswartz> bill_az: okay I was unaware that gpfs could join a vlan directly -- maybe there's an opportunity for GPFS to implement a VLAN-mediated style of driver aka (1)
15:20:04 <bswartz> or maybe the thing that joins the VLAN is just a proxy/gateway itself
15:20:13 <bill_az> I think there may end up being a hybrid of 1/2
15:20:24 <bswartz> that blurs the lines a bit >_<
15:21:09 <bswartz> okay I'm glad this came up though -- I'll incorporate it into my doc update
15:21:32 <bswartz> bill_az: one question for you
15:21:49 <caitlin56> Doesn't GPFS do pNFS-like direct transfers? That makes a straight server-side proxy tricky, unless you can limit the proxy role to metadata.
15:21:55 <bill_az> bswartz:  we are still discussing design internally - I just want to point out 2) as you described it might not be exactly where we end up
15:22:02 <bswartz> is the part of the system you would use to export a GPFS filesystem directly into another vlan part of GPFS itself, or some addon that you guys maintain?
15:22:50 <bill_az> initial driver is nfs (ganesha or could be kernel nfs) on top of gpfs
15:22:55 <bswartz> caitlin56: I think at one layer you're right, but you can always implement a second proxy layer on top of that
15:23:07 <zhaoqin__> bill_az: I see your code is sharing gpfs via nfs. do you plan to let VMs mount the shares via the gpfs protocol?
15:23:36 <bswartz> bill_az: is there any reason that the nfs-ganesha layer couldn't sit on top of some other filesystem like cephfs, glusterfs, or another NFS?
15:24:08 <bill_az> zhaoqin__:  not initially - that would be in a future driver
15:24:31 <anands> bswartz: no, I don't think there is... that aligns with the proposal last week
15:24:35 <zhaoqin__> bill_az: ok
15:24:43 <bill_az> bswartz: that's what ganesha brings
15:25:02 <bill_az> there are FSALs for various filesystems
15:25:35 <bswartz> so to answer hagarth's earlier question, nfs-ganesha is one way we can bridge arbitrary backend filesystems into a tenant network
15:26:09 <bswartz> it could even be the preferred method, if it works well
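[Editor's note: a rough sketch of how an agent might generate an NFS-Ganesha export for a backend filesystem via an FSAL, as discussed above. The exact config keywords differ between Ganesha versions, so treat the template as an approximation rather than a verified configuration.]

```python
# Approximate NFS-Ganesha export stanza, rendered from Python for illustration.

GANESHA_EXPORT_TEMPLATE = """
EXPORT
{
    Export_Id = %(export_id)d;
    Path = "%(backend_path)s";
    Pseudo = "%(pseudo_path)s";
    Access_Type = RW;
    FSAL {
        # Name selects the backend FSAL, e.g. "GPFS", "GLUSTER", or "VFS"
        # for re-exporting an already-mounted filesystem.
        Name = "%(fsal)s";
    }
}
"""


def build_export(export_id, backend_path, pseudo_path, fsal='VFS'):
    """Return a Ganesha export block for the given share."""
    return GANESHA_EXPORT_TEMPLATE % {
        'export_id': export_id,
        'backend_path': backend_path,
        'pseudo_path': pseudo_path,
        'fsal': fsal,
    }
```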
15:26:27 <shamail> Is Manila-agent just in architecture/design phase or has anyone started working on it already?
15:26:37 <bill_az> btw - ganesha v2 was released this week
15:26:40 <caitlin56> Can NFS-ganesha use NFSv4 or NFSv4.1 tricks?
15:26:55 <anands> caitlin56: yes it supports v4, v4.1
15:26:58 <bswartz> shamail: it doesn't exist -- it's just something we're thinking about
15:27:31 <vbellur> caitlin56: are you looking at something specific in v4/v4.1?
15:27:51 <bswartz> caitlin56: I'm pretty sure that ganesha-nfs will sit right in the middle of the data path though -- all traffic will flow through it when it's being used as a gateway
15:27:53 <zhaoqin__> bill_az: great, I need to give it a try
15:28:31 <caitlin56> Our servers support v4, so nfs-ganesha could be a backup method of adding vservers. We will probably use OpenSolaris zones, however. But that has not passed QA yet.
15:28:35 <bill_az> zhaoqin__:  you can ping me if you have trouble building / getting started
15:28:38 <bswartz> so in that scenario I'm not sure what "tricks" it could take advantage of
15:29:09 <zhaoqin__> bill_az: thank you
15:29:59 <vbellur> caitlin56: would that mean you would require ganesha to run on OpenSolaris?
15:30:12 <anands> bswartz: speaking of all traffic being routed through ganesha, do you see it as a bottleneck?
15:30:27 <bswartz> anands: definitely not
15:30:28 <caitlin56> vbellur, not necessarily, we can run Linux inside an OpenSolaris zone.
15:30:41 <bswartz> anands: ganesha can run on the hypervisor nodes and scale along with them
15:30:45 <vbellur> caitlin56: ok
15:31:05 <anands> bswartz: precisely, yes, its what we suggested last week as part of the proposal
15:31:15 <jvltc> bswartz, yes, ganesha can scale well; 2.0 is just out, so there could be some issues with it, but architecture-wise it does
15:31:39 <jvltc> in fact, we experimented with 1.5 here at IBM and it scaled very well
15:32:27 <bswartz> the important thing is that if we can locate the ganesha gateways on the same physical hardware as the guest vms that are using them, there will be no fan-in from a network perspective
15:33:21 <bswartz> the scaling should be as good as cinder
15:33:23 <caitlin56> bswartz: with the caveat that distributed proxies imply distributed security. Some customers will want the vserver option.
15:33:59 <bswartz> caitlin56: the tenants wouldn't know how the cloud was built internally
15:34:07 <gregsfortytwo> I'm having some trouble visualizing how Ganesha would interact with (2) (or maybe just with (2) itself); can somebody spell that out a little more?
15:34:08 <bswartz> all of this should be invisible to a tenant
15:34:12 <caitlin56> bswartz: if the backend servers are NFSv4 then it should actually be better than cinder. Not as good as object storage, but quite good.
15:34:33 <anands> what about the availability story wrt ganesha? Or is the plan to discuss that separately?
15:34:50 <bswartz> gregsfortytwo: don't worry you're not alone -- I intend to capture the new design in an updated wiki
15:35:49 <vbellur> anands: I think we need to have another discussion around HA for NFS-Ganesha.
15:35:52 <bswartz> anands: again if ganesha runs on the same physical machine where the guest lives, then hardware failures are not a problem because any hardware failure that affects ganesha will affect the guest too
15:36:30 <bswartz> and we all know that software failures are not a problem because we never write bugs into our software, right?
15:36:35 <anands> bswartz: what if it's a ganesha crash?
15:36:39 <anands> Vijay: sure
15:36:41 <bswartz> haha
15:36:50 <caitlin56> bswartz: yes, there are a number of scenarios where the fact that the NFS proxy and its clients die at the same time allows you some freedom regarding NFS session rules.
15:36:53 <vbellur> all the code I write is completely free of bugs :)
15:36:55 <jvltc> anands, bswartz just said no software bugs. :)
15:37:33 <jvltc> anands, if ganesha is on the hypervisor node and it crashes, all it needs is a restart, right?
15:37:34 <bswartz> okay so we need to move to the next topic
15:37:44 <jvltc> In this architecture I am guessing ganesha runs as stand alone
15:37:48 <caitlin56> anands: NFSv3 or NFSv4? Any caching done under NFSv3 is risky (but common). NFSv4 has explicit rules.
15:37:48 <bswartz> I'm sure we'll keep spending time on multitenancy in the coming weeks
15:38:07 <bswartz> #topic dev status
15:38:30 <bswartz> okay can we have an update on the new changes for the last week?
15:38:48 <bswartz> https://review.openstack.org/#/q/manila+status:open,n,z
15:39:30 <hagarth> bswartz: rraja_ and csaba are adding unit tests for the flat network glusterfs driver
15:39:42 <bswartz> vponomaryov? yportnova?
15:39:50 <vponomaryov> We are working on three things:
15:39:55 <vponomaryov> 1) Transfer Manila to Alembic is in progress: https://review.openstack.org/#/c/60788/
15:40:01 <vponomaryov> 2) NetApp driver (cmode) is in progress: https://review.openstack.org/#/c/59100/
15:40:09 <vponomaryov> 3) Implementation of BP https://blueprints.launchpad.net/manila/+spec/join-tenant-network is still in progress.
15:40:09 <vponomaryov> 3.1) https://review.openstack.org/#/c/60241/
15:40:09 <vponomaryov> 3.2) https://review.openstack.org/#/c/59466/
15:40:53 <vponomaryov> And have one open item
15:41:19 <bswartz> vponomaryov: will any of these be ready for merge in the next few days? I've reviewed some but not all of them
15:41:23 <vponomaryov> alexpec had asked in the manila channel about the unclear situation with driver interfaces
15:41:31 <caitlin56> vponomaryov: When will you be confident that your interface with Neutron is stable?
15:42:02 <vponomaryov> we believe we can get working ones next week
15:42:41 <caitlin56> vponomaryov: that fits our schedule well, we'll probably start coding early next month on the nexenta driver.
15:42:51 <bswartz> yeah I need to answer alexpc
15:43:19 <vponomaryov> So we think that the driver interfaces should be refactored
15:43:26 <bswartz> last 2 days I've been stuck in long meetings so sorry for those of you waiting for responses from me by email
15:44:35 <vponomaryov> so, this refactor should be done asap
15:44:48 <vponomaryov> before serious work begins on the different drivers
15:45:19 <caitlin56> vponomaryov: let us know when you'll be starting any refactoring. We'll let you know when we're about ready to start coding. Don't want to start coding 1 week before you change everything.
15:45:55 <vponomaryov> it means that the lvm driver is the only acceptable one for now
15:46:15 <vponomaryov> and it should be refactored
15:46:20 <vponomaryov> even for single tenancy
15:46:58 <bill_az> vponomaryov: is there a blueprint / design for the refactoring? what are you thinking of changing?
15:47:18 <vponomaryov> there is no BP for now
15:47:36 <vponomaryov> we are thinking of changing the 3 existing methods into one
15:47:52 <vponomaryov> because different backends will use different methods
15:48:52 <caitlin56> Basically, having 3 entry points only makes sense for certain backends? Therefore just go with one and let each backend map to its implementation?
15:49:37 <bill_az> vponomaryov: that seems like a good idea
15:50:16 <vponomaryov> caitlin56: yes
15:50:28 <vponomaryov> each with its own clear implementation
15:50:51 <caitlin56> +1 then, and the best time to refactor is before we have 4 backends.
15:51:21 <bswartz> I agree, but the best way to get the design right is to have real use cases implemented
15:51:41 <bill_az> vponomaryov: one question I brought up last week - I don't see multiple backend support fully implemented - is there work planned to finish that?
15:51:49 <bswartz> without multiple functioning drivers it will be hard to validate that our design is flexible enough
15:52:06 <vponomaryov> yes, the idea popped up exactly after trying to implement it
15:52:14 <bswartz> so it's a bit of a chicken-and-egg problem
15:52:35 <bswartz> whoever implements first will probably have some pain working through the refactorings as the design settles
15:53:01 <bswartz> okay thanks vponomaryov
15:53:05 <caitlin56> bswartz: we obviously cannot be 100% confident until after multiple backends have been implemented. Still, fixing things that already look likely to need fixing makes sense.
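[Editor's note: a purely illustrative sketch of the driver interface collapse discussed above. The meeting does not name the three methods being merged, so the before/after signatures below are hypothetical.]

```python
# Hypothetical before/after of the proposed refactor.

class OldStyleDriver(object):
    # Before: several narrowly scoped entry points that only map cleanly
    # onto some backends (names are illustrative, not the real interface).
    def allocate_container(self, context, share):
        pass

    def create_share(self, context, share):
        pass

    def create_export(self, context, share):
        pass


class NewStyleDriver(object):
    # After: one entry point; each backend decides internally how to
    # provision storage and produce an export location.
    def create_share(self, context, share):
        """Provision the share and return its export location."""
        raise NotImplementedError()
```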
15:53:19 <bswartz> #topic open discussion
15:53:32 <bswartz> okay I'll open up the floor to other topics
15:53:32 <vponomaryov> bill_az: we propose to refactor, and it will affect all of you
15:53:43 <vponomaryov> who have begun drivers
15:54:30 <bswartz> btw those of you who have contributed drivers already: thank you!
15:54:40 <caitlin56> vponomaryov: the nexenta resources are booked for at least 3 weeks before we start coding.
15:54:54 <vbellur> bswartz: we plan to go ahead and prototype the ganesha mediated model. I assume that would be fine given the direction we are heading?
15:55:18 <bswartz> vbellur: yeah I'm looking forward to seeing a prototype
15:55:19 <vponomaryov> caitlin56: there is not a lot of work; the question is about severity
15:55:33 <bswartz> vbellur: does your team have familiarity with ganesha already?
15:55:52 <vbellur> bswartz: cool. Yeah, we have a good degree of familiarity with ganesha.
15:56:05 <bill_az> vponomaryov: ok on multi-backend. we have an initial gpfs driver with kNFS working - starting on the ganesha flavor now
15:56:40 <bill_az> but no problem if the driver interface changes some
15:57:13 <vponomaryov> so, does everyone agree that we create an appropriate BP and do the refactor?
15:57:51 <bswartz> vponomaryov: +1
15:58:07 <shamail> vponomaryov: +1
15:58:28 <caitlin56> +1
15:58:33 <vbellur> vponomaryov: +1
15:58:36 <bswartz> anything else?
15:58:45 <bswartz> we're near the end of our hour
15:58:56 <bswartz> I plan to hold this meeting as usual next week
15:59:17 <bswartz> the following week is a holiday week here
15:59:40 <bswartz> we can discuss next week, but probably I'll cancel the 26 Dec meeting
15:59:45 <vbellur> bswartz: makes sense
15:59:53 <alagalah> Good morning
15:59:56 <bswartz> thanks everyone
16:00:05 <aostapenko> thanks, bye
16:00:11 <bswartz> #endmeeting