15:01:08 #startmeeting manila
15:01:09 Meeting started Thu Dec 12 15:01:08 2013 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:12 The meeting name has been set to 'manila'
15:01:19 hey, is anyone here today?
15:01:39 hi
15:01:42 hi
15:01:42 Hi
15:01:49 hello
15:01:52 ah, so I'm not alone
15:01:56 hi
15:01:59 hi
15:02:04 hi
15:02:43 I don't have a specific agenda for this week -- unfortunately I have another meeting right before this one that tends to leave me with no time to prepare :-(
15:02:57 I'll work on solving that issue though
15:03:15 I think I want to cover the same issues we did last week, because I believe we've made some progress on all of them
15:03:24 hi
15:03:37 #topic gateway-mediated multitenancy
15:04:16 so first of all, the wiki document is out of date now, due to the many new ideas that have come up in the last 6 weeks or so
15:04:47 I plan to update the document, but first I'll offer a preview and see if anyone thinks this is crazy
15:05:06 My thinking is that the manila-share service itself will only understand 2 types of attach calls:
15:05:48 1) Attach directly to the tenant network, including support for VLANs, full network connectivity, with a virtualized server, etc.
15:06:33 and 2) Attach to a flat network, just like the existing drivers, where any multitenancy support will be handled externally, either in nova or in some kind of manila agent
15:07:07 All of the gateway-mediated multitenancy support could be built on top of (2), I believe
15:07:30 and all of the VLAN-based multitenancy could be built using (1), which is pretty close to being ready
15:08:16 I need to draw a picture of how this will work, go through all of the use cases, and demonstrate how each will be handled
15:08:53 I think this should make backend design for things like ceph/gluster/gpfs relatively easy, and the hard work will be done outside the manila-share service
15:09:13 Does anyone think I'm crazy?
15:09:46 oh, caitlin56 isn't here -- she would mention something, I'm sure
15:10:11 Gateways etc. fall in (1) as well?
15:10:42 no, in (1) there is no gateway -- the backend is responsible for virtualizing the server and connecting directly to a tenant network
15:11:09 that method provides more functionality, and is preferred for those backends that can support it
15:11:19 bswartz: any thoughts on how to handle multi-tenancy support externally for (2)?
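To make the split concrete, here is a minimal Python sketch of how the two attach styles described above might surface in a driver interface; the class and method names are hypothetical illustrations, not the actual Manila driver API.

```python
# Hypothetical sketch only -- names and signatures are illustrative,
# not the real Manila driver interface.

class TenantNetworkDriver(object):
    """Style (1): the backend virtualizes a server and joins the tenant
    network (e.g. a VLAN) directly, with full network connectivity."""

    def attach_to_tenant_network(self, share, network_info):
        # network_info would carry segmentation id / subnet details;
        # the backend plumbs its own virtual server onto that segment.
        raise NotImplementedError


class FlatNetworkDriver(object):
    """Style (2): the backend exports on a flat network; any multitenancy
    (gateways, hypervisor-side re-export) is handled outside the driver."""

    def allow_flat_access(self, share, client_ip):
        # All the backend has to do is share a directory with a client IP.
        raise NotImplementedError
```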
15:11:28 Thanks
15:11:32 hagarth: absolutely
15:12:21 The approach will be more or less the same as the current wiki, but the new thing I'm proposing is that the manila backend doesn't really need to be aware of most of it
15:13:05 The main thing I realized is that whether the model is "flat"/single-tenant, or multitenant with various forms of gateways, the interaction with the actual storage server is pretty much the same
15:13:38 in the (2) case, when the attach call comes in, the backend just has to share a directory with a client IP, that's it
15:14:00 implementing only that will allow us to build everything else in a generic and reusable way, I think
15:15:03 then for multitenant situations, there needs to be code on the hypervisor (either a manila agent or nova extensions) which mounts the share and re-exports it into the tenant using one of many approaches
15:15:24 bswartz: for 2), I would say "attach to network" - the driver may choose to do different network plumbing depending on req'ts
15:15:45 In particular I'm looking for the gpfs and gluster people to tell me why this is crazy
15:16:27 bill_az: based on our meeting this week, I understood that gpfs operates in a flat network
15:17:22 yep - but we may want to use vlan connections from tenant guests to a specific cluster node
15:17:39 bill_az: I realize that gpfs has many different networking options, but semantically I think they all support the concept of "grant access to /a/b/c to host 10.20.30.40"
15:18:54 yes
15:19:10 bill_az: maybe appropriate drivers can override the attach_share action?
15:19:13 bill_az: okay, I was unaware that gpfs could join a vlan directly -- maybe there's an opportunity for GPFS to implement a VLAN-mediated style of driver, aka (1)
15:20:04 or maybe the thing that joins the VLAN is just a proxy/gateway itself
15:20:13 I think there may end up being a hybrid of 1/2
15:20:24 that blurs the lines a bit >_<
15:21:09 okay, I'm glad this came up though -- I'll incorporate it into my doc update
15:21:32 bill_az: one question for you
15:21:49 Doesn't GPFS do pNFS-like direct transfers? That makes a straight server-side proxy tricky, unless you can limit the proxy role to metadata.
15:21:55 bswartz: we are still discussing design internally - I just want to point out that 2) as you described it might not be exactly where we end up
15:22:02 is the part of the system you would use to export a GPFS filesystem directly into another vlan part of GPFS itself, or some addon that you guys maintain?
15:22:50 the initial driver is nfs (ganesha, or could be kernel nfs) on top of gpfs
15:22:55 caitlin56: I think at one layer you're right, but you can always implement a second proxy layer on top of that
15:23:07 bill_az: I see your code is sharing gpfs via nfs. do you plan to let VMs mount the shares via the gpfs protocol?
15:23:36 bill_az: is there any reason that the nfs-ganesha layer couldn't sit on top of some other filesystem like cephfs, glusterfs, or another NFS?
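The hypervisor-side re-export described above for case (2) could look roughly like the following deliberately naive Python sketch; the helper name, mount options, and the choice of kernel NFS for the re-export are all assumptions for illustration, not a settled design.

```python
# Naive sketch of what a hypothetical hypervisor-side "manila agent" might do
# for case (2): mount the flat-network export the backend granted to this
# host, then re-export it to a single guest on the tenant network.
import subprocess

def reexport_share(backend_export, local_mountpoint, guest_ip):
    # Mount the backend's export on the hypervisor.
    subprocess.check_call(
        ["mount", "-t", "nfs", backend_export, local_mountpoint])
    # Re-export the mounted directory to the specific guest IP only.
    subprocess.check_call(
        ["exportfs", "-o", "rw,sync", "%s:%s" % (guest_ip, local_mountpoint)])

# e.g. reexport_share("backend.example:/a/b/c", "/mnt/manila/share1", "192.168.5.7")
```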
15:24:08 zhaoqin__: not initially - that would be in a future driver
15:24:31 bswartz: no, I don't think there is...that aligns with the proposal last week
15:24:35 bill_az: ok
15:24:43 bswartz: that's what ganesha brings
15:25:02 there are FSALs for various filesystems
15:25:35 so to answer hagarth's earlier question, nfs-ganesha is one way we can bridge arbitrary backend filesystems into a tenant network
15:26:09 it could be the preferred method even, if it works well
15:26:27 Is the Manila agent just in the architecture/design phase, or has anyone started working on it already?
15:26:37 btw - ganesha v2 was released this week
15:26:40 Can NFS-ganesha use NFSv4 or NFSv4.1 tricks?
15:26:55 caitlin56: yes, it supports v4, v4.1
15:26:58 shamail: it doesn't exist -- it's just something we're thinking about
15:27:31 caitlin56: are you looking at something specific in v4/v4.1?
15:27:51 caitlin56: I'm pretty sure that ganesha-nfs will sit right in the middle of the data path though -- all traffic will flow through it when it's being used as a gateway
15:27:53 bill_az: great, I need to give it a try
15:28:31 Our servers support v4, so nfs-ganesha could be a backup method of adding vservers. We will probably use OpenSolaris zones, however. But that has not passed QA yet.
15:28:35 zhaoqin__: you can ping me if you have trouble building / getting started
15:28:38 so in that scenario I'm not sure what "tricks" it could take advantage of
15:29:09 bill_az: thank you
15:29:59 caitlin56: would that mean you would require ganesha to run on OpenSolaris?
15:30:12 bswartz: speaking of all traffic being routed through ganesha, do you see it as a bottleneck?
15:30:27 anands: definitely not
15:30:28 vbellur, not necessarily, we can run Linux inside an OpenSolaris zone.
15:30:41 anands: ganesha can run on the hypervisor nodes and scale along with them
15:30:45 caitlin56: ok
15:31:05 bswartz: precisely, yes, it's what we suggested last week as part of the proposal
15:31:15 bswartz, yes ganesha can scale well; 2.0 is just out so there could be some issues with it, but architecture-wise it does
15:31:39 in fact we experimented with 1.5 here at IBM and it scaled very well
15:32:27 the important thing is that if we can locate the ganesha gateways on the same physical hardware as the guest vms that are using them, there will be no fan-in from a network perspective
15:33:21 the scaling should be as good as cinder
15:33:23 bswartz: with the caveat that distributed proxies imply distributed security. Some customers will want the vserver option.
15:33:59 caitlin56: the tenants wouldn't know how the cloud was built internally
15:34:07 I'm having some trouble visualizing how Ganesha would interact with (2) (or maybe just with (2) itself); can somebody spell that out a little more?
15:34:08 all of this should be invisible to a tenant
15:34:12 bswartz: if the backend servers are NFSv4 then it should actually be better than cinder. Not as good as object storage, but quite good.
15:34:33 what about the availability story wrt ganesha? Or is the plan to discuss that separately?
15:34:50 gregsfortytwo: don't worry, you're not alone -- I intend to capture the new design in an updated wiki
15:35:49 anands: I think we need to have another discussion around HA for NFS-Ganesha.
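Since Ganesha reaches the backend filesystem through an FSAL, a Manila helper could generate a per-share export block; the template below is only an approximation of Ganesha's export syntax (directive names vary between releases), and the function name and layout are assumed for illustration.

```python
# Rough sketch of a helper that renders an NFS-Ganesha export block for a
# given backend FSAL. Directive names and structure differ across Ganesha
# releases, so treat the template as an approximation, not verified config.

GANESHA_EXPORT_TEMPLATE = """
EXPORT
{
    Export_Id = %(export_id)d;
    Path = "%(path)s";
    Pseudo = "%(pseudo)s";
    Access_Type = RW;
    FSAL {
        Name = %(fsal)s;   # e.g. GPFS, GLUSTER, CEPH, VFS
    }
}
"""

def render_export(export_id, path, pseudo, fsal):
    return GANESHA_EXPORT_TEMPLATE % {
        "export_id": export_id, "path": path, "pseudo": pseudo, "fsal": fsal}

# e.g. print(render_export(101, "/gpfs/fs1/share1", "/share1", "GPFS"))
```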
15:35:52 anands: again, if ganesha runs on the same physical machine where the guest lives, then hardware failures are not a problem, because any hardware failure that affects ganesha will affect the guest too
15:36:30 and we all know that software failures are not a problem because we never write bugs into our software, right?
15:36:35 bswartz: what if it's a ganesha crash?
15:36:39 Vijay: sure
15:36:41 haha
15:36:50 bswartz: yes, there are a number of scenarios where the fact that the NFS proxy and its clients die at the same time allows you some freedom regarding NFS session rules.
15:36:53 all the code I write is completely free of bugs :)
15:36:55 anands, bswartz just said no software bugs. :)
15:37:33 anands, if ganesha is on the hypervisor node and it crashes, all it needs is a restart, right?
15:37:34 okay, so we need to move to the next topic
15:37:44 In this architecture I am guessing ganesha runs standalone
15:37:48 anands: NFSv3 or NFSv4? Any caching done under NFSv3 is risky (but common). NFSv4 has explicit rules.
15:37:48 I'm sure we'll keep spending time on multitenancy in the coming weeks
15:38:07 #topic dev status
15:38:30 okay, can we have an update on the new changes for the last week?
15:38:48 https://review.openstack.org/#/q/manila+status:open,n,z
15:39:30 bswartz: rraja_ and csaba are adding unit tests for the flat-network glusterfs driver
15:39:42 vponomaryov? yportnova?
15:39:50 We are working on three things:
15:39:55 1) Transferring Manila to Alembic is in progress: https://review.openstack.org/#/c/60788/
15:40:01 2) The NetApp driver (cmode) is in progress: https://review.openstack.org/#/c/59100/
15:40:09 3) Implementation of BP https://blueprints.launchpad.net/manila/+spec/join-tenant-network is still in progress.
15:40:09 3.1) https://review.openstack.org/#/c/60241/
15:40:09 3.2) https://review.openstack.org/#/c/59466/
15:40:53 And we have one open item
15:41:19 vponomaryov: will any of these be ready for merge in the next few days? I've reviewed some but not all of them
15:41:23 alexpec had asked in manila's chat about the unclear situation with driver interfaces
15:41:31 vponomaryov: When will you be confident that your interface with Neutron is stable?
15:42:02 we believe we can get working ones next week
15:42:41 vponomaryov: that fits our schedule well, we'll probably start coding early next month on the nexenta driver.
15:42:51 yeah, I need to answer alexpec
15:43:19 So, we think that the driver interfaces should be refactored
15:43:26 the last 2 days I've been stuck in long meetings, so sorry to those of you waiting for responses from me by email
15:44:35 so, this refactor should be done asap
15:44:48 before serious work starts on the different drivers
15:45:19 vponomaryov: let us know when you'd be starting any refactoring. We'll let you know when we're about ready to start coding. Don't want to start coding 1 week before you change everything.
15:45:55 that means the lvm driver is the only acceptable one for now
15:46:15 and it should be refactored
15:46:20 even for single-tenancy
15:46:58 vponomaryov: is there a blueprint / design for the refactoring? what are you thinking of changing?
15:47:18 there is no BP for now
15:47:36 we are thinking of changing 3 existing methods to one
15:47:52 because different backends will use different methods
15:48:52 Basically, having 3 entry points only makes sense for certain backends? Therefore just go with one and let each backend map to its implementation?
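The meeting doesn't record which three driver methods are being collapsed, so purely as an illustration, a single consolidated entry point along the lines just discussed might look like this sketch; all names here (attach, FlatNfsDriver) are assumptions rather than anything taken from the Manila code.

```python
# Illustrative sketch only -- the three methods being collapsed are not named
# in the discussion, so these names and signatures are assumptions.

class ShareDriver(object):
    """Base class with a single attach() entry point instead of several
    fixed calls; each backend maps it to its own implementation."""
    def attach(self, context, share, access_info):
        raise NotImplementedError


class FlatNfsDriver(ShareDriver):
    """A flat-network backend only needs to export a path to a client IP."""
    def attach(self, context, share, access_info):
        export_path = "/exports/%s" % share["id"]
        client_ip = access_info["ip"]
        # A real driver would call out to the backend here; this stub just
        # reports what it would have granted.
        return {"export_location": export_path, "allowed_client": client_ip}


# usage sketch
driver = FlatNfsDriver()
print(driver.attach(None, {"id": "share-123"}, {"ip": "10.20.30.40"}))
```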
15:49:37 vponomaryov: that seems like a good idea
15:50:16 caitlin56: yes
15:50:28 each backend gets its own clear implementation
15:50:51 +1 then, and the best time to refactor is before we have 4 backends.
15:51:21 I agree, but the best way to get the design right is to have real use cases implemented
15:51:41 vponomaryov: one question I brought up last week - I don't see multiple backend support fully implemented - is there work planned to finish that?
15:51:49 without multiple functioning drivers it will be hard to validate that our design is flexible enough
15:52:06 yes, the idea popped up exactly after trying to implement one
15:52:14 so it's a bit of a chicken-and-egg problem
15:52:35 whoever implements first will probably have some pain working through the refactorings as the design settles
15:53:01 okay, thanks vponomaryov
15:53:05 bswartz: we obviously cannot be 100% confident until after multiple backends have been implemented. Still, fixing things that already look likely to need fixing makes sense.
15:53:19 #topic open discussion
15:53:32 okay, I'll open up the floor to other topics
15:53:32 bill_az: we propose to refactor, and it will affect all of you
15:53:43 who have begun drivers
15:54:30 btw, those of you who have contributed drivers already: thank you!
15:54:40 vponomaryov: the nexenta resources are booked for at least 3 weeks before we start coding.
15:54:54 bswartz: we plan to go ahead and prototype the ganesha-mediated model. I assume that would be fine given the direction we are heading?
15:55:18 vbellur: yeah, I'm looking forward to seeing a prototype
15:55:19 caitlin56: there is not a lot of work, the question is about severity
15:55:33 vbellur: does your team have familiarity with ganesha already?
15:55:52 bswartz: cool. Yeah, we have a good degree of familiarity with ganesha.
15:56:05 vponomaryov: ok on multi-backend. we have the initial gpfs driver w/ knfs working - starting on the ganesha flavor now
15:56:40 but no problem if the driver interface changes some
15:57:13 so, does everyone agree that we create the appropriate BP and do the refactor?
15:57:51 vponomaryov: +1
15:58:07 vponomaryov: +1
15:58:28 +1
15:58:33 vponomaryov: +1
15:58:36 anything else?
15:58:45 we're near the end of our hour
15:58:56 I plan to hold this meeting as usual next week
15:59:17 the following week is a holiday week here
15:59:40 we can discuss next week, but I'll probably cancel the 26 Dec meeting
15:59:45 bswartz: makes sense
15:59:53 Good morning
15:59:56 thanks everyone
16:00:05 thanks, bye
16:00:11 #endmeeting