15:00:53 <bswartz> #startmeeting manila
15:00:54 <openstack> Meeting started Thu Sep 19 15:00:53 2013 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:58 <openstack> The meeting name has been set to 'manila'
15:01:07 <bswartz> hello everyone
15:01:07 <vbellur> bswartz: hello
15:01:09 <vbelokon> hi
15:01:14 <yportnova_> hi
15:01:17 <shamail> Hi
15:01:58 <vponomaryov> hi
15:02:03 <bswartz> okay so last week we said our goal was to get the code into stackforge by today
15:02:13 <bswartz> #topic status
15:02:34 <bswartz> We've submitted the code at least
15:02:45 <bswartz> vbelokon: can you tell us where that sits now?
15:03:08 <vbelokon> bswartz: yes, we submitted the code, but we need approval from core-developers of the Infra project
15:03:12 <vbelokon> https://review.openstack.org/#/c/46919/
15:03:21 <vbelokon> we have one +2; we need another +2
15:03:37 <vbelokon> Jeremy Stanley (core-developer) promised to give us the second +2 later today.
15:03:55 <vbelokon> so, I hope the Manila project will be in StackForge today or tomorrow
15:04:05 <bswartz> okay
15:04:16 <vbelokon> also we created blueprints to add Manila support to DevStack and Tempest
15:04:18 <vbelokon> https://blueprints.launchpad.net/devstack/+spec/manila-project-support
15:04:20 <vbelokon> https://blueprints.launchpad.net/tempest/+spec/manila-project-tempest-tests
15:04:50 <vbelokon> we need these blueprints to push our changes (Manila support) to DevStack and Tempest
15:04:57 <bswartz> vbelokon: ty
15:05:11 <vbellur> vbelokon: sounds good
15:05:31 <vbelokon> so, currently we are working on the open bugs https://bugs.launchpad.net/manila/+bugs
15:05:32 <bswartz> okay so we need to wait another day or so on stackforge
15:05:42 <vbelokon> bswartz: yes
15:05:51 <bswartz> once we're accepted there I plan to make another announcement on the ML
15:06:14 <bswartz> and hopefully we'll see some more participation in these meetings
15:06:38 <tkatarki> hello all. by way of quick intro, I am from Red Hat and work with vbellur
15:06:39 <bswartz> I imagine most people are holding back because they couldn't contribute code if they wanted to right now
15:06:49 <bswartz> tkatarki: welcome!
15:07:00 <vbelokon> tkatarki: hi
15:07:09 <caitlin-nexenta> bswartz: my suggestion is to focus on documentation for NAS vendors first, users later.
15:07:31 <vbellur> we have rraja and zaitcev joining from Red Hat too.
15:07:45 <bill_az> Hi, this is Bill Owen from IBM joining
15:07:56 <vbellur> bill_az: hello
15:08:01 <bswartz> caitlin-nexenta: I agree -- I've been thinking about some of the "last mile" issues and how they will be tackled by various backends and it's a complicated problem
15:08:02 * caitlin-nexenta is from Nexenta
15:08:17 <vbellur> caitlin-nexenta: hello
15:08:22 <bswartz> I need to capture my thoughts in a document and get it up online
15:08:32 <bswartz> bill_az: hello and welcome
15:08:49 <bill_az> Hi, thanks, glad to be here
15:08:52 <caitlin-nexenta> bswartz: feel free to use me as a beta proofreader.
15:09:27 <zaitcev> I am just looking to see if I can be useful in the future, but currently I am up to my neck in Swift until the end of September.
15:09:42 <bswartz> okay
15:10:11 <bswartz> this is a good time to be joining because we're still designing things and it's not too late to influence the eventual direction we take
15:10:40 <bswartz> we're at an odd phase now relative to the rest of openstack, because everyone else is worried about getting their release done and we don't have to worry about that
15:11:16 <bill_az> bswartz:  I have been working on the cinder driver for the gpfs backend, and am looking forward to contributing to manila as well
15:11:17 <esker> bswartz: wrt "holding back", your first announcement never actually made it to the wider list
15:11:19 <bswartz> I expect a lot of stuff to get solidified during the icehouse timeframe however such that 6 months from now it will feel like a normal openstack project
15:12:04 <bswartz> bill_az: are you the actual developer? if so I'd like to talk to you and get your opinions on our design for the manila "driver" interface
15:12:16 <zaitcev> Indeed, as esker says I did not even know Manila existed until Ayal mentioned it.
15:13:20 <bill_az> bswartz:  yes, I'm doing most of the work on cinder driver for gpfs, and I'd be happy to discuss driver interface for manila
15:13:21 <bswartz> esker, zaitcev: Yes, I apologize, but the initial announcement was evidently eaten by the ML. By the time I discovered what had happened, I'd decided to hold the announcement until we were part of stackforge
15:14:01 <bswartz> so the announcement which you can expect next week will actually be the "first" announcement that most people will see
15:14:14 <esker> bswartz: better this way... when the announcement goes out to the wider community there will be something substantive to look at
15:14:30 <caitlin-nexenta> You're better off announcing sooner than later. I was thinking of making a proposal myself when googling turned up the older NetApp Cinder proposal, and I asked about it.
15:14:45 <vbellur> yeah, the timing is about right now.
15:14:47 <caitlin-nexenta> Others might just start the same work in Cinder or Swift, better to be visible.
15:14:59 <bswartz> caitlin-nexenta: I know -- it's just a few days away now
15:15:01 <esker> great... the stackforge stuff is almost done so all is aligning
15:15:40 <bswartz> caitlin-nexenta: regarding doing the work in cinder or swift -- we tried that and got nowhere, so I'm not worried about someone else trying that and succeeding
15:15:46 <esker> bswartz:  it might be time to start blueprinting the various options for solving the last mile problem
15:15:58 <tkatarki> +1 on that
15:15:59 <bswartz> but the point is still valid that we want the community to know we're here
15:16:03 <shamail> +1
15:16:15 <esker> FWIW, we've operated under the presumption that no 1 option will suffice
15:16:26 <caitlin-nexenta> bswartz : the worry is them trying and failing.
15:17:12 <bswartz> okay, anything more on status? we got a bit sidetracked here
15:17:28 <bswartz> vbelokon: anything you wanted to discuss?
15:17:39 <vbelokon> yes
15:17:58 <vbelokon> I'd like to discuss the near-term roadmap
15:18:13 <vbelokon> and ask you to approve existing blueprints
15:18:46 <vbelokon> or create new ones that are most important for the current step
15:19:09 <bswartz> okay it sounds like we need the next topic
15:19:14 <bswartz> #topic planning
15:19:46 <vbelokon> https://blueprints.launchpad.net/manila
15:20:11 <bswartz> #link https://blueprints.launchpad.net/manila
15:21:03 <bswartz> okay since we've already discussed quotas a great deal, I'd like to get that design finalized so we can finish it
15:21:17 <bswartz> we can talk a bit more about that in a minute
15:22:05 <bswartz> these other BPs are also important
15:22:24 <bswartz> I think we need more however
15:23:17 <bswartz> the issue of connecting shares to guests is a complex one which we need to approach carefully
15:23:38 <bswartz> by next week I'll get a wiki page up that captures the issues
15:23:51 <vbelokon> bswartz: good
15:23:57 <caitlin-nexenta> Especially if we want to embrace as many NAS vendors as possible.
15:24:00 <bswartz> #action bswartz create wiki page for "last mile problem"
15:24:36 <bswartz> Hopefully people can read it and maybe next week we can create some BPs to address the issues that exist
15:24:49 <yportnova__> bswartz: will we make changes in current ACL implementation?
15:24:49 <bswartz> For those who are new, I'll summarize my thinking
15:25:11 <bswartz> yportnova__: yes we will need to
15:25:22 <bswartz> yportnova__: that's just one part of what's needed however
15:25:52 <tkatarki> bswartz: it would definitely help. I am new. I get the 100K view of what we want to do but could use a 10K view.
15:26:08 <bswartz> So our current thinking is that there's no single way to deal with the network plumbing issue that will work for every user
15:26:28 <vbellur> bswartz: agree with that
15:26:33 <bswartz> We need to implement a few ways to do it, and give administrators a choice (an option they can set in manila.conf)
15:27:18 <bswartz> This will probably result in a "frontend" plugin interface of some kind, because there will be parts of the connect-share-to-guest code that can be shared by all backends
15:27:43 <caitlin-nexenta> bswartz: having a few options is great, but selling participation to my management requires that it not be an open-ended set of extra options.
15:27:44 <bswartz> that being said, I think it will be unavoidable that backends will need to explicitly support various connection methods
15:28:21 <bswartz> The connection methods we're considering are:
15:28:45 <bswartz> 0) Assume flat network and require the storage controller to enforce security
15:29:23 <bswartz> 1) Support segmented tenant networks and integrate w/ neutron to connect the storage controller to the tenant network
15:30:05 <bswartz> 2) Rely on hypervisor to tunnel NAS protocol from backend network to tenant network
15:30:38 <esker> additionally, there's the question of how instances (or other consumers) dynamically absorb and mount shares
15:30:43 <bswartz> 3) Exotic stuff like VirtFS to wrap the filesystem access
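[For illustration, a minimal sketch of what such a manila.conf choice might have looked like; the option name and values below are assumptions made for this example, not settings Manila actually had:

    [DEFAULT]
    # Hypothetical option selecting how shares are plumbed to guests:
    #   flat       - 0) flat network, storage controller enforces security
    #   neutron    - 1) segmented tenant networks connected via neutron
    #   hypervisor - 2) hypervisor tunnels the NAS protocol
    #   virtfs     - 3) wrap filesystem access in VirtFS
    share_network_plumbing = neutron

The frontend plugin selected by such an option would carry the shared connect-share-to-guest logic, while each backend declares which of these methods it supports.]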
15:30:43 <caitlin-nexenta> Why is 2) different than 1)? Neutron supports different forms of tunneling.
15:31:14 <bswartz> caitlin-nexenta: it comes down to how smart the backend is required to be in the 2 cases
15:31:30 <caitlin-nexenta> Or to put it another way, if a new tunnelling method is needed - why not do it in the Neutron project?
15:32:53 <bswartz> caitlin-nexenta: I suspect the differences will be more clear when I write down all the details
15:33:07 <bswartz> caitlin-nexenta: if there's a trick we can use to force all the ugliness into neutron I'd be thrilled
15:33:26 <navneet> caitlin-nexenta: +1
15:33:37 <caitlin-nexenta> You mean allow neutron to provide more features. ;-)
15:33:50 <bill_az> bswartz:  can we hide the details of the connection type from the drivers?  can those details be encapsulated in a connection class?
15:33:52 <bswartz> At the end of the day, the backends will have to understand the differences between tenants though
15:34:23 <navneet> how about having a connection manager to deal with different networks
15:34:26 <bswartz> bill_az: I want to minimize how much the drivers have to deal w/ because we want to make driver writing easy
15:34:35 <caitlin-nexenta> bill_az: I don't see how you hide the difference between 0 and 1. It is the context that you do authentication in.
15:34:37 <navneet> also interface with neutron
15:35:12 <bswartz> however I think you'll see that when you consider all the issues there's no way to encapsulate a solution in something simple and clean
15:35:26 <bswartz> nevertheless we will strive toward something simple and clean!
15:35:34 <ekarlso> can I ask, what's Manila ?
15:35:44 <shamail> bswartz: Let's get the wiki out there.  I think this topic will have great discussions.
15:35:48 <vbellur> I think it would be good to await bswartz's proposal.
15:35:49 <bswartz> these are longer term issues though, let's get back to the next week or 2
15:36:17 <bswartz> ekarlso: a file-share management service for OpenStack
15:36:34 <bswartz> #link https://wiki.openstack.org/wiki/Shares_Service
15:36:35 <ekarlso> wasn't that a part of cinder ?
15:36:43 <bswartz> ekarlso: yes!
15:36:55 <bswartz> ekarlso: it was never accepted and we've moved the code to this new project called manila
15:37:50 <bswartz> #topic quotas
15:38:10 <bswartz> okay so last week we spent a few minutes discussing the question of how quotas interact w/ snapshots
15:38:44 <bswartz> if you allocate a 10GB NFS share, that counts as 10GB towards your quota
15:38:58 <caitlin-nexenta> Where would quotas be enforced? (stepping directly into the first tar-pit)
15:39:09 <bswartz> however if you only fill up 1GB of that share and then take a snapshot, it feels like charging another 10GB to your quota is not the right thing to do
15:39:46 <bswartz> intuitively, we should charge 1GB to the quota in that case
15:39:51 <caitlin-nexenta> And if your NAS target does compression and/or dedup, do you get credit?
15:39:57 <shamail> bswartz: Array-level snapshots?
15:40:24 <bswartz> The problem comes in when you're trying to figure out how much space was actually consumed out of the 10GB
15:40:41 <bswartz> I believe that drivers are going to need to provide that information because manila will not be able to determine it in a generic way
15:40:57 <caitlin-nexenta> The big question is: are the quotas for guaranteed resources or actually consumed resources?
15:41:13 <caitlin-nexenta> Actual consumption is more useful, but only the targets can provide that data.
15:41:55 <bswartz> caitlin-nexenta: regarding space savings from vendor-specific technologies, I think we have to hide those savings from end-users and make the admin the beneficiary of them
15:42:32 <bswartz> the reason for this is because we don't want inconsistent behaviour in multivendor environments
15:42:34 <caitlin-nexenta> At least for cross-tenant savings, otherwise you are leaking info between tenants.
15:42:34 <tkatarki> I would think quota is a "not to exceed" limit
15:43:13 <yportnova__> bswartz: in this case how can we reserve resources?
15:43:41 <bswartz> tkatarki: yes the idea is that manila will enforce your quota by preventing you from taking a snapshot or creating a new share when you've reached your quota
15:44:02 <caitlin-nexenta> But are those limits enforced in a client with *no* concept of what actual resources are used, or in the target with deliberately limited single-tenant knowledge?
15:44:29 <caitlin-nexenta> Particularly with snapshots, the client view will radically overstate the usage of storage with a Netapp or ZFS style snapshot.
15:44:42 <bswartz> if we can agree that the drivers are required to report the "size" of a snapshot after it's created, then the only issue is what kind of guidance do we provide to driver authors
15:45:05 <tkatarki> quota is not the same as usage. So I would think that it is an enforcement at the client
15:45:13 <shamail> bswartz: wouldn't Manila only be aware of allocation vs consumption of resources?
15:45:29 <bswartz> shamail: yes
15:45:41 <shamail> bswartz: I like the idea of the drivers handling this
15:45:43 <caitlin-nexenta> The client does not want to limit the total "size" of a single snapshot. They want to know how much disk space their snapshots are actually consuming.
15:45:44 <bswartz> if you allocate a 10GB share, you lose 10GB from your quota immediately, even if you've written no data
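[A rough Python sketch of the accounting described here; TenantQuota, reserve_gigabytes, and QuotaError are hypothetical names for illustration, not Manila's actual classes:

    class QuotaError(Exception):
        pass

    class TenantQuota:
        """Hypothetical per-tenant gigabyte quota."""

        def __init__(self, limit_gb):
            self.limit_gb = limit_gb
            self.used_gb = 0

        def reserve_gigabytes(self, size_gb):
            # Allocating a 10GB share charges 10GB immediately,
            # even before any data is written to it.
            if self.used_gb + size_gb > self.limit_gb:
                raise QuotaError("gigabyte quota exceeded")
            self.used_gb += size_gb

Both creating a share and taking a snapshot would pass through a check like this before any driver is invoked.]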
15:46:23 <tkatarki> that seems ok to me
15:46:28 <bswartz> this is my proposal:
15:47:05 <caitlin-nexenta> If you snapshot a 10 GB share 10 times are you using 100 GB of quota?
15:47:07 <bswartz> the "default" implementation of the snapshot size driver method should return the size of the share it was created from (which is the maximum value we could ever return)
15:47:57 <bswartz> we will suggest that driver authors provide a way to report how much space the user has actually filled up, not counting any space savings
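[A sketch of this proposal as a driver method; the method name and the helper are assumptions for illustration:

    class ShareDriver(object):
        def get_snapshot_size_gb(self, share):
            # Conservative default: charge the full share size, the
            # worst case a snapshot could ever grow to occupy.
            return share.size_gb

    class SmarterBackendDriver(ShareDriver):
        def get_snapshot_size_gb(self, share):
            # A backend that can measure it should instead report how
            # much data the user actually wrote (1GB in the example
            # above), ignoring any dedupe/compression savings.
            return self._backend_used_gb(share)  # hypothetical helper
]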
15:48:29 <caitlin-nexenta> bswartz: at the minimum a driver module should be able to report aggregate actual usage (with the no-cross-tenant leakage caveat).
15:48:34 <tkatarki> guys let me ask a more basic question: is quota what you request to reserve when a file share is requested and created? That is, will I as a user be requesting 10GB and that is my "quota"?
15:48:42 <bill_az> if the driver returns both allocated and actually consumed space then the cloud operator has freedom to decide how to bill / account for this
15:48:46 <vbellur> so would those be two separate methods?
15:49:12 <bswartz> caitlin-nexenta: yes
15:49:23 <caitlin-nexenta> Providing both figures allows an operator to intelligently decide how many snapshots to retain.
15:50:08 <bswartz> the whole point of quotas in manila is to allow the manila admin to give a certain amount of space to a tenant that he can manage himself, either by creating new shares or taking snapshots of existing shares
15:50:22 <bswartz> the question of how the backend enforces that you don't write 11GB into a 10GB share is a different issue
15:51:13 <bswartz> some controllers will use internal quotas to enforce such limits, and that's not what we're talking about
15:51:19 <shamail> bswartz: There are also differences in how snapshot storage is allocated (reserved vs dynamic) and in either case snapshot capacity will change based on change rate.  If we track utilization then this information may need to be updated periodically
15:51:46 <bswartz> shamail: I think that's exactly what we want to avoid
15:52:05 <shamail> bswartz: Agreed
15:52:13 <bswartz> we don't want the size of a snapshot to change because someone changed something else in the system somewhere
15:52:39 <bswartz> the size of the snapshot, for quota purposes, should be determined at the time it's taken
15:53:02 <caitlin-nexenta> But how much disk space is required for a specific snapshot *does* change over time, even when viewed from a single tenant perspective.
15:53:19 <bswartz> then if the actual space consumed by the snapshot increases or decreases on the controller due to compression or dedupe or other files getting deleted, we don't care
15:53:43 <caitlin-nexenta> I'm talking about how much overlap there is with the current file system.
15:54:09 <caitlin-nexenta> With ZFS, at the moment a snapshot is taken it is effectively consuming nearly zero disk space.
15:54:11 <bswartz> caitlin-nexenta: that's why I want to return a number large enough that it will never be exceeded under any circumstance (a worst-case space number)
15:54:39 <bswartz> caitlin-nexenta: that's right -- and the driver should not report the 0 number it should report how much data is in the snapshot
15:54:51 <shamail> There needs to be a defined "cost" for snapshots (each vendor can specify via driver)... The actual capacity would be great but difficult to track.
15:54:52 <caitlin-nexenta> A quota for *all* snapshots on those terms might be reasonable.
15:55:22 <vbellur> shamail: +1
15:55:25 <caitlin-nexenta> But stating a quota for *each* snapshot will grossly overallocate.
15:55:27 <davenoveck> Ok, but the issue of overlap with current use is different than that of overlap among snapshots.
15:55:28 <bswartz> okay I'm sensing this issue might be a sensitive one and require refinement over time
15:55:51 <bswartz> I'm going to propose that we do something VERY conservative for now, and we plan to propose better solutions in the future
15:56:03 <caitlin-nexenta> davenoveck: +1
15:56:11 <vbellur> bswartz: +1
15:56:28 <bswartz> so in the short term, we will massively overcharge users for snapshots w.r.t. quotas on most backends (except very stupid ones)
15:56:57 <bswartz> if we can agree on a better system then we will implement that later -- but I feel that we need to get some kind of quota support in from the beginning
15:57:45 <bswartz> we're almost out of time today
15:57:52 <bswartz> #topic open discussion
15:58:03 <bswartz> anyone have anything else to bring up?
15:58:22 <vbellur> nothing from me
15:58:24 <caitlin-nexenta> Have you given any thought to the issue of NFSv4 vs NFSv3?
15:58:33 <shamail> Is there any intersection between quota management and ceilometer in the future?
15:58:36 <bswartz> caitlin-nexenta: yes
15:59:20 <bswartz> shamail: ceilometer is something I haven't spent time on but we will need to
15:59:24 <vbellur> should we move discussions over to #openstack-manila?
15:59:38 <caitlin-nexenta> I'm assuming from a charter point of view that we will have to support both, do you agree?
15:59:55 <bswartz> yes everyone is welcome to join #openstack-manila and continue talking after we lose this room
16:00:24 <bswartz> expect an announcement on the ML from me when we get into stackforge finally!
16:00:41 <bswartz> #endmeeting