15:01:28 <bswartz> #startmeeting manila
15:01:28 <openstack> Meeting started Thu Jun 30 15:01:28 2016 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:32 <openstack> The meeting name has been set to 'manila'
15:01:36 <toabctl> hi
15:01:37 <cknight> Hi
15:01:38 <dustins> \o
15:01:38 <gouthamr_> hello o/
15:01:39 <jseiler> hi
15:01:40 <vponomaryov> hello
15:01:40 <tbarron> hi
15:01:40 <bswartz> hello
15:01:41 <ganso> hello
15:01:41 <aovchinnikov_> hi
15:01:44 <zhongjun_> hello
15:01:48 <xyang1> hi
15:01:57 <MikeG451> Hi
15:02:04 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:54 <bswartz> amazing that we covered about 16 hours' worth of topics in the last 2 days and we still have more things to discuss!
15:03:22 <bswartz> I'm going to reorder the agenda here and cover zhongjun's topic first
15:03:35 <zhongjun_> thanks
15:03:43 <bswartz> #topic  share backup spec discussion
15:04:04 <zhongjun_> In the share backup spec, there are two ways to implement share backup
15:04:14 <bswartz> zhongjun_: we discussed this topic briefly yesterday, but we didn't spend too much time on it since you couldn't attend
15:04:39 <bswartz> #link https://review.openstack.org/#/c/330306
15:04:47 <zhongjun_> bswartz: I saw it, thanks
15:04:55 <zhongjun_> 1. implement share backup at the storage driver level, which may ensure that there is no metadata loss
15:04:56 <zhongjun_> and could support a backup policy.
15:04:56 <zhongjun_> (The policy means the backup driver could have its own backup strategies, creating multiple backup copies according to the strategy)
15:04:56 <zhongjun_> 2. implement share backup in a generalized way (copying files from the src share snapshot to the dest share), which may not prevent metadata loss.
15:05:28 <zhongjun_> Do we need to add a flag in the backup API to indicate whether metadata loss is allowed,
15:05:28 <zhongjun_> like the migration feature? such as: --lossy
15:05:34 <bswartz> #2 is where we should start
15:05:47 <bswartz> that's what we did with share migration
15:06:05 <bswartz> we started with the "fallback approach" and now we're starting to add driver-assisted migration
15:06:52 <dustins> Agreed, let's start with the general way then go with specialized ways once we get the general way situated
15:07:07 <bswartz> I think we can use similar flags to the migration API
15:07:41 <bswartz> --preserve_metadata (default to true, must set it to false to allow fallback backup implementation)
15:07:52 <zhongjun_> agreed with using similar flags
15:07:58 <bswartz> actually now that I write that it seems weird
15:08:05 <bswartz> ganso: that's how migration works right?
15:09:32 <ganso> bswartz: yes
15:09:54 <bswartz> it just seems weird that you'll always have to specify --preserve_metadata False with every backup call
15:10:15 <ganso> bswartz: indeed
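The flag behavior being discussed, mirroring the share-migration API, might be sketched roughly as follows. The function and exception names here are hypothetical illustrations, not actual manila code; only the semantics (default to preserving metadata, require an explicit opt-out to allow the lossy fallback) come from the discussion:

```python
# Sketch of the proposed --preserve_metadata flag semantics, modeled on
# share migration: preservation defaults to True, so a caller must pass
# False explicitly before the lossy fallback backup path is allowed.
# All names below are illustrative, not from the manila codebase.

class MetadataLossError(Exception):
    """Raised when only a lossy (metadata-losing) backup path is available."""


def create_backup(share, driver_can_preserve_metadata, preserve_metadata=True):
    """Choose a backup path; refuse the lossy fallback unless explicitly allowed."""
    if driver_can_preserve_metadata:
        return "driver-assisted backup (metadata preserved)"
    if preserve_metadata:
        # The fallback (generic file copy) loses metadata; the default refuses
        # it -- which is why, on a dumb backend, every backup call would have
        # to pass preserve_metadata=False, the awkwardness noted here.
        raise MetadataLossError("set preserve_metadata=False to allow fallback")
    return "fallback backup (metadata may be lost)"
```

This makes the awkwardness concrete: with a backend that lacks driver-assisted backup, the non-default value must accompany every single call.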
15:10:29 <bswartz> and there's another issue now that I think about it
15:10:40 <bswartz> an issue which the cinder guys already discovered
15:10:45 <ganso> bswartz: maybe we should at least attempt to have a driver interface or a mechanism that drivers could start working with, instead of only having the fallback codepath
15:11:02 <bswartz> when you're doing a migration you know the destination and can determine if an optimized migration can be performed
15:11:22 <bswartz> when you're doing a backup, you don't know the ultimate restore destination at the time of the backup
15:11:55 <zhongjun_> but the preserve_metadata is really needed.
15:12:05 <bswartz> therefore even if you could preserve the metadata on the backup, there's no guarantee that you could preserve the metadata on a restore, or that restore would work at all to a different backend
15:12:14 <zhongjun_> I think we could add a backup scheduler
15:12:34 <bswartz> well there's a basic decision that needs to be made
15:12:52 <bswartz> should backups be restorable to any backend, or only back to the same backend that made them?
15:13:04 <bswartz> the cinder folks decided on the former
15:13:42 <bswartz> which was easy for them to do because block storage backups are fairly storage-neutral
15:14:05 <tbarron> i think there is value in being able to get data back even if you can't get all the filesystem metadata
15:14:13 <bswartz> if we allow the latter approach, we need to think carefully about the end user impacts
15:15:10 <cknight> tbarron: +1  Restoring from backups of non-replicated shares is still a form of DR.
15:15:13 <bswartz> tbarron: the problem I see is that a vendor's backup format might not be convertible into a tarball-style of backup in order to do a metadata-losing restore
15:15:42 <bswartz> I suppose we could simply require that it is
15:15:55 <tbarron> or allow 2 styles of backup?
15:15:57 <bswartz> which would address this particular problem
15:16:04 <bswartz> tbarron: that would be silly
15:16:12 <ganso> bswartz: the backup restore could be similar to a migration
15:16:50 <bswartz> either we require that you can restore to a dumb backend or we don't -- if we require it, then vendors would have to make whatever compromises necessary to allow that, including possible backing up everything in 2 different formats
15:17:13 <bswartz> s/possible/possibly/
15:17:33 <tbarron> 2 difft formats possibly is more what i was intending
15:17:59 <bswartz> but it's not a choice the user makes, it would be a choice the vendor makes
15:18:11 <bswartz> and doing so would be obviously wasteful
15:18:11 <tbarron> don't intentionally defeat vendor value addon, just require the common denominator in any case
15:18:49 <bswartz> maybe we can poll vendors here and find out if this is a problem in practice
15:19:23 <bswartz> does any storage vendor have a proprietary backup scheme which makes it impossible to restore back to a simple tarball format?
15:19:54 * bswartz thinks about netapp...
15:19:58 <ganso> bswartz: if we standardize that a backup can assume semantics similar to a share, i.e. that it can be mounted and accessed, then it can be copied back with a fallback approach. If it is acceptable for a backup to be performed without metadata loss via the driver but then restored either via the driver or via the fallback with metadata loss, then that looks valuable
15:20:36 <bswartz> ganso: that might preclude some implementations which could otherwise satisfy all our requirements
15:21:18 <bswartz> I'm pretty sure that whatever technology netapp might end up using, we could restore to a tarball like format
15:21:21 <ganso> bswartz: yes, the tarball may be something useful, but only if everybody supports it. A more generic approach would be more restrictive, but if every vendor supports it, it may be better
15:22:06 <bswartz> well I agree that if the "backup format" looks like a NFS share you can mount, then we can have some common reusable restore logic
15:22:44 <bswartz> I wonder if we could define something slightly better for CIFS shares
15:22:54 <bswartz> better than "tarball-like"
15:23:25 <ganso> bswartz: humm, why something better for CIFS? shouldn't backups be protocol-agnostic?
15:23:37 <bswartz> Is there any kind of tarball-like standard for Windows which preserves NTFS metadata?
15:23:58 <zhongjun_> define different restore ways for NFS and CIFS shares?
15:23:58 <bswartz> I'm presuming that backup would have a protocol and you would restore to the same protocol share
15:24:28 <tbarron> http://linux.die.net/man/8/ntfsclone ?
15:24:42 <bswartz> thus a CIFS share backup restored to another CIFS share might preserve CIFS metadata better
15:24:57 <ganso> bswartz: humm, I assumed we were mostly concerned about data
15:24:58 <vkmc> wouldn't it be sane to start by implementing backup/restore for the same backend?
15:25:14 <bswartz> tbarron: that's more like a qcow-style disk image
15:25:22 <vkmc> having driver-neutral backup/restore sounds cool, but not really sure how frequent that use case is
15:26:15 <bswartz> vkmc: that would create vendor lock in -- once you had a substantial number of backups, if they were unrestorable to other backends it would be very hard to add new backends from a different vendor
15:26:38 <vkmc> hm, true that
15:26:39 <bswartz> presumably people like manila because we provide a vendor neutral interface
15:27:04 <vponomaryov> bswartz: most deployments use single type of storage
15:27:23 <bswartz> vponomaryov: that's true -- but for different reasons
15:27:26 <vponomaryov> bswartz: so, restore-to-the-same-type-of-backend is the most likely use case
15:27:42 <vkmc> my point is... if you choose to use a vendor fs solution, how common is it that you decide to change it?
15:27:43 <bswartz> yes and it's a case we should optimize, but not a case we should depend on
15:28:58 <bswartz> My view is that people who want to be locked into a vendor would use that vendor's own specific tools. People who choose manila do so because they want to start with one vendor and have freedom to change in the future without having to change their tools (even if they never actually change)
15:30:25 <vkmc> ++
15:30:27 <bswartz> that might not be true of all users, but I think it's true of enough users that we shouldn't betray them by adding new APIs which end up creating lockin
15:30:49 <bswartz> also this is the view the cinder community has expressed
15:31:37 <bswartz> I like the strategy of optimizing the common case of single vendor solutions while leaving the door open for interoperability
15:32:17 <bswartz> therefore a backup scheme which was guaranteed to be restorable to any backend (with possible metadata loss) seems like the direction we should go
15:32:47 <zhongjun_> so, even if we use the storage driver level to back up, we still need to support a common restore way (to any backend)?
15:33:09 <bswartz> yes that's my opinion
15:33:44 <bswartz> we can have optimized backup and optimized restore, but fallback backup always needs to work, and _fallback_restore_ always needs to work
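The "fallback always works" rule stated here could be sketched minimally as below. Real manila would mount NFS/CIFS exports and copy as root; plain directory paths stand in for mounted shares, and the function names are hypothetical, not actual manila interfaces:

```python
# Minimal sketch of the fallback paths that must always work, per the
# discussion above: a fallback backup is a plain file copy out of a
# mounted share snapshot, and a fallback restore is a plain file copy
# from a mounted backup into ANY destination share. Metadata (ACLs,
# ownership beyond what the copier preserves) may be lost -- that is
# exactly the trade-off the --preserve_metadata flag is meant to gate.
import shutil


def fallback_backup(mounted_snapshot_path, backup_path):
    """Generic, driver-agnostic backup: copy the file tree out of the snapshot."""
    shutil.copytree(mounted_snapshot_path, backup_path, dirs_exist_ok=True)


def fallback_restore(mounted_backup_path, destination_share_path):
    """Generic restore, valid for any backend: copy the file tree back in."""
    shutil.copytree(mounted_backup_path, destination_share_path, dirs_exist_ok=True)
```

Because both directions are plain copies, a restore can target a different backend (or vendor) than the one that produced the backup, which is the interoperability property being argued for.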
15:34:37 <zhongjun_> It seems I need to add a flag(such as --lossy) in restore API?
15:34:52 <bswartz> yes
15:35:02 <tbarron> yeah, but don't name it 'lossy' :)
15:35:04 <bswartz> although I think ganso would prefer --preserve-metadata=False
15:35:19 <vponomaryov> "--lossy" does not refer to metadata directly
15:35:19 <ganso> bswartz: yes :)
15:35:45 <vponomaryov> so, "--lossy" is ambiguous
15:35:48 <ganso> vponomaryov: a backup that may not preserve anything other than metadata is much less reliable
15:35:58 <gouthamr> file-level ACLs can still prevent some copies..
15:35:59 <cknight> bswartz: how can fallback restore always work if the driver did something vendor specific for the backup?
15:36:09 <ganso> gouthamr: fallback copies as root
15:36:28 <ganso> cknight: it would mount the backup and copy data as root
15:36:44 <bswartz> cknight: we would require vendors to make their vendor-specific thing reversible (so you can get a tarball back out) or else we don't allow it
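The requirement bswartz states here — vendors may use a proprietary backup format, but it must be reversible into something the generic restore path can consume — could be expressed as a driver contract along these lines. The class and method names are hypothetical sketches, not actual manila driver interfaces:

```python
# Sketch of the contract discussed above: the optimized backup method is
# optional, but exposing the backup as a plain, mountable file tree is
# mandatory, so the fallback restore to any backend always works.
# Interface names here are illustrative, not real manila driver methods.
from abc import ABC, abstractmethod


class BackupDriver(ABC):
    def create_backup_optimized(self, share):
        """Optional vendor-optimized backup; by default, 'not supported'."""
        raise NotImplementedError("driver has no optimized backup path")

    @abstractmethod
    def export_backup_as_file_tree(self, backup):
        """Mandatory: expose the backup as a mountable path of plain files.

        Whatever vendor-specific format create_backup_optimized produced,
        the driver must be able to reverse it into a tarball-like file tree
        here, or the optimized format is not allowed at all.
        """
```

A driver that cannot implement `export_backup_as_file_tree` simply cannot be instantiated, which mirrors the "we don't allow it" stance in the discussion.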
15:37:45 <bswartz> cknight: when I asked if that was a problem for any vendor nobody said anything
15:38:50 <bswartz> zhongjun_: anything else regarding this topic?
15:39:11 <ganso> bswartz: I don't have the answer if my backend can do the tarball restore
15:39:18 <bswartz> zhongjun_: are you planning to deliver a working implementation for newton? time is pretty short
15:39:21 <ganso> bswartz: need to take a look at that
15:39:42 <bswartz> ganso: yeah and not all vendors are here so we should leave that question open for a few weeks
15:39:59 <xyang1> bswartz: I need to check that too
15:40:53 <zhongjun_> bswartz: now, that's it.
15:41:19 <bswartz> my instinct tells me that most implementations should be able to mount their backup object and pull the filesystem out of it somehow
15:41:24 <vponomaryov> zhongjun_: do you have answer for bswartz's question?
15:41:57 <zhongjun_> bswartz: I am planning to do in newton
15:42:56 <bswartz> zhongjun_: this proposal clearly falls under the "major feature" designation which means we need the code to be done by R-11
15:42:57 <zhongjun_> bswartz: yes, time is pretty short, before july 21, right?
15:43:16 <bswartz> yes that's the announced deadline
15:43:51 <bswartz> okay we have one more topic
15:44:00 <bswartz> #topic non-disruptive nfs-ganesha export modifications
15:44:02 <bswartz> rraja: you're up
15:44:10 <bswartz> #link https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
15:44:53 <rraja|afk> i just wanted to update that the problem with dynamic exports in ganesha still exists and is being discussed
15:45:15 * bswartz is reading thread
15:45:15 <rraja|afk> unfortunately it seems like it's not a top priority
15:45:30 <aovchinnikov_> has it been promoted to higher priority?
15:45:35 <vponomaryov> rraja|afk: not top priority for ganesha team?
15:45:38 <aovchinnikov_> oh, I see
15:45:39 <rraja> yes
15:45:56 <bswartz> rraja: I understood it wasn't a high priority, but we were hoping you could make it a high priority
15:46:08 <aovchinnikov_> do they have any estimate on when they plan to do something about it?
15:46:09 <bswartz> I think the argument is obvious
15:46:39 <bswartz> it's silly to shut off I/O to existing clients just to add access for an additional client
15:46:54 <bswartz> I understand there are some more complexities in the actual implementation
15:47:05 <rraja> bswartz: i think the first step would be to clearly state our need as well in that thread.
15:47:05 <bswartz> removing access is way more difficult to deal with than adding access
15:47:09 <tbarron> so i think it would help if aovchinnikov_ and bswartz stated just what you are saying now in the thread
15:48:06 <bswartz> if you think my voice carries more weight than rraja's I'll be happy to speak up there
15:48:20 <rraja> bswartz: that'd be great.
15:48:24 <tbarron> a different weight and tenor :)
15:48:26 <bswartz> I had assumed that someone inside redhat might have more influence
15:48:40 <tbarron> both voices are important
15:48:43 * bswartz looks for his megaphone
15:48:55 <rraja> bswartz: csaba and I already brought this up a long while ago.
15:49:09 <bswartz> is the nfs-ganesha ML one that requires you to subscribe before you can post?
15:49:23 <rraja> i'm not sure.
15:49:49 <bswartz> well I'm already subscribed to 2 dozen mail lists, why not one more?
15:50:09 <bswartz> okay thanks for pointing out that thread rraja
15:50:13 <bswartz> #topic open discussion
15:50:15 <bswartz> anything else for today?
15:50:28 <jseiler> I had previously asked about submitting a new manila driver in newton for Dell FluidF
15:50:36 <jseiler> Due to internal dependencies, we are now targeting the new driver for submission early in ocata. Since we will submit in ocata, should I submit the blueprint after Newton releases?
15:50:54 <bswartz> jseiler: it doesn't matter much
15:51:02 <bswartz> I could target the BP to ocata any time
15:51:33 <bswartz> it would be good to make it clear to people that they shouldn't spend time during newton reviewing it though if it won't happen in this cycle
15:51:44 <jseiler> bswartz: so in the blueprint, should I just state this is for ocata? Or is there a field for that? sorry, new to this.
15:51:53 <bswartz> there's a field
15:52:02 <bswartz> IDK if the ocata release has been added to launchpad yet though
15:52:09 <bswartz> I'll have to check that
15:52:12 <jseiler> bswartz: ok, thanks
15:52:29 <bswartz> okay
15:52:54 <bswartz> next week the meeting is still on, despite the holiday in the US
15:53:00 <tbarron> note that the nfs-ganesha project is upstream, open source, deriving from plan 9.  redhat developers participate but don't *own* it.
15:53:10 <bswartz> I expect some of our US-based team members might be on vacation next week
15:54:04 <bswartz> tbarron: it's not even packaged by canonical though
15:54:51 <tbarron> interesting
15:54:52 <bswartz> that tells me that interest in the project is pretty limited outside of redhat -- or that it's another case of redhat/canonical fighting
15:55:50 <tbarron> https://github.com/nfs-ganesha/nfs-ganesha/wiki
15:55:56 * bswartz wishes we could all just get along
15:56:02 <tbarron> ^^^ for anyone who wants to contribute
15:56:25 <bswartz> tbarron: that wiki is in a pretty sorry state
15:56:40 <bswartz> I went there looking for the 2.3 release bits and couldn't find them
15:56:44 <vponomaryov> tbarron proposes to fix the ganesha problem right away without any discussions ))
15:56:48 <aovchinnikov_> tbarron: they don't update it very often
15:57:07 <bswartz> they're not on source force either last time I checked
15:57:21 <tbarron> my only point is who is "they".  if we have a problem we can help fix.
15:57:23 <bswartz> sourceforge*
15:58:00 <aovchinnikov_> the collective entity behind nfs-ganesha project
15:58:44 <bswartz> okay we're at the end of our time
15:58:46 <aovchinnikov_> while being open-source it still seems to be dominated by rh people
15:59:06 <bswartz> happy holiday weekend to those in the USA
15:59:22 <bswartz> #endmeeting