15:01:28 #startmeeting manila
15:01:28 Meeting started Thu Jun 30 15:01:28 2016 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:32 The meeting name has been set to 'manila'
15:01:36 hi
15:01:37 Hi
15:01:38 \o
15:01:38 hello o/
15:01:39 hi
15:01:40 hello
15:01:40 hi
15:01:40 hello
15:01:41 hello
15:01:41 hi
15:01:44 hello
15:01:48 hi
15:01:57 Hi
15:02:04 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:02:54 amazing that we covered about 16 hours worth of topics in the last 2 days and we still have more things to discuss!
15:03:22 I'm going to reorder the agenda here and cover zhongjun's topic first
15:03:35 thanks
15:03:43 #topic share backup spec discussion
15:04:04 In the share backup spec, there are two ways to implement share backup
15:04:14 zhongjun_: we discussed this topic briefly yesterday, but we didn't spend too much time on it since you couldn't attend
15:04:39 #link https://review.openstack.org/#/c/330306
15:04:47 bswartz: I saw it, thanks
15:04:55 1. implement share backup at the storage driver level, which may ensure that there is no metadata loss,
15:04:56 and could support the backup policy.
15:04:56 (The policy means the backup driver could have its own backup strategies, and according to the backup strategy create multiple backup copies)
15:04:56 2. implement share backup in a generalized way (copying files from the src share snapshot to the dest share), which may not prevent metadata loss.
15:05:28 Do we need to add a flag in the backup API to check whether the metadata is allowed to be lossy,
15:05:28 like the migration feature? such as: --lossy
15:05:34 #2 is where we should start
15:05:47 that's what we did with share migration
15:06:05 we started with the "fallback approach" and now we're starting to add driver-assisted migration
15:06:52 Agreed, let's start with the general way then go with specialized ways once we get the general way situated
15:07:07 I think we can use similar flags to the migration API
15:07:41 --preserve_metadata (defaults to true, must set it to false to allow the fallback backup implementation)
15:07:52 agreed with using similar flags
15:07:58 actually now that I write that it seems weird
15:08:05 ganso: that's how migration works right?
15:09:32 bswartz: yes
15:09:54 it just seems weird that you'll always have to specify --preserve_metadata False with every backup call
15:10:15 bswartz: indeed
15:10:29 and there's another issue now that I think about it
15:10:40 an issue which the cinder guys already discovered
15:10:45 bswartz: maybe we should at least attempt to have a driver interface or a mechanism that drivers could start working on, instead of only having the fallback codepath
15:11:02 when you're doing a migration you know the destination and can determine if an optimized migration can be performed
15:11:22 when you're doing a backup, you don't know the ultimate restore destination at the time of the backup
15:11:55 but the preserve_metadata is really needed.
15:12:05 therefore even if you could preserve the metadata on the backup, there's no guarantee that you could preserve the metadata on a restore, or that restore would work at all to a different backend
15:12:14 I think we could add a backup scheduler
15:12:34 well there's a basic decision that needs to be made
15:12:52 should backups be restorable to any backend, or only back to the same backend that made them?
15:13:04 the cinder folks decided on the former
15:13:42 which was easy for them to do because block storage backups are fairly storage-neutral
15:14:05 i think there is value in being able to get data back even if you can't get all the filesystem metadata
15:14:13 if we allow the latter approach, we need to think carefully about the end user impacts
15:15:10 tbarron: +1 Restoring from backups of non-replicated shares is still a form of DR.
15:15:13 tbarron: the problem I see is that a vendor's backup format might not be convertible into a tarball-style backup in order to do a metadata-losing restore
15:15:42 I suppose we could simply require that it is
15:15:55 or allow 2 styles of backup?
15:15:57 which would address this particular problem
15:16:04 tbarron: that would be silly
15:16:12 bswartz: the backup restore could be similar to a migration
15:16:50 either we require that you can restore to a dumb backend or we don't -- if we require it, then vendors would have to make whatever compromises necessary to allow that, including possible backing up everything in 2 different formats
15:17:13 s/possible/possibly/
15:17:33 2 difft formats possibly is more what i was intending
15:17:59 but it's not a choice the user makes, it would be a choice the vendor makes
15:18:11 and doing so would be obviously wasteful
15:18:11 don't intentionally defeat vendor value-add, just require the common denominator in any case
15:18:49 maybe we can poll vendors here and find out if this is a problem in practice
15:19:23 does any storage vendor have a proprietary backup scheme which makes it impossible to restore back to a simple tarball format?
15:19:54 * bswartz thinks about netapp...
15:19:58 bswartz: if we standardize that a backup can assume semantics similar to a share, that it can be mounted and accessed, then it can be copied back with a fallback approach. If the case is acceptable where a backup can be performed without metadata loss via driver, but then may be restored via driver or via fallback with metadata loss, then it looks valuable
15:20:36 ganso: that might preclude some implementations which could otherwise satisfy all our requirements
15:21:18 I'm pretty sure that whatever technology netapp might end up using, we could restore to a tarball-like format
15:21:21 bswartz: yes, the tarball may be something useful, but only if everybody supports it. A more generic approach would be more restrictive, but if every vendor supports it, it may be better
15:22:06 well I agree that if the "backup format" looks like an NFS share you can mount, then we can have some common reusable restore logic
15:22:44 I wonder if we could define something slightly better for CIFS shares
15:22:54 better than "tarball-like"
15:23:25 bswartz: humm, why something better for CIFS? shouldn't backups be protocol-agnostic?
15:23:37 Is there any kind of tarball-like standard for Windows which preserves NTFS metadata?
15:23:58 define different restore ways between NFS and CIFS shares?
15:23:58 I'm presuming that backup would have a protocol and you would restore to the same protocol share
15:24:28 http://linux.die.net/man/8/ntfsclone ?
15:24:42 thus a CIFS share backup restored to another CIFS share might preserve CIFS metadata better
15:24:57 bswartz: humm, I assumed we were mostly concerned about data
15:24:58 wouldn't it be sane to start by implementing backup/restore for the same backend?
15:25:14 tbarron: that's more like a qcow-style disk image
15:25:22 having driver-neutral backup/restore sounds cool, but I'm not really sure how frequent that use case is
15:26:15 vkmc: that would create vendor lock-in -- once you had a substantial number of backups, if they were unrestorable to other backends it would be very hard to add new backends from a different vendor
15:26:38 hm, true that
15:26:39 presumably people like manila because we provide a vendor-neutral interface
15:27:04 bswartz: most deployments use a single type of storage
15:27:23 vponomaryov: that's true -- but for different reasons
15:27:26 bswartz: so, restore-to-the-same-type-of-backend is the most likely use case
15:27:42 my point is... if you choose to use a vendor fs solution, how common is it that you decide to change it?
15:27:43 yes and it's a case we should optimize, but not a case we should depend on
15:28:58 My view is that people who want to be locked into a vendor would use that vendor's own specific tools. People who choose manila do so because they want to start with one vendor and have the freedom to change in the future without having to change their tools (even if they never actually change)
15:30:25 ++
15:30:27 that might not be true of all users, but I think it's true of enough users that we shouldn't betray them by adding new APIs which end up creating lock-in
15:30:49 also this is the view the cinder community has expressed
15:31:37 I like the strategy of optimizing the common case of single-vendor solutions while leaving the door open for interoperability
15:32:17 therefore a backup scheme which is guaranteed to be restorable to any backend (with possible metadata loss) seems like the direction we should go
15:32:47 so, even if we use storage-driver-level backup, we still need to support a common restore way (to any backend)?
15:33:09 yes that's my opinion
15:33:44 we can have optimized backup and optimized restore, but fallback backup always needs to work, and _fallback_restore_ always needs to work
15:34:37 It seems I need to add a flag (such as --lossy) to the restore API?
15:34:52 yes
15:35:02 yeah, but don't name it 'lossy' :)
15:35:04 although I think ganso would prefer --preserve-metadata=False
15:35:19 "--lossy" does not refer to metadata directly
15:35:19 bswartz: yes :)
15:35:45 so, "--lossy" is ambiguous
15:35:48 vponomaryov: a backup that may not preserve anything other than metadata is far less reliable
15:35:58 file-level ACLs can still prevent some copies..
15:35:59 bswartz: how can fallback restore always work if the driver did something vendor specific for the backup?
15:36:09 gouthamr: fallback copies as root
15:36:28 cknight: it would mount the backup and copy data as root
15:36:44 cknight: we would require vendors to make their vendor-specific thing reversible (so you can get a tarball back out) or else we don't allow it
15:37:45 cknight: when I asked if that was a problem for any vendor nobody said anything
15:38:50 zhongjun_: anything else regarding this topic?
15:39:11 bswartz: I don't have the answer to whether my backend can do the tarball restore
15:39:18 zhongjun_: are you planning to deliver a working implementation for newton? time is pretty short
15:39:21 bswartz: need to take a look at that
15:39:42 ganso: yeah and not all vendors are here so we should leave that question open for a few weeks
15:39:59 bswartz: I need to check that too
15:40:53 bswartz: now, that's it.
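[Editor's sketch of the "fallback" backup/restore path discussed above: mount the source (snapshot or backup), copy everything as root, unmount. This is not Manila code; the helper names, the use of rsync, and the NFS mount commands are assumptions for illustration only.]

    #!/usr/bin/env python3
    """Minimal sketch of the fallback backup/restore copy path (assumed, not Manila's API)."""

    import subprocess
    import tempfile


    def _run(*cmd):
        # Run a command as root; the fallback path copies as root so that
        # file-level ACLs on the share cannot block the copy.
        subprocess.check_call(("sudo",) + cmd)


    def copy_share(src_export, dst_export):
        """Copy all data from one NFS export to another.

        "Lossy" in the sense discussed above: plain file data and basic
        attributes survive, but vendor-specific metadata may not.
        """
        with tempfile.TemporaryDirectory() as src, \
                tempfile.TemporaryDirectory() as dst:
            _run("mount", "-t", "nfs", src_export, src)
            try:
                _run("mount", "-t", "nfs", dst_export, dst)
                try:
                    # -a preserves what it can (times, modes, ownership);
                    # anything beyond that is lost, hence the proposed
                    # --preserve_metadata / "lossy" flag on the API.
                    _run("rsync", "-a", src + "/", dst + "/")
                finally:
                    _run("umount", dst)
            finally:
                _run("umount", src)


    # Backup = copy from a snapshot export to the backup target;
    # restore = the same copy in the opposite direction (paths are made up):
    # copy_share("nfs-server:/snapshots/share-123-snap", "backup-host:/backups/share-123")
    # copy_share("backup-host:/backups/share-123", "nfs-server:/shares/share-456")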
15:41:19 my instinct tells me that most implementations should be able to mount their backup object and pull the filesystem out of it somehow
15:41:24 zhongjun_: do you have an answer for bswartz's question?
15:41:57 bswartz: I am planning to do it in newton
15:42:56 zhongjun_: this proposal clearly falls under the "major feature" designation, which means we need the code to be done by R-11
15:42:57 bswartz: yes, time is pretty short, before july 21, right?
15:43:16 yes that's the announced deadline
15:43:51 okay we have one more topic
15:44:00 #topic non-disruptive nfs-ganesha export modifications
15:44:02 rraja: you're up
15:44:10 #link https://sourceforge.net/p/nfs-ganesha/mailman/message/35173839
15:44:53 i just wanted to update that the problem with dynamic exports in ganesha still exists and is being discussed
15:45:15 * bswartz is reading thread
15:45:15 unfortunately it seems like it's not a top priority
15:45:30 has it been promoted to higher priority?
15:45:35 rraja|afk: not top priority for ganesha team?
15:45:38 oh, I see
15:45:39 yes
15:45:56 rraja: I understood it wasn't a high priority, but we were hoping you could make it a high priority
15:46:08 do they have any estimate on when they plan to do something about it?
15:46:09 I think the argument is obvious
15:46:39 it's silly to shut off I/O to existing clients just to add access for an additional client
15:46:54 I understand there are some more complexities in the actual implementation
15:47:05 bswartz: i think the first step would be to clearly state our need as well in that thread.
15:47:05 removing access is way more difficult to deal with than adding access
15:47:09 so i think it would help if aovchinnikov_ and bswartz stated just what you are saying now in the thread
15:48:06 if you think my voice carries more weight than rraja's I'll be happy to speak up there
15:48:20 bswartz: that'd be great.
15:48:24 a different weight and tenor :)
15:48:26 I had assumed that someone inside redhat might have more influence
15:48:40 both voices are important
15:48:43 * bswartz looks for his megaphone
15:48:55 bswartz: csaba and I had already brought this up a long while ago.
15:49:09 is the nfs-ganesha ML one that requires you to subscribe before you can post?
15:49:23 i'm not sure.
15:49:49 well I'm already subscribed to 2 dozen mail lists, why not one more?
15:50:09 okay thanks for pointing out that thread rraja
15:50:13 #topic open discussion
15:50:15 anything else for today?
15:50:28 I had previously asked about submitting a new manila driver in newton for Dell FluidFS
15:50:36 Due to internal dependencies, we are now targeting the new driver for submission early in ocata. Since we will submit in ocata, should I submit the blueprint after Newton releases?
15:50:54 jseiler: it doesn't matter much
15:51:02 I could target the BP to ocata any time
15:51:33 it would be good to make it clear to people that they shouldn't spend time during newton reviewing it though if it won't happen in this cycle
15:51:44 bswartz: so in the blueprint, should I just state this is for ocata? Or is there a field for that? sorry, new to this.
15:51:53 there's a field
15:52:02 IDK if the ocata release has been added to launchpad yet though
15:52:09 I'll have to check that
15:52:12 bswartz: ok, thanks
15:52:29 okay
15:52:54 next week the meeting is still on, despite the holiday in the US
15:53:00 note that the nfs-ganesha project is upstream, open source, deriving from plan 9. redhat developers participate but don't *own* it.
15:53:10 I expect some of our US-based team members might be on vacation next week
15:54:04 tbarron: it's not even packaged by canonical though
15:54:51 interesting
15:54:52 that tells me that interest in the project is pretty limited outside of redhat -- or that it's another case of redhat/canonical fighting
15:55:50 https://github.com/nfs-ganesha/nfs-ganesha/wiki
15:55:56 * bswartz wishes we could all just get along
15:56:02 ^^^ for anyone who wants to contribute
15:56:25 tbarron: that wiki is in a pretty sorry state
15:56:40 I went there looking for the 2.3 release bits and couldn't find them
15:56:44 tbarron proposes to fix the ganesha problem right away without any discussions ))
15:56:48 tbarron: they don't update it very often
15:57:07 they're not on source force either last time I checked
15:57:21 my only point is who is "they". if we have a problem we can help fix it.
15:57:23 sourceforge*
15:58:00 the collective entity behind the nfs-ganesha project
15:58:44 okay we're at the end of our time
15:58:46 while being open-source it still seems to be dominated by rh people
15:59:06 happy holiday weekend to those in the USA
15:59:22 #endmeeting