15:59:55 <smcginnis> #startmeeting Cinder
15:59:55 <openstack> Meeting started Wed Aug 10 15:59:55 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:56 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:59 <openstack> The meeting name has been set to 'cinder'
16:00:00 <smcginnis> https://wiki.openstack.org/wiki/CinderMeetings#Next_Cinder_Team_meeting
16:00:09 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex vincent_hou kmartin patrickeast sheel dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao,tommylike.hu
16:00:11 <yuriy_n17> hi
16:00:13 <_alastor_> o/
16:00:13 <scottda> hi
16:00:14 <erlon> hey
16:00:17 <smcginnis> Greetings and salutations.
16:00:17 <geguileo> Hi! o/
16:00:18 <Swanson> hello
16:00:22 <e0ne> hi
16:00:25 <tommylikehu> hello
16:00:25 <mtanino> hello
16:00:27 <eharney> hi
16:00:27 <baumann> Hello all
16:00:48 <flip214> hi
16:00:49 <fernnest> hi
16:00:54 <hemna> hey
16:00:56 <guy___> hi
16:00:56 <jseiler> hi
16:00:58 <smcginnis> #topic Announcements
16:01:04 <tbarron> hi
16:01:05 <wewe0901> hi
16:01:12 <smcginnis> Just the usual...
16:01:17 <xyang1> hi
16:01:19 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:01:28 <rhedlind> hi
16:01:28 <smcginnis> I need to spend some time updating that etherpad.
16:01:59 <e0ne> #link https://wiki.openstack.org/wiki/CinderMeetings#Next_Cinder_Team_meeting
16:02:03 <patrickeast> Hi
16:02:05 <smcginnis> #undo
16:02:06 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x7f5bbf31e9d0>
16:02:23 <smcginnis> e0ne: Doesn't really do any good to have that in the logs when it's always changing. ;)
16:02:25 <jgregor> Hello
16:02:30 <adrianofr_> hi
16:02:38 <smcginnis> #topic Policy on adding new APIs
16:02:40 <ntpttr___> hi
16:02:42 <avishay> hi
16:02:44 <smcginnis> scottda: Hey
16:02:57 <scottda> Hey. So this came up during review...
16:03:14 <scottda> #link https://review.openstack.org/#/c/351275/
16:03:24 <hemna> scottda, is this the volume group review ?
16:03:26 <diablo_rojo> Hello
16:03:27 <scottda> We added support for cinder list-manageable
16:03:32 <scottda> hemna: no
16:03:40 <e0ne> scottda: IMO, we should implement everything new as a microversion, not an extension
16:03:40 <hemna> oh ok
16:03:58 <scottda> and the new api is available on both /v3 endpoint (with microversion) and on /v2 endpoint..
16:04:04 <scottda> e0ne: I agree
16:04:08 <jungleboyj> o/
16:04:17 <smcginnis> As an extension - that's a significant point.
16:04:18 <hemna> I don't think we should add new APIs to v2
16:04:28 <scottda> This one was in-flight during Mitaka, so it's a bit on the fence for microversions.
16:04:30 <hemna> that's kinda the point of v3 microversions
16:04:31 <eharney> i think the whole "official" theory of microversions is that we'd add new things into microversions, might as well start doing that now and stop w/ v2
16:04:35 <smcginnis> But yeah, I agree we should only go with microversions at this point.
16:04:46 <hemna> otherwise we are back where we started with v2 new APIs and versions nightmare
16:04:49 <e0ne> hemna: +1
16:04:55 <geguileo> I agree, we have to use microversions for everything
16:04:58 <_alastor_> hemna: +1
16:05:08 <geguileo> And we actually agreed to that
16:05:20 <scottda> OK. Any dissent on the fact that we should not be adding any new/changed API to /v2?
16:05:22 <geguileo> That's why I had to change my A/A patches to use microversions for the new APIs
16:05:50 <smcginnis> I can't remember, but I think the argument at the time was that doing an extension wasn't technically modifying the v2 API. But that's just a path to confusion IMO.
16:06:15 <hemna> ick
16:06:15 <eharney> smcginnis: technically true, but i think it's still best to not do it
16:06:21 <smcginnis> eharney: I agree.
16:06:31 <scottda> avishay: Anything to say on this? We still need to discuss the status of cinderclient changes to expose /v2 endpoint for list-manageable...
16:06:53 <scottda> since we already merged /v2 support for list-manageable in the c-api
16:06:55 <smcginnis> #info No new v2 extensions to the API. All changes must now only be v3 microversions.
16:07:06 <avishay> scottda: Whatever you guys decide.  It's already in and kind of a pain to remove the v2 extension now, but if that's the vote, that's the vote...
16:07:08 <xyang2> scottda: isn't the server side change already merged for this one?
16:07:09 <jungleboyj> smcginnis: +2
16:07:19 <avishay> xyang2: yes
16:07:36 <scottda> I'm not in favor of removing the /v2 extension that we've merged. It's already exposed....
16:07:45 <scottda> xyang2: Yes.
16:08:05 <smcginnis> scottda: So leave it as an oops and move on?
16:08:14 <eharney> i'm fine with leaving what already merged too
16:08:17 <scottda> smcginnis: That's my opinion.
16:08:31 <diablo_rojo> scottda: +1
16:08:35 <smcginnis> I'm fine with that. It's already out there.
16:08:39 <hemna> if it's already in v2, then add the cinder client change to make it usable :)
16:08:47 <tommylikehu> +1
16:08:49 <scottda> hemna: +1
16:08:58 <e0ne> but we merged it only in Newton
16:08:58 <erlon> hemna: +1
16:09:04 <scottda> and it will also be usable as a /v3 microversion.
16:09:22 <jungleboyj> Yeah, it is out there in the wild; we can't fix it, but now we know the plan going forward.
16:09:28 <diablo_rojo> hemna: +1
16:09:32 <hemna> so the alternative is to move it to v3 and microversion it?
16:09:49 * DuncanT wakes up
16:09:50 <smcginnis> OK, so task for cores - be aware of this and watch to make sure we don't let any other v2 extensions through. ;)
16:09:50 <avishay> hemna: it's already in v3, microversioned, but also a v2 extension
16:09:51 <jungleboyj> It was implied we wouldn't update v2 after v3 went out.  We made a mistake.
16:09:52 <_alastor_> hemna: I think it's already available as a v3 microversion
16:09:57 <smcginnis> myself included
16:09:57 <scottda> hemna: It already is on v3. But on /v2 as well
16:10:28 <hemna> oh...hrmm
16:10:36 <scottda> The only real issue here is that we've exposed it on /v2, and we don't want to do that anymore. I say, leave it, let the client support /v2 (and v3 + microversions) and don't do it again.
16:10:37 <smcginnis> scottda: OK, I think we have consensus on that.
16:10:45 <smcginnis> scottda: +1
16:10:46 <hemna> but this only landed in the newton cycle ?
16:10:57 <e0ne> hemna: yes
16:11:12 <hemna> flip a coin
16:11:13 <hemna> :P
16:11:20 <avishay> heads
16:11:24 <hemna> win!
16:11:28 <scottda> done
16:11:30 <e0ne> hemna: and it sounds safe to remove it from v2
16:11:58 <smcginnis> OK, one final time - anyone have any big objections to accepting that we made a mistake allowing a v2 extension in, but now it's there so we should just leave it, and we should make sure we don't do it again?
16:12:10 <DuncanT> I'd leave it now it is merged
16:12:17 <tommylikehu> agree
16:12:19 <DuncanT> Some people run from head, not releases
16:12:31 <DuncanT> We should get used to not screwing them up unnecessarily
16:12:39 <smcginnis> :)
16:12:41 <xyang2> what about the client side?  that's not merged yet
16:13:08 <scottda> I say merge the client /v2 support. This is how we've done things in the past, so it's not without precedent
16:13:31 <e0ne> xyang2: if we leave it in service-side, we should merge client too
16:13:32 <DuncanT> If we've got the server API in V2, merge the client for sure
16:13:38 <xyang2> ok
16:14:03 <smcginnis> OK good.
16:14:07 <jungleboyj> Avoid confusion around partially implemented support.
16:14:13 <scottda> Thanks everyone
16:14:15 <smcginnis> Let's move on then.
16:14:18 <avishay> Cool, thanks
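[For context on the decision above: new Cinder APIs are now gated behind a v3 microversion instead of being registered as a v2 extension. A minimal sketch of that pattern follows; the controller name, method body, and the microversion number '3.8' are illustrative assumptions, not the actual list-manageable code.]

```python
# Sketch only: exposing a new API as a v3 microversion rather than a v2
# extension. Names and the version string below are assumptions.
from cinder.api.openstack import wsgi


class ManageableVolumesController(wsgi.Controller):

    @wsgi.Controller.api_version('3.8')  # served only at microversion >= 3.8
    def index(self, req):
        """List volumes available for manage.

        A request on the /v3 endpoint must send a microversion header of
        3.8 or higher to reach this handler; /v2 and older microversions
        never see the new API, so no extension is silently added.
        """
        raise NotImplementedError  # placeholder body for the sketch
```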
16:14:27 <smcginnis> #topic What makes a storage solution suitable for use as a backup target?
16:14:33 <smcginnis> DuncanT: up
16:14:55 <DuncanT> Ok, so this was prompted by the disco driver
16:15:00 <smcginnis> #link https://review.openstack.org/#/c/349318/ Backup driver patch that prompted this.
16:15:10 <DuncanT> But to give him credit, John Griffith predicted it too
16:15:20 <DuncanT> What do and don't we want to allow as backup drivers?
16:16:13 <DuncanT> There's an argument that any sort of storage /can/ be used, but that starts to get silly very quickly, and encourages deployments that don't isolate storage between volumes and backups, which defeats the purpose.
16:16:23 <DuncanT> Anybody got any good ideas?
16:16:33 <avishay> DuncanT: I guess if it's moving the data to independent storage, and recovery is possible if the original site is destroyed?
16:16:53 <hemna> DuncanT, isn't that mostly a deployment problem though?
16:17:06 <hemna> if an admin creates volumes and backs them up to the same backend...isn't that his problem ?
16:17:08 <hemna> not Cinder's
16:17:16 <smcginnis> Is there something special the driver can do to optimize or improve the backup and/or recovery that can't be done with an existing driver?
16:17:16 <geguileo> hemna: +1
16:17:18 <e0ne> hemna: +1, good point
16:17:32 <DuncanT> hemna: It is, but when your only backup target was swift, that wasn't possible.
16:17:40 <flip214> smcginnis: it might do incremental copying, for example.
16:17:52 <hemna> DuncanT, and I don't think that's a good thing...
16:18:08 <flip214> hemna: well, DRBD is the same "backend" as seen by Cinder
16:18:09 <avishay> will all vendors implement backup drivers using replication? :)
16:18:23 <hemna> flip214, sure, that's a bit of a special case
16:18:31 <DuncanT> avishay: different concepts as presented to the tenant
16:18:34 <flip214> but you can add copies, which would satisfy the "independent storage" criteria
16:18:37 <hemna> as drbd isn't a single box/point of failure
16:18:59 <DuncanT> I mean we could just write a backup driver that can use any cinder driver, just seems daft
16:19:13 <hemna> DuncanT, I think that's probably a decent way to go
16:19:20 <flip214> I was about to suggest that "if a snapshot can be restored on the backup backend without dd, it's not a backup", but just invalidated my own argument ;)
16:19:22 <hemna> then backup can use any existing driver
16:19:38 <DuncanT> hemna: Yuck
16:19:38 <hemna> maybe add some new driver methods for backup purposes
16:19:41 <thingee> o/
16:19:47 <DuncanT> thingee: sup?
16:19:54 <e0ne> DuncanT: why not? if storage doesn't support this feature, cinder should provide backups, IMO
16:20:09 <hemna> e0ne, +1
16:20:13 <smcginnis> DuncanT: Less yuck if you think of backing up one backend to another backend. Just no reason to do same to same.
16:20:21 <DuncanT> If you want to do that though, just use clone volume to a different type and avoid all the downsides of the backup interface
16:20:34 <hemna> rm -rf cinder/backup
16:20:35 <xyang2> DuncanT: I think with Ceph, you can also use it as volume storage as well as backup device.  block for volume, object for backup
16:20:35 <hemna> :P
16:20:45 <e0ne> hemna: it's too easy
16:21:14 <hemna> seriously though, I think it's a good thing to add backup capabilities to any backend that cinder supports. how we do it is another question
16:21:21 <winston-d> xyang2: +2 for merging swift into cinder. ;)
16:21:40 <xyang2> winston-d: swift can't do block though:)
16:21:54 <e0ne> hemna: and we will have CI for backups! I like it!
16:21:54 <bswartz> no matter what kind of interfaces we design and what kinds of drivers we allow, deployers will always be able to deploy it in ways that are horribly unsafe, horribly inefficient, and horribly expensive
16:21:59 <DuncanT> hemna: as a backup target or source? Every backend should be a source for sure (and therefore restore target). Storing backups on cinder backends seems daft though
16:21:59 <smcginnis> DuncanT: So I know it's not your patch, but do you know if the disco one is doing anything special that would benefit end users?
16:22:11 <bswartz> so why not provide as many options as possible?
16:22:14 <DuncanT> guy___: Are you here?
16:22:26 <guy___> DuncanT: yes
16:22:27 <hemna> bswartz, +1
16:22:31 <eharney> bswartz: agreed
16:22:40 <hemna> DuncanT, some backends can do backup-like stuff though
16:22:45 <e0ne> bswartz: +1
16:22:47 <hemna> it's a deployment choice
16:22:48 <flip214> DuncanT: why would using cinder be daft?
16:22:53 <Swanson> bswartz, +1
16:22:54 <DuncanT> bswartz: Support and test matrix. Trying to help people deploy sane systems.
16:23:24 <DuncanT> flip214: Just use clone volume, and get a copy of the data with way more flexibility and potentially performance
16:23:35 <bswartz> DuncanT: that's probably better address through guides and documentation than control of what code goes in
16:23:38 <bswartz> addressed*
16:24:22 <winston-d> sorry for joining late, is there any specific change/patch we are referring to, or is this just a general discussion for backup?
16:24:25 <Swanson> Seems like a documentation/training issue. Don't do backups to the thing you just spent a million on.
16:24:36 <DuncanT> winston-d: DISCO backup driver prompted it
16:24:38 <bswartz> winston-d: https://review.openstack.org/#/c/349318/
16:24:50 <flip214> DuncanT: ah yes, right.
16:24:51 <winston-d> DuncanT, bswartz: thx
16:24:56 <DuncanT> winston-d: It's a general discussion though
16:25:08 <guy___> smcginnis: disco does the backup into another disco instance
16:25:14 <flip214> so, not backups per se, but a special backup API. got it.
16:25:48 <flip214> a separate backup API would make sense if it keeps the data path out of the cinder node, i.e. goes directly storage => storage.
16:26:00 <flip214> but that will be hard to do cross-vendor...
16:26:03 <smcginnis> guy___: So just a quick scan - the benefit is if you are doing DISCO to another DISCO there are some built in capabilities that would optimize that backup and restore. Is that a correct statement?
16:26:11 <DuncanT> flip214: The migrate API already allows that... nothing needs adding
16:26:36 <xyang2> DuncanT: my understanding is a backup device usually supports object or file. if a storage only supports block, it's usually not used as a backup device due to limits on the number of volumes. So if storage supports object or file, I don't see why it can't be used to do backups
16:26:48 <guy___> smcginnis: yes it is correct
16:26:51 <avishay> DuncanT: yea, you can do a generic backup driver with clone + migrate
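[A rough sketch of the clone + migrate idea avishay mentions, built from python-cinderclient calls. This is illustrative only, not an existing backup driver; 'backup_type' is an assumed volume type mapped to a backend reserved for backups.]

```python
# Hypothetical "generic" block-to-block backup from existing primitives:
# clone the volume (point-in-time copy), then retype it with a migration
# policy so it lands on the backup backend.
def clone_and_migrate_backup(client, volume_id, backup_type):
    src = client.volumes.get(volume_id)
    clone = client.volumes.create(size=src.size,
                                  source_volid=volume_id,
                                  name='backup-of-%s' % volume_id)
    # retype with on-demand migration moves the clone to the target backend
    client.volumes.retype(clone, backup_type, 'on-demand')
    return clone.id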
16:27:12 <eharney> smcginnis: that's the same model the ceph backup driver follows
16:27:38 <avishay> xyang2: not every block storage limits # of volumes
16:27:49 <smcginnis> So then at least in this instance, I don't see why we wouldn't allow that if it brings a definite improvement for users.
16:27:50 <eharney> smcginnis: it gives you a good reason to use a cinder driver for it rather than sticking it behind swift
16:28:11 <hemna> smcginnis, +1
16:28:14 <xyang2> avishay: in that case, block is ok too
16:28:35 <DuncanT> Does it provide any benefit that the clone plus migrate model wouldn't?
16:28:44 <guy___> wewe0901 and I both work on disco; we have a feature called WADB. When a user wants to back up their volume to another deployment of disco, we can back up all data and snapshots (transfer the delta to the remote and take a snapshot on the remote disco deployment).
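[A hedged sketch of the snapshot-diff scheme guy___ describes. DISCO's actual WADB interface is not shown in this log, so every name below is invented for illustration.]

```python
# Illustrative pseudocode for vendor-advantaged incremental backup:
# transfer only the delta since the last backed-up snapshot, then take a
# point-in-time snapshot on the remote deployment.
def incremental_backup(volume, remote):
    snap = volume.create_snapshot()           # point-in-time source state
    base = volume.last_backup_snapshot        # None on the first backup
    if base is None:
        remote.write_full(snap.read_all())    # first pass: full copy
    else:
        remote.write_delta(snap.diff(base))   # changed blocks only
    remote.create_snapshot()                  # restore point on the target
    volume.last_backup_snapshot = snap
```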
16:29:04 <winston-d> xyang2: even cinder vol backend shouldn't have # vol limit IMHO
16:30:03 <DuncanT> winston-d: There's always the limit you hit when you run out of disk space, so every backend has a limit
16:30:22 <bswartz> DuncanT: I'm not sure what you're getting at -- but the advantage of backups compared to clone+migrate is that backups are not required to be attachable like volumes are
16:30:39 <avishay> DuncanT: I guess the main advantage is incremental
16:30:45 <hemna> avishay, +1
16:31:01 <DuncanT> bswartz: I meant for the disco driver... if we're opening this up wide then we should do a generic thing immediately
16:31:10 <bswartz> I see
16:31:30 <avishay> backup is basically replication with possibly higher RPO and with restore capability (higher RTO)
16:31:57 <xyang2> DuncanT: nfs also has a cinder driver and backup driver.  what's the difference with disco?
16:32:00 <DuncanT> e0ne: This has nothing to do with storage features
16:32:26 <DuncanT> xyang2: I want to avoid needing a separate backup driver for every cinder storage backend
16:32:35 <bswartz> xyang2: nfs volume driver and nfs backup driver could run on entirely different NFS implemetnations
16:32:45 <tbarron> IMO it would make sense to do the generic part of this driver generically - it's basically a dd - and put in a hook for vendor-advantaged backup (e.g. from disco to disco)
16:33:11 <DuncanT> The disco driver isn't using the chunked infrastructure, so it seems that there's nothing that could be made much more generic
16:33:14 <xyang2> bswartz, DuncanT: I have not looked at the disco patch.  Does it require them to be on the same node?
16:33:16 <tbarron> xyang2: nfs driver went out of its way to fit into a pretty generic scheme
16:33:46 <avishay> i don't know what the latest replication API is, but couldn't a generic backup driver between storages of the same type be implemented that way?
16:34:12 <avishay> xyang2: guy___ says it's two separate disco deployments
16:34:14 <tbarron> the generic part of this driver is from foreign volume to disco
16:34:25 <tbarron> and it's basically a dd
16:34:36 <DuncanT> There's nothing in the DISCO driver to do incremental unless you're doing disco-to-disco either
16:34:45 <xyang2> avishay: ok, so I don't see a problem then
16:34:57 <bswartz> avishay: the latest replication API is not really comparable to the current backup API
16:35:01 <DuncanT> Replication != backup
16:35:07 <avishay> xyang2: i don't think there's a problem either
16:35:15 <DuncanT> They're different ways of protecting data
16:35:24 <xyang2> DuncanT: oh, so it doesn't inherit from the chunked driver
16:35:28 <DuncanT> Backups are point in time.
16:35:32 <DuncanT> xyang2: No
16:35:45 <tbarron> i think we want to figure out a way forward for vendor-advantaged backup when going from one type of backend to (another AZ with) the same type of backend
16:35:57 <xyang2> DuncanT: agree replication and backups are different
16:36:01 <smcginnis> tbarron: +1
16:36:01 <DuncanT> So if people think there's a benefit to having backup to block devices, then I think we can do much better than this driver
16:36:06 <avishay> DuncanT: if you do async replication by taking a snapshot and sending the diff, it's exactly the same
16:36:19 <tbarron> but i don't think the generic (foreign to disco) part of this adds value or is generic enough
16:36:32 <DuncanT> avishay: Implementation detail. You can implement one using the other as a building block, doesn't make it the same thing
16:36:59 <xyang2> tbarron: right, vendor has incremental snapshot support that can be leveraged
16:36:59 <smcginnis> DuncanT: I think there are clear advantages for folks going disco to disco (party hopping?), so I think this driver is good.
16:37:15 <smcginnis> DuncanT: But aside from that, it's an interesting idea to think about a generic approach.
16:37:20 <DuncanT> I think we should design a good (supporting incremental, etc.) version of backup-to-block and add vendor magic hooks to that
16:37:35 <smcginnis> DiskToDiskBackupDriver
16:37:35 <DuncanT> Rather than merging another special case driver that makes future work hard
16:37:43 <DuncanT> smcginnis: Sure
16:37:55 <bswartz> DuncanT: I can agree with that
16:38:02 <winston-d> DuncanT: +1
16:38:11 <avishay> DuncanT: +1
16:38:13 <e0ne> DuncanT: +1
16:38:28 <DuncanT> I think we should be looking at nice things like changed-block-tracking driven backups, and adding another driver unlike all the others just makes adding things like that harder
16:38:37 <avishay> DuncanT: what i meant was some drivers would be able to use their storage's replication feature to implement the magic hook
16:38:40 <flip214> DuncanT: +1
16:38:46 <DuncanT> The TSM driver is already a bit of a millstone on backup development
16:39:08 <DuncanT> avishay: Maybe, but backups are point in time, a different thing in general to replication
16:39:29 <DuncanT> guy___: Comments?
16:39:31 <avishay> DuncanT: depends on the implementation, that's why i said some drivers
16:39:40 <smcginnis> Well, another relevant point to this discussion then is who will be working on this general backup mechanism?
16:39:52 <smcginnis> If no one can/will, then it's kind of a moot point.
16:40:10 <DuncanT> smcginnis: It generally comes down to 'those who see sufficient value in the feature'
16:40:24 <xyang2> DuncanT: so allow driver to use incremental snapshots if on the same storage, otherwise provide a generic implementation using the chunked driver?
16:40:29 <DuncanT> smcginnis: It's why we block non-generic things - to drive the work on the generic thing
16:40:35 <avishay> there is value IMO.  if someone ever does something generic i guess this one can be refactored.
16:40:58 <bswartz> smcginnis: it's not moot if we officially declare that we won't accept specific implementations of block-to-block backup and tell the interested parties to work on a generic way of doing it
16:41:00 <eharney> just adding another driver sounds like way less work than trying to launch an effort on a new overarching generic plan which may or may not actually be worked on
16:41:04 <guy___> DuncanT: as tbarron said, we use dd for foreign volume to disco, and have some advantages backing up disco to disco
16:41:05 <DuncanT> xyang2: Not sure on the fine details, but sounds right
16:41:14 <smcginnis> eharney: That's where I'm at right now.
16:41:20 <xyang2> DuncanT: have you submitted a bp in Nova for the changed block thing?
16:41:35 <xyang2> DuncanT: you mentioned about it at the summit
16:41:41 <eharney> changed block tracking is an enormous effort and not interesting for discussing any of this today IMO
16:41:57 <tbarron> i think that's another discussion
16:41:57 <DuncanT> But if we add the driver now, then the impetus to work on the generic version is gone
16:42:03 <hemna> tbarron, +1
16:42:37 <DuncanT> xyang2: My employer currently doesn't want to pay me to work on it, or indeed anything cinder related...
16:42:47 <tbarron> but i sure wouldn't mind thought being put into how to use the pretty generic infra we have now (chunked driver) with hooks for vendor-advantaged drivers
16:42:50 <xyang2> DuncanT: :(
16:42:57 <smcginnis> We have 7 backup drivers as of right now: http://docs.openstack.org/developer/cinder/drivers.html#backup-drivers
16:43:01 <smcginnis> Just a data point.
16:43:06 <hemna> DuncanT, join the club
16:43:30 <DuncanT> smcginnis: Most of them are just details on the chunked driver though... they all follow the same model
16:43:43 <avishay> also backup driver CI *hides*
16:43:46 <smcginnis> Yeah, almost all inherit from the chunked driver.
16:44:12 <hemna> hey, no CI = driver removal
16:44:17 <hemna> rm -rf cinder/backup
16:44:26 <hemna> problem solved!
16:44:30 <DuncanT> dd is a step backwards in general... the disco driver only gives any real value for disco-to-disco
16:44:30 <diablo_rojo> hemna:  Ha ha :)
16:44:30 <e0ne> hemna: :)
16:44:35 <avishay> hemna: very generic indeed
16:44:50 <geguileo> XD XD
16:44:53 <DuncanT> ceph and swift backup both get some testing in gate FWIW
16:46:03 <DuncanT> guy___: Do you guys (pun not intended) have any bandwidth to look at improving the non-disco source case?
16:49:40 * tommylikehu wakes up
16:49:55 <DuncanT> It's all gone quiet... netsplit?
16:49:56 <bswartz> is there a netsplit going on?
16:50:06 <winston-d> shall we continue?
16:50:07 <smcginnis> I don't see anyone stepping up to do the generic work.
16:50:09 <diablo_rojo> Lol I thought my IRC disconnected cause no one was talking.
16:50:10 <tommylikehu> i have no idea
16:50:38 <e0ne> nobody wants/has resources to implement a generic driver
16:51:04 <DuncanT> I'm not a fan of merging something obviously weak and very focused on one vendor. If we did that, the very good generic interfaces we have would never have happened
16:51:22 <e0ne> DuncanT: +1
16:51:28 <smcginnis> Isn't the whole point of our drivers that they are focused on one vendor?
16:51:38 <hemna> smcginnis, +1
16:51:42 <DuncanT> Not really, no.
16:51:53 <DuncanT> The CG interfaces, replication, etc
16:52:04 <winston-d> one example is generic volume migration
16:52:10 <DuncanT> Lots of work went in to make sure those were useful to many vendors
16:52:24 <eharney> on the other hand, the generic interfaces have resulted in about 7 drivers total so far, so... i'm not sure they did a lot if the goal is to make it easy to enable more backends
16:52:25 <xyang2> winston-d: yes, that is generic solution
16:52:25 <winston-d> we actually merged that before we merged the migration support for rbd
16:52:31 <hemna> those are cinder features, not drivers though.
16:52:39 <DuncanT> Migration too. Even backup itself has worked hard to be better than DD for any source volume
16:52:45 <hemna> 2 different topics imho
16:52:58 <smcginnis> And backup is useful to many vendors. Just like replication and CG. It's the individual cases that have optimizations.
16:53:12 <DuncanT> eharney: The chunked driver made it trivial to implement the google driver, as an example
16:53:28 <DuncanT> But dd is a pretty poor way of doing a backup
16:53:31 <smcginnis> And migration is the same. If you don't have a vendor solution, set up an NFS share and use the generic case. Otherwise use the vendor driver if there is one.
16:53:39 <winston-d> think about replication v1 and v2k
16:53:42 * hemna writes the dropbox backup driver.....
16:53:53 <guy___> DuncanT: currently, we propose to use dd for the non-disco-to-disco use case. We need to discuss among the team how we can improve it later.
16:53:56 <smcginnis> hemna: Hey, that might not be bad. ;)
16:54:03 <cFouts> o/
16:54:04 <hemna> :)
16:54:11 <DuncanT> hemna: About 30 lines of code
16:54:24 <xyang2> guy___: have you looked at the chunked driver? Isn't that something you can leverage?
16:54:28 <DuncanT> guy___: I'd rather see some evidence of that work (even a spec) before we merge this....
16:54:40 <e0ne> hemna: don't forget about CI:)
16:54:59 <DuncanT> guy___: Maybe use the chunked driver, but I'm not sure where you'd save the metadata file
16:55:32 <DuncanT> hemna: The chunked driver code was written the way it is to make that sort of thing really trivial
16:56:00 <guy___> xyang2, DuncanT: I haven't looked at the chunked driver. I will take a look.
16:56:08 <scottda> 5 minute warning
16:56:19 <smcginnis> I don't think we're going to come to a consensus in the next 5 minutes.
16:56:24 <xyang2> guy___: great! If you can use the chunked driver rather than dd, that will be awesome
16:56:30 <DuncanT> xyang2: +1
16:56:31 <smcginnis> Anything else about this we should cover while we're on it?
16:56:51 <guy___> xyang2: +1
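[For reference, the chunked driver xyang2 points at is cinder.backup.chunkeddriver.ChunkedBackupDriver: the base class handles chunking, compression, incremental backups, and the metadata object, while a backend subclass only maps "containers" and "objects" onto its storage. A skeleton follows; the method set reflects the Newton-era base class as best recalled, so treat it as an assumption, and note that __init__ wiring is omitted.]

```python
# Skeleton of a backup driver built on the chunked infrastructure.
from cinder.backup import chunkeddriver


class DiscoBackupDriver(chunkeddriver.ChunkedBackupDriver):

    def put_container(self, container):
        """Create the container (backup namespace) if it is missing."""

    def get_container_entries(self, container, prefix):
        """List object names in the container matching the prefix."""

    def get_object_writer(self, container, object_name, extra_metadata=None):
        """Return a writable, context-manager file-like object."""

    def get_object_reader(self, container, object_name, extra_metadata=None):
        """Return a readable, context-manager file-like object."""

    def delete_object(self, container, object_name):
        """Delete one stored chunk or metadata object."""

    def _generate_object_name_prefix(self, backup):
        """Return a unique per-backup object name prefix."""

    def update_container_name(self, backup, container):
        """Optionally override the container name; return None to keep it."""

    def get_extra_metadata(self, backup, volume):
        """Return driver-specific metadata threaded through reads/writes."""
```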
16:57:55 <smcginnis> No time, but distros and end users, please weigh in on this if you can: http://lists.openstack.org/pipermail/openstack-dev/2016-August/101028.html
16:58:04 <hemna> *sigh*
16:58:07 <smcginnis> Anything else really quick?
16:58:13 <hemna> that is the saddest thread on the ML ever.
16:58:18 <DuncanT> On a totally different note, somebody mentioned some broken CIs in the channel earlier... can anybody remember which ones?
16:58:21 <eharney> yeah i'm not even sure where to go with that thread
16:58:33 <smcginnis> DuncanT: I'll run the script again.
16:58:40 <hemna> seriously, new cinder github, we post driver changes there.
16:58:51 <xyang2> smcginnis: there's a reply saying it is ok if we don't name it stable branch
16:58:52 <eharney> DuncanT: Huawei was one of them
16:58:55 <Swanson> Huawei FusionStorage CI and VMware NSX CI are having issues this morning. FYI.
16:58:58 <smcginnis> hemna: That may be what we have to do, but we need to talk about it.
16:59:03 <hemna> yah
16:59:09 <geguileo> hemna: With Travis running pep8 and unit tests
16:59:12 <smcginnis> And we'll need more time than we have for that conversation.
16:59:21 <smcginnis> geguileo: Ooh, good call.
16:59:22 <xyang2> smcginnis: so cut a branch and name it something else and we are ok?
16:59:23 <DuncanT> xyang2: Unfortunately, that's one person's view. There's nothing like a consensus on that
16:59:37 <fernnest> I can tell you as someone who has to try to keep their CI running in their spare time, it is hard!
16:59:48 <xyang2> I don't think a consensus can ever be reached on the mailing list
16:59:52 <smcginnis> OK, we're out of time. Thanks everyone.
17:00:01 <Swanson> toodles
17:00:08 <DuncanT> fernnest: I think that is somewhat indicative of the sad state of openstack, personally
17:00:12 <smcginnis> #endmeeting