16:02:15 <jgriffith> #startmeeting cinder
16:02:15 <openstack> Meeting started Wed Jan 16 16:02:15 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:19 <openstack> The meeting name has been set to 'cinder'
16:02:26 <thingee> o/
16:02:27 <JM1> indeed!
16:02:35 <eharney> o/
16:02:35 <avishay> hello all
16:02:40 <DuncanT> How's the new swimming pool John?
16:02:46 <xyang_> hi
16:02:51 <kmartin> hello
16:03:04 <jgriffith> DuncanT: at least it will be a fancy below ground pool....
16:03:14 <jgriffith> DuncanT: If I can find the darn thing
16:03:15 <bswartz> hi
16:03:24 <avishay> jgriffith: good luck!
16:03:25 <jgriffith> Wow..  good turn out this week
16:03:31 <jgriffith> avishay: Yeah, thanks!
16:03:33 <smulcahy> hi
16:03:44 <winston-d> hi
16:03:46 <jgriffith> Ok.... let's start with the scality driver
16:03:56 <jgriffith> JM1: has joined us to fill in some details
16:04:04 <jdurgin1> hello
16:04:07 <JM1> hi everyone
16:04:13 <jgriffith> https://review.openstack.org/#/c/19675/
16:04:41 <jgriffith> #topic scality driver
16:04:45 <JM1> I see that DuncanT just added his own comment
16:05:15 <jgriffith> JM1: Yeah, and after our conversation yesterday my only remaining concern is missing functionality
16:05:33 <JM1> I didn't expect the lack of snapshots to be such an issue
16:06:09 <DuncanT> From my POV, I think people expect the things that have always worked in cinder to continue to always work, regardless of backend
16:06:26 <JM1> hmmm
16:06:42 <jgriffith> JM1: Keep in mind that the first thing a tester/user will do is try and run through the existing client commands
16:06:45 <JM1> I saw that the NFS driver doesn't support snapshot either
16:06:53 <JM1> but I suppose some people use it?
16:06:56 <jgriffith> JM1: Yeah, and it's been logged as a bug
16:07:04 <jgriffith> :)
16:07:06 <JM1> ok
16:07:18 <JM1> can it be actually implemented with regular NFS?
16:07:20 <jgriffith> So why don't you share some info on plans
16:07:35 <jgriffith> You had mentioned that you do have plans for this in the future correct?
16:07:36 <winston-d> well, I notice that zadara driver doesn't support snapshot either.
16:07:40 <DuncanT> qcow files rather than raw would allow it
16:07:40 <matelakat> Hi all.
16:07:45 <JM1> regarding snapshots, we have no formal plans so far
16:07:58 <jgriffith> oh, I misunderstood
16:07:59 <JM1> of course we're thinking about implementing it
16:08:03 <matelakat> We added a XenAPINFS driver, and snapshot support is waiting for review. Although it is a bit generous.
16:08:34 <JM1> but right now there is no such thing as a release date for this feature
16:08:34 <matelakat> Making deep copies instead of real snapshots ...
16:08:41 <rushiagr> hi!
16:09:14 <JM1> matelakat: interesting
16:09:18 <jgriffith> JM1: Is there any sort of hack that comes to mind to make it at least appear to have this?
16:09:30 <jgriffith> Similar to what matelakat has done?
16:09:37 <JM1> jgriffith: well, just as was said, we can do a full copy
16:09:53 <jgriffith> JM1: That would alleviate my issue
16:10:06 <DuncanT> ditto
16:10:08 <thingee> +1
16:10:08 <jgriffith> For me it's more of continuity
16:10:10 <JM1> I just thought that this could be more disappointing to users than knowing that they don't have real snapshots
16:10:21 <avishay> I think we have to keep in mind that snapshot implementations will also need to support future functionality, like restore?
16:10:31 <jgriffith> JM1: So I'd rather document to users what you're doing and why it's not a good idea
16:10:36 <jgriffith> but give them the opportunity
16:10:49 <jgriffith> Remember, most users are doing things automated
16:11:00 <rushiagr> jgriffith: +1
16:11:09 <jgriffith> They don't want/need to check "if this backend that"
16:11:12 <bswartz> jgriffith: the ability to do fast/efficient snapshots seems like the kind of thing a driver should be able to advertise in it capabilities
16:11:12 <jgriffith> etc etc
16:11:36 <avishay> bswartz: +1
16:11:38 <matelakat> bswartz +1
16:11:38 <winston-d> bswartz: good idea. :)
16:11:40 <jgriffith> The continuity and adherence to the API is what matters to me, not the implementation
16:11:51 <jgriffith> bswartz: +1
16:12:07 <guitarzan> so every backend has to implement every feature in the API?
16:12:11 <JM1> jgriffith: well at the moment, they will need to also setup our SOFS before using it in cinder
16:12:13 <bswartz> but I agree that some sort of dumb/slow snapshot implementation is better than none
16:12:28 <JM1> and AFAIK, there can be only one cinder driver at a time
16:12:29 <avishay> Maybe there should be a generic full copy implementation and those who have efficient snapshots will advertise the capability?
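A minimal sketch of the capability reporting bswartz and avishay are suggesting, assuming it rides on the existing get_volume_stats() call; the 'efficient_snapshots' key is purely illustrative, not an agreed convention:

    class DriverStatsSketch(object):
        """Illustrative only -- not an actual Cinder driver."""

        def __init__(self):
            self._stats = None

        def get_volume_stats(self, refresh=False):
            if refresh or self._stats is None:
                self._stats = {
                    'volume_backend_name': 'SOFS',
                    'vendor_name': 'Scality',
                    'driver_version': '1.0',
                    'storage_protocol': 'file',
                    'total_capacity_gb': 'unknown',
                    'free_capacity_gb': 'unknown',
                    'reserved_percentage': 0,
                    # Hypothetical flag: lets the scheduler (or an operator)
                    # tell copy-on-write snapshots apart from a full-copy
                    # fallback implementation.
                    'efficient_snapshots': False,
                }
            return self._stats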
16:12:30 <jgriffith> guitarzan: well, I hate to use absolute terms
16:12:44 <guitarzan> well, there is obviously some line being drawn here and it is unclear what it is
16:12:47 <DuncanT> JM1: Multiple driver support is a hot new feature
16:13:02 <JM1> DuncanT: do you mean it's being implemented?
16:13:05 <jgriffith> JM1: But I consider things like create/snapshot/delete/create-from-snap core
16:13:37 <jgriffith> guitarzan: ^^ that was meant for you
16:13:40 <winston-d> JM1: there can be multiple cinder driver for a cinder cluster.
16:13:42 <DuncanT> JM1: I believe it works, within the limitations of 'works' (requires both drivers to implement the get_stats function)
16:13:54 <JM1> winston-d: ah good to know!
16:14:00 <guitarzan> jgriffith: yeah, I got that :)
16:14:08 <jgriffith> guitarzan: you object?
16:14:17 <guitarzan> I just think it's an interesting stance to take
16:14:31 <guitarzan> people wanting cinder support for a particular backend probably already know about that backend
16:14:36 <JM1> winston-d: so I suppose there is a system of rules to determine what driver will host what volume?
16:14:50 <winston-d> JM1: yeah, the scheduler
16:14:52 <DuncanT> JM1: volume_types
16:14:53 <jgriffith> guitarzan: that's fair
16:15:04 <jgriffith> guitarzan: let me rephrase it a bit
16:15:07 <hemna__> is there a way for the user to select which backend to use?  volume types?
16:15:32 <winston-d> JM1: scheduler decides which back-end to serve the request based on volume type.
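In rough terms, the matching winston-d describes could look like the sketch below; this is a simplified illustration, not the actual filter scheduler code, and the extra-spec key reuses the hypothetical flag from the stats sketch above:

    # Simplified illustration of type-based back-end selection: each
    # back-end periodically reports stats/capabilities, and a request's
    # volume type carries extra specs to match against them.
    def backend_matches(extra_specs, backend_stats):
        return all(backend_stats.get(key) == value
                   for key, value in extra_specs.items())

    # Hypothetical 'fast-snap' volume type requiring CoW snapshots:
    fast_snap_type = {'efficient_snapshots': True}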
16:15:34 <jgriffith> The initial review is sure to get a -1 from me, but if the vendor simply can't offer the capability then exceptions can and likely will be made
16:15:39 <DuncanT> Can we move multi-driver talk to later in the meeting, or this is going to get confusing
16:15:49 <jgriffith> DuncanT: +1
16:15:51 <avishay> hemna__: a user shouldn't need to choose a backend, as long as the backend has the capabilities they need
16:16:10 <winston-d> avishay: +1
16:16:17 <guitarzan> jgriffith: that sounds reasonable
16:16:22 <jgriffith> As DuncanT said, let's table the back-end discussion for the moment
16:16:56 <jgriffith> guitarzan: that's more along the lines of what I had in mind and is why JM1 is here talking to us this morning :)
16:17:02 <guitarzan> :)
16:17:13 <jgriffith> so JM1....
16:17:18 <DuncanT> I'd also be tempted to start being more harsh on new features being added without a reasonable number of drivers having the functionality added at the same time
16:17:20 <JM1> so from this I understand that even copying would be an acceptable fallback to support snapshotting
16:17:37 <JM1> that's something we can do
16:17:43 <jgriffith> DuncanT: I would agree, but I don't know that we haven't adhered to that already
16:17:51 <matelakat> JM1: on the manager level?
16:17:55 <jgriffith> I've avoided putting SolidFire stuff in core for that very reason
16:18:04 <DuncanT> jgriffith: As the project matures, we can bring in tighter rules
16:18:08 <JM1> matelakat: manager?
16:18:24 <matelakat> the one that calls the driver.
16:18:32 <guitarzan> I would guess he means at the driver level
16:18:51 <JM1> ah, you mean as a fallback for drivers like ours that don't have the feature?
16:18:54 <guitarzan> the driver can certainly just do a copy
16:19:02 <guitarzan> ahh, I misunderstood if that's the case :)
16:19:08 <matelakat> yes, so on the driver level, you only need a copy
16:19:09 <JM1> yes I was thinking inside our driver
16:19:55 <JM1> inside the driver it's easy to copy files, rename, whatever
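Roughly what the fallback JM1 is describing could look like for a file-backed driver; the flat-file layout and path helper here are assumptions for illustration, with error handling and attach checks omitted:

    import os
    import shutil


    class SnapshotAsCopySketch(object):
        """Illustrative snapshot-via-deep-copy fallback, not Scality's code."""

        def __init__(self, volumes_dir):
            # Assumed: volumes and snapshots live as flat files under the
            # shared filesystem mount point.
            self.volumes_dir = volumes_dir

        def _path(self, name):
            return os.path.join(self.volumes_dir, name)

        def create_snapshot(self, snapshot):
            # No CoW available, so the "snapshot" is a full copy of the
            # volume file.  Assumes the source volume is detached (the
            # default API requirement unless --force is used).
            shutil.copyfile(self._path(snapshot['volume_name']),
                            self._path(snapshot['name']))

        def delete_snapshot(self, snapshot):
            os.remove(self._path(snapshot['name']))

        def create_volume_from_snapshot(self, volume, snapshot):
            shutil.copyfile(self._path(snapshot['name']),
                            self._path(volume['name']))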
16:20:00 <DuncanT> If it can be made generic enough that NFS can use it too, even better
16:20:09 <JM1> indeed
16:20:21 <bswartz> DuncanT: +1
16:20:59 <JM1> the performance cost is high, so that won't be useful with all workloads
16:21:14 <JM1> but I gather that for some it will be still better than nothing
16:21:23 <DuncanT> I think that is true, yes
16:21:56 <jgriffith> JM1: well snapshots in particular have been notoriously poor performance items in OpenStack
16:22:01 <matelakat> Maybe we could come up with a name for those snapshots, so it reflects that they are not real snapshots.
16:22:02 <jgriffith> LVM snaps suck!
16:22:16 <jgriffith> matelakat: we have clones now
16:22:40 <bswartz> LVM snaps are better than full copies
16:22:44 <matelakat> jgriffith: thanks.
16:23:08 <hemna__> ok I gotta jam to work...
16:23:17 <jgriffith> bswartz: I didn't ask for what's worse, I just said they suck
16:23:19 <guitarzan> matelakat: the snapshot vs backup discussion is another one :)
16:23:19 <JM1> I don't see how LVM snaps can be worse than a full copy of a 1TB volume
16:23:28 <bswartz> jgriffith: fair enough
16:23:35 <jgriffith> They're only worse when you try to use them for something
16:23:42 <jgriffith> and they kill perf on your original LVM
16:23:54 <avishay> it may take longer to make a full copy, but the full copy will perform better
16:23:58 <jgriffith> I'm not talking perf of the action itself, but we're really losing focus here methinks :)
16:24:00 <guitarzan> I think we're drifting again...
16:24:12 <jgriffith> guitarzan: +1
16:24:18 <jgriffith> Ok... back in here
16:24:34 <jgriffith> JM1: Do you have a strong objection to faking snapshot support via a deep copy?
16:24:44 <JM1> jgriffith: not at all
16:24:45 <jgriffith> Or does anybody else on the team have a strong objection?
16:24:51 <JM1> but I will have to think about details
16:24:55 <guitarzan> the api doesn't care if a snapshot is a snapshot or a copy
16:25:03 <jgriffith> guitarzan: correct
16:25:12 <JM1> and come back to you folks to ask implementation questions
16:25:15 <jgriffith> So TBH this is exactly what I did in the SF driver anyway
16:25:17 <guitarzan> so until the "backup" discussion starts again, we shouldn't worry about implementation
16:25:27 <jgriffith> we didn't have the concepts of snapshots so I just clone
16:25:39 <JM1> eg. can we expect the VM to pause I/O during the snapshot?
16:25:44 <hemna__> that's pretty much what we do in the 3PAR driver as well
16:25:59 <jgriffith> JM1: typically no, you can't assume that
16:26:14 <jgriffith> JM1: we don't do anything to enforce that
16:26:20 <jgriffith> that could be a problem eh...
16:26:29 <DuncanT> --force option allows snapshot of an attached volume, but it is a 'here be dragons' option
16:26:29 <JM1> jgriffith: ok so for a copy we need a mechanism to force a pause
16:26:39 <guitarzan> you have to be disconnected to snap unless you --force right?
16:26:50 <guitarzan> DuncanT: +1
16:26:54 <JM1> DuncanT: oh, so that means usually we snapshot only unattached volumes?
16:27:00 <DuncanT> normal snapshot without force requires source volume to be unattached
16:27:08 <DuncanT> JM1: That is my belief
16:27:13 <JM1> ok, so cp should work
16:27:22 <avishay> yes
16:27:28 <DuncanT> JM1: We don't currently support snap of live volumes, though that will be fixed some time
16:27:42 <JM1> ah ok
16:27:53 <jgriffith> yall speak for yourselves :)
16:27:58 <JM1> I thought it already worked like that on more capable drivers
16:28:03 <guitarzan> to be fair, the api also doesn't prevent you from immediately reattaching :)
16:28:38 <guitarzan> again, "here be dragons"
16:28:40 <DuncanT> Yeah, I'm thinking of proposing a state machine with an explicit 'snapshotting' state to cover that, but that is a state machine discussion
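Purely to give the state-machine idea a concrete shape (the states and transitions below are made up for illustration, not a proposed design):

    # Hypothetical volume status transitions with an explicit 'snapshotting'
    # state so an in-flight snapshot blocks attach/delete races.
    ALLOWED = {
        'available': {'attaching', 'deleting', 'snapshotting'},
        'snapshotting': {'available', 'error'},
        'attaching': {'in-use', 'error'},
        'in-use': {'detaching'},
        'detaching': {'available', 'error'},
        'deleting': {'deleted', 'error'},
    }


    def transition(current, new):
        if new not in ALLOWED.get(current, set()):
            raise ValueError('%s -> %s not allowed' % (current, new))
        return new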
16:28:50 <bswartz> DuncanT: +1
16:28:59 <jgriffith> DuncanT: +1 for states in Cinder!!
16:29:17 <jgriffith> Back to JM1 are we good here or is there more we need to work out?
16:29:17 <rushiagr> DuncanT: +1
16:29:20 <guitarzan> so we've given JM1 a lot of work or stuff to think about
16:29:25 <JM1> jgriffith: I think we're good
16:29:37 <jgriffith> Excellent... anybody else?
16:29:44 <JM1> I will see how to do a simple implementation with simple file copies
16:29:49 <winston-d> DuncanT: +1 for state machine
16:29:51 <JM1> and resubmit patches
16:30:30 <JM1> and thank you all for your input
16:30:54 <jgriffith> Ok... so what else have we got
16:31:00 <JM1> being primarily a dev I'm not as familiar with real use cases as I'd like to be
16:31:08 <jgriffith> Since we started the multi-backend topic shall we go there?
16:31:13 <jgriffith> I'm going to time limit it though :)
16:31:27 * jgriffith has learned that topic can take up an entire meeting easily
16:31:56 <hemna__> what is left to implement to support it?
16:32:30 <avishay> BRB
16:32:39 <jgriffith> #topic multi-backend support
16:32:49 <jgriffith> So there are options here
16:33:12 <jgriffith> hub_cap: is possibly looking at picking up the work rnirmal was doing
16:33:44 <jgriffith> So we get back to the question of leaving it to a filter sched option or moving to a more efficient model :)
16:34:10 <hemna__> which option has the best chance of making it in G3 ?
16:34:22 <jgriffith> hemna__: They've all got potential IMO
16:34:28 <jgriffith> So let me put it this way
16:34:38 <jgriffith> The filter schedule is a feature that's in, done
16:34:52 <jgriffith> So what we're talking about is an additional option
16:35:03 <jgriffith> The ability to have multiple back-ends on a single Cinder node
16:35:18 <jgriffith> There are two ways to go about that right now (IMO)
16:35:33 <bswartz> jgriffith: do you mean mutiple processes on one host? or 1 process?
16:35:37 <jgriffith> 1. The patch that nirmal proposed that provides some intelligence in picking back-ends
16:35:53 <winston-d> bswartz: that's two different approaches
16:35:55 <jgriffith> 2. Running multiple c-vol services on a single Cinder node
16:36:02 <bswartz> oh
16:36:33 <bswartz> (2) seems like it would create some new problems
16:36:33 <DuncanT> I favour option 2 from a keeping-the-code-simple POV
16:36:44 <winston-d> DuncanT: +100
16:36:48 <bswartz> lol
16:37:02 <winston-d> bswartz: which are?
16:37:11 <bswartz> well what would go in the cinder.conf file?
16:37:17 <bswartz> the options for every backend?
16:37:25 <bswartz> would different backends get different conf files?
16:37:34 <guitarzan> I think you'd have the same problem with 1 or n managers
16:37:35 <jgriffith> bswartz: not sure why that's unique between the options?
16:37:53 <guitarzan> with n managers you could build a conf for each
16:37:58 <jgriffith> BTW: https://review.openstack.org/#/c/11192/
16:38:03 <DuncanT> Different conf files or named sections in a single file... neither is overly complicated
16:38:05 <bswartz> well option 1 forces us to solve that problem explicitly
16:38:07 <jgriffith> For a point of reference on what Option 1 looks like
16:38:19 <jgriffith> This also illustrates the conf files, not so bad
16:38:25 <bswartz> ty
16:38:33 <xyang_> opt 2 multiple c-vol services on one single cinder node seems good. you can use filter scheduler to choose node
16:38:50 <xyang_> I mean choose c-vol service
16:38:56 <hub_cap> hey guys sorry im in IRL meeting. just saw my name
16:39:08 <hub_cap> im in favor of yall telling me what to do :D
16:39:09 <avishay> back
16:40:13 <bswartz> okay I'm in favor of (2) as well
16:40:29 <bswartz> bring on the extra PIDs
16:40:32 <JM1> could you attach volumes from 2 different cinder services in the same VM?
16:40:40 <DuncanT> JM1: Yes
16:40:42 <guitarzan> sure
16:40:52 <hub_cap> so multiple c-vol services, would that be like nova-api, spawning multiple listeners in the same pid?
16:40:52 <winston-d> JM1: of course
16:41:16 <hub_cap> on different ports
16:41:24 <DuncanT> No need
16:41:35 <guitarzan> hub_cap: no, just managers
16:41:39 <DuncanT> They don't listen on a port, only on rabbit
16:41:42 <winston-d> hub_cap: that's a big difference, nova-api workers are listening on _SAME_ port
16:42:02 <hub_cap> winston-d: im talking about osapi, metadata and ec2 api in the same launcher
16:42:10 <avishay> +1 for option #2
16:42:12 <winston-d> hub_cap: while c-vol services listen on AMQP
16:42:27 <hub_cap> sure... amqp vs port... not much diff... a unique _thing_
16:42:37 <hub_cap> but ure right it was my bad for saying port :D
16:42:43 <jgriffith> hub_cap: I think you're on the same page
16:42:52 <hub_cap> i dont think operators will be happy w/ us having 10 pids for 10 backends :)
16:43:03 <winston-d> hub_cap: ah well. they should have their own pid if my memory serves me right.
16:43:07 <jgriffith> I think in previous conversations we likened it to swift in a box or something along those lines
16:43:13 <hub_cap> i have a vm running let me c
16:43:19 <winston-d> hub_cap: why not?
16:43:24 <hub_cap> ure right winston-d
16:43:31 <DuncanT> 10 pids for ten backends is better than one fault taking out all your backends!
16:43:34 <bswartz> hub_cap: I think 10 would be uncommon -- I see more like 2 or 3 in reality
16:43:44 <hub_cap> as long as they are started/stopped by a single binscript, like nova does im down for it
16:43:45 <jgriffith> bswartz: hehe
16:43:56 <hub_cap> bswartz: ya i know :D
16:44:02 <jgriffith> hub_cap: that's what I'm thinking
16:44:11 <JM1> or maybe 10 instances of 2-3 drivers?
16:44:21 <jgriffith> We do introduce a point of failure issue here which sucks though
16:44:24 <JM1> I expect people to do such crazy things for eg. performance
16:44:35 <hub_cap> so yall ok w/ me modeling it like the present nova-api, but w/ the substitution of amqp to pids, they create different pids but use a single binscript to start/stop
16:44:52 <hub_cap> amqp to ports... sry
16:44:55 <hub_cap> trying to listen IRL too
16:45:01 <DuncanT> JM1: 10 instances of 1 driver, on a single node, gives almost no performance improvement
16:45:27 <JM1> DuncanT: I meant several instances of a driver but each with different tuning
16:45:46 <jgriffith> JM1: we're not quite that sophisticated (yet) :)
16:45:46 <bswartz> 10 instances of 1 driver on a single node also increases your failure domain if that one node dies
16:45:55 <jgriffith> Ok...
16:46:25 <jgriffith> bswartz: so that's my concern, however that's the price of admission for this whole concept no matter how you implement it
16:46:41 <jgriffith> If HA is a concern, do an HA cinder install or use multiple back-ends
16:46:50 <jgriffith> That being said... back to the topic at hand
16:47:03 <JM1> or use a redundant storage cluster (hint hint)
16:47:18 <jgriffith> JM1: The problem is if the cinder node goes, you're toast
16:47:21 <bswartz> yeah I'm just saying that in practice I don't expect a huge number of PIDs on the same host -- people will spread them out to a reasonable degree
16:47:30 <jgriffith> There's no way to get to that redundant storage cluster :)
16:47:50 <JM1> jgriffith: this will only affect creation of new instances, no?
16:48:03 <JM1> (but I'm getting off topic)
16:48:03 <bswartz> JM1: creation of new volumes, not instances, but yes
16:48:15 <jgriffith> JM1: volumes, create, attach, delete, any API call
16:48:50 <jgriffith> back on track here...  Sounds like consensus for the multiple processes on a single Cinder node?
16:48:51 <DuncanT> You can put multiple instances of API behind a load balancer and most things work, there are some gaps still
16:48:52 <hub_cap> ok so it sounds like we have consensus?
16:49:01 <winston-d> jgriffith: +1
16:49:02 <bswartz> the only thing unaffected by cinder going down is your data access
16:49:04 <DuncanT> jgriffith: +2
16:49:04 <hub_cap> single config file iirc? right?
16:49:05 <jgriffith> hub_cap: hehe
16:49:20 <jgriffith> hub_cap: yep, that's what I'm thinking
16:49:23 * hub_cap hopes i didnt open a new can of worms
16:49:26 <hub_cap> cool
16:49:31 <avishay> jgriffith: +3
16:49:36 <xyang_> +4
16:49:39 <hub_cap> w/ specific [GROUP NAMES]
16:49:40 <jgriffith> I just want to make sure up front... is anybody going to come back and say "why are we doing this?"
16:49:44 <jgriffith> speak now
16:50:10 <bswartz> jgriffith: so each process has its own conf file?
16:50:13 <jgriffith> hub_cap: GROUP NAMES seems like an interesting approach
16:50:37 <thingee> jgriffith: I think the HA issue is going to come up.
16:50:56 <hub_cap> itll be nice to do CONFIG.backend_one.ip
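For reference, one way the 'named sections in a single file' idea could be laid out; the option names, section names, and driver paths below are illustrative only, not an agreed syntax:

    # Hypothetical cinder.conf sketch: one c-vol process per named section,
    # all started/stopped by a single bin script.
    [DEFAULT]
    enabled_backends = backend_one,backend_two

    [backend_one]
    volume_driver = cinder.volume.drivers.scality.ScalityDriver
    volume_backend_name = sofs
    # per-backend options (e.g. an ip, as in CONFIG.backend_one.ip) go here

    [backend_two]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = lvm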
16:51:19 <avishay> HA needs to be solved regardless IMO
16:51:35 <avishay> Orthogonal issues, no?
16:51:52 <DuncanT> HA I think is a summit topic, though we might be able to solve some of it before hand
16:52:03 <jgriffith> avishay: I would agree, but thingee I think is pointing out something folks will definitely bring up
16:52:35 <avishay> Agreed
16:52:37 <jgriffith> But my answer there is then don't do it.. use multi-nodes, types and the filter scheduler
16:53:00 <jgriffith> DuncanT: avishay and yes, HA is something we really need but it's a separate issue IMO
16:53:07 <avishay> types + filter sched will work here too, right?
16:53:31 <jgriffith> avishay: solves the more general problem and use case yes
16:53:56 <jgriffith> avishay: but some don't want an independent cinder node for every back-end in their DC
16:54:08 <winston-d> jgriffith: well, i thought the only benefit of multiple c-vol on single node is to save physical machines since a lot of c-vols are just proxies, very lightweight workloads
16:54:49 <jgriffith> winston-d: isn't that what I said :)
16:54:56 <avishay> jgriffith: i agree, but now the scheduler won't choose the backend?
16:54:57 <winston-d> from scheduler point of view, it doesn't even have to know those c-vols are on the same physical server or not.
16:54:58 <jgriffith> winston-d: or are we typing in unison :)
16:55:13 <jgriffith> winston-d: Ahh.. good point
16:55:15 <winston-d> jgriffith: yup
16:55:32 <jgriffith> Ok... off topic again
16:55:34 <avishay> nevermind my comment
16:55:52 <jgriffith> So the question is, everybody on board with hub_cap going down this path?
16:56:02 <DuncanT> Seems good to me
16:56:06 <avishay> me 2
16:56:10 <winston-d> avishay: it still does
16:56:21 <winston-d> me 2
16:56:24 <hub_cap> s/path/rabbit hole/
16:56:25 * jgriffith +1 has wanted it since Essex :)
16:56:35 <thingee> +1
16:56:51 <jgriffith> hub_cap: synonyms :)
16:56:52 <jdurgin1> it's fine with me
16:56:57 <guitarzan> nirmal will be so sad :)
16:56:58 <winston-d> hub_cap: i'd love to help you test/review the patch
16:57:00 * guitarzan runs
16:57:05 <xyang_> sounds good
16:57:05 <jgriffith> guitarzan: haha!!
16:57:09 <jgriffith> guitarzan: speak up
16:57:13 <winston-d> guitarzan: :)
16:57:18 <guitarzan> no, I think multiple managers is good
16:57:32 <jgriffith> alright, awesome
16:57:37 <jgriffith> Let's move forward then
16:57:42 <jgriffith> hub_cap: You da man
16:57:46 <hub_cap> winston-d: thank u sir
16:57:50 <hub_cap> and jgriffith <3
16:57:50 <jgriffith> hub_cap: ping any of us for help though of course
16:57:54 <hub_cap> roger
16:58:07 <jgriffith> Ok... almost out of time
16:58:14 <jgriffith> #topic open discussion
16:58:53 <thingee> I'm back after being sick since last week wednesday...reviews and what not coming...sorry guys
16:59:23 <jgriffith> thingee: glad you're up and about, hope you're feeling better
16:59:29 <avishay> submitted the generic copy volume<->image patch
16:59:42 <rushiagr> we agreed on CONFIG.backend_one.ip type conf files, am i correct?
16:59:48 <avishay> and DuncanT submitted the backup to swift patch which i hope to review soon
16:59:51 <kmartin> jgriffith: any updates on the get_volume_stats() regarding the drivers providing "None" for values that they can't obtain
17:00:01 <DuncanT> Snapshot/volume deletion is the other thing I wanted to bring up
17:00:20 <jgriffith> kmartin: Oh... I haven't talked to anyone about that yet I don't think
17:00:29 <winston-d> kmartin: could you elaborate ?
17:00:46 <DuncanT> Specifically the fact that currently, if you snapshot a volume then delete the volume, the snapshot is unusable if you need the provider loc/auth of the original volume
17:01:14 <jgriffith> DuncanT: sorry... see I told you I'd forget :(
17:01:25 <jgriffith> We're out of time :(
17:01:32 <jgriffith> BUT
17:01:35 <jgriffith> EVERYBODY
17:01:43 <kmartin> winston-d: on 3par we are not able to provide a few of the values that are required
17:01:49 <jgriffith> Please take a look at the backup patch DuncanT submitted
17:01:59 <DuncanT> :-)
17:02:01 <jgriffith> and jump over to #openstack-cinder to finish this conversation
17:02:09 <jgriffith> We need to give the Xen folks the channel now
17:02:16 <jgriffith> #endmeeting