15:59:59 <jgriffith> #startmeeting cinder
16:00:00 <openstack> Meeting started Wed Dec  5 15:59:59 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:04 <openstack> The meeting name has been set to 'cinder'
16:00:31 <jgriffith> kmartin: let's start with you today :)
16:00:37 <jgriffith> #topic FC update
16:00:44 <kmartin> sure, we have good news
16:00:53 * jgriffith likes good news
16:00:57 <kmartin> We have a proof of concept working for the Fibre Channel support, working on a few issues with detach.
16:01:13 <jgriffith> kmartin: awesome!
16:01:16 <kmartin> I updated the FC spec attached to the cinder blueprint and entered a new blueprint in nova for the required changes
16:01:46 <jgriffith> kmartin: very cool
16:02:01 <jgriffith> kmartin: you think you guys are going to get the patches in the next week or so?
16:02:08 <winston-d> jgriffith: hey~
16:02:18 <jgriffith> winston-d: morning/evening :)
16:02:23 <avishay> hello all
16:02:30 <winston-d> jgriffith: morning
16:02:33 <winston-d> avishay: hi~
16:02:39 <eharney> hi everyone
16:02:53 <winston-d> hi eharney
16:02:54 * jgriffith thinks he started a touch early today :)
16:02:58 <kmartin> still need to get legal approval for sharing any code to a wide group, but we could set something to show you
16:03:10 <jgriffith> kmartin: so what do you think as far as when some patches will hit?
16:03:12 <hemna_> we could do a demo for you at some point
16:03:27 <jgriffith> hemna_: That would be cool
16:03:29 <hemna_> we are still waiting for legal
16:03:37 <jgriffith> bahhh!!!
16:03:39 <hemna_> we put the 3par driver through legal a week ago.
16:03:42 <hemna_> still waiting for that
16:03:42 * jgriffith dislikes lawyers
16:04:05 <kmartin> jgriffith: likewise
16:04:12 <avishay> I'm glad it's not only IBM that's like that :P
16:04:16 <jgriffith> Ya know, considering the investment and backing HP has in OpenStack this should be a no-brainer for them
16:04:17 <hemna_> There are still some underlying scsi subsystem issues I'm working out with FC, but it should be solvable
16:04:29 <hemna_> yah
16:04:37 <jgriffith> hemna_: Ok... so one recommendation
16:04:51 <jgriffith> hemna_: kmartin Gigantic patches are not fun for anybody
16:04:57 <hemna_> I don't think they are hung up in legal....just takes time for them to dot the I's, cross the T's n such
16:05:02 <kmartin> jgriffith: It is but they just want to make sure, it will happen it's just a slooooow process
16:05:18 <jgriffith> hemna_: kmartin keep in mind if there's a way to break it into digestible chunks it'll help us move on them when you submit
16:05:35 <hemna_> jgriffith, I made clones of the nova, devstack, cinder repos internally and we are tracking against that and have our code checked into those clones
16:05:44 <thingee> jgriffith, hemna_: +1
16:06:03 <hemna_> if we didn't have legal, then I'd make those public
16:06:14 <jgriffith> hemna_: That's cool, but what I'm getting at is don't just dump a single multi K line patch
16:06:21 <hemna_> yah
16:06:23 <hemna_> agreed
16:06:29 <jgriffith> hemna_: Try to break it in to logical chunks as much as possible
16:06:29 <hemna_> the cinder patch is small right now
16:06:34 <hemna_> almost all the work is in nova
16:06:39 <hemna_> and it's fairly small as well
16:06:40 <jgriffith> hemna_: Ok... cool, just wanted to point that out
16:06:54 <kmartin> jgriffith: check out the spec to see the changes not very big at all
16:07:04 <jgriffith> awesome... so we'll just wait for legal and hope for something in the next week or so :)
16:07:30 <hemna_> we could give you a demo later this week on the POC
16:07:36 <hemna_> and I could walk you through the code if you like
16:07:48 <jgriffith> hemna_: I'd be up for that, but probably not this week
16:07:54 <hemna_> I'd rather get a review up front than wait until we submit
16:07:59 <jgriffith> hemna_: maybe we could sync up later and try for next week?
16:08:02 <hemna_> ok that's fine then as well
16:08:03 <hemna_> sure
16:08:08 <jgriffith> there may be other folks here interested as well
16:08:33 <hemna_> do we have a mechanism for desktop sharing n such ?
16:08:44 <jgriffith> hemna_: personally I use Google+
16:08:47 <jgriffith> :)
16:08:48 <avishay> kmartin: Are you in touch with Dietmar from IBM on the FC stuff?
16:09:06 <hemna_> Google+ does desktop sharing?  (linux?)
16:09:12 <kmartin> jgriffith: we're meeting with the Brocade group and we'll update them as well; we could probably run it by that group too
16:09:28 <winston-d> just a thought, isn't showing code to external people before legal approval still a possible legal issue?
16:09:39 <kmartin> avishay: yes, he is part of our weekly meeting
16:09:47 <avishay> kmartin: great
16:10:13 <winston-d> last time when Samsung tried to do that with RedHat guys, RH people said, no, please don't do that before you've done the legal process.
16:10:32 <hemna_> winston-d, only if the osrb denies our project, which they shouldn't
16:10:42 <kmartin> winston-d: we would not post the code just a demo of what we have working
16:11:24 <winston-d> demo should be ok but you mentioned walk through code. so...
16:11:24 <jgriffith> Ok, we can sort through details on a demo and who's interested offline
16:11:30 <hemna_> ok
16:11:33 <kmartin> sure
16:11:39 <jgriffith> I'd be interested and I'm sure others would
16:11:48 <avishay> I'm interested as well
16:11:52 <jgriffith> Not required, but if you guys want to take the time and effort that would be cool
16:11:53 <winston-d> i'd be interested to see demo as well!
16:11:59 <hemna_> do we have a page for the approximate ship date for Grizzly?
16:12:07 <jgriffith> hemna_: Yeah
16:12:13 * jgriffith opening a browser
16:12:31 <bswartz> https://launchpad.net/openstack/+milestones
16:12:33 <winston-d> next April, 17th maybe?
16:12:39 <hemna_> thnx
16:12:43 <jgriffith> hemna_: the page bswartz reference
16:12:52 <avishay> jgriffith: did you see the agenda for today? :)
16:12:53 <jgriffith> hemna_: and also you should all keep an eye on https://launchpad.net/cinder
16:13:15 <jgriffith> avishay: :)
16:13:34 <hemna_> that page says april 1 ?
16:13:45 <jgriffith> hemna_: say huh?
16:13:58 <jgriffith> hemna_: Ohh... Grizzly
16:14:10 <jgriffith> hemna_: thought you were talking about avishay and the meeting wiki
16:14:15 <hemna_> oh :P
16:14:15 <jgriffith> Ok...
16:14:23 <jgriffith> #topic G2
16:14:33 <jgriffith> Speaking of Grizzly and release dates
16:14:48 <jgriffith> G2 is scheduled for Jan, HOWEVER
16:15:01 <jgriffith> as I mentioned before we lose some time for the holidays
16:15:16 <jgriffith> and we lose some time due to code freeze the week of the milestone cut
16:15:19 <hemna_> HP is out for several weeks
16:15:42 <jgriffith> I just want to stress again...  We need to have the G2 work that's slated done by the end of this month
16:15:59 <jgriffith> https://launchpad.net/cinder/+milestone/grizzly-2
16:16:14 <jgriffith> I'm particularly worried about a couple
16:16:19 <jgriffith> Volume Backups...
16:16:35 <jgriffith> I've not heard anything from Francis?
16:17:10 <jgriffith> does anybody know his irc nick?
16:17:19 <jgriffith> (Francis Moorehead)?
16:17:25 <jgriffith> HP
16:17:34 <jgriffith> anyone... bueller, bueller....
16:17:38 <hemna_> no idea
16:17:39 <ollie1> I've just pinged him
16:17:47 <jgriffith> ollie1: :) thanks
16:17:56 <hemna_> I can look up his email address at work, if he's at HP
16:17:58 <jgriffith> so he's part of the cloud services group I'm assuming?
16:18:12 <jgriffith> hemna_: his email is on launchpad
16:18:15 <hemna_> ok
16:18:34 <jgriffith> anyway... that's one I'm concerned about and would like some updates
16:18:43 <jgriffith> The other is the Island work
16:18:48 <hemna_> If you can't get ahold of him, I can ping him on the internal instant messenger network
16:19:04 <ollie1> Francis is in the HP cloud services group,
16:19:15 <frankm> Hi
16:19:22 <jgriffith> frankm: :)
16:19:36 <jgriffith> have you had a chance to look at your blueprint for volume backups at all?
16:20:20 <frankm> we're starting to look at it now
16:20:23 <jgriffith> and ollie1 I'm also wondering about your BP as well :}
16:20:29 <frankm> i.e. this week
16:20:53 <jgriffith> frankm: so do I need to remove the target for G2?
16:20:54 <ollie1> The glance metadata blueprint is done, code is merged
16:21:08 <jgriffith> ollie1: sorry... wrong line :(
16:21:36 <jgriffith> frankm: do you think this is still going to be something you can get done by Christmas?
16:22:48 <jgriffith> chirp, chirp, chirp.... seems to be a cricket in my office
16:23:05 <avishay> :)
16:23:19 <jgriffith> alright, I'll harass ollie1 and others offline :)
16:23:26 <avishay> jgriffith: I have a couple questions that I wrote down in the agenda concerning volume backups - may I, while we're on the topic?
16:23:27 <jgriffith> avishay: here we goo....
16:23:39 <jgriffith> :)  I'm gettin to it
16:23:42 <avishay> :)
16:23:45 <jgriffith> #topic volume backups
16:23:52 <frankm> maybe not by Christmas, but early in new year
16:24:02 <jgriffith> frankm: hmmmm
16:24:09 <jgriffith> frankm: ok, we'll sync up later
16:24:22 <jgriffith> avishay: maybe you have a better solution anyway :)
16:24:40 <jgriffith> avishay: care to explain a bit on "volume backups pluggable"
16:24:44 <avishay> i just have questions so far :)
16:24:58 * jgriffith doesn't need more questions :(
16:25:02 <jgriffith> just kidding
16:25:21 <avishay> Sure.  Copying to Swift is a great use case, but it seems useful to allow for more back-ends other than Swift
16:25:21 <jgriffith> avishay: so if these are questions, here's some answers...
16:25:49 <jgriffith> avishay: well yes but it's not high on my list for a number of reasons
16:25:53 <avishay> For example, compressing and storing on some file system, backup software, tape, dedup ...
16:26:13 <jgriffith> avishay: primarily if an end-user is backing up a volume they don't want to back it up to another higher perf and higher priced storage
16:26:26 <jgriffith> the ideal is to swift which is cheaper/deeper storage
16:27:01 <avishay> or dedup + tape, or some backup software that will manage all the backups plus store them somewhere cheap
16:27:16 <hemna_> heading off to work...l8rs
16:27:21 <winston-d> jgriffith: i guess tape falls into that category
16:27:34 <jgriffith> winston-d: avishay I'm NOT doing a tape driver!
16:27:43 * jgriffith left the tape world and isn't going back
16:27:44 <smulcahy> jgriffith: also, higher durability due to multiple copies
16:27:54 <winston-d> jgriffith: but IBM guys may. :)
16:28:09 <avishay> I'm just saying, there are lots of backup solutions out there, so why limit the solution?
16:28:09 <jgriffith> smulcahy: winston-d hemnafk so I don't disagree with the *idea*
16:28:28 <jgriffith> avishay: because we're a small team and can only do so much
16:28:44 <jgriffith> I think we need to prioritize and move forward
16:28:48 <avishay> Would making it pluggable and adding back-ends over time be a lot more work?
16:28:55 <jgriffith> I don't think there's any argument that we should NOT have backups to swift
16:29:14 <winston-d> avishay: i think if we can have a pluggable framework, it's ok to have the first working version only support (have) swift plugin.
16:29:27 <jdurgin1> winston-d: agreed
16:29:28 <jgriffith> winston-d: +1
16:29:34 <avishay> I totally agree that the first version can be swift-only
16:29:45 <avishay> But it would be great if it was pluggable for later
16:29:52 <jgriffith> avishay: I agree with that
16:29:53 <smulcahy> how will pluggable work with regard to authentication?
16:30:08 <smulcahy> will all pluggable backends be expected to auth with keystone?
16:30:18 <jgriffith> avishay: I'm just saying I don't want to jeopardize useful/needed cases for theory and what if's
16:30:25 <winston-d> smulcahy: authentication with keystone or backup back-ends?
16:30:58 <jgriffith> Maybe I'm not clear on how "pluggable" you guys are talking
16:31:12 <jgriffith> if you're talking independent services with their own auth model etc
16:31:18 <jgriffith> I say hell nooo
16:31:27 <avishay> No, I meant something along the lines of volume drivers
16:31:35 <jgriffith> if you're talking pluggable modules that's fine
16:31:41 <jgriffith> avishay: Ok... phewww
16:31:42 <winston-d> jgriffith: agree.
16:31:51 <avishay> jgriffith: I'm not crazy... :)
16:32:10 <jgriffith> avishay: yeah, I'm fine with that but it's a harder problem than just saying *make it pluggable*
16:32:14 <smulcahy> jgriffith: agreed, it will dramatically increase the complexity
16:32:33 <smulcahy> I'm not clear on how they will be pluggable if they don't share an auth mechanism
16:32:53 <jgriffith> So I'd envision something like a backup manager/layer that can sit between the volume drivers and act as a conduit
16:32:57 <winston-d> smulcahy: they can just share auth API?
16:32:59 <jgriffith> or go to swift
16:33:41 <jgriffith> Ok, so I think the answer here is *yes* we should try to keep the design somewhat modular to allow expansion in the future
16:33:45 <jdurgin1> smulcahy: perhaps the same way various volume drivers do their own auth?
16:34:07 <jgriffith> jdurgin1: +1, but we'll need to look at changes to conf files
16:34:17 <jgriffith> So I don't want to get carried away on this right now
16:34:20 <winston-d> jdurgin: +1
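[Editor's note: the pluggable design discussed above — backup back-ends modeled on volume drivers, with only a Swift plugin in the first version — might look roughly like this sketch. The class and method names are hypothetical, not actual Cinder code.]

```python
import abc


class BackupDriver(abc.ABC):
    """Hypothetical base class for pluggable backup back-ends,
    modeled loosely on how Cinder volume drivers are structured."""

    @abc.abstractmethod
    def backup(self, volume_id, data):
        """Store a backup of the volume; return a backup id."""

    @abc.abstractmethod
    def restore(self, backup_id):
        """Return the backed-up data for backup_id."""


class InMemoryBackupDriver(BackupDriver):
    """Stand-in back-end for illustration only.  A real first version
    would target Swift, handling auth the way each volume driver
    handles its own, per the discussion above."""

    def __init__(self):
        self._store = {}

    def backup(self, volume_id, data):
        backup_id = "backup-%s-%d" % (volume_id, len(self._store))
        self._store[backup_id] = bytes(data)
        return backup_id

    def restore(self, backup_id):
        return self._store[backup_id]
```

A Swift plugin would then be one subclass, leaving room for the tape/dedup back-ends mentioned earlier to be added later without changing the framework.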
16:34:34 <jgriffith> The bottom line is I'm worried we're not even going to get backups to swift in Grizzly at the rate we're going
16:34:46 <avishay> I don't have a clear design here - I just know that almost every customer that has data today also has a backup solution, and they may like to use it for OpenStack too
16:34:46 <jgriffith> let alone add all this cool back-end to back-end stuff to it
16:35:03 <avishay> If you want to leave it out for now and come back to it later, that's fine
16:35:05 <jgriffith> avishay: understood and agreed
16:35:24 <jgriffith> avishay: I think it's something to keep in mind with the work being done now
16:35:27 <smulcahy> jgriffith: we have working code at the moment, just need to work on porting it to grizzly so we should have something
16:35:37 <jgriffith> I think you're right for bringing it up
16:35:48 <jgriffith> smulcahy: for which case?
16:35:57 <jgriffith> smulcahy: for the backup to swift?
16:36:14 <smulcahy> yes, for the backup to swift
16:36:41 <jgriffith> smulcahy: are you working with frankm on this?
16:36:49 <jgriffith> smulcahy: same work?
16:36:59 <frankm> yes, same work
16:37:05 <jgriffith> Ok.. thanks :)
16:37:22 <jgriffith> I'm still getting all the nicks together :)
16:37:37 <smulcahy> me too - wasn't sure who frankm was there for a second ;-)
16:37:46 <jgriffith> Ok... cool, so frankm smulcahy see what you can do about pluggable design thoughts on this
16:38:00 <jgriffith> but don't let it jeopardize getting the code in
16:38:03 <jgriffith> IMO
16:38:08 <avishay> Agreed
16:38:13 <avishay> Thank you
16:38:16 <jgriffith> everybody can hate on me for that if they want :)
16:38:21 <smulcahy> that's my initial thought - we can rework the backend part in a future iteration - but will give it some thought
16:38:31 <jgriffith> smulcahy: sounds good
16:38:34 <avishay> jgriffith: whoever wants to hate on you will find reasons :P
16:38:45 <jgriffith> #topic backup snapshots rather than volumes
16:38:50 <jgriffith> avishay: indeed :)
16:39:05 <jgriffith> So here's the problem with snapshots....
16:39:06 <smulcahy> nova are talking about compute cells now, which are kinda like zones/az's as far as I can tell - does cinder have any similar concept?
16:39:08 <jgriffith> They SUCK
16:39:24 <jgriffith> smulcahy: we have AZ's
16:39:41 <avishay> jgriffith: won't volumes be changing while copying?
16:40:06 <jgriffith> avishay: so you can say that to do backups it has to be offline/detached
16:40:11 <jgriffith> avishay: it's not ideal
16:40:12 <bswartz> jgriffith: care to elaborate?
16:40:21 <dtynan> quick question re: snapshots - are there any quota limits on them?
16:40:21 <jgriffith> bswartz: on snapshots?
16:40:30 <bswartz> jgriffith: on suckage
16:40:39 <jgriffith> dtynan: they count against your volume quotas IIRC
16:41:01 <jgriffith> dtynan: I'd have to go back and refresh my memory though
16:41:10 <jgriffith> bswartz: so... yeah, suckage
16:41:33 <jgriffith> The reality is that most of us here are associated with vendors for back-end storage
16:41:44 <jgriffith> We all have killer products with specific things we excel at
16:41:46 <jgriffith> BUT!!!
16:41:58 <jgriffith> the base/reference case for OpenStack is still LVM
16:42:14 <jgriffith> so that needs to be a key focus in things that we do
16:42:34 <jgriffith> once you create an LVM snapshot you've KILLED your volume performance
16:42:43 <jgriffith> it's about 1/8 on average
16:42:54 <jgriffith> I've got a patch coming to address this
16:43:08 <avishay> jgriffith: if you delete the snapshot afterward does performance return?
16:43:14 <jgriffith> avishay: yes
16:43:31 <jgriffith> avishay: it's a penalty you pay based on how LVM snaps work
16:43:47 <avishay> so maybe whoever uses LVM can take a snapshot, back it up, and then delete it?
16:44:06 <jgriffith> avishay: if they're smart they will :)
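[Editor's note: the snapshot-then-delete sequence avishay describes — snapshot, copy out, remove the snapshot so the LVM copy-on-write penalty goes away — could look roughly like this. The sketch only assembles the LVM commands rather than running them, and the volume group and snapshot names are made up for illustration.]

```python
def snapshot_backup_commands(vg, lv, snap_name="backup-snap", snap_size="1G"):
    """Build the shell commands for a snapshot-based backup of an LVM
    volume: create a snapshot, copy it out, then delete the snapshot
    so the copy-on-write performance penalty goes away."""
    snap_path = "/dev/%s/%s" % (vg, snap_name)
    return [
        # point-in-time snapshot of the live volume
        ["lvcreate", "--snapshot", "--size", snap_size,
         "--name", snap_name, "/dev/%s/%s" % (vg, lv)],
        # copy the frozen snapshot out (a real backup would stream
        # this into the backup back-end, e.g. Swift, not a local file)
        ["dd", "if=%s" % snap_path,
         "of=/tmp/%s.img" % snap_name, "bs=1M"],
        # drop the snapshot; volume performance returns to normal
        ["lvremove", "-f", snap_path],
    ]
```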
16:44:34 <jgriffith> avishay: But what I'm saying here is that I don't think we should modify the base code behavior and usage model for something that doesn't work well with LVM
16:44:57 <jgriffith> extensions, extra features etc is fine
16:45:07 <bswartz> jgriffith: so you're not complaining about the snapshot concept, you're complaining about the snapshot implementation in the reference driver
16:45:24 <jgriffith> bswartz: Yeah, I think that's fair
16:45:38 <jgriffith> bswartz: like I said I have a solution but it's not supported in precise yet
16:45:42 <jgriffith> at least not officially
16:45:43 <bswartz> are we generally happy with snapshot abstraction as it exists today?
16:45:46 <avishay> If it didn't work at all, that's one thing, but I think this backup idea is cool, and limiting it to offline volumes because LVM snapshot performance sucks might be holding us back, no?
16:46:00 <jgriffith> bswartz: haha... that's a whole nother can o'worms
16:46:18 <jgriffith> avishay: fair
16:46:34 <jgriffith> avishay: but I wasn't finished.... :)
16:46:47 <jgriffith> The reality is, snapshots pretty much are "backups"
16:46:48 <bswartz> if changing the abstraction allows us to solve some problems I'd be interested in discussing that
16:46:52 <jgriffith> that's really the point IMO
16:47:16 <bswartz> jgriffith: my view of snapshots has always been "things you can clone from"
16:47:54 <smulcahy> I think the terminology is pretty important to set straight here - we should be clear going forward on what we mean by snapshots and backups and avoid using them interchangeably I think.
16:48:10 <avishay> snapshots are backups, but you can't put them on swift, can't attach them (yet?), can't restore (yet), ... frustrating :(
16:48:13 <jgriffith> smulcahy: and therein lies the challenge
16:48:26 <jgriffith> avishay: I feel your pain
16:48:37 <jgriffith> avishay: I plan to have the restore as I've mentioned
16:48:47 <jgriffith> avishay: backup to swift is ideal IMO
16:49:00 <dtynan> personally I think snapshots like bswartz said are things you can clone from and also things you can create backups from.
16:49:01 <jgriffith> avishay: but there are problems with backup
16:49:37 <jgriffith> avishay: dtynan bswartz the problem is depending on how the snapshot is implemented it's actually nothing useful once it's copied out
16:50:01 <dtynan> yeah, it's a point-in-time reference that you can use to make a backup or a clone...?
16:50:12 <jgriffith> if it's just delta blocks it doesn't do you much good on its own
16:50:33 <avishay> jgriffith: you can always make a full copy, even if on the controller it's CoW or similar
16:50:49 <jgriffith> avishay: yes
16:51:12 <jgriffith> Ok... so this sort of falls into the same problem/challenge I mentioned earlier
16:51:16 <smulcahy> but that's not what snapshots are at the minute, are they?
16:51:23 <jgriffith> we have a lot of great ideas/conversation
16:51:30 <jgriffith> but the reality is we need to implement the code :)
16:52:04 <jgriffith> I would still like to focus a bit
16:52:17 <jgriffith> I'd rather get the blue-prints that are on the table and go from there:
16:52:22 <jgriffith> So what I'm saying is:
16:52:45 <jgriffith> 1. get backups of volumes to swift (TBH I don't care if it's from snap, volume or both)
16:52:59 <jgriffith> 2. Get snapshot/restore and clone implemented
16:53:13 <smulcahy> I thought https://lists.launchpad.net/openstack/msg03298.html clarified the difference between both reasonably well
16:53:15 <jgriffith> Then worry about all these other edge cases like tape backups etc
16:53:33 <smulcahy> jgriffith: agreed, that sounds like a workable plan
16:53:59 <avishay> Sounds good to me
16:54:03 <jgriffith> smulcahy: thanks for the link, yes agreeed
16:54:19 <jgriffith> anybody disagree/object?
16:54:34 <jgriffith> So you all have probably noticed a couple of things
16:54:51 <jgriffith> 1. I prefer to get base implementations in and build on them (start simple and expand)
16:55:14 <jgriffith> 2. We don't have a TON of submissions in the code (we're light on developers)
16:55:59 <jgriffith> make sense?
16:56:11 <avishay> Agreed
16:56:13 <smulcahy> yes
16:56:24 <bswartz> jgriffith: I agree in this case, but in general it's dangerous to implement something without considering how you'll be locked into that implementation forever
16:56:41 <jgriffith> bswartz: Yeah, I'm not saying you do it blindly
16:56:41 <bswartz> it's worthwhile to have these discussions
16:56:51 <smulcahy> bswartz: I think the api definition is the most critical
16:56:54 <jgriffith> bswartz: I'm just saying you don't get stuck in analysis/paralysis
16:56:54 <avishay> Just to clarify - the issues I'm bringing up aren't for going into the code today - just things to keep in mind so we don't have to toss the code later
16:57:07 <winston-d> jgriffith: agree
16:57:08 <jgriffith> avishay: good point, and I totally agree with you
16:57:19 <smulcahy> can people give feedback on the api's referenced in https://blueprints.launchpad.net/cinder/+spec/volume-backups ?
16:57:23 <bswartz> smulcahy: agreed
16:57:24 <jgriffith> bswartz: it's definitely worthwhile.. but
16:57:51 <jgriffith> I also want to point out there are a number of bugs and blue-prints that need work and are not assigned, or not making progress
16:57:55 <jgriffith> that's no good :(
16:58:11 <avishay> jgriffith: I will see if I can help
16:58:16 <jgriffith> You can plan and discuss till your project withers and dies
16:58:42 <jgriffith> So that's not a knock or an insult to anybody... I'm just trying to make a point
16:58:52 <jgriffith> I'm happy with how Cinder has grown and the participation
16:59:02 <jgriffith> I'm also happy with the discussions we have in these weekly meetings
16:59:15 <jgriffith> I'm just saying we need to make sure we deliver as well
16:59:45 <jgriffith> Ok... surely you've all had enough of me for one day :)
16:59:48 <avishay> jgriffith: I don't think you need to convince anyone of that :)
17:00:08 <jgriffith> avishay: Ok.. cool
17:00:22 <jgriffith> So let's knock out these items avishay posted real quick
17:00:28 <jgriffith> #topic volume-types
17:00:51 <jgriffith> avishay: so you'd like to see some sort of batch create on types?
17:01:26 <avishay> let's take the example you posted for various options for the solidfire driver - do i need a volume type for every permutation?
17:01:57 <jgriffith> avishay: if I remember what you're referencing correctly yes
17:02:02 <avishay> i can easily script creating as many as i need, the question is if that's the way it's meant to be used, or if I'm missing something
17:02:03 <winston-d> avishay: i think that really depends on admin not back-end provider
17:02:10 <jgriffith> winston-d: +1
17:02:24 <avishay> winston-d: agreed
17:02:32 <jgriffith> avishay: Ahhh
17:02:35 <winston-d> avishay: you can always put those useful combinations into your back-end manual to educate admins on how to fully utilize your back-end
17:02:49 <jgriffith> avishay: the exact usage is really going to be dependent on the provider/admin
17:03:06 <jgriffith> but yes, if they want/have a bunch of types, they can script it exactly as you describe
17:03:24 <avishay> so if the back-end supports RAID-5, RAID-6 and also HDD/SSD, that's 4 volume types, right?
17:03:48 <jgriffith> avishay: that's the way I would do it
17:03:53 <avishay> OK cool
17:04:08 <jgriffith> avishay: so they're all different types, correct?
17:04:26 <avishay> I was just thinking if volume types could be used for affinity between volumes (or anti-affinity)...that would require lots of types
17:05:09 <jgriffith> avishay: hmmm, so that leads to your next item
17:05:14 <jgriffith> avishay: correct?
17:05:34 <avishay> not really, but I guess I did understand the volume type usage correctly, so we can move on :)
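[Editor's note: scripting the permutations avishay asks about (RAID level × drive type) is straightforward. This sketch just generates the type names and extra-spec pairs; the spec keys are illustrative, not standard Cinder keys, and in practice each pair would be fed to `cinder type-create` / `cinder type-key`.]

```python
import itertools


def volume_type_permutations(raid_levels, drive_types):
    """Generate one volume type per (RAID level, drive type) combo.
    The 'raid_level'/'drive_type' extra-spec keys are made up for
    illustration, not standard Cinder keys."""
    types = {}
    for raid, drive in itertools.product(raid_levels, drive_types):
        name = "%s-%s" % (raid.lower(), drive.lower())
        types[name] = {"raid_level": raid, "drive_type": drive}
    return types
```

With RAID-5/RAID-6 and HDD/SSD this yields the 4 types discussed above; an admin can add or drop combinations to taste, which is the point winston-d makes about it being the admin's call rather than the back-end provider's.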
17:05:43 <jgriffith> avishay: :)
17:05:48 <jgriffith> #topic filter driver
17:06:09 <jgriffith> So I think you're right on the money here, types is the first implementation of a filter
17:06:18 <jgriffith> there are definitely others we'll want/need
17:06:55 <jgriffith> Doh!  We're over already
17:07:06 <jgriffith> Ok, let's wrap this topic, then I have one more thing to bring up
17:07:17 <jgriffith> avishay: do you want to expand on this topic at all?
17:07:27 <winston-d> jgriffith: nevermind, we have two meeting channels now. :)
17:07:50 <avishay> jgriffith: No, it's just a thought on future directions
17:07:51 <jgriffith> winston-d: Oh that's right :)
17:08:15 <jgriffith> avishay: Yeah, that's kinda the point of the filter scheduler
17:08:46 <jgriffith> avishay: The way it's designed we'll be able to add "different" filters as time goes on
17:08:53 <avishay> OK cool
17:08:54 <jgriffith> just starting with type filters
17:09:15 <avishay> I was really talking more about the API between the scheduler and back-end
17:09:16 <jgriffith> winston-d: slap me upside the head if I'm telling lies :)
17:09:39 <jgriffith> avishay: so you mean calls to get that info?
17:09:52 <avishay> If there should be one function for getting capabilities, another for getting status info, another for getting per-volume info, etc.
17:09:58 <jgriffith> avishay: perf, capacity etc
17:10:00 <winston-d> jgriffith: well, i prefer capabilities filter, rather than type filter. :) but we can have type filter.
17:10:21 <jgriffith> winston-d: fair... you can call it whatever you like :)
17:11:04 <jgriffith> avishay: Yes, I think those are all things that are needed in the volume api's
17:11:29 <winston-d> avishay: back-end reports capabilities, status (of back-end, rather than each volumes) to scheduler.
17:11:48 <avishay> jgriffith: OK, just another future topic to keep in mind :)
17:11:58 <winston-d> scheduler is also able to request those info
17:12:13 <avishay> winston-d: I thought per-volume would be useful in the future, but not needed now
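[Editor's note: the scheme winston-d outlines — back-ends report capabilities and status to the scheduler, which filters on them — can be sketched like so. The stat keys mirror the kind of thing a driver's stats report might carry, but the exact names here are illustrative.]

```python
def filter_hosts(host_stats, required_capabilities, size_gb):
    """Pick hosts whose reported capabilities match what the volume
    type asks for and that have room for the requested size.
    host_stats maps host name -> the stats dict that back-end last
    reported to the scheduler."""
    matches = []
    for host, stats in host_stats.items():
        # status check: enough free capacity for this request
        if stats.get("free_capacity_gb", 0) < size_gb:
            continue
        # capabilities check: every required key must match
        caps = stats.get("capabilities", {})
        if all(caps.get(k) == v for k, v in required_capabilities.items()):
            matches.append(host)
    return matches
```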
17:12:26 <jgriffith> avishay: I agree with that
17:12:28 <avishay> Maybe migrate volumes based on workload, etc. - not in the near future :)
17:12:41 <jgriffith> avishay: +1 for migration!!!
17:12:53 <avishay> jgriffith: working on a design :)
17:12:56 <winston-d> avishay: per volume status should be taken care of by ceilometer, no?
17:12:58 <jgriffith> avishay: I've been thinking/hoping for that in H release
17:13:18 <avishay> I will also see if I can get some more time to allocate to existing code work
17:13:46 <jgriffith> cool... speaking of which
17:13:52 <jgriffith> #topic bp's and bugs
17:13:59 <jgriffith> one last item
17:14:33 <jgriffith> I really need help with folks to keep up on reviews
17:15:20 <jgriffith> all I'm asking is that maybe once a day go to:
17:15:22 <jgriffith> https://review.openstack.org/#/q/status:open+cinder,n,z
17:15:42 <rushiagr1> jgriffith: i would make sure i spend time on that from now on
17:15:42 <jgriffith> just pick one even :)
17:15:49 <jgriffith> rushiagr1: cool
17:16:04 <jgriffith> rushiagr1: speaking of which have you been watching the bug reports?
17:16:46 <rushiagr1> jgriffith: not much in the last week but yes..
17:17:40 <thingee> https://bugs.launchpad.net/cinder/+bugs?field.status=NEW&field.importance=UNDECIDED
17:18:26 <jgriffith> thingee: thanks... I got kicked off my vpn
17:18:50 <jgriffith> So that's another one for folks to check frequently
17:19:14 <jgriffith> also notice here: https://launchpad.net/cinder
17:19:25 <jgriffith> There's a recent activity for questions, bugs etc
17:19:49 <jgriffith> anybody that wants to help me out just drop in there once in a while and see what they can do
17:20:05 <jgriffith> alright... I'm off my soapbox for the week
17:20:14 <jgriffith> #topic open discussion
17:20:21 <bswartz> thingee: thanks for the link
17:20:32 <rushiagr1> jgriffith: as a starter, i many a times require a little help to start with a bugfix or a code review, but unfortunately for me, i find very few people available in work hours for my timezones
17:20:57 <jgriffith> rushiagr1: understood
17:21:08 <avishay> I need to go - bye everyone.  Thanks for all the time with my questions!
17:21:09 <bswartz> jgriffith: one item, it is okay if we exempt the Netapp drivers from being split into multiple .py files in the drivers directory?
17:21:16 <jgriffith> rushiagr1: so *most* of the time there are a few of us on #openstack-cinder
17:21:27 * rushiagr1 thinks its time to change my sleep schedule :)
17:21:28 <winston-d> rushiagr1: which timezone r u in?
17:21:33 <jgriffith> I haven't been around at night as much lately, but will be again
17:21:38 <jgriffith> also winston-d is there
17:21:43 * winston-d already changed a lot
17:21:43 <jgriffith> and thingee never sleeps!
17:21:49 <rushiagr1> winston-d: india +5:30
17:22:09 <jgriffith> bswartz: You mean revert the changes already made?
17:22:11 <thingee> rushiagr1: I'm on throughout the day PST and the only time I'm able to work on stuff is at night here so I'm usually on all day O_O
17:22:14 <bswartz> errr
17:22:24 <winston-d> winston-d: ah, i'm in china, that's GMT+8, should overlap a lot
17:22:42 <bswartz> I didn't think the netapp drivers had been split as of yet
17:22:55 <jgriffith> bswartz: nope, so you don't have to worry
17:23:12 <jgriffith> bswartz: I don't think anybody has any plans to do more with that at this time
17:23:15 <bswartz> okay, I'd like to maintain the status quo
17:23:19 <bswartz> that's cool, thank you
17:23:21 <rushiagr1> thingee: winston-d i usually find almost no activity during my office hours on the cinder channel, so assumed everyone there was inactive... shouldn't have assumed
17:23:31 <jgriffith> bswartz: if it comes up we'll try to remember and you can -1 the review :)
17:24:00 <thingee> rushiagr1: ah yeah just ping us. I'm lurking most of the time and just talking when I need input
17:24:06 <winston-d> rushiagr1: you can just ask questions, i'll try to answer if i'm in it.
17:24:24 <rushiagr1> jgriffith: haha
17:24:45 <rushiagr1> thingee: winston-d thanks, will surely bother you starting tomorrow :)
17:25:03 <winston-d> rushiagr1: sure, happy to help
17:25:11 <jgriffith> Ok... cool, anything else from folks?
17:25:33 <thingee> rushiagr1: I recommend at the very least, pick something up, drop a question in the channel and worse case you get an answer the next day to proceed. email is acceptable too
17:25:47 <jgriffith> thingee: rushiagr1 good point
17:26:01 <jgriffith> rushiagr1: I log/highlight anything with my name even when I'm not online
17:26:11 <jgriffith> then get back to folks when I arrive
17:26:12 <thingee> ditto
17:26:20 <rushiagr1> jgriffith: thingee agree, will take note of it
17:26:26 * jgriffith is a big fan of leaving irc up and running 24/7
17:27:17 * bswartz is too, when internet cooperates
17:27:24 <jgriffith> alrighty... thanks everyone
17:27:35 <jgriffith> #endmeeting