16:00:32 <jgriffith> #startmeeting cinder
16:00:33 <DuncanT> :-)
16:00:33 <openstack> Meeting started Wed Nov 21 16:00:32 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:35 <openstack> The meeting name has been set to 'cinder'
16:00:49 <jgriffith> Hey everybody!!!
16:00:53 <thingee> o/
16:01:01 <bswartz> hello
16:01:07 <jgriffith> thingee: shouldn't you be asleep???
16:01:12 <russellb> hi
16:01:29 <jgriffith> russellb: hola
16:01:39 <jgriffith> bswartz: morning
16:01:43 <thingee> jgriffith: no sleep till brooklyn
16:01:52 <jgriffith> thingee: NICE!!!!
16:02:06 <jgriffith> alright, I don't have a ton of formal stuff on the agenda
16:02:11 <jgriffith> Just one thing in fact....
16:02:15 <jgriffith> #topic G1 status
16:02:41 <jgriffith> So the good news is that everything we targeted is in review except the driver stuff, but that's on its way
16:02:59 <jgriffith> The bad news is that I thought we had until Thursday...
16:03:30 <jgriffith> But I forgot to account for the repo lockdown we typically get the 3 days prior to release :(
16:03:44 <thingee> oh man
16:03:51 <jgriffith> ttx: has agreed to give us a day or two to catch up
16:04:05 <jgriffith> I won't forget again, totally my bad
16:04:27 <jgriffith> So those of you that I've been bugging the heck out of since yesterday, now you know why
16:04:41 <jgriffith> thingee: It looks like all of your deliverables are ready to go just need reviews yes?
16:05:08 <jgriffith> BTW:  https://launchpad.net/cinder/+milestone/grizzly-1
16:05:28 <winston-d> hi, all
16:05:33 <jgriffith> Maybe we lost thingee
16:05:35 <jgriffith> winston-d: Howdy
16:05:39 <thingee> jgriffith: so last night I got done with all the changes requested by people I talked to. Tests pass, I wanted to try things manually and I'm getting some funky results
16:05:41 <DuncanT> I'm more than half way through the API v2 review without finding anything major
16:05:49 <jgriffith> thingee: hmmm
16:05:58 <jgriffith> thingee: need some help from any of us?
16:06:40 <avishay_> Is the API set or is this just framework?
16:06:52 <thingee> well it's just weird stuff where it's the same change I'm bringing into my test devstack instance, but it doesn't appear to care about the cinder client changes. for example I removed the main method out of cinderclient.shell and it still works :)
16:07:12 <jgriffith> thingee: hrmm?
16:07:34 <thingee> removed pyc files, reloaded the cinder api server for its api changes but no luck...going to play around with it some more
16:07:55 <thingee> avishay_: framework
16:08:20 <thingee> I updated the bp to reflect this and created bps for stuff people asked for
16:08:27 <jgriffith> thingee: that's odd, with devstack there's usually no loading or anything needed for the client/shell
16:08:46 <jgriffith> thingee: Ok... let us know if we can help
16:08:54 <jgriffith> winston-d: You're next on my list :)
16:08:55 <thingee> oh yeah, well this is for list-bootable volumes, so it involves api changes and client changes
16:09:03 <jgriffith> thingee: Ohhh...
16:09:22 <thingee> avishay_: https://blueprints.launchpad.net/cinder/+spec/cinder-apiv2
16:09:27 <jgriffith> thingee: Make sure you wrap service type around the api call in the shell
16:09:33 <avishay_> thingee: thanks
16:09:48 <jgriffith> thingee: although there shouldn't be a conflict in this case...
16:10:12 <winston-d> jgriffith, russellb has a good suggestion and i'm working on it.
16:10:17 <jgriffith> winston-d: Looks like russellb had a suggestion...
16:10:22 <jgriffith> winston-d: You finished my thought :)
16:10:45 <russellb> i think that one needs to be bumped to after grizzly-1
16:10:54 <russellb> it's going to take a while to work that out
16:11:19 <jgriffith> russellb: winston-d fair
16:11:29 <jgriffith> winston-d: I'll leave it to you to tell me what you need there
16:11:41 <jgriffith> winston-d: the other one was the type scheduler
16:11:47 <jgriffith> winston-d: I thought we merged that one?
16:12:07 <jgriffith> winston-d: errr...sorry
16:12:10 <winston-d> jgriffith,  you mean volume RPC api versioning? yes, we merged it.
16:12:11 <russellb> the blueprint shows that the filter scheduler is the implementation of it
16:12:14 <jgriffith> rpc versions
16:13:04 <jgriffith> winston-d: Ok, I'll get the RPC versioning one updated/fixed
16:13:16 <winston-d> jgriffith, thx.
16:13:41 <jgriffith> winston-d: Do you agree with russellb that we should push the filter change out?
16:14:56 <jgriffith> winston-d: Or would you like some time to look at it before answering
16:15:31 <winston-d> jgriffith, well, since it's gotten very few reviews, i agree that we push the filter scheduler change out.
16:15:51 <jgriffith> winston-d: fair enough...
16:16:00 <bswartz> I'm working on reviewing that -- it's a lot of stuff though
16:16:16 <jgriffith> bswartz: yes it sure is
16:16:43 <jgriffith> alright... I'll leave it for now, but this afternoon I'll plan on retargeting unless something miraculous happens :)
16:16:46 <winston-d> bswartz, yeah, already breaking out a lot of needed changes (and had them merged).
16:17:20 <jgriffith> The other change is mate's XenServer fixes
16:17:33 <jgriffith> https://review.openstack.org/#/c/15398/
16:17:43 <jgriffith> I'd like to get this one wrapped up if we can
16:18:13 <jgriffith> It's been through most of the review process, just a recent update for the new pep-8
16:18:32 <DuncanT> It looked good to me, then PEP8 threw a strop... happy to approve as soon as it passes gating again
16:18:48 <jgriffith> DuncanT: excellent
16:18:58 <jgriffith> The only other thing is the remainder of the volume driver changes
16:19:12 <jgriffith> I started it last night, but rnirmal pinged me this morning and he's about got it done
16:19:23 <jgriffith> so we should see that land later today and be able to button that up
16:20:09 <jgriffith> I won't speak to the changes until it's available, but I think we've talked about it enough that it shouldn't be a big surprise
16:20:51 <jgriffith> also we are planning to do some things to keep backward compat with specifying the driver
16:21:00 <jgriffith> so it should be non-controversial
16:21:12 <jgriffith> So that's about it...
16:21:22 <avishay> jgriffith: which volume driver changes?
16:21:32 <jgriffith> avishay: the layout changes
16:21:39 <avishay> jgriffith: OK
16:21:49 <jgriffith> avishay: So it would look something like:  /volume/driver/san/xxx, xxx, xxx, xx
16:22:02 <avishay> jgriffith: Yep
16:22:06 <jgriffith> avishay: and volume/drivers/xiv, netapp.x, etc etc
16:22:08 <jgriffith> cool
16:22:36 <jgriffith> I need folks to keep an eye on reviews today if they could please
16:23:04 <jgriffith> We need to make sure we get the G1 changes in
16:23:22 <jgriffith> #topic open floor
16:23:23 <ttx> jgriffith: any chance that we could cut at the end of today ?
16:23:43 <jgriffith> ttx: I think if we bump the type scheduler work I think we can yes
16:23:53 <jgriffith> ttx: That's my plan anyway
16:23:59 <ttx> that's reasonable, defer early to focus on the rest
16:24:23 <jgriffith> ttx: yeah, it's pretty much ready but it's a very big change and I don't think we're comfortable rushing the reviews on it
16:24:27 <ttx> jgriffith: I'll be back 5 hours from now for a go/nogo
16:24:37 <jgriffith> ttx: fair... thanks!
16:24:45 <bswartz> at the end of the last meeting I asked if we could add rushiagr to the core team -- it sort of got lost in the discussion
16:24:51 <ttx> but I won't cut the branch until tomorrow eu morning anyway
16:25:13 <jgriffith> ttx: yeah, but I'd like to have things stabilized so to speak by end of today :)
16:25:27 <ttx> ack
16:25:33 <jgriffith> I'm out tomorrow, travel tonight so my deadline is shorter :)
16:27:10 <jgriffith> bswartz: You can nominate and propose using the standard method
16:27:21 <jgriffith> bswartz: However I have the same response as I've had in the past
16:27:37 <bswartz> is the standard method something other than proposing it in this meeting?
16:27:40 <jgriffith> bswartz: There are requirements/responsibilities associated with core that need to be met
16:28:04 <jgriffith> bswartz: proposing here is fine, or bring up a formal nomination via the mail list
16:28:16 <jgriffith> bswartz: You should have noticed a number of these went out over the last week
16:28:25 <jgriffith> bswartz: for a number of projects
16:28:40 <bswartz> ok
16:29:03 <bswartz> I have just yesterday sorted out my email list problems
16:29:14 <jgriffith> bswartz: Understood
16:29:26 <jgriffith> bswartz: So TBH I would -1 it anyway
16:29:39 <jgriffith> rushiagr: no offense
16:30:07 <rushiagr> jgriffith: it's okay, I understand
16:30:22 <zykes-> question, how's FC support going ?
16:30:49 <jgriffith> rushiagr: If it's something you want to do, keep plugged in, do reviews and submit bug fixes etc
16:31:21 <jgriffith> rushiagr: I'd love to have you, and need core members so I don't want to discourage you at all
16:31:33 <jgriffith> rushiagr: s/I'd/We'd/
16:32:02 * jgriffith will try to find the guidelines wiki for core team membership
16:32:28 <rushiagr> jgriffith: sure, i will definitely pay more attention to it
16:32:42 <jgriffith> Of course if others have input by all means let's hear it
16:33:12 <thingee> jgriffith: +1
16:34:08 <winston-d> jgriffith, +1
16:34:18 <jgriffith> bswartz: Do you have a counter?
16:34:53 <jdurgin1> #link http://wiki.openstack.org/Governance/Approved/CoreDevProcess
16:35:00 <bswartz> no, Rushi will come up to speed on all the responsibilities eventually
16:35:21 <zykes-> jgriffith: or anyone care to give a #fc status once this topic is done ?
16:35:30 <jgriffith> jdurgin1: thanks
16:35:37 <jgriffith> zykes-: sure...
16:35:41 <rushiagr> bswartz: +1 :)
16:36:12 <jgriffith> bswartz: rushiagr awesome
16:36:16 <avishay> i would also like to queue up a topic on standardizing capability keys (see my comment on the filter scheduler patch)
16:36:53 <jgriffith> avishay: Ok...  first zykes
16:36:59 <jgriffith> #topic fc-update
16:37:05 <avishay> jgriffith: I know how a queue works ;P
16:37:13 <zykes-> niiice
16:37:15 <zykes-> :p
16:37:17 <jgriffith> zykes-: I spoke with kmartin briefly last week
16:37:24 <zykes-> k
16:37:33 <jgriffith> zykes-: They're *finally* making progress on getting some things through HP legal
16:38:21 <jgriffith> zykes-: There's been a bit more detail added to the bp here: http://wiki.openstack.org/Cinder/FibreChannelSupport
16:38:36 <jgriffith> zykes-: We're hoping to start seeing some code next week
16:38:51 <jgriffith> zykes-: First pass is management/attachment only
16:39:00 <jgriffith> zykes-: No switch management or zoning support
16:39:13 <zykes-> :|
16:39:14 <jgriffith> zykes-: So that will all have to be done outside of OpenStack by an admin for the time being
16:39:18 <zykes-> jgriffith: what will it be then ?
16:39:54 <jgriffith> zykes-: So it's a driver to manage the storage devices of course, and the ability to FC attach to a compute host
16:40:18 <jgriffith> zykes-: Brocade and other folks are involved so the topology extensions should come along
16:40:33 <zykes-> k
16:40:46 <jgriffith> zykes-: I just haven't received any real updates on how that's going to look and who wins out of the gate
16:41:35 <jgriffith> ok... anything else on FC?
16:41:41 * jgriffith doesn't have a ton there....
16:42:05 <jgriffith> zykes-: We'll be hitting you up as soon as patches start to land :)
16:42:16 <jgriffith> zykes-: I'll expect some good reviews and some testing :)
16:42:43 <jgriffith> Ok...
16:42:57 <jgriffith> avishay:
16:43:05 <avishay> jgriffith: Yes
16:43:08 <zykes-> be sure to do jgriffith !
16:43:15 <jgriffith> #topic capability keys
16:43:47 <avishay> Basically, I think there should be some documentation on the capability keys to make sure all drivers are using the same ones
16:44:19 <avishay> E.g., if one uses "volume_size" and another "vol_size", the filter scheduler won't work too well
16:44:47 <rnirmal> #link https://etherpad.openstack.org/cinder-backend-capability-report
16:44:52 <winston-d> avishay, i have something RFC here, very rough but still: https://etherpad.openstack.org/cinder-backend-capability-report
16:44:54 <jgriffith> avishay: Yeah, we talked about that
16:45:01 <jgriffith> Ahh... rnirmal is on it!
16:45:11 <winston-d> rnirmal, beats me to it. :D
16:45:14 <jgriffith> avishay: That should address your concerns :
16:45:20 <avishay> Thank you all :)
16:45:29 <rnirmal> let's agree upon the capabilities so that we can get the filter scheduler in .. winston-d has been on it for way too long
16:45:48 <rnirmal> winston-d: you really are patient
16:46:30 <jgriffith> rnirmal: +1!!!!!!
16:46:37 <avishay> I think we should agree on the set of capabilities that all drivers MUST implement, and there should also be a set that drivers could add (e.g., thin provisioning support, compression support, whatever)
16:46:52 <avishay> The second being capabilities with a high probability that multiple drivers will use
16:47:07 <jgriffith> avishay: I see that as a next step
16:47:07 <winston-d> rnirmal, well, it is a big patch, so i don't want to be pushy.
16:47:29 <avishay> Or maybe once one driver defines a capability, it goes in the document and other drivers should use the same name
16:48:29 <rnirmal> I think it's best to start off with basic capabilities and then have a section for specific capabilities like "extra specs"
16:48:56 <jgriffith> rnirmal: agreed
16:49:04 <winston-d> avishay, totally agree. i tried to have something in LVM iSCSI driver as an example.
16:49:37 <jgriffith> I'd first like to settle on what we require to be reported (whether it's supported or not is irrelevant right now IMO)
16:49:53 <winston-d> avishay, rnirmal, the MUST-implement capabilities right now are just 'total_capacity_gb', 'free_capacity_gb', 'reserved_percentage'.
16:49:57 <avishay> maybe we just need to keep track of the keys being used and developers/reviewers make sure that new submissions use that list
16:50:28 <winston-d> avishay, sure, i will definitely do that.
16:50:37 <DuncanT> What does 'reserved_percentage' mean in this context?
16:50:48 <jgriffith> avishay: I was thinking once we sort this out we actually implement a report capabilities method that's inherited
16:50:55 <rnirmal> provisionable ratio
16:51:16 <jgriffith> avishay: That way the keys are set, etc
16:51:20 <rnirmal> don't provision more than 80% of storage etc.... so reserved_percentage would be 20 in that case
16:51:33 <DuncanT> rnirmal: Ah, got you, cheers
16:51:39 <rnirmal> winston-d: is that the correct assumption
16:51:43 <avishay> jgriffith: that could work
16:52:06 <jgriffith> avishay: I still see your point about reviews, but that's a given IMO
16:52:12 <redthrux> brb
16:52:15 <jgriffith> avishay: Once we settle on what those keys are :)
16:52:24 <winston-d> rnirmal, that's right.
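The three required keys and the "driver-specific extras" idea discussed above can be illustrated with a minimal sketch. This is not Cinder's actual driver API; the helper name, the optional `thin_provisioning` key, and the numbers are assumptions for illustration only:

```python
# Sketch (assumed names, not Cinder code): a driver's capability report
# containing the three required keys plus optional driver-specific extras.

REQUIRED_KEYS = ('total_capacity_gb', 'free_capacity_gb',
                 'reserved_percentage')

def report_capabilities(total_gb, free_gb, reserved_pct, extra=None):
    """Build a capability dict; `extra` holds optional, driver-specific keys."""
    stats = {
        'total_capacity_gb': total_gb,
        'free_capacity_gb': free_gb,
        # Per the discussion: 20 means "don't provision the last 20% of storage".
        'reserved_percentage': reserved_pct,
    }
    stats.update(extra or {})
    # Every driver must at least report the required keys.
    assert all(k in stats for k in REQUIRED_KEYS)
    return stats

caps = report_capabilities(1000, 800, 20, extra={'thin_provisioning': True})
```

Using one agreed dict shape like this is what keeps the filter scheduler working: if one driver reported `vol_size` and another `volume_size`, the scheduler couldn't compare them.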
16:53:21 <avishay> jgriffith: well even if IBM comes out with feature foobarbaz and i add a key for that, and a month later solidfire comes out with a similar feature, your driver should also use the same key of course
16:53:37 <jgriffith> avishay: agreed
16:53:48 <jgriffith> So this is something that's always been a tough one IMO
16:54:04 <jgriffith> I've proposed that to avoid some of this we set definitions in OpenStack
16:54:18 <avishay> jgriffith: OK, so just some method to keep track of used keys would help developers and reviewers I think
16:54:31 <jgriffith> Even if they don't map 1:1 to every vendor, each vendor can/should adapt
16:54:38 <jgriffith> avishay: Agreed!
16:55:00 <avishay> OK all agree :)
16:55:05 <jgriffith> avishay: That's a requirement IMO and I had assumed that was a big part of what this first pass is all about
16:55:11 <avishay> Do we have time to discuss read-only snapshot attachments?
16:55:41 <jgriffith> avishay: I do, so long as nobody else has anything??
16:56:20 <avishay> *crickets chirping*
16:56:23 <jgriffith> LOL
16:56:32 <DuncanT> I just want to say that volume-backup is stuck in HP legal land but should be clear soon and code will appear shortly after the blueprint; before G2 certainly
16:56:36 <jgriffith> avishay: So I outlined my suggestion but you didn't like it :)
16:56:57 <avishay> jgriffith: I don't think I understood it :)
16:56:58 <jgriffith> DuncanT: G2, must have :)
16:57:06 <avishay> DuncanT: any link for volume-backup?
16:57:10 <jgriffith> avishay: Yeah, I tend to confuse people :)
16:57:37 <jgriffith> avishay: So my proposal was thus (warning, it's not what you want) :)
16:58:06 <avishay> jgriffith: I think a big step forward would be to allow (at least) read-only snapshot attachments.  Not all drivers support it (e.g., storwize/svc), but you can pass a "readonly" flag in QEMU for example.
16:58:14 <jgriffith> * Implement restore snapshot (uses a snapshot to put a volume back to the state it was in when snap was taken)
16:58:19 <DuncanT> avishay: detailed blueprint stuck with legal
16:58:25 <avishay> DuncanT: OK
16:58:37 <jgriffith> * Implement clone volume (Makes a clone of existing vol, ready for use, no extra steps)
16:59:12 <jgriffith> avishay: In addition to those, the only thing missing IMO is the R/O capabilities of the snapshot
16:59:22 <jgriffith> avishay: I think this would be interesting/useful....
16:59:31 <jgriffith> avishay: But it's also a big change
16:59:35 <avishay> And the difference between a clone and snapshot?
16:59:51 <jgriffith> avishay: a clone is a ready to use independent volume
17:00:04 <avishay> Basically the implementation shouldn't matter, as long as the behavior is as expected, right?
17:00:05 <jgriffith> avishay: So I liken it to virtual-box snapshots/clones
17:00:12 <DuncanT> I'd rather get restore & clone in before we start looking at R/O mounts
17:00:13 <jgriffith> avishay: Oh, absolutely
17:00:18 <jgriffith> DuncanT: +1
17:00:34 <jgriffith> avishay: So my point as always, nothing's forever and nothings set in stone
17:00:54 <DuncanT> clone = create snapshot; create volume from snap; delete snap?
17:00:56 <jgriffith> avishay: I wouldn't close the door on R/O snaps, but I'd save it for later
17:01:13 <DuncanT> (or suitably optimised version there of)
17:01:28 <jgriffith> DuncanT: partially... no need for a snap in there
17:01:32 <jdurgin1> DuncanT: I think the model needs to be backend specific there, since different definitions of snapshots/clones exist
17:01:43 <jgriffith> DuncanT: unless that makes it more efficient
17:01:48 <jgriffith> jdurgin1: +1
17:02:00 <jgriffith> which goes back to I don't care how it's implemented, just what it means
17:02:05 <jdurgin1> we already have an api for 'clone from snapshot'
17:02:08 <DuncanT> Yeah, I meant that the above would achieve the same results
17:02:17 <jgriffith> DuncanT: Yes!!!  +1
17:02:29 <DuncanT> Got you. +1 the idea then
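DuncanT's equivalence above (clone = create snapshot; create volume from snap; delete snap) can be sketched as a generic fallback that backends with native cloning would override with a single optimised operation. All class and method names here are hypothetical, and the dict-based "volumes" are stand-ins for real backend objects:

```python
# Sketch (hypothetical names): clone-via-snapshot as a generic default,
# overridable by backends that can clone natively and more efficiently.

class BaseDriver:
    def create_snapshot(self, volume):
        # Point-in-time copy of the volume's state.
        return {'source': volume['id'], 'data': dict(volume['data'])}

    def create_volume_from_snapshot(self, snap, name):
        # New, independent, ready-to-use volume from the snapshot.
        return {'id': name, 'data': dict(snap['data'])}

    def delete_snapshot(self, snap):
        pass  # stand-in for real cleanup

    def clone_volume(self, volume, name):
        # Generic path: snapshot, restore, delete the temporary snap.
        snap = self.create_snapshot(volume)
        try:
            return self.create_volume_from_snapshot(snap, name)
        finally:
            self.delete_snapshot(snap)

drv = BaseDriver()
clone = drv.clone_volume({'id': 'vol-1', 'data': {'blocks': 42}}, 'vol-2')
```

This matches jdurgin1's point below: only the result (an independent, ready-to-use volume) is defined; how each backend gets there is its own business.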
17:02:37 <jgriffith> so in reality we've all kinda done these things already to expose our features etc
17:02:37 <avishay> jgriffith: I think the API should be documented well so that it's clear what a "volume" is and what a "snapshot" is
17:02:51 <jgriffith> avishay: Yes, that's something I MUST DO
17:03:07 <jgriffith> avishay: I would've already done it but I haven't felt we reached a consensus :)
17:03:32 <jgriffith> avishay: So these are G2 items that I'm most interested in BTW
17:03:41 <avishay> jgriffith: OK, no problem
17:03:57 <jgriffith> a lot of this also needs the API V2 changes from thingee that's why I'm so hot to get them in for G1
17:04:04 <DuncanT> Currently I don't think you can say more than "A cinder snapshot is a point in time reference or copy of a volume; the only thing you can do with it is clone it to one or more new volumes"
17:04:06 <avishay> Is there any way to get the name of a volume/snapshot on the backend, or only through the DB?
17:04:11 <jgriffith> There's a ton of cool stuff for API V2 :)
17:04:28 <jgriffith> DuncanT: Yes, but I want to change that :)
17:04:48 <DuncanT> jgriffith: There be dragons ;-) Should be entertaining
17:04:56 <jgriffith> DuncanT: :)
17:05:08 <jgriffith> avishay: so that's something I have tussled with
17:05:23 <jgriffith> avishay: The problem there is I don't know what ALL back-ends will support/do
17:05:43 <jgriffith> avishay: So default ends up being DB
17:06:16 <avishay> DuncanT: jgriffith: Let's say for backing up volumes, if I could attach read-only I could run the backup software in a guest
17:06:17 <jgriffith> avishay: Of course it could be in base class as DB call and if a device can do it better...
17:06:21 <jgriffith> then they override it
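The "DB call in the base class, override if the device can do better" idea just described could look roughly like this. Every name here is a hypothetical illustration; the `db` dict stands in for the real Cinder DB layer:

```python
# Sketch (assumed names): default backend-name lookup via the DB,
# with a driver override for backends that track their own naming.

class BaseDriver:
    def __init__(self, db):
        self.db = db  # stand-in for the Cinder DB layer

    def get_backend_volume_name(self, volume_id):
        # Default: derive the name from the DB record, since not all
        # back-ends can be asked for it directly.
        return 'volume-%s' % self.db[volume_id]['uuid']

class ArrayDriver(BaseDriver):
    def get_backend_volume_name(self, volume_id):
        # A backend that can do better overrides the default.
        return 'array-lun-%s' % volume_id

db = {'7': {'uuid': 'abc'}}
generic = BaseDriver(db).get_backend_volume_name('7')
native = ArrayDriver(db).get_backend_volume_name('7')
```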
17:06:51 <jgriffith> DuncanT: Sure... but are you talking R/O volumes or snapshots?
17:06:54 <avishay> DuncanT: jgriffith: But if not, I need to figure out the volume name from the DB and back it up from outside of OpenStack?
17:06:55 <jgriffith> DuncanT: Or both :)
17:07:17 * jgriffith fully anticipates a R/O attach of volumes
17:07:27 <avishay> snapshots.  volumes seem less critical for R/O
17:07:46 <jgriffith> avishay: Unless you think of DuncanT's idea of running a backup app in an instance :)
17:07:55 <DuncanT> avishay: We leave the attach up to the driver, same as for nova-compute
17:08:17 <jgriffith> avishay: So here's my plan....
17:08:31 <jgriffith> avishay: Go with what I described earlier as a start
17:08:41 <jgriffith> avishay: Then we can grow that and look at R/O snaps etc
17:08:48 <avishay> DuncanT: what do you mean "leave the attach up to the driver"?
17:08:51 <DuncanT> jgriffith: We currently only cover volumes, snapshots is the next job
17:08:54 <avishay> jgriffith: OK, no problem :)
17:09:05 <jgriffith> avishay: I'd rather make forward progress on what we can agree on than get bogged down in a detail
17:09:15 <jgriffith> DuncanT: perfect
17:09:24 <jgriffith> Ok... I think we're finally ok with that one :)
17:09:29 <jgriffith> Next week...
17:09:37 <avishay> OK, thanks for the clarifications!
17:09:38 <jgriffith> We need to settle on a method for multi-backends
17:09:55 <DuncanT> avishay: Once the blueprint / sample code is up I suspect it will all become clear... can I punt your question for a week or so please?
17:10:01 <jgriffith> So everybody think about that a bit the next few days if you could
17:10:05 <avishay> DuncanT: sure
17:10:20 <jgriffith> I'd like to reach an agreement next week and see if we can roll it for G2
17:10:39 <DuncanT> I still strongly favour one manager process, one backend
17:10:40 <jgriffith> Anybody have anything else (we're 10 minutes over already)
17:10:48 <jgriffith> DuncanT: noted :)
17:11:01 * jgriffith is beginning to agree
17:11:16 <jgriffith> alright... for those in the states, Happy Thanksgiving!
17:11:20 <avishay> good night/day all!
17:11:28 <DuncanT> You can pass 2 different config files and run 2 (or more) on one node
17:11:29 <jgriffith> those elsewhere... happy Thanksgiving anyway :)
17:11:48 <DuncanT> Cheers John
17:11:51 <jgriffith> Thanks everybody... keep an eye on reviews for G1 items today please :)
17:11:58 <jgriffith> see yaaaaa
17:12:02 <jgriffith> #endmeeting