16:00:32 #startmeeting cinder
16:00:33 :-)
16:00:33 Meeting started Wed Nov 21 16:00:32 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:35 The meeting name has been set to 'cinder'
16:00:49 Hey everybody!!!
16:00:53 o/
16:01:01 hello
16:01:07 thingee: shouldn't you be asleep???
16:01:12 hi
16:01:29 russellb: hola
16:01:39 bswartz: morning
16:01:43 jgriffith: no sleep till brooklyn
16:01:52 thingee: NICE!!!!
16:02:06 alright, I don't have a ton of formal stuff on the agenda
16:02:11 Just one thing in fact....
16:02:15 #topic G1 status
16:02:41 So the good news is that everything we targeted is in review except the driver stuff, but that's on its way
16:02:59 The bad news is that I thought we had until Thursday...
16:03:30 But I forgot to account for the repo lockdown we typically get the 3 days prior to release :(
16:03:44 oh man
16:03:51 ttx has agreed to give us a day or two to catch up
16:04:05 I won't forget again, totally my bad
16:04:27 So those of you I've been bugging the heck out of since yesterday, now you know why
16:04:41 thingee: It looks like all of your deliverables are ready to go, they just need reviews, yes?
16:05:08 BTW: https://launchpad.net/cinder/+milestone/grizzly-1
16:05:28 hi, all
16:05:33 Maybe we lost thingee
16:05:35 winston-d: Howdy
16:05:39 jgriffith: so last night I got done with all the changes requested by people I talked to. Tests pass, but when I try things manually I'm getting some funky results
16:05:41 I'm more than halfway through the API v2 review without finding anything major
16:05:49 thingee: hmmm
16:05:58 thingee: need some help from any of us?
16:06:40 Is the API set or is this just framework?
16:06:52 well it's just weird stuff where I'm bringing the same change into my test devstack instance, but it doesn't appear to care about the cinderclient changes. for example I removed the main method from cinderclient.shell and it still works :)
16:07:12 thingee: hrmm?
16:07:34 removed pyc files, reloaded the cinder api server for the api changes, but no luck... going to play around with it some more
16:07:55 avishay_: framework
16:08:20 I updated the bp to reflect this and created bps for stuff people asked for
16:08:27 thingee: that's odd, in devstack there's usually no loading or anything for the client/shell
16:08:46 thingee: Ok... let us know if we can help
16:08:54 winston-d: You're next on my list :)
16:08:55 oh yeah, well this is for list-bootable volumes, so it involves api changes and client changes
16:09:03 thingee: Ohhh...
16:09:22 avishay_: https://blueprints.launchpad.net/cinder/+spec/cinder-apiv2
16:09:27 thingee: Make sure you wrap service type around the api call in the shell
16:09:33 thingee: thanks
16:09:48 thingee: although there shouldn't be a conflict in this case...
16:10:12 jgriffith, russellb has a good suggestion and i'm working on it.
16:10:17 winston-d: Looks like russellb had a suggestion...
16:10:22 winston-d: You finished my thought :)
16:10:45 i think that one needs to be bumped to after grizzly-1
16:10:54 it's going to take a while to work that out
16:11:19 russellb: winston-d: fair
16:11:29 winston-d: I'll leave it to you to tell me what you need there
16:11:41 winston-d: the other one was the type scheduler
16:11:47 winston-d: I thought we merged that one?
16:12:07 winston-d: errr...sorry
16:12:10 jgriffith, you mean volume RPC api versioning? yes, we merged it.
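
For reference, the volume RPC API versioning winston-d confirms above follows the version-stamped-message pattern used throughout OpenStack at the time. A minimal Python sketch of the idea, assuming a generic transport client; the class and method names are illustrative, not the actual cinder.volume.rpcapi code:

    class VolumeAPI(object):
        """Client-side proxy that stamps every RPC message with a version."""

        BASE_RPC_API_VERSION = '1.0'

        def __init__(self, rpc_client):
            self.client = rpc_client  # hypothetical transport that sends messages

        def create_volume(self, ctxt, volume_id, snapshot_id=None):
            # The version stamp lets an upgraded caller interoperate with an
            # older volume service: the receiver can reject a message whose
            # version it cannot satisfy instead of failing mid-operation.
            self.client.cast(ctxt,
                             {'method': 'create_volume',
                              'args': {'volume_id': volume_id,
                                       'snapshot_id': snapshot_id},
                              'version': self.BASE_RPC_API_VERSION})
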
16:12:11 the blueprint shows that the filter scheduler is the implementation of it
16:12:14 rpc versions
16:13:04 winston-d: Ok, I'll get the RPC versioning one updated/fixed
16:13:16 jgriffith, thx.
16:13:41 winston-d: Do you agree with russellb that we should push the filter change out?
16:14:56 winston-d: Or would you like some time to look at it before answering
16:15:31 jgriffith, well, since the # of reviews it's got is very few, i agree that we push the filter scheduler change out.
16:15:51 winston-d: fair enough...
16:16:00 I'm working on reviewing that -- it's a lot of stuff though
16:16:16 bswartz: yes it sure is
16:16:43 alright... I'll leave it for now, but this afternoon I plan on retargeting unless something miraculous happens :)
16:16:46 bswartz, yeah, already breaking out a lot of needed changes (and had them merged).
16:17:20 The other change is mate's XenServer fixes
16:17:33 https://review.openstack.org/#/c/15398/
16:17:43 I'd like to get this one wrapped up if we can
16:18:13 It's been through most of the review process, just a recent update for the new pep8
16:18:32 It looked good to me, then PEP8 threw a strop... happy to approve as soon as it passes gating again
16:18:48 DuncanT: excellent
16:18:58 The only other thing is the remainder of the volume driver changes
16:19:12 I started it last night, but rnirmal pinged me this morning and he's about got it done
16:19:23 so we should see that land later today and be able to button that up
16:20:09 I won't speak to the changes until it's available, but I think we've talked about it enough that it shouldn't be a big surprise
16:20:51 also we are planning to do some things to keep backward compat with specifying the driver
16:21:00 so it should be non-controversial
16:21:12 So that's about it...
16:21:22 jgriffith: which volume driver changes?
16:21:32 avishay: the layout changes
16:21:39 jgriffith: OK
16:21:49 avishay: So it would look something like: /volume/driver/san/xxx, xxx, xxx, xx
16:22:02 jgriffith: Yep
16:22:06 avishay: and volume/drivers/xiv, netapp.x, etc etc
16:22:08 cool
16:22:36 I need folks to keep an eye on reviews today if they could please
16:23:04 We need to make sure we get the G1 changes in
16:23:22 #topic open floor
16:23:23 jgriffith: any chance that we could cut at the end of today?
16:23:43 ttx: I think if we bump the type scheduler work we can, yes
16:23:53 ttx: That's my plan anyway
16:23:59 that's reasonable, defer early to focus on the rest
16:24:23 ttx: yeah, it's pretty much ready but it's a very big change and I don't think we're comfortable rushing the reviews on it
16:24:27 jgriffith: I'll be back 5 hours from now for a go/no-go
16:24:37 ttx: fair... thanks!
16:24:45 at the end of the last meeting I asked if we could add rushiagr to the core team -- it sort of got lost in the discussion
16:24:51 but I won't cut the branch until tomorrow EU morning anyway
16:25:13 ttx: yeah, but I'd like to have things stabilized, so to speak, by end of today :)
16:25:27 ack
16:25:33 I'm out tomorrow, travel tonight, so my deadline is shorter :)
16:27:10 bswartz: You can nominate and propose using the standard method
16:27:21 bswartz: However I have the same response as I've had in the past
16:27:37 is the standard method something other than proposing it in this meeting?
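
The backward-compat plan for the driver layout reorganization discussed above (moving drivers under volume/drivers/) could be as simple as translating legacy flag values to the new import paths. A sketch under that assumption; the specific paths below are hypothetical examples, not the actual patch:

    # Hypothetical old-path -> new-path translation for the volume_driver flag.
    MAPPED_DRIVERS = {
        'cinder.volume.san.SanISCSIDriver':
            'cinder.volume.drivers.san.SanISCSIDriver',
        'cinder.volume.netapp.NetAppISCSIDriver':
            'cinder.volume.drivers.netapp.NetAppISCSIDriver',
    }

    def resolve_driver(requested):
        """Translate a legacy volume_driver value to its new location."""
        # Deployments still using the old flag value keep working; logging a
        # deprecation warning here would nudge them toward the new path.
        return MAPPED_DRIVERS.get(requested, requested)
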
16:27:40 bswartz: There are requirements/responsibilities associated with core that need to be met
16:28:04 bswartz: proposing here is fine, or bring up a formal nomination via the mailing list
16:28:16 bswartz: You should have noticed a number of these went out over the last week
16:28:25 bswartz: for a number of projects
16:28:40 ok
16:29:03 I have just yesterday sorted out my email list problems
16:29:14 bswartz: Understood
16:29:26 bswartz: So TBH I would -1 it anyway
16:29:39 rushiagr: no offense
16:30:07 jgriffith: it's okay, I understand
16:30:22 question, how's FC support going?
16:30:49 rushiagr: If it's something you want to do, keep plugged in, do reviews and submit bug fixes etc
16:31:21 rushiagr: I'd love to have you, and need core members, so I don't want to discourage you at all
16:31:33 rushiagr: s/I'd/We'd/
16:32:02 * jgriffith will try to find the guidelines wiki for core team membership
16:32:28 jgriffith: sure, i will definitely pay more attention to it
16:32:42 Of course if others have input by all means let's hear it
16:33:12 jgriffith: +1
16:34:08 jgriffith, +1
16:34:18 bswartz: Do you have a counter?
16:34:53 #link http://wiki.openstack.org/Governance/Approved/CoreDevProcess
16:35:00 no, Rushi will come up to speed on all the responsibilities eventually
16:35:21 jgriffith: or anyone care to give a #fc status once this topic is done?
16:35:30 jdurgin1: thanks
16:35:37 zykes-: sure...
16:35:41 bswartz: +1 :)
16:36:12 bswartz: rushiagr: awesome
16:36:16 i would also like to queue up a topic on standardizing capability keys (see my comment on the filter scheduler patch)
16:36:53 avishay: Ok... first zykes
16:36:59 #topic fc-update
16:37:05 jgriffith: I know how a queue works ;P
16:37:13 niiice
16:37:15 :p
16:37:17 zykes-: I spoke with kmartin briefly last week
16:37:24 k
16:37:33 zykes-: They're *finally* making progress on getting some things through HP legal
16:38:21 zykes-: There's been a bit more detail added to the bp here: http://wiki.openstack.org/Cinder/FibreChannelSupport
16:38:36 zykes-: We're hoping to start seeing some code next week
16:38:51 zykes-: First pass is management/attachment only
16:39:00 zykes-: No switch management or zoning support
16:39:13 :|
16:39:14 zykes-: So that will all have to be done outside of OpenStack by an admin for the time being
16:39:18 jgriffith: what will it be then?
16:39:54 zykes-: So it's a driver to manage the storage devices of course, and the ability to FC attach to a compute host
16:40:18 zykes-: Brocade and other folks are involved so the topology extensions should come along
16:40:33 k
16:40:46 zykes-: I just haven't received any real updates on how that's going to look and who wins out of the gate
16:41:35 ok... anything else on FC?
16:41:41 * jgriffith doesn't have a ton there....
16:42:05 zykes-: We'll be hitting you up as soon as patches start to land :)
16:42:16 zykes-: I'll expect some good reviews and some testing :)
16:42:43 Ok...
16:42:57 avishay:
16:43:05 jgriffith: Yes
16:43:08 be sure to do jgriffith !
16:43:15 #topic capability keys
16:43:47 Basically, I think there should be some documentation on the capability keys to make sure all drivers are using the same ones
16:44:19 E.g., if one uses "volume_size" and another "vol_size", the filter scheduler won't work too well
16:44:47 #link https://etherpad.openstack.org/cinder-backend-capability-report
16:44:52 avishay, i have something RFC here, very rough but still: https://etherpad.openstack.org/cinder-backend-capability-report
16:44:54 avishay: Yeah, we talked about that
16:45:01 Ahh... rnirmal is on it!
16:45:11 rnirmal, beats me to it. :D
16:45:14 avishay: That should address your concerns :)
16:45:20 Thank you all :)
16:45:29 let's agree on the capabilities so that we can get the filter scheduler in... winston-d has been on it for way too long
16:45:48 winston-d: you really are patient
16:46:30 rnirmal: +1!!!!!!
16:46:37 I think we should agree on the set of capabilities that all drivers MUST implement, and there should also be a set that drivers could add (e.g., thin provisioning support, compression support, whatever)
16:46:52 The second being capabilities that multiple drivers will likely use
16:47:07 avishay: I see that as a next step
16:47:07 rnirmal, well, it is a big patch, so i don't want to be pushy.
16:47:29 Or maybe once one driver defines a capability, it goes in the document and other drivers should use the same name
16:48:29 I think it's best to start off with basic capabilities and then have a section for specific capabilities like "extra specs"
16:48:56 rnirmal: agreed
16:49:04 avishay, totally agree. i tried to have something in the LVM iSCSI driver as an example.
16:49:37 I'd first like to settle on what we require to be reported (whether it's supported or not is irrelevant right now IMO)
16:49:53 avishay, rnirmal, the MUST-implement capabilities right now are just 'total_capacity_gb', 'free_capacity_gb', 'reserved_percentage'.
16:49:57 maybe we just need to keep track of the keys being used and have developers/reviewers make sure that new submissions use that list
16:50:28 avishay, sure, i will definitely do that.
16:50:37 What does 'reserved_percentage' mean in this context?
16:50:48 avishay: I was thinking once we sort this out we actually implement a report-capabilities method that's inherited
16:50:55 provisionable ratio
16:51:16 avishay: That way the keys are set, etc
16:51:20 don't provision more than 80% of storage etc.... so reserved_percentage would be 20 in that case
16:51:33 rnirmal: Ah, got you, cheers
16:51:39 winston-d: is that the correct assumption?
16:51:43 jgriffith: that could work
16:52:06 avishay: I still see your point about reviews, but that's a given IMO
16:52:12 brb
16:52:15 avishay: Once we settle on what those keys are :)
16:52:24 rnirmal, that's right.
16:53:21 jgriffith: well even if IBM comes out with feature foobarbaz and i add a key for that, and a month later solidfire comes out with a similar feature, your driver should also use the same key of course
16:53:37 avishay: agreed
16:53:48 So this is something that's always been a tough one IMO
16:54:04 I've proposed that to avoid some of this we set definitions in OpenStack
16:54:18 jgriffith: OK, so just some method to keep track of used keys would help developers and reviewers I think
16:54:31 Even if they don't map 1:1 to every vendor, each vendor can/should adapt
16:54:38 avishay: Agreed!
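
To make the three MUST-implement keys winston-d lists above concrete, here is a sketch of what a driver's stats report might look like, including rnirmal's reading of reserved_percentage. The method name, the optional keys, and all values are illustrative assumptions, not an agreed format:

    def get_volume_stats(self):
        """Report this backend's capabilities to the scheduler (sketch)."""
        return {
            # The three required keys under discussion:
            'total_capacity_gb': 1024,
            'free_capacity_gb': 512,
            # Per rnirmal: "don't provision more than 80%" => reserve 20%,
            # i.e. the scheduler treats 1024 * 0.20 = ~205 GB as off limits.
            'reserved_percentage': 20,
            # Hypothetical optional, driver-specific capabilities:
            'thin_provisioning_support': True,
            'compression_support': False,
        }
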
16:55:00 OK all agree :)
16:55:05 avishay: That's a requirement IMO, and I had assumed that was a big part of what this first pass is all about
16:55:11 Do we have time to discuss read-only snapshot attachments?
16:55:41 avishay: I do, so long as nobody else has anything??
16:56:20 *crickets chirping*
16:56:23 LOL
16:56:32 I just want to say that volume-backup is stuck in HP legal land but should be clear soon, and code will appear shortly after the blueprint; before G2 certainly
16:56:36 avishay: So I outlined my suggestion but you didn't like it :)
16:56:57 jgriffith: I don't think I understood it :)
16:56:58 DuncanT: G2, must have :)
16:57:06 DuncanT: any link for volume-backup?
16:57:10 avishay: Yeah, I tend to confuse people :)
16:57:37 avishay: So my proposal was thus (warning, it's not what you want) :)
16:58:06 jgriffith: I think a big step forward would be to allow (at least) read-only snapshot attachments. Not all drivers support it (e.g., storwize/svc), but you can pass a "readonly" flag in QEMU for example.
16:58:14 * Implement restore snapshot (uses a snapshot to put a volume back to the state it was in when the snap was taken)
16:58:19 avishay: detailed blueprint stuck with legal
16:58:25 DuncanT: OK
16:58:37 * Implement clone volume (makes a clone of an existing vol, ready for use, no extra steps)
16:59:12 avishay: In addition to those, the only thing missing IMO is the R/O capabilities of the snapshot
16:59:22 avishay: I think this would be interesting/useful....
16:59:31 avishay: But it's also a big change
16:59:35 And the difference between a clone and a snapshot?
16:59:51 avishay: a clone is a ready-to-use independent volume
17:00:04 Basically the implementation shouldn't matter, as long as the behavior is as expected, right?
17:00:05 avishay: So I liken it to virtual-box snapshots/clones
17:00:12 I'd rather get restore & clone in before we start looking at R/O mounts
17:00:13 avishay: Oh, absolutely
17:00:18 DuncanT: +1
17:00:34 avishay: So my point as always: nothing's forever and nothing's set in stone
17:00:54 clone = create snapshot; create volume from snap; delete snap?
17:00:56 avishay: I wouldn't close the door on R/O snaps, but I'd save it for later
17:01:13 (or a suitably optimised version thereof)
17:01:28 DuncanT: partially... no need for a snap in there
17:01:32 DuncanT: I think the model needs to be backend specific there, since different definitions of snapshots/clones exist
17:01:43 DuncanT: unless that makes it more efficient
17:01:48 jdurgin1: +1
17:02:00 which goes back to: I don't care how it's implemented, just what it means
17:02:05 we already have an api for 'clone from snapshot'
17:02:08 Yeah, I meant that the above would achieve the same results
17:02:17 DuncanT: Yes!!! +1
17:02:29 Got you. +1 the idea then
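
DuncanT's "create snapshot; create volume from snap; delete snap" recipe above lends itself to a generic default that backends with a native clone primitive simply override, which matches the "backend specific, same behavior" point. A sketch under those assumptions; the class, method, and helper names are simplified illustrations:

    class BaseVolumeDriver(object):

        def create_cloned_volume(self, new_volume, src_volume):
            """Default clone path: snapshot, copy, drop the snapshot."""
            snap = self.create_temp_snapshot(src_volume)  # hypothetical helper
            try:
                self.create_volume_from_snapshot(new_volume, snap)
            finally:
                # The snapshot was only scaffolding; the result must be a
                # fully independent, ready-to-use volume.
                self.delete_snapshot(snap)


    class NativeCloneDriver(BaseVolumeDriver):

        def create_cloned_volume(self, new_volume, src_volume):
            # A backend with an efficient clone primitive goes straight to
            # the device, no intermediate snapshot needed.
            self._clone_on_backend(src_volume, new_volume)  # hypothetical call
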
17:02:37 so in reality we've all kinda done these things already to expose our features etc
17:02:37 jgriffith: I think the API should be documented well so that it's clear what a "volume" is and what a "snapshot" is
17:02:51 avishay: Yes, that's something I MUST DO
17:03:07 avishay: I would've already done it, but I haven't felt we've reached a consensus :)
17:03:32 avishay: So these are G2 items that I'm most interested in BTW
17:03:41 jgriffith: OK, no problem
17:03:57 a lot of this also needs the API v2 changes from thingee, that's why I'm so hot to get them in for G1
17:04:04 Currently I don't think you can say more than "A cinder snapshot is a point-in-time reference or copy of a volume; the only thing you can do with it is clone it to one or more new volumes"
17:04:06 Is there any way to get the name of a volume/snapshot on the backend, or only through the DB?
17:04:11 There's a ton of cool stuff for API v2 :)
17:04:28 DuncanT: Yes, but I want to change that :)
17:04:48 jgriffith: There be dragons ;-) Should be entertaining
17:04:56 DuncanT: :)
17:05:08 avishay: so that's something I have tussled with
17:05:23 avishay: The problem there is I don't know what ALL back-ends will support/do
17:05:43 avishay: So the default ends up being the DB
17:06:16 DuncanT: jgriffith: Let's say for backing up volumes, if I could attach read-only I could run the backup software in a guest
17:06:17 avishay: Of course it could be in the base class as a DB call, and if a device can do it better...
17:06:21 then they override it
17:06:51 DuncanT: Sure... but are you talking R/O volumes or snapshots?
17:06:54 DuncanT: jgriffith: But if not, I need to figure out the volume name from the DB and back it up from outside of OpenStack?
17:06:55 DuncanT: Or both :)
17:07:17 * jgriffith fully anticipates a R/O attach of volumes
17:07:27 snapshots. volumes seem less critical for R/O
17:07:46 avishay: Unless you think of DuncanT's idea of running a backup app in an instance :)
17:07:55 avishay: We leave the attach up to the driver, same as for nova-compute
17:08:17 avishay: So here's my plan....
17:08:31 avishay: Go with what I described earlier as a start
17:08:41 avishay: Then we can grow that and look at R/O snaps etc
17:08:48 DuncanT: what do you mean "leave the attach up to the driver"?
17:08:51 jgriffith: We currently only cover volumes, snapshots are the next job
17:08:54 jgriffith: OK, no problem :)
17:09:05 avishay: I'd rather make forward progress on what we can agree on than get bogged down in a detail
17:09:15 DuncanT: perfect
17:09:24 Ok... I think we're finally ok with that one :)
17:09:29 Next week...
17:09:37 OK, thanks for the clarifications!
17:09:38 We need to settle on a method for multi-backends
17:09:55 avishay: Once the blueprint / sample code is up I suspect it will all become clear... can I punt your question for a week or so please?
17:10:01 So everybody think about that a bit over the next few days if you could
17:10:05 DuncanT: sure
17:10:20 I'd like to reach an agreement next week and see if we can roll it for G2
17:10:39 I still strongly favour one manager process, one backend
17:10:40 Anybody have anything else? (we're 10 minutes over already)
17:10:48 DuncanT: noted :)
17:11:01 * jgriffith is beginning to agree
17:11:16 alright... for those in the states, Happy Thanksgiving!
17:11:20 good night/day all!
17:11:28 You can pass 2 different config files and run 2 (or more) on one node
17:11:29 those elsewhere... happy Thanksgiving anyway :)
17:11:48 Cheers John
17:11:51 Thanks everybody... keep an eye on reviews for G1 items today please :)
17:11:58 see yaaaaa
17:12:02 #endmeeting
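
As a footnote to the exchange above about finding a volume's name on the backend, jgriffith's "DB call in the base class, and if a device can do it better they override it" approach might look roughly like this. A sketch only; every name here is a hypothetical illustration, not existing cinder code:

    class VolumeDriver(object):

        def get_backend_volume_name(self, volume):
            # Default: derive the name from the DB record the same way the
            # driver named it at create time -- works for any backend.
            return 'volume-%s' % volume['id']


    class FancyArrayDriver(VolumeDriver):

        def get_backend_volume_name(self, volume):
            # A backend that can answer authoritatively overrides the
            # default and asks the device itself.
            return self._lookup_on_array(volume['id'])  # hypothetical call
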