16:02:00 #startmeeting cinder
16:02:00 Meeting started Wed Jan 9 16:02:00 2013 UTC. The chair is jgriffit1. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:04 The meeting name has been set to 'cinder'
16:02:22 morning everyone
16:02:34 good morning!
16:02:36 good evening :)
16:02:45 avishay: :)
16:02:48 morning
16:02:51 good morning
16:02:53 first topic....
16:03:01 #topic G2
16:03:16 First off, thanks everyone for all the hard work to get G2 out
16:03:38 All targeted items made it except my metadata patch :(
16:04:12 There's something funky in the Nose runner that we can't figure out, and rather than continue beating our heads against the wall we're just moving forward
16:04:26 congratulations to winston-d !!!
16:04:37 We now have a filter scheduler in Cinder!!!
16:04:47 :)
16:04:49 woohoo!
16:04:51 yay
16:04:52 wonderful
16:04:55 winston-d: congrats - great job!
16:04:56 Woo!
16:04:57 winston-d: yes congrats
16:05:05 nice, we can use that
16:05:06 that's one big achievement! great!
16:05:37 So just a recap of Grizzly so far: we now have a V2 API (thanks to thingee) and we now have the filter scheduler
16:05:43 thx guys, couldn't have done that without your support
16:06:02 On top of that we've added (I lost track of how many) drivers, with more to come
16:06:34 anyway... Grizzly is going great so far, thanks to everyone!
16:06:54 I don't really have anything else to say on G2... now it's on to G3
16:07:32 anybody have anything they want to hit on for G2 wrap-up?
16:08:20 Ok
16:08:39 bswartz, wanna share your customer feedback regarding OpenStack drivers?
16:08:46 sure
16:08:51 #topic feedback from bswartz
16:09:03 so many of you may have noticed that NetApp has submitted a bunch more driver changes
16:10:01 we started our original driver design in the Diablo timeframe, and our vision was a single instance of cinder (actually nova-volume) talking to NetApp management software which managed hundreds of storage controllers
16:10:40 since NetApp had already sunk hundreds of man-years into management software, it seemed dumb not to take advantage of it
16:11:10 but the feedback we've been getting is that customers don't like middleware and they don't like a single instance of cinder
16:11:20 this probably won't surprise many of you
16:11:46 since (nearly?) all of the existing drivers manage a single storage controller per instance of cinder
16:12:05 bswartz: but that's what we did for lunr
16:12:13 creiht: beat me to it
16:12:19 for what reason don't they like a single instance of cinder? HA?
16:12:28 a single HA instance of cinder talking to our lunr backend
16:12:35 guitarzan: sorry to steal your thunder :)
16:12:46 HA is one reason
16:13:01 scalability is another
16:13:22 Those really should be orthogonal
16:13:26 a single instance of cinder will always have limits
16:13:31 hmm... I've always struggled with this, especially since the cinder node is really just a proxy anyway
16:13:56 and we don't have any HA across nodes *yet* :)
16:13:59 anyway...
16:14:10 the limits may be high, but it's still desirable to be able to overcome those limits with more hardware
16:14:13 there's a big difference between a single instance of cinder and cinder running on every storage node
16:14:33 bswartz: you can get HA with cinder by running however many cinder-api nodes you want, all talking to your backend
16:14:40 well, not HA so much as "no single point of failure"
16:15:00 yeah, agree with john. is there any number for the limits?
16:15:15 if you have a single cinder instance and it goes up in smoke, then you're dead -- multiple instances address that
16:15:32 Facts and customers' views are not always related ;-)
16:15:39 bswartz: yep, we need a mirrored cinder/db option :)
16:15:44 DuncanT: agree
16:15:46 DuncanT: amen brother
16:15:58 bswartz: so this is good info though
16:16:17 so anyway, we're getting on the bandwagon of one cinder instance per storage controller
16:16:22 bswartz: but that's the point: if you have a driver then you can run load-balanced cinder instances that talk to the same backend
16:16:28 that's what we do with lunr
16:16:30 and the new scheduler in grizzly will make that option a lot cooler
16:16:47 That's also what we do
16:16:50 creiht: we are also pursuing that approach
16:17:19 creiht: however the new drivers that talk directly to the hardware are lower-hanging fruit
16:17:26 I can't find our
16:17:31 Sorry, ignore that
16:17:43 there are other reasons customers take issue with our management software -- and we're working on addressing those
16:18:22 anyway, I just wanted to give some background on what's going on with our drivers, and spur discussion
16:18:26 bswartz: so bottom line, most of the changes that are in the queue are to address which aspect?
16:18:29 I didn't understand the comments about lunr
16:18:53 bswartz: so lunr is a storage system we developed at Rackspace that has its own API front end
16:19:03 jgriffit1: the submitted changes add new driver classes that allow cinder to talk directly with our hardware with no middleware installed
16:19:38 cinder sits in front, and our driver passes the calls on to the lunr APIs
16:19:39 jgriffit1: our existing drivers require management software to be installed to work at all
16:19:41 bswartz: ahhh... got ya
16:20:28 creiht: how do you handle elimination of single points of failure and scaling limitations?
16:20:42 traditional methods
16:20:55 * creiht looks for the diagram
16:21:29 We solve SPoF via HA database, HA rabbit, and multiple instances of cinder-api & cinder-volume
16:21:43 we do the same, except we aren't using rabbit at all
16:21:49 (All talking to a backend via APIs in a similar manner to lunr)
16:21:50 so the good thing is I don't think bswartz is necessarily disagreeing with creiht or anybody else on how to achieve this
16:22:02 DuncanT: so multiple drivers [can] talk to the same hardware?
16:22:07 absolutely
16:22:13 bswartz: yup
16:22:19 bswartz: http://devops.rackspace.com/cbs-api.html#.UO2ZJeAVUSg
16:22:30 that has a diagram
16:22:40 * jgriffit1 shuts up now as it seems he may be wrong
16:22:59 where the volume API box is basically several instances of cinder, each with the lunr driver that talks to the lunr API
16:23:09 creiht: thanks
16:23:36 jgriffit1: no, I don't think there is any disagreement, just a lot of different ideas for solving these problems
16:23:46 :)
16:23:46 There are a couple of places (snapshots for one) where cinder annoyingly insists on only talking to one specific cinder-volume instance, but they are few and fixable
16:24:04 DuncanT: avishay is working on it :)
16:24:16 jgriffit1: Yup
16:24:23 well, and our driver also isn't a traditional driver
16:24:31 jgriffit1: I am? :/
16:24:38 lol
16:24:38 avishay: :)
16:24:52 avishay: I didn't tell you yet?
16:25:08 jgriffit1: ...what am I missing?
16:25:15 lol
16:25:19 lol
16:25:27 It'll get done anyway... several people interested
16:25:32 avishay: so your LVM work and the stuff we talked about last night regarding clones etc. will be usable for this
16:25:35 anyway..
16:25:57 yeah... sorry to derail
16:25:59 jgriffit1: yes, it's a start, but not tackling the whole issue :)
16:26:25 mutiny?
16:26:35 haha
16:26:48 jgriffith: sorry, thought somebody offed you ;)
16:26:55 Ok... so bswartz, basically your changes are to behave more like the *rest of us* :)
16:27:00 bswartz: You have been assimilated :)
16:27:09 bswartz: just kidding
16:27:12 jgriffith: yes, it's been a learning process for us
16:27:20 but in a nutshell, these changes cut the middleware out
16:27:25 joined the dark side
16:27:26 :)
16:27:34 Ok... awesome
16:27:38 or maybe we are the dark side :)
16:27:39 we're not giving up on our loftier ideas, but in the meantime we're conforming
16:27:45 creiht: hehe
16:27:52 * jgriffith cries
16:28:00 which ideas are the lofty ones? I'm curious what seems more ideal to folks
16:28:01 bswartz: make you a deal, pick one or the other :)
16:28:06 guitarzan: NFS
16:28:14 CIFS, to be more specific, in Cinder
16:28:28 bswartz: can provide more detail
16:28:37 I mean in regard to this HA cinder to external backend question
16:29:00 jgriffith: I'm not sure what you're asking
16:29:14 the NAS extensions are completely separate from this driver discussion
16:29:38 I assumed that's what you meant by "loftier" goals
16:29:49 so what "lofty" goal are you talking about then?
16:30:00 Please share now rather than later with a 5K-line patch :)
16:30:07 no, loftier means that we're leaving the original drivers in there and we have plans to enhance those so customers hate them less
16:30:38 * jgriffith is now really confused
16:30:41 so the netapp.py and netapp_nfs.py files are getting large
16:31:17 jgriffith: we talked about reworking the NAS enhancements so that the code would be in cinder, but would run as a separate service
16:31:25 that rework is being done
16:31:26 jgriffith: the new patch doesn't replace the old drivers that access the middleware, it just adds the option of direct access. The lofty goal is to improve the middleware so that customers won't hate it. bswartz - right?
16:31:45 Ok.. I got it
16:31:49 avishay: yes
16:31:50 sorry
16:32:03 Why do you need both?
16:32:15 addresses 2 different customer requirements
16:32:25 really?
16:32:31 one is for blocks, the other is for CIFS/NFS storage
16:32:51 the direct access vs. the middleware access?
16:32:52 we have lots of different drivers for supporting blocks
16:32:53 alright, I'm out
16:33:00 avishay: yes :)
16:33:10 sorry, this has gotten confusing and out of hand
16:33:17 Yup
16:33:22 LOL.. yes, and unfortunately it's likely my fault
16:33:38 * bswartz remembers not to volunteer to speak at these things
16:33:40 bswartz: the question is why you need two options (direct access vs. middleware access) - not related to the NFS driver
16:33:53 avishay: thank you!
16:33:59 avishay: from now on you just speak for me please
16:34:07 jgriffith: done.
16:34:14 :)
16:34:15 ;)
16:34:21 I agree we should give customers more options, with direct and middleware access
16:34:31 avishay: regarding our block drivers, we're leaving the old ones in, and we're adding the direct drivers
16:34:37 I disagree, but that's your business I suppose
16:34:41 xyang_: do you plan to do a similar thing for the EMC driver?
16:34:56 avishay: long term we will deprecate one or the other, depending on which works better in practice
16:35:04 cool
16:35:25 OK. I guess all this doesn't affect the "rest of us" anyway.
16:35:49 avishay: the thing that affects the rest of you is the NAS enhancements, and jgriffith made his opinions clear on that topic
16:36:00 Other than monster reviews landing
16:36:07 DuncanT: +1
16:36:10 avishay: so our agreement is to resubmit those changes as a separate service inside cinder, to minimize impact on existing code
16:36:39 bswartz: yes I know, that's fine
16:36:47 avishay: the changes will be large, but the overlap with existing code will be small
16:36:54 that is targeted for G3
16:37:26 Ah ha, that makes sense
16:37:31 you've already seen the essence of those changes with our previous submission; the difference for G3 is that we're refactoring it
16:38:16 okay, I'm done
16:38:22 sorry to take up half the meeting
16:38:37 bswartz: no problem
16:38:42 Things are now reasonably clear, thanks for that
16:38:48 bswartz: thx for sharing.
16:39:18 yup, thank you
16:39:28 I've decided to add a stress test for a single cinder-volume instance to see what the limit is in our lab. :)
16:40:02 bswartz: yeah, appreciate the explanation
16:40:20 winston-d: cool
16:42:12 alright, anybody else have anything?
16:42:28 How is FC development going?
16:42:46 Will it be submitted soon?
16:43:16 xyang_, the plan is to get the nova-side changes submitted next week for review
16:43:40 Volume backup is stuck in corporate legal hell, submission any day now[tm]
16:44:17 we resolved the one issue we had with detach and have HP's blessing :)
16:45:33 creiht: where's clayg? haven't seen him for a very long time
16:46:18 clayg also did something on volume backup, if my memory serves me right
16:48:31 :)
16:48:55 ok, DuncanT: beat up lawyers
16:49:22 I wonder if 'My PTL made me do it!' will stand up in court?
16:49:32 I'm going to try and figure out why tempest is randomly failing to delete volumes in its testing
16:49:48 anybody else looking for something to do today, that would be a great thing to work on :)
16:49:55 Sure.. why not!
16:49:59 winston-d: he has abandoned us :(
16:50:21 winston-d: he went to work for swiftstack
16:50:26 on swift stuff
16:50:37 creiht: oh, ouch.
16:51:29 winston-d: sorry, I was being a little silly when I said abandoned :)
16:51:55 and I don't have much room to talk, as I'm back working on swift stuff as well
16:52:17 Things we're working on for Grizzly are generic iSCSI copy volume<->image (the LVM refactoring is part of that), and driver updates of course
16:52:34 creiht: :) so who's the new guy at Rackspace for cinder/lunr now?
16:52:40 guitarzan:
16:52:45 winston-d: -^
16:52:59 k. good to know.
16:53:57 winston-d: creiht has also abandoned us
16:54:13 well, I haven't left the channel yet :)
16:54:22 :)
16:54:50 I thought block storage is more challenging than object storage. :) I still think so.
16:55:00 different challenges
16:55:05 Ok folks... kind of an all-over-the-place meeting today, sorry for that
16:55:13 but we're about out of time... anything pressing?
16:55:19 jgriffith: doh... sorry :(
16:55:24 we can all still chat in openstack-cinder :)
16:55:34 creiht: No... I feel bad cutting it short
16:55:50 I didn't realize I was in the meeting channel :)
16:55:57 creiht: I've been doing 4 things at once and I have been trying to be polite for john and the xen meeting that follows
16:56:04 creiht: HA!!!
16:56:14 creiht: Awesome!
16:56:18 jgriffith: xyang_ DuncanT bswartz please remember to update your drivers to provide capabilities/status for the scheduler
16:56:19 * creiht is in too many channels
16:56:30 winston-d: yeppers
16:56:33 winston-d: Yup
16:56:40 winston-d: got it
16:56:47 speaking of which... please review my driver patches, and any other patches in the queue
16:56:58 catch y'all later
16:57:02 #endmeeting
16:57:07 bye all!
16:57:10 bye
16:57:12 # end meeting
16:57:16 grrrr
16:57:20 #end meeting
16:58:27 hrmm???
16:58:27 jgriffith: you started the meeting with the nick jgriffit1. Is that the reason for this glitch?
16:58:42 #endmeeting
16:58:45 that's quick
16:58:45 without the h
16:58:52 nope, it was without an 'h'
16:59:05 #endmeeting
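
Editor's note on winston-d's closing reminder about capabilities/status reporting: the Grizzly filter scheduler consumes whatever a driver returns from get_volume_stats(). Below is a minimal sketch of that path; the backend name, capacities, and other values are placeholders, and the dict keys follow the convention used by in-tree drivers of that era rather than any specific vendor driver discussed in this meeting.

    from cinder.volume import driver


    class ExampleDriver(driver.VolumeDriver):
        """Illustrative driver showing only the stats/capabilities path."""

        def get_volume_stats(self, refresh=False):
            # The scheduler periodically asks for these stats; refresh=True
            # means "go poll the backend", otherwise return the cached copy.
            if refresh or not getattr(self, '_stats', None):
                self._update_volume_stats()
            return self._stats

        def _update_volume_stats(self):
            # Placeholder values; a real driver queries its backend for these.
            self._stats = {
                'volume_backend_name': 'example_backend',
                'vendor_name': 'Example Vendor',
                'driver_version': '1.0',
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 1000,
                'free_capacity_gb': 800,
                'reserved_percentage': 0,
                'QoS_support': False,
            }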
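
Editor's note on the "many cinder instances, one backend" pattern creiht, DuncanT, and guitarzan describe earlier in the meeting: because such a driver keeps no state of its own, any number of cinder-volume services can run it side by side against the same backend API, which is where the HA and scaling comes from. This is purely an illustrative sketch; the BACKEND_API endpoint, URL paths, and payload fields are invented for the example and are not the actual Lunr or HP driver code.

    import requests

    from cinder.volume import driver

    # Hypothetical backend endpoint; a real deployment would make this configurable.
    BACKEND_API = 'http://backend.example.com:8776'


    class ProxyDriver(driver.VolumeDriver):
        """Stateless driver that forwards volume operations to a backend API."""

        def create_volume(self, volume):
            # Forward the create request to the backend; payload shape is invented.
            resp = requests.post('%s/volumes' % BACKEND_API,
                                 json={'id': volume['id'], 'size': volume['size']})
            resp.raise_for_status()

        def delete_volume(self, volume):
            # The backend owns the storage, so deletion is just another API call.
            resp = requests.delete('%s/volumes/%s' % (BACKEND_API, volume['id']))
            resp.raise_for_status()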