16:00:45 #startmeeting cinder
16:00:46 Meeting started Wed May 22 16:00:45 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:47 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:49 The meeting name has been set to 'cinder'
16:00:56 o/
16:01:19 Note agenda #link https://wiki.openstack.org/wiki/CinderMeetings
16:01:43 #topic H1 freeze date
16:01:55 So just FYI, the freeze for H1 is next Tuesday
16:02:05 Hi all
16:02:15 Hi
16:02:16 hello rushiagrawal
16:02:21 hi all
16:02:29 o/
16:02:29 jgriffith: so the patch has to be merged by next Tuesday, right?
16:02:30 Hi
16:02:33 hi
16:02:36 xyang_: correct
16:02:36 hi
16:02:48 xyang_: which leads me to my question for you :)
16:03:02 xyang_: do you think you'll get through the legal team by then?
16:03:11 morning
16:03:13 xyang_: we can always defer to H2
16:03:18 good afternoon
16:03:20 not a big deal at all
16:03:21 hi
16:03:23 good evening
16:03:34 jgriffith: I'm still waiting. Can I get back to you later today?
16:03:43 xyang_: sure, that's fine
16:03:59 winston-d: around?
16:04:08 jgriffith: yup!
16:04:35 I think your rate limiting BP is pretty close (at least for the Cinder side) and the first portion
16:05:05 winston-d: shoot... wrong bp
16:05:07 :)
16:05:16 scheduler hints :)
16:05:51 winston-d: should we change the owner on that?
16:06:19 For those wondering what I'm talking about: https://review.openstack.org/#/c/28945/
16:06:25 jgriffith: yes, please change to the Mirantis folks
16:06:26 thanks
16:06:52 winston-d: I went through it again last night; I think they covered the concerns folks raised
16:07:12 winston-d: I think it's ready to go but needs another good going-over by me and at least one or two others
16:07:17 jgriffith: there's something else that needs fixing. I'll post comments
16:07:34 winston-d: I suspected that might be the case :)
16:07:34 the concern I had with it that I haven't raised is whether it has to touch v2/volume.py if it's using the wsgi.extend decorator
16:08:21 thingee: interesting
16:08:40 I haven't looked closely at how wsgi.extend works, but I wanted to play with the patch to see if it's necessary
16:08:57 if I drop the ball on this, I can always change it later.
16:09:02 thingee: I don't know how that would work, but if you want to look at that, that's great
16:09:12 :)
16:09:19 thingee: we've got time so that's fine by me
16:09:33 thingee: you might want to ask the Mirantis folks if they've already thought about/looked at that
16:10:31 So most of the other things for H1 are on me :(
16:10:33 https://launchpad.net/cinder/+milestone/havana-1
16:10:55 But... if anybody has some cycles, there are a few bugs that aren't assigned or triaged
16:11:25 Any volunteers?
16:11:31 thingee: you can't volunteer!
16:11:44 jgriffith: i'll take a look
16:11:50 winston-d: thanks
16:11:55 * thingee needs to pull more all-nighters
16:12:02 * hemna needs to clone himself
16:12:17 I'm kinda slammed at the moment
16:12:17 bswartz: any update on 1139129?
16:12:21 https://bugs.launchpad.net/cinder/+bugs?orderby=status&start=0
16:12:34 * medberry will look at the queue but likely isn't ready to actually triage yet
16:12:45 which 1139129?
16:12:46 medberry: thanks Dave
16:12:54 bswartz: your bug targeted for H1
16:13:18 no update, sorry
16:13:22 bswartz: med_ et al. https://launchpad.net/cinder/+milestone/havana-1
16:13:33 I'm most interested in the items that are targeted for H1
16:13:49 The link above is items that we *said* we'd have done for H1, including bugs
16:13:59 bswartz: any idea when you might have an update?
16:14:02 nod
16:14:22 jgriffith: that one hasn't fallen off my radar, it just hasn't been a high enough priority
16:14:28 bswartz: I've already removed your unified driver; sounds like that hasn't gone anywhere either?
16:14:29 I'll plan on H2
16:14:39 bswartz: K, I'll adjust
16:14:41 jgriffith: the unified driver should be ready early in H2
16:15:01 bswartz: Ok, although it was originally slated for H1
16:15:28 bswartz: cool... updated
16:16:01 jgriffith: I'm hoping bug 1122898 will be addressed by hemna's patch for H2 ("Refactor Cinder's iSCSI/FC attach code")
16:16:02 Launchpad bug 1122898 in cinder "Generic iSCSI copy volume<->image doesn't disconnect" [Undecided,Confirmed] https://launchpad.net/bugs/1122898
16:16:05 Ok, anything folks see on that list they want to comment on before I ask some more questions about a few :)
16:16:32 avishay: ok, so you want to retarget that one as well?
16:16:36 jgriffith: is this the time to discuss the milestones of havana-1?
16:16:42 jgriffith: yessir
16:16:53 avishay: ko!
16:17:01 lakhindr_: yes, but hold on a moment
16:17:10 I wanted to ask about the local disk stuff :)
16:17:15 jgriffith, I'll take a look at 1161157
16:17:26 jgriffith, I'll take a look at 1161557
16:17:26 med_: awesome possum
16:17:31 avishay, subscribed to that one. we'll have to test it with the refactor
16:17:40 med_: if you *like* it, assign yourself pretty please :)
16:18:01 https://blueprints.launchpad.net/cinder/+spec/local-disk-storage-utils
16:18:04 hemna: yep, thanks :)
16:18:07 so I threw that out there ^^
16:18:31 Meanwhile we have: https://review.openstack.org/#/c/27051/
16:18:38 Here's my question....
16:18:48 yeah... wanted to ask about that...
16:19:00 Do folks see or hear of an advantage to using raw disks over LVM?
16:19:31 I see disadvantages
16:19:35 seems unlikely, especially once you get to the snapshot features..
16:19:44 jgriffith: apparently you - you wrote in the review "This is awesome and very much needed/requested as of late." :)
16:19:46 LVM provides more flexibility in the usage of the raw disk, no?
16:19:58 jgriffith: but i don't have a use case for it personally
16:20:03 the hadoop argument (or similar) seems to be valid
16:20:07 avishay: yes, and see my comments from last night... the point is I wanted to get input from others on this as well
16:20:33 i agree with med_, performance concerns could be one
16:20:57 My testing doesn't show LVM taking that big of a hit on perf though, which is why I wanted to bring this up
16:21:19 LVM gets a bad rap from folks like DuncanT :)
16:21:29 lol
16:21:32 But that being said....
16:21:45 Two folks pointing out a use case works for me
16:21:47 LVM can have bad performance when dealing with snapshots and such, but I can't imagine it being too noticeable if you're using it like a disk
16:22:00 avishay: not necessarily true either :)
16:22:08 avishay: thin provisioning baby!!
16:22:22 Ok... that's all I needed
16:22:26 maybe we can ask for some numbers from those folks
16:22:27 jgriffith: I said "can" :)
16:22:32 avishay: :)
16:22:52 So I'll figure out the timing of Ann's patch versus the brick work
16:23:10 Her patch may be just an "introduction" so to speak :)
16:23:23 so what are the use cases? i missed it? hadoop?
16:23:38 avishay: mostly just a perf thing
16:23:52 avishay: TBH what started it was the bare metal stuff
16:24:05 jgriffith, my hadoop "expert" would really expect some optimizations in hadoop to do the wrong thing if on LVM....
16:24:09 avishay: there's a real need for disk management for Ironic and company
16:24:27 jgriffith: gotcha
16:24:43 kk
16:24:56 so we covered H1 and status in one shot :)
16:25:03 Moving along to lakhindr_
16:25:11 I am here :-)
16:25:13 #topic Test code and example driver files
16:25:20 lakhindr_: I gave this its own topic :)
16:25:24 jgriffith: can we discuss the share service briefly?
16:25:25 :)
16:25:39 bswartz: sure, there's time at the end
16:25:44 okay
16:25:51 bswartz: or right after this, I'll bump it up on the list :)
16:25:58 ty
16:26:07 bswartz: yw
16:26:12 Ok.. so: https://review.openstack.org/#/c/28791/
16:26:20 For those of you that haven't been following
16:26:38 There's a debate here and I think lakhindr_ would like to get some input
16:26:47 So I'm going to give him 5 minutes to make his case :)
16:26:56 Starting... *NOW*
16:27:06 time's up!
16:27:09 (Thanks jgriffith: Folks, I have a proposal, which I wanted to mail, but then thought let me first try http://paste.openstack.org/show/37593/)
16:27:18 hemna: thanks :-)
16:27:21 :)
16:27:29 LULZ
16:28:01 I am here to briefly discuss what I have in mind. I can email it too. But basically I find merit in simulating our back end compared to, say, mocking everything.
16:28:21 It has simplicity and power. And with that in mind, if you look at my code you can see how self-tests are run.
16:28:29 lakhindr_: I think folks get the point
16:28:41 lakhindr_: and I don't think simulating/mocking the backend is the debate
16:28:43 Oh, since you gave me 5 minutes, I thought I had to speak up :-)
16:28:58 lakhindr_, the 3PAR unit tests "simulate" the back end (array) as well, via a reimplemented client class
16:28:58 I was coming to that :-)
16:29:01 lakhindr_: ok, sorry, go ahead, but I didn't want you to waste your 5 minutes :)
16:29:31 the storwize/svc tests work against a simulator as well as real storage
16:29:43 Ah, so I have run afoul of some rules, e.g. where should a sample configuration file go. Some suggestion is, well, don't use it! But I prefer to.
16:30:04 I think that's the crux of the -1's on the review
16:30:11 And the question of where it should go is, I think, easily resolved when a vendor subdirectory contains all the stuff in one place.
16:31:03 lakhindr_: I thought it was resolved when we create the block-storage-admin guide with driver config sections?
16:31:12 s/create/created/
16:31:27 lakhindr_: here's the issue I have with this. I want unit tests. I won't argue there is some gain in functional tests, but our tests are already slow with the mixture there is. I'd argue that the focus of your driver tests should be on the implementation, not whether something can open a config.
16:31:32 jgriffith: I am not saying anything about the admin guide. We shall of course abide by that.
16:32:04 My focus is on unit tests, thingee! And we are following all the requirements, indeed.
16:32:19 And of course the focus is on the driver, actually! Config is part of it!
16:32:57 sighhh... I fear there's no compromise here
16:33:06 What's up with the separate config file trend instead of cinder.conf?
16:33:23 avishay: that is too simple.
16:33:23 lakhindr_: I suspect what's going to happen: folks will keep -1'ing your patch until you at least attempt to address the points they've made
16:33:42 lakhindr_: what is too simple?
16:34:06 the confusion, I believe, is that other drivers also have their sample configs in place... which set a bad precedent.
16:34:09 avishay: as array configurations become more sophisticated they run beyond the scope of simple one-liner config parameters
16:34:11 lakhindr_: can you tell me the problem with the current things provided to the drivers that are in cinder?
16:34:37 lakhindr_, that's what documentation is for IMHO
16:34:42 hemna: i know, that's why i'm asking why people started with this craziness :)
16:34:44 hemna: +1
16:34:44 not sample config files
16:34:51 thingee: I don't talk of the problems in other drivers, sorry.
16:35:18 hemna: documentation is for what? sorry, did not follow?
16:35:33 avishay: I have NO problem with people using external config files
16:35:34 s/?$/.$/
16:35:56 lakhindr_, to describe the more complex configurations of the drivers.
16:36:19 hemna: there is already a precedent. Many vendors already use it here!
16:36:26 jgriffith: i didn't say i had a problem with it, just asking why not use cinder.conf
16:36:41 lakhindr_: and some have spoken about being ok with removing them since we have proper docs now
16:36:43 there's one benefit of this external config file: you don't have to restart cinder-volume. With cinder.conf, you need to
16:36:44 the more complex the configurations are for a driver, the better the documentation needs to be. We don't need sample config files in the source code to help admins.
16:37:21 folks, this is not about documentation. I agree we can document everything the way it is required. But I want to run tests off a real configuration file.
16:37:27 lakhindr_, hemna: and that's why I'm mentioning this. This is not cinder's concern. This is a vendor's concern in making sure this is conveyed in the proper channels.
16:37:29 xyang_, I don't think that's the issue here. using external configs for a driver is ok... the real issue is, do we allow/want sample config files in cinder source
16:38:00 hemna: I say yes. Because we want to test off a sample config.
16:38:27 that config should be in the unit test code itself, not in the driver's source tree.
16:38:28 hemna: that's ok. I'm ok to remove it. I added it there based on some review comments a while back
16:38:31 lakhindr_: I've noticed windows storage for example has stuff for their particular tests in cinder.tests.windows
16:38:39 hemna: +1
16:38:49 thingee: +1 for you too :)
16:38:53 I'm asking HDS to do the same. Along with moving any simulation code out of the implementation into the test files like the other drivers.
16:39:03 thingee, +1
16:39:03 hemna: would you like me to pollute cinder/cinder/tests with a config file?
16:39:16 lakhindr_: better than polluting the implementation
16:39:23 lakhindr_, pollute the test_hds.py :)
16:39:32 hemna: +1
16:39:36 hemna: +1
16:39:49 hemna: we have a basic disagreement there. Because I feel value in reading a real config file :-)
16:40:10 I don't disagree with reading a "real config file"
16:40:19 just put that "config file" as a string in test_hds.py
16:40:29 Folks, if this is a big thing, I shall go your way. Just look at my paste bin, and give your comments when we have time. I can post it to some mailing list.
16:40:32 lakhindr_: but you can cover that in tempest tests rather than in unit tests
16:40:46 lakhindr_: that shouldn't be the focus of your test. Making sure a config file can open *every time* in a test is not the focus.
16:40:53 sorry, I don't know the tempest test..
16:41:13 thingee: sorry, I don't think I agree with what is the 'focus'. That is so subjective..!
16:41:22 lakhindr_: that's the integration test framework
16:41:36 OK. So then allow me to test things fully!
16:42:00 Why is the restriction in testing so tight, please, may I ask? We all have the same goal!
16:42:31 we aren't against testing. we just aren't for polluting the source tree with sample config files.
16:42:43 Ok, we need to save some time for bswartz and I don't see this really being overly productive
16:42:53 yes, this is going around in circles
16:42:59 you can accomplish everything you want by just putting that sample config file as a string in your unit test code.
16:43:03 hemna: thank you for being the voice of reason :)
16:43:07 lakhindr_: I think the community (along with most of cinder core) has expressed the way they would like you to move forward.
16:43:19 OK.
16:43:34 Can I still do the simulation then?
16:43:34 lakhindr_: if you have 3 or 4 folks -1 your patch w/ a recommendation, it may be best to at least attempt to cooperate and implement their suggestions
16:43:42 lakhindr_: but I'll leave that up to you
16:43:52 #topic shares-service
16:43:56 bswartz: you're up
16:43:57 I will cooperate with you guys of course!
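[Editor's note: the pattern suggested above — embedding the sample config as a string inside the driver's unit test module and exercising the driver against a small in-test backend simulator, rather than shipping a sample config file in the source tree — can be sketched roughly as follows. All names here (SAMPLE_CONF, FakeBackendClient, etc.) are illustrative, not actual Cinder code.]

```python
# Hypothetical sketch: sample config lives as a string in the test module,
# and the "array" is a reimplemented fake client, as the 3PAR and
# storwize/svc tests were described as doing.
import configparser
import io
import unittest

# The "real config file" content, inlined instead of shipped in the tree.
SAMPLE_CONF = """
[backend]
mgmt_ip = 10.0.0.5
pool = default
"""


class FakeBackendClient:
    """Minimal simulator standing in for the real array client."""

    def __init__(self, mgmt_ip):
        self.mgmt_ip = mgmt_ip
        self.volumes = {}

    def create_volume(self, name, size_gb):
        # Record the volume instead of talking to real hardware.
        self.volumes[name] = size_gb
        return {"name": name, "size": size_gb}


class TestDriverWithInlineConfig(unittest.TestCase):
    def setUp(self):
        # Parse the inline sample config exactly as a file would be parsed.
        parser = configparser.ConfigParser()
        parser.read_file(io.StringIO(SAMPLE_CONF))
        self.client = FakeBackendClient(parser["backend"]["mgmt_ip"])

    def test_create_volume(self):
        vol = self.client.create_volume("vol1", 10)
        self.assertEqual(vol["size"], 10)
        self.assertIn("vol1", self.client.volumes)


if __name__ == "__main__":
    unittest.main()
```

This keeps the driver source tree free of sample config files while still testing "off a real configuration file" in spirit: the config text is parsed with the same parser a real file would use.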
16:43:59 thanks jgriffith
16:44:19 so we have resubmitted the shares service here: https://review.openstack.org/#/c/29821/
16:44:44 for those of you that don't remember the discussion from G3, this is adding support for management of NAS storage to openstack
16:44:56 on to the next easy topic :)
16:45:02 :)
16:45:03 kmartin: haha
16:45:05 haha
16:45:19 s/openstack/cinder
16:45:23 :P
16:45:46 Everyone agrees that management of NAS storage shouldn't be in cinder long term, but in the short term, creating a new service in cinder is the fastest way to get the community started, and to get the feature into customers' hands
16:45:58 yes hemna
16:46:34 bswartz: fast != correct
16:46:47 the main things we've done between G3 and now have been to write up some good documentation, add tests to tempest, and bring the share service up to parity with many new volume features from grizzly
16:46:50 bswartz: and I'm concerned about maintenance/support more than anything else
16:47:26 bswartz: I haven't seen the docs and tempest tests?
16:47:30 NetApp is signing up to maintain and support the share service until the community grows large enough
16:47:48 bswartz: what about your current code?
16:47:53 jgriffith: the tempest tests are in a gerrit WIP submission -- they can't go in until the service goes in
16:48:08 jgriffith: we also have devstack support coming, but that depends on the service going in as well
16:48:10 bswartz: sure, but you can share that stuff w/ folks ya know
16:48:22 bswartz: again, why is all of this being done in a silo?
16:48:34 what silo? we're posting everything we do publicly
16:48:58 bswartz, jgriffith: this is true. Really, it's going to slow down development on both shared and block in terms of resources, especially if it's not long term (which I'm not going to argue about)
16:49:21 If we know that the share service needs to live outside of cinder, and you've been working on this for how many releases now? Let's just make this its own separate service now.
16:49:33 anyways, I've talked with many of you already, and I'm just asking that you review the code, and +1 it if you like it
16:50:22 hemna: work needs to be done on oslo before we can effectively split the code -- right now we effectively share a lot of low-level cinder stuff
16:50:25 bswartz, we have a vested interest here at HP in getting the share service into openstack. But I still don't think it belongs in cinder.
16:50:30 bswartz: can you provide links to the devstack, tempest and docs you referred to?
16:50:45 i think this inside-Cinder vs. outside-Cinder question probably needs some ML discussion outside of just Cinder folks
16:50:46 I'll share links soon..
16:50:47 to me having NAS in Cinder is almost as strange as Swift in Cinder
16:50:49 hemna: +1, my company as well
16:50:55 bswartz: from last time we spoke for g3, do we have any other vendors chiming in on the api?
16:50:59 rushi_agr: thanks :)
16:51:00 i haven't heard much from other openstack folks on this
16:51:04 avishay, +1
16:51:06 Hopefully tomorrow
16:51:19 rushi_agr: oh... so it's not there yet?
16:51:21 tempest: https://review.openstack.org/#/c/26598/
16:51:23 * jgriffith is confused
16:51:28 bswartz: cool.. thanks
16:51:31 avishay: I've been saying just put share in swift!
16:51:39 lol
16:51:47 hahah
16:51:50 It's there... wanted to check as it's a month old
16:51:51 rushiagr: do you have the wiki link?
16:51:51 thingee: +1
16:52:13 thingee: "here - your problem now!" :)
16:52:15 bswartz: rushi_agr so just an observation...
16:52:26 bswartz: but seriously, any other vendors chime in on the api?
16:52:32 You've got probably well over 10K lines of code associated with all of this
16:52:38 eharney: are you online?
16:52:43 How much harder would it really be to just create a project?
16:52:51 yes, we estimated about 10k lines of code
16:52:54 10k lines
16:52:56 yes
16:53:10 There is a wiki.. need to search for it
16:53:27 maybe I missed a joke, but IRC beeps at me when swift is mentioned. is there something we should be doing on the swift side?
16:53:33 I've talked to eharney about support for gluster in the share service
16:53:51 notmyname: you missed a joke
16:53:56 notmyname: no, it's a joke
16:53:59 notmyname: run
16:53:59 hopefully eharney will be able to find time to review
16:54:12 notmyname: you might end up with 10k lines of code on your doorstep :P
16:54:15 yes, i looked at it a bit ago, and will be diving back into the updated code shortly
16:54:23 eharney: thx
16:54:36 notmyname: you guys are going to implement share-services in swift :)
16:54:45 anyways, the big complaint about the code during grizzly was that it wasn't ready until G3 and people were nervous
16:55:01 jgriffith: I'll be happy to -2 10k-line patches in swift :-)
16:55:02 and that it doesn't belong in cinder
16:55:07 notmyname: :)
16:55:19 we're here now in H1 and we're committed to fixing any issues during havana, and the tempest/cinderclient/devstack stuff will come not long after the code is accepted
16:55:19 notmyname: lol
16:55:21 and that it only supported NetApp
16:55:21 notmyname, as long as you don't +2 in cinder
16:55:35 med_: HA!!
16:55:53 * notmyname is not sure what a share service is
16:56:02 bswartz: so if it goes in and bugs get to a threshold and aren't being fixed, and it's not keeping up...
16:56:10 That means it can be removed without objection?
16:56:13 notmyname, nfs, smb, etc.
16:56:28 ah
16:56:42 yeah, if netapp can't maintain it or build a community to maintain it then it should be trashed, absolutely
16:56:43 thinks NAS
16:56:47 jgriffith, pulling 8k lines of code spread throughout cinder.... at the last minute of H3.
16:56:49 gives me chills
16:56:54 we're just asking for an opportunity to get started
16:57:30 incubating a whole new project is a LONG process
16:57:37 bswartz: Just to be clear, for the past year and a half there's been an opportunity
16:57:41 We'd like to do what nova-volume did
16:57:50 bswartz: you would've been through it by now FYI
16:57:56 The safest place for the share service, and the most flexibility for the share service and its core members, I believe, is in its own project.
16:58:03 bswartz: and there's a reason that it's a long process
16:58:16 bswartz: this shortcut is a dangerous deal whether you think so or not
16:58:22 I think the community for shares will largely overlap with the community for block storage
16:58:33 bswartz: I don't think that's true
16:58:47 bswartz: I have a difficult time with resources on block as it is
16:58:48 not 100%, but many storage vendors have solutions for both
16:58:57 The community may overlap, but I think these two sides to Cinder will influence each other in bad ways
16:59:00 bswartz: I don't need more vendors adding drivers
16:59:07 bswartz: I need folks to work on the core project
16:59:39 i still think the question of incubating within Cinder vs. starting another project needs to hit the ML for input from some non-Cinder folks
16:59:44 bswartz: anyhow... I still would urge you guys to do this right and create a project
16:59:51 bswartz: you've already done most of the work
17:00:00 bswartz: say we allow it in cinder for the time being, wouldn't it be almost impossible to pull it back out when it becomes its own service? we're talking 7500 lines of code, where it really belongs in the first place
17:00:10 eharney: we've tried that and there was very little input from the dev community
17:00:19 jgriffith, bswartz: own project and incubated by openstack +1
17:00:28 thingee, +1
17:00:28 kmartin: we did take steps to address that
17:00:47 jgriffith: hrm, seems odd, but maybe that's why i haven't heard much from outside
17:00:52 kmartin: disabling the service if we're not happy with it doesn't require changing much
17:00:55 kmartin: netapp and bswartz in particular were extremely cooperative in working with me on the design last summer to help with that
17:00:56 jgriffith: heads up, i believe there's another meeting starting here very soon
17:01:08 bswartz: if it was its own project you could +2 it now
17:01:13 danwent: thank you sir
17:01:16 bswartz: :)
17:01:19 folks, we're out of time
17:01:25 #openstack-cinder
17:01:25 own project +1
17:01:25 thanks everyone
17:01:30 we're all there anyway
17:01:37 I was talking about pulling the code out, not disabling it
17:01:37 #endmeeting