15:59:48 #startmeeting cinder
15:59:49 Meeting started Wed Jun 26 15:59:48 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:52 The meeting name has been set to 'cinder'
16:00:01 heelllllllooooooooo
16:00:01 Hi!
16:00:06 hello
16:00:16 hi guys
16:00:32 hey
16:00:39 hi
16:00:53 hi all
16:00:58 o/
16:01:24 so let's start
16:01:28 seiflotfy_: oh
16:01:30 ok
16:01:31 :)
16:01:39 * jgriffith was getting a cup o'joe
16:01:42 who put in the first item on the agenda
16:01:46 me
16:01:47 jgriffith: oh go, no hurry
16:01:53 #topic pecan
16:01:55 was waiting for topic switch
16:01:56 there we go
16:02:03 thingee: any blueprint for it?
16:02:16 seiflotfy_: it's on the agenda
16:02:19 https://wiki.openstack.org/wiki/CinderMeetings
16:02:29 firefox nightly is broken
16:02:32 i can't see any links
16:02:36 brb
16:02:57 thingee: go for it
16:03:02 folks, it's a scary change. if you've read john's and my points on the ML, it's going to be one big commit, which would be scary to review
16:03:10 thingee: why the switch? is it mainly for python 3?
16:03:19 avishay: it works towards that goal sure
16:03:22 gets rid of paste
16:03:32 back
16:03:55 so I propose instead of fixing v1 and v2 to use pecan, we wait for a v3 bump and have pecan and paste run alongside each other
16:04:03 this is similar to what ceilometer did
16:04:04 +1
16:04:14 thingie: can't it be done in subtasks?
16:04:19 that way we have small commits for each v3 controller with tests
16:04:20 thingee: sorry
16:04:21 one at a time
16:04:26 ah ok
16:04:28 cool
16:05:00 I also encourage people to not work on the patch. I don't really care if I'm the person that does, but just for the sake of review resources
16:05:20 I've had several people ping about wanting to collaborate and I don't think it's worth resources right now
16:05:43 the blueprint has a link to my github branch which moved v1 over. I can easily change that to be v3 so this should go smoothly in I
16:05:47 any questions?
16:05:50 so my 2 cents; changing the entire web framework out from under the existing API needs some more justification than what we have so far
16:06:06 hi all=)
16:06:07 Doing it in an isolated V3 seems more pragmatic
16:06:21 thingee: sorry... thought you were done :)
16:06:34 no that's fine
16:06:42 here's the thread that lists the points http://lists.openstack.org/pipermail/openstack-dev/2013-June/010857.html
16:06:51 thingee: which release are you targeting with this?
16:07:00 seiflotfy_: I mentioned above: I
16:07:14 if we have a reason for a version bump, which I think we might
16:07:23 I don't want a version bump just for a framework switch
16:07:39 +1
16:07:40 and as john mentioned a version bump each release is kind of a bummer.
16:07:49 would rather make things sane for ops
16:08:12 If there were more compelling gains or bug fixes from going to pecan that'd be one thing
16:08:27 but as it stands I say cache it until we need a bump for other things
16:08:34 got my coffee... phew
16:08:38 but Icehouse we'll probably have a reason for a bump...and I already have most of the work done there for pecan. just gotta make the old framework run alongside pecan which is pretty easy imo
16:08:51 There are a few bits of crazy in our API (inc V2), but as people start to write things that talk to cinder we need to think about long term support
16:09:11 DuncanT: why do I not hear about these?
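(For context on the pecan plan above — "just gotta make the old framework run alongside pecan": a minimal sketch of one way such coexistence could be wired, with /v3 dispatched to a Pecan app while v1/v2 stay on the existing paste pipeline. Every module path and config location in it is an illustrative assumption, not Cinder's actual layout.)

    # Sketch only: /v3 goes to a Pecan app, everything else to the paste-built app.
    # The controller module and paste config path below are assumptions.
    import pecan
    from paste import deploy


    class VersionRouter(object):
        """WSGI dispatcher composing the legacy paste app with a Pecan app."""

        def __init__(self, legacy_app, pecan_app):
            self.legacy_app = legacy_app
            self.pecan_app = pecan_app

        def __call__(self, environ, start_response):
            path = environ.get('PATH_INFO', '')
            if path.startswith('/v3'):
                # Shift the version prefix into SCRIPT_NAME, per WSGI convention,
                # so the pecan routes don't need to know about it.
                environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + '/v3'
                environ['PATH_INFO'] = path[len('/v3'):] or '/'
                return self.pecan_app(environ, start_response)
            return self.legacy_app(environ, start_response)


    def make_app():
        # Existing v1/v2 stack, loaded from the current paste config (path assumed).
        legacy = deploy.loadapp('config:/etc/cinder/api-paste.ini',
                                name='osapi_volume')
        # New v3 stack: Pecan resolves the dotted path to a root controller
        # (hypothetical module -- the real v3 controllers don't exist yet).
        v3 = pecan.make_app('cinder.api.v3.controllers.root.RootController')
        return VersionRouter(legacy, v3)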
16:09:13 :)
16:09:31 here I am going to meetups and bragging about how awesome cinder is :P
16:09:44 presentations and all :D
16:09:47 DuncanT: you wanna share your insights?
16:10:12 jgriffith: Little things... resize during snapshot, no need to force for an attach clone
16:10:26 DuncanT: those aren't API issues
16:10:26 Couple of other bits I need to flick through my notebook for
16:10:40 DuncanT: Those are things that *you* don't like in the Cinder behaviors
16:10:43 that's different
16:10:52 DuncanT: make me bugs and have john target them... I now have a lot of bandwidth
16:10:57 and others... not just you
16:11:02 They're issues with the definition of the API, not the implementation, sure, but they're things that we might want to make sane in V3
16:11:26 The behaviour /is/ the API...
16:11:31 I'm not prepared for the snapshot/volume argument yet again
16:11:37 haha
16:11:44 I've given up on that one for now
16:12:01 Ok so any questions regarding the pecan switch?
16:12:01 Ok... anyway, DuncanT makes a good point
16:12:08 DuncanT: Log some bugs if you would
16:12:13 Sure
16:12:17 Will do
16:12:22 DuncanT: thanks
16:12:36 I think I'm done... if anyone has questions later, feel free to ping me
16:12:37 That'll fit nicely in with thingee's plan regarding pecan V3 in I (hopefully)
16:12:49 jgriffith: hopefully? :(
16:13:01 thingee: ok... s/hopefully/''
16:13:07 :)
16:13:37 V3 will be slated for I, I just hope there are other really cool things to go in it
16:13:37 losing faith in me, sheesh
16:13:44 no no no.... not at all
16:13:48 smart ass!
16:13:52 it's a sane approach. it just took me writing 5k lines of code to realize it
16:13:58 :P
16:14:05 that's better than 10k lines of code
16:14:14 So I'm hoping that DuncanT will come up with all kinds of new things we need in V3
16:14:17 hemna: I only did v1 at that point and some tests
16:14:28 So we'll have a brand new shiny toy for I all the way around
16:14:40 ooh...shiny!
16:14:43 The "season of the API"
16:14:48 hemna: imagine the diff stat once I finished ;)
16:14:50 could be 10k
16:14:54 lol
16:15:02 o_O
16:15:15 alright... everybody cool with the Pecan decision?
16:15:15 avishay: ?
16:15:17 that would require another release to review it :P
16:15:22 jgriffith: sounds good to me
16:15:23 avishay: you're unusually quiet this evening
16:15:32 jgriffith: just no objections :)
16:15:38 ;)
16:15:40 alright...
16:15:51 #topic ceph support in Cinder
16:15:54 seiflotfy_: you're up
16:16:15 well i wanted to know if everybody is ok with the current map and if it will make it in "I"
16:16:32 I... you mean H?
16:16:39 i don't think it will make it in H
16:16:43 if it can that would be amazing
16:16:48 cinder-backup-to-ceph is *hopefully* ready now ;)
16:16:50 Erm, should make it in H... looks to be making good progress...
16:16:54 Sorry... don't know what you're talking about then
16:16:55 NICE
16:17:07 You have 3 patches listed, 3 patches under review
16:17:18 You have some other plan that we don't know about :)
16:17:30 jgriffith: just references to say that this is what is still to be done
16:17:34 I can't see any real benefit to the interface class, other than making java coders slightly more at home, but it is harmless enough
16:17:36 and they look good
16:17:58 DuncanT: ;)
16:18:01 only two patchsets here (if you meant me)
16:18:07 one was abandoned
16:18:18 alright... let's back up
16:18:22 DuncanT: good point, but I also see a benefit for other "new" backend services
16:18:26 On the agenda:
16:18:30 Item #2
16:18:41 yep back to number 2
16:18:43 seiflotfy_ has "Discuss status of Ceph support in Cinder"
16:18:52 and there are 3 reviews listed
16:19:02 yeah, so is it possible for us to have it for havana?
16:19:16 also what tests do we have for it
16:19:17 seiflotfy_: so Havana is the current release we're working on
16:19:24 how do we intend to test this properly
16:19:29 seiflotfy_: Havana will be cut from master in the fall
16:19:38 seiflotfy_: that's your job :)
16:19:45 seiflotfy_: It is undergoing a perfectly normal trajectory to land on trunk in the next week or two...
16:19:50 seiflotfy_: submitting that patch means I've assumed you test it :)
16:20:01 I think we need to spend time on performance testing
16:20:04 jgriffith: i tested it with my old shitty patches
16:20:06 and it worked
16:20:16 but it was really slow
16:20:40 jdurgin1: can you test it? :)
16:20:43 managed to backup 1 gig
16:20:46 :P
16:20:47 like actually whitebox testing
16:21:00 can someone clarify what patches we are discussing here
16:21:05 if it is item 2
16:21:07 a question would be how can we make use of ceph-to-ceph backup without going through the generic route
16:21:10 mkoderer, we (my group at HP) just got legal approval to release the performance script I wrote a while back to test cinder
16:21:13 is that up for question
16:21:15 two of those are duplicates
16:21:25 dosaboy: :)
16:21:36 hemna: sounds great
16:21:38 https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/cinder-backup-to-ceph,n,z
16:21:38 dosaboy: indeed
16:21:38 and I have tested them quite extensively
16:21:51 mkoderer, https://github.com/terry7/openstack-stress
16:21:52 but more testing is never a bad thing
16:22:47 dosaboy: do you intend to allow use of rbd tools for ceph2ceph backups?
16:22:59 jgriffith, seiflotfy_: I'll see if jdurgin1 wants to test things out
16:23:18 not sure what you mean, but look at the bp for what remains to be implemented
16:23:51 Ok, so I'm not sure how well organized this topic is... shall we move on?
16:23:55 is there anything else relevant to discuss in terms of this in cinder?
16:24:00 ok cool, so to sum it up "cinder ceph backup" ===> might make it in havana, needs more testing
16:24:01 Item #3 ?
16:24:13 jgriffith: yes
16:24:15 yes pls
16:24:23 #topic parent class for backup service?
16:24:30 mkoderer: go ahead
16:25:06 ok I just introduced this interface class
16:25:14 I know DuncanT hates me for it ;)
16:25:36 but I think we could put some overall functionality in it
16:25:36 mkoderer: I think there's just a question of "what's the plan"
16:25:59 the idea is we now have 2 backends, swift and ceph, and more will be coming i guess
16:26:22 just to have a standard class that one can orient oneself on
16:26:38 seiflotfy_: so i think this is fine...it sets a guideline for devs adding additional services. However, I would like to see documentation that explains this a bit more for newcomers wanting to add their object store
16:27:08 I don't really care who does that...but it'll be, well, great :)
16:27:08 thingee: good point
16:27:32 thingee: i think mkoderer will take the lead on this
16:27:32 :d
16:27:41 sure np
16:27:50 seiflotfy_, mkoderer: wonderful, thanks guys!
16:27:51 so i assume you want us to plan it out more and introduce it again in a better blueprint?
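(For reference on the interface class under discussion: a rough sketch of what a parent class for backup services could look like — an abstract base that swift- and ceph-style services would subclass. The method names and signatures below are illustrative guesses, not necessarily the interface as actually proposed in the review.)

    # Sketch of a parent class for backup services (swift, ceph, ...).
    # Method names/signatures are assumptions for illustration.
    import abc


    class BackupService(object):
        """Base class a concrete backup service would extend.

        Shared behaviour (chunking, metadata bookkeeping, progress reporting)
        could live here, so each backend only implements the storage-specific
        parts -- the "standard class one can orient oneself on" mentioned above.
        """

        __metaclass__ = abc.ABCMeta  # Python 2 idiom, matching 2013-era OpenStack

        @abc.abstractmethod
        def backup(self, backup, volume_file):
            """Store the volume's data in the backing object store."""

        @abc.abstractmethod
        def restore(self, backup, volume_id, volume_file):
            """Write a previously stored backup back onto a volume."""

        @abc.abstractmethod
        def delete(self, backup):
            """Remove a backup from the backing store."""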
16:28:26 ok cool
16:28:31 that would be a good idea
16:28:33 cool by me, follows the patterns we use everywhere else
16:28:45 item 4?
16:28:52 jepp
16:28:58 #topic community recognition
16:29:11 so in my free time i do work for GNOME and Mozilla
16:29:15 seiflotfy_: I don't think it needs to be planned more...the interface is already defined. I think the documentation will speak for it :)
16:29:26 thingee: ok
16:29:29 so back to 4
16:29:48 the idea is i have a small script i can adapt that goes through git and bugzilla (will change it to launchpad)
16:30:14 we use it at mozilla with every release to detect new code contributors
16:30:28 and publish it via a link in the release notes
16:30:31 doesn't openstack already do this?
16:30:36 seiflotfy_: FYI we have one of those :)
16:30:39 guitarzan: yup
16:30:40 they do?
16:30:40 seiflotfy_: https://github.com/j-griffith/openstack-stats
16:30:42 ok
16:30:45 guitarzan: yes
16:30:48 then no need for me to do it
16:30:50 it's in the community newsletter thing
16:30:52 just wanted to help
16:30:57 seiflotfy_: :)
16:31:08 ok less work for me then :D
16:31:35 wow look at that, 9:30
16:31:40 pdt
16:31:44 16:30 whatever
16:31:53 is it currently done for things like: new reviewers, new people active on launchpad (but haven't committed code)?
16:32:06 thingee: no banking time for next meetings!
16:32:34 eharney: we can look into this and try to work it out during the week
16:32:37 #topic H2
16:32:43 i don't know of any real needs there, just thinking
16:32:45 real quick
16:32:55 https://launchpad.net/cinder/+milestone/havana-2
16:33:02 we're a bit stalled on BP's here
16:33:14 anyone from mirantis around this morning?
16:33:42 eharney: also looking for an update from you on the LIO BP
16:34:12 bueller... bueller
16:34:16 yes, i need to update there
16:34:20 * jgriffith is talking to his dog this morning
16:34:21 :)
16:34:26 haha
16:34:28 at the moment gluster snaps work has been higher priority for me
16:34:41 eharney: You still planning on H2, or you want it deferred?
16:34:54 eharney: I can defer it and if you get to it bring it back in
16:35:01 realistically it should probably be at H3 at this point
16:35:20 eharney: sounds good
16:35:28 i did have a question there though
16:35:38 eharney: have at it
16:35:44 we have this idea of minimum driver requirements, right
16:35:54 eharney: indeed, we do
16:36:02 i'm trying to understand how that works for a driver like this that supports multiple different backends
16:36:28 eharney: not sure I follow?
16:36:41 eharney: this is what I consider more a base layer than a driver per se
16:36:50 well... it's an iscsi driver
16:37:00 eharney: or are you talking about your gluster work?
16:37:09 here, the driver supports libstoragemgmt, which enables support for targetd, and a couple of other storage platforms
16:37:38 so, meeting minimum requirements for the driver may depend on what backend you configure it to use
16:37:50 eharney: well, I think it's a different category
16:38:00 ok, makes sense
16:38:02 eharney: min requirements for LIO would be >= tgtd
16:38:04 No?
16:38:10 right
16:38:36 eharney: and if we're not switching the default (which it looks like we won't due to time) it's an option/beta so to speak anyway
16:39:00 was that what you were wondering?
16:39:14 i think that covers what i was wondering
16:39:30 eharney: k... ping me if there's more questions
16:39:36 or if I'm missing a point here
16:39:50 ok
16:40:21 folks, for volume-host-attach, when you have time please take a look at https://review.openstack.org/#/c/34125/ , i think it's nearly ready to merge.
16:40:30 Ok.. we have no winston, so we can't get into the QoS rate-limiting debate
16:40:30 phewww
16:40:36 I would like it if folks could help out with guitarzan's type-quota patch
16:40:43 I would like that as well :)
16:41:18 we need some input on how this should be presented
16:41:22 guitarzan and I have talked a bit but I think I'm stuck... need some brainstorming
16:41:49 and need to make sure nobody pukes on it when they notice it later
16:41:49 :)
16:41:51 guitarzan has a number of possibilities worked up that he can share
16:41:56 jgriffith, guitarzan: I can help after the morning meeting... around 17:20 utc
16:42:24 I was starting to read through your discussions on channel, damn you two go on....
16:42:26 ;-)
16:42:32 haha
16:42:35 * guitarzan hides in shame
16:42:37 DuncanT: we'll need your input as well, as you've objected to the approach before
16:42:41 :)
16:42:46 almost as bad as you and I
16:42:52 or me and thingee
16:42:54 Indeed and indeed
16:43:03 or whoever is foolish enough to start a conversation with me :)
16:43:31 Ok, I had more... but quite frankly it'd be nice to wrap a meeting early for a change :)
16:43:34 At least it is harder for me to turn into a shouting match on IRC... apparently that can make bystanders nervous
16:43:38 #topic open discussion
16:43:46 DuncanT: wimps!
16:44:01 anybody have anything?
16:44:11 one more point re: min driver requirements
16:44:21 eharney: yes?
16:44:27 there are a couple of new driver reviews outstanding that probably aren't meeting those... we need to tell them something?
16:44:51 I verified gpfs
16:45:31 eharney: zvm and gpfs are the only two that come to mind
16:45:37 jgriffith, DuncanT: if you guys don't mind, I'm going to take my "that guy" role and start sending emails to driver owners?
16:45:54 Ohh... xtreemfs as well
16:45:56 thingee: :)
16:45:58 jgriffith: xtreemfs, "generic block" thing
16:46:12 thingee: Go for it
16:46:16 that's been stale for a while
16:46:30 speaking of GPFS, any idea why its blueprint isn't showing up in search (and therefore in the link in the commit message)?
16:46:32 https://blueprints.launchpad.net/cinder/+spec/gpfs-volume-driver
16:46:48 because he's got a bogus link
16:46:50 avishay: I believe jgriffith gave 'em a -2 about it
16:46:52 My plan is to put patches in to remove them the day after H3 closes, but it is probably far nicer to give people warning
16:47:00 avishay: does it not show ones "Pending approval"? dunno
16:47:02 I did, and even told him how to fix it
16:47:10 DuncanT: we're splitting, remember?
16:47:12 :)
16:47:28 thingee: :-)
16:48:04 jgriffith: how can he fix it?
16:48:08 avishay: FYI https://blueprints.launchpad.net/cinder?searchtext=gpfs
16:48:23 jgriffith: yes, his BP isn't there
16:48:29 avishay: yeah it is
16:48:43 https://blueprints.launchpad.net/cinder/+spec/ibm-gpfs-driver
16:48:47 jgriffith: no it's not...that one was made by someone else and is not relevant
16:48:56 jgriffith: this is his - https://blueprints.launchpad.net/cinder/+spec/gpfs-volume-driver
16:49:05 jgriffith: I've updated the bp for ceph backup to aim for h2 since that is hopefully realistic now
16:49:39 i created ibm-gpfs-driver
16:51:34 avishay: I'll look into it
16:51:41 jgriffith: thanks!
16:51:42 avishay: the fact that he marked it complete may be an issue
16:51:51 zhiyan: hi, can you provide the blueprint in your next patch commit message?
16:51:53 jgriffith: aahhhh...
16:52:20 8 minute warning
16:52:39 zhiyan: can you kill the one you started, or mark it superseded or something
16:52:53 jgriffith: ok
16:53:02 thingee: which one?
16:53:29 zhiyan: the patch that's introducing the gpfs driver should have a blueprint about adding the gpfs driver
16:53:49 thingee: haha... see, you just fell into the same trap that I did :)
16:53:50 thingee: it does, but the link is broken
16:54:02 thingee: zhiyan isn't doing that work... dinesh is
16:54:07 zhiyan had a bp
16:54:13 dinesh started a new one
16:54:23 zhiyan: please kill https://blueprints.launchpad.net/cinder/+spec/ibm-gpfs-driver
16:54:34 avishay: haha :)
16:54:38 alright folks
16:54:46 we blew our early quit time
16:54:48 avishay: oh yeah that's what I meant :)
16:54:55 thingee: :)
16:54:57 I'm in #openstack-cinder as always
16:55:02 we can still be 5 minutes early
16:55:05 Thanks!!
16:55:10 Bye all!
16:55:16 guitarzan, DuncanT, jgriffith: can we talk about quotas in 20 mins?
16:55:24 sure
16:55:30 done
16:55:35 zhiyan: thanks
16:55:37 thanks everyone
16:55:38 #endmeeting cinder