15:59:48 <jgriffith> #startmeeting cinder
15:59:49 <openstack> Meeting started Wed Jun 26 15:59:48 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:50 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:52 <openstack> The meeting name has been set to 'cinder'
16:00:01 <jgriffith> heelllllllooooooooo
16:00:01 <mkoderer> Hi!
16:00:06 <agordeev> hello
16:00:16 <seiflotfy_> hi guys
16:00:32 <DuncanT> hey
16:00:39 <zhiyan> hi
16:00:53 <avishay> hi all
16:00:58 <thingee> o/
16:01:24 <seiflotfy_> so lets start
16:01:28 <jgriffith> seiflotfy_: oh
16:01:30 <jgriffith> ok
16:01:31 <jgriffith> :)
16:01:39 * jgriffith was getting a cup o'joe
16:01:42 <seiflotfy_> who put in the first item on the agenda
16:01:46 <thingee> me
16:01:47 <seiflotfy_> jgriffith: oh go no hurry
16:01:53 <jgriffith> #topic pecan
16:01:55 <thingee> was waiting for topic switch
16:01:56 <thingee> there we go
16:02:03 <seiflotfy_> thingee: any blue print for it?
16:02:16 <thingee> seiflotfy_: it's on the agenda
16:02:19 <thingee> https://wiki.openstack.org/wiki/CinderMeetings
16:02:29 <seiflotfy_> firefox nightly is broken
16:02:32 <seiflotfy_> i cant see any links
16:02:36 <seiflotfy_> brb
16:02:57 <jgriffith> thingee: go for it
16:03:02 <thingee> folks it's a scary change. if you've read john's and my points on the ML, it's going to be one big commit, which would be scary to review
16:03:10 <avishay> thingee: why the switch?  is it mainly for python 3?
16:03:19 <thingee> avishay: it works towards that goal sure
16:03:22 <thingee> gets rid of paste
16:03:32 <seiflotfy_> back
16:03:55 <thingee> so I propose instead of fixing v1 and v2 to use pecan, we wait for a v3 bump and have pecan and paste run with each other
16:04:03 <thingee> this is similar to what ceilometer did
16:04:04 <jgriffith> +1
16:04:14 <seiflotfy_> thingie cant it be done in subtasks ?
16:04:19 <thingee> that way we have small commits for each v3 controller with test
16:04:20 <seiflotfy_> thingee: sorry
16:04:21 <thingee> one at a time
16:04:26 <seiflotfy_> ah ok
16:04:28 <seiflotfy_> cool
16:05:00 <thingee> I also encourage people to not work on the patch. I don't really care if I'm the person that does it, but it's just for the sake of review resources
16:05:20 <thingee> I've had several people ping about wanting to collaborate and I don't think it's worth resources right now
16:05:43 <thingee> the blueprint has a link to my github branch which moved v1 over. I can easily change that to be v3, so this should go smoothly in I
16:05:47 <thingee> any questions?
16:05:50 <jgriffith> so my 2 cents; changing out the entire web framework from under the existing API needs more justification than what we have so far
16:06:06 <boris-42> hi all=)
16:06:07 <jgriffith> Doing it in a V3, isolated, seems more pragmatic
16:06:21 <jgriffith> thingee: sorry... thought you were done :)
16:06:34 <thingee> no that's fine
16:06:42 <thingee> here's the thread that lists the points http://lists.openstack.org/pipermail/openstack-dev/2013-June/010857.html
16:06:51 <seiflotfy_> thingee: which release are you targeting with this?
16:07:00 <thingee> seiflotfy_: I mentioned above
16:07:14 <thingee> if we have a reason for a version bump, which I think we might
16:07:23 <thingee> I don't want a version bump just for a framework switch
16:07:39 <jgriffith> +1
16:07:40 <thingee> and as john mentioned a version bump each release is kinda a bummer.
16:07:49 <thingee> would rather make things sane for ops
16:08:12 <jgriffith> If there were more compelling gains or bug fixes to going to pecan that'd be one thing
16:08:27 <jgriffith> but as it stands I say cache it until we need a bump for other things
16:08:34 <hemna> got my coffee..phew
16:08:38 <thingee> but Icehouse we'll probably have a reason for a bump...and I already have most of the work done there for pecan. just gotta make the old framework run along side with pecan which is pretty easy imo
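The side-by-side arrangement thingee describes, running the legacy paste-deployed API versions alongside a new framework's v3 app, can be sketched as a version-prefix WSGI dispatcher. This is an illustrative sketch only (stdlib, hypothetical app names), not Cinder's actual code:

```python
# Sketch: route /v1 and /v2 to the legacy (paste-built) WSGI app while a new
# framework (e.g. a Pecan app) serves /v3. Both app callables are stand-ins.

def legacy_app(environ, start_response):
    # stands in for the existing paste/Routes pipeline (v1/v2)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'legacy']

def v3_app(environ, start_response):
    # stands in for the new framework's WSGI callable (v3)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'v3']

class VersionDispatcher:
    """Pick an app by URL version prefix so both stacks run side by side."""

    def __init__(self, legacy, modern):
        self.legacy = legacy
        self.modern = modern

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        app = self.modern if path.startswith('/v3') else self.legacy
        return app(environ, start_response)

app = VersionDispatcher(legacy_app, v3_app)
```

Since both frameworks ultimately expose plain WSGI callables, the dispatcher lets each version's controllers migrate independently, which is the "small commits for each v3 controller" plan above.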
16:08:51 <DuncanT> There are a few bits of crazy in our API (inc V2), but as people start to write things that talk to cinder we need to think about long-term support
16:09:11 <thingee> DuncanT: why do I not hear about these?
16:09:13 <thingee> :)
16:09:31 <thingee> here I am going to meetups and bragging about how awesome cinder is :P
16:09:44 <thingee> presentations and all :D
16:09:47 <jgriffith> DuncanT: you wanna share your insights?
16:10:12 <DuncanT> jgriffith: Little things... resize during snapshot, no need to force for an attach clone
16:10:26 <jgriffith> DuncanT: those aren't API issues
16:10:26 <DuncanT> Couple of other bits I need to flick through my notebook for
16:10:40 <jgriffith> DuncanT: Those are things that *you* don't like in the Cinder behaviors
16:10:43 <jgriffith> that's different
16:10:52 <thingee> DuncanT: make me bugs and have john target them...I now have a lot of bandwidth
16:10:57 <jgriffith> and others.. not just you
16:11:02 <DuncanT> They're issues with the definition of the API, not the implementation, sure, but they're things that we might want to make sane in V3
16:11:26 <DuncanT> The behaviour /is/ the API...
16:11:31 <jgriffith> I'm not prepared for the snapshot/volume argument yet again
16:11:37 <thingee> haha
16:11:44 <DuncanT> I've given up on that one for now
16:12:01 <thingee> Ok so any questions regarding the pecan switch?
16:12:01 <jgriffith> Ok... anyway, DuncanT makes a good point
16:12:08 <jgriffith> DuncanT: Log some bugs if you would
16:12:13 <DuncanT> Sure
16:12:17 <DuncanT> Will do
16:12:22 <thingee> DuncanT: thanks
16:12:36 <thingee> I think I'm done...anyone have any questions later, feel free to ping me
16:12:37 <jgriffith> That'll fit nicely in with thingee 's plan regarding pecan V3 in I (hopefully)
16:12:49 <thingee> jgriffith: hopefully? :(
16:13:01 <jgriffith> thingee: ok... s/hopefully/''
16:13:07 <jgriffith> :)
16:13:37 <jgriffith> V3 will be slated for I, I just hope there's other really cool things to go in it
16:13:37 <thingee> losing faith in me, sheesh
16:13:44 <jgriffith> no no no.... not at all
16:13:48 <jgriffith> smart ass!
16:13:52 <thingee> it's a sane approach. it just took me 5k lines of code writing to realize it
16:13:58 <thingee> :P
16:14:05 <hemna> that's better than 10k lines of code
16:14:14 <jgriffith> So I'm hoping that DuncanT will come up with all kinds of new things we need in V3
16:14:17 <thingee> hemna: I only did v1 at that point and some tests
16:14:28 <jgriffith> So we'll have a brand new shiny toy for I all the way around
16:14:40 <hemna> ooh...shiny!
16:14:43 <jgriffith> The "season of the API"
16:14:48 <thingee> hemna: imagine the diff stat once I finished ;)
16:14:50 <thingee> could be 10k
16:14:54 <hemna> lol
16:15:02 <seiflotfy_> o_O
16:15:15 <jgriffith> alright... everybody cool with the Pecan decision?
16:15:15 <jgriffith> avishay: ?
16:15:17 <seiflotfy_> that would require another release to review it :P
16:15:22 <avishay> jgriffith: sounds good to me
16:15:23 <jgriffith> avishay: you're unusually quiet this evening
16:15:32 <avishay> jgriffith: just no objections :)
16:15:38 <jgriffith> ;)
16:15:40 <jgriffith> alright...
16:15:51 <jgriffith> #topic ceph support in Cinder
16:15:54 <jgriffith> seiflotfy_: you're up
16:16:15 <seiflotfy_> well i wanted to know if everybody is ok with the current map and if it will make it in "I"
16:16:32 <jgriffith> I... you mean H?
16:16:39 <seiflotfy_> i dont think it will make it in H
16:16:43 <seiflotfy_> if it can that would be amazing
16:16:48 <dosaboy> cinder-backup-to-ceph is *hopefully* ready now ;)
16:16:50 <DuncanT> Erm, should make it in H... looks to be making good progress...
16:16:54 <jgriffith> Sorry... don't know what you're talking about then
16:16:55 <seiflotfy_> NICE
16:17:07 <jgriffith> You have 3 patches listed, 3 patches under review
16:17:18 <jgriffith> You have some other plan that we don't know about :)
16:17:30 <seiflotfy_> jgriffith: just references to say that this is what is still to be done
16:17:34 <DuncanT> I can't see any real benefit to the interface class, other than making java coders slightly more at home, but it is harmless enough
16:17:36 <seiflotfy_> and they look good
16:17:58 <mkoderer> DuncanT: ;)
16:18:01 <dosaboy> only two patchsets here (if you meant me)
16:18:07 <dosaboy> one was abandoned
16:18:18 <jgriffith> alright... let's back up
16:18:22 <seiflotfy_> DuncanT: good point, but I also see a benefit for other "new" backend services
16:18:26 <jgriffith> On the agenda:
16:18:30 <jgriffith> Item #2
16:18:41 <seiflotfy_> yep back to number 2
16:18:43 <jgriffith> seiflotfy_: has "Discuss status of Ceph support in Cinder"
16:18:52 <jgriffith> and there are 3 reviews listed
16:19:02 <seiflotfy_> yeah, so is it possible for us to have it for havana?
16:19:16 <seiflotfy_> also what tests do we have for it
16:19:17 <jgriffith> seiflotfy_: so Havana is the current release we're working on
16:19:24 <seiflotfy_> how do we intend to test this properly
16:19:29 <jgriffith> seiflotfy_: Havana will be cut from master in the fall
16:19:38 <jgriffith> seiflotfy_: that's your job :)
16:19:45 <DuncanT> seiflotfy_: It is undergoing a perfectly normal trajectory to land on trunk in the next week or two...
16:19:50 <jgriffith> seiflotfy_: submitting that patch means I've assumed you test it :)
16:20:01 <mkoderer> I think we need to spend time on performance testing
16:20:04 <seiflotfy_> jgriffith: i tested it with my old shitty patches
16:20:06 <seiflotfy_> and it worked
16:20:16 <seiflotfy_> but it was really slow
16:20:40 <thingee> jdurgin1: can you test it? :)
16:20:43 <seiflotfy_> managed to backup 1 gig
16:20:46 <seiflotfy_> :P
16:20:47 <thingee> like actually whitebox testing
16:21:00 <dosaboy> can someone clarify what patches we are discussing here
16:21:05 <dosaboy> if it is item 2
16:21:07 <seiflotfy_> a question would be how can we make use of ceph 2 ceph backup without going through the generic route
16:21:10 <hemna> mkoderer, we (my group at HP) just got legal approval to release the performance script I wrote a while back to test cinder
16:21:13 <seiflotfy_> is that up for question
16:21:15 <dosaboy> two of those are duplicates
16:21:25 <jgriffith> dosaboy: :)
16:21:36 <mkoderer> hemna: sounds great
16:21:38 <thingee> https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/cinder-backup-to-ceph,n,z
16:21:38 <jgriffith> dosaboy: indeed
16:21:38 <dosaboy> and I have tested them quite extensively
16:21:51 <hemna> mkoderer, https://github.com/terry7/openstack-stress
16:21:52 <dosaboy> but more testing is never a bad thing
16:22:47 <seiflotfy_> dosaboy: do you intend to allow use of rbd tools for ceph2ceph backups
16:22:48 <seiflotfy_> ?
16:22:59 <thingee> jgriffith, seiflotfy_: I'll see if jdurgin1 wants to test things out
16:23:18 <dosaboy> not sure what you mean, but look at bp for what remains to be implemented
16:23:51 <jgriffith> Ok, so I'm not sure how well organized this topic is... shall we move on?
16:23:55 <thingee> is there anything else relevant to discuss in terms with this in cinder?
16:24:00 <seiflotfy_> ok cool, so to sum it up: "cinder ceph backup" ===> might make it into havana, needs more testing
16:24:01 <jgriffith> Item #3 ?
16:24:13 <seiflotfy_> jgriffith: yes
16:24:15 <mkoderer> yes pls
16:24:23 <jgriffith> #topic parent class for backup service?
16:24:30 <seiflotfy_> mkoderer: go ahead
16:25:06 <mkoderer> ok I just introduced this interface class
16:25:14 <mkoderer> I know DuncanT hates me for it ;)
16:25:36 <mkoderer> but I think we could put some overall functionality in it
16:25:36 <jgriffith> mkoderer: I think there's just a question of "what's the plan"
16:25:59 <seiflotfy_> the idea is we have now 2 backends swift and ceph and more will be coming i guess
16:26:22 <seiflotfy_> just to have a standard class that one can orient oneself on
16:26:38 <thingee> seiflotfy_: so i think this is fine...it sets a guideline to devs adding additional services. However, I would like to see documentation that explains this a bit more for newcomers wanting to add their object store
16:27:08 <thingee> I don't really care who does that...but it'll be, well, great :)
16:27:08 <mkoderer> thingee: good point
16:27:32 <seiflotfy_> thingee: i think mkoderer will take the lead on this
16:27:32 <seiflotfy_> :d
16:27:41 <mkoderer> sure np
16:27:50 <thingee> seiflotfy_, mkoderer: wonderful thanks guys!
16:27:51 <seiflotfy_> so i assume you want us to plan it out more and introduce it again in a better blueprint?
16:28:26 <seiflotfy_> ok cool
16:28:31 <dosaboy> that would be a good idea
16:28:33 <jgriffith> cool by me, follows our patterns we use everywhere else
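The "parent class for backup services" idea discussed above amounts to an abstract base class that each store (swift, ceph, ...) implements. A minimal sketch, with hypothetical method names and a toy in-memory driver purely for illustration, might look like:

```python
import abc


class BackupDriverBase(metaclass=abc.ABCMeta):
    """Hypothetical interface every backup service would implement."""

    @abc.abstractmethod
    def backup(self, backup_id, volume_file):
        """Copy a volume's data into the backup store."""

    @abc.abstractmethod
    def restore(self, backup_id, volume_id, volume_file):
        """Restore a stored backup onto the given volume."""

    @abc.abstractmethod
    def delete(self, backup_id):
        """Remove a backup from the store."""


class InMemoryBackupDriver(BackupDriverBase):
    """Toy implementation showing how a concrete driver fills in the base."""

    def __init__(self):
        self.store = {}

    def backup(self, backup_id, volume_file):
        self.store[backup_id] = volume_file.read()

    def restore(self, backup_id, volume_id, volume_file):
        volume_file.write(self.store[backup_id])

    def delete(self, backup_id):
        del self.store[backup_id]
```

The value, as noted above, is mostly as a guideline: a newcomer adding an object store sees exactly which operations are required, and forgetting one raises a `TypeError` at instantiation rather than failing later at runtime.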
16:28:45 <seiflotfy_> item 4?
16:28:52 <mkoderer> yep
16:28:58 <jgriffith> #topic community recognition
16:29:11 <seiflotfy_> so in my free time i do work for GNOME and Mozilla
16:29:15 <thingee> seiflotfy_: I don't think it needs to be planned more...the interface is already defined. I think  the documentation will speak for it :)
16:29:26 <seiflotfy_> thingee: ok
16:29:29 <seiflotfy_> so back to 4
16:29:48 <seiflotfy_> the idea is i have a small script i can adapt that goes through git and bugzilla (will change it to launchpad)
16:30:14 <seiflotfy_> we use it at mozilla with every release to detect new code contributors
16:30:28 <seiflotfy_> and publish it via a link in the release notes
16:30:31 <guitarzan> doesn't openstack already do this?
16:30:36 <jgriffith> seiflotfy_: FYI we have one of those :)
16:30:39 <thingee> guitarzan: yup
16:30:40 <seiflotfy_> they do?
16:30:40 <jgriffith> seiflotfy_: https://github.com/j-griffith/openstack-stats
16:30:42 <seiflotfy_> ok
16:30:45 <jgriffith> guitarzan: yes
16:30:48 <seiflotfy_> then no need for me to do it then
16:30:50 <thingee> it's in community newsletter thing
16:30:52 <seiflotfy_> just wanted to help
16:30:57 <thingee> seiflotfy_:  :)
16:31:08 <seiflotfy_> ok less work for me then :D
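The contributor-recognition script discussed here boils down to walking the commit history and flagging authors whose first commit lands in the current release. A small illustrative sketch (release names and authors below are made up; a real script would feed this from `git log` per release tag):

```python
# Detect first-time contributors per release: an author is "new" in the
# first release where they appear in the commit history.

def new_contributors(commits_by_release):
    """commits_by_release: ordered list of (release, [author, ...])."""
    seen = set()
    result = {}
    for release, authors in commits_by_release:
        result[release] = sorted(set(authors) - seen)
        seen.update(authors)
    return result


history = [
    ('grizzly', ['alice', 'bob']),
    ('havana', ['bob', 'carol', 'dave']),
]
# new_contributors(history) -> {'grizzly': ['alice', 'bob'],
#                               'havana': ['carol', 'dave']}
```

As noted above, OpenStack already publishes equivalent stats (e.g. jgriffith's openstack-stats link), so this just shows the core idea.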
16:31:35 <thingee> wow look at that, 9:30
16:31:40 <thingee> pdt
16:31:44 <thingee> 16:30 whatever
16:31:53 <eharney> is it currently done for things like: new reviewers, new people active on launchpad (but haven't committed code)?
16:32:06 <avishay> thingee: no banking time for next meetings!
16:32:34 <seiflotfy_> eharney: we can look into this and try to work it out during the week
16:32:37 <jgriffith> #topic H2
16:32:43 <eharney> i don't know of any real needs there, just thinking
16:32:45 <jgriffith> real quick
16:32:55 <jgriffith> https://launchpad.net/cinder/+milestone/havana-2
16:33:02 <jgriffith> we're a bit stalled on BP's here
16:33:14 <jgriffith> anyone from mirantis around this morning?
16:33:42 <jgriffith> eharney: also looking for an update from you on the ILO BP
16:34:12 <jgriffith> bueller... bueller
16:34:16 <eharney> yes, i need to update there
16:34:20 * jgriffith is talking to his dog this morning
16:34:21 <jgriffith> :)
16:34:26 <avishay> haha
16:34:28 <eharney> at the moment gluster snaps work has been higher priority for me
16:34:41 <jgriffith> eharney: You still planning on H2, or you want it deferred?
16:34:54 <jgriffith> eharney: I can defer it and if you get to it bring it back in
16:35:01 <eharney> realistically it should probably be at H3 at this point
16:35:20 <jgriffith> eharney: sounds good
16:35:28 <eharney> i did have a question there though
16:35:38 <jgriffith> eharney: have at it
16:35:44 <eharney> we have this idea of minimum driver requirements, right
16:35:54 <jgriffith> eharney: indeed, we do
16:36:02 <eharney> i'm trying to understand how that works for a driver like this that supports multiple different backends
16:36:28 <jgriffith> eharney: not sure I follow?
16:36:41 <jgriffith> eharney: this is what I consider more a base layer than a driver per se
16:36:50 <jgriffith> well... it's an iscsi driver
16:37:00 <jgriffith> eharney: or are you talking about your gluster work?
16:37:09 <eharney> here, the driver supports libstoragemgmt, which enables support for targetd, and a couple of other storage platforms
16:37:38 <eharney> so, meeting minimum requirements for the driver may depend on what backend you configure it to use
16:37:50 <jgriffith> eharney: well, I think it's a different category
16:38:00 <eharney> ok, makes sense
16:38:02 <jgriffith> eharney: min requirements for LIO would be >= tgtd
16:38:04 <jgriffith> No?
16:38:10 <eharney> right
16:38:36 <jgriffith> eharney: and if we're not switching the default (which it looks like we won't due to time) it's an option/beta so to speak anyway
16:39:00 <jgriffith> was that what you were wondering?
16:39:14 <eharney> i think that covers what i was wondering
16:39:30 <jgriffith> eharney: k... ping me if there's more questions
16:39:36 <jgriffith> or if I'm missing a point here
16:39:50 <eharney> ok
16:40:21 <zhiyan> folks, for volume-host-attach, when you have time please take a look at https://review.openstack.org/#/c/34125/ , i think it's close to ready to merge.
16:40:30 <jgriffith> Ok.. we have no winston, so we can't get into the QoS rate-limiting debate
16:40:30 <jgriffith> phewww
16:40:36 <jgriffith> I would like it if folks could help out with guitarzan 's type-quota patch
16:40:43 <guitarzan> I would like that as well :)
16:41:18 <jgriffith> we need some input on how this should be presented
16:41:22 <jgriffith> guitarzan: and I have talked a bit but I think I'm stuck... need some brain-storming
16:41:49 <jgriffith> and need to make sure nobody pukes on it when they notice it later
16:41:49 <jgriffith> :)
16:41:51 <jgriffith> guitarzan: has a number of possibilities worked up he can share
16:41:56 <thingee> jgriffith, guitarzan: can help after morning meeting...around 17:20 utc
16:42:24 <DuncanT> I was starting to read through your discussions on channel, damn you two go on....
16:42:26 <DuncanT> ;-)
16:42:32 <guitarzan> haha
16:42:35 * guitarzan hides in shame
16:42:37 <jgriffith> DuncanT: we'll need your input as well as you've objected to the approach before
16:42:41 <jgriffith> :)
16:42:46 <jgriffith> almost as bad as you and I
16:42:52 <jgriffith> or me and thingee
16:42:54 <DuncanT> Indeed and indeed
16:43:03 <jgriffith> or whoever is foolish enough to start a conversation with me :)
16:43:31 <jgriffith> Ok, I had more... but quite frankly it'd be nice to wrap a meeting early for a change :)
16:43:34 <DuncanT> At least it is harder for me to turn into a shouting match on IRC... apparently that can make bystanders nervous
16:43:38 <jgriffith> #topic open discussion
16:43:46 <jgriffith> DuncanT: wimps!
16:44:01 <jgriffith> anybody have anything?
16:44:11 <eharney> one more point re: min driver requirements
16:44:21 <jgriffith> eharney: yes?
16:44:27 <eharney> there are a couple of new driver reviews outstanding that probably aren't meeting those... we need to tell them something?
16:44:51 <thingee> I verified gpfs
16:45:31 <jgriffith> eharney: zvm and gpfs are the only two that come to mind
16:45:37 <thingee> jgriffith, DuncanT: if you guys don't mind, I'm going to take my "that guy" role and start sending emails to driver owners?
16:45:54 <jgriffith> Ohh... xtreemfs as well
16:45:56 <jgriffith> thingee: :)
16:45:58 <eharney> jgriffith: xtreemfs, "generic block" thing
16:46:12 <DuncanT> thingee: Go for it
16:46:16 <hemna> that's been stale for a while
16:46:30 <avishay> speaking of GPFS, any idea why its blueprint isn't showing up in search (and therefore in the link in the commit message)?
16:46:32 <avishay> https://blueprints.launchpad.net/cinder/+spec/gpfs-volume-driver
16:46:48 <jgriffith> because he's got a bogus link
16:46:50 <thingee> avishay: I believe jgriffith gave 'em a -2 about it
16:46:52 <DuncanT> My plan is to put patches in to remove them the day after H3 closes, but it is probably far nicer to give people warning
16:47:00 <eharney> avishay: does it not show ones "Pending approval"?  dunno
16:47:02 <jgriffith> I did, and even told him how to fix it
16:47:10 <thingee> DuncanT: we're splitting, remember?
16:47:12 <thingee> :)
16:47:28 <DuncanT> thingee: :-)
16:48:04 <avishay> jgriffith: how can he fix?
16:48:08 <jgriffith> avishay: FYI https://blueprints.launchpad.net/cinder?searchtext=gpfs
16:48:23 <avishay> jgriffith: yes, his BP isn't there
16:48:29 <jgriffith> avishay: yeah it is
16:48:43 <jgriffith> https://blueprints.launchpad.net/cinder/+spec/ibm-gpfs-driver
16:48:47 <avishay> jgriffith: no it's not...that one was made by someone else and is not relevant
16:48:56 <avishay> jgriffith: this is his - https://blueprints.launchpad.net/cinder/+spec/gpfs-volume-driver
16:49:05 <dosaboy> jgriffith: I've updated the bp for ceph backup to aim for h2 since that is hopefully realistic now
16:49:39 <zhiyan> i created ibm-gpfs-driver
16:51:34 <jgriffith> avishay: I'll look into it
16:51:41 <avishay> jgriffith: thanks!
16:51:42 <jgriffith> avishay: the fact that he marked it complete may be an issue
16:51:51 <thingee> zhiyan: hi, can you provide the blueprint in your next patch commit message?
16:51:53 <avishay> jgriffith: aahhhh...
16:52:20 <thingee> 8 minute warning
16:52:39 <jgriffith> zhiyan: can you kill the one you started, or mark it superseded or something
16:52:53 <zhiyan> jgriffith: ok
16:53:02 <zhiyan> thingee: which one?
16:53:29 <thingee> zhiyan: the patch that's introducing the gpfs driver should have a blueprint about adding the gpfs driver
16:53:49 <jgriffith> thingee: haha... see, you just fell into the same trap that I did :)
16:53:50 <avishay> thingee: it does, but the link is broken
16:54:02 <jgriffith> thingee: zhiyan isn't doing that work... dinesh is
16:54:07 <jgriffith> zhiyan: had a bp
16:54:13 <jgriffith> dinesh started a new one
16:54:23 <avishay> zhiyan: please kill https://blueprints.launchpad.net/cinder/+spec/ibm-gpfs-driver
16:54:34 <jgriffith> avishay: haha :)
16:54:38 <jgriffith> alright folks
16:54:46 <jgriffith> we blew our early quit time
16:54:48 <thingee> avishay: oh yeah that's what I meant :)
16:54:55 <avishay> thingee: :)
16:54:57 <jgriffith> I'm in #openstack-cinder as always
16:55:02 <avishay> we can still be 5 minutes early
16:55:05 <jgriffith> Thanks!!
16:55:10 <avishay> Bye all!
16:55:16 <thingee> guitarzan, DuncanT, jgriffith: can we talk about quotas in 20 mins?
16:55:24 <guitarzan> sure
16:55:30 <zhiyan> done
16:55:35 <thingee> zhiyan: thanks
16:55:37 <thingee> thanks everyone
16:55:38 <jgriffith> #endmeeting cinder