16:01:16 <jgriffith> #startmeeting cinder
16:01:17 <openstack> Meeting started Wed Aug  7 16:01:16 2013 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:21 <openstack> The meeting name has been set to 'cinder'
16:01:24 <jgriffith> hey cinder folks
16:01:32 <bswartz> hi
16:01:55 <jgriffith> thingee: DuncanT-
16:01:57 <jgriffith> hey xyang_
16:02:03 <zhiyan> hello
16:02:03 <thingee> o/
16:02:08 <dosaboy> howdi
16:02:09 <jgriffith> zhiyan: howdy
16:02:19 <xyang_> hi jgriffith
16:02:22 <jgriffith> avishay????
16:02:23 <DuncanT-> Hey
16:02:31 <zhiyan> hey jgriffith
16:02:36 <kmartin> hello
16:02:38 <jgriffith> alright.. let's get stareted
16:02:41 <zhiyan> and hi DuncanT
16:02:41 <jgriffith> started even
16:02:46 <med_> dosaboy
16:02:55 <jgriffith> #topic make all rbd clones cow
16:03:06 <jgriffith> dosaboy: ^^
16:03:15 <dosaboy> so i'm looking for comments here
16:03:29 <dosaboy> xiaoi has raised concern about performance issue in ceph
16:03:34 <jgriffith> dosaboy: my first question when I looked was:
16:03:38 <dosaboy> if we have too many clones from same snapshot
16:03:50 <dosaboy> jgriffith: shoot
16:03:52 <jgriffith> What if you want to delete something
16:04:09 <jgriffith> ie isn't this the issue with everything being dependent through the chain?
16:04:13 <jgriffith> or is that incorrect?
16:04:17 <dosaboy> ah ok, so we would have to have some logic to clean up the discrete snapshot
16:04:38 <dosaboy> jgriffith: so, hoping jdurgin can pitch in here but,
16:05:07 <dosaboy> afaik the issue is that having too many cow clones can degrade performance of the original volume
16:05:14 <dosaboy> i am not clear on this though
16:05:16 <dosaboy> but
16:05:22 <jgriffith> dosaboy: yeah, that would be a second concern :)
16:05:35 <dosaboy> that performance concern also applies to the case of clones from snapshots
16:05:41 <dosaboy> which is already supported
16:05:42 <dosaboy> so
16:05:48 <dosaboy> with this bug/feature
16:05:57 <dosaboy> i am not trying to solve that sort of issue
16:06:08 <dosaboy> but merely extend the ability to have all clones as cow
16:06:29 <dosaboy> the inherent potential problems will be no different to what we already have
16:06:32 <dosaboy> and imo
16:06:43 <dosaboy> will be dealt with separately
16:06:48 <dosaboy> thoughts?
16:07:39 <jgriffith> dosaboy: so I'm no ceph expert :)
16:07:41 <jgriffith> dosaboy: but...
16:07:59 <jgriffith> dosaboy: I've talked with jdurgin about this sort of thing in the past on a different topic IIRC
16:08:18 <jgriffith> dosaboy: and it seems there was some issue with having the relationship back to the volumes/snaps
16:08:23 <jgriffith> dosaboy: but I'm really not sure
16:08:42 <dosaboy> jgriffith: yeah that is what I have heard too, but
16:08:51 <jgriffith> dosaboy: the only concerns I would have would be that you can actually somehow have independent volume entities
16:09:00 <dosaboy> that applies to what we currently allow
16:09:05 <jgriffith> dosaboy: exactly
16:09:24 <jgriffith> dosaboy: now I don't even mind if there's some *hidden* special snapshot/volume on the backend
16:09:26 <dosaboy> if we go with this change/addition, it will clearly need testing
16:09:39 <jgriffith> dosaboy: the perf issue as you mentioned is a whole different subject
16:09:44 <dosaboy> just as what we already have i.e. clone from snapshot will need testing
16:09:48 <dosaboy> ah ok
16:09:55 <dosaboy> yeah 2 separate issues
16:10:20 <dosaboy> I don't see it as too much of a problem though to have a discrete snap that is silently managed
16:10:39 <dosaboy> fairly simple logic to do that
16:10:47 <jgriffith> dosaboy: I don't either, but I think others raised some concerns when I suggested that in the past
16:10:51 <jgriffith> but regardless...
16:11:06 <jgriffith> doesn't sound like anybody has any objections here today?
16:11:11 <dosaboy> ok well i don't wanna hog the stage
16:11:16 <jgriffith> haha
16:11:32 <dosaboy> i'll try to get jdurgin's opinion when he's about
16:11:35 <jgriffith> ok... well that was easy enough :)
16:11:41 <dosaboy> :)
16:11:49 <jgriffith> #topic API V1 removal
16:11:54 <jgriffith> DuncanT-: ^^
16:11:57 <DuncanT-> Hey
16:12:00 <jgriffith> dosaboy: thanks by the way!
16:12:03 <dosaboy> np
16:12:24 <jgriffith> DuncanT-: we may have solved your concern here (s/we/thingee/)
16:12:33 <jgriffith> DuncanT-: but go for it
16:12:36 <DuncanT-> Ok, so we've been looking at API migration. AFAICT Rackspace are in the same situation as us: They have the V2 API turned on but it isn't in the catalogue
16:12:57 <DuncanT-> There doesn't seem to be a way currently to advertise it as a non-default option in the catalogue
16:13:01 <dosaboy> np
16:13:06 <dosaboy> oops :)
16:13:15 <DuncanT-> It is also not the default in devstack
16:13:29 <hemna> we should set it to the default first
16:13:38 <hemna> before we remove v1 IMO
16:13:46 <jgriffith> DuncanT-: hemna yes we're working on that
16:13:53 <DuncanT-> This limits the testing exposure for V2, and causes third party apps to tend to be targeted for V1
16:14:28 <thingee> DuncanT-: So I feel v2 is ready with the testing that has been around it/documenting it required whitebox testing, etc. However I worked on the client changes last night to get us http://grab.objects.dreamhost.com/08-06-2013-21-20-59.png
16:14:46 <thingee> I have two cinder endpoints in the catalog with different service_types like nova did
16:14:51 <DuncanT-> With this in mind, I'd like to urge some caution as regards aggressively killing V1
16:14:54 <guitarzan> thingee: cool!
16:15:02 <DuncanT-> thingee: Great stuff!
16:15:06 <avishay> hi all, sorry i'm late
16:15:07 <jgriffith> DuncanT-: so between the addition to the catalog and the type=volumeV2 I think we're good
16:15:09 <guitarzan> thingee: I'm pretty interested in that patch :)
16:15:14 <thingee> guitarzan: :)
16:15:24 <jgriffith> DuncanT-: I'm also much more confident that there isn't going to be compat issues anyway :)
16:15:26 <DuncanT-> We need a migration plan and plenty of time to implement it
16:15:30 <guitarzan> wait, you named the service type differently rather than a version entry?
16:15:57 <DuncanT-> jgriffith: Just flipping the default in devstack broke tempest tests... so there are at least some issues. I've not looked into the details yet
16:15:58 <guitarzan> thingee: not that I know which is better...
16:16:04 <thingee> DuncanT-, guitarzan: so the reason why i feel confident in v2 is because the code isn't far from v1, because people keep backporting features (when they shouldn't ;) )
16:16:16 <jgriffith> DuncanT-: but if you look at *why* it makes sense
16:16:22 <bswartz> thingee: lol
16:16:33 <jgriffith> DuncanT-: anyway...  not arguing
16:16:48 <thingee> DuncanT-, guitarzan, jgriffith: I'll take a look at tempest next once we got this patch up for review
16:17:10 <jgriffith> thingee: IIRC the failures were the nova/volume/cinder.py issues
16:17:11 <guitarzan> thingee: ya, I'm not much worried about the v2 code
16:17:19 <DuncanT-> jgriffith: I intend to look at / be told 'why', I'm just trying to pave the way for a less than aggressive removal date, based on the low testing so far
16:17:21 <guitarzan> it can be fixed if it needs it
16:17:34 <jgriffith> DuncanT-: sure, fair
16:17:45 <jgriffith> DuncanT-: but were we talking about actually removal yet anyway?
16:17:56 * guitarzan looks at the topic :)
16:17:56 <DuncanT-> jgriffith: And of course getting people nervous about it is the best way to see more testing / fix-ups :-)
16:18:02 * hemna reads topic
16:18:03 <jgriffith> DuncanT-: haha
16:18:08 <thingee> guitarzan, DuncanT-: can we agree to reevaluate v2 being default if tempest is fine?
16:18:14 <hemna> DuncanT- +1
16:18:16 <thingee> v1 is not going away in I
16:18:26 <DuncanT-> thingee: For devstack? Do it as soon as it works IMO
16:18:36 <guitarzan> thingee: sure, the default doesn't much matter to me
16:18:43 <DuncanT-> thingee: That is the kind of 'not agressive' I like :-)
16:18:49 <guitarzan> thingee: we just need config options to choose which one
16:18:49 <jgriffith> thingee: to be clear, we need to make V2 default ASAP as far as I'm concerned
16:18:55 <jgriffith> thingee: there's no reason not to IMO
16:18:56 <DuncanT-> thingee: Consider me far less worried :-)
16:19:01 <thingee> whoa
16:19:03 <DuncanT-> jgriffith: +1
16:19:06 <guitarzan> thingee: we have cinder_endpoint_template, we just need cinder_api_version maybe
16:19:13 <thingee> guitarzan: config opt isn't going away for v1 in I.
16:19:22 <guitarzan> thingee: I meant in nova's code
16:19:30 <guitarzan> your nova patch just uses the v2 client
16:19:55 <thingee> alright so let me make sure I understand
16:20:06 <jgriffith> guitarzan: is there any good reason to have the internal APIs configurable?
16:20:19 <guitarzan> jgriffith: because the internal api is the same as the external one
16:20:35 <jgriffith> guitarzan: but that's the point, it doesn't have to be
16:20:36 <thingee> default v2 in devstack, leave nova code to using v1 (which would require having both v1 and v2 enabled).
16:20:54 <jgriffith> thingee: sorry... I'll let you talk I promise ;)
16:21:14 <guitarzan> thingee: I like that idea just because it makes us solve the ambiguous endpoints issue
16:21:17 <guitarzan> :)
16:21:38 <jgriffith> guitarzan: I agree with that, but I believe that's being fixed regardless
16:21:47 <jgriffith> guitarzan: and needs to be
16:21:54 <guitarzan> jgriffith: if that's fixed, then I don't care which is defaulted really
16:22:01 <jgriffith> guitarzan: :)
16:22:08 <thingee> DuncanT-: and what about you?
16:22:09 <guitarzan> that lets us add v2 to our catalog to make thingee happy
16:22:14 <guitarzan> and our customers can use whichever they want
16:22:27 <jgriffith> guitarzan: exactly, that's what I'm thinking
16:22:39 <jgriffith> guitarzan: leaves the implementation up to the SP
16:22:51 <jgriffith> guitarzan: err... configuration ?
16:22:52 <hemna> +1
16:23:03 <DuncanT-> Summary from my PoV: With V1 not going away in Icehouse (no new features is fine and indeed sensible) and Thingee on the case for compatibility, I'm quite happy my concerns are addressed.
16:23:07 <guitarzan> although customers will probably have to look at their code if/when we add another endpoint
16:24:08 <DuncanT-> Sounds like that will fix our immediate problems too, so all good
16:24:20 <jgriffith> and life is good again :)
16:24:26 <thingee> whew
16:24:31 <thingee> next topic, quick!
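The arrangement thingee describes, two cinder entries in the catalog with distinct service types (`volume` for V1, `volumev2` for V2), means a client just picks by service type. A toy illustration follows; the catalog shape and URLs here are simplified and hypothetical (a real Keystone catalog carries regions, interfaces, and more fields):

```python
# Simplified service catalog with the two cinder entries discussed above.
# Structure and URLs are illustrative only.
CATALOG = [
    {"type": "volume", "name": "cinder",
     "endpoints": [{"publicURL": "http://cinder:8776/v1/%(tenant_id)s"}]},
    {"type": "volumev2", "name": "cinderv2",
     "endpoints": [{"publicURL": "http://cinder:8776/v2/%(tenant_id)s"}]},
]


def pick_endpoint(catalog, service_type):
    """Return the public URL of the first service matching service_type."""
    for service in catalog:
        if service["type"] == service_type:
            return service["endpoints"][0]["publicURL"]
    raise LookupError("no catalog entry for service type %r" % service_type)
```

This is why both APIs can coexist during migration: deployers advertise both entries, and each consumer (nova, third-party apps) selects the version it targets.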
16:24:37 <jgriffith> #topic open discussion
16:24:38 <thingee> there are none. everyone go!
16:24:39 <DuncanT-> That was pleasantly pain free :-) Thanks
16:24:44 <jgriffith> anybody....
16:24:58 <thingee> DuncanT-: I worked late on the patch just for this discussion :)
16:25:14 * med_ tries to recall the project review status from ttx's meeting yesterday....
16:25:29 <DuncanT-> thingee: I had faith you'd fix everything once I made any sort of fuss ;-)
16:25:42 <jgriffith> med_: it was "looks ok, going to have a traffic jam in review queue"
16:25:47 <avishay> jgriffith: update - i successfully did a live migration today.  still have to write tests and add a way to get status.
16:25:57 <jgriffith> avishay: nice!
16:26:12 <jgriffith> avishay: what did you migrate from->to
16:26:13 <hemna> very cool
16:26:31 <avishay> jgriffith: both storwize and LVM, via Vish's Nova code
16:26:34 <jgriffith> avishay: about that, there was a question this AM about migrating across Cinder nodes?
16:26:39 <jgriffith> avishay: nice!
16:26:43 <kmartin> jgriffith: he went from IBM to 3PAR :)
16:26:47 <avishay> what does that mean?
16:26:54 <jgriffith> kmartin: haha!
16:26:56 <dosaboy> avishay: block-migrate?
16:27:02 <avishay> kmartin: 3par to IBM is the more common use case :P
16:27:04 <hemna> avishay, do  you have a WIP up for us to play with ?
16:27:15 <jgriffith> avishay: sorry... so say you weren't using multi-backend on a single cinder node
16:27:25 <jgriffith> avishay: you actually had cinder node-a, node-b etc
16:27:35 <jgriffith> avishay: I believe it shouldn't matter... but
16:27:42 <avishay> jgriffith: oh, it was actually on a one node devstack, but should work regardless.  need more servers to test.
16:27:50 <jgriffith> avishay: :)
16:27:51 <avishay> dosaboy: block-migrate?
16:27:59 <avishay> hemna: will put something up as soon as it's sane
16:28:05 <guitarzan> famous last words!
16:28:09 <guitarzan> it should work
16:28:14 <hemna> avishay, sweet, looking forward to it.
16:28:16 <avishay> guitarzan: :)
16:28:17 <dosaboy> avishay: what type of migration you talking about?
16:28:22 <jgriffith> avishay: yeah, I think it'll work the same until you actually want to migrate to another cinder setup
16:28:25 <jgriffith> guitarzan: haha!
16:28:30 <avishay> dosaboy: moving a volume from its current backend to another
16:28:36 <dosaboy> ah ok
16:28:43 <xyang_> avishay: this is for migrating unattached volume, right?
16:28:56 <avishay> xyang_: unattached is already merged, attached is coming soon
16:29:04 <med_> he said "live" so I assumed attached
16:29:05 <jgriffith> winston-1: how are things with the QoS patch?
16:29:09 <xyang_> avishay: cool!
16:29:20 <avishay> and this is the first time cinder is calling nova - we're no longer just a slave :)
16:29:23 <jgriffith> med_: xyang_ the unattached code is already in
16:29:34 <med_> ack/nod.
16:29:41 <jgriffith> avishay: that API is going to be EXTREMELY useful BTW
16:29:42 <xyang_> jgriffith: ok, will take a look
16:29:44 <avishay> yea...that QoS patch sitting in the queue is bothering me
16:30:04 <hemna> looks like it's just waiting for reviews
16:30:05 <avishay> there's a dependent nova patch for QoS too
16:30:15 <hemna> https://review.openstack.org/#/c/29737/
16:30:19 <jgriffith> winston-1: oops.. didn't notice you updated
16:30:28 <avishay> i think jgriffith and DuncanT- had concerns, i deferred review to them
16:30:30 <DuncanT-> Does this mean we're going to want a nova internal endpoint for cinder->nova calls, just like having a specific internal endpoint for nova->cinder calls like attach and reserve is a good idea?
16:30:31 <jgriffith> winston-1: sorry... I'll look at that again today, would like to make sure others do as well
16:30:50 <jgriffith> Oh... it's not updated ;(
16:30:52 <hemna> jgriffith, I'll look at it this morning.
16:31:26 <hemna> jgriffith, what are you waiting for?  I don't see a -1 on it?
16:31:26 <jgriffith> DuncanT-: we need that for a number of things, see ML last night between Russell and I on the instance assisted snaps
16:31:28 <winston-1> jgriffith: i'm working on it. since it's totally new patch
16:31:35 <avishay> DuncanT-: I'm not familiar with the internal endpoint - let's discuss offline?
16:31:36 <winston-1> jgriffith: 70% ready
16:31:45 <DuncanT-> avishay: Sure
16:31:47 <jgriffith> hemna: no need, winston-1 and I talked through it last week
16:31:50 <avishay> DuncanT-: thanks!
16:31:51 <hemna> ok
16:32:10 <hemna> can you -1 the review so we know it's in a wait state for changes ?
16:32:11 <DuncanT-> jgriffith: I'm days behind on the mailing list, will catch up. Glad to see it being discussed
16:32:13 <jgriffith> avishay: that's the "cinder talking to nova" piece
16:32:41 <jgriffith> avishay: I think we took that as implying that you had an endpoint setup in cinder (ie cinder --> novaclient)
16:32:44 <DuncanT-> hemna: Done
16:32:57 <avishay> jgriffith: is this endpoint a piece of code or an actual endpoint (separate port)
16:32:59 <hemna> ok thanks.  :)
16:33:14 <jgriffith> avishay: easiest way to describe it is nova/volume/cinder.py
16:33:23 <avishay> jgriffith: DuncanT-: I have that written already
16:33:26 <jgriffith> avishay: but you said we were talking to nova now, so how did you do that?
16:33:35 <jgriffith> avishay: yeah.. that's what I thought :)
16:33:38 <avishay> jgriffith: cinder/compute/nova.py :)
16:33:51 <jgriffith> avishay: most EXCELLENT!
16:34:04 <jgriffith> avishay: looking forward to that patch :)
16:34:09 <avishay> DuncanT-: is that what you meant?
16:35:05 <jgriffith> Ok... couple of other things real quick and everybody can get back to reviews (hint hint)
16:35:07 <DuncanT-> avishay: I meant that you probably don't want these cinder->nova APIs enabled on a customer facing endpoint, in the same way you don't really want cinder reserve API available on a customer facing endpoint
16:35:27 <DuncanT-> avishay: Can discuss on channel after if you want
16:35:39 <avishay> DuncanT-: OK, so that's something else.  Yes, let's talk after.
16:36:23 <jgriffith> So the only other thing I wanted to mention
16:36:32 <jgriffith> we've got some monster reviews in the queue
16:36:37 <jgriffith> task-flow
16:36:42 <jgriffith> is a big one
16:36:56 <jgriffith> I'm pretty comfortable with it and would like to get it turned on ASAP
16:37:01 <jgriffith> the earlier the more testing
16:37:09 <hemna> I'll try and do more reviews today, was doing a bit last night
16:37:14 <jgriffith> I've run a bunch of create cases with forced failures and so far so good
16:37:15 <scottda> bye
16:37:31 <avishay> there are 3 huawei monsters at the bottom of the queue...
16:37:48 <jgriffith> avishay: yeah, huawei and coraid and nexenta
16:37:51 <jgriffith> BUT
16:37:53 <DuncanT-> I'll take another look at task-flow. The last things I spotted were all minor
16:38:07 <jgriffith> I'd like to sort through things like public volumes R/O volumes first
16:38:18 <jgriffith> DuncanT-: great.. thanks :)
16:38:18 <avishay> jgriffith: i've been working on coraid and nexenta recently
16:38:24 <jgriffith> avishay: excellent
16:38:29 <jgriffith> OHHHH
16:38:30 <zhiyan> jgriffith: thanks, it's ready to get your +2 IMO :)
16:38:32 <jgriffith> That reminds me
16:38:53 <zhiyan> some minor changes which DuncanT- spotted have been addressed.
16:38:56 <avishay> jgriffith: yup you're right, read-only just got a new update, we should get these features in
16:39:01 <jgriffith> I put together a quick driver-cert last week-end
16:39:10 <avishay> jgriffith: oooooh shiiiiiny
16:39:10 <hemna> how is that going?
16:39:18 <jgriffith> avishay: I'll get it up on github
16:39:26 <avishay> sweet
16:39:31 <jgriffith> ^^ everyone
16:39:31 <uvirtbot> jgriffith: Error: "^" is not a valid command.
16:39:39 <jgriffith> So here's my dilemma right now....
16:39:42 * hemna readies his git clone command line......
16:39:57 <jgriffith> 1. I wrote it in python, it checks devstack and cinder and runs the tempest api/volume tests
16:40:08 <zhiyan> avishay: yes, thanks. and jgriffith i'm working on mutliple-attaching now
16:40:17 <jgriffith> 2. It's extensible so you can add things like compute, networking etc
16:40:21 <avishay> zhiyan: awesome
16:40:47 <jgriffith> So I'm thinking it doesn't belong in "cinder" should be outside somewhere for others to make use of and contribute to
16:40:58 <hemna> why not in tempest itself ?
16:41:10 <hemna> tempest has a subproject for stress tests
16:41:12 <jgriffith> the question is... is that place, devstack, tempest or maybe a project in packstack?
16:41:15 <jgriffith> errr
16:41:28 <jgriffith> ^^ stackforge
16:41:29 <uvirtbot> jgriffith: Error: "^" is not a valid command.
16:41:37 <avishay> why not cinder/tests/functional or something?  just for the run script
16:41:38 <jgriffith> uvirtbot: welcome back!
16:41:39 <uvirtbot> jgriffith: Error: "welcome" is not a valid command.
16:41:49 <avishay> haha
16:41:57 <avishay> uvirtbot is not very sociable
16:41:58 <uvirtbot> avishay: Error: "is" is not a valid command.
16:41:59 <jgriffith> avishay: well the only reason is like I said others could use this as well
16:42:33 <avishay> other projects?
16:42:36 <hemna> tempest seems like a logical place to me, especially if you think others can use it.
16:42:59 <avishay> oh, the packaging of results, yes definitely
16:43:35 <jgriffith> K... I'll fiddle with it some more and get input from everybody later this week
16:43:39 <avishay> yea tempest could be the right place
16:44:15 <avishay> jgriffith: there's some hook to configure the backend, or just supply a cinder.conf file, or flags?
16:44:15 <jgriffith> Ok... anything else from folks?
16:44:27 <jgriffith> med_: did you want more details from yesterday's project update?
16:44:36 <med_> noe
16:44:38 <med_> nope
16:44:43 <jgriffith> avishay: nope, it lets you do that.  Assumes a devstack env
16:44:58 <jgriffith> avishay: then collects your cinder.conf and git stamps of the repos of interest
16:45:16 <jgriffith> med_: kk
16:45:29 <avishay> jgriffith: OK cool
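One piece of the driver-cert script jgriffith describes, collecting "git stamps of the repos of interest", can be sketched without even shelling out to git by reading `.git/HEAD` directly. This is an illustrative sketch, not the actual script, and it ignores packed refs that real repositories may use:

```python
import os


def repo_git_stamp(repo_path):
    """Return the commit SHA that a repo's HEAD points at (sketch only)."""
    head_file = os.path.join(repo_path, ".git", "HEAD")
    with open(head_file) as f:
        head = f.read().strip()
    if head.startswith("ref: "):
        # Symbolic ref: resolve e.g. "refs/heads/master" to its loose ref file.
        ref_path = os.path.join(repo_path, ".git", *head[5:].split("/"))
        with open(ref_path) as f:
            return f.read().strip()
    return head  # detached HEAD already holds the SHA


def stamp_repos(repo_paths):
    """Collect {path: sha} for the repos of interest (e.g. cinder, devstack)."""
    return {path: repo_git_stamp(path) for path in repo_paths}
```

Recording these stamps alongside the cinder.conf means a driver-cert result can always be tied back to the exact code it ran against.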
16:45:43 <jgriffith> alrighty then... if nobody has anything let's finish early for a change :)
16:45:48 <jgriffith> thanks everyone!
16:45:56 <jgriffith> #endmeeting