16:00:15 <jgriffith> #startmeeting cinder
16:00:16 <openstack> Meeting started Wed Jun 12 16:00:15 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 <openstack> The meeting name has been set to 'cinder'
16:00:23 <jgriffith> Hey everyone!
16:00:23 <thingee> o/
16:00:30 <zhiyan> hi
16:00:41 <mkoderer> hi
16:00:46 <winston-d> hi
16:00:48 <jgriffith> agenda for today: https://wiki.openstack.org/wiki/CinderMeetings
16:00:51 <xyang_> hi
16:01:01 <bswartz> hello
16:01:26 <jgriffith> One thing I'd like to ask, when folks add items to agenda do me a favor and put your name on there :)
16:01:33 <jgriffith> that way we know whose topic it is :)
16:01:41 <zhiyan> jgriffith: sure
16:01:46 <jgriffith> and on that note...
16:01:55 <jgriffith> #topic Ceph as option for backup
16:02:05 <seiflotfy_> hi
16:02:26 <jgriffith> I'm down with that, don't know whose proposal it is, but makes sense to me
16:02:27 <Akendo> Hello
16:02:33 <jgriffith> I'm curious how much effort is involved
16:02:35 <thingee> jgriffith: it was seiflotfy_
16:02:43 <seiflotfy_> jgriffith its mine
16:02:48 <jgriffith> ie Ceph/Swift compatibility should be pretty easy I would've thought
16:02:50 <jgriffith> ahh..
16:02:55 <DuncanT> Given ceph can pretend to be swift, I think you get that for free now?
16:02:57 <seiflotfy_> so there are 2 ways to do it and I would like to discuss which one would fit better with upstream
16:02:59 <jgriffith> seiflotfy_: anything specific you want to bring up?
16:03:02 <thingee> seiflotfy_: I don't think anyone is opposed to the idea. Is there anything you need?
16:03:07 <seiflotfy_> 1) we use ceph swift api
16:03:14 <Akendo> Indeed
16:03:18 <Akendo> We're just checking how to do so
16:03:31 <seiflotfy_> 2) we actually add direct support for it in openstack
16:03:46 <seiflotfy_> (which would require a decent amount of code)
16:03:50 <Akendo> We have to do some tests on it but in theory it should work easily
16:03:51 <thingee> seiflotfy_: really that's your decision. :)
16:04:08 <thingee> seiflotfy_: I don't care either way, as long as it works
16:04:15 <jgriffith> thingee: +1 :)
16:04:25 <jgriffith> seiflotfy_: just curious what option #2 buys you over #1?
16:04:28 <DuncanT> I'd certainly be interested in hearing how you get on with trying to implement a backup driver, if you go that route...
16:04:33 <seiflotfy_> thingee: well if we go with 1) then there might not even be any coding needed, just configuration
16:04:43 <seiflotfy_> it needs to be tested
16:04:43 <thingee> seiflotfy_: yup
16:05:01 <thingee> seiflotfy_: I was under the impression that since it's a compatible api, there shouldn't be a problem
16:05:03 <seiflotfy_> in any case I think I will start with 1) then later head to 2)
16:05:11 <seiflotfy_> since it will require some refactoring of the code
16:05:16 <jgriffith> seiflotfy_: sounds like a good idea to me :)
16:05:22 <jdurgin1> seiflotfy_: I've been thinking adding an rbd or rados backup target that can do differential backups would be useful
16:05:22 <thingee> yup sounds good
16:05:32 <seiflotfy_> mkoderer: went through it and it looks like it will require some refactoring to not make swift the only hardcoded option
16:05:36 <jdurgin1> but trying 1) first makes sense to me
16:05:51 <thingee> just flowing through the agenda
16:05:56 <mkoderer> refactoring is needed for option 2)
16:06:02 <mkoderer> 1) should work out of the box
16:06:06 <DuncanT> seiflotfy_: Should be a single config option to change the backup target... ping me if it looks harder
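For reference, option 1 would be configuration rather than code: a minimal cinder.conf sketch pointing the existing Swift backup service at a Ceph RADOS Gateway's Swift-compatible endpoint. The URL and container are placeholders, auth options are elided, and option names may differ by release:

    # Sketch only -- reuse the Swift backup service, aimed at a radosgw
    # endpoint instead of Swift itself; values here are placeholders.
    backup_service = cinder.backup.services.swift
    backup_swift_url = http://radosgw.example.com:8080/v1/AUTH_
    backup_swift_container = volumebackups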
16:06:37 <jgriffith> I think both options are good... I have no objections, don't think anybody else would either
16:06:51 <jgriffith> So, unless there are any questions?
16:06:55 <avishay> Hi all
16:07:22 <jungleboyj> Hey avishay.
16:07:23 <jgriffith> avishay: yo
16:07:30 <hemna> morning
16:07:32 <jgriffith> Ok, next item
16:07:38 <seiflotfy_> ok cool can I take this task then?
16:07:43 <zhiyan> hi avishay, hemna
16:07:45 <seiflotfy_> mkoderer and I would do it
16:07:48 <jgriffith> seiflotfy_: it's all yours :)
16:07:49 <avishay> hi
16:07:52 <mkoderer> ;)
16:08:11 <jgriffith> seiflotfy_: You should link up with jdurgin1 when you get around to looking at option 2
16:08:38 <winston-d> avishay hi
16:08:40 <zhiyan> hi hemna, could you pls share the progress on the brick implementation?
16:08:41 <jgriffith> #topic  brick status update
16:08:50 <hemna> heh
16:08:50 <avishay> winston-d: hi
16:08:54 <jgriffith> :)
16:08:59 <hemna> ok, well I have a WIP review up on gerrit
16:09:13 <hemna> I believe I have the iSCSI code working now
16:09:24 <hemna> https://review.openstack.org/#/c/32650/
16:09:36 <hemna> I am just doing some more testing and waiting for my QA guy to give me the thumbs up
16:09:41 <zhiyan> including attach and/or detach code?
16:09:54 * jgriffith wants his own QA person!
16:09:54 <hemna> yes, this is the iSCSI attach/detach code
16:09:59 <hemna> heh
16:10:00 <zhiyan> cool
16:10:03 <avishay> haha
16:10:15 <hemna> I've modified the base ISCSIDriver in cinder to use the new brick code and it works
16:10:16 <xyang_> hemna: works for copy image to volume as well, right?
16:10:17 <hemna> (for me)
16:10:22 <Akendo> I could do some testing and QA, and support seiflotfy_ and mkoderer
16:10:24 <hemna> xyang_, haven't tried it yet
16:10:27 <jgriffith> hemna: you mean on the attach
16:10:35 <Akendo> :-)
16:10:35 <jgriffith> hemna: I moved the target stuff a while back :)
16:10:43 <hemna> xyang_, I haven't modified the copy image to volume method yet to use brick...that's why it's a WIP still
16:10:45 <avishay> hemna: there's an issue with nova that disconnecting from an iscsi target disconnects all LUNs...is that a problem here?
16:11:09 <seiflotfy_> +1
16:11:11 <hemna> avishay, if that's a bug in the current nova libvirt volume driver, then yes, it's a bug in this code
16:11:13 <seiflotfy_> :D
16:11:13 <xyang_> hemna: ok thanks
16:11:45 <avishay> hemna: no it's not - libvirt keeps track of which VMs are using what, so they disconnect only if nobody is using
16:11:52 <zhiyan> avishay: there is a check in nova libvirt volume detaching code...
16:11:53 <avishay> hemna: do we need similar tracking?
16:12:04 <hemna> avishay, yes, that code is in this brick code as well
16:12:05 <zhiyan> avishay: yes
16:12:11 <hemna> but we aren't attaching to VMs
16:12:14 <avishay> hemna: sweet
16:12:24 <hemna> we are just attaching to the host and using the LUN and then detaching it
16:12:25 <avishay> hemna: i know, but we still may have multiple LUNs, right?
16:12:53 <hemna> yes we'll have multiple LUNs
16:12:57 <xyang_> hemna: since copy image to volume is from cinder, we may still have that problem
16:13:07 <hemna> but we should only be detaching the LUNs we are done with at the time
16:13:11 <xyang_> hemna: cinder doesn't know what luns are attached
16:13:44 <hemna> the way nova looks at the attached LUNs is by querying the hypervisor
16:13:47 <xyang_> hemna: there's a logout call at the end if no LUNs are attached; that is one thing we don't know in cinder
16:14:06 <hemna> we don't have a hypervisor in this case
16:14:23 <avishay> so we probably need to track the connections ourselves
16:14:35 <jgriffith> hemna: xyang_ but can't we add that through initiator queries?
16:14:50 <hemna> well in our case it's always an attach, use, detach for a single LUN
16:15:02 <hemna> we aren't attaching, then going away and then detaching at some later time.
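That flow, sketched against the connector interface in the WIP review (the module path and factory call are assumptions based on the review, and _copy_image_to_device is a hypothetical helper):

    # Sketch of the attach -> use -> detach pattern hemna describes;
    # exact names may differ from the merged brick code.
    from cinder.brick.initiator import connector

    def copy_image_to_volume(connection_properties, image_ref):
        conn = connector.InitiatorConnector.factory('iscsi', root_helper='sudo')
        # connection_properties come from the driver's initialize_connection()
        device_info = conn.connect_volume(connection_properties)
        try:
            # use the LUN while it is attached to this cinder host
            _copy_image_to_device(image_ref, device_info['path'])
        finally:
            # detach immediately; nothing stays attached long-term
            conn.disconnect_volume(connection_properties, device_info)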
16:15:05 <dosaboy> sorry guys joining late here, if there is a moment at the end I have a few words on the ceph-backup bp
16:15:09 <xyang_> jgriffith: avishay and I discussed that. So the driver can find it out, but cinder has to make an additional call
16:15:11 <hemna> but if cinder dies in that serial process....
16:15:27 <jgriffith> hemna: states will fix that for us :)
16:15:31 <jgriffith> cinder never dies!!
16:15:35 <avishay> :)
16:15:40 <hemna> so yes, we aren't currently tracking (storing in a DB) which LUNs we have attached
16:16:00 <jgriffith> I hate to go down the path of BDM type stuff in Cinder
16:16:06 <hemna> yah
16:16:13 <hemna> I'd like to keep this simple for the first round
16:16:18 <avishay> what if we get two calls that attach at the same time?
16:16:18 <jgriffith> +1
16:16:20 <hemna> it's already better than the code we copy/pasted from nova
16:16:28 <hemna> that's existing in cinder now
16:16:44 <hemna> avishay, lockutils
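That is, the usual synchronized-decorator pattern from the openstack-common code cinder carries (a sketch; the lock name is illustrative):

    # Sketch: serialize concurrent attach calls on a host. external=True
    # makes it a file lock, so separate processes can't race each other.
    from cinder.openstack.common import lockutils

    @lockutils.synchronized('brick-iscsi', external=True)
    def connect_volume(self, connection_properties):
        # login/rescan/device-wait runs one caller at a time
        ...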
16:16:51 <avishay> I'm fine with keeping it simple for the first pass, but we should keep these issues in mind
16:17:02 <hemna> yup!
16:17:03 <jgriffith> avishay: I hear ya
16:17:14 <hemna> it's something we can mull over for H3
16:17:20 <avishay> hemna: works for me
16:17:24 <DuncanT> avishay: Check out the code and raise a bug if you can see a specific scenario that would break it...
16:17:33 <avishay> DuncanT: yup
16:17:40 <hemna> as it stands today the existing copy volume to image code has issues and doesn't work
16:17:44 <hemna> that I discovered in the process
16:17:50 <hemna> like....we never detach a volume.....
16:17:52 <hemna> :(
16:18:16 <hemna> this WIP patch already addresses that issue.
16:18:26 <jgriffith> Ok, the only other thing there (I think) is the LVM driver migration
16:18:30 <hemna> I saw another issue in the existing code that failed to issue iscsiadm logins as well
16:18:32 <avishay> hemna: there was no detach precisely because of the issue i raised
16:18:37 <jgriffith> I am hoping to have that done here shortly
16:18:55 <thingee> jgriffith, hemna: separate commit for the disconnect and backport?
16:18:56 <hemna> avishay, that leads to dangling luns and eventually kernel device exhaustion.  :(
16:18:56 <jgriffith> After that we've got the key components in brick and we've got something consuming all of them
16:19:08 <jgriffith> thingee: hmmm?
16:19:35 <thingee> jgriffith: errr copy volume to image code not detaching
16:19:45 <jgriffith> thingee: ahh...
16:19:47 <jgriffith> :)
16:19:52 <hemna> just for Grizzly backport ?
16:19:56 <avishay> hemna: if nova and cinder are running on the same host, cinder might logout of nova luns
16:20:12 <thingee> hemna: yea
16:20:20 <thingee> Oh I guess that was folsom too
16:20:23 <thingee> hmm
16:20:26 <jgriffith> avishay: I'm still unclear on how this got so convoluted
16:20:38 <hemna> can you issue a copy volume to image when a volume is attached to a VM ?
16:20:39 <jgriffith> avishay: We *know* what lun we're using when we attach for clone etc
16:20:47 <xyang_> there could be more than one LUN on the same target, if we logout in copy image to volume, other luns can be affected
16:21:06 <jgriffith> xyang_: understood, but since we know the lun why can't we log out "just" that lun
16:21:11 <avishay> the problem is that when you logout, it disconnects ALL LUNs on the same target
16:21:27 <avishay> you can't log out of just one AFAIK
16:21:32 <hemna> well logout is a separate issue from removing the LUN from the kernel
16:21:32 * winston-d checking connectivity
16:21:45 <jgriffith> right, but what I'm saying is I *believe* there's a way to do a logout on JUST the one session/lun
16:21:49 <singn> this is how iscsiadm works when it logs in to a target
16:21:50 <thingee> hemna: only grizzly. folsom just gets security fixes now
16:21:50 <hemna> you can remove a LUN from the kernel by issuing a SCSI subsystem command
16:21:50 <avishay> maybe there is a better way than what nova does
16:21:55 <hemna> w/o doing an iscsi logout
16:22:09 <jgriffith> avishay: that's what I'm wondering
16:22:28 <jgriffith> avishay: xyang_ regardless... I'd propose we file a bug to track it (thought we already did though)
16:22:33 <hemna> you don't need to do a logout to remove a lun
16:22:33 <avishay> hemna: so remove from the kernel, then you can check if there are no more luns and logout?
16:22:40 <jgriffith> and address it after we get hemna 's first version landed
16:22:42 <xyang_> jgriffith: I think avishay already logged a bug
16:22:43 <hemna> you should only logout from an iscsi session when you are done with the host
16:22:44 <avishay> jgriffith: there already is a bug
16:22:53 <jgriffith> xyang_: avishay I thought so :)
16:23:06 <hemna> avishay, yah there is a way I believe
16:23:14 <avishay> OK, so there is a bug open, let's fix it in v2
16:23:23 <hemna> requires some smart parsing of kernel devices in /dev/disk/by-path and knowing the target iqns, etc
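Roughly, per-LUN removal without a session logout looks like this (a sketch; the by-path matching is simplified and real code would need multipath handling):

    # Sketch: drop a single LUN from the kernel via sysfs, then log out of
    # the iSCSI session only when no LUNs on that target remain in use.
    import glob
    import subprocess

    def remove_device(dev_name):
        # e.g. dev_name='sdc'; asks the SCSI subsystem to delete the device
        with open('/sys/block/%s/device/delete' % dev_name, 'w') as f:
            f.write('1')

    def logout_if_unused(target_iqn, portal):
        # simplified check: any by-path entries still referencing the target?
        if not glob.glob('/dev/disk/by-path/*%s*' % target_iqn):
            subprocess.check_call(['iscsiadm', '-m', 'node', '-T', target_iqn,
                                   '-p', portal, '--logout'])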
16:23:29 <jgriffith> I guess the *right* answer is actually the opposite of what I just said
16:23:36 <jgriffith> in order to do the backport correctly
16:23:52 <jgriffith> fix it in the existing code now and backport, then move forward with the new common code
16:24:02 <hemna> ok so we have like 3 issues here :)
16:24:14 <hemna> 1) the detach in the existing cinder code
16:24:36 <hemna> 2) iscsi logout issues that can cause host logouts when LUNS are in use
16:24:44 <hemna> 3) detaches from the kernel
16:24:57 <avishay> FC is so much easier ;)
16:25:00 <hemna> the important one here for now I think is the issue that thingee raised
16:25:06 <hemna> :P
16:25:07 <jgriffith> avishay: ha!  Now that's funny
16:25:16 <winston-d> avishay as long as you have an HBA installed?
16:25:23 <avishay> jgriffith: winston-d  :)
16:25:31 <hemna> I haven't started the FC stuff yet
16:25:31 <jgriffith> winston-d: avishay and you don't care about things like zoning
16:25:41 <avishay> zoning shmoning
16:25:45 <jgriffith> hemna: one thing at a time :)
16:25:46 <hemna> I'll probably do another patch for the FC attach/detach migration into brick
16:25:47 <jgriffith> avishay: :)
16:25:56 <jgriffith> hemna: yes, please do them separately
16:26:03 <hemna> yah that was the plan :)
16:26:06 <seiflotfy_> ah forgot dosaboy is working on the ceph blueprint
16:26:11 <avishay> anyway, looks like a good start - nice work
16:26:12 <seiflotfy_> so i will be trying to assist him then
16:26:23 <hemna> the brocade guys are supposed to be working on the zone manager BP
16:26:30 <winston-d> sorry guys, my network connectivity is very unstable today.
16:26:39 <jgriffith> Ok... anything else on this topic?  I think hemna has a good idea of the challenges and the point thingee brought up
16:26:59 <jgriffith> #topic QoS and Volume Types
16:27:00 <hemna> what should be the plan for the Grizzly detach issue ?
16:27:13 <hemna> ok nm  we can hash it out in #openstack-cinder
16:27:19 <thingee> hemna: sounds good
16:27:25 <jgriffith> hemna: I would have liked to have seen that addressed already TBH
16:27:45 <hemna> yah, I didn't notice it until I started the brick work :(
16:27:48 <jgriffith> hemna: but yes, we'll talk later between xyang_ hemna and whoever else is interested
16:27:55 <hemna> +1
16:28:00 <xyang_> sure
16:28:04 <jgriffith> and avishay
16:28:15 <jgriffith> sorry avishay you can't go home yet ;)
16:28:27 <jgriffith> So... QoS
16:28:29 <avishay> jgriffith: i'm already home :)
16:28:36 <jgriffith> avishay: ;)
16:28:41 <jgriffith> well then we're all set :)
16:28:53 <winston-d> yes, QoS please. :)
16:29:02 <jgriffith> winston-d: where did you patch go?
16:29:09 <jgriffith> ahh found it
16:29:24 <winston-d> it's here: https://review.openstack.org/#/c/29737/
16:29:26 <jgriffith> https://launchpad.net/cinder/+milestone/havana-2
16:29:30 <jgriffith> oops
16:29:32 <jgriffith> sorry
16:29:43 <jgriffith> yeah.. what winston-d said ^^ :)
16:29:53 <jgriffith> I don't know how many of you have looked at this
16:30:04 <jgriffith> but I had some thoughts I wanted to discuss
16:30:17 <jgriffith> I think I commented them pretty well in the review but...
16:30:36 <jgriffith> to summarize, I'm not crazy about introducing unused columns in the DB
16:30:45 <kmartin> I have as well :)
16:30:52 <winston-d> kmartin :)
16:31:02 <jgriffith> and I'm not sure about fighting the battle of trying to cover every possible implementation/verbiage a vendor might use
16:31:11 <jgriffith> I had two possible alternate suggestions:
16:31:23 <jgriffith> 1. Use metadata keys
16:31:36 <jgriffith> This way the vendor can implement whatever they need here
16:31:58 <jgriffith> It's like a "specific" extra-specs entry
16:32:15 <thingee> jgriffith: +1, non-first class features should not be introducing changes to the model.
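To make the metadata-key idea concrete, the settings would ride along as scoped key/value pairs on the volume type, with each backend ignoring keys it doesn't understand. The qos: prefix and the key names below are invented for illustration:

    # Illustrative only: vendor QoS as scoped extra-spec style K/V pairs
    # on a volume type; a driver ignores keys it doesn't understand.
    $ cinder type-create gold
    $ cinder type-key gold set qos:minIOPS=1000 qos:maxIOPS=5000 qos:burstIOPS=8000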
16:32:43 <jgriffith> The other option:
16:32:44 <DuncanT> jgriffith: +1 seems like a sane solution
16:33:14 <jgriffith> 2. Implement QoS - Rate Limiting and QoS - Iops setting
16:33:24 <winston-d> jgriffith I have concerns about having vendor-specific implementation keys stored in the DB for volume types; that makes volume types not compatible with other back-ends.
16:33:32 <thingee> while I was working on the wsme stuff for the api framework switch, it made me realize how complex the volume object is becoming =/
16:33:48 <thingee> as jgriffith mentioned, we're half of what instances are in nova
16:33:54 <jgriffith> winston-d: actually... my proposal
16:34:04 <jgriffith> winston-d: would make it such that it's still compatible, just ignored
16:34:29 <jgriffith> winston-d: in other words if the keys don't mean anything to the driver it just ignores them
16:34:49 <jgriffith> winston-d: this creates some funky business with the filtering, but I think we can resolve that
16:35:04 <jgriffith> winston-d: just leave filter scheduling as a function of the "type"
16:35:08 <thingee> the only thing drivers should agree on is the capability keys.
16:35:09 <jgriffith> not QoS setting
16:35:18 <jgriffith> thingee: I would agree with that
16:35:29 <thingee> but...
16:35:43 <jgriffith> The problem is I see little chance of us all agreeing on what QoS is and how to specify it
16:35:50 <winston-d> thingee i agree as well, but i think QoS is among capabilities.
16:36:09 <jgriffith> winston-d: you're correct, but I think it's a "True/False"
16:36:13 <avishay> You can't call it QoS - that term is overloaded.  This is rate limiting.
16:36:15 <DuncanT> Though that doesn't mean we shouldn't try to get drivers to agree (i.e. point out inconsistencies at review time), just let the standards be de facto rather than prescribed...
16:36:29 <jgriffith> winston-d: and TBH I'm still borderline on whether I count rate-limiting as QoS :)
16:36:34 * thingee thinks there should be a way to extend capabilities if it's not a first class feature.
16:36:35 <avishay> I thought this is what we discussed at the summit :)
16:36:41 <hemna> thingee, +1
16:36:51 <jgriffith> thingee: +1
16:37:11 <jgriffith> DuncanT: so the problem is... there's already an issue
16:37:34 <jgriffith> DuncanT: For example, I use "minIOPS, maxIOPS and burstIOPS"
16:37:38 <jgriffith> on a volume per volume basis
16:37:44 <jgriffith> that can be changed on the fly
16:38:11 <jgriffith> Others use "limit max MB/s Read and limit max MB/s Write"
16:38:17 <winston-d> folks, the QoS bp/patch was at first for client rate-limiting (aka, doing rate-limiting at Nova Compute). So we have to deal with back-ends, as well as hypervisors.
16:38:22 <jgriffith> While yet others use "limit IOPs"
16:38:38 <jgriffith> winston-d: indeed
16:38:52 <jgriffith> winston-d: but what I'm saying is maybe that should be "rate-limiting" and not QoS
16:38:54 <DuncanT> jgriffith: On-the-fly changes don't seem to fit within the framework we've discussed
16:39:10 <DuncanT> jgriffith: Nor per-volume limits (rather than per-type limits)
16:39:12 <jgriffith> DuncanT: updates
16:39:26 <hemna> so you would change those settings on the fly after the volume is created?
16:39:41 <hemna> probably out of scope for this I would presume
16:39:46 <jgriffith> hemna: Yes, that's something I need to be able to do
16:40:02 <jgriffith> well... it's not something I'm asking winston-d to put in his patch
16:40:07 <hemna> ah ok
16:40:10 <DuncanT> I definitely feel that is not within the discussed framework, other than via retyping
16:40:15 <jgriffith> but it's something I'm keeping in mind with the design
16:40:25 <jgriffith> DuncanT: correct
16:40:26 <hemna> that smells like v2 to me
16:40:33 <winston-d> hemna there's no reason why not if back-end/hypervisor supports run-time modification
16:40:35 <avishay> winston-d: does libvirt support changing rate limit settings after the volume is attached?
16:40:41 <hemna> like a volume type update or something like that
16:40:41 <jgriffith> DuncanT: well... it's just like "update extra-specs"
16:41:06 <jgriffith> hemna: I'd like to have it be the same volume-type
16:41:16 <jgriffith> So the volume-type just tells what back-end to use
16:41:17 <hemna> jgriffith, but in this case do you want to update the volume type here, or the specific volume instance's settings
16:41:19 <DuncanT> jgriffith: I don't think that changes existing volumes?
16:41:55 <hemna> like for volume X, update its IOPS settings now.
16:42:06 <jgriffith> hemna: DuncanT so I don't want to kill the discussion on winston-d 's work here with my little problems :)
16:42:09 <winston-d> avishay last time we checked, it should be able to do so. but I didn't try that out
16:42:09 <jgriffith> but...
16:42:15 <avishay> winston-d: ok
16:42:25 <jgriffith> hemna: but yes, that's what I intend to do
16:42:30 <hemna> that'd be cool :)
16:43:03 <jgriffith> DuncanT: to start it most likely would have to be an update to the volume-type
16:43:21 <jgriffith> so for example:  volume-type: A, with QoS: Z
16:43:32 <jgriffith> Update volume-type: A to have QoS: X
16:43:32 <DuncanT> jgriffith: That is entirely outside of any scope of QoS discussed so far... and is going to cause major issues in regards to even slightly trying to standardise behaviours between backends
16:43:43 <jgriffith> DuncanT: why?
16:43:56 <jgriffith> DuncanT: and BTW I've already submitted a patch for this back in Folsom
16:44:10 <hemna> well I think it's a new feature that hasn't been discussed yet, but should be put in a new BP and scheduled.
16:44:13 <DuncanT> jgriffith: Because the possibility matrix explodes, as far as what backends can do what features
16:44:27 <jgriffith> DuncanT: that's why I'm saying you don't hard code that shit
16:44:36 <jgriffith> DuncanT: That's the whole point of using metadata keys
16:44:39 <winston-d> if we prefer K/V pairs for QoS metadata, maybe we should have a set of fixed keys?
16:44:55 <jgriffith> winston-d: can you expand on that?
16:45:12 <hemna> that's just the key standardization discussion all over again :)
16:45:30 <xyang_> hemna: 2 sessions at the summit :)
16:45:36 <hemna> :)
16:45:43 <avishay> and no conclusions, obviously
16:45:54 <jgriffith> xyang_: hemna the good thing is it's pared down in terms of scope
16:46:03 <hemna> true
16:46:12 <jgriffith> avishay: I think we tried to tackle too large of a problem in the summit sessions
16:46:28 <jgriffith> winston-d: can you tell me more about what you're thinking with the standard keys?
16:46:33 <avishay> jgriffith: agreed.  I also think that we failed to agree on simpler use cases than this.
16:46:35 <winston-d> for example, KVM/libvirt only accepts total/read/write bytes/iops per sec.
16:47:08 <guitarzan> we need to keep track of what we're discussing...
16:47:18 <winston-d> so for a QoS setting that requires the client to do the enforcement, these keys must be there, at least set to 0
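For context, those are the six knobs libvirt exposes in a disk's <iotune> element; the values below are illustrative, with 0 meaning unlimited:

    <!-- the fixed vocabulary KVM/libvirt accepts for per-disk rate limiting -->
    <iotune>
      <total_bytes_sec>0</total_bytes_sec>
      <read_bytes_sec>10485760</read_bytes_sec>
      <write_bytes_sec>10485760</write_bytes_sec>
      <total_iops_sec>0</total_iops_sec>
      <read_iops_sec>200</read_iops_sec>
      <write_iops_sec>200</write_iops_sec>
    </iotune>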
16:47:21 <guitarzan> we're confusing client side, backend, qos, capabilities
16:47:53 <jgriffith> winston-d: I get that
16:48:16 <jgriffith> winston-d: so that brings me back to thinking that we have two types of performance control
16:48:27 <jgriffith> 1. Hypervisor rate-limiting
16:48:34 <jgriffith> 2. Vendor/Backend implemented
16:48:36 <winston-d> I think the whole idea behind the bp/patch is we try to find a way to express QoS requirements for volume types in Cinder, which can be consumed either by Nova or cinder back-ends.
16:49:18 <rushiagr> winston-d: and that I guess is mixing 1 and 2?
16:49:22 <jgriffith> winston-d: indeed, but what I'm proposing is
16:49:36 <jgriffith> rushiagr: haha.... I think we're thinking the same thing
16:49:44 <avishay> s/QoS/rate limiting/g  might make this issue easier to agree on
16:49:52 <jgriffith> winston-d: rushiagr so what if we had set keys for hypervisor limiting
16:50:01 <jgriffith> and arbitrary K/V's for vendors
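A hypothetical shape for that split, with every name invented for illustration:

    # Sketch of jgriffith's proposal: a small fixed vocabulary the
    # hypervisor can enforce, next to an opaque vendor section that only
    # the matching backend reads. All key names are made up.
    qos_spec = {
        # fixed keys, safe for Nova/libvirt to consume for rate limiting
        'front_end': {'read_iops_sec': 200, 'write_iops_sec': 200},
        # arbitrary vendor K/Vs; a backend ignores what it doesn't understand
        'back_end': {'minIOPS': 1000, 'maxIOPS': 5000, 'burstIOPS': 8000},
    }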
16:50:13 <jgriffith> avishay: now that's more what I'm thinking!!
16:50:19 <winston-d> avishay well, it was simply client-side rate limiting at first. :)
16:50:26 <jgriffith> avishay: I don't think we should treat rate limiting as QoS
16:50:40 <thingee> 10 min warning
16:50:45 <jgriffith> DOHHHH
16:50:49 <guitarzan> surprise
16:51:10 <jgriffith> I don't think we're going to agree on representation
16:51:20 <jgriffith> But I do think we should be able to agree on:
16:51:30 <jgriffith> 1. Should QoS and Rate Limiting be separate concepts
16:51:40 <jgriffith> 2. Should QoS be abstract K/V pairs
16:51:51 <jgriffith> thoughts on those two points?
16:52:11 <hemna> +1 to both
16:52:25 <jgriffith> winston-d: avishay thingee kmartin rushiagr ?
16:52:31 <DuncanT> I'm not convinced QoS and rate limiting are different concepts
16:52:36 <rushiagr> +1 for 2
16:52:36 <jgriffith> DuncanT: they are
16:52:39 <DuncanT> +1 on 2. though
16:52:43 <xyang_> +1 for 2
16:52:43 <thingee> yes
16:52:45 <jgriffith> but I can argue with you over a beer on that one :)
16:52:51 <winston-d> +1 for 2.
16:52:55 <avishay> QoS means different things to practically everyone
16:53:02 <hemna> yup
16:53:05 <kmartin> I'm ok with #2 but #1 is the same as far as HP is concerned
16:53:08 <avishay> I can say that Flash vs. HDD is QoS
16:53:24 <guitarzan> do the decisions on 1 & 2 affect the client vs backend question?
16:53:31 <rushiagr> the first one needs some more discussion I guess. Need to think more on the idea of separating hypervisor/backend stuff
16:53:32 <hemna> kmartin, well HP 3PAR that is.
16:53:50 <winston-d> guitarzan client side usually can only do rate-limiting, AFAIK
16:53:52 <bswartz> QoS is more about guaranteed minimums than it is about maximums
16:53:54 <guitarzan> avishay: we call those two different products
16:53:55 <DuncanT> jgriffith: Certainly it is a non-trivial argument space but ultimately the only sane conclusion is that they are the same class of thing :-)
16:53:56 <jgriffith> guitarzan: that might help me win my argument :)
16:54:04 <hemna> bswartz, +1
16:54:04 <DuncanT> jgriffith: I'll buy the first round
16:54:05 <jgriffith> guitarzan: I could go for that :)
16:54:10 <jgriffith> DuncanT: :)
16:54:22 <jgriffith> Ok.. one more minute on this
16:54:28 <jgriffith> I think we all agree on #2 then
16:54:35 <jgriffith> The only question is #1
16:54:40 <bswartz> +1 on both
16:54:41 <jgriffith> I'm willing to compromise here I think
16:54:47 <jgriffith> Yay!!! bswartz
16:54:53 <guitarzan> I think #1 and the client/backend question are easily bigger than "what keys"
16:55:09 <winston-d> +2 for 1 if we shift QoS to 'I' release...
16:55:14 <jgriffith> I think guitarzan makes a good point, what about separating client and backend
16:55:30 <guitarzan> and here's my last off the wall idea
16:55:35 <jgriffith> winston-d: hmmm... that could hurt
16:55:47 <guitarzan> maybe the client side stuff should be stuck on an "attachment" instead of the volume itself
16:56:00 <jgriffith> guitarzan: I actually like that idea
16:56:09 <jgriffith> guitarzan: I think it's come up before actually
16:56:10 <winston-d> jgriffith never mind. i can do both for 1.
16:56:38 <bswartz> winston-d: https://launchpad.net/~openstack/+poll/i-release-naming
16:56:44 <jgriffith> What do others think of the separation of client/backend implemented?
16:57:02 <winston-d> jgriffith +1
16:57:13 <avishay> fine by me
16:57:34 <jgriffith> cool!
16:57:38 <jgriffith> kmartin: you good?
16:57:40 <jgriffith> hemna: ?
16:57:44 <guitarzan> I think things may get refined after someone actually implements something
16:57:44 <jgriffith> bswartz: rushiagr ?
16:57:44 <DuncanT> jgriffith: But the end result is the same, whether the rate limit is enforced on hypervisor or backend
16:57:46 <jgriffith> DuncanT: ?
16:57:47 <guitarzan> s/think/hope/
16:57:58 <kmartin> +1
16:58:04 <haomaiwang> +1
16:58:05 <bswartz> I don't understand what the client side implementation has to do with cinder
16:58:11 <hemna> +1
16:58:17 <winston-d> guitarzan https://review.openstack.org/#/c/29737/
16:58:21 <jgriffith> DuncanT: well.. backend becomes K/V's and client is "set" semantics that get stored in the DB
16:58:29 <guitarzan> winston-d: touche :)
16:58:32 <winston-d> bswartz just like volume encryption on the client side.
16:58:36 <avishay> slight difference - doing it in the hypervisor adds rate limiting to the network connection as well
16:58:47 <bswartz> clients are welcome to limit themselves, but it's not our business
16:59:11 <jgriffith> bswartz: fair point, but I like the idea of having that setting in Cinder via the attach
16:59:18 <bswartz> winston-d: okay well I can see it from that perspective
16:59:20 <jgriffith> bswartz: and it allows us to keep from double implementing
16:59:22 <guitarzan> shocker, our one minute is over :)
16:59:36 <jgriffith> bswartz: in other words set it on the backend and on the hypervisor
16:59:46 <jgriffith> Darn you time!!!
17:00:07 <DuncanT> bswartz: Like encryption, cinder is the single place to store this kind of info... and I'd really rather most customers don't see things like rate limiting
17:00:16 <thingee> yup, that's time
17:00:23 <bswartz> okay I take your points
17:00:25 <thingee> see ya all in #openstack-cinder
17:00:27 <jgriffith> Ok... suppose that will do it
17:00:30 <DuncanT> Shall we move to the cinder channel? I know dosaboy still has a question...
17:00:31 <bswartz> cinder does need to understand rate limiting
17:00:49 <winston-d> it always makes me feel that I'm back to OSD when discussing standardizing things among back-ends.
17:00:53 <winston-d> which is good. :)
17:01:00 <jgriffith> :)
17:01:05 <danwent> who's here for the vmware driver meeting?
17:01:11 <jgriffith> alright, I need to wrap and go to my next meeting :(
17:01:18 <jgriffith> #end meeting cinder
17:01:23 <danwent> jgriffith: ah, sorry, thought you were already done
17:01:26 <jgriffith> #endmeeting cinder