16:04:03 <jgriffith> #startmeeting
16:04:04 <openstack> Meeting started Wed Jul 25 16:04:03 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:18 <jgriffith> Howdy everyone!
16:04:18 <DuncanT> Hey
16:04:25 <avishay> Hey
16:04:31 <Vincent_Hou> Hi
16:04:55 <jgriffith> Alright, so I didn't put up an agenda this week....
16:05:08 <thingee> o/
16:05:21 <dricco> hi
16:05:24 <jgriffith> The only thing listed is dricco's blueprint
16:05:54 <dricco> first up, wahooooo!
16:06:01 <jgriffith> So, let's start with that https://blueprints.launchpad.net/nova/+spec/volume-usage-metering
16:06:06 <jgriffith> dricco: It's all yours
16:06:21 <dricco> jgriffith: thx
16:06:40 <dricco> so I went to the ceilometer meeting last week
16:07:17 <dricco> the guys were happy with the blueprint and they were happy with me implementing it in folsom
16:07:26 <jgriffith> Sounds good...
16:07:37 <jgriffith> So you're going to be able to tie in with them
16:07:47 <dricco> after folsom we can then move to ceilometer if we wish
16:07:59 <jgriffith> ah... sounds like a good plan
16:08:04 <dricco> yup, going to go to the meeting tomorrow  as well
16:08:10 <jgriffith> great
16:08:18 <jgriffith> Ok, so my next question...
16:08:22 <jgriffith> resources?
16:08:25 <dricco> was hoping to have something out for review tomorrow but it looks more like friday at best
16:08:32 <renuka> guessing this will be implemented only for libvirt?
16:08:36 <Vincent_Hou> merge into ceilometer?
16:08:46 <thingee> what's the hold-up with tying it in now? Sorry, I'm not completely aware of their status. From the last meeting, with them being accepted into core, I think they have no API exposed?
16:08:50 <dricco> renuka: yes
16:09:37 <renuka> dricco: right, we can look into adding the xenapi support
16:09:55 <dricco> Vincent_Hou: might be a slight rewrite but the proof of concept will be there
16:10:00 <avishay> So this relies on compute to aggregate all of the usage data?
16:10:09 <dricco> avishay: yes
16:10:22 <jgriffith> dricco: Then it's not really a "cinder" feature eh?
16:10:23 <dricco> using a periodic task
16:10:38 <dricco> jgriffith: for now, no
16:10:58 <dricco> but I think in the future we might require a process on every compute host?
16:11:08 <jgriffith> dricco: compute or volume/cinder host?
16:11:14 <avishay> Some of the backends already collect usage statistics themselves.  Maybe that can be used instead or in addition?
16:11:18 <dricco> compute host
16:11:38 <jgriffith> dricco: Ok, so this ends up being a nova feature that we just benefit from
16:11:44 <jgriffith> :)
16:11:53 <dricco> there is a bandwidth task that collects network usage for instances
16:12:03 <dricco> :)
16:12:24 <dricco> my design follows that current implementation
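A minimal sketch of what such a compute-side periodic task might gather, assuming the python-libvirt bindings; the function name and the sample layout are illustrative, but blockStats() is the actual libvirt call this style of polling would rest on:

    import time
    import xml.etree.ElementTree as ET

    import libvirt  # python-libvirt bindings

    def poll_volume_usage(conn_uri="qemu:///system"):
        """Collect cumulative I/O counters for every disk attached to
        each instance on this compute host."""
        conn = libvirt.open(conn_uri)
        samples = []
        for dom in conn.listAllDomains():
            tree = ET.fromstring(dom.XMLDesc(0))
            for target in tree.findall("devices/disk/target"):
                dev = target.get("dev")  # e.g. 'vdb'
                rd_req, rd_bytes, wr_req, wr_bytes, _errs = dom.blockStats(dev)
                samples.append({
                    "instance": dom.name(),
                    "device": dev,
                    "rd_req": rd_req, "rd_bytes": rd_bytes,
                    "wr_req": wr_req, "wr_bytes": wr_bytes,
                    "timestamp": time.time(),
                })
        conn.close()
        return samples

The counters are cumulative, so the periodic task would emit these as notifications and leave any rate calculations to the consumer.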
16:12:42 <thingee> dricco: sorry I think my question was missed. what's the hold up with tying it into ceilometer now?
16:13:04 <dricco> it's a requirement for us for folsom
16:13:09 <thingee> got it
16:13:21 <dricco> ceilometer won't be released till post folsom
16:13:33 <thingee> yup yup, sounds good
16:13:48 <jgriffith> dricco: So really from my perspective on the Cinder side I don't have any real input for you
16:14:09 <jgriffith> dricco: I do get wary anytime I see polling in OpenStack...
16:14:23 <dricco> no problem, just wanted to touch base to see if everyone was ok with it
16:14:33 <thingee> jgriffith, dricco: I'm fine as long as keystone isn't doing it ;)
16:14:45 <rnirmal> one point would be... some of the volume backends provide this sort of data, is that something we want to consider for this?
16:14:45 <jgriffith> thingee: hehe
16:14:51 <avishay> It's not clear to me why the statistics collection is done at the compute side and not at the cinder side
16:15:01 <rnirmal> instead of tying it into nova-compute
16:15:16 <avishay> I think even with LVM you can get these kinds of statistics from /proc
16:15:37 <renuka> dricco: I noticed you do not have rate-of-... in your table. Is that something that could be important?
16:16:16 <jgriffith> rnirmal: You make an excellent point
16:16:18 <rnirmal> I'm not so sure that's the right approach, also having to do it for each of the hypervisors
16:16:29 <dricco> renuka: we could deduce it from a timestamp and the total I/O for that time
16:16:39 <rnirmal> what if a volume is idle for a period of time... not attached to any compute instances
16:16:51 <renuka> dricco: Not accurately, right? The total is over all attaches, I would expect?
16:16:52 <rnirmal> you don't get the usage data then
16:16:56 <thingee> rnirmal: we're already storing other usage info on the compute side for cinder. Doesn't make sense to separate it out even more
16:17:09 <thingee> rnirmal: it's also temporary. It's going to ceilometer after folsom
16:17:49 <rnirmal> thingee: ah ok.. but is ceilometer even the right place
16:18:14 <jgriffith> rnirmal: That's the whole idea of ceilometer so it's the "right" place I think
16:18:20 <rnirmal> is it so it's read in a uniform manner irrespective of the volume backends?
16:18:32 <dricco> renuka: there are also totals per attach cycle but I take your point. rate-of could be very useful for debugging
16:18:56 <jgriffith> So here's my thoughts...
16:19:08 <jgriffith> I think the first step is to implement the blueprint as it's written here
16:19:16 <jgriffith> I think there's another level of reporting though
16:19:23 <jgriffith> ie idle volumes etc
16:19:27 <renuka> dricco: Also useful if the IO is bursty... (if that affects things)
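To make the rate-of point concrete: given two cumulative samples shaped like those in the sketch above, a consumer could approximate throughput between polls. This helper is hypothetical and, as renuka notes, it is only valid within a single attach cycle and blind to bursts shorter than the poll interval:

    def io_rate_bytes_per_sec(prev, curr):
        """Approximate combined read+write throughput between two samples."""
        dt = curr["timestamp"] - prev["timestamp"]
        if dt <= 0:
            return 0.0  # duplicate sample or clock skew; avoid dividing by zero
        delta = (curr["rd_bytes"] + curr["wr_bytes"]
                 - prev["rd_bytes"] - prev["wr_bytes"])
        return max(delta, 0) / dt  # counters reset on reattach; clamp to zero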
16:19:32 <dricco> rnirmal: I think we need it calculated on the compute side because we want to charge the customer for the I/O.
16:19:35 <jgriffith> I think that is something that will need to be implemented in Cinder
16:19:40 <dricco> we should charge for what they see
16:19:45 <avishay> I would also consider adding latency, not just throughput
16:19:51 <dricco> in /proc/diskstats
16:20:40 <rnirmal> dricco: ok I don't think I'm totally convinced but I agree this is something to start with
16:20:53 <dricco> if you calculate on the backend then you might miss some I/O because of caching etc
16:20:57 <avishay> I agree with rnirmal
16:21:25 <jgriffith> I believe there's a whole separate set of stats that need to be gathered that will have to be done in the volume/cinder code
16:21:27 <renuka> dricco: so in case of remote volumes, where are we running /proc/diskstats? The volumes may not always be visible to the volume service right?
16:21:35 <jgriffith> and can be implemented via the backends
16:21:59 <avishay> dricco, I guess it depends on how the billing is calculated?  Maybe they want to charge less for cached I/Os?  An incentive to write well-behaved apps?
16:22:07 <dricco> renuka: I mean /proc/diskstats in the VM
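For reference, the guest-visible counters dricco means live in /proc/diskstats; a minimal parser looks roughly like this (field positions per the kernel's Documentation/iostats.txt, one sector being 512 bytes):

    def read_diskstats(path="/proc/diskstats"):
        """Parse cumulative per-device I/O counters as seen inside the VM."""
        stats = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                name = fields[2]  # device name, e.g. 'vdb'
                stats[name] = {
                    "reads": int(fields[3]),
                    "read_bytes": int(fields[5]) * 512,   # sectors read
                    "writes": int(fields[7]),
                    "write_bytes": int(fields[9]) * 512,  # sectors written
                }
        return stats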
16:22:24 <rnirmal> jgriffith: agree
16:22:34 <renuka> dricco: ok that makes sense to me. I am still unclear on how we would do it entirely on the cinder side
16:23:03 <jgriffith> renuka: I don't think that's possible
16:23:14 <jgriffith> renuka: I think the path dricco is on to start is correct
16:23:26 <DuncanT> Collecting I/O metrics on the server side is interesting too; it is just a different use-case from dricco's work
16:23:27 <jgriffith> renuka: But I think there's additional info that will be desirable from cinder
16:23:45 <renuka> jgriffith: gotcha
16:23:57 <jgriffith> I say dricco should run with what he has
16:24:02 <jgriffith> :)
16:24:07 <thingee> +1
16:24:19 <rnirmal> dricco: is this just going to be i/o metrics for attached volumes or also root/ephemeral disk ?
16:24:25 <DuncanT> +1
16:24:36 <renuka> +1
16:25:20 <avishay> I think it's a good start, but we should keep in mind for the future that cinder backends could be queried for richer statistics that could be useful for billing, debugging, and for the customer
16:25:43 <jgriffith> avishay: Agreed
16:26:03 <jgriffith> I believe there are going to be levels of monitoring/reporting
16:26:04 <dricco> rnirmal: just for nova volumes in the attached state
16:26:16 <DuncanT> avishay: We (I work with dricco) have some thoughts on that too (we've implemented a version we use in-house), but not today :-)
16:27:00 <avishay> DuncanT: sounds good :)
16:27:05 <jgriffith> Ok, sounds like we're more or less all in agreement
16:27:14 <jgriffith> Thanks dricco!
16:27:27 <dricco> thanks everyone :-)
16:27:29 <Vincent_Hou> Do we have a unified billing model for OpenStack? I mean, no matter whether it's nova or cinder, all taken into account.
16:27:30 <jgriffith> I'll look forward to seeing how it all comes out
16:27:42 <jgriffith> Also, just to be sure, make sure the polling is configurable :)
16:27:47 <jgriffith> and can be disabled
16:27:59 <jgriffith> ie intervals
16:28:22 <jgriffith> Vincent_Hou: I think the answer to your question is "no"
16:28:28 <dricco> jgriffith: will do
16:28:31 <rnirmal> +1 for disabled, since some providers just charge for the volume gbs
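A sketch of the knob jgriffith and rnirmal are asking for, using the oslo-style cfg options nova carried in its openstack/common copy at the time; the option name itself is made up:

    from nova.openstack.common import cfg

    volume_usage_opts = [
        cfg.IntOpt("volume_usage_poll_interval",
                   default=0,
                   help="Seconds between volume usage polls; "
                        "0 disables the periodic task entirely."),
    ]

    cfg.CONF.register_opts(volume_usage_opts)

    def volume_usage_polling_enabled():
        # Providers that bill only on volume GBs can leave this at 0.
        return cfg.CONF.volume_usage_poll_interval > 0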
16:28:37 <jgriffith> Vincent_Hou: But I believe that's part of what ceilo is trying to accomplish
16:28:50 <Vincent_Hou> all right
16:29:05 <jgriffith> Ok...
16:29:11 <jgriffith> #topic status updates
16:29:28 <jgriffith> There's been a lot going on the past week with bugs and fixes
16:29:34 <thingee> Vincent_Hou: http://wiki.openstack.org/EfficientMetering
16:30:13 <jgriffith> thingee: and Vincent_Hou have been very busy reporting and fixing bugs :)
16:30:53 <thingee> and backporting to nova!
16:30:58 <thingee> :D
16:31:12 <jgriffith> thingee: yes, sadly we're stuck with backporting for now it seems
16:31:42 <jgriffith> Vincent_Hou: unfortunately I'm not sure what I'm going to do with your snapshot delete bug
16:31:53 <avishay> jgriffith: We have seen an issue that attaching a volume fails with our driver - we're debugging now to see if it's something in our driver or generic in cinder
16:32:01 <Vincent_Hou> well, I have found something new.
16:32:07 <jgriffith> avishay: sorry... which driver?
16:32:19 <Vincent_Hou> I put all my comments within that bug.
16:32:29 <jgriffith> Vincent_Hou: Yeah, I read that this morning
16:32:31 <avishay> jgriffith: the storwize_svc driver that we submitted and you reviewed
16:32:37 <jgriffith> Vincent_Hou: That's what's troubling :)
16:32:45 <jgriffith> avishay: Ahh.. thanks
16:33:05 <jgriffith> avishay: You'll have to forgive me, I don't always connect IRC nicks with those that submit code :)
16:33:16 <avishay> jgriffith, no problem :)
16:33:19 <Vincent_Hou> I added one more comment an hour ago.
16:34:27 <thingee> Vincent_Hou: I can spend some time tomorrow profiling it on the different ubuntu versions
16:34:35 <jgriffith> perfect
16:34:41 <Vincent_Hou> thx, Mike.
16:35:02 <jgriffith> I'd like to find out how to make this not so miserably slow if we can
16:35:23 <jgriffith> I've been working on getting devstack to use cinder as default
16:35:33 <thingee> woo!
16:35:43 <Vincent_Hou> how is it?
16:35:46 <jgriffith> I'm hoping the various patches will all land today and we can get this DONE
16:35:59 <jgriffith> So there were some *problems*
16:36:06 <jgriffith> Tempest was failing for a number of reasons
16:36:17 <jgriffith> I have 3 patches in review to fix
16:36:49 * jgriffith is gathering patch id's
16:37:24 <jgriffith> https://review.openstack.org/#/c/10200/
16:37:31 <jgriffith> https://review.openstack.org/#/c/10262/
16:37:39 <jgriffith> https://review.openstack.org/#/c/10263/
16:38:11 <jgriffith> Speaking of which.... if any of you have time for reviews, I'd really like to get the cinderclient one approved and merged asap
16:38:53 <jgriffith> If you would like more explanation of the problems etc let me know and I'm happy to go through it
16:39:16 <jgriffith> In a nutshell, it's tweaks for having a volume service outside of nova
16:40:09 <jgriffith> So other than that...
16:41:07 <jgriffith> I'd like to get winstond's implementation of the snapshots fix in
16:41:23 <jgriffith> and I still need to get back to the quota issues in the cinderclient
16:41:49 <jgriffith> I never heard back from clayg, so if anybody has a chance to take a look today it would be VERY helpful
16:42:32 <thingee> jgriffith: link?
16:42:36 <jgriffith> anyone want to have a look at it?
16:42:47 <jgriffith> thingee: So no link yet
16:43:07 <jgriffith> Recall from last week I'm having issues getting the endpoints sorted correctly
16:43:29 <jgriffith> I was able to send quota commands from cinderclient no problem but they pointed to nova :(
16:43:44 <jgriffith> After tweaking things to make that work I get 404 errors
16:43:59 <jgriffith> I'm missing something in the extension code I believe but not sure
16:44:21 <jgriffith> bswartz: Hey... don't let me forget to talk to you later
16:44:53 <jgriffith> anyway, it's in my github https://github.com/j-griffith/cinder.git and python-cinderclient.git
16:45:11 <jgriffith> You can get a recap from last weeks meeting minutes: http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-18-16.01.log.html
16:45:17 <bswartz> jgriffith: okay
16:45:20 <DuncanT> Will start looking at those 3 reviews now... can't promise how long I'll get before being pulled off though :-(
16:46:06 <jgriffith> DuncanT: No worries, the nova folks should handle their two shortly.
16:46:17 <jgriffith> DuncanT: The main thing from our side is the python-cinderclient review
16:46:28 <jgriffith> DuncanT: Hate to just +2/A it myself :)
16:46:49 <jgriffith> alrighty...
16:47:14 <jgriffith> #topic user migration (nova-volume to Cinder)
16:47:40 <jgriffith> Once the devstack stuff all lands and the quota pieces are in place...
16:47:55 <jgriffith> We need to get some tooling and testing for customer migration going
16:48:16 <jgriffith> Not sure if there are folks here that are interested in looking at this with me?
16:48:21 * jgriffith nudges DuncanT
16:48:30 <DuncanT> We collectively certainly are
16:48:40 <jgriffith> DuncanT: I believe you have a vested interest here :)
16:49:19 <DuncanT> I'll get TimR to assign it to somebody officially so there is less of the 'didn't have time' issue... We definitely have a vested interest
16:49:27 <jgriffith> Ok, so that's just a heads up that I'm likely to be bugging a few of you on this
16:50:03 <jgriffith> Alright... the only other thing that I was interested in was CHAP
16:50:23 <jgriffith> anybody have any thoughts about implementing CHAP in nova-vol and Cinder?
16:50:42 <jgriffith> Not necessarily asking you to sign up for the work, but wanting your input
16:50:50 <bswartz> what's to implement?
16:51:01 <bswartz> I thought the existing drivers had chap support
16:51:11 <jgriffith> bswartz: Nope
16:51:22 <jgriffith> bswartz: Unless something has changed that I'm unaware of :)
16:52:06 <bswartz> well the drivers have a notion of authentication, with a username and password
16:52:12 <jgriffith> bswartz: Some of the backends do their own implementation to make this work, but it's not in the default iSCSI driver
16:52:18 <bswartz> the actual chap is handled elsewhere ofc
16:52:59 <bswartz> I never tested it, but I assumed the iscsi initiator on the compute host did the chap authentication
16:53:18 <bswartz> otherwise what would be the point of having a username and password in the driver?
16:53:30 <jgriffith> It does if the backend driver implements it and puts it in the model
16:54:12 * jgriffith is looking for the bug on this
16:54:39 <DuncanT> We use the provider_loc and provider_auth fields in our driver for something not actually auth related
16:54:52 <jgriffith> https://bugs.launchpad.net/bugs/1025667
16:55:14 <bswartz> that's a broken link for me
16:55:20 <DuncanT> 'provider_location' sorry
16:55:27 <jgriffith> yeah, just realized it's not a public page... sorry
16:56:18 <jgriffith> alright, well I'll look at this later this week
16:56:33 <jgriffith> The fact is that chap is not supported by default and it should be
16:56:34 <bswartz> Well my thinking was that the backend was the right place for stuff like chap authentication. If there is a gain from unifying the implementation, then that's a good idea.
16:56:43 <jgriffith> or I would *like* it to be
16:57:25 <jgriffith> bswartz: yes, the problem is that it's "optional" right now
16:57:43 <jgriffith> bswartz: And its implementation is entirely backend dependent
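For context on where CHAP hooks in today: the reference iSCSI path stores credentials as a single provider_auth string on the volume, which the attach code splits apart when logging in the initiator. A rough sketch of that convention (the helper names here are made up):

    def build_provider_auth(chap_username, chap_password):
        """Driver side: record CHAP credentials the way the iSCSI attach
        path expects them, as one 'CHAP <username> <password>' string."""
        return {"provider_auth": "CHAP %s %s" % (chap_username, chap_password)}

    def parse_provider_auth(volume):
        """Attach side: recover credentials to feed iscsiadm, e.g.
        iscsiadm ... --op update -n node.session.auth.authmethod -v CHAP"""
        auth = volume.get("provider_auth")
        if not auth:
            return None  # the 'optional' case jgriffith is flagging
        auth_method, auth_username, auth_secret = auth.split()
        return auth_method, auth_username, auth_secret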
16:57:48 <avishay> Does it make sense to integrate with keystone here?  I think the last time I thought about it, I came to the conclusion that it doesn't.
16:58:05 * jgriffith hears thingee groaning
16:58:08 <Vincent_Hou> i agree
16:58:33 <jgriffith> avishay: Vincent_Hou: Not sure what you have in mind?
16:59:22 <DuncanT> jgriffith: We'd like to bring up the whole attach dataflow path as a security issue again... it got shelved before but it is something that should be looked at carefully in cinder
16:59:22 <avishay> jgriffith: for example, storing CHAP tokens in keystone
16:59:32 <Vincent_Hou> isn't keystone supposed to do the authentication?
17:00:00 <jgriffith> avishay: that's an idea...
17:00:32 <thingee> avishay: keystone is for policy and tokens specifically for projects in the openstack family
17:00:43 <jgriffith> Vincent_Hou: Yes, but there's a context, and I don't know if this level is quite appropriate
17:01:16 <jgriffith> Alright, we're out of time...
17:01:18 <DuncanT> Be aware that some drivers (like ours :-) ) do auth quite differently in a way that looks nothing like chap
17:01:20 <thingee> whew
17:01:54 <avishay> Oh, now I remember what the problem was...
17:02:08 <jgriffith> avishay: ?
17:02:35 <jgriffith> Ok... hate to cut folks off, but
17:02:48 <jgriffith> Amazing how quickly this hour goes by every week
17:03:00 <jgriffith> There's always #openstack-cinder :)
17:03:02 <avishay> jgriffith: i'll follow up with you in #openstack-cinder
17:03:13 <jgriffith> avishay: Sounds good
17:03:17 <jgriffith> Thanks everyone
17:03:21 <dricco> thx
17:03:24 <jgriffith> #endmeeting