16:04:03 #startmeeting
16:04:04 Meeting started Wed Jul 25 16:04:03 2012 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:18 Howdy everyone!
16:04:18 Hey
16:04:25 Hey
16:04:31 Hi
16:04:55 Alright, so I didn't put up an agenda this week....
16:05:08 o/
16:05:21 hi
16:05:24 The only thing listed is dricco's blueprint
16:05:54 first up, wahooooo!
16:06:01 So, let's start with that https://blueprints.launchpad.net/nova/+spec/volume-usage-metering
16:06:06 dricco: It's all yours
16:06:21 jgriffith: thx
16:06:40 so I went to the ceilometer meeting last week
16:07:17 the guys were happy with the blueprint and they were happy with me implementing it in folsom
16:07:26 Sounds good...
16:07:37 So you're going to be able to tie in with them
16:07:47 after folsom we can then move to ceilometer if we wish
16:07:59 ah... sounds like a good plan
16:08:04 yup, going to go to the meeting tomorrow as well
16:08:10 great
16:08:18 Ok, so my next question...
16:08:22 resources?
16:08:25 was hoping to have something out for review tomorrow but it looks more like friday at best
16:08:32 guessing this will be implemented only for libvirt?
16:08:36 merge into ceilometer?
16:08:46 what's the hold up with tying it in now? Sorry, not completely aware of their status. Last meeting, with them being accepted in core, I think they have no api exposed?
16:08:50 renuka: yes
16:09:37 dricco: right, we can look into adding the xenapi support
16:09:55 Vincent_Hou: might be a slight rewrite but the proof of concept will be there
16:10:00 So this relies on compute to aggregate all of the usage data?
16:10:09 avishay: yes
16:10:22 dricco: Then it's not really a "cinder" feature eh?
16:10:23 using a periodic task
16:10:38 jgriffith: for now, no
16:10:58 but i think in the future we might require a process on every compute host?
16:11:08 dricco: compute or volume/cinder host?
16:11:14 Some of the backends already collect usage statistics themselves. Maybe that can be used instead or in addition?
16:11:18 compute host
16:11:38 dricco: Ok, so this ends up being a nova feature that we just benefit from
16:11:44 :)
16:11:53 there is a bandwidth task that collects network usage for instances
16:12:03 :)
16:12:24 my design follows that current implementation
16:12:42 dricco: sorry, I think my question was missed. what's the hold up with tying it into ceilometer now?
16:13:04 it's a requirement for us for folsom
16:13:09 got it
16:13:21 ceilometer won't be released till post folsom
16:13:33 yup yup, sounds good
16:13:48 dricco: So really, from my perspective on the Cinder side I don't have any real input for you
16:14:09 dricco: I do get wary anytime I see polling in OpenStack
16:14:23 no problem, just wanted to touch base to see if everyone was ok with it
16:14:33 jgriffith, dricco: I'm fine as long as keystone isn't doing it ;)
16:14:45 one point would be... some of the volume backends provide this sort of data, is that something we want to consider for this?
16:14:45 thingee: hehe
16:14:51 It's not clear to me why the statistics collection is done at the compute side and not at the cinder side
16:15:01 instead of tying it into nova-compute
16:15:16 I think even with LVM you can get these kinds of statistics from /proc
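
(A side note on the /proc point above: for an LVM-backed volume the per-device counters do live in /proc/diskstats. A minimal sketch of reading them follows; the field layout matches the kernel's documented iostats format, and the device name in the example is purely illustrative, not anything from the blueprint.)

    def read_diskstats(device):
        """Return (read_bytes, write_bytes, reads, writes) for one block device.

        Columns after major/minor/name are: reads completed, reads merged,
        sectors read, ms reading, writes completed, writes merged, sectors
        written, ...  Sectors in /proc/diskstats are always 512 bytes.
        """
        with open('/proc/diskstats') as stats:
            for line in stats:
                fields = line.split()
                if fields[2] == device:
                    return (int(fields[5]) * 512, int(fields[9]) * 512,
                            int(fields[3]), int(fields[7]))
        raise ValueError('%s not found in /proc/diskstats' % device)

    # e.g. an LVM volume exposed as dm-3 (illustrative device name)
    print(read_diskstats('dm-3'))
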
16:15:37 dricco: I noticed you do not have rate-of-... in your table. Is that something that could be important?
16:16:16 rnirmal: You make an excellent point
16:16:18 I'm not so sure that's the right approach, also having to do it for each of the hypervisors
16:16:29 renuka: we could deduce it from a timestamp and total i/o for that time
16:16:39 what if a volume is idle for a period of time... not attached to any compute instances
16:16:51 dricco: Not accurately, right? Total is over all attaches, I would expect?
16:16:52 you don't get the usage data then
16:16:56 rnirmal: we're already storing other usage info on the compute side for cinder. Doesn't make sense to separate it out even more
16:17:09 rnirmal: it's also temporary. It's going to ceilometer after folsom
16:17:49 thingee: ah ok.. but even so, is ceilometer the right place?
16:18:14 rnirmal: That's the whole idea of ceilometer, so it's the "right" place I think
16:18:20 is it so it's read in a uniform manner irrespective of the volume backends?
16:18:32 renuka: there are also totals per attach cycle but i take your point. rate-of could be very useful for debugging
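
(On the rate-of point: deducing a rate from a timestamp plus a running total, as described above, is just a delta over elapsed time between two samples within one attach cycle. A small illustration, with made-up sample values; none of this is the blueprint's actual code.)

    def io_rate(prev_sample, curr_sample):
        """Bytes/sec between two (timestamp, cumulative_bytes) samples.

        Totals are per attach cycle, so a drop in the counter (re-attach)
        or a non-positive time delta means the rate can't be deduced.
        """
        (t0, b0), (t1, b1) = prev_sample, curr_sample
        if t1 <= t0 or b1 < b0:
            return None
        return (b1 - b0) / float(t1 - t0)

    # two made-up samples 60 seconds apart: 10 MB then 70 MB written
    rate = io_rate((1343232000.0, 10 * 1024 * 1024),
                   (1343232060.0, 70 * 1024 * 1024))
    print('%.0f KB/s' % (rate / 1024.0))
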
16:18:56 So here's my thoughts...
16:19:08 I think the first step is to implement the blueprint as it's written here
16:19:16 I think there's another level of reporting though
16:19:23 ie idle volumes etc
16:19:27 dricco: Also useful if the IO is bursty... (if that affects things)
16:19:32 rnirmal: I think we need it calculated on the compute side if we want to charge the customer for the I/O.
16:19:35 I think that is something that will need to be implemented in Cinder
16:19:40 we should charge for what they see
16:19:45 I would also consider adding latency, not just throughput
16:19:51 in /proc/diskstats
16:20:40 dricco: ok, I don't think I'm totally convinced but I agree this is something to start with
16:20:53 if you calculate on the backend then you might miss some I/O because of caching etc
16:20:57 I agree with rnirmal
16:21:25 I believe there's a whole separate set of stats that need to be gathered that will have to be done in the volume/cinder code
16:21:27 dricco: so in the case of remote volumes, where are we running /proc/diskstats? The volumes may not always be visible to the volume service, right?
16:21:35 and can be implemented via the backends
16:21:59 dricco, I guess it depends on how the billing is calculated? Maybe they want to charge less for cached I/Os? An incentive to write well-behaved apps?
16:22:07 renuka: I mean /proc/diskstats in the vm
16:22:24 jgriffith: agree
16:22:34 dricco: ok, that makes sense to me. I am still unclear on how we would do it entirely on the cinder side
16:23:03 renuka: I don't think that's possible
16:23:14 renuka: I think the path dricco is on to start is correct
16:23:26 Collecting i/o metrics on the server side is interesting too, it is just a different use-case to dricco's work
16:23:27 renuka: But I think there's additional info that will be desirable from cinder
16:23:45 jgriffith: gotcha
16:23:57 I say dricco should run with what he has
16:24:02 :)
16:24:07 +1
16:24:19 dricco: is this just going to be i/o metrics for attached volumes or also root/ephemeral disk?
16:24:25 +1
16:24:36 +1
16:25:20 I think it's a good start, but we should keep in mind for the future that cinder backends could be queried for richer statistics that could be useful for billing, debugging, and for the customer
16:25:43 avishay: Agreed
16:26:03 I believe there are going to be levels of monitoring/reporting
16:26:04 rnirmal: just for nova volumes in the attached state
16:26:16 avishay: We (I work with dricco) have some thoughts on that too (we've implemented a version we use in-house), but not today :-)
16:27:00 DuncanT: sounds good :)
16:27:05 Ok, sounds like we're more or less all in agreement
16:27:14 Thanks dricco!
16:27:27 thanks everyone :-)
16:27:29 Do we have a uniform billing model for openstack? i mean, no matter whether nova or cinder, all taken into account.
16:27:30 I'll look forward to seeing how it all comes out
16:27:42 Also, just to be sure, make sure the polling is configurable :)
16:27:47 and can be disabled
16:27:59 ie intervals
16:28:22 Vincent_Hou: I think the answer to your question is "no"
16:28:28 jgriffith: will do
16:28:31 +1 for disabled, since some providers just charge for the volume GBs
16:28:37 Vincent_Hou: But I believe that's part of what ceilo is trying to accomplish
16:28:50 all right
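
(Picking up the point above about keeping the polling configurable: the usual pattern is an interval option where zero or a negative value switches collection off entirely, which covers providers that only bill on volume size. A rough, generic sketch follows; the option name and loop are illustrative only, not how the blueprint wires this into nova's periodic tasks.)

    import time

    # Illustrative knob; in nova this would be a registered config option
    # rather than a module-level constant.
    volume_usage_poll_interval = 600  # seconds; 0 or less disables collection

    def poll_volume_usage(collect):
        """Call collect() forever at the configured interval, or never if disabled."""
        if volume_usage_poll_interval <= 0:
            return
        while True:
            collect()
            time.sleep(volume_usage_poll_interval)
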
16:29:05 Ok...
16:29:11 #topic status updates
16:29:28 There's been a lot going on the past week with bugs and fixes
16:29:34 Vincent_Hou: http://wiki.openstack.org/EfficientMetering
16:30:13 thingee and Vincent_Hou have been very busy reporting and fixing bugs :)
16:30:53 and backporting to nova!
16:30:58 :D
16:31:12 thingee: yes, sadly we're stuck with backporting for now it seems
16:31:42 Vincent_Hou: unfortunately I'm not sure what I'm going to do with your snapshot delete bug
16:31:53 jgriffith: We have seen an issue that attaching a volume fails with our driver - we're debugging now to see if it's something in our driver or generic in cinder
16:32:01 well, i have found something new.
16:32:07 avishay: sorry... which driver?
16:32:19 i put all my comments within that bug.
16:32:29 Vincent_Hou: Yeah, I read that this morning
16:32:31 jgriffith: the storwize_svc driver that we submitted and you reviewed
16:32:37 Vincent_Hou: That's what's troubling :)
16:32:45 avishay: Ahh.. thanks
16:33:05 avishay: You'll have to forgive me, I don't always remember irc nicks for those that submit code :)
16:33:16 jgriffith, no problem :)
16:33:19 i added one more comment one hour ago.
16:34:27 Vincent_Hou: I can spend some time tomorrow profiling it on the different ubuntu versions
16:34:35 perfect
16:34:41 thx, Mike.
16:35:02 I'd like to find out how to make this not so miserably slow if we can
16:35:23 I've been working on getting devstack to use cinder as the default
16:35:33 woo!
16:35:43 how is it?
16:35:46 I'm hoping the various patches will all land today and we can get this DONE
16:35:59 So there were some *problems*
16:36:06 Tempest was failing for a number of reasons
16:36:17 I have 3 patches in review to fix
16:36:49 * jgriffith is gathering patch IDs
16:37:24 https://review.openstack.org/#/c/10200/
16:37:31 https://review.openstack.org/#/c/10262/
16:37:39 https://review.openstack.org/#/c/10263/
16:38:11 Speaking of which.... if any of you have time for reviews, I'd really like to get the cinderclient one approved and merged asap
16:38:53 If you would like more explanation of the problems etc let me know and I'm happy to go through it
16:39:16 In a nutshell it's tweaks for having a volume service outside of nova
16:40:09 So other than that...
16:41:07 I'd like to get winstond's implementation of the snapshots fix in
16:41:23 and I still need to get back to the quota issues in the cinderclient
16:41:49 I never heard back from clayg, so if anybody has a chance to take a look today it would be VERY helpful
16:42:32 jgriffith: link?
16:42:36 anyone want to have a look at it?
16:42:47 thingee: So no link yet
16:43:07 Recall from last week I'm having issues getting the endpoints sorted correctly
16:43:29 I was able to send quota commands from cinderclient no problem, but they pointed to nova :(
16:43:44 After tweaking things to make that work I get 404 errors
16:43:59 I'm missing something in the extension code I believe, but not sure
16:44:21 bswartz: Hey... don't let me forget to talk to you later
16:44:53 anyway, it's in my github https://github.com/j-griffith/cinder.git and python-cinderclient.git
16:45:11 You can get a recap from last week's meeting minutes: http://eavesdrop.openstack.org/meetings/openstack-meeting/2012/openstack-meeting.2012-07-18-16.01.log.html
16:45:17 jgriffith: okay
16:45:20 Will start looking at those 3 reviews now... can't promise how long I'll get before being pulled off though :-(
16:46:06 DuncanT: No worries, the nova folks should handle their two shortly.
16:46:17 DuncanT: The main thing from our side is the python-cinderclient review
16:46:28 DuncanT: Hate to just +2/A it myself :)
16:46:49 alrighty...
16:47:14 #topic user migration (nova-volume to Cinder)
16:47:40 Once the devstack stuff all lands and the quota pieces are in place...
16:47:55 We need to get some tooling and testing for customer migration going
16:48:16 Not sure if there are folks here that are interested in looking at this with me?
16:48:21 * jgriffith nudges DuncanT
16:48:30 We collectively certainly are
16:48:40 DuncanT: I believe you have a vested interest here :)
16:49:19 I'll get TimR to assign it to somebody officially so there is less of the 'didn't have time' issue... We definitely have a vested interest
16:49:27 Ok, so that's just a heads up that I'm likely to be bugging a few of you on this
16:50:03 Alright... the only other thing that I was interested in was CHAP
16:50:23 anybody have any thoughts about implementing CHAP in nova-vol and Cinder?
16:50:42 Not necessarily asking you to sign up for the work, but wanting your input
16:50:50 what's to implement?
16:51:01 I thought the existing drivers had chap support
16:51:11 bswartz: Nope
16:51:22 bswartz: Unless something has changed that I'm unaware of :)
16:52:06 well, the drivers have a notion of authentication, with a username and password
16:52:12 bswartz: Some of the backends do their own implementation to make this work, but it's not in the default iSCSI driver
16:52:18 the actual chap is handled elsewhere ofc
16:52:59 I never tested it, but I assumed the iscsi initiator on the compute host did the chap authentication
16:53:18 otherwise what would be the point of having a username and password in the driver?
16:53:30 It does if the backend driver implements it and puts it in the model
16:54:12 * jgriffith is looking for the bug on this
16:54:39 We use the provider_loc and provider_auth fields in our driver for something not actually auth related
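
(For reference on the provider_auth discussion: the pattern backends that do support CHAP tend to follow is storing "<method> <username> <password>" in the volume's provider_auth field at export time, then unpacking it into the connection info handed to the compute host's initiator. A simplified sketch of that flow is below; the target address, IQN format, and credential generation are made up for illustration, and this is not the default driver's actual code.)

    def create_export(volume):
        # credentials would normally be generated or fetched from the backend
        chap_user = 'chap-%s' % volume['id']
        chap_secret = 'not-a-real-secret'
        return {
            'provider_location':
                '192.168.0.10:3260,1 iqn.2012-07.org.example:%s 0' % volume['id'],
            'provider_auth': 'CHAP %s %s' % (chap_user, chap_secret),
        }

    def initialize_connection(volume, connector):
        data = {'target_iqn': volume['provider_location'].split()[1],
                'volume_id': volume['id']}
        if volume.get('provider_auth'):
            method, user, secret = volume['provider_auth'].split()
            data['auth_method'] = method
            data['auth_username'] = user
            data['auth_password'] = secret
        return {'driver_volume_type': 'iscsi', 'data': data}
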
16:54:52 https://bugs.launchpad.net/bugs/1025667
16:55:14 that's a broken link for me
16:55:20 'provider_location' sorry
16:55:27 yeah, just realized it's not a public page... sorry
16:56:18 alright, well I'll look at this later this week
16:56:33 The fact is that chap is not supported by default and it should be
16:56:34 Well, my thinking was that the backend was the right place for stuff like chap authentication. If there is a gain from unifying the implementation, then that's a good idea.
16:56:43 or I would *like* it to be
16:57:25 bswartz: yes, the problem is that it's "optional" right now
16:57:43 bswartz: And its implementation is entirely backend dependent
16:57:48 Does it make sense to integrate with keystone here? I think the last time I thought about it, I came to the conclusion that it doesn't.
16:58:05 * jgriffith hears thingee groaning
16:58:08 i agree
16:58:33 avishay: Vincent_Hou: Not sure what you have in mind?
16:59:22 jgriffith: We'd like to bring up the whole attach dataflow path as a security issue again... it got shelved before but it is something that should be looked at carefully in cinder
16:59:22 jgriffith: for example, storing CHAP tokens in keystone
16:59:32 isn't keystone supposed to do the authentication?
17:00:00 avishay: that's an idea...
17:00:32 avishay: keystone is for policy and tokens specifically for projects in the openstack family
17:00:43 Vincent_Hou: Yes, but there's a context, and I don't know if this level is quite appropriate
17:01:16 Alright, we're out of time...
17:01:18 Be aware that some drivers (like ours :-) ) do auth quite differently, in a way that looks nothing like chap
17:01:20 whew
17:01:54 Oh, now I remember what the problem was...
17:02:08 avishay: ?
17:02:35 Ok... hate to cut folks off, but
17:02:48 Amazing how quickly this hour goes by every week
17:03:00 There's always #openstack-cinder :)
17:03:02 jgriffith: i'll follow up with you in #openstack-cinder
17:03:13 avishay: Sounds good
17:03:17 Thanks everyone
17:03:21 thx
17:03:24 #endmeeting