16:01:54 #startmeeting Cinder
16:01:54 Meeting started Wed Dec 12 16:01:54 2012 UTC. The chair is DuncanT. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:55 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:57 The meeting name has been set to 'cinder'
16:02:20 yep
16:02:52 I was away for the last two meetings and we don't seem to have an agenda planned, so feel free to shout up with topics for discussion
16:03:32 o/
16:03:32 Start with updates? FC? Volume backup?
16:03:37 FC please :)
16:03:46 #topic FC update
16:03:46 volume backup
16:03:47 if that doesn't land i'm doomed ;p
16:03:55 I can provide an FC update
16:04:06 kmartin: Show's all yours
16:04:35 As we mentioned last week, we have a Proof of Concept working
16:04:51 and making good progress with the HP legal system
16:05:20 I would expect we could share the code at the start of the new year
16:05:41 Sounds like good news
16:05:54 That work for you zykes?
16:06:33 Still meeting with the Brocade, IBM, EMC guys on a weekly basis to make sure we cover all the requirements for the different vendors
16:07:19 that's all I have Duncan
16:07:23 Good stuff. Anybody any comments?
16:07:52 None for me. Sounds good.
16:08:23 Shall we move on to volume backups?
16:09:14 #topic volume backups
16:09:23 yes
16:09:34 (shout if I'm skipping people)
16:09:42 frankm: You there?
16:09:43 I can give an update on this
16:10:04 We've started forward porting our code to cinder
16:10:19 So far so good, steady progress
16:10:52 Plan is to have something ready to share early in the new year
16:11:06 frankm: can you please give 2 sentences on design? Is it only for detached volumes?
16:11:32 It's for backing up volumes in the available state
16:11:46 avishay: yes, only for detached volumes
16:12:01 so, yes, detached volumes only
16:12:09 OK
16:12:17 frankm: early in the new year... is this still a goal for g2?
16:12:39 yes, g2 is still the goal
16:13:08 thingee: we expect what we push will need some rework but should have something by then
16:13:25 Hopefully people will be generous with reviewing early and often :-)
16:13:51 DuncanT, frankm: yea I'll make sure to be available. good luck guys
16:14:09 I will do my best as well
16:14:57 I will also try to help, though my exposure to cinder is limited at this point in time
16:15:16 rushiagr: The more the merrier
16:15:21 i will try to as well
16:15:55 smulcahy, frankm: awesome. there ya go :)
16:15:58 Ok, so it sounds like we are making progress there. Anything else on the status front? Filter scheduler?
16:16:29 DuncanT: I have a couple topics for the meeting if you run out: bug squashing day tomorrow, and things people need help on, especially for g2
16:16:51 Let's take those two then
16:16:59 #topic bug squashing day
16:17:06 The floor is yours
16:17:13 on volume backups - cinder has a FLAG - storage_availability_zone - which doesn't seem to be set in installations I've seen - can we rely on that being set to availability zones in production configs?
16:18:16 DuncanT: Just wanted to bring it to people's attention, and hope we can squash some bugs :)
16:18:27 and, also on volume backups, we've found it very useful to have thread_ids in debug and error log messages - we're currently wrapping LOG.debug in the volume backup service to do this, but I'm wondering is there any reason not to modify the cinder log formatter to always insert the thread_id? It would be useful imo
16:18:34 Is it a project wide thing or just cinder tomorrow?
16:18:42 https://bugs.launchpad.net/cinder/+bugs
16:18:44 project-wide
16:18:44 DuncanT: project wide
16:18:55 Righto
16:19:22 What about things people need help on?
16:19:47 We've still got some blueprints with nobody talking about them...
16:20:05 I am now available part-time to work on general Cinder stuff... any urgent blueprint that I can tackle within a few days' work?
16:20:16 i think i need to update my LIO blueprint and get it targeted correctly
16:20:23 DuncanT: we should be available in #openstack-dev and #openstack-cinder in case people who are new need help contributing to the project - I think that's what avishay meant.
16:20:28 smulcahy: Your two points are noted...
16:21:14 thingee: Yup, thought I'd see if anybody wants to shout up now - can't have a meeting finishing early ;-)
16:22:04 DuncanT: did we already talk about filter drives?
16:22:07 drivers*
16:22:29 Nope, I was desperately scanning the logs to see who was talking about them last week ;-)
16:22:34 http://wiki.openstack.org/CinderMeetings
16:23:29 Any updates on that?
16:23:44 Looks like winston-d isn't on now?
16:24:32 DuncanT: nah he hasn't answered in #openstack-cinder
16:24:47 DuncanT: skip along to volume type create?
16:25:09 thingee: That was last week's agenda I think
16:26:14 Any questions about volume type create? Looks from the logs like avishay was happy?
16:26:35 I'm always happy :)
16:26:55 I'm still not sure if volume types are flexible enough for everything we'll want in the future though
16:27:26 For example, it would be nice to be able to set a string without defining a new type. For example, for volume affinity.
16:27:31 They don't handle per-volume tuning at all, among other things, but I think that's a post-g2 discussion
16:27:48 I do entirely agree with you though
16:28:20 I've had a volume affinity blueprint open for ages that needs thinking about, interface-wise
16:28:24 For example, declare a volume with group "database", and all volumes in that group should go to the same back-end (or different ones, depending on what you want)
16:29:00 Anyway, no action item here... need to think about it :)
16:29:08 :-)
16:29:28 #topic threads and debugging
16:30:04 smulcahy brought up a good point that we don't have the thread id in the default debug format, which can make trawling the logs painful
16:30:13 Anybody got a good reason not to add it?
16:30:29 (Anybody else found it a problem?)
16:32:16 I'm ambivalent, but it could be useful in the future
16:32:49 Certainly we saw, when several backup threads are all working hard, it was impossible to untangle the messages
16:32:51 maybe it's a dumb question, but what threads are there
16:33:05 jgriffith: sorry i'm late
16:33:45 avishay: If multiple requests are made to a cinder service, it can result in multiple threads of execution starting to process the requests in parallel
16:33:46 So each API request coming in goes to one of a pool of greenthreads... they're often fast enough you don't see much overlap in the logs, but for long running ops you certainly can do
16:34:12 avishay: in the case of the backup service, which involves long-running operations, we can see tens of threads running at the same time
16:34:26 Ah, didn't realize that - good to know
16:34:33 what DuncanT said :)
16:35:16 Looks like we can slap a patch in and see if anybody screams then...
16:35:26 DuncanT: that's greenthreads of the volume service, right?
16:35:49 winston-d: Yup
16:36:14 k
16:38:04 winston-d: Have you any update on the filter scheduler?
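[Editor's note: a minimal sketch, not from the meeting, of the kind of change smulcahy suggested above - stamping every log record with the current greenthread id so interleaved long-running operations such as backups can be told apart. It assumes eventlet greenthreads (as used by the Cinder services); the attribute name gthread_id and the format string are illustrative, not Cinder's actual formatter configuration.]

```python
import logging

from eventlet import greenthread


class GreenThreadIdFilter(logging.Filter):
    """Attach the id of the current greenthread to each log record."""

    def filter(self, record):
        # id() of the current greenthread is stable for the lifetime of the
        # request being processed, so it works as a per-request marker.
        record.gthread_id = id(greenthread.getcurrent())
        return True


def setup_logging():
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s [gthread=%(gthread_id)x] '
        '%(name)s: %(message)s'))
    handler.addFilter(GreenThreadIdFilter())
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.DEBUG)


if __name__ == '__main__':
    setup_logging()
    # Two backup operations running in different greenthreads would now show
    # different gthread values, making the interleaved output untanglable.
    logging.getLogger('cinder.backup').debug('starting backup of volume %s',
                                             'vol-1234')
```

[The filter lives on the handler rather than on individual loggers, so records from every cinder.* logger pick up the id without wrapping LOG.debug call sites.]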
16:39:11 DuncanT: well, i've submitted two patches for common filter/weight to Oslo to address russellb's suggestion.
16:39:59 but the review process is slow
16:40:15 so the filter scheduler patch in cinder review is pending
16:41:41 Ok, thanks. I can see the review, thanks
16:41:41 yeah, i've been out of commission on reviews lately, sorry
16:41:47 way behind on my usual review amount
16:41:58 sorry :(
16:42:08 I'm the same, took a vacation
16:42:35 Right, was there anything else?
16:42:45 russellb: it seems other oslo core are not interested either?
16:43:39 Yes
16:43:45 cinder's use of availability_zones
16:43:47 winston-d: get any reviews yet?
16:44:06 winston-d: once i can get in there, i'll ping some other reviewers
16:44:07 in nova-volumes and now in cinder, we have this flag storage_availability_zone
16:44:31 russellb: nope, not yet.
16:44:43 russellb: sure, that'll be great. thx!
16:44:43 I haven't seen it used in production environments - does anything rely on this always being 'nova', or can we start using this to identify the actual availability_zone the service is running in?
16:44:51 ok, the rest of my week is looking better, so i'll try to get on it very soon
16:44:59 are people already using it correctly in their environments?
16:45:10 russellb: great! thank you
16:45:26 smulcahy: what do you mean by using it correctly?
16:45:31 Just wondering if we can use this in the volume backup service or whether we need to add a 'volume_backup_availability_zone' or some such
16:46:04 Is storage_availability_zone actually used for anything other than the euca api?
16:46:08 winston-d: as in setting it in the nova.conf (or cinder.conf now I guess).
16:46:35 DuncanT: the euca api doesn't use it AFAIK
16:47:05 well, the euca api uses it in nova, but not in cinder.
16:47:25 winston-d: we're using availability_zone in volume backups as part of the unique identifier for a backup in swift (since swift may be cross-az, we could possibly get a naming collision without it). But if cinder is always deployed with this set to 'nova' we'll see problems.
16:48:18 does that make sense?
16:49:04 smulcahy: az in cinder is... complex. in AWS, you can only attach a volume from the same az to an EC2 instance. but in OpenStack, we don't actually have such a constraint/limit.
16:49:33 smulcahy: at least not at the OpenStack API level.
16:50:58 but to follow AWS, I guess it's suggested to set storage_availability_zone for cinder to the same string as nova (if they are logically in the same az).
16:51:31 winston-d: maybe my confusion is stemming from a lack of understanding of how az's in cinder should/do work. Feel free to point me at the documentation if there is some. In the absence of that, though, I wonder whether it is reasonable for us to use storage_availability_zone to identify backups created from volumes in a particular 'az', or whether we need a specific flag for volume backups.
16:51:57 it sounds like it is reasonable to re-use it from this discussion
16:52:18 and we can revisit it in future if we encounter someone using az's in a different way
16:52:23 smulcahy: for that question, i suggest we re-use the storage_az flag
16:52:36 winston-d: ok, thanks
16:53:26 that flag was named that way back in the nova-volume days. back then, nova had two az flags, one for nova, one for volume.
16:53:58 we may actually rename that flag if it causes much confusion, i guess.
16:54:25 I think the flag name makes sense
16:54:39 Might make sense to put it into the default cinder.conf to expose it though
16:54:46 I need to go.
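[Editor's note: an illustrative sketch, not from the meeting, of the naming concern smulcahy raised - if one Swift cluster is shared across availability zones, the backup's object prefix needs something zone-specific in it to avoid collisions. The prefix layout and function name below are made up for this sketch; the real backup service may structure names differently.]

```python
def swift_backup_prefix(availability_zone, volume_id, backup_id):
    """Build a per-backup Swift object prefix that is unique across zones.

    availability_zone: the value of the storage_availability_zone flag
    volume_id / backup_id: ids of the volume being backed up and the backup
    record itself.
    """
    # e.g. 'az_nova-east/volume_vol-1234/backup_42'
    return 'az_%s/volume_%s/backup_%s' % (availability_zone, volume_id,
                                          backup_id)


# Usage sketch: if every deployment leaves the flag at its default of 'nova'
# (the concern raised above), the az_ component adds no uniqueness and
# collisions between zones sharing one Swift cluster remain possible.
print(swift_backup_prefix('nova-east', 'vol-1234', '42'))
```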
Just one quick thing that may be of interest - a fellow IBMer is soon submitting iSCSI multipath support to nova - https://blueprints.launchpad.net/nova/+spec/libvirt-volume-multipath-iscsi
16:54:49 (perhaps it's there already)
16:55:51 avishay: The review mentioned in that blueprint appears to be a 404?
16:56:06 smulcahy: yes, it was there.
16:56:22 s/was/is
16:58:03 smulcahy: the default value for that flag is 'nova', the same default value as nova's az flag.
16:58:36 #topic Any final business
16:58:57 Anybody got anything else to bring up?
16:59:19 nope
16:59:25 i missed adding my bit when we were discussing helping new people on cinder
17:00:05 rushiagr: Now is as good a time as any to make comments....
17:00:42 DuncanT: I don't think he submitted yet - but keep an eye out if it interests you
17:00:55 avishay: Will do
17:01:17 i was just bringing to notice that i might ask some trivial-looking questions on the cinder channel..
17:01:50 rushiagr: Ask away - new folks always welcome :-)
17:02:29 rushiagr: yeah
17:02:54 actually the problem is - i am usually up on the channel during office hours in India
17:03:26 and as this channel is not logged, i sometimes miss some discussion
17:03:34 this = #openstack-cinder
17:03:36 rushiagr: hey bro, i'm in China, so the time I'm usually up largely overlaps with yours.
17:04:01 openstack-meeting is logged
17:04:02 http://eavesdrop.openstack.org/meetings/cinder/2012/
17:04:16 rushiagr: Many people are logged into the channel 24/7 - it means you have a local log at least
17:04:54 rushiagr: you can have a 24/7 IRC session in the office, even while you're not there.
17:05:01 winston-d: okay, will remember that
17:05:32 Right, we're just about out of time for today... Thanks to everybody for coming, and apologies if I was less than smooth in the chair - JohnG will be back next week I hope!
17:06:01 winston-d: will do that in a couple of days
17:06:11 thanks
17:06:20 DuncanT: thanks
17:06:24 thx DuncanT
17:06:34 #end-meeting
17:06:54 thx DuncanT
17:06:58 #endmeeting