16:01:54 <DuncanT> #startmeeting Cinder
16:01:54 <openstack> Meeting started Wed Dec 12 16:01:54 2012 UTC.  The chair is DuncanT. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:55 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:57 <openstack> The meeting name has been set to 'cinder'
16:02:20 <kmartin> yep
16:02:52 <DuncanT> I was away for the last two meetings and we don't seem to have an agenda planned, so feel free to shout up with topics for discussion
16:03:32 <thingee> o/
16:03:32 <avishay> Start with updates?  FC?  Volume backup?
16:03:37 <zykes-> FC please :)
16:03:46 <DuncanT> #topic FC update
16:03:46 <frankm> volume backup
16:03:47 <zykes-> if that doesn't land i'm doomed ;p
16:03:55 <kmartin> I can provide an FC update
16:04:06 <DuncanT> kmartin: Show's all yours
16:04:35 <kmartin> As we mentioned last week, we have a Proof of Concept working
16:04:51 <kmartin> and making good progress with the HP legal system
16:05:20 <kmartin> I would expect we could share the code at the start of the new year
16:05:41 <DuncanT> Sounds like good news
16:05:54 <DuncanT> That work for you zykes?
16:06:33 <kmartin> Still meeting with the Brocade, IBM, EMC guys on a weekly basis to make sure we cover all the requirements for the different vendors
16:07:19 <kmartin> that's all I have Duncan
16:07:23 <DuncanT> Good stuff. Anybody have any comments?
16:07:52 <avishay> None for me.  Sounds good.
16:08:23 <DuncanT> Shall we move on to volume backups?
16:09:14 <DuncanT> #topic volume backups
16:09:23 <rushiagr> yes
16:09:34 <DuncanT> (shout if I'm skipping people)
16:09:42 <DuncanT> frankm: You there?
16:09:43 <frankm> I can give an update on this
16:10:04 <frankm> We've started forward porting our code to cinder
16:10:19 <frankm> So far so good, steady progress
16:10:52 <frankm> Plan is to have something ready to share early in the new year
16:11:06 <avishay> frankm: can you please give 2 sentences on design?  Is it only for detached volumes?
16:11:32 <frankm> It's for backing up volumes in the available state
16:11:46 <smulcahy> avishay: yes, only for detached volumes
16:12:01 <frankm> so, yes detached volumes only
16:12:09 <avishay> OK
16:12:17 <thingee> frankm: early in the new year...is this still a goal for g2?
16:12:39 <frankm> yes, g2 is still the goal
16:13:08 <smulcahy> thingee: we expect what we push will need some rework but should have something by then
16:13:25 <DuncanT> Hopefully people will be generous with reviewing early and often :-)
16:13:51 <thingee> DuncanT, frankm: yea I'll make sure to be available. good luck guys
16:14:09 <avishay> I will do my best as well
16:14:57 <rushiagr> I will also try to help, though my exposure to cinder is limited at this point in time
16:15:16 <DuncanT> rushiagr: The more the merrier
16:15:21 <eharney> i will try to as well
16:15:55 <thingee> smulcahy, frankm: awesome. there ya go :)
16:15:58 <DuncanT> Ok, so it sounds like we are making progress there. Anything else on the status front? Filter scheduler?
16:16:29 <avishay> DuncanT: I have a couple topics for the meeting if you run out: bug squashing day tomorrow, and things people need help on, especially for g2
16:16:51 <DuncanT> Lets take those two then
16:16:59 <DuncanT> #topic bug squashing day
16:17:06 <DuncanT> The floor is yours
16:17:13 <smulcahy> on volume backups - cinder has a FLAG, storage_availability_zone, which doesn't seem to be set in the installations I've seen - can we rely on it being set to real availability zones in production configs?
16:18:16 <avishay> DuncanT: Just wanted to bring to people's attention, and hope we can squash some bugs :)
16:18:27 <smulcahy> and, also on volume backups, we've found it very useful to have thread_ids in debug and error log messages - we're currently wrapping LOG.debug in the volume backup service to do this, but I'm wondering if there is any reason not to modify the cinder log formatter to always insert the thread_id? It would be useful imo
16:18:34 <DuncanT> Is it a project wide thing or just cinder tomorrow?
16:18:42 <DuncanT> https://bugs.launchpad.net/cinder/+bugs
16:18:44 <avishay> project-wide
16:18:44 <thingee> DuncanT: project wide
16:18:55 <DuncanT> Righto
16:19:22 <DuncanT> What about things people need help on?
16:19:47 <DuncanT> We've still got some blueprints with nobody talking about them...
16:20:05 <avishay> I am now available part-time to work on general Cinder stuff...any urgent blueprint that I can tackle within a few days' work?
16:20:16 <eharney> i think i need to update my LIO blueprint and get it targeted correctly
16:20:23 <thingee> DuncanT: we should be available in #openstack-dev and #openstack-cinder in case people who are new to the project need help contributing - I think that's what avishay meant.
16:20:28 <DuncanT> smulcahy: Your two points are noted...
16:21:14 <DuncanT> thingee: Yup, thought I'd see if anybody wants to shout up now - can't have a meeting finishing early ;-)
16:22:04 <thingee> DuncanT: did we already talk about filter drives?
16:22:07 <thingee> drivers*
16:22:29 <DuncanT> Nope, I was desperately scanning the logs to see who was talking about them last week ;-)
16:22:34 <thingee> http://wiki.openstack.org/CinderMeetings
16:23:29 <DuncanT> Any updates on that?
16:23:44 <DuncanT> Looks like winston-d isn't on now?
16:24:32 <thingee> DuncanT: nah he hasn't answered in #openstack-cinder
16:24:47 <thingee> DuncanT: skip along to volume type create?
16:25:09 <DuncanT> thingee: That was last week's agenda I think
16:26:14 <DuncanT> Any questions about volume type create? Looks from the logs like avishay was happy?
16:26:35 <avishay> I'm always happy :)
16:26:55 <avishay> I'm still not sure if volume types are flexible enough for everything we'll want in the future though
16:27:26 <avishay> For example, it would be nice to be able to set a string without defining a new type.  For example, for volume affinity.
16:27:31 <DuncanT> They don't handle per-volume tuning at all, among other things, but I think that's a post-g2 discussion
16:27:48 <DuncanT> I do entirely agree with you though
16:28:20 <DuncanT> I've had a volume affinity blueprint open for ages that needs thinking about, interface-wise
16:28:24 <avishay> For example, declare a volume with group "database", and all volumes in that group should go to the same back-end (or different ones, depending on what you want)
16:29:00 <avishay> Anyway, no action item here...need to think about it :)
16:29:08 <DuncanT> :-)
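To make the affinity idea above concrete, here is a purely illustrative sketch; the 'group' scheduler hint, the SameGroupFilter class, and the in-memory lookup are all assumptions for discussion, not existing Cinder code. It borrows the host_passes() shape of the filter scheduler work, but a real version would look the group up in the database.

    # Hypothetical sketch of a same-back-end affinity filter, not Cinder code.
    GROUP_TO_BACKEND = {'database': 'lvm-backend-1'}  # e.g. built from existing volumes

    class SameGroupFilter(object):
        """Pass only the back-end that already holds volumes of the requested group."""

        def host_passes(self, host_state, filter_properties):
            hints = filter_properties.get('scheduler_hints') or {}
            group = hints.get('group')
            if not group:
                return True                        # no affinity requested, any back-end is fine
            backend = GROUP_TO_BACKEND.get(group)
            # unknown group: any back-end is acceptable and becomes the group's home
            return backend is None or backend == host_state.host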
16:29:28 <DuncanT> #topic threads and debugging
16:30:04 <DuncanT> smulcahy brought up a good point that we don't have a thread id in the default debug format, which can make trawling the logs painful
16:30:13 <DuncanT> Anybody got a good reason not to add it?
16:30:29 <DuncanT> (Anybody else found it a problem?)
16:32:16 <avishay> I'm ambivalent, but it could be useful in the future
16:32:49 <DuncanT> Certainly we saw that, when several backup threads are all working hard, it was impossible to untangle the messages
16:32:51 <avishay> maybe it's a dumb question, but what threads are there?
16:33:05 <winston-d> jgriffith: sorry i'm late
16:33:45 <smulcahy> avishay: If multiple requests are made to a cinder service, it can result in multiple threads of execution starting to process the requests in parallel
16:33:46 <DuncanT> So each API request coming in goes to one of a pool of greenthreads... they're often fast enough that you don't see much overlap in the logs, but for long-running ops you certainly can
16:34:12 <smulcahy> avishay: in the case of the backup service which involves long-running operations, we can see tens of threads running at the same time
16:34:26 <avishay> Ah, didn't realize that - good to know
16:34:33 <smulcahy> what DuncanT said :)
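As a minimal illustration of the above (not Cinder code), a few long-running jobs spawned onto an eventlet GreenPool produce interleaved debug lines that are hard to attribute without some thread identifier:

    import logging
    import eventlet
    eventlet.monkey_patch()

    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s %(message)s')
    LOG = logging.getLogger(__name__)

    def fake_backup(volume_id):
        # stand-in for a long-running backup operation
        for step in ('reading', 'compressing', 'uploading'):
            LOG.debug('volume %s: %s', volume_id, step)
            eventlet.sleep(0.1)  # yields, letting the other greenthreads run

    pool = eventlet.GreenPool(size=10)  # roughly one greenthread per in-flight request
    for vol in ('vol-1', 'vol-2', 'vol-3'):
        pool.spawn_n(fake_backup, vol)
    pool.waitall()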
16:35:16 <DuncanT> Looks like we can slap a patch in and see if anybody screams then...
16:35:26 <winston-d> DuncanT: that's greenthreads of volume service, right?
16:35:49 <DuncanT> winston-d: Yup
16:36:14 <winston-d> k
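One possible shape for such a patch, sketched here as an assumption rather than the actual change: a logging filter that stamps each record with the current greenthread's id, since the standard %(thread)d field only shows the OS thread that all greenthreads share.

    import logging
    from eventlet import greenthread

    class GreenThreadIdFilter(logging.Filter):
        def filter(self, record):
            # id() of the current greenthread is stable for its lifetime
            record.gthread_id = id(greenthread.getcurrent())
            return True

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s [gthread %(gthread_id)s] %(message)s'))
    handler.addFilter(GreenThreadIdFilter())

    LOG = logging.getLogger('cinder.backup')
    LOG.addHandler(handler)
    LOG.setLevel(logging.DEBUG)
    LOG.debug('starting backup of volume %s', 'vol-1')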
16:38:04 <DuncanT> winston-d: Have you any update on the filter scheduler?
16:39:11 <winston-d> DuncanT: well, i've submitted two patches for common filter/weight to Oslo to address russellb's suggestion.
16:39:59 <winston-d> but the review process is slow
16:40:15 <winston-d> so filter scheduler patch in cinder review is pending
16:41:41 <DuncanT> Ok, I can see the review, thanks
16:41:41 <russellb> yeah, i've been out of commission on reviews lately, sorry
16:41:47 <russellb> way behind on my usual review amount
16:41:58 <russellb> sorry :(
16:42:08 <DuncanT> I'm the same, took a vacation
16:42:35 <DuncanT> Right, was there anything else?
16:42:45 <winston-d> russellb: it seems the other oslo cores are not interested either?
16:43:39 <smulcahy> Yes
16:43:45 <smulcahy> cinder's use of availability_zones
16:43:47 <russellb> winston-d: get any reviews yet?
16:44:06 <russellb> winston-d: once i can get in there, i'll ping some other reviewers
16:44:07 <smulcahy> in nova-volumes and now in cinder, we have this flag storage_availability_zone
16:44:31 <winston-d> russellb: nope, not yet.
16:44:43 <winston-d> russellb: sure, that'll be great. thx!
16:44:43 <smulcahy> I haven't seen it used in production environments - does anything rely on this always being nova or can we start using this to identify the actual availability_zone the service is running in?
16:44:51 <russellb> ok, the rest of my week is looking better, so i'll try to get on it very soon
16:44:59 <smulcahy> are people already using it correctly in their environments?
16:45:10 <winston-d> russellb: great! thank you
16:45:26 <winston-d> smulcahy: what do you mean by using it correctly?
16:45:31 <smulcahy> Just wondering if we can use this in the volume backup service or whether we need to add a 'volume_backup_availability_zone' or somesuch
16:46:04 <DuncanT> Is storage_availability_zone actually used for anything other than the euca api?
16:46:08 <smulcahy> winston-d: as in setting it in the nova.conf (or cinder.conf now I guess).
16:46:35 <winston-d> DuncanT: euca api doesn't use it AFAIK
16:47:05 <winston-d> well, euca api uses it in nova, but not in cinder.
16:47:25 <smulcahy> winston-d: we're using availability_zone in volume backups as part of the unique identifier for a backup in swift (since swift may be cross-az, we could possibly get a naming collision without it). But if cinder is always deployed with this set to 'nova' we'll see problems.
16:48:18 <smulcahy> does that make sense?
16:49:04 <winston-d> smulcahy: az in cinder is... complex.  in AWS, you can only attach a volume from the same az to an EC2 instance.  but in OpenStack, we don't actually have such a constraint/limit.
16:49:33 <winston-d> smulcahy: at least not in OpenStack API level.
16:50:58 <winston-d> but to follow AWS, I guess it's suggested to set storage_availability_zone for cinder to the same string as nova (if they are logically in the same az).
16:51:31 <smulcahy> winston-d: maybe my confusion is stemming from a lack of understanding of how az's in cinder should/do work. Feel free to point me at the documentation if there is some. In the absence of that though - I wonder whether it is reasonable for us to use storage_availability_zone to identify backups created from volumes in a particular 'az', or whether we need to use a specific flag for volume_backups.
16:51:57 <smulcahy> it sounds like it is reasonable to re-use it from this discussion
16:52:18 <smulcahy> and we can revisit it in future if we encounter someone using az's in a different way
16:52:23 <winston-d> smulcahy: for that question, i suggest we re-use storage_az flag
16:52:36 <smulcahy> winston-d: ok, thanks
16:53:26 <winston-d> that flag was named that way back in nova-volume times. back then, nova had two az flags, one for nova, one for volume.
16:53:58 <winston-d> we may actually rename that flag if it causes much confusion, i guess.
16:54:25 <smulcahy> I think the flag name makes sense
16:54:39 <smulcahy> Might make sense to put it into the default cinder.conf to expose it though
16:54:46 <avishay> I need to go.  Just one quick thing that may be of interest - a fellow IBMer is soon submitting iSCSI multipath support to nova - https://blueprints.launchpad.net/nova/+spec/libvirt-volume-multipath-iscsi
16:54:49 <smulcahy> (perhaps it's there already)
16:55:51 <DuncanT> avishay: The review mentioned in that blueprint appears to be a 404?
16:56:06 <winston-d> smulcahy: yes, it was there.
16:56:22 <winston-d> s/was/is
16:58:03 <winston-d> smulcahy: the default value for that flag is 'nova', the same default as nova's az flag.
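For illustration only, one way the naming idea could fold the configured zone into the Swift object prefix; the helper name and prefix format are assumptions, not the actual backup service layout, and the 'nova' default is the flag default mentioned above.

    def backup_swift_prefix(availability_zone, volume_id, backup_id):
        # hypothetical naming helper: include the az so backups of volumes from
        # different zones cannot collide in a cross-az Swift cluster; if every
        # deployment leaves storage_availability_zone at its default ('nova'),
        # the az adds no uniqueness - hence the question above
        return 'volume_%s/az_%s/backup_%s' % (volume_id, availability_zone, backup_id)

    print(backup_swift_prefix('nova', 'vol-1234', '5678'))
    # -> volume_vol-1234/az_nova/backup_5678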
16:58:36 <DuncanT> #topic Any final business
16:58:57 <DuncanT> Anybody got anything else to bring up?
16:59:19 <winston-d> nope
16:59:25 <rushiagr> i missed adding my bit when we were discussing helping new people on cinder
17:00:05 <DuncanT> rushiagr: Now is as good a time as any to make comments....
17:00:42 <avishay> DuncanT: I don't think he submitted yet - but keep an eye out if it interests you
17:00:55 <DuncanT> avishay: Will do
17:01:17 <rushiagr> i was just bringing to notice that i might ask some trivial-looking questions on the cinder channel...
17:01:50 <DuncanT> rushiagr: Ask away - new folks always welcome :-)
17:02:29 <winston-d> rushiagr: yeah
17:02:54 <rushiagr> actually the problem is - i am usually on the channel only during office hours in India
17:03:26 <rushiagr> and as this channel is not logged, i sometimes miss some discussion
17:03:34 <rushiagr> this = #openstack-cinder
17:03:36 <winston-d> rushiagr: hey bro, i'm in China. so the time I'm usually up largely overlaps with yours.
17:04:01 <resker> openstack-meeting is logged
17:04:02 <resker> http://eavesdrop.openstack.org/meetings/cinder/2012/
17:04:16 <DuncanT> rushiagr: Many people are logged into the channel 24/7 - it means you have a local log at least
17:04:54 <winston-d> rushiagr: you can have a 7x24 IRC session in the office, even while you're not there.
17:05:01 <rushiagr> winston-d: okay, will remember that
17:05:32 <DuncanT> Right, we're just about out of time for today... Thanks to everybody for coming, and apologies if I was less than smooth in the chair - JohnG will be back next week I hope!
17:06:01 <rushiagr> winston-d: will do that in a couple of days
17:06:11 <thingee> thanks
17:06:20 <rushiagr> DuncanT: thanks
17:06:24 <winston-d> thx DuncanT
17:06:34 <DuncanT> #end-meeting
17:06:54 <kmartin> thx DuncanT
17:06:58 <DuncanT> #endmeeting