16:01:56 <jgriffith> #startmeeting cinder
16:01:57 <openstack> Meeting started Wed Mar 13 16:01:56 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:00 <openstack> The meeting name has been set to 'cinder'
16:02:07 <vincent_hou> hi
16:02:09 <avishay> hi
16:02:09 <bswartz> hei
16:02:10 <eharney> hi
16:02:12 <xyang_> hi
16:02:13 <kmartin> hi
16:02:14 <rushiagr> o/
16:02:15 <jgallard> hi
16:02:26 <jgriffith> full house :)
16:02:32 <jgriffith> Hey everyone
16:02:40 <jgriffith> let's get started, should be an easy meeting today
16:02:45 <jgriffith> #topic RC1 status
16:02:51 * DuncanT1 slinks in at the back
16:03:04 <jgriffith> I'd like to cut RC1 tomorrow morning
16:03:14 <jgriffith> I think we're in fairly good shape
16:03:22 <jgriffith> and there's nothing to say more bugs can't/won't come in
16:03:38 <jgriffith> But it will make sure that we're a bit more picky on what's a critical bug and what's not
16:03:49 <jgriffith> How do folks feel about that?
16:04:09 <jgriffith> rushiagr: bswartz I think you guys got all of your fixes in yes?
16:04:19 <bswartz> will there be RC2/RC3/etc?
16:04:51 <jgriffith> bswartz: there will be but it's not an excuse to rewrite code
16:04:56 <jgriffith> critical bug fixes only!
16:05:05 <jgriffith> and by critical I mean release blocking
16:05:05 <bswartz> I have one bug which is going to take significant changes to fix -- probably needs to be targeted to havana not grizzly at this point
16:05:21 <jgriffith> bswartz: Yeah, I think so
16:05:33 <bswartz> jgriffith: yeah I'm not planning to push in any big code changes
16:05:48 <jgriffith> So the idea is we really move into testing and documentation after we cut RC1
16:06:05 <jgriffith> April is going to be upon us before we know it
16:06:09 <DuncanT1> We're very much in test mode now
16:06:16 <jgriffith> DuncanT1: excellent...
16:06:20 <jgriffith> DuncanT1: Speaking of....
16:06:22 <thingee> jgriffith: I think I need to get in touch with james king then. I told him to have the other stuff done on friday
16:06:22 <thingee> https://bugs.launchpad.net/cinder/+bug/1087817
16:06:23 <uvirtbot> Launchpad bug 1087817 in cinder "Update v2 with volume_type_id switch to uuid" [Medium,In progress]
16:06:30 <jgriffith> Ever find your doc on multiple cinder nodes?
16:06:37 <jgriffith> I'd like to blend that in to the docs
16:06:55 <jgriffith> thingee: oops... my bad
16:07:17 <jgriffith> thingee: I can ping him as well, I know you're getting ready to head out
16:07:35 <DuncanT1> No. I need to redo it for several places though. If you haven't got it before COB Friday, please shout at me
16:07:49 <jgriffith> DuncanT1: Can I take that as you signing up to do it :)
16:08:07 <thingee> jgriffith: yeah so about that. it's been moved to tomorrow noon
16:08:08 <jgriffith> #action DuncanT1 update multiple cinder node install doc
16:08:14 <jgriffith> thingee: haha
16:08:22 <jgriffith> hey  virbot WTF?
16:08:25 <jgriffith> virtbot
16:08:58 <jgriffith> DuncanT1: we don't need no stinking virtbot anyway
16:09:04 <thingee> jgriffith: I think it's just running slow. it took a while to pull up the info on that bug link I pasted
16:09:24 <jgriffith> ok
16:09:36 <jgriffith> So does anybody have anything else on RC1 updates?
16:09:57 <jgriffith> There's one other nasty bug that I'm going to work on but it will be after RC1 before I get to it
16:10:32 <DuncanT1> Everything I've got is python-cinderclient at the moment
16:10:37 <jgriffith> Here's a fun little exercise, ask to create a volume that's 10 Gig w/ your 5 Gig backing store (LVM)
16:10:44 <jgriffith> DuncanT1: excellent
16:11:03 <jgriffith> would it help if we pushed to PyPi now, and then again when we're ready to release?
16:11:13 <thingee> jgriffith: it would just error on the manager layer
16:11:17 <jgriffith> Personally I've been just installing from master
16:11:49 <jgriffith> thingee: ?
16:11:56 <jgriffith> thingee: ideally
16:12:02 <jgriffith> thingee: but it's broke
16:12:14 <jgriffith> thingee: between retries we drop the exception from lvcreate
16:12:25 <jgriffith> and go about life ignoring the issue
16:12:34 <thingee> jgriffith: regarding your fun exercise. I always hit that in testing because the volume group is too small.
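The failure mode jgriffith describes above -- a retry loop that drops the lvcreate exception and "goes about life ignoring the issue" -- can be sketched roughly like this. The function names and the simulated lvcreate are hypothetical stand-ins, not the actual cinder code:

```python
def run_with_retries(fn, retries=3):
    """Naive retry loop exhibiting the bug pattern described: the
    lvcreate failure is swallowed and the caller never learns why
    the volume was not created."""
    for _ in range(retries):
        try:
            return fn()
        except RuntimeError:
            continue  # exception discarded between retries
    return None  # the request silently "succeeds" with nothing created


def run_with_retries_fixed(fn, retries=3):
    """Same loop, but the last error is re-raised once retries run out."""
    last_exc = None
    for _ in range(retries):
        try:
            return fn()
        except RuntimeError as exc:
            last_exc = exc
    raise last_exc


def lvcreate_too_big():
    # Stand-in for lvcreate failing: a 10 Gig volume on a 5 Gig VG
    raise RuntimeError("Volume group has insufficient free space")
```

With the naive version, `run_with_retries(lvcreate_too_big)` returns None and the over-sized request appears to work; the fixed version surfaces the LVM error to the caller.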
16:13:47 <avishay> jgriffith: I'd like to queue this for discussion please: https://review.openstack.org/#/c/24321/
16:14:03 <jgriffith> avishay: how about now
16:14:07 <jgriffith> #topic
16:14:15 <avishay> jgriffith: if queue length is 0...
16:14:23 <avishay> hmm...
16:14:59 <jgriffith> avishay: what would you like to discuss
16:15:23 <avishay> jgriffith: winston-d's comment on "Need discussion on whether these capabilities should be added in host_manager"
16:15:32 <winston-d> yeah
16:15:34 <rushiagr> what is this login.html link about?
16:15:42 <avishay> jgriffith: does everyone agree that this should be added?
16:16:01 <jgriffith> #topic https://review.openstack.org/#/c/24321/
16:16:15 <jgriffith> rushiagr: sorry... wrong info in my clipboard
16:16:20 <rushiagr> jgriffith: does that mean every driver needs to return a default value?
16:16:23 <rushiagr> jgriffith: np
16:16:28 <winston-d> avishay's patch is to add two new capabilities: compression and tiering to host_manager
16:16:53 <xyang_> winston-d: what happens if a driver doesn't report on those two capabilities
16:16:55 <avishay> rushiagr: default is False for both capabilities, if the driver doesn't return anything
16:16:55 <jgriffith> my opinion was yes on compression, not crazy about tiering
16:17:13 <jgriffith> avishay: but I'm flexible
16:17:28 <rushiagr> avishay: okay
16:17:31 <winston-d> xyang_ : the default should be False
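The defaulting behaviour avishay and winston-d describe amounts to merging whatever the driver reports over a False baseline. A minimal sketch (hypothetical helper, not the actual patch under review):

```python
# Capabilities a driver never mentions are assumed absent.
DEFAULT_CAPABILITIES = {'compression': False, 'tiering': False}


def effective_capabilities(reported):
    """Overlay driver-reported stats on the False defaults, so drivers
    that say nothing need no code change at all."""
    caps = dict(DEFAULT_CAPABILITIES)
    caps.update(reported or {})
    return caps
```

This is why rushiagr's concern doesn't bite: a driver that reports nothing simply gets `compression=False, tiering=False` without adding a line anywhere.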
16:17:31 <jgriffith> I think I mentioned before that I'm fine with it
16:17:37 <avishay> does anyone have strong feelings either way?
16:17:39 <avishay> jgriffith: yes you did
16:17:56 <avishay> or even not-so-strong feelings? :)
16:18:03 <jgriffith> haha
16:18:19 <jgriffith> We've become a much less controversial group over the last few weeks :)
16:18:21 <DuncanT1> My only worry is getting a flexible enough definition of these capabilities
16:18:25 <rushiagr> if this doesn't amount to adding a couple of lines in every driver i'm fine
16:18:31 <winston-d> is 'compression' and 'tiering' well-known feature for a storage solution?
16:18:41 <eharney> is tiering something that we think can be used in a similar fashion across different drivers?
16:18:52 <DuncanT1> So that we don't end up needing many variations on the theme
16:19:07 <xyang_> winston-d: they are well known, but we may not expose them.  we may hide them in a pool
16:19:21 <avishay> compression is done in several controllers - jgriffith, doesn't solidfire do it as well?
16:19:38 <jgriffith> yeah, I think *most* devices do compression these days
16:19:43 <avishay> xyang_: what do you mean by hiding them in a pool?
16:19:58 <jgriffith> what I'm not sure of is the value/need to expose this information back up
16:20:11 <DuncanT1> Is it something you want to be exposing to the sysadmin though? i.e. to let end uses choose to avoid it?
16:20:12 <guitarzan> maybe I need to read up on the host stuff, but who needs to know whether or not a volume is compressed?
16:20:29 <xyang_> avishay: I mean our driver may not report those capabilities specifically
16:20:35 <jgriffith> DuncanT1: the sysadmin will hopefully know what the device he installed is capable of already
16:20:38 <jgriffith> :0
16:20:54 <avishay> if you have a volume that's going to be logs, it should be scheduled on a compressed volume.  if it will be binary blobs, compression will waste resources.
16:20:56 <jgriffith> My view is that these values are specifically for scheduling purposes
16:21:10 <bswartz> jgriffith: +1
16:21:21 <winston-d> DuncanT1 : what do you mean by 'flexible enough defintion' ?
16:21:24 <DuncanT1> jgriffith: You don't know many sysadmins ;-)
16:21:30 <jgriffith> DuncanT1: haha!
16:21:36 <jgriffith> So here's the thing...
16:21:44 <jgriffith> If there's some confusion or concerns...
16:22:09 <jgriffith> I would propose we leave it out, I think we can get what avishay wants here via introducing types specific to these things
16:22:32 <jgriffith> I don't want to make life harder though and I don't have a real objection/preference on compression
16:22:44 <jgriffith> I'm just a bit neutral on it
16:22:46 <avishay> jgriffith: how can you do it via types?
16:22:58 <xyang_> avishay: the admin can setup gold, silver, bronze pools ahead of time, and each pool is already associated with certain capabilities.
16:23:03 <DuncanT1> winston-d: I mean that, for example, 'tiering' is a flexible enough term that some other vendor doesn't come along and say 'we do multiple storage media user-directable migration levels, which is not tiering the way IBM does it, and so we want our own capability adding...'
16:23:06 <jgriffith> avishay: Just define a type as "logs volume" or whatever
16:23:23 <jgriffith> avishay: and with knowledge of the capabilities set that up to point to the correct backend
16:23:24 <guitarzan> backends are tied to a type
16:23:52 <kmartin> xyang_: +1 that is how we are going to handle this
16:23:54 <jgriffith> avishay: or are you saying this would allow you to *set* compression on your device on a per volume basis?
16:23:57 <avishay> jgriffith: and then the admin has to manually say which backends support what
16:24:07 <xyang_> kmartin: good
16:24:08 <winston-d> in fact, even if these capabilities are not part of host state (e.g. avishay failed to get them in), they're still part of the capabilities of a host if the driver reports them.
16:24:15 <jgriffith> avishay: yes, not overly elegant but it's an interim solution that works
16:24:44 <jgriffith> winston-d: good point
16:24:55 <jgriffith> winston-d: So then custom filters can still be written
16:24:59 <avishay> jgriffith: no, the point is for the driver to automatically report capabilities and for the scheduler to know about them, so that admins don't have to configure properties for each pool manually
16:25:24 <jgriffith> avishay: ok, that's what I thought you meant, thanks for clarifying.
16:25:30 <winston-d> jgriffith : that's the beauty of filter scheduler. :)
16:25:31 <jgriffith> avishay: I wanted to make sure I wasn't missing something
16:25:45 <jgriffith> winston-d: +1
16:25:51 <avishay> winston-d: but the capability filter scheduler only looks at things in host state, right?  maybe that's the problem?
16:26:21 <winston-d> avishay : no, it also looks at capabilities of a host
16:27:15 <avishay> winston-d: self._satisfies_extra_specs(host_state.capabilities, resource_type)
16:27:24 <xyang_> winston-d: with the mutli-backend support, does "host" mean a cinder-volume service now?
16:27:33 <winston-d> xyang_ : you are right
16:27:53 <winston-d> avishay : exactly, host_state.capabilities
16:28:14 <avishay> winston-d: so if something isn't added, it's ignored, which is why i wanted to add these two items
16:29:05 <guitarzan> I think we need to figure out what purpose we think volume types are supposed to serve
16:29:35 <winston-d> avishay : no, see line 112 of host_manager.py  capabilities reported by driver are copied to host_state.capabilities.
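winston-d's point -- that anything a driver puts in its stats report lands in `host_state.capabilities` and is therefore already visible to the capabilities filter, with no `host_manager.py` change needed -- can be sketched with a simplified, hypothetical stand-in for cinder's `HostState`:

```python
class HostState:
    """Simplified stand-in: each periodic stats report from a driver
    replaces the host's capabilities wholesale, so any key the driver
    chooses to report (e.g. 'compression') reaches the filters."""

    def __init__(self, host):
        self.host = host
        self.capabilities = {}

    def update_from_volume_capability(self, capability):
        # Copy the driver's report, vendor-specific keys included.
        self.capabilities = dict(capability)


state = HostState('lvm-backend-1')
state.update_from_volume_capability(
    {'free_capacity_gb': 100, 'compression': True})
```

After the update, `_satisfies_extra_specs(host_state.capabilities, ...)` can already match on `compression` even though the scheduler never hard-coded that key -- which is why the patch's `host_manager.py` changes turned out to be unnecessary.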
16:29:43 <bswartz> guitarzan: we've been over that a number of times
16:29:58 * jgriffith sees a rathole in our future ;)
16:30:07 <bswartz> I wish there was a nice document explaining the conclusions of these multiple conversations
16:30:09 <guitarzan> bswartz: I know, but it doesn't seem to be resolved
16:30:14 <guitarzan> indeed
16:30:18 <jgriffith> bswartz: you could write one :)
16:30:23 * bswartz hides
16:30:25 <jgriffith> guitarzan: bswartz I'll write on
16:30:27 <jgriffith> one
16:30:32 <jgriffith> it's resolved IMO
16:30:52 <avishay> winston-d: so that things reported by the driver are read once, and things in host state are constantly updated?
16:30:54 <guitarzan> oh? that's good to hear
16:31:11 <bswartz> I propose discussing it one more time at the conference and making sure we write down the conclusions and turn them into a doc
16:31:12 <jgriffith> guitarzan: haha
16:31:19 <bswartz> I can sign up for that
16:31:31 <avishay> bswartz: i was planning to bring this topic up at the summit as well
16:31:40 <bswartz> I can also try to get material prepared in advance
16:31:51 <jgriffith> So hold on a sec...
16:32:04 <jgriffith> First.. the issue with the patch from avishay
16:32:16 <avishay> if things work as winston-d says (I will test to make sure), then we don't need any change to host_manager.py, and the issue seems resolved
16:32:18 <jgriffith> Compression is pretty standard and has a pretty distinct meaning
16:32:35 <jgriffith> I think there are easy ways to get around it but regardless
16:32:43 <winston-d> avishay : see line 273 of host_manager.py  host_state.capabilities are constantly updated as well.
16:32:44 <jgriffith> avishay: if you want to put in compression I'm fine and I say go for it
16:32:55 <jgriffith> tiering on the other hand I'm not a fan of
16:33:06 <jgriffith> tiering should fall into the types setting IMO
16:33:10 <DuncanT1> jgriffith: It sounds like the  patch is unnecessary
16:33:11 <avishay> winston-d: missed that - thanks
16:33:14 <avishay> jgriffith: seems fair
16:33:16 <jgriffith> as in select the tier
16:33:23 <jgriffith> DuncanT1: That was my initial point
16:33:26 <guitarzan> jgriffith: cool, but I do look forward to hearing what types are :)
16:33:46 <avishay> so bottom line, i'll re-submit without the changes to host_manager.py?
16:33:49 <jgriffith> DuncanT1: it's unnecessary but if it's convenient for a specific use case avishay has or knows of I don't care
16:33:53 <avishay> and we'll discuss at the summit
16:34:14 <jgriffith> avishay: if that works for you that's absolutely great with me
16:34:19 <DuncanT1> avishay: That sounds perfect :-)
16:34:21 <avishay> jgriffith: cool
16:34:29 <avishay> thanks everyone
16:34:41 <jgriffith> #topic volume-types
16:35:09 <jgriffith> Volume types are custom/admin-defined volume types that can be used to direct the scheduler to the appropriate back-end
16:35:25 <jgriffith> # end of topic!
16:35:27 <guitarzan> hehe
16:35:34 <jgriffith> ok... moving on :)
16:35:36 <winston-d> jgriffith : nice!
16:35:37 <guitarzan> except in the case of extra specs, which does the same thing
16:35:41 * guitarzan hides
16:35:50 <jgriffith> guitarzan: haha... but no, not really
16:35:56 <bswartz> guitarzan: no, they work together though
16:36:01 <jgriffith> #topic extra-specs
16:36:14 <avishay> jgriffith: is it only for the scheduler?  or also to pass information about how to create the volume to a driver?
16:36:16 <jgriffith> extra-specs are additional meta info to be passed to the driver selected by volume-type
16:36:16 <winston-d> guitarzan : it is extra specs that get volume types to do what jgriffith said it can do
16:36:19 <jgriffith> #end-topic
16:36:45 <jgriffith> extra-specs are just that, *extra*
16:37:04 <guitarzan> so that would imply that compression is a volume type?
16:37:09 <jgriffith> by extra, we mean *extra* information that can be consumed by the backend when it gets its volume-type
16:37:11 <guitarzan> well, I'll think about it anyway :)
16:37:12 <winston-d> avishay : yeah, that's the other important usage of extra specs (to pass requirements to driver)
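Putting the two mini-topics together: a volume-type steers the scheduler to a backend, and its extra specs ride along for that backend's driver to act on. A minimal sketch of the driver side (hypothetical method, not a real cinder API):

```python
def create_volume(volume, volume_type):
    """Hypothetical driver hook: consume extra specs attached to the
    volume type the scheduler already matched against this backend."""
    specs = (volume_type or {}).get('extra_specs', {})
    # e.g. a 'compression' spec is *extra* info the backend interprets
    compress = specs.get('compression', 'false').lower() == 'true'
    return {'name': volume['name'], 'compressed': compress}


vol = create_volume(
    {'name': 'vol-1', 'size': 10},
    {'name': 'gold', 'extra_specs': {'compression': 'true'}})
```

In this reading, guitarzan's "gold" type is what the user sees; the `compression: true` spec inside it is a detail only the backend ever interprets.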
16:37:16 <guitarzan> it seems confusing to me
16:37:28 <jgriffith> guitarzan: correct, that's a possible way to do it that I mentioned earlier
16:37:36 <jgriffith> but folks don't like admins to actually have to think
16:37:42 <guitarzan> sure
16:37:46 <kmartin> anyone else notice that you have to enable the scheduler settings in the cinder.conf for the extra specs to work in devstack?
16:37:49 <jgriffith> or maybe they just *can't* actually think
16:38:12 <jgriffith> kmartin: can you ellaborate?
16:38:25 <jgriffith> which setting specifically?
16:38:33 <avishay> I think maybe not everyone knows about scopes, which I learned about while playing with volume types
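The scoping avishay mentions is the prefix on an extra-spec key: scoped keys like `capabilities:compression` are meant for the scheduler's capability matching, while unscoped keys are simply handed through to the driver. A rough sketch of that split (my reading of the convention, not the exact cinder code):

```python
def split_extra_specs(extra_specs):
    """Separate scheduler-scoped keys ('capabilities:...') from the
    unscoped keys that pass straight through to the driver."""
    scheduler, driver = {}, {}
    for key, value in extra_specs.items():
        if ':' in key:
            scope, bare = key.split(':', 1)
            if scope == 'capabilities':
                scheduler[bare] = value
        else:
            driver[key] = value
    return scheduler, driver


sched, drv = split_extra_specs(
    {'capabilities:compression': '<is> True', 'provisioning': 'thin'})
```

Under this reading, kmartin's unscoped `provisioning`-style specs reach his driver, while the scoped keys are what the capability filter evaluates when choosing a backend.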
16:39:33 <kmartin> jgriffith: scheduler_host_manager, scheduler_default_filters,  scheduler_default_weighers and scheduler_driver
* DuncanT1 suggests people with a specific scenario they want to accomplish, in terms of both user and admin actions, that they currently don't know how to do, write it up and we can see if our current method is sufficient or we need to enhance it for Havana
16:40:05 <jgriffith> DuncanT1: +1
16:40:14 <avishay> I like DuncanT1's idea of assigning homework before the Summit :)
16:40:25 <winston-d> DuncanT1 : +1
16:40:26 <DuncanT1> It is far easier to answer specific questions than generalities
16:40:49 <DuncanT1> (I've a few myself that I don't know how to do, though I'm fairly sure they are entirely possible)
16:40:57 <avishay> kmartin: default devstack works fine with volume types for me
16:40:58 <winston-d> DuncanT1: and i believe most cases can be solved now, people just don't know how
16:41:07 <DuncanT1> winston-d: +1
16:41:14 <kmartin> avishay: with extra specs defined?
16:41:27 <avishay> winston-d: you're probably right
16:41:30 <jgriffith> winston-d: DuncanT1 agreed
16:41:33 <avishay> kmartin: what do you mean?
16:41:55 <jgriffith> kmartin: it works for me... wonder if we have an issue with expectations
16:42:21 <bswartz> jgriffith: I have a feeling we have an issue with documentation and understanding
16:42:22 <jgriffith> kmartin: So my driver is selected correctly by the volume type
16:42:33 <jgallard> kmartin, it works for me too
16:42:33 <jgriffith> kmartin: Then it queries that type for extra-specs
16:42:41 <jgriffith> and uses the extra-specs to do *stuff*
16:42:46 <kmartin> yeah, maybe...in our case if we do not have them enabled our driver never gets called
16:42:54 <xyang_> winston-d has a couple of nice docs about scheduler.  It will be nice to combine them and add more to it
16:43:03 <jgriffith> kmartin: ohhh?  That's a problem with the type then
16:43:10 <jgriffith> kmartin: volume-type is what selects the driver
16:43:27 * jgriffith really needs to document this it seems
16:43:47 <kmartin> ok...we may be using it incorrectly then
16:43:56 <jgriffith> kmartin: uh oh :)
16:44:09 <jgriffith> kmartin: what's the scenario you're trying to run?
16:45:12 <thingee> so just so everyone knows, I'm currently in the process of creating the initial block storage manual and separating it out of compute manuals.
16:45:20 <kmartin> we create volume type like Gold, Silver, Bronze, then assign extra specs to those with different capabilities on the array, like provisioning, host mode cpg, etc...
16:46:16 <thingee> all driver information will be moved over if you have it in there
16:46:17 * DuncanT1 wonders if we have any more topics to cover, since we can always work out the details of volume-type usage in #openstack-cinder
16:46:30 <jgriffith> DuncanT1: good point
16:46:33 * bswartz has a topic
16:46:40 <jgriffith> bswartz: go for it
16:46:42 <avishay> bswartz: care to share? :)
16:46:46 <bswartz> quick question actually
16:46:59 <bswartz> just wanted to know about policy surrounding backporting from grizzly to folsom
16:47:11 <bswartz> what is the policy and who enforces it?
16:47:28 <eharney> i've been wondering a bit about this myself
16:47:39 <jgriffith> bswartz: the OSLO team mostly enforces it by having +2/A authority
16:48:03 <eharney> do we need to change something so that the right people get added to the reviews?
16:48:39 <eharney> for example, i've had https://review.openstack.org/#/c/22244/ floating around for a while now and i'm not sure who to poke
16:48:43 <jgriffith> eharney: bswartz I'll get with ttx and markmc and get this resolved
16:48:54 <jgriffith> also get clarification on features versus bugs etc etc
16:48:59 <eharney> (which isn't a grizzly backport, it's oslo stable syncing, but still)
16:49:03 <bswartz> I ask because Rushi mentioned doing some backport of features from grizzly to folsom, and I was surprised that this was even allowed
16:49:24 <jgriffith> bswartz: he mentioned it to me and I was TOTALLY in favor of it
16:49:26 <bswartz> I think backporting features is a fine idea, as long as it doesn't get us into trouble
16:49:30 <eharney> jgriffith: thanks, that would be helpful
* rushiagr remembers jgriffith mentioning backporting multi-backend and filter sched
16:49:46 <jgriffith> So the rule of thumb is "it depends on the risk introduced"
16:49:50 <jgriffith> not very clear eh?
16:50:08 <jgriffith> rushiagr: yes, I would love it if we can do that
16:50:23 <avishay> if we start backporting everything though...
16:50:35 <jgriffith> avishay: no :)
16:50:41 <avishay> jgriffith: exactly :)
16:50:46 <jgriffith> avishay: it would have to be very selective
16:51:07 <jgriffith> avishay: I've picked the scheduler in particular because a number of large providers have asked me for it
16:51:31 <jgriffith> and technically the existing scheduler in Folsom is lacking to say the least
16:51:54 <bswartz> jgriffith: as PTL how much does your opinion count when it comes to deciding if a feature can be backported?
16:52:03 <jgriffith> bswartz: we'll find out :)
16:52:07 <bswartz> :-)
16:52:24 <bswartz> okay that's all I had
16:52:25 <jgriffith> bswartz: it should count for a bit, depending on the TC
16:52:27 <winston-d> jgriffith : can we do back-porting filter scheduler after grizzly released?
16:52:36 <jgriffith> winston-d: yeah, I think it would have to be
16:52:38 <xyang_> jgriffith: can a new driver be backported to Folsom?
16:52:48 <jgriffith> winston-d: too much disruption to do it now IMO
16:53:10 <winston-d> jgriffith : i've been occupied lately so not much bandwidth to do that before design summit
16:53:13 <jgriffith> xyang_: I think that would be where the line would be
16:53:27 <jgriffith> I'll come up with guidelines and submit them to everyone later this week
16:53:30 <bswartz> xyang_: we (NetApp) do that all the time, but we release the backported code from our github repo rather than submitting to a stable branch in cinder
16:53:45 <jgriffith> bswartz: +1 I do the same thing
16:54:22 <rushiagr> how about keeping cinder/volume/drivers folder open for backport?
16:54:38 <xyang_> bswartz: so that's your private github repo?
16:54:44 <jgriffith> rushiagr: it's more difficult than that
16:55:00 <jgriffith> I'll write up guidelines
16:55:07 <bswartz> xyang_: public
16:55:17 <jgriffith> meanwhile I need to wrap up here
16:55:37 <jgriffith> we can all meet back up in openstack-cinder if folks have more they want to hammer out?
16:55:44 <jgriffith> I'll be offline for about an hour
16:57:13 <bswartz> jgriffith: you forgot to #endmeeting
16:57:30 <avishay> thanks and bye everyone!
16:57:37 <xyang_> bye
16:57:39 <kmartin> bye
16:58:09 <kmartin> bswartz: can you try ending the meeting?
16:58:19 <bswartz> #endmeeting
16:58:24 <bswartz> I doubt it will work
16:58:38 <rushiagr> it works only with same nick
16:58:52 <bswartz> doh!
16:58:55 <rushiagr> and the bad part is jgriffith doesn't log out, and xen folks must be waiting
16:59:09 <johnthetubaguy> we can use the alternative channel if needed
16:59:20 <openstack> johnthetubaguy: Error: Can't start another meeting, one is in progress.
16:59:25 * winston-d try /nick himself to be john. :)
16:59:32 <hemna> openstack can kick him....if someone has the passwd :P
16:59:51 <rushiagr> hemna: good idea! (if it works)
16:59:56 <johnthetubaguy> try cinder lol
17:00:01 <guitarzan> hah
17:00:07 <kmartin> lol
17:02:03 <johnthetubaguy> OK, so join #openstack-meeting-alt for the XenAPI meeting today
17:02:18 <johnthetubaguy> I hope that works out OK for people
17:02:33 <BobBall> :)
17:02:35 <BobBall> Works fine for me
17:03:46 <rushiagr> someone can change the channel message so people get to know the meeting has been shifted to alt channel?
17:03:47 <BobBall> matelakat, we're on #openstack-meeting-alt
17:03:52 <matelakat> Oh.
17:03:59 <matelakat> Ehy is that?
17:04:04 <matelakat> Why is that?
17:04:32 <BobBall> matelakat, because we can't stop the cinder meeting - the nick that started it isn't here :)
17:04:34 <guitarzan> jgriffith didn't end the meeting :)
20:00:21 <openstack> sdake_: Error: Can't start another meeting, one is in progress.
20:00:39 <asalkeld> great
20:00:58 <stevebaker_> hi
20:01:03 <sdake_> join #openstack-meeting-alt
20:01:15 <asalkeld> jgriffith, can you end your meeting?
20:01:16 <zaneb> is the cinder meeting actually still going on?
20:01:39 <asalkeld> no, just forgot to end it
20:01:50 <sdake_> join openstack-meeting-alt - we will hold our meeting there
20:11:30 <jgriffith> #endmeeting