16:00:24 <jgriffith> #startmeeting cinder
16:00:25 <openstack> Meeting started Wed Jul 17 16:00:24 2013 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:29 <openstack> The meeting name has been set to 'cinder'
16:00:51 <jgriffith> avishay: thingee DuncanT hemna_  kmartin
16:01:04 <jgriffith> xyang_: howdy
16:01:08 <zhiyan> hello
16:01:12 <thingee> o/
16:01:22 <jgriffith> zhiyan: hey ya
16:01:29 <xyang_> jgriffith: hi!
16:01:36 <jgriffith> So there *were* some topics this morning, but it appears they've been removed :)
16:01:54 <avishay> hi all!
16:01:57 <winston-d> hi all~
16:01:58 <jgriffith> winston-d: hey
16:02:04 <jgriffith> avishay: morning/evening
16:02:10 <winston-d> jgriffith: morning
16:02:22 <avishay> jgriffith: evening :)
16:02:24 <jgriffith> So there were some proposed topics, but they've been removed
16:02:39 <jgriffith> Does anybody have anything they want to make sure we hit today before I start rambling along?
16:02:42 <DuncanT> Hey all
16:02:50 <jungleboyj> Howdy!
16:02:51 <DuncanT> I've got nothing
16:02:58 <zhiyan> yes, can we talk about 'R/O' volume support for the cinder part?
16:03:10 <jgriffith> zhiyan: indeed...
16:03:19 <jgriffith> zhiyan: if there are no objections, we can start with that actually
16:03:24 <thingee> jgriffith: I only saw one thing tacked onto the july 10th agenda =/
16:03:37 <avishay> jgriffith: gnorth has a topic too
16:03:37 <bswartz> I'm curious to know what the current state of multi-attach is
16:03:42 <jgriffith> thingee: yeah, the local volume thing?
16:03:49 <kmartin> hemna and I should be able to attend the last 30 minutes
16:03:49 <zhiyan> thanks, first i want to make sure: is there an existing BP for that?
16:03:55 <med_> Introduce the volume-driver-id blueprint - gnorth from history
16:03:56 <jgriffith> bswartz: check the bp, no real update as of yet
16:04:05 <bswartz> okay that's what I suspected -- just checking
16:04:11 <med_> Introduce the volume-driver-id blueprint - gnorth from WIKI history
16:04:17 <jgriffith> #topic R/O attach
16:04:24 <jgriffith> zhiyan: what's on your mind?
16:04:31 <gnorth> Hey, I'm on.
16:04:32 <thingee> jgriffith: https://blueprints.launchpad.net/cinder/+spec/volume-driver-id volume-driver-id
16:04:37 <thingee> jgriffith: that was the only thing
16:04:49 <jgriffith> thingee: thanks :)
16:04:53 <zhiyan> sorry, i don't know about that either.
16:05:01 <zhiyan> bswartz: yes, multiple-attaching
16:05:14 <zhiyan> we just discussed that in maillist
16:05:52 <jgriffith> zhiyan: you mentioned you wanted to talk about R/O attach?
16:06:21 <zhiyan> jgriffith: as we talked about before (last week?) IMO, if we don't combine multiple-attaching with R/O attaching, then i think R/O attaching for the cinder part may just be an API status management level change..
16:06:36 <zhiyan> do you think so?
16:07:02 <zhiyan> and pass the 'R/O' attaching flag to driver layer
16:07:46 <hemna_> jgriffith, we are doing a preso at the moment....
16:07:54 <zhiyan> and probably nova will need to do a lot of things, in particular hypervisor driver code
16:08:02 <jgriffith> zhiyan: yep, understood.  I realize you're in a bit of a rush to get R/O and don't want to wait
16:08:05 <thingee> https://blueprints.launchpad.net/cinder/+spec/read-only-volumes
16:08:17 <jgriffith> zhiyan: I don't really have an issue with doing the R/O work separately
16:08:27 <jgriffith> zhiyan: is that what you're getting at?
16:08:59 <jgriffith> zhiyan: that work is in progress but has been rejected
16:09:00 <zhiyan> i'm ok if we separate the r/o attaching and multiple attaching changes
16:09:22 <jgriffith> zhiyan: I'm just trying to get to exactly what you're asking/proposing
16:09:28 <zhiyan> address r/o attaching first, then multiple-attaching IMO
16:09:43 <jgriffith> zhiyan: yeah, that's happening already
16:09:51 <jgriffith> see the bp thingee referenced
16:10:02 <zhiyan> who is developing the r/o attaching feature now?
16:10:05 <jgriffith> also the review: https://review.openstack.org/#/c/34723/
16:10:06 <thingee> +1 these are already separate bps. they should be separate commits for sanity
16:10:07 <zhiyan> ok
16:10:10 <thingee> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
16:10:41 <jgriffith> zhiyan: so if you have an interest you should review and add input :)
16:10:43 <zhiyan> oh, i need to check it
16:10:50 <zhiyan> :) just didn't know about that before
16:10:54 <jgriffith> zhiyan: :)
16:10:56 <zhiyan> yep
16:11:07 <jgriffith> there's a ton of in flight stuff right now, hard to keep track :)
16:11:18 <jgriffith> Ok... anything else on R/O ?
16:11:31 <zhiyan> no, but have one to multi-attach-volume,
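[To make zhiyan's "status management plus pass the flag to the driver" point concrete, a minimal sketch of the Cinder-side change, collapsing the API/manager split for brevity. The mode parameter and access_mode key are illustrative assumptions, not an agreed API.]

    from cinder import exception  # Cinder's existing exception module

    class API(object):
        """Stand-in for cinder.volume.api.API -- sketch only."""

        def attach(self, context, volume, instance_uuid, host_name,
                   mountpoint, mode='rw'):
            # The status-management-level part: validate and record the mode.
            if mode not in ('rw', 'ro'):
                raise exception.InvalidInput(reason="mode must be 'rw' or 'ro'")
            self.db.volume_update(context, volume['id'],
                                  {'status': 'in-use',
                                   'attach_status': 'attached',
                                   'attached_mode': mode})

        def initialize_connection(self, context, volume, connector):
            # Pass the R/O flag down with the connection info so the
            # consumer (Nova's hypervisor driver) can honor it.
            conn_info = self.driver.initialize_connection(volume, connector)
            conn_info['data']['access_mode'] = volume.get('attached_mode', 'rw')
            return conn_info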
16:11:39 <jgriffith> #topic multi-attach
16:12:04 <dosaboy> jgriffith: sorry i'm late, i had a topic to discuss if there is time :)
16:12:21 <jgriffith> dosaboy: sure... I'll hit you up in a few minutes
16:12:31 <thingee> dosaboy: add it to https://wiki.openstack.org/wiki/CinderMeetings
16:12:33 <bswartz> dosaboy: we never run out of time in these meetings!
16:12:37 <jgriffith> zhiyan: so there was some confusion introduced because of multiple bp's here
16:12:39 <thingee> heh
16:12:40 <zhiyan> seems that bp is different from what I had in mind. I will check it later...sorry
16:12:41 <jgriffith> bswartz: now that's funny!
16:12:41 <dosaboy> doing it now
16:12:50 <jgriffith> zhiyan: https://blueprints.launchpad.net/cinder/+spec/shared-volume
16:13:11 <zhiyan> yes, rongze created it
16:13:49 <zhiyan> ok, let me check them, and ask/sync with you offline.
16:13:56 <jgriffith> zhiyan: there's also https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
16:14:04 <avishay> move on to volume-driver-id?
16:14:08 <jgriffith> zhiyan: kmartin we'll want to look at combining these I think
16:14:14 <jgriffith> avishay: yes sir
16:14:23 <jgriffith> anything else on multi-attach?
16:14:25 <jgriffith> bswartz: ?
16:14:28 <zhiyan> no, thanks
16:14:39 <bswartz> no, just that NetApp has a use case for it
16:14:41 <thingee> jgriffith: yeah I'm confused about there being two
16:14:54 <bswartz> we're interested in multi-attach and writable
16:15:02 <jgriffith> thingee: :)  I'll work on getting that fixed up with kmartin and the folks from Mirantis
16:15:05 <avishay> as are we (IBM)
16:15:12 <jgriffith> thingee: on the bright side up until last week there were 3 :)
16:15:39 <jgriffith> good thing is that despite the original comments others added to my proposal, it seems the majority is interested in this landing
16:15:48 <jgriffith> Ok, it's high priority for H3
16:15:56 <jgriffith> I'll get the BP's consolidated and cleaned up this week
16:16:00 <jgriffith> we'll make it happen
16:16:00 <winston-d> thingee: the shared volume bp is 100-yr old~
16:16:09 <thingee> hehe
16:16:14 <med_> heh
16:16:15 <jgriffith> #topic volume-id
16:16:24 <med_> gnorth, ^
16:16:25 <avishay> gnorth: hit it
16:16:30 <gnorth> Hokay, so this is a new blueprint
16:16:42 <med_> link?
16:16:52 <gnorth> https://blueprints.launchpad.net/cinder/+spec/volume-driver-id
16:17:15 <gnorth> Principally, it allows us to behave better when our Volumes are backed by shared storage that non-Openstack users are using
16:17:22 <gnorth> i.e. SAN storage controllers etc.
16:17:39 <jgriffith> gnorth: so on the surface I have no issue with this, think it's a good idea
16:17:47 <gnorth> It is annoying that we have to embed the Cinder Volume UUID in the disk names - that isn't great for administrators etc.
16:17:51 <bswartz> +1
16:18:01 <gnorth> So this BP adds it, and a reference implementation for Storwize (I'm IBM)
16:18:14 <gnorth> I would like to discuss the database upgrade/downgrade scenarios
16:18:22 <gnorth> Since this could be something of a one-way street
16:18:38 <xyang_> nice feature
16:18:40 <gnorth> If I have Volumes whose name is disconnected from the SAN name, and is using the driver_id
16:18:44 <gnorth> There isn't really a good way to downgrade
16:19:20 <zhiyan> makes sense to me, sounds good
16:19:42 <avishay> gnorth: I know we discussed this privately but I forgot the conclusions...could we rename to UUID on downgrade?
16:19:53 <jgriffith> gnorth: "disconnected from the SAN name" / "using the driver_id"
16:20:11 <gnorth> Yeah, so today I create a Volume 223123-abd03-...
16:20:14 <jgriffith> avishay: my vote is no as I stated before in the review
16:20:15 <zhiyan> do you think it will be better if we give a 'metadata' field? it's a dictionary
16:20:24 <gnorth> And it gets called (on the SAN) volume0223123-abd03...
16:20:25 <zhiyan> gnorth:?
16:20:27 <jgriffith> I don't like the idea of going around changing UUID's of volumes
16:20:36 <avishay> jgriffith: which review?
16:20:42 <gnorth> zhiyan - see design, security hole to use metadata
16:20:43 <winston-d> zhiyan: there is metadata
16:20:47 <jgriffith> gnorth: you can specify that in volume_name_template setting by the way
16:20:52 <gnorth> Yes
16:21:00 <gnorth> But the problem is I upgrade, and create  volume
16:21:04 <gnorth> Which is now called volume-geraint-disk
16:21:06 <gnorth> on the SAN
16:21:07 <jgriffith> gnorth: so just change it to "${driver_name}-volume-uuid"
16:21:10 <gnorth> And driver_id references it
16:21:18 <gnorth> Now I downgrade to an earlier OpenStack
16:21:29 <gnorth> And I lose the mapping from Volume to SAN disk, because I lose my driver_id
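[For reference: the UUID-in-the-name behaviour gnorth describes comes from the existing volume_name_template option jgriffith mentions above. A small sketch of how the backend name is derived today, with the default template; re-registering the option here is only so the snippet stands alone.]

    from oslo.config import cfg

    CONF = cfg.CONF
    # Cinder already registers this option; declared here so the sketch runs.
    CONF.register_opts([cfg.StrOpt('volume_name_template', default='volume-%s')])

    def backend_name_for(volume_id):
        """The name a driver gives the backend disk: template applied to the UUID."""
        return CONF.volume_name_template % volume_id

    # backend_name_for('223123-abd03-...') -> 'volume-223123-abd03-...'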
16:21:54 <bswartz> jgriffith: I don't think your understanding on this BP is correct
16:21:55 <jgriffith> gnorth: sure, but why's that a problem?
16:22:02 <gnorth> So I don't have a good solution for downgrade - Avishay's suggestion is to rename the SAN disk, not change the UUID.
16:22:05 <jgriffith> bswartz: sure I don't
16:22:13 <jgriffith> bswartz: enlighten me then
16:22:53 <bswartz> it sounds to me like it's a mechanism to allow drivers to name their internal things something not based on UUID, and we add a database column to maintain the key that the driver actually used
16:23:01 <gnorth> Precisely
16:23:06 <bswartz> so the driver can map the thing it created to the original UUID
16:23:16 <jgriffith> bswartz: ok, what's the part that I don't understand?
16:23:28 <bswartz> there is no proposal to change UUIDs here
16:23:42 <jgriffith> bswartz: ahh... that was WRT to a comment earlier by avishay I believe
16:23:45 <bswartz> just a proposal to add a mapping layer from the UUIDs to something driver-defined
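[A sketch of the mapping layer bswartz describes, seen from the driver's side: the driver picks its own backend name and hands the key back for Cinder to persist in the new column. The driver_id key matches the blueprint's name; _friendly_name and the _array client are hypothetical stand-ins.]

    class ExampleDriver(object):
        """Illustrative driver using the proposed UUID -> backend-name mapping."""

        def create_volume(self, volume):
            # Name the backend object however the site prefers...
            backend_name = self._friendly_name(volume)   # hypothetical helper
            self._array.create_vdisk(backend_name, volume['size'])
            # ...and return the key so Cinder stores it alongside the UUID.
            return {'driver_id': backend_name}

        def delete_volume(self, volume):
            # Fall back to the template-based UUID name for older volumes.
            backend_name = volume.get('driver_id') or volume['name']
            self._array.delete_vdisk(backend_name)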
16:23:47 <jgriffith> and...
16:24:04 <avishay> jgriffith: i meant that on downgrade, we can rename the volume to the way it's named today, by UUID
16:24:29 <bswartz> avishay: that would work, but the downgrade logic might get a lot more complicated if it has to do non-database-related work
16:24:31 <winston-d> so if there is really a need to downgrade, the driver should take over maintaining that mapping internally
16:24:36 <jgriffith> avishay: ahh... yeah, that seems like a foregone conclusion
16:24:51 <jgriffith> winston-d: driver will be downgraded as well :)
16:25:32 <jgriffith> so it is tricky
16:25:34 <xyang_> avishay: you mean someone will manually rename using UUID?
16:25:50 <jgriffith> xyang_: I think that's the problem
16:25:59 <jgriffith> xyang_: the downgrade can do this in the DB for us
16:26:10 <jgriffith> but then we have all these volumes with inaccurate names
16:26:13 <winston-d> avishay: is it a good idea to use an end-user visible field to do that?
16:26:15 <xyang_> jgriffith: ok
16:26:45 <jgriffith> winston-d: remember there's a difference between name and display-name
16:26:48 <bswartz> it seems to me that the only way to downgrade, if this feature is in use, would be to run a driver-specific process before the downgrade that removes reliance on that column by renaming everything
16:27:07 <gnorth> Yes, so I could fail the downgrade if there were any non-null driver_ids
16:27:14 <jgriffith> I think it might be safer to introduce a new column
16:27:30 <avishay> winston-d: do you have an alternate suggestion?
16:27:37 <jgriffith> "new" naming styles would be ported back to name in the backport
16:27:52 <jgriffith> and new volumes created after the backport would revert back to old-style
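[gnorth's "fail the downgrade if there are any non-null driver_ids" option, sketched in the sqlalchemy-migrate style Cinder migrations use. The column name follows the blueprint; whether to block, rename, or simply drop on downgrade is exactly the open question in this discussion.]

    from sqlalchemy import Column, MetaData, String, Table, func, select


    def upgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine
        volumes = Table('volumes', meta, autoload=True)
        # Nullable, so existing volumes keep their template-based names.
        volumes.create_column(Column('driver_id', String(255), nullable=True))


    def downgrade(migrate_engine):
        meta = MetaData()
        meta.bind = migrate_engine
        volumes = Table('volumes', meta, autoload=True)
        in_use = select([func.count()]).select_from(volumes).where(
            volumes.c.driver_id != None).execute().scalar()
        if in_use:
            # Dropping the column would orphan the backend names; refuse.
            raise Exception('%d volumes still reference driver_id' % in_use)
        volumes.drop_column('driver_id')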
16:27:53 <DuncanT> I definitely think that this info should not be end-user visible...
16:28:16 <jgriffith> DuncanT: I don't see why it would be... gnorth avishay am I correct?
16:28:19 <gnorth> DuncanT - in some environments (the ones I work in), it is useful to know the backend reference
16:28:19 <jgriffith> bswartz: ^^
16:28:21 <gnorth> For example
16:28:29 <DuncanT> gnorth: Admin only extension then
16:28:39 <gnorth> I may want to do things on my Storage Controller with my disk that I can't do via the OpenStack driver currently.
16:28:46 <DuncanT> gnorth: Or some policy controlled extension
16:28:56 <gnorth> Yes, that could be done
16:28:57 <jgriffith> so I guess IMO for that I don't see why you can't just continue using UUID for the mapping
16:28:57 <thingee> DuncanT: if we're talking about extension, I'd rather not see a new column
16:29:08 <jgriffith> or as metadata on the backend device if available etc
16:29:19 <gnorth> It is because there isn't metadata on the backend device
16:29:25 <thingee> anything optional shouldn't mess with the model
16:29:25 <jgriffith> gnorth: on yours :)
16:29:32 <jgriffith> gnorth: so to back up...
16:29:36 <gnorth> sure
16:29:37 <jgriffith> gnorth: avishay bswartz
16:29:48 <avishay> thingee: maybe we need non user-accessible metadata?
16:29:52 <jgriffith> The only thing this solves is you don't like using UUID in the volume names on your devices?
16:30:04 <jgriffith> if so that doesn't seem like a very compelling argument to me
16:30:10 <bswartz> jgriffith: that sounds about right
16:30:16 <gnorth> Other things it solves, which you can count as non-problems if you wish:
16:30:19 <jgriffith> bswartz: that seems like a lame argument
16:30:33 <gnorth> 1. I would like to (in the future) add the ability to import SAN volumes into Cinder, and not have to rename them.
16:30:35 <bswartz> it may be weak, but it's not without merit
16:30:43 <jgriffith> gnorth: already have a bp for that actually
16:30:59 <avishay> gnorth: https://blueprints.launchpad.net/cinder/+spec/add-export-import-volumes
16:31:14 <winston-d> gnorth: rename SAN volumes at what level? back-end?
16:31:15 <gnorth> 2. Since backend volume names must be unique (on my controller...) then it might simplify some volume migration scenarios if we can decouple UUID from backend name, but I've not given that much thought.
16:31:29 <jgriffith> gnorth: unique screams UUID to me :)
16:31:51 <gnorth> winston-d - Yes
16:31:59 <jgriffith> so there are reasons that I converted all of the volumes to use UUID's in the first place
16:32:03 <gnorth> In enterprise environments, your SAN administrator may have strict ideas about how they like to organise things
16:32:03 <jgriffith> believe it or not :)
16:32:32 <jgriffith> gnorth: perhaps, but would they rather have that or have something that works?
16:32:43 <gnorth> What is the flaw in my proposal?
16:32:47 <jgriffith> ok... so I'll stop joking around
16:32:58 <thingee> gnorth: no downgrade path
16:33:04 <jgriffith> gnorth: I like your proposal WRT adding a driver ID column
16:33:08 <jgriffith> I think that's a great idea
16:33:20 <jgriffith> I don't like the idea you're proposing WRT to naming games
16:33:33 <jgriffith> but bswartz and others seem to like it so that's fine
16:33:36 <jgriffith> I'm out voted
16:34:01 <jgriffith> But as I said, I don't see the point in adding that sort of complexity, confusion, lack of downgrade path etc
16:34:14 <gnorth> Now, we could make this feature optional - default behaviour would use UUIDs and so downgrade gracefully.
16:34:14 <jgriffith> Just not seeing the benefit...
16:34:27 <gnorth> I'm surprised that you see a benefit in driver_id without the naming thing
16:34:31 <jgriffith> gnorth: so if the admin didn't choose the right option they could never downgrade?
16:34:41 <thingee> gnorth: if it's optional, we're not even talking about it touching the model
16:34:55 <jgriffith> thingee: +1
16:34:55 <gnorth> Yes, if they specified in their config file that they wanted friendly names on the storage controller, they could never downgrade.
16:35:00 <DuncanT> How often do downgrades really happen / work?
16:35:04 <thingee> extensions should deal with metadata, which as avishay pointed out should be some sort of admin only metadata which we don't have yet
16:35:11 <jgriffith> DuncanT: that's not really the point
16:35:18 <gnorth> Admin-only metadata would be fine for this also.
16:35:46 <jgriffith> gnorth: BTW, driver ID is useful for a host of other things
16:35:47 <avishay> +1 for admin-only metadata
16:35:50 <jgriffith> gnorth: for example
16:35:59 <jgriffith> gnorth: I have a customer with 50 SF clusters
16:36:10 <jgriffith> gnorth: they're trying to find a volume uuid=XXXXX
16:36:22 <jgriffith> gnorth: it would be helpful if they knew "which" cluster to go to
16:36:27 <dosaboy> DuncanT: good question, but perhaps for another topic ;)
16:36:33 <jgriffith> gnorth: then they just search for the UUID
16:36:58 <jgriffith> So regardless...
16:37:05 <winston-d> jgriffith: can't they do that only using 'host', even with multiple back-ends?
16:37:09 <thingee> gnorth: I'm keeping an open mind on the idea but I still can't justify the two points you raised earlier. We already have plans for importing volumes, which was your first point, and I'm not entirely sure about the second point of decoupling UUID from the backend
16:37:10 <jgriffith> Really I don't think we can do an upgrade that can never be downgraded
16:37:26 <jgriffith> winston-d: ahh... interesting point
16:38:12 <bswartz> jgriffith: I think the driver_id column is actually the mapping column, not the ID of the driver
16:38:20 <bswartz> maybe it's poorly-named
16:38:27 <gnorth> driver_id could really just be "driver metadata"
16:38:33 <gnorth> And by extension, admin-only metadata
16:38:33 <jgriffith> bswartz: Oh
16:38:56 <zhiyan> gnorth: yes, that should be a 'metadata dictionary'
16:38:58 <gnorth> jgriffith - you want to re-state your position? :-)
16:39:01 <jgriffith> Ok...
16:39:02 <winston-d> to me, it seems _without_ driver-id or driver metadata, cinder should work fine. no?
16:39:07 <avishay> If this is implemented as a policy-controlled extension with admin-only metadata, with a warning label that says "this is a one-way street", is that acceptable?
16:39:12 <jgriffith> gnorth: well, you probably don't want me to
16:39:24 <jgriffith> :)
16:39:35 <jgriffith> Ok, I should let other discuss
16:39:38 <gnorth> I guess it is a question of how much we see enterprise environments like this as a target market for OpenStack
16:39:47 <jgriffith> gnorth: easy there...
16:39:54 <gnorth> Not being able to name disks, or at least rename them is going to be a problem.
16:39:55 <thingee> gnorth: hey sorry I know you're talking to jgriffith but I asked earlier for you to explain further your second point of decoupling the UUID from the backend
16:40:00 <jgriffith> gnorth: I think enterprises are a HUGE target for OpenStack
16:40:22 <jgriffith> but I'm not buying the argument that they require "friendly" names on the backend devices and that UUIDs are unacceptable
16:40:35 <DuncanT> gnorth: There are plenty of very much enterprise class backends that work fine just now
16:40:40 <jgriffith> again though, I'm fine if everybody else likes it
16:40:56 <jgriffith> I'll let you all figure it out, but remember you have to come up with a downgrade strategy
16:41:33 <gnorth> DuncanT: I'm not questioning that the backends don't "work fine", but whether the lack of control over the disk names will be a barrier to adoption in some enterprise environments
16:41:40 <winston-d> if downgrade only makes your admin angry, but cinder still works. i don't care.
16:42:00 <gnorth> RIght, thingee: my point 2 - about the migration?
16:42:24 <zhiyan> winston-d: agree
16:42:31 <DuncanT> gnorth: Some things that are a 'barrier to entry' are actually really useful in forcing an enterprise to think in terms of clouds rather than traditional IT
16:42:45 <jgriffith> DuncanT: +1
16:43:06 <DuncanT> gnorth: The thing about the cloud is that it frees you from having to nail everything down
16:43:08 <thingee> gnorth: yeah I guess I'm not understanding the problem you're solving with volume migration. If the cinder service is managing the namespace of the UUIDs
16:43:14 <thingee> what's the problem?
16:43:55 <gnorth> thingee: I don't think it is a functional gap, it might just make code cleaner, I don't think it is the most important issue.
16:44:20 <thingee> ok, so then back to your first point...we have a bp to work towards the idea of import/export
16:44:22 <gnorth> Essentially, to copy a backend disk from place A to B (i.e. two storage pools in the same controller), and then just update the driver_id
16:44:27 <avishay> thingee: i think he meant that instead of renaming the volume to match the UUID like my current code does, you could just change the mapping name, correct gnorth ?
16:44:32 <gnorth> yeah
16:44:38 <thingee> ok
16:44:59 <avishay> jgriffith: import volume will rename the volume on the backend?
16:45:16 <thingee> I can't speak for myself about this feature really. I'd be more curious if there is a demand for this. ML maybe?
16:45:18 <jgriffith> avishay: if supported yes, but not required
16:45:29 <thingee> yes and optional seems fine to me
16:45:38 <avishay> jgriffith: OK
16:45:46 <gnorth> jgriffith: How would it work if rename is not supported?
16:46:02 <hemna_> back...(sorry guys we were doing an internal preso)
16:46:25 <winston-d> hemna_: you missed a lot of interesting discussion
16:46:32 <hemna_> doh :(
16:46:34 <gnorth> So, the clients I am speaking about are getting into cloud (and OpenStack), but not jumping in with both feet - they will want to run PoC first etc.  It is valuable to show that we can integrate with their policies and ways of doing things.
16:46:50 <gnorth> I agree that if nobody ever touched the SAN backend, we wouldn't need this feature.
16:47:09 <gnorth> But my clients do so a lot, even with cloud deployments, mostly to exploit features that are not yet exposed in the cloud.
16:47:19 <DuncanT> gnorth: There is a cost to maintaining and testing a feature that is useless to 99.9% of all deployments
16:47:25 <gnorth> Yes, I agree.
16:47:32 <avishay> I would also point out that people are using OpenStack for IaaS on things much smaller than a cloud (e.g., a rack or two)
16:48:26 <gnorth> I would be happy with admin-metadata and then we could put this into Storwize driver and have it as an optional (downgrade-breaking) feature, if folks don't want this driver_id field.
16:48:57 <gnorth> But as I say, it is really a question of whether this really is a useless feature for 99.9% of all deployments.
16:49:19 <gnorth> And if ec2_id wasn't an integer, I'd be able to use that. :-)
16:49:35 <DuncanT> admin-metadata has many possible driver-specific usages, so I'd say that was a better name
16:49:38 <thingee> gnorth: I've just never heard this problem brought up, but I'm mostly coming from a public cloud case
16:49:46 <winston-d> gnorth: i believe it's a useful feature for a private cloud that wants to migrate existing volumes into cinder
16:49:56 <bswartz> DuncanT +1
16:49:58 <gnorth> For purpose built cloud I agree it is a different story
16:49:58 <zhiyan> for the real cloud, 99.9% useless, but for traditional IT, it's a valuable thing
16:50:15 <jgriffith> DuncanT: I think I'm much more comfortable with that as well
16:50:30 <jgriffith> haha... real cloud, pretend cloud, my cloud
16:50:31 <winston-d> zhiyan: private cloud _is_ a real cloud as well
16:50:34 <jgriffith> it's all cloud
16:50:45 <jgriffith> winston-d: thank you!!
16:50:57 <jgriffith> Ok... are we ready to move on to the next topic?
16:51:01 <DuncanT> There have been a few use-cases where generic metadata would be useful to drivers (maybe even jgriffith's blocksize stuff I was talking to him about...)
16:51:05 <jgriffith> Do we have a next topic?
16:51:09 <gnorth> OK, so what is the conclusion here?
16:51:12 <jgriffith> DuncanT: :)
16:51:13 <dosaboy> o/
16:51:21 <jgriffith> DuncanT: I just solved that by adding provider info :)
16:51:23 <thingee> DuncanT, jgriffith: just like the capability stuff (which I still need to do)...the keys should be figured out in the dev sphinx doc and approved on a commit basis so we don't get discrepancies
16:51:26 <jgriffith> DuncanT: but that's a good point
16:51:27 <winston-d> dosaboy: not so fast. :)
16:51:36 <DuncanT> gnorth: Call it 'driver metadata' or something
16:51:40 <jgriffith> thingee: +1
16:51:59 <jgriffith> DuncanT: how about just admin_metadata like you suggested earlier?
16:52:06 <DuncanT> gnorth: Add an extension API to query / modify it
16:52:09 <DuncanT> jgriffith: +1
16:52:11 <zhiyan> winston-d: jgriffith: the use case and operation model are a little different IMO
16:52:13 <gnorth> Just as a single String(255) field?
16:52:15 <jgriffith> So it's not isolated in use or interpretation
16:52:23 <jgriffith> zhiyan: I agree with you
16:52:29 <gnorth> Or as key/value pairs?
16:52:31 <jgriffith> zhiyan: that's kinda the whole point :)
16:52:35 <gnorth> Like today's metadata
16:52:36 <thingee> gnorth: you'd make me so happy if you had separate commits on the backend work, and front api work. just sayin'
16:52:54 <gnorth> thingee: Yes, that is in the staging section of the design, and is the plan.
16:52:55 <DuncanT> A single string is easier on the DB, but less flexible...
16:52:56 <winston-d> thingee: noted
16:53:07 <DuncanT> I've no strong preferences
16:53:15 <gnorth> Who else might use it other than the driver?
16:53:18 <DuncanT> Though if it was me I'd go single string
16:53:25 <jgriffith> gnorth: who knows :)
16:53:32 <gnorth> I worry that it could get stomped on if a single string, but agree, earlier on the db
16:53:37 <gnorth> easier, rather
16:53:48 <thingee> gnorth: I'm still curious if there's interest if brought up on the ML
16:53:53 <thingee> likely no one will respond, sigh
16:53:58 <winston-d> key/value can be much more flexible, if someone else would like to utilize that to achieve other goals.
16:54:15 <xyang_> winston-d: +1
16:54:15 <DuncanT> gnorth: I'm hoping that people who want magic metadata values can be persuaded to use admin_metadata instead
16:54:15 <jgriffith> Ok, so I think here's where we're at
16:54:24 <thingee> 6 min warning
16:54:36 <jgriffith> Moving away from the idea of the special columns etc in favor of some form of admin metadata
16:54:48 <jgriffith> still have some questions about whether single string is best or K/V pairs
16:54:54 <jgriffith> I don't object to either
16:54:56 <gnorth> Add an "admin-only" flag to VolumeMetadata to filter it?
16:55:14 <jgriffith> gnorth: oh for sure!  Otherwise we'd use the existing volume-metadata :)
16:55:24 <jgriffith> Everybody in agreement on that?
16:55:30 <gnorth> I mean, rather than adding a second set of metadata and APIs - Administrators see more
16:55:35 <gnorth> And can set that flag
16:55:38 <winston-d> i'm fine
16:55:43 <jgriffith> gnorth: wait... now you lost me again
16:55:48 <zhiyan> +1, but maybe we need to filter some sensitive data
16:55:53 <avishay> sounds good to me
16:55:56 <jgriffith> gnorth: you mean use the existing metadata with a flag?
16:56:03 <jgriffith> gnorth: not sure how you would do that
16:56:16 <jgriffith> gnorth: and still allow the user to set metadata as well
16:56:22 <gnorth> OK, so you want a new table, new APIs?  Treat them completely separately?
16:56:25 <jgriffith> gnorth: seems like a separate entitity would be better
16:56:28 <gnorth> OK
16:56:33 <jgriffith> gnorth: I think so... everyone else?
16:56:42 <gnorth> How did you handle downgrade when you added metadata - just threw it away?
16:57:00 <thingee> oh boy, metadata clean up
16:57:05 <jgriffith> gnorth: well, we've never had a downgrade where there wasn't a metadata column
16:57:11 <jgriffith> but yes, that's what we'd be doing here
16:57:25 <jgriffith> That's the price you pay if you have to downgrade IMO
16:57:29 <avishay> I think if it could be done elegantly as one metadata column, that would be cool, otherwise two separate
16:57:43 <jgriffith> Since the "previous" migration won't know or have any use for it anyway
16:57:49 <gnorth> I will throw some ideas around, thanks everyone.
16:57:54 <jgriffith> gnorth: thank you
16:57:58 <DuncanT> throw it away +1
16:58:00 <thingee> we have to be pretty careful with this. we could end up with orphan metadata and admins would be afraid to remove anything without doing a proper lengthy audit
16:58:04 <bswartz> 2 minutes left
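[Rough shape of where the discussion lands: a separate key/value table plus a policy-gated admin extension, rather than overloading user-visible metadata. Table, class, and policy names here are illustrative assumptions, not a settled schema.]

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    BASE = declarative_base()  # stand-in for cinder.db.sqlalchemy.models.BASE


    class VolumeAdminMetadata(BASE):
        """Key/value pairs on a volume, visible only through an admin-only API.

        Drivers could keep backend names, block sizes, or other driver-private
        bookkeeping here; on downgrade the table is simply dropped.
        """
        __tablename__ = 'volume_admin_metadata'
        id = Column(Integer, primary_key=True)
        volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
        key = Column(String(255))
        value = Column(String(255))

    # The extension API would be gated by policy, e.g. an admin-only rule such as
    # "volume_extension:volume_admin_metadata": "rule:admin_api" (name assumed),
    # so regular tenants never see or set these keys.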
16:58:13 <jgriffith> #topic open discussion
16:58:14 <winston-d> gnorth: thx for handling this tough case :)
16:58:23 <kartikaditya_> I have a new bp https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
16:58:44 <jgriffith> kmartin: noted... thanks
16:59:00 <thingee> gnorth: yes thanks for your input on other use cases
16:59:13 <kmartin> should we add "Extend Volume" to the minimum features required by a new cinder driver for the Icehouse release?
16:59:54 <jgriffith> anyone have input to kmartin 's question above?
17:00:01 <winston-d> kmartin: +1
17:00:07 <DuncanT> On that topic I'm considering starting the 'your driver doesn't meet minimum specs' removal patch generation process soon. I'm aware I've got to split this with thingee... anybody any comments?
17:00:07 <jgriffith> I believe that's in line with what we agreed upon for a policy
17:00:14 <dosaboy> kartikaditya_: that is along similar lines to what I was thinking with https://blueprints.launchpad.net/cinder/+spec/rbd-driver-optional-iscsi-support
17:00:15 <DuncanT> kmartin: +1
17:00:19 <jgriffith> DuncanT: I think you've been beat
17:00:20 <thingee> so jdurgin brought up to me that we should probably try to reach maintainers better on requirement changes.
17:00:27 <kmartin> I'll update the wiki today...thanks
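[For reference on kmartin's question: the driver-facing surface of "Extend Volume" is one method on the driver. A minimal sketch; the backend call is a hypothetical placeholder each driver maps to its own API.]

    class ExampleDriver(object):
        def extend_volume(self, volume, new_size):
            """Grow an existing backend volume to new_size (in GB)."""
            self._array.resize_vdisk(volume['name'], new_size)  # hypothetical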
17:00:31 <thingee> maybe at the very least we can mention it on the ml besides irc meeting?
17:00:36 <jgriffith> kmartin: thingee fair point
17:00:37 <DuncanT> jgriffith: Bugger, haven't seen any yet
17:00:46 <jgriffith> kmartin: thingee ML is the only good vehicle I can think of though
17:00:58 <jgriffith> DuncanT: I think thingee was starting to tackle that
17:00:58 <thingee> half the drivers we support now are going to get an email from me about not supporting what's needed in H
17:01:16 <DuncanT> jgriffith: No problem, we agreed to split them :-)
17:01:21 <jgriffith> Ok, we're out of time and I have another meeting :(
17:01:25 <jgriffith> #cinder :)
17:01:28 <thingee> thanks everyone
17:01:30 <kartikaditya_> dosaboy: No, this is more of supporting upcoming datastores that ESX/VC can manage
17:01:33 <jgriffith> #end meeting
17:01:40 <kartikaditya_> damn...
17:01:42 <jgriffith> #endmeeting cinder