16:00:24 #startmeeting cinder
16:00:25 Meeting started Wed Jul 17 16:00:24 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:29 The meeting name has been set to 'cinder'
16:00:51 avishay: thingee DuncanT hemna_ kmartin
16:01:04 xyang_: howdy
16:01:08 hello
16:01:12 o/
16:01:22 zhiyan: hey ya
16:01:29 jgriffith: hi!
16:01:36 So there *were* some topics this morning, but it appears they've been removed :)
16:01:54 hi all!
16:01:57 hi all~
16:01:58 winston-d: hey
16:02:04 avishay: morning/evening
16:02:10 jgriffith: morning
16:02:22 jgriffith: evening :)
16:02:24 So there were some proposed topics, but they've been removed
16:02:39 Does anybody have anything they want to make sure we hit today before I start rambling along?
16:02:42 Hey all
16:02:50 Howdy!
16:02:51 I've got nothing
16:02:58 yes, can we talk about 'R/O' volume support for the cinder part?
16:03:10 zhiyan: indeed...
16:03:19 zhiyan: if there are no objections, we can start with that actually
16:03:24 jgriffith: I only saw one thing tacked onto the july 10th agenda =/
16:03:37 jgriffith: gnorth has a topic too
16:03:37 I'm curious to know what the current state of multi-attach is
16:03:42 thingee: yeah, the local volume thing?
16:03:49 hemna and I should be able to attend the last 30 minutes
16:03:49 thanks, first i want to make sure: is there an existing BP for that?
16:03:55 Introduce the volume-driver-id blueprint - gnorth from history
16:03:56 bswartz: check the bp, no real update as of yet
16:04:05 okay that's what I suspected -- just checking
16:04:11 Introduce the volume-driver-id blueprint - gnorth from WIKI history
16:04:17 #topic R/O attach
16:04:24 zhiyan: what's on your mind?
16:04:31 Hey, I'm on.
16:04:32 jgriffith: https://blueprints.launchpad.net/cinder/+spec/volume-driver-id volume-driver-id
16:04:37 jgriffith: that was the only thing
16:04:49 thingee: thanks :)
16:04:53 sorry, i don't know about that either.
16:05:01 bswartz: yes, multiple-attaching
16:05:14 we just discussed that on the mailing list
16:05:52 zhiyan: you mentioned you wanted to talk about R/O attach?
16:06:21 jgriffith: as we talked about before (last week?) IMO, if we don't combine multiple-attaching with R/O attaching, then i think R/O attaching for the cinder part may be just an API status management level change..
16:06:36 do you think so?
16:07:02 and pass the 'R/O' attaching flag to the driver layer
16:07:46 jgriffith, we are doing a preso at the moment....
16:07:54 and probably the nova side needs to do a lot of things, in particular hypervisor driver code
16:08:02 zhiyan: yep, understood. I realize you're in a bit of a rush to get R/O and don't want to wait
16:08:05 https://blueprints.launchpad.net/cinder/+spec/read-only-volumes
16:08:17 zhiyan: I don't really have an issue with doing the R/O work separately
16:08:27 zhiyan: is that what you're getting at?
16:08:59 zhiyan: that work is in progress but has been rejected
16:09:00 i'm ok if we separate the r/o attaching and multiple attaching changes
16:09:22 zhiyan: I'm just trying to get to exactly what you're asking/proposing
16:09:28 address r/o attaching first, then multiple-attaching IMO
16:09:43 zhiyan: yeah, that's happening already
16:09:51 see the bp thingee referenced
16:10:02 who is developing the r/o attaching feature now?
16:10:05 also the review: https://review.openstack.org/#/c/34723/
16:10:06 +1 these are already separate bps; they should be separate commits for sanity
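The idea raised above, that read-only attach on the Cinder side is mostly status/bookkeeping plus handing an access-mode flag down so the consumer (e.g. Nova) can honour it, can be illustrated with a minimal sketch. This is not Cinder's actual code; the class, field, and function names here are assumptions for illustration only.

```python
# Minimal sketch (not Cinder's actual code) of read-only attach as a
# state-management change plus an access-mode flag in the connection info.

class VolumeAttachError(Exception):
    pass


def attach_volume(volume, instance_uuid, read_only=False):
    """Mark a volume as attached and record the requested access mode."""
    if volume['status'] != 'available':
        raise VolumeAttachError('volume %s is not available' % volume['id'])

    # Status/bookkeeping change: remember how the volume was attached.
    volume['status'] = 'in-use'
    volume['attach_status'] = 'attached'
    volume['attached_mode'] = 'ro' if read_only else 'rw'
    volume['instance_uuid'] = instance_uuid
    return volume


def initialize_connection(volume, connector):
    """Return connection info; the consumer is expected to honour access_mode."""
    return {
        'driver_volume_type': 'iscsi',   # illustrative transport only
        'data': {
            'volume_id': volume['id'],
            'access_mode': volume.get('attached_mode', 'rw'),
        },
    }


if __name__ == '__main__':
    vol = {'id': 'vol-1', 'status': 'available'}
    attach_volume(vol, 'instance-1', read_only=True)
    print(initialize_connection(vol, connector={'host': 'compute-1'}))
```

As the discussion notes, the harder part (actually enforcing read-only at the hypervisor) would live on the Nova side, not in this bookkeeping.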
16:10:07 ok
16:10:10 https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
16:10:41 zhiyan: so if you have an interest you should review and add input :)
16:10:43 oh, i need to check it
16:10:50 :) just didn't know that before
16:10:54 zhiyan: :)
16:10:56 yep
16:11:07 there's a ton of in-flight stuff right now, hard to keep track :)
16:11:18 Ok... anything else on R/O ?
16:11:31 no, but i have one for multi-attach-volume,
16:11:39 #topic multi-attach
16:12:04 jgriffith: sorry i'm late, i had a topic to discuss if there is time :)
16:12:21 dosaboy: sure... I'll hit you up in a few minutes
16:12:31 dosaboy: add it to https://wiki.openstack.org/wiki/CinderMeetings
16:12:33 dosaboy: we never run out of time in these meetings!
16:12:37 zhiyan: so there was some confusion introduced because of multiple bp's here
16:12:39 heh
16:12:40 seems that bp is different from what i had in mind. I will check it later...sorry
16:12:41 bswartz: now that's funny!
16:12:41 doing it now
16:12:50 zhiyan: https://blueprints.launchpad.net/cinder/+spec/shared-volume
16:13:11 yes, rongze is creating it
16:13:49 ok, let me check them, and ask/sync with you offline.
16:13:56 zhiyan: there's also https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume
16:14:04 move on to volume-driver-id?
16:14:08 zhiyan: kmartin we'll want to look at combining these I think
16:14:14 avishay: yes sir
16:14:23 anything else on multi-attach?
16:14:25 bswartz: ?
16:14:28 no, thanks
16:14:39 no, just that NetApp has a use case for it
16:14:41 jgriffith: yeah I'm confused about there being two
16:14:54 we're interested in multi-attach and writable
16:15:02 thingee: :) I'll work on getting that fixed up with kmartin and the folks from Mirantis
16:15:05 as are we (IBM)
16:15:12 thingee: on the bright side up until last week there were 3 :)
16:15:39 good thing is that despite the original comments added to my proposal by others it seems the majority is interested in this landing
16:15:48 Ok, it's high priority for H3
16:15:56 I'll get the BP's consolidated and cleaned up this week
16:16:00 we'll make it happen
16:16:00 thingee: the shared-volume bp is 100 years old~
16:16:09 hehe
16:16:14 heh
16:16:15 #topic volume-id
16:16:24 gnorth, ^
16:16:25 gnorth: hit it
16:16:30 Hokay, so this is a new blueprint
16:16:42 link?
16:16:52 https://blueprints.launchpad.net/cinder/+spec/volume-driver-id
16:17:15 Principally, it allows us to behave better when our Volumes are backed by shared storage that non-OpenStack users are using
16:17:22 i.e. SAN storage controllers etc.
16:17:39 gnorth: so on the surface I have no issue with this, think it's a good idea
16:17:47 It is annoying that we have to embed the Cinder Volume UUID in the disk names - that isn't great for administrators etc.
16:17:51 +1
16:18:01 So this BP adds it, and a reference implementation for Storwize (I'm IBM)
16:18:14 I would like to discuss the database upgrade/downgrade scenarios
16:18:22 Since this could be something of a one-way street
16:18:38 nice feature
16:18:40 If I have Volumes whose names are disconnected from the SAN name, and are using the driver_id
16:18:44 There isn't really a good way to downgrade
16:19:20 makes sense to me, sounds good
16:19:42 gnorth: I know we discussed this privately but I forgot the conclusions...could we rename to UUID on downgrade?
16:19:53 gnorth: "disconnected from the SAN name" / "using the driver_id"
16:20:11 Yeah, so today I create a Volume 223123-abd03-...
16:20:14 avishay: my vote is no as I stated before in the review
16:20:15 do you think it would be better if we give it a 'metadata' field? it's a dictionary
16:20:24 And it gets called (on the SAN) volume-223123-abd03...
16:20:25 gnorth:?
16:20:27 I don't like the idea of going around changing UUIDs of volumes
16:20:36 jgriffith: which review?
16:20:42 zhiyan - see design, security hole to use metadata
16:20:43 zhiyan: there is metadata
16:20:47 gnorth: you can specify that in the volume_name_template setting by the way
16:20:52 Yes
16:21:00 But the problem is I upgrade, and create a volume
16:21:04 Which is now called volume-geraint-disk
16:21:06 on the SAN
16:21:07 gnorth: so just change it to "${driver_name}-volume-uuid"
16:21:10 And driver_id references it
16:21:18 Now I downgrade to an earlier OpenStack
16:21:29 And I lose the mapping from Volume to SAN disk, because I lose my driver_id
16:21:54 jgriffith: I don't think your understanding of this BP is correct
16:21:55 gnorth: sure, but why's that a problem?
16:22:02 So I don't have a good solution for downgrade - Avishay's suggestion is to rename the SAN disk, not change the UUID.
16:22:05 bswartz: sure I don't
16:22:13 bswartz: enlighten me then
16:22:53 it sounds to me like it's a mechanism to allow drivers to name their internal things something not based on UUID, and we add a database column to maintain the key that the driver actually used
16:23:01 Precisely
16:23:06 so the driver can map the thing it created to the original UUID
16:23:16 bswartz: ok, what's the part that I don't understand?
16:23:28 there is no proposal to change UUIDs here
16:23:42 bswartz: ahh... that was WRT a comment earlier by avishay I believe
16:23:45 just a proposal to add a mapping layer from the UUIDs to something driver-defined
16:23:47 and...
16:24:04 jgriffith: i meant that on downgrade, we can rename the volume to the way it's named today, by UUID
16:24:29 avishay: that would work, but the downgrade logic might get a lot more complicated if it has to do non-database-related work
16:24:31 so if there's really a need to downgrade, the driver should take over maintaining that mapping internally
16:24:36 avishay: ahh... yeah, that seems like a foregone conclusion
16:24:51 winston-d: the driver will be downgraded as well :)
16:25:32 so it is tricky
16:25:34 avishay: you mean someone will manually rename using the UUID?
16:25:50 xyang_: I think that's the problem
16:25:59 xyang_: the downgrade can do this in the DB for us
16:26:10 but then we have all these volumes with inaccurate names
16:26:13 avishay: is it a good idea to use an end-user visible field to do that?
16:26:15 jgriffith: ok
16:26:45 winston-d: remember there's a difference between name and display-name
16:26:48 it seems to me that the only way to downgrade, if this feature is in use, would be to run a driver-specific process before the downgrade process which removes reliance on that column by renaming everything
16:27:07 Yes, so I could fail the downgrade if there were any non-null driver_ids
16:27:14 I think it might be safer to introduce a new column
16:27:30 winston-d: do you have an alternate suggestion?
16:27:37 "new" naming styles would be ported back to name in the backport
16:27:52 and new volumes created after the backport would revert back to old-style
16:27:53 I definitely think that this info should not be end-user visible...
16:28:16 DuncanT: I don't see why it would be... gnorth avishay am I correct?
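To make the two naming schemes being compared concrete: today the backend object name is derived purely from the volume UUID via the volume_name_template option (a real Cinder setting, default 'volume-%s'), while the proposal keeps a driver-maintained mapping and falls back to the UUID-derived name when it is absent. Everything below other than the option name (the in-memory "row", the driver_id field) is an illustrative assumption, not the blueprint's actual code.

```python
# Illustrative sketch of UUID-derived names vs. a driver-maintained mapping.

VOLUME_NAME_TEMPLATE = 'volume-%s'   # default volume_name_template value


def backend_name_from_uuid(volume_uuid):
    """Today's behaviour: the backend name is a pure function of the UUID."""
    return VOLUME_NAME_TEMPLATE % volume_uuid


def backend_name_with_mapping(volume_row):
    """Proposed behaviour: prefer the driver-maintained mapping if present,
    fall back to the UUID-derived name otherwise."""
    return volume_row.get('driver_id') or backend_name_from_uuid(volume_row['id'])


if __name__ == '__main__':
    row = {'id': '223123-abd03', 'driver_id': None}
    print(backend_name_with_mapping(row))    # volume-223123-abd03

    row['driver_id'] = 'geraint-disk'        # "friendly" name on the SAN
    print(backend_name_with_mapping(row))    # geraint-disk
```

The downgrade concern follows directly: once volumes exist whose backend name is only recoverable through the mapping, dropping that mapping leaves no way to get from UUID back to the SAN object.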
16:28:19 DuncanT - in some environments (the ones I work in), it is useful to know the backend reference
16:28:19 bswartz: ^^
16:28:21 For example
16:28:29 gnorth: Admin-only extension then
16:28:39 I may want to do things on my Storage Controller with my disk that I can't do via the OpenStack driver currently.
16:28:46 gnorth: Or some policy-controlled extension
16:28:56 Yes, that could be done
16:28:57 so I guess IMO for that I don't see why you can't just continue using UUID for the mapping
16:28:57 DuncanT: if we're talking about an extension, I'd rather not see a new column
16:29:08 or as metadata on the backend device if available etc
16:29:19 It is because there isn't metadata on the backend device
16:29:25 anything optional shouldn't mess with the model
16:29:25 gnorth: on yours :)
16:29:32 gnorth: so to back up...
16:29:36 sure
16:29:37 gnorth: avishay bswartz
16:29:48 thingee: maybe we need non-user-accessible metadata?
16:29:52 The only thing this solves is that you don't like using the UUID in the volume names on your devices?
16:30:04 if so that doesn't seem like a very compelling argument to me
16:30:10 jgriffith: that sounds about right
16:30:16 Other things it solves, which you can count as non-problems if you wish:
16:30:19 bswartz: that seems like a lame argument
16:30:33 1. I would like to (in the future) add the ability to import SAN volumes into Cinder, and not have to rename them.
16:30:35 it may be weak, but it's not without merit
16:30:43 gnorth: we already have a bp for that actually
16:30:59 gnorth: https://blueprints.launchpad.net/cinder/+spec/add-export-import-volumes
16:31:14 gnorth: rename SAN volumes at what level? back-end?
16:31:15 2. Since backend volume names must be unique (on my controller...), it might simplify some volume migration scenarios if we can decouple the UUID from the backend name, but I've not given that much thought.
16:31:29 gnorth: unique screams UUID to me :)
16:31:51 winston-d - Yes
16:31:59 so there are reasons that I converted all of the volumes to use UUIDs in the first place
16:32:03 In enterprise environments, your SAN administrator may have strict ideas about how they like to organise things
16:32:03 believe it or not :)
16:32:32 gnorth: perhaps, but would they rather have that or have something that works?
16:32:43 What is the flaw in my proposal?
16:32:47 ok... so I'll stop joking around
16:32:58 gnorth: no downgrade path
16:33:04 gnorth: I like your proposal WRT adding a driver ID column
16:33:08 I think that's a great idea
16:33:20 I don't like the idea you're proposing WRT the naming games
16:33:33 but bswartz and others seem to like it so that's fine
16:33:36 I'm outvoted
16:34:01 But as I said, I don't see the point in adding that sort of complexity, confusion, lack of downgrade path, etc
16:34:14 Now, we could make this feature optional - default behaviour would use UUIDs and so downgrade gracefully.
16:34:14 Just not seeing the benefit...
16:34:27 I'm surprised that you see a benefit in driver_id without the naming thing
16:34:31 gnorth: so if the admin didn't choose the right option they could never downgrade?
16:34:41 gnorth: if it's optional, we're not even talking about it touching the model
16:34:55 thingee: +1
16:34:55 Yes, if they specified in their config file that they wanted friendly names on the storage controller, they could never downgrade.
16:35:00 How often do downgrades really happen / work?
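The "one-way street" being debated is easiest to see in the shape of the database migration itself. Below is a rough sketch in the sqlalchemy-migrate style Cinder's migrate_repo version scripts used at the time (paired upgrade()/downgrade() functions); the column name and its placement on the volumes table are assumptions taken from the discussion, not the merged schema.

```python
# Rough sketch of the proposed driver_id migration (names are assumptions).

from sqlalchemy import Column, MetaData, String, Table


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    volumes = Table('volumes', meta, autoload=True)
    # Driver-maintained backend identifier for the volume.
    volumes.create_column(Column('driver_id', String(255), nullable=True))


def downgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    volumes = Table('volumes', meta, autoload=True)
    # Dropping the column discards the UUID -> backend-name mapping. Any
    # volume that relied on it would first need a driver-specific rename
    # back to the UUID-based name (the non-database work discussed above),
    # or the downgrade would have to refuse to run on non-null driver_ids.
    volumes.drop_column('driver_id')
```

This is why the suggestions in the log are either "rename everything on the backend before downgrading" or "fail the downgrade if any driver_ids are set": the schema change alone cannot preserve the information.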
16:35:04 extensions should deal with metadata, which as avishay pointed out should be some sort of admin-only metadata which we don't have yet
16:35:11 DuncanT: that's not really the point
16:35:18 Admin-only metadata would be fine for this also.
16:35:46 gnorth: BTW, driver ID is useful for a host of other things
16:35:47 +1 for admin-only metadata
16:35:50 gnorth: for example
16:35:59 gnorth: I have a customer with 50 SF clusters
16:36:10 gnorth: they're trying to find a volume uuid=XXXXX
16:36:22 gnorth: it would be helpful if they knew "which" cluster to go to
16:36:27 DuncanT: good question, but perhaps for another topic ;)
16:36:33 gnorth: then they just search for the UUID
16:36:58 So regardless...
16:37:05 jgriffith: can't they do that just using 'host', even with multiple back-ends?
16:37:09 gnorth: I'm keeping an open mind on the idea but I still can't justify the two points you raised earlier. We already have plans for importing volumes, which was your first point, and I'm not entirely sure about the second point of decoupling the UUID from the backend
16:37:10 Really I don't think we can do an upgrade that can never be downgraded
16:37:26 winston-d: ahh... interesting point
16:38:12 jgriffith: I think the driver_id column is actually the mapping column, not the ID of the driver
16:38:20 maybe it's poorly named
16:38:27 driver_id could really just be "driver metadata"
16:38:33 And by extension, admin-only metadata
16:38:33 bswartz: Oh
16:38:56 gnorth: yes, that should be a 'metadata dictionary'
16:38:58 jgriffith - you want to re-state your position? :-)
16:39:01 Ok...
16:39:02 to me, it seems _without_ driver-id or driver metadata, cinder should work fine. no?
16:39:07 If this is implemented as a policy-controlled extension with admin-only metadata, with a warning label that says "this is a one-way street", is that acceptable?
16:39:12 gnorth: well, you probably don't want me to
16:39:24 :)
16:39:35 Ok, I should let others discuss
16:39:38 I guess it is a question of how much we see enterprise environments like this as a target market for OpenStack
16:39:47 gnorth: easy there...
16:39:54 Not being able to name disks, or at least rename them, is going to be a problem.
16:39:55 gnorth: hey sorry, I know you're talking to jgriffith, but I asked earlier for you to explain further your second point of decoupling the UUID from the backend
16:40:00 gnorth: I think enterprises are a HUGE target for OpenStack
16:40:22 but I'm not buying the argument that they require "friendly" names on the backend devices and that UUIDs are unacceptable
16:40:35 gnorth: There are plenty of very much enterprise-class backends that work fine just now
16:40:40 again though, I'm fine if everybody else likes it
16:40:56 I'll let you all figure it out, but remember you have to come up with a downgrade strategy
16:41:33 DuncanT: I'm not questioning that the backends don't "work fine", but whether the lack of control over the disk names will be a barrier to adoption in some enterprise environments
16:41:40 if a downgrade only makes your admin angry but cinder still works, i don't care.
16:42:00 Right, thingee: my point 2 - about the migration?
16:42:24 winston-d: agree
16:42:31 gnorth: Some things that are a 'barrier to entry' are actually really useful in forcing an enterprise to think in terms of clouds rather than traditional IT
16:42:45 DuncanT: +1
16:43:06 gnorth: The thing about the cloud is that it frees you from having to nail everything down
16:43:08 gnorth: yeah I guess I'm not understanding the problem you're solving with volume migration. If the cinder service is managing the namespace of the UUIDs
16:43:14 what's the problem?
16:43:55 thingee: I don't think it is a functional gap, it might just make the code cleaner, I don't think it is the most important issue.
16:44:20 ok, so then back to your first point...we have a bp to work towards the idea of import/export
16:44:22 Essentially, to copy a backend disk from place A to B (i.e. two storage pools in the same controller), and then just update the driver_id
16:44:27 thingee: i think he meant that instead of renaming the volume to match the UUID like my current code does, you could just change the mapping name, correct gnorth ?
16:44:32 yeah
16:44:38 ok
16:44:59 jgriffith: import volume will rename the volume on the backend?
16:45:16 I can't speak for myself about this feature really. I'd be more curious if there is a demand for this. ML maybe?
16:45:18 avishay: if supported yes, but not required
16:45:29 yes and optional seems fine to me
16:45:38 jgriffith: OK
16:45:46 jgriffith: How would it work if rename is not supported?
16:46:02 back...(sorry guys we were doing an internal preso)
16:46:25 hemna_: you missed a lot of interesting discussion
16:46:32 doh :(
16:46:34 So, the clients I am speaking about are getting into cloud (and OpenStack), but not jumping in with both feet - they will want to run a PoC first etc. It is valuable to show that we can integrate with their policies and ways of doing things.
16:46:50 I agree that if nobody ever touched the SAN backend, we wouldn't need this feature.
16:47:09 But my clients do so a lot, even with cloud deployments, mostly to exploit features that are not yet exposed in the cloud.
16:47:19 gnorth: There is a cost to maintaining and testing a feature that is useless to 99.9% of all deployments
16:47:25 Yes, I agree.
16:47:32 I would also point out that people are using OpenStack for IaaS on things much smaller than a cloud (e.g., a rack or two)
16:48:26 I would be happy with admin-metadata and then we could put this into the Storwize driver and have it as an optional (downgrade-breaking) feature, if folks don't want this driver_id field.
16:48:57 But as I say, it is really a question of whether this really is a useless feature for 99.9% of all deployments.
16:49:19 And if ec2_id wasn't an integer, I'd be able to use that. :-)
16:49:35 admin-metadata has many possible driver-specific usages, so I'd say that was a better name
16:49:38 gnorth: I've just never heard this problem brought up, but I'm mostly coming from a public cloud case
16:49:46 gnorth: i believe it's a useful feature for a private cloud that wants to migrate existing volumes into cinder
16:49:56 DuncanT +1
16:49:58 For a purpose-built cloud I agree it is a different story
16:49:58 for the real cloud, it's 99.9% useless, but for traditional IT, it's a valuable thing
16:50:15 DuncanT: I think I'm much more comfortable with that as well
16:50:30 haha... real cloud, pretend cloud, my cloud
16:50:31 zhiyan: private cloud _is_ a real cloud as well
16:50:34 it's all cloud
16:50:45 winston-d: thank you!!
16:50:57 Ok... are we ready to move on to the next topic?
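To tie together the import discussion above ("if supported yes, but not required" and "how would it work if rename is not supported?"), here is a purely hypothetical sketch of the two paths: rename the pre-existing backend disk to the UUID-derived name when the backend can, otherwise record the original name in some admin-only mapping. The driver method and field names are invented for illustration and are not the add-export-import-volumes blueprint's API.

```python
# Hypothetical import flow: rename when possible, otherwise keep a mapping.

import uuid


def import_existing_volume(driver, existing_backend_name):
    volume = {'id': str(uuid.uuid4()), 'status': 'available'}
    target_name = 'volume-%s' % volume['id']

    if getattr(driver, 'rename_volume', None):
        # Backend supports rename: keep today's UUID-based naming scheme.
        driver.rename_volume(existing_backend_name, target_name)
        volume['provider_location'] = target_name
    else:
        # Backend cannot rename: remember the original name instead.
        volume['admin_metadata'] = {'backend_name': existing_backend_name}

    return volume


class FakeDriver(object):
    """Stand-in driver without rename support, just for the example run."""


if __name__ == '__main__':
    print(import_existing_volume(FakeDriver(), 'legacy-san-disk-42'))
```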
16:51:01 There have been a few use-cases where generic metadata would be useful to drivers (maybe even jgriffith's blocksize stuff I was talking to him about...)
16:51:05 Do we have a next topic?
16:51:09 OK, so what is the conclusion here?
16:51:12 DuncanT: :)
16:51:13 o/
16:51:21 DuncanT: I just solved that by adding provider info :)
16:51:23 DuncanT, jgriffith: just like the capability stuff (which I still need to do)...the keys should be figured out in the dev sphinx doc and approved on a commit basis so we don't have discrepancies
16:51:26 DuncanT: but that's a good point
16:51:27 dosaboy: not so fast. :)
16:51:36 gnorth: Call it 'driver metadata' or something
16:51:40 thingee: +1
16:51:59 DuncanT: how about just admin_metadata like you suggested earlier?
16:52:06 gnorth: Add an extension API to query / modify it
16:52:09 jgriffith: +1
16:52:11 winston-d: jgriffith: the use case and operation model are a little different IMO
16:52:13 Just as a single String(255) field?
16:52:15 So it's not isolated in use or interpretation
16:52:23 zhiyan: I agree with you
16:52:29 Or as key/value pairs?
16:52:31 zhiyan: that's kinda the whole point :)
16:52:35 Like today's metadata
16:52:36 gnorth: you'd make me so happy if you had separate commits for the backend work and the front-end api work. just sayin'
16:52:54 thingee: Yes, that is in the staging section of the design, and is the plan.
16:52:55 A single string is easier on the DB, but less flexible...
16:52:56 thingee: noted
16:53:07 I've no strong preferences
16:53:15 Who else might use it other than the driver?
16:53:18 Though if it was me I'd go single string
16:53:25 gnorth: who knows :)
16:53:32 I worry that it could get stomped on if a single string, but agree, earlier on the db
16:53:37 easier, rather
16:53:48 gnorth: I'm still curious if there's interest if brought up on the ML
16:53:53 likely no one will respond, sigh
16:53:58 key/value can be much more flexible, if someone else would like to utilize that to achieve other goals.
16:54:15 winston-d: +1
16:54:15 gnorth: I'm hoping that people who want magic metadata values can be persuaded to use admin_metadata instead
16:54:15 Ok, so I think here's where we're at
16:54:24 6 min warning
16:54:36 Moving away from the idea of the special columns etc in favor of some form of admin metadata
16:54:48 still have some questions about whether a single string is best or K/V pairs
16:54:54 I'm not opposed to either
16:54:56 Add an "admin-only" flag to VolumeMetadata to filter it?
16:55:14 gnorth: oh for sure! Otherwise we'd use the existing volume-metadata :)
16:55:24 Everybody in agreement on that?
16:55:30 I mean, rather than adding a second set of metadata and APIs - Administrators see more
16:55:35 And can set that flag
16:55:38 i'm fine
16:55:43 gnorth: wait... now you lost me again
16:55:48 +1, but maybe we need to filter some sensitive data
16:55:53 sounds good to me
16:55:56 gnorth: you mean use the existing metadata with a flag?
16:56:03 gnorth: not sure how you would do that
16:56:16 gnorth: and still allow the user to set metadata as well
16:56:22 OK, so you want a new table, new APIs? Treat them completely separately?
16:56:25 gnorth: seems like a separate entity would be better
16:56:28 OK
16:56:33 gnorth: I think so... everyone else?
16:56:42 How did you handle downgrade when you added metadata - just threw it away?
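The option the group leans toward above (a separate entity, key/value pairs, visible only to administrators) could look something like the SQLAlchemy sketch below. The table and column names are assumptions for illustration, not an agreed schema, and the choice of key/value over a single String(255) field follows the flexibility argument made in the log.

```python
# Minimal SQLAlchemy sketch of a separate admin-only key/value metadata table.

from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class VolumeAdminMetadata(Base):
    """Key/value metadata visible and writable only to administrators,
    parallel to (but separate from) the user-visible volume metadata."""
    __tablename__ = 'volume_admin_metadata'

    id = Column(Integer, primary_key=True)
    volume_id = Column(String(36), ForeignKey('volumes.id'), nullable=False)
    key = Column(String(255), nullable=False)
    value = Column(String(255))
```

Keeping it as a separate table rather than a flag on the existing metadata avoids mixing admin-only rows into the user-facing API, which is the concern raised just above.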
16:57:00 oh boy, metadata clean up
16:57:05 gnorth: well, we've never had a downgrade where there wasn't a metadata column
16:57:11 but yes, that's what we'd be doing here
16:57:25 That's the price you pay if you have to downgrade IMO
16:57:29 I think if it could be done elegantly as one metadata column, that would be cool, otherwise two separate
16:57:43 Since the "previous" migration won't know or have any use for it anyway
16:57:49 I will throw some ideas around, thanks everyone.
16:57:54 gnorth: thank you
16:57:58 throw it away +1
16:58:00 we have to be pretty careful with this. we could end up with orphaned metadata and admins would be afraid to remove anything without doing a proper lengthy audit
16:58:04 2 minutes left
16:58:13 #topic open discussion
16:58:14 gnorth: thx for handling this tough case :)
16:58:23 I have a new bp https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
16:58:44 kmartin: noted... thanks
16:59:00 gnorth: yes thanks for your input on other use cases
16:59:13 should we add "Extend Volume" to the minimum features required by a new cinder driver for the Icehouse release?
16:59:54 anyone have input on kmartin's question above?
17:00:01 kmartin: +1
17:00:07 On that topic I'm considering starting the 'your driver doesn't meet minimum specs' removal patch generation process soon. I'm aware I've got to split this with thingee... anybody any comments?
17:00:07 I believe that's in line with what we agreed upon for a policy
17:00:14 kartikaditya_: that is along similar lines to what I was thinking with https://blueprints.launchpad.net/cinder/+spec/rbd-driver-optional-iscsi-support
17:00:15 kmartin: +1
17:00:19 DuncanT: I think you've been beat
17:00:20 so jdurgin brought up to me that we should probably try to reach maintainers better on requirement changes.
17:00:27 I'll update the wiki today...thanks
17:00:31 maybe at the very least we can mention it on the ml besides the irc meeting?
17:00:36 kmartin: thingee fair point
17:00:37 jgriffith: Bugger, haven't seen any yet
17:00:46 kmartin: thingee the ML is the only good vehicle I can think of though
17:00:58 DuncanT: I think thingee was starting to tackle that
17:00:58 half the drivers we support now are going to get an email from me about not supporting what's needed in H
17:01:16 jgriffith: No problem, we agreed to split them :-)
17:01:21 Ok, we're out of time and I have another meeting :(
17:01:25 #cinder :)
17:01:28 thanks everyone
17:01:30 dosaboy: No, this is more about supporting upcoming datastores that ESX/VC can manage
17:01:33 #end meeting
17:01:40 damn...
17:01:42 #endmeeting cinder
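For reference on the "Extend Volume" minimum-feature question raised in open discussion, the hook being talked about is the driver-level extend_volume(volume, new_size) call. The stub below is a bare-bones illustration; the LVM command shown and the helper names are examples only, not a prescribed implementation.

```python
# Illustrative stub of the "Extend Volume" driver hook under discussion.

class MinimalVolumeDriver(object):

    def extend_volume(self, volume, new_size):
        """Grow an existing backend volume to new_size (in GB)."""
        raise NotImplementedError()


class ExampleLVMDriver(MinimalVolumeDriver):
    """Example only: an LVM-flavoured driver implementing the hook."""

    def __init__(self, vg_name, execute):
        self.vg_name = vg_name
        self._execute = execute   # injected command runner, e.g. a subprocess wrapper

    def extend_volume(self, volume, new_size):
        device = '%s/%s' % (self.vg_name, volume['name'])
        # Resize the logical volume to the requested size in GB.
        self._execute('lvextend', '-L', '%dG' % new_size, device)
```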