16:00:28 #startmeeting cinder
16:00:29 Meeting started Wed Aug 14 16:00:28 2013 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:33 The meeting name has been set to 'cinder'
16:00:35 Hey everyone
16:00:44 * bswartz waves hello
16:00:54 Hey
16:00:55 hi
16:00:56 hi
16:00:59 #link https://wiki.openstack.org/wiki/CinderMeetings
16:01:01 hi
16:01:06 hey garyk
16:01:10 hi
16:01:10 hello
16:01:14 avishay?????
16:01:16 Hello all.
16:01:18 no avishay
16:01:21 jgriffith: thanks for looking at the code
16:01:29 garyk: no worries
16:01:31 o/
16:01:34 thingee: :)
16:01:37 hi
16:01:40 hi
16:01:42 You get to go first
16:01:53 #topic API extensions using metadata
16:01:57 excellent
16:02:03 good
16:02:10 so some meetings ago we spoke about storing extension data in metadata
16:02:35 since extensions are optional features, changing the model and risking unused columns seemed to not make sense
16:02:39 is this the metadata that's supposed to be for end users to tag their volumes?
16:02:54 end user = tenant
16:02:57 key/value pairs for the volume.
16:03:00 bswartz: wrt https://review.openstack.org/#/c/38322/
16:03:18 actually, i agree with using 'metadata' for extensions, but in this R/O volume case i just think the 'readonly' property of a volume should not be put into metadata but into the volume table/model. for the upcoming multiple-attach feature change i'd like to keep 'readonly' as a property of the volume, LIKE others such as 'instance_uuid', 'attach_time', etc.; those attach-related properties will all be moved to a dedicated table.
16:03:40 storing whether a volume is readonly. Not in all cases will we have backend solutions that support this. IMO this is optional, and so I feel like the model shouldn't be changed
16:03:54 otherwise you end up with columns being unused and, worst of all, the volume table growing :(
16:03:59 thingee: +1
16:04:22 thingee: I agree with your statements, but I'm on the fence with this particular change
16:04:28 we already have other patches aiming in this direction and we should continue on this path with new stuff coming in
16:04:43 thingee: +1
16:04:56 zhiyan: there is a volume-acl being implemented that concerns the readonly property
16:05:05 Can we differentiate between what the API calls metadata and this please?
16:05:05 And I don't think it's zhiyan's fault at all. It was discussed in a meeting, but there is no documentation about this. I think that needs to be improved, and it's something I'm willing to take on to help people writing extensions.
16:05:32 DuncanT-: back to the discussions about QoS
16:05:34 I've no problem storing it in some k/v table, but volume metadata is already a thing, and I don't think changing that is a good idea
16:05:44 DuncanT-: +1
16:05:44 DuncanT-: not putting dedicated columns in the DB/Volume obj
16:05:59 DuncanT-, jgriffith: so I think we talked about admin metadata at one point? Just can't be changed by a user
16:06:01 but using abstracted K/Vs, aka meta
16:06:01 if you really prefer to keep 'readonly' saved in 'metadata', i would rather remove that extension in this r/o attach case and, as avishay said, use the standard 'update' api instead of an extension for the 'readonly' flag update.
16:06:08 I think that would make the distinction
16:06:08 thingee: indeed
16:06:31 thingee: did my simple explanation attempt line up with your thoughts as well?
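The two options being argued here fit in a few lines of Python. What follows is a minimal sketch, not Cinder code: plain dicts stand in for the database tables, and every name is illustrative.

    # Option A (zhiyan's preference): a dedicated column on the volume model.
    # Every deployment carries the column, even backends that never use it.
    volume_row = {"id": "vol-1", "status": "available", "readonly": False}

    # Option B (thingee's proposal): the extension keeps its flag in the
    # volume's key/value metadata, so the core model gains no optional column.
    volume_metadata = {}

    def set_readonly(volume_id, readonly):
        """Persist the extension's flag as a key/value pair."""
        volume_metadata.setdefault(volume_id, {})["readonly"] = str(readonly)

    set_readonly("vol-1", True)
    print(volume_metadata)  # {'vol-1': {'readonly': 'True'}}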
16:06:34 jgriffith: I've no problem with the concept, I'd like to kill the name 'metadata' ASAP to avoid confusion
16:06:37 hi, sorry i'm late
16:06:44 DuncanT-: I'm fine with that
16:06:52 DuncanT-: definitely
16:07:35 zhiyan: so if we ended up using some idea of metadata, I'm with the extension having its own change-readonly flag. I think other people would agree that wouldn't belong in the volume api update
16:08:07 it wouldn't belong in the core api, especially if it's optional
16:08:27 thingee: so that's a good distinction point IMO
16:08:46 err... "point of distinction"
16:09:14 jgriffith: if it was part of the volume table, I'd say leave it in the core api. but in order for it to have its own column(s) it would have to be a mandatory feature.
16:09:29 thingee: yeah, I see your point
16:09:41 thingee: when I went through the patch I looked at it differently though
16:09:54 thingee: but I think what you're proposing makes perfect sense
16:10:11 ie I *did* look at it as a core feature
16:10:16 thingee: but as i mentioned above, in this case i don't think 'readonly' should be separated from the volume model, since it's a status of the volume. multiple-attach will separate them all out to a dedicated table, which keeps things consistent IMO. i don't think saving 'readonly' to 'metadata' while others such as 'instance_uuid' are saved to another table is a good idea.
16:10:21 ok great. zhiyan I think you had one last concern with the readonly column remaining in the volume table?
16:10:24 ah there it is :)
16:10:29 i thought it was a core feature as well...why is it optional?
16:11:09 avishay: if it's not optional, it should be in core, not in contrib.
16:11:27 thingee: well, maybe we need to figure out how to define that better
16:11:30 I've always wondered why, in general, volume actions are in contrib
16:11:35 admin actions rather.
16:11:52 thingee: core features can be policy-based extensions I believe
16:12:04 https://review.openstack.org/#/c/39683/ the readonly can be defined by the permission.
16:12:06 thingee: i thought it should be in api/v2/volumes.py : update()
16:12:11 thingee: ok, if so, i will remove that extension api and ask clients to use the standard update api to change the 'readonly' flag.
16:12:37 vincent_hou: I really don't like that direction at all
16:12:38 by checking the permission level, you can tell whether it is readonly or not.
16:12:40 personally
16:12:42 folks, do you think that is acceptable?
16:12:59 is it acceptable?
16:13:06 well ok, step back a second. is this a core feature or not? Is every backend solution really going to be able to provide this feature?
16:13:18 Every hypervisor can
16:13:21 AFAICT
16:13:22 yes
16:13:23 thingee: so that's the million dollar question
16:13:36 Backend enforcement is a grey area
16:13:38 thingee: my thought was yes, because the hypervisor can implement it
16:13:40 BUT
16:13:45 mornin
16:13:46 i thought it was optional for drivers, but all hypervisors would support it
16:13:57 thingee: I'm also good with the idea of graduating it later
16:14:04 if all cinder backends can support it, that would be better, but as DuncanT mentioned, the hypervisor can support it.
16:14:14 jgriffith: My fear with that is migrating to the volume table then.
16:14:15 jgriffith: yes
16:14:15 zhiyan: all backends CAN'T support it
16:14:29 jgriffith: I feel like if we see ourselves graduating it later, we should just let it be a core feature.
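The "admin metadata" idea thingee floats above — key/value data a tenant can read but only an admin can change — could be sketched like this. The class and the context check are hypothetical, not the eventual Cinder implementation.

    class AdminMetadata:
        """Key/value store: readable by anyone, writable only by admins."""

        def __init__(self):
            self._data = {}

        def get(self, key):
            # Visible to the tenant, e.g. so they can see a volume is R/O.
            return self._data.get(key)

        def set(self, context, key, value):
            # Rejected unless the caller holds an admin context.
            if not context.get("is_admin"):
                raise PermissionError("admin metadata is read-only for users")
            self._data[key] = value

    meta = AdminMetadata()
    meta.set({"is_admin": True}, "readonly", "True")
    assert meta.get("readonly") == "True"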
16:14:29 all hypervisors apparently can
16:14:34 avishay: i mean the hypervisor, nova side, not cinder
16:14:35 thingee: hmmm
16:14:52 avishay: thingee zhiyan but the problem is that it's not implemented yet
16:15:02 ie on the nova/hypervisor side
16:15:03 jgriffith: yes, i will do that
16:15:09 what's the chance of the nova code making havana, at least for libvirt?
16:15:12 jgriffith: after this cinder server side is done
16:15:14 and we're too late in the cycle to get those changes in, I believe
16:15:21 jgriffith: so if it's not in nova, we shouldn't merge.
16:15:23 zhiyan: I don't think that will be possible
16:15:27 jgriffith: we can be ready to merge.
16:15:36 jgriffith: so we need to speed up review/landing, IMHO
16:15:46 we can merge now and if the nova code doesn't make it we can revert?
16:16:03 zhiyan: sure, but if there's not a bp for the nova work already you're likely going to be too late
16:16:13 avishay: I'd rather not
16:16:15 https://review.openstack.org/#/c/34722/2
16:16:27 avishay: we can agree on exceptions for Cinder if need be, but not revert
16:16:34 avishay: I'd be fine with that if we weren't low on resources as it is for reviews that are likely to make it
16:16:35 jgriffith: ok
16:16:40 if it's the hypervisor, it's a state of the 'connection' to the volume instead of the state of the volume itself. do we want to be able to distinguish these two?
16:16:43 avishay: remember there are folks running trunk and that can make a mess for them
16:16:51 jgriffith: +1
16:16:56 winston-1: +1
16:17:04 winston-1: that brings up a good point
16:17:12 Nova are refusing to merge until the cinder part is merged. If we refuse until nova merges, we've a problem....
16:17:12 winston-1: I did something similar with the blocksize
16:17:22 DuncanT-: ha!
16:17:24 DuncanT-: heh
16:17:36 so let's not get off track here
16:17:39 you merge first, no you merge first!
16:17:41 let's back up
16:17:52 first we need to decide what we *want*
16:18:00 winston-1: we discussed this before: 'readonly' is a status of the volume, and 'attached_mode' is for an attach session of a volume.
16:18:28 zhiyan: but... attach info is used for attach, which is when r/o would be needed/checked
16:18:37 which it currently isn't
16:18:58 The only thing I don't like about using K/Vs or connect info
16:19:09 we need some way to communicate to the tennant it's R/O
16:19:17 otherwise silly things happen
16:19:57 admin-meta may be fine, but then we're saying that's all end-user visible (not modifiable)
16:20:04 jgriffith: what's 'tennant' ? sorry
16:20:08 haha... it's Read Only
16:20:15 zhiyan: end-user of openstack
16:20:24 Our customers :)
16:20:27 jgriffith: readonly
16:20:43 * jgriffith thought it was funny
16:20:51 and so did his dog
16:20:57 jgriffith: IMO, query the 'readonly' status of the volume from the db/model..
16:21:08 zhiyan: yes, understood
16:21:19 zhiyan: I think we're all clear on that :)
16:21:23 I'd like input from others
16:21:41 Specifically about it being a core function or not
16:21:53 or not core now, core later, etc
16:22:07 Unfortunately it's getting late in the cycle
16:22:26 Nobody has an opinion there?
16:22:27 as long as some backends can't support it we need to explain how those should treat R/O requests
16:22:33 I think if KVM supports it, and others can, it should be core. It all depends on what can realistically get into Havana.
16:22:36 jgriffith: if it's not going to make it into nova where they're in a ready state to merge, then we shouldn't merge.
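winston-1's distinction — 'readonly' as a state of the volume versus 'attached_mode' as a state of one connection — can be made concrete with a small sketch. Field names are illustrative and assume nothing about the actual patch.

    volume = {"id": "vol-1", "readonly": True}  # state of the volume itself

    def build_connection_info(volume):
        """Per-attach info a hypervisor would receive at connect time."""
        return {
            "volume_id": volume["id"],
            # The mode of *this* connection; under multi-attach one session
            # could be 'ro' while another to the same volume is 'rw'.
            "attached_mode": "ro" if volume["readonly"] else "rw",
        }

    print(build_connection_info(volume))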
16:22:37 bswartz: hypervisor
16:22:40 I think if we're going to support multi-attach, we need this as core at some point
16:22:48 at this point I think H is too late
16:22:48 thingee: I agree with that
16:22:49 how about just putting it in an extension, but saving 'readonly' to the volume table and not metadata?
16:22:52 and we should plan for I
16:23:06 zhiyan: isn't that what you already said?
16:23:19 hemna: +1
16:23:23 hemna: you give up too easy
16:23:25 the hypervisor approach doesn't address volumes which are read-only on the backend and can't be made writable
16:23:28 :)
16:23:30 :P
16:23:32 what's the benefit of having driver support for this if all hypervisors support it? two levels of read-only?
16:23:33 jgriffith: i can remove that from the extension, and ask users to use the standard update api.
16:23:34 bswartz: ?
16:23:42 bswartz: that's fairly easy to address actually
16:24:00 avishay: Belt and braces / defense in depth?
16:24:01 well the obvious solution is: don't do that, but I'm curious about a better solution
16:24:04 bswartz: indicate via the K/V structure "backend supported"
16:24:14 okay
16:24:20 then if not backend: hypervisor
16:24:29 DuncanT-: OK
16:24:44 there's a 3rd state though: backend can report r/o but backend cannot change r/o
16:24:51 Ok, it seems there's only two opinions being voiced here
16:24:53 DuncanT-: that's what i thought...might be a little overkill for my taste, but OK
16:24:53 I'd suggest that if a backend can't make things writeable, it should make them R/O, but then it's a pretty weird backend even by my standards in that case
16:24:57 1. Wait until I
16:25:06 2. Move forward with the proposed patch
16:25:12 DuncanT-: 'even by my standards' :)
16:25:15 I was hoping for an option 3
16:25:30 jgriffith: option 3 is store in metadata
16:25:44 yay!
16:25:48 and still get in for I
16:25:53 but you said the M word!!
16:25:56 Given we need to hit the hypervisors to get this to actually work, we can't call it core until most if not all of the hypervisor work is done
16:26:06 I'm not so ready to give up on H yet
16:26:11 but I'll have to take that offline
16:26:11 I'll buy DuncanT- a shot every time I say metadata.
16:26:18 metadata metadata metadata
16:26:22 and we need to figure out our approach before I can do anything there
16:26:24 haha
16:26:27 This is going to hurt....
16:26:35 I presume this would also affect brick's attach/detach support for both iSCSI and FC
16:26:46 hemna: yep
16:27:08 Sorry for jumping in late, but I'd like to ask a very basic question. What is the need to create a "read-only volume" when we have already defined snapshots?
16:27:17 Hmmm, given the amount of dependencies and complications arising, punting until I is growing on me
16:27:22 snapshots don't have much to do with it
16:27:27 DuncanT-, +1
16:27:31 caitlin-nexenta: You attach a snapshot
16:27:33 caitlin-nexenta: the end goal is multi-attach
16:27:37 Gah
16:27:45 caitlin-nexenta: starting with R/O volumes to do so
16:27:46 *Can't* attach a snapshot
16:27:51 caitlin-nexenta: you can't attach snapshots
16:28:09 ok we're losing focus
16:28:14 But you can clone snapshots.
16:28:24 Then they aren't read only
16:28:26 caitlin-nexenta: but then it's a volume and round and round we go :)
16:28:39 OK, decision time?
16:28:50 avishay: +1
16:29:08 Who wants to leave it in the volume table?
16:29:12 I vote to punt until summit discussion / I
16:29:14 besides zhiyan :)
16:29:23 DuncanT-: Not on the list
16:29:27 heh
16:29:42 think about the next multi-attach change
16:29:48 keep consistent
16:29:58 "1. Wait until I"
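jgriffith's fallback rule from 16:24 (indicate "backend supported" via the K/V structure, "then if not backend: hypervisor"), with bswartz's third state folded in, might look roughly like this. The key names are invented for illustration; nothing here is from the real patch.

    def choose_enforcement(volume_kv):
        """Decide who enforces R/O for an attach."""
        support = volume_kv.get("readonly_support")  # set by the driver
        if support == "backend":
            return "backend"              # the array itself refuses writes
        if support == "report-only":
            # bswartz's 3rd state: backend reports r/o but cannot change it,
            # so the hypervisor still has to enforce.
            return "backend+hypervisor"
        return "hypervisor"               # e.g. libvirt attaches read-only

    print(choose_enforcement({"readonly_support": "backend"}))  # backend
    print(choose_enforcement({}))                               # hypervisor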
Wait until I" 16:30:01 what's the chance of the libvirt support landing in H? 16:30:05 :P 16:30:15 DuncanT-: heh 16:30:19 avishay, 0 if the cinder patch doesn't land first 16:30:21 zhiyan: I understand that point, however I'm kind of in the opinion that if that changes something drastically we address it then 16:30:32 we'll also need a patch to change brick to support this 16:30:33 not sure, i need time, but block on review you know 16:30:40 ok, you guys are killing me 16:30:44 hemna: let me rephrase...given that the cinder patch lands tomorrow, what's the chance of libvirt support in H? 16:30:47 I declare this topic dead 16:31:00 avishay: I think it could get in 16:31:04 avishay, hard to tell, there are over 300 outstanding reviews in nova today 16:31:09 so i say give it a chance 16:31:16 If we merge an API into to cinder that flat out doesn't work, that's bad IMO 16:31:18 avishay: that's the spirit 16:31:37 Ok, moving along 16:31:38 And that is the case if we merge before nova merge 16:31:41 zhiyan: we can chat later 16:31:46 jgriffith: if we do see it a core feature, can we rethinking the columns. Just looking at the model changes now seemed not straight forward from outside perspective and overlap. 16:31:47 we'll figure something out 16:31:52 thingee: you too 16:32:05 thingee: I'm all for rethinking the columns 16:32:12 in fact I'm agreeing with you on that one 16:32:28 but everybody is busy arguing amongst themeselves about Nova and I etc 16:32:36 ok next topic 16:32:42 #topic migration 16:32:48 avishay: what's up? 16:33:17 avishay: ??? 16:33:18 so i have patches up for cinder being able to migrate in-use volumes, and also patches for cinderclient and nova to go along with it 16:33:47 * jgriffith is abstaining from the cinder patch at this point 16:33:51 the detached case code that was merged required drivers to implement rename_volume for migration to work, and i got rid of that 16:34:04 so now all drivers that have support in brick have migration for free 16:34:21 there are 2 dependencies though 16:34:35 What about the none-brick ones? 16:34:54 avishay, nice 16:35:06 DuncanT-: online migration via libvirt will work, but cinder can't copy data for detached if brick doesn't support 16:35:13 DuncanT-: they can override the copy function though 16:35:18 avishay: Cheers 16:35:22 avishay: maybe you should clarify by "brick" 16:35:30 brick attach/detach code 16:35:39 so iSCSI and FC is there, NFS and others is not 16:35:42 avishay: aka iscsi/fc 16:35:44 :) 16:35:49 avishay - can a storage vendor optimize migration for their devices? 16:35:49 thanks 16:36:05 caitlin-nexenta: yes - see here https://review.openstack.org/#/c/41046/ 16:36:12 so i have 2 dependencies 16:36:21 so we need connectors for nfs, iser, aoe, etc then 16:36:23 1. eharney and i are working out how to interface with novaclient 16:36:29 hemna: aoe is submmitted 16:36:36 hemna: I'm doing a nfs one 16:36:42 iser is in too IIRC 16:36:43 ok excellent 16:36:48 2. i need help from thingee on this https://review.openstack.org/#/c/40857/ 16:37:02 if you guys need help on the connectors...I'm here. 16:37:02 but the code is ready for whoever is interested to test, and for everyone to review 16:37:30 anyone works on RBD for brick? 16:37:42 dosaboy: ? 16:37:42 jdurgin: ? 16:37:44 winston-1: that model doesn't really *fit* 16:37:58 winston-1: how imment is it needed? 16:38:03 winston-1: but maybe dosaboy ? 
16:38:04 I am happy to work on it
16:38:05 ha
16:38:17 got a fair bit on already
16:38:18 dosaboy: this week :)
16:38:22 eeeek
16:38:22 RBD can override the driver's copy_volume_data (or whatever it's called) function with a simple 'cp' to get detached migration working
16:38:52 avishay: to clarify again though, you're talking about migrating to the same back-end, right?
16:39:02 jgriffith: absolutely not :)
16:39:05 avishay, what do you mean by detached migration? detached from a VM ?
16:39:05 avishay: can I ping you tomorrow on this?
16:39:11 hemna: yes
16:39:12 avishay: so LVM --> RBD
16:39:14 dosaboy: sure
16:39:16 ok
16:39:17 thx
16:39:34 avishay: and the reverse as well?
16:39:40 jgriffith: LVM vg A to LVM vg B, or LVM to storwize to RBD to whatever
16:39:52 jgriffith: two different cinder backends, no matter what the type
16:40:10 avishay: k, last time we chatted I thought that was NOT the case
16:40:12 nod.
16:40:16 jgriffith: yes it was
16:40:23 avishay: hmmm
16:40:25 avishay: nice!
16:40:37 jgriffith: you asked what the difference between migration and clone was
16:40:44 avishay: ?
16:40:49 eventually it would be nice to see if the backend had hints on migration. some backends can move volumes between themselves to avoid the dd/cp over the network.
16:40:54 jgriffith: clone is specifically in the same back-end, migration is moving the volume somewhere else
16:41:04 avishay: I'm fully aware of what clone is, thanks
16:41:11 hemna: drivers have the option to do it themselves
16:41:33 avishay, ok, are there hints to the driver that let it know what the destination is ?
16:41:33 jgriffith: i'm just saying that's what you asked last time
16:41:44 avishay: no, it's not but that's ok
16:41:53 hemna: the driver gets the name of the host and its capabilities
16:41:55 avishay: doesn't matter so long as I was wrong :)
16:42:15 jgriffith: yeesh... you also said you were a smart ass :)
16:42:30 whoooo... meeeee?
16:42:33 say moving from one 3par to another 3par. my driver can instruct the 3pars to do the work between themselves
16:42:44 * jgriffith thinks somebody is impersonating him
16:42:45 hemna: see here: https://review.openstack.org/#/c/41046/
16:42:50 avishay, thn
16:42:52 thnx
16:42:52 avishay - do you have a summary of the assumptions your patch is making? For example, are you assuming that copying the volume data is always a full copy?
16:43:19 caitlin-nexenta: yes, moving the entire volume from "here" to "there"
16:43:45 the interface is: cinder migrate <volume> <host> [--force-host-copy True]
16:44:09 The force-host-copy flag can be used to disable a driver's optimized version and use cinder/nova to copy
16:44:25 In case of a driver bug, for example, your data isn't stuck
16:44:33 Some storage servers have the ability to create what is effectively a remote thin clone, and be very lazy about how complete the migration is. What would we have to do to preserve that capability for our servers?
16:44:55 caitlin-nexenta: i think we should take that offline
16:45:03 No problem.
16:45:22 jgriffith: we good?
16:45:22 avishay, I'd like to ping you offline about this as well, to better understand the optimized mechanism.
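The dispatch avishay outlines — try the driver's optimized migration unless --force-host-copy is given, otherwise fall back to the cinder/nova host copy — reduces to something like the sketch below. The function names are stand-ins, not the API in review 41046.

    def driver_migrate_volume(volume, dest_host):
        """Optimized path; a real driver would tell its array to move the
        data itself (e.g. 3par-to-3par). Returning False means 'not handled'."""
        return False

    def generic_host_copy(volume, dest_host):
        print(f"host-copying {volume} to {dest_host}")

    def migrate(volume, dest_host, force_host_copy=False):
        # --force-host-copy disables the driver's optimized version, e.g. to
        # route around a driver bug so the data is never stuck.
        if not force_host_copy and driver_migrate_volume(volume, dest_host):
            return
        generic_host_copy(volume, dest_host)

    migrate("vol-1", "backend-b")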
16:45:27 I'm good
16:45:30 hemna: no problem
16:45:54 hemna: if you review my patch you will understand it ;)
16:46:05 I'm reading through it now, thanks
16:46:12 avishay: :)
16:46:26 but seriously, will be happy to help anyone who needs more understanding, and will work on docs as well
16:46:45 oh, one more thing
16:47:00 _ seems to be missing, and that's why the patch isn't passing py26 and py27
16:47:14 any idea where it went?
16:49:38 guess not
16:49:39 hello? anybody home?
16:49:54 avishay: have you rebased?
16:49:54 avishay: I'll have a look at your patch later and see if I can help out there
16:49:55 avishay: it's likely related to some pulls from OSLO
16:49:55 avishay: but the fact that other patches are going through makes me wonder if a rebase would handle it
16:49:58 anything else?
16:50:04 I have a patch for the huawei driver: https://review.openstack.org/#/c/36294/
16:50:07 wow i just got all your messages at once...strange
16:50:13 lagged
16:50:13 jgriffith: https://review.openstack.org/#/c/41600/ posted a couple of vmdk driver APIs
16:50:16 avishay: that happened last night too
16:50:20 jgriffith: will try to rebase - thanks
16:52:23 whoaaaa there folks
16:52:23 kk
16:52:23 avishay: freenode is lagging very badly
16:52:23 avishay: nothing else?
16:52:24 hope anyone interested in this can find time to review
16:52:24 #topic other stuff
16:52:24 Ok, now the free-for-all
16:52:24 but PLEASE
16:52:24 * med_ lost his connection
16:52:24 don't ask "review my patch"
16:52:24 we're all painfully aware of what's in the queue
16:52:24 NO offense... just sayin
16:52:24 jgriffith: :)
16:52:24 ok, i understand
16:52:24 alright... if nobody has anything else?
16:52:28 remember proposal freeze next week (21st)
16:52:30 Is there a good document anywhere that summarizes the philosophy of what a snapshot, backup, etc. should be used for?
16:52:45 caitlin-nexenta: there are some comments on Victor's patch if you guys can get to it
16:52:48 my connection sucks, i'm dropping off
16:52:49 bye all
16:52:58 I can explain the multi-backend stuff if needed
16:53:08 caitlin-nexenta: other than that it's pretty much good to go
16:53:21 alright...
16:53:22 caitlin-nexenta: i have a task open to update the docs on backups
16:53:27 thanks everyone
16:53:35 #end meeting
16:53:38 Later.
16:54:06 jgriffith: #endmeeting is one word
16:54:12 #endmeeting