16:01:56 #startmeeting cinder
16:01:57 Meeting started Wed Mar 13 16:01:56 2013 UTC. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:00 The meeting name has been set to 'cinder'
16:02:07 hi
16:02:09 hi
16:02:09 hei
16:02:10 hi
16:02:12 hi
16:02:13 hi
16:02:14 o/
16:02:15 hi
16:02:26 full house :)
16:02:32 Hey everyone
16:02:40 let's get started, should be an easy meeting today
16:02:45 #topic RC1 status
16:02:51 * DuncanT1 slinks in at the back
16:03:04 I'd like to cut RC1 tomorrow morning
16:03:14 I think we're in fairly good shape
16:03:22 and there's nothing to say more bugs can't/won't come in
16:03:38 But it will make sure that we're a bit more picky on what's a critical bug and what's not
16:03:49 How do folks feel about that?
16:04:09 rushiagr: bswartz I think you guys got all of your fixes in, yes?
16:04:19 will there be RC2/RC3/etc?
16:04:51 bswartz: there will be but it's not an excuse to rewrite code
16:04:56 critical bug fixes only!
16:05:05 and by critical I mean release blocking
16:05:05 I have one bug which is going to take significant changes to fix -- probably needs to be targeted to havana not grizzly at this point
16:05:21 bswartz: Yeah, I think so
16:05:33 jgriffith: yeah I'm not planning to push in any big code changes
16:05:48 So the idea is we really move into testing and documentation after we cut RC1
16:06:05 April is going to be upon us before we know it
16:06:09 We're very much in test mode now
16:06:16 DuncanT1: excellent...
16:06:20 DuncanT1: Speaking of....
16:06:22 jgriffith: I think I need to get in touch with james king then. I told him to have the other stuff done on friday
16:06:22 https://bugs.launchpad.net/cinder/+bug/1087817
16:06:23 Launchpad bug 1087817 in cinder "Update v2 with volume_type_id switch to uuid" [Medium,In progress]
16:06:30 Ever find your doc on multiple cinder nodes?
16:06:37 I'd like to blend that into the docs
16:06:55 thingee: oops... my bad
16:07:17 thingee: I can ping him as well, I know you're getting ready to head out
16:07:35 No. I need to redo it for several places though. If you haven't got it before COB Friday, please shout at me
16:07:49 DuncanT1: Can I take that as you're signing up to do it :)
16:08:07 jgriffith: yeah so about that. it's been moved to tomorrow noon
16:08:08 #action DuncanT1 update multiple cinder node install doc
16:08:14 thingee: haha
16:08:22 hey virbot WTF?
16:08:25 virtbot
16:08:58 DuncanT1: we don't need no stinking virtbot anyway
16:09:04 jgriffith: I think it's just running slow. it took a while to pull up the info on that bug link I pasted
16:09:24 ok
16:09:36 So does anybody have anything else on RC1 updates?
16:09:57 There's one other nasty bug that I'm going to work on but it will be after RC1 before I get to it
16:10:32 Everything I've got is python-cinderclient at the moment
16:10:37 Here's a fun little exercise: ask to create a volume that's 10 Gig w/ your 5 Gig backing store (LVM)
16:10:44 DuncanT1: excellent
16:11:03 would it help if we pushed to PyPi now, and then again when we're ready to release?
16:11:13 jgriffith: it would just error on the manager layer
16:11:17 Personally I've been just installing from master
16:11:49 thingee: ?
16:11:56 thingee: ideally
16:12:02 thingee: but it's broke
16:12:14 thingee: between retries we drop the exception from lvcreate
16:12:25 and go about life ignoring the issue
16:12:34 jgriffith: regarding your fun exercise. I always hit that in testing because the volume group is too small.
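
For illustration, a minimal Python sketch of the failure mode described above, assuming nothing about the real Cinder driver or retry code (the function and names below are invented): the over-sized lvcreate fails on every retry, the exception is dropped, and the caller never learns the volume was not created.

    # Minimal sketch, NOT the actual Cinder code: a retry wrapper that
    # swallows the lvcreate failure instead of re-raising it, so a 10 Gig
    # request against a 5 Gig volume group appears to "succeed".
    import subprocess
    import time

    def create_lv_with_retries(vg, name, size_gb, attempts=3):
        for attempt in range(attempts):
            try:
                subprocess.check_call(
                    ['lvcreate', '-L', '%dG' % size_gb, '-n', name, vg])
                return
            except subprocess.CalledProcessError as exc:
                # Bug pattern: the error is noted and then dropped; after
                # the last attempt we simply fall through and return None.
                print('lvcreate failed (attempt %d): %s' % (attempt + 1, exc))
                time.sleep(1)
        # A correct version would raise here so the manager layer sees the
        # failure instead of ignoring it.
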
16:13:47 jgriffith: I'd like to queue this for discussion please: https://review.openstack.org/#/c/24321/
16:14:03 avishay: how about now
16:14:07 #topic https://10.200.1.17/login.html
16:14:15 jgriffith: if queue length is 0...
16:14:23 hmm...
16:14:59 avishay: what would you like to discuss
16:15:23 jgriffith: winston-d's comment on "Need discussion on whether these capabilities should be added in host_manager"
16:15:32 yeah
16:15:34 what is this login.html link about?
16:15:42 jgriffith: does everyone agree that this should be added?
16:16:01 #topic https://review.openstack.org/#/c/24321/
16:16:15 rushiagr: sorry... wrong info in my clipboard
16:16:20 jgriffith: does that mean every driver needs to return a default value?
16:16:23 jgriffith: np
16:16:28 avishay's patch is to add two new capabilities, compression and tiering, to host_manager
16:16:53 winston-d: what happens if a driver doesn't report those two capabilities
16:16:55 rushiagr: default is False for both capabilities, if the driver doesn't return anything
16:16:55 my opinion was yes on compression, not crazy about tiering
16:17:13 avishay: but I'm flexible
16:17:28 avishay: okay
16:17:31 xyang_ : the default should be False
16:17:31 I think I mentioned before that I'm fine with it
16:17:37 does anyone have strong feelings either way?
16:17:39 jgriffith: yes you did
16:17:56 or even not-so-strong feelings? :)
16:18:03 haha
16:18:19 We've become a much less controversial group over the last few weeks :)
16:18:21 My only worry is getting a flexible enough definition of these capabilities
16:18:25 if this doesn't amount to adding a couple of lines in every driver i'm fine
16:18:31 are 'compression' and 'tiering' well-known features for a storage solution?
16:18:41 is tiering something that we think can be used in a similar fashion across different drivers?
16:18:52 So that many variations on the theme don't end up being needed
16:19:07 winston-d: they are well known, but we may not expose them. we may hide them in a pool
16:19:21 compression is done in several controllers - jgriffith, doesn't solidfire do it as well?
16:19:38 yeah, I think *most* devices do compression these days
16:19:43 xyang_: what do you mean by hiding them in a pool?
16:19:58 what I'm not sure of is the value/need to expose this information back up
16:20:11 Is it something you want to be exposing to the sysadmin though? i.e. to let end users choose to avoid it?
16:20:12 maybe I need to read up on the host stuff, but who needs to know whether or not a volume is compressed?
16:20:29 avishay: I mean our driver may not report those capabilities specifically
16:20:35 DuncanT1: the sysadmin will hopefully know what the device he installed is capable of already
16:20:38 :0
16:20:54 if you have a volume that's going to be logs, it should be scheduled on a compressed volume. if it will be binary blobs, compression will waste resources.
16:20:56 My view is that these values are specifically for scheduling purposes
16:21:10 jgriffith: +1
16:21:21 DuncanT1 : what do you mean by 'flexible enough definition' ?
16:21:24 jgriffith: You don't know many sysadmins ;-)
16:21:30 DuncanT1: haha!
16:21:36 So here's the thing...
16:21:44 If there's some confusion or concerns...
16:22:09 I would propose we leave it out, I think we can get what avishay wants here via introducing types specific to these things
16:22:32 I don't want to make life harder though and I don't have a real objection/preference on compression
16:22:44 I'm just a bit neutral on it
16:22:46 jgriffith: how can you do it via types?
16:22:58 avishay: the admin can set up gold, silver, bronze pools ahead of time, and each pool is already associated with certain capabilities.
16:23:03 winston-d: I mean that, for example, 'tiering' needs to be a flexible enough term that some other vendor doesn't come along and say 'we do multiple storage media, user-directable migration levels, which is not tiering the way IBM do it, and so we want our own capability adding...'
16:23:06 avishay: Just define a type as "logs volume" or whatever
16:23:23 avishay: and with knowledge of the capabilities set that up to point to the correct backend
16:23:24 backends are tied to a type
16:23:52 xyang_: +1 that is how we are going to handle this
16:23:54 avishay: or are you saying this would allow you to *set* compression on your device on a per volume basis?
16:23:57 jgriffith: and then the admin has to manually say which backends support what
16:24:07 kmartin: good
16:24:08 in fact, even if these capabilities are not part of host state (e.g. avishay failed to get them in), they are still part of the capabilities of a host if the driver reports them.
16:24:15 avishay: yes, not overly elegant but it's an interim solution that works
16:24:44 winston-d: good point
16:24:55 winston-d: So then custom filters can still be written
16:24:59 jgriffith: no, the point is for the driver to automatically report capabilities and for the scheduler to know about them, so that admins don't have to configure properties for each pool manually
16:25:24 avishay: ok, that's what I thought you meant, thanks for clarifying.
16:25:30 jgriffith : that's the beauty of filter scheduler. :)
16:25:31 avishay: I wanted to make sure I wasn't missing something
16:25:45 winston-d: +1
16:25:51 winston-d: but the capability filter scheduler only looks at things in host state, right? maybe that's the problem?
16:26:21 avishay : no, it also looks at capabilities of a host
16:27:15 winston-d: self._satisfies_extra_specs(host_state.capabilities, resource_type)
16:27:24 winston-d: with the multi-backend support, does "host" mean a cinder-volume service now?
16:27:33 xyang_ : you are right
16:27:53 avishay : exactly, host_state.capabilities
16:28:14 winston-d: so if something isn't added, it's ignored, which is why i wanted to add these two items
16:29:05 I think we need to figure out what purpose we think volume types are supposed to serve
16:29:35 avishay : no, see line 112 of host_manager.py, capabilities reported by the driver are copied to host_state.capabilities.
16:29:43 guitarzan: we've been over that a number of times
16:29:58 * jgriffith sees a rathole in our future ;)
16:30:07 I wish there was a nice document explaining the conclusions of these multiple conversations
16:30:09 bswartz: I know, but it doesn't seem to be resolved
16:30:14 indeed
16:30:18 bswartz: you could write one :)
16:30:23 * bswartz hides
16:30:25 guitarzan: bswartz I'll write on
16:30:27 one
16:30:32 it's resolved IMO
16:30:52 winston-d: so that things reported by the driver are read once, and things in host state are constantly updated?
16:30:54 oh? that's good to hear
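
As a reading aid for the exchange above, here is a standalone sketch of the flow winston-d describes; the names below are illustrative assumptions, not the real host_manager.py or CapabilitiesFilter code. Whatever a driver returns from its stats report ends up in host_state.capabilities, and the capabilities filter simply compares a volume type's extra specs against that dict, which is why no per-capability change to host_manager.py is needed.

    # Standalone sketch with invented names -- not the real Cinder code.

    def get_volume_stats():
        # What a back end might report; 'compression' would default to
        # False for drivers that say nothing about it.
        return {
            'volume_backend_name': 'example_backend',
            'free_capacity_gb': 800,
            'compression': True,
        }

    def satisfies_extra_specs(capabilities, extra_specs):
        """Rough analogue of _satisfies_extra_specs(): every extra spec
        must match the corresponding reported capability."""
        return all(str(capabilities.get(k)) == str(v)
                   for k, v in extra_specs.items())

    # A volume type carrying the extra spec compression=True matches the
    # host above; a back end that never reports 'compression' would not.
    print(satisfies_extra_specs(get_volume_stats(), {'compression': 'True'}))
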
16:31:11 I propose discussing it one more time at the conference and making sure we write down the conclusions and turn them into a doc
16:31:12 guitarzan: haha
16:31:19 I can sign up for that
16:31:31 bswartz: i was planning to bring this topic up at the summit as well
16:31:40 I can also try to get material prepared in advance
16:31:51 So hold on a sec...
16:32:04 First.. the issue with the patch from avishay
16:32:16 if things work as winston-d says (I will test to make sure), then we don't need any change to host_manager.py, and the issue seems resolved
16:32:18 Compression is pretty standard and has a pretty distinct meaning
16:32:35 I think there are easy ways to get around it but regardless
16:32:43 avishay : see line 273 of host_manager.py, host_state.capabilities are constantly updated as well.
16:32:44 avishay: if you want to put in compression I'm fine and I say go for it
16:32:55 tiering on the other hand I'm not a fan of
16:33:06 tiering should fall into the types setting IMO
16:33:10 jgriffith: It sounds like the patch is unnecessary
16:33:11 winston-d: missed that - thanks
16:33:14 jgriffith: seems fair
16:33:16 as in select the tier
16:33:23 DuncanT1: That was my initial point
16:33:26 jgriffith: cool, but I do look forward to hearing what types are :)
16:33:46 so bottom line, i'll re-submit without the changes to host_manager.py?
16:33:49 DuncanT1: it's unnecessary but if it's convenient for a specific use case avishay has or knows of I don't care
16:33:53 and we'll discuss at the summit
16:34:14 avishay: if that works for you that's absolutely great with me
16:34:19 avishay: That sounds perfect :-)
16:34:21 jgriffith: cool
16:34:29 thanks everyone
16:34:41 #topic volume-types
16:35:09 Volume types are custom/admin-defined volume types that can be used to direct the scheduler to the appropriate back-end
16:35:25 # end of topic!
16:35:27 hehe
16:35:34 ok... moving on :)
16:35:36 jgriffith : nice!
16:35:37 except in the case of extra specs, which do the same thing
16:35:41 * guitarzan hides
16:35:50 guitarzan: haha... but no, not really
16:35:56 guitarzan: no, they work together though
16:36:01 #topic extra-specs
16:36:14 jgriffith: is it only for the scheduler? or also to pass information about how to create the volume to a driver?
16:36:16 extra-specs are additional meta info to be passed to the driver selected by volume-type
16:36:16 guitarzan : it is extra specs that get volume types to do what jgriffith said they can do
16:36:19 #end-topic
16:36:45 extra-specs are just that, *extra*
16:37:04 so that would imply that compression is a volume type?
16:37:09 by extra, we mean *extra* information that can be consumed by the backend when it gets its volume-type
16:37:11 well, I'll think about it anyway :)
16:37:12 avishay : yeah, that's the other important usage of extra specs (to pass requirements to the driver)
16:37:16 it seems confusing to me
16:37:28 guitarzan: correct, that's a possible way to do it that I mentioned earlier
16:37:36 but folks don't like the admins to actually have to think
16:37:42 sure
16:37:46 anyone else notice that you have to enable the scheduler setting in the cinder.conf for the extra specs to work in devstack?
16:37:49 or maybe they just *can't* actually think
16:38:12 kmartin: can you elaborate?
16:38:25 which setting specifically?
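
For context on the volume-type and extra-specs workflow just described, here is a rough Grizzly-era admin example; the back-end name, type name, and exact driver/scheduler class paths are illustrative assumptions, so check them against your release.

    # cinder.conf fragment (illustrative values)
    scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
    enabled_backends = lvm-gold

    [lvm-gold]
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = lvm-gold

    # Admin CLI: create a type and tie it to the back end via an extra spec
    cinder type-create gold
    cinder type-key gold set volume_backend_name=lvm-gold

    # Users just pick the type; the filter scheduler matches the extra spec
    # against what the back end reports.
    cinder create --volume-type gold 10
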
16:38:33 I think maybe not everyone knows about scopes, which I learned about while playing with volume types
16:39:33 jgriffith: scheduler_host_manager, scheduler_default_filters, scheduler_default_weighers and scheduler_driver
16:39:44 * DuncanT1 suggests that people with a specific scenario they want to accomplish, in terms of both user and admin actions, that they currently don't know how to do, write it up, and we can see if our current method is sufficient or we need to enhance it for Havana
16:40:05 DuncanT1: +1
16:40:14 I like DuncanT1's idea of assigning homework before the Summit :)
16:40:25 DuncanT1 : +1
16:40:26 It is far easier to answer specific questions than generalities
16:40:49 (I've a few myself that I don't know how to do, though I'm fairly sure they are entirely possible)
16:40:57 kmartin: default devstack works fine with volume types for me
16:40:58 DuncanT1: and i believe most cases can be solved now, people just don't know how
16:41:07 winston-d: +1
16:41:14 avishay: with extra specs defined?
16:41:27 winston-d: you're probably right
16:41:30 winston-d: DuncanT1 agreed
16:41:33 kmartin: what do you mean?
16:41:55 kmartin: it works for me... wonder if we have an issue with expectations
16:42:21 jgriffith: I have a feeling we have an issue with documentation and understanding
16:42:22 kmartin: So my driver is selected correctly by the volume type
16:42:33 kmartin, it works for me too
16:42:33 kmartin: Then it queries that type for extra-specs
16:42:41 and uses the extra-specs to do *stuff*
16:42:46 yeah, maybe...in our case if we do not have them enabled our driver never gets called
16:42:54 winston-d has a couple of nice docs about the scheduler. It would be nice to combine them and add more to them
16:43:03 kmartin: ohhh? That's a problem with the type then
16:43:10 kmartin: volume-type is what selects the driver
16:43:27 * jgriffith really needs to document this it seems
16:43:47 ok...we may be using it incorrectly then
16:43:56 kmartin: uh oh :)
16:44:09 kmartin: what's the scenario you're trying to run?
16:45:12 so just so everyone knows, I'm currently in the process of creating the initial block storage manual and separating it out of the compute manuals.
16:45:20 we create volume types like Gold, Silver, Bronze, then assign extra specs to those with different capabilities on the array, like provisioning, host mode cpg, etc...
16:46:16 all driver information will be moved over if you have it in there
16:46:17 * DuncanT1 wonders if we have any more topics to cover, since we can always work out the details of volume-type usage in #openstack-cinder
16:46:30 DuncanT1: good point
16:46:33 * bswartz has a topic
16:46:40 bswartz: go for it
16:46:42 bswartz: care to share? :)
16:46:46 quick question actually
16:46:59 just wanted to know about policy surrounding backporting from grizzly to folsom
16:47:11 what is the policy and who enforces it?
16:47:28 i've been wondering a bit about this myself
16:47:39 bswartz: the OSLO team mostly enforces it by having +2/A authority
16:48:03 do we need to change something so that the right people get added to the reviews?
16:48:39 for example, i've had https://review.openstack.org/#/c/22244/ floating around for a while now and i'm not sure who to poke
16:48:43 eharney: bswartz I'll get with ttx and markmc and get this resolved
16:48:54 also get clarification on features versus bugs etc etc
16:48:59 (which isn't a grizzly backport, it's oslo stable syncing, but still)
16:49:03 I ask because Rushi mentioned doing some backport of features from grizzly to folsom, and I was surprised that this was even allowed
16:49:24 bswartz: he mentioned it to me and I was TOTALLY in favor of it
16:49:26 I think backporting features is a fine idea, as long as it doesn't get us into trouble
16:49:30 jgriffith: thanks, that would be helpful
16:49:37 * rushiagr remembers jgriffith mentioning backporting multi-backend and the filter sched
16:49:46 So the rule of thumb is "it depends on the risk introduced"
16:49:50 not very clear eh?
16:50:08 rushiagr: yes, I would love it if we can do that
16:50:23 if we start backporting everything though...
16:50:35 avishay: no :)
16:50:41 jgriffith: exactly :)
16:50:46 avishay: it would have to be very selective
16:51:07 avishay: I've picked the scheduler in particular because a number of large providers have asked me for it
16:51:31 and technically the existing scheduler in Folsom is lacking to say the least
16:51:54 jgriffith: as PTL how much does your opinion count when it comes to deciding if a feature can be backported?
16:52:03 bswartz: we'll find out :)
16:52:07 :-)
16:52:24 okay that's all I had
16:52:25 bswartz: it should count for a bit, depending on the TC
16:52:27 jgriffith : can we do the filter scheduler backport after grizzly is released?
16:52:36 winston-d: yeah, I think it would have to be
16:52:38 jgriffith: can a new driver be backported to Folsom?
16:52:48 winston-d: too much disruption to do it now IMO
16:53:10 jgriffith : i've been occupied lately so not much bandwidth to do that before the design summit
16:53:13 xyang_: I think that would be where the line would be
16:53:27 I'll come up with guidelines and submit them to everyone later this week
16:53:30 xyang_: we (NetApp) do that all the time, but we release the backported code from our github repo rather than submitting to a stable branch in cinder
16:53:45 bswartz: +1 I do the same thing
16:54:22 how about keeping the cinder/volume/drivers folder open for backport?
16:54:38 bswartz: so that's your private github repo?
16:54:44 rushiagr: it's more difficult than that
16:55:00 I'll write up guidelines
16:55:07 xyang_: public
16:55:17 meanwhile I need to wrap up here
16:55:37 we can all meet back up in openstack-cinder if folks have more they want to hammer out?
16:55:44 I'll be offline for about an hour
16:57:13 jgriffith: you forgot to #endmeeting
16:57:30 thanks and bye everyone!
16:57:37 bye
16:57:39 bye
16:58:09 bswartz: can you try ending the meeting?
16:58:19 #endmeeting
16:58:24 I doubt it will work
16:58:38 it works only with the same nick
16:58:52 doh!
16:58:55 and the bad part is jgriffith doesn't log out, and the xen folks must be waiting
16:59:09 we can use the alternative channel if needed
16:59:20 johnthetubaguy: Error: Can't start another meeting, one is in progress.
16:59:25 * winston-d tries to /nick himself to be john. :)
16:59:32 openstack can kick him....if someone has the passwd :P
16:59:51 hemna: good idea! (if it works)
16:59:56 try cinder lol
17:00:01 hah
17:00:07 lol
17:02:03 OK, so join #openstack-meeting-alt for the XenAPI meeting today
17:02:18 I hope that works out OK for people
17:02:33 :)
17:02:35 Works fine for me
17:03:46 can someone change the channel message so people get to know the meeting has been shifted to the alt channel?
17:03:47 matelakat, we're on #openstack-meeting-alt
17:03:52 Oh.
17:03:59 Ehy is that?
17:04:04 Why is that?
17:04:32 matelakat, because we can't stop the cinder meeting - the nick that started it isn't here :)
17:04:34 jgriffith didn't end the meeting :)
20:00:21 sdake_: Error: Can't start another meeting, one is in progress.
20:00:39 great
20:00:58 hi
20:01:03 join #openstack-meeting-alt
20:01:15 jgriffith, can you end your meeting?
20:01:16 is the cinder meeting actually still going on?
20:01:39 no, just forgot to end it
20:01:50 join openstack-meeting-alt - we will hold our meeting there
20:11:30 #endmeeting