Thursday, 2015-02-12

*** annashen has joined #openstack-cinder  00:00
*** emagana has quit IRC  00:02
*** markvoelker has quit IRC  00:03
<openstackgerrit> Mitsuhiro Tanino proposed openstack/cinder: Refactoring for export functions in Target object
*** emagana has joined #openstack-cinder  00:08
*** bswartz has joined #openstack-cinder  00:09
*** emagana has quit IRC  00:12
*** smoriya has joined #openstack-cinder  00:13
*** lcurtis has quit IRC  00:13
<jgriffith> thingee: yo  00:15
*** mriedem has joined #openstack-cinder  00:15
*** smoriya has left #openstack-cinder  00:16
*** afazekas has quit IRC  00:16
*** smoriya has joined #openstack-cinder  00:16
*** dalgaaf has quit IRC  00:17
*** bitblt has quit IRC  00:18
<thingee> jgriffith: I was writing the spec for capabilities as discussed at the midcycle meetup  00:19
<thingee> jgriffith: if you do a search for "continued discussion from thursday"  00:20
<jgriffith> thingee: got it  00:20
<thingee> jgriffith: I was trying to understand again how we would represent QoS with this model.  00:20
<jgriffith> thingee: too bad we didn't write that down eh :)  00:20
<jgriffith> thingee: so wouldn't it just be along the same lines:  00:21
<jgriffith> {qos: {default=True, available=True, type=guaranteed}}  00:22
<jgriffith> thingee: so the well-defined keys are kinda like "do you or don't you support this"  00:22
<jgriffith> thingee: then the vendor-specific keys come into play  00:23
<jgriffith> {qos: {default=True, available=True, type=guaranteed}, vendor-keys: [minIOPS, maxIOPS, burstIOPS]}  00:23
<thingee> jgriffith: thank you  00:24
<thingee> jgriffith: and from the scheduler perspective  00:24
<jgriffith> the data structure itself of course might be different, but I think that was a rough summary of what I wrote on the board  00:24
<jgriffith> thingee: so the scheduler side is the part I've never quite agreed with in all of this  00:24
<jgriffith> thingee: IMO the scheduler should care about the "well defined" keys only  00:25
<jgriffith> thingee: the vendor keys are there just to help with what the driver wants specifically on things  00:25
<jgriffith> hemna: ^^ does that align with what we agreed upon in the end?  00:25
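A rough sketch of the split jgriffith describes above, as it might look in code. This is illustrative only: the key names come from the conversation, but the dict layout and the helper function are assumptions, not the actual spec.

```python
# Illustrative capability report: the well-defined qos keys are what the
# scheduler would filter on; the vendor-unique keys are scoped to the
# driver and ignored by the scheduler. Names are examples from the chat
# (SolidFire-style IOPS keys), not a final spec.

capabilities = {
    "qos": {
        "default": True,       # backend applies QoS by default
        "available": True,     # backend supports QoS at all
        "type": "guaranteed",  # e.g. guaranteed vs. best-effort
        # vendor-unique keys: meaningful only to the driver itself
        "vendor-keys": ["minIOPS", "maxIOPS", "burstIOPS"],
    },
}

def scheduler_view(caps):
    """Return only the well-defined keys, dropping vendor-unique ones."""
    return {
        feature: {k: v for k, v in details.items() if k != "vendor-keys"}
        for feature, details in caps.items()
    }

print(scheduler_view(capabilities))
```

The point of the split: the scheduler answers "does this backend support QoS at all", while the 20+ vendor-specific IOPS knobs stay inside the driver.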
*** emagana has joined #openstack-cinder  00:26
*** Mandell has quit IRC  00:26
<thingee> jgriffith: makes sense to me. Are we then relying on the data placement to the storage backend? It makes things complicated with multiple pools in a backend exposed to Cinder.  00:26
<jgriffith> thingee: so honestly I've said before, if you're stuck with pools then sucks to be you :)  00:27
<jgriffith> thingee: ok, seriously...  00:27
<jgriffith> thingee: no, I don't think so  00:27
<jgriffith> thingee: remember each pool is presented as a backend now for the most part  00:27
<jgriffith> thingee: so if somebody has a device with a pool that has compression=False and another with compression=True  00:27
<jgriffith> things still work  00:27
*** r-daneel has joined #openstack-cinder  00:28
<jgriffith> and details on any compression options that are vendor-unique go in those vendor keys  00:28
<thingee> jgriffith: I was getting at it more that it becomes complicated with the vendor-specific keys. If you don't want it in the scheduler, the backend itself has to deal with it  00:28
<jgriffith> but they don't impact the scheduler  00:28
<jgriffith> thingee: as it should  00:28
<jgriffith> thingee: why should the scheduler deal with 20+ iterations of QoS keys, for example?  00:28
<jgriffith> thingee: IMO that's awful  00:28
<jgriffith> thingee: they're vendor/driver-unique keys; that's where they should be dealt with  00:29
<jgriffith> thingee: but maybe I'm not following?  00:29
<thingee> jgriffith: I completely agree with you. Thank you so much for helping me with this. I'll make sure to reflect this in the spec.  00:29
<jgriffith> Ok, cool  00:29
<jgriffith> lemme know if there are other details we forgot (I'm sure there are)  00:29
<hemna> just a sec...  00:29
*** Longgeek has quit IRC  00:30
<jgriffith> hemna: sorry, you're too late LOL  00:30
*** Mandell has joined #openstack-cinder  00:30
*** Mandell has quit IRC  00:30
<thingee> jgriffith, DuncanT: I know I started this thread, but I haven't had a chance to respond back yet. There appears to be confusion that the scheduler today could solve the problems raised
<jgriffith> thingee: oh, yeah I meant to respond to that actually  00:31
<thingee> rather, I think what we're talking about *now* solves these problems  00:31
<thingee> hemna: ^  00:31
<thingee> so I think if anything, operators don't really understand the feature. Now I didn't exactly explain the feature, and for good reason. The documentation is all I left for people to read.  00:32
<gary-smith> he's yapping right now. give him a sec  00:32
<jgriffith> thingee: so the problem in the example Christian gives is addressed by the specs change we just talked about  00:32
<openstackgerrit> Anthony Lee proposed openstack/cinder: Adding manage/unmanage support for LeftHand driver
<thingee> jgriffith: yup, same with marc  00:32
*** Mandell has joined #openstack-cinder  00:32
<thingee> and richard  00:32
<thingee> leeantho: ^  00:32
<jgriffith> thingee: I see zero tie-in to the goodness filter really  00:32
*** Mandell has quit IRC  00:33
<jgriffith> thingee: so to clarify my comments yesterday: I'm not opposed to a goodness filter (although I don't see much value in it)  00:33
<jgriffith> thingee: I was just hoping to get some info on how the evaluator worked etc.  00:33
<jgriffith> thingee: but I also sympathize with folks that their patch was up for 8 months with little feedback (and zero feedback from me)  00:34
<thingee> leeantho, hemna: The point I'm making is this initial documentation isn't making much sense to people, and is kind of conflicting with this other work coming from standard capabilities.  00:34
<thingee> I'll admit I don't really understand the use case of an array that can only handle 1000 volumes.  00:34
*** Mandell has joined #openstack-cinder  00:34
<thingee> do we have a real example of that? And why are they in a cloud environment? :P  00:34
<jgriffith> well... I don't know, I think that's fine; although I'd say again all of the tools needed to deal with that are already in place  00:35
<hemna> so the vendor-specific keys can be used during the scheduler though  00:35
<hemna> if you are using the capabilities filter  00:35
<hemna> err, capabilities scheduler  00:35
<jgriffith> hemna: hmm, how and why?  00:35
<hemna> which is the entire point of the capabilities scheduler  00:35
<jgriffith> hemna: wouldn't you just want the scheduler to do the high-level placement?  00:36
<jgriffith> and let the driver handle details?  00:36
<hemna> the capabilities scheduler allows you to even specify ranges in values  00:36
<thingee> hemna: I agree with jgriffith that it should just be the standard capabilities we filter on. Leave it to the backend to choose where, based on the vendor-specific keys.  00:36
<jgriffith> hemna: seems like a ton of extra work and complexity for no real gain  00:36
<thingee> I think a lot of vendors would want that control  00:36
<hemna> well then you need to remove the capabilities scheduler from cinder  00:36
<hemna> because that's how it works today.  00:36
<jgriffith> hemna: I think we might be talking past each other on something...  00:37
<hemna> might be  00:37
<thingee> Don't need to. we still have *standard capabilities*  00:37
<jgriffith> hemna: so what I had thought (and I might be wrong)  00:37
<thingee> that need to be filtered on  00:37
<jgriffith> is that having the well-defined keys improves our scheduler and still does capability filtering  00:37
<jgriffith> in fact does it MUCH better  00:37
<jgriffith> but vendor-unique keys on how that's set are handled in the driver  00:38
<jgriffith> for example QoS  00:38
<jgriffith> How/Why should the scheduler care that SF uses keys "minIOPS, maxIOPS and burstIOPS"?  00:38
<thingee> jgriffith: have another thought with pools, but we can get back to it after this.  00:38
<jgriffith> that means nothing in terms of the filter scheduler, and probably shouldn't, is what I think  00:38
<jgriffith> thingee: ok... don't confuse me :)  00:38
<jgriffith> hemna: is there another use case I'm missing on that?  00:39
<thingee> jgriffith: don't worry, I'm getting better with communication on irc, I hope :)  00:39
<jgriffith> haha.. "it's not you, it's me" :)  00:39
<thingee> I think this is awesome.  00:39
<hemna> so yah, the vendor-specific keys are scoped  00:39
<jgriffith> hemna: Right  00:39
<jgriffith> hemna: so ignored by the scheduler anyway, which I think is "ok"  00:40
<jgriffith> hemna: and yeah, I think maybe down the road that changes  00:40
<thingee> it gives vendors more freedom on placement, and leaves the capabilities simple.  00:40
<jgriffith> but for now I see no real reason to do so  00:40
<jgriffith> thingee: +1  00:40
<hemna> which are bypassed by the capabilities filter today  00:40
<hemna> so the well-defined keys are there on purpose so that the scheduler can use them for filtering  00:41
<jgriffith> hemna: right, but my question is "is that a problem" and if yes, "why"  00:41
<hemna> so the answer from me is no.  00:41
<hemna> it's not a problem afaik  00:41
<jgriffith> hemna: yes (WRT well-defined keys)  00:41
<jgriffith> My proposal there was to provide a mechanism to increase the power of the filter scheduler  00:41
<jgriffith> AND to allow you to do your "hints" thing for keys  00:41
<jgriffith> so sort of a win-win compromise  00:41
<hemna> to add the categories?  00:42
*** scottda_ has quit IRC  00:42
<jgriffith> hemna: the "vendor-unique-keys"  00:42
<hemna> ok, what's the hints thing?  00:42
<hemna> I'm not following that part. sorry, I'm dumb  00:42
<jgriffith> hemna: not used by the sched, but they're presented as hints / capabilities to the admin  00:43
<leeantho> thingee, yes I plan on improving the documentation more in the areas that are unclear/missing. I've been trying to finish a few other things at the same time (just saw your msg)  00:43
<jgriffith> hemna: so I thought the whole thing driving this for you guys was that it was "hard" to know all the valid vendor keys in types  00:43
<jgriffith> and you wanted to present that  00:43
<hemna> yah, that was part of what was driving this.  00:43
<jgriffith> so my proposal was "ok", do that, but be sure they're labelled as what they are  00:43
<jgriffith> don't just throw out a big ol' bag o' keys  00:44
<jgriffith> and make sure they tie into something that is relevant  00:44
<jgriffith> when applicable  00:44
<hemna> we wanted to have a json dict that 1) shows what keys are supported by the vendor and 2) the user-viewable information/metadata/description about those keys and their possible values/types, etc.  00:44
<hemna> hence the json schema  00:44
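A minimal illustration of the kind of key schema hemna describes: a json dict listing the vendor's supported keys with user-viewable type/description metadata, returned from a dedicated driver call rather than with every stats report. Every field name and the `get_key_schema` helper here are hypothetical, sketched from the conversation.

```python
import json

# Hypothetical vendor key schema: which extra-spec keys the driver
# supports, plus user-viewable metadata for each. Field names are
# illustrative, not the actual Cinder schema.
vendor_key_schema = {
    "minIOPS": {"type": "integer", "minimum": 0,
                "description": "Minimum guaranteed IOPS for the volume"},
    "maxIOPS": {"type": "integer", "minimum": 0,
                "description": "Maximum sustained IOPS for the volume"},
    "burstIOPS": {"type": "integer", "minimum": 0,
                  "description": "Short-term burst IOPS ceiling"},
}

def get_key_schema():
    """Stand-in for a separate driver API that returns the schema only
    when asked, instead of shipping it in every get_volume_stats()."""
    return json.dumps(vendor_key_schema)

print(sorted(json.loads(get_key_schema())))
```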
*** annashen has quit IRC  00:45
<jgriffith> so this still gives you that  00:45
<gary-smith> doesn't thingee's category proposal address the 'big ol' bag o' keys'?  00:45
<thingee> gary-smith: sort of  00:45
<jgriffith> gary-smith: oh boy, you're trying to sidetrack me aren't you  00:45
<thingee> gary-smith: hang tight... I'll have a spec update soon  00:45
<hemna> so I see the reporting of the schema as a side to the reporting of the capabilities though, no?  00:45
<jgriffith> gary-smith: I'm reiterating the proposal and our discussions as there seems to be some confusion  00:46
<hemna> we just happened to lump them all into the capabilities today in get_volume_stats()  00:46
<hemna> with the bad name of "extra_specs" for the key schema  00:46
<thingee> hemna: don't hate me, but my spec proposes to just change that to capabilities  00:46
<jgriffith> not sure why you say that's a "bad name" but ok  00:46
<hemna> jgriffith, I think we are close though  00:46
<jgriffith> hemna: agreed  00:47
<jgriffith> thingee: I say run with it  00:47
<hemna> well, because extra_specs are capabilities  00:47
*** patrickeast has quit IRC  00:47
<jgriffith> if we need to iterate slightly we can, but honestly if it turns out the way I'm envisioning, everybody gets what they want here  00:47
<hemna> and the purpose of that key name was simply to contain the schema of supported keys and their metadata  00:47
<hemna> but not provide the actual capabilities for that pool  00:47
<hemna> which I still don't like  00:47
<thingee> jgriffith: let me know when I ask my question about pools :)  00:48
<thingee> I can ask*  00:48
<jgriffith> hemna: so curious... what are you using extra-specs for, if not capabilities?  00:48
<jgriffith> hemna: as in, if they don't indicate things you can do or behaviors then "why are they even there"?  00:48
<hemna> personally I'd rather have a separate driver API that returns the schema itself when someone asks for it, instead of returning that as part of the capabilities in get_volume_stats() EVERY time.  00:48
<jgriffith> who cares  00:48
<jgriffith> hemna: you know you can do that right?  00:48
<hemna> yes I do know that  00:49
<hemna> we just haven't been doing that.  00:49
<jgriffith> hemna: I have some code on my github that does that via extension  00:49
<hemna> which was my fault  00:49
<jgriffith> hemna: at least I used to....  00:49
<hemna> we filed a BP in the last week to update our driver to provide pool-specific capabilities, that we hope can be scheduled against.  00:49
<jgriffith> hemna: I'll see if I can dig it up, have a customer using it  00:49
<jgriffith> hemna: so that's where I'm now kinda confused  00:50
<jgriffith> pools are treated as backends, so again; what's the difference?  00:50
<hemna> not much really  00:50
<jgriffith> so why are we bringing it up here again?  00:50
<hemna> except each of our pools has different capabilities tied to them  00:50
<jgriffith> sure, that's fine  00:50
<jgriffith> hemna: now you're back to capabilities :)  00:51
<hemna> just saying that we are going to add more to ours, because we haven't been reporting specific capabilities up to now.  00:51
<jgriffith> hemna: which we already stated def goes in the well-def keys  00:51
<hemna> so for example pool1 has raid5, supports thin provisioning, etc.  00:51
<jgriffith> hemna: ahhh... I think I see where you're going now  00:51
<hemna> pool2 is raid0, on 15k drives  00:52
<hemna> we aren't reporting that now  00:52
<hemna> but want to  00:52
<jgriffith> hemna: sure, I drew out that exact example in Austin  00:52
<hemna> I probably missed it, sorry  00:52
<hemna> I'm slow  00:52
*** nellysmitt has joined #openstack-cinder  00:52
<jgriffith> damn you... you were playing on the internet weren't you!!!  00:52
*** markvoelker has joined #openstack-cinder  00:52
<hemna> you guys know this stuff way better than I do. but I hope I'm coming around :)  00:52
<jgriffith> hemna: so let's take small steps if that's cool  00:53
<hemna> don't you think capabilities like raid level should be a well-defined key?  00:53
<jgriffith> hemna: start with the capability improvements with the well-defined keys  00:53
<jgriffith> hemna: yes  00:53
<jgriffith> WRT raid level  00:53
<hemna> it's not really vendor-specific  00:53
<jgriffith> and disk rpms even  00:53
<hemna> we can report that up as well and plan on it  00:53
<jgriffith> that being said, I do think it adds way more complication than is needed  00:54
<gary-smith> how does disk rpms become a "well-defined key"?  00:54
<jgriffith> ie; admin creates and maps types using those gory details and the rest of us move on with life :)  00:54
<gary-smith> is there a list somewhere?  00:54
<jgriffith> gary-smith: thingee is working a spec  00:54
<jgriffith> gary-smith: hemna also keep in mind that you can still just use the vendor keys for that as well  00:55
*** ebalduf has joined #openstack-cinder  00:55
<gary-smith> jgriffith: I thought his spec was just defining categories, didn't know it covered keys, too  00:55
<jgriffith> gary-smith: that's part of it, categories aren't "well defined" without the keys IMO :)  00:55
<gary-smith> jgriffith: vendor keys won't work if they're scoped, since the scheduler ignores them  00:56
<jgriffith> gary-smith: that's the bag o' stuff approach IMO  00:56
<jgriffith> gary-smith: and I'm saying who cares :)  00:56
<jgriffith> gary-smith: seriously...  00:56
<jgriffith> I'm saying we can address that  00:56
<thingee> jgriffith: so the question with pools: if a storage backend has three pools that fulfill all capabilities for the volume being created, the scheduler is just going to pick one of them based on the weigher  00:57
<jgriffith> gary-smith: you don't have to boil the ocean all at once  00:57
*** nellysmitt has quit IRC  00:57
<gary-smith> jgriffith: not trying to, just trying to understand  00:57
<jgriffith> thingee: correct, which is where gary-smith's point comes in  00:57
*** markvoelker has quit IRC  00:57
<jgriffith> thingee: but I still argue, "ok", let the admin create types that map to the criteria they want  00:58
<jgriffith> thingee: I think tackling this as "EVERY available option from every backend filtered through the scheduler" as a first pass is a really, really bad idea  00:58
<hemna> thingee, correct  00:58
<hemna> jgriffith, +1  00:59
<jgriffith> start with the well-defined stuff, and build on it  00:59
<hemna> that's exactly what we were thinking as well.  00:59
<hemna> gary-smith and I have been churning on this  00:59
<jgriffith> gary-smith: yeah, you might have pools that are differentiated by vendor-unique things... that's fine  00:59
*** markvoelker has joined #openstack-cinder  00:59
<jgriffith> admin creates types/filters to deal with that (for now)  00:59
<jgriffith> in L we revisit and build on it  00:59
<jgriffith> but we really NEED to get some standards started at some point  01:00
<jgriffith> and if you insist that we have to have every 3par pool option included in scheduling, I'll guarantee we'll never get anywhere  01:00
*** ebalduf has quit IRC  01:00
<jgriffith> hemna: gary-smith this is the part I said was a compromise and win/win  01:00
*** davechen has joined #openstack-cinder  01:00
<jgriffith> we move forward on some standardization (what I want)  01:01
<jgriffith> we present the info to the admins to help them (part of what you want)  01:01
<hemna> jgriffith, perfect  01:01
<jgriffith> and we have an extendable foundation we can build on going forward  01:01
<jgriffith> oh.. and the operator wins in all of this as well  01:01
<hemna> yah sounds great man. I think I'm good with it.  01:02
<gary-smith> me too  01:02
<jgriffith> cool... thingee we all good?  01:02
*** alex_xu has joined #openstack-cinder  01:03
*** markvoelker has quit IRC  01:04
<jgriffith> gary-smith: hemna so here's something to chew on as well...  01:05
<jgriffith> gary-smith: hemna vendors can write their own specific filter  01:06
<jgriffith> gary-smith: hemna there's a documented process for creating your own filter  01:06
<jgriffith> gary-smith: hemna you could ship it as an add-on with your driver, or maybe there's something clever to be done like putting it "in" your driver  01:06
<hemna> so if you wanted to filter on vendor-specific keys you could then  01:06
<jgriffith> hemna: exactly  01:07
<jgriffith> I thought about that a while back, wrote one that sat between the scheduler and my driver  01:07
<gary-smith> basically extend the CapabilitiesFilter to look for the things you care about. interesting  01:07
<jgriffith> gary-smith: so that's one way... but yeah  01:07
<gugl2> ping thingee  01:08
<jgriffith> my use case was I aggregated multiple SF clusters into a single Cinder driver  01:08
*** scottda_ has joined #openstack-cinder  01:08
<jgriffith> and had my own scheduling logic there... but in my case it was kinda stupid  01:08
<jgriffith> it was for one specific problem I wanted to solve, and as the filter scheduler matured I didn't need it any longer  01:08
<jgriffith> hemna: gary-smith FWIW I don't recommend the shim version I did, but your own filter scheduler  01:09
<hemna> so thingee, I hate to pressure you, but the sooner we can get the spec in, the sooner we can work on the cinder/horizon changes needed to try and get this in for K  01:10
<hemna> as we want to try and do validation as well, but we can push that to L if need be.  01:11
*** scottda_ has joined #openstack-cinder  01:12
*** scottda_ has quit IRC  01:14
*** scottda_ has joined #openstack-cinder  01:15
*** Mandell has quit IRC  01:16
*** hemna is now known as hemnaf  01:17
*** hemnaf is now known as hemnafk  01:17
*** scottda_ has quit IRC  01:19
*** Mandell has joined #openstack-cinder  01:20
<openstackgerrit> Kurt Martin proposed openstack/cinder: Add dedup provisioning to 3PAR drivers
*** leeantho has quit IRC  01:22
*** tsekiyam_ has joined #openstack-cinder  01:26
*** Lee1092 has joined #openstack-cinder  01:28
*** tsekiyama has quit IRC  01:28
*** tsekiyam_ has quit IRC  01:30
<openstackgerrit> Gloria Gu proposed openstack/cinder: HP 3par driver filter and evaluator function
*** Mandell has quit IRC  01:39
*** markvoelker has joined #openstack-cinder  01:40
*** sigmavirus24 is now known as sigmavirus24_awa  01:43
*** r-daneel has quit IRC  01:46
*** _cjones_ has quit IRC  01:48
<openstackgerrit> Gloria Gu proposed openstack/cinder: HP 3par driver filter and evaluator function
*** dannywilson has quit IRC  01:58
*** dannywilson has joined #openstack-cinder  01:58
*** r-daneel has joined #openstack-cinder  02:03
*** mtanino has quit IRC  02:04
*** chenleji has joined #openstack-cinder  02:14
*** Longgeek has joined #openstack-cinder  02:15
*** bill_az_ has quit IRC  02:17
*** MasterPiece has quit IRC  02:19
*** dannywilson has quit IRC  02:23
*** MasterPiece has joined #openstack-cinder  02:28
*** MasterPiece| has joined #openstack-cinder  02:29
*** lcurtis has joined #openstack-cinder  02:31
*** MasterPiece has quit IRC  02:33
*** rmesta has quit IRC  02:35
*** tsekiyama has joined #openstack-cinder  02:36
*** davechen_ has joined #openstack-cinder  02:37
<thingee> jgriffith: sorry, got pulled into a meeting  02:38
*** Apoorva_ has joined #openstack-cinder  02:38
<thingee> jgriffith: I get what you're saying. I guess what I'm considering is the idea that you have one backend with three pools. Say they all report the same capabilities, so it comes to the weigher on the three.  02:39
<jgriffith> thingee: yep  02:40
*** davechen has quit IRC  02:40
<jgriffith> if they all report the same thing then yeah, it should  02:40
<thingee> so I guess eventually we can talk about how we can do mutually exclusive?  02:40
<thingee> three volume creations, separate pools  02:40
<thingee> I guess today with this model, you can have some capability pushed from the backend to support passing a hint.  02:41
*** tsekiyama has quit IRC  02:41
<jgriffith> thingee: so I don't understand the concern; if the pools are all identical in capabilities, who cares  02:41
<jgriffith> if they're not... then they'll either report as such in their capabilities  02:42
<thingee> thinking of pool affinity cases like hadoop.  02:42
*** Apoorva has quit IRC  02:42
<jgriffith> or you use the vendor keys you get back to create a type for them appropriately  02:42
<jgriffith> thingee: that isn't solved by this proposal either though  02:42
*** Apoorva_ has quit IRC  02:43
<jgriffith> thingee: IMO things like that you really SHOULD build a custom type/extra-spec for and do it right  02:43
<jgriffith> or a custom filter as I suggested before  02:43
<jgriffith> thingee: note that an affinity/anti-affinity filter is a whole different thing  02:44
<thingee> jgriffith: it's just a use case I hear. When I hear custom filter, I'm not sure what's going to be easy for the cloud admin to act on that.  02:44
<jgriffith> nothing to do with capabilities  02:44
<thingee> jgriffith: sure  02:44
<jgriffith> thingee: so IMO that's yet another whole separate filter and use case  02:44
<thingee> jgriffith: sure  02:44
*** bill_ibm has joined #openstack-cinder  02:45
<thingee> jgriffith, hemnafk: ok, well I'll try to get a spec posted tonight. I've had too many meetings today :(  02:45
<jgriffith> thingee: I'm not certain on that use case anyway though... the affinity rules in hadoop are mostly around VMs  02:45
<jgriffith> and it's usually anti-affinity that you want IIRC  02:45
<jgriffith> but I'm not a Hadoop expert  02:45
<thingee> gugl2: hi  02:46
*** smcginni1 has joined #openstack-cinder  02:47
*** Longgeek has quit IRC  02:48
<thingee> jgriffith: thanks for the help  02:49
*** lcurtis has quit IRC  02:49
*** shakamunyi has joined #openstack-cinder  02:50
*** thingee has quit IRC  02:50
*** rodrigod` has joined #openstack-cinder  02:50
*** smcginnis has quit IRC  02:52
*** rwsu has quit IRC  02:52
*** scottda has quit IRC  02:52
*** bill_az has quit IRC  02:52
*** rodrigods has quit IRC  02:52
<jgriffith> thingee... you betya  02:52
* jgriffith goes to eat dinner  02:52
*** nellysmitt has joined #openstack-cinder  02:53
*** emagana has quit IRC  02:53
*** emagana has joined #openstack-cinder  02:54
*** Longgeek has joined #openstack-cinder  02:55
*** scottda has joined #openstack-cinder  02:57
*** rwsu has joined #openstack-cinder  02:58
*** vilobhmm has quit IRC  02:58
*** emagana has quit IRC  02:58
*** david-lyle is now known as david-lyle_afk  02:59
*** kaisers1 has joined #openstack-cinder  03:00
*** kaisers has quit IRC  03:01
*** mberlin has joined #openstack-cinder  03:01
*** nellysmitt has quit IRC  03:01
*** mberlin1 has quit IRC  03:01
*** lcurtis has joined #openstack-cinder  03:05
*** Yogi1 has joined #openstack-cinder  03:09
*** ebalduf has joined #openstack-cinder  03:11
<jgriffith> hemnafk: you might want to look at this in your lib work for brick:
<openstack> Launchpad bug 1216051 in Cinder "exception handling makes bad assumptions" [Undecided,Triaged]  03:14
*** Yogi1 has quit IRC  03:14
*** takedakn has joined #openstack-cinder  03:14
*** ebalduf has quit IRC  03:15
*** MasterPiece| has quit IRC  03:21
*** scottda_ has joined #openstack-cinder  03:27
*** scottda_ has quit IRC  03:27
*** EmilienM is now known as EmilienM|afk  03:28
*** bkopilov has quit IRC  03:34
*** enterprisedc has quit IRC  03:35
*** enterprisedc has joined #openstack-cinder  03:36
*** ebalduf has joined #openstack-cinder  03:50
*** MasterPiece has joined #openstack-cinder  03:51
*** ebalduf has quit IRC  03:51
*** takedakn has quit IRC  03:53
*** mriedem has quit IRC  03:55
*** hemna has joined #openstack-cinder  03:56
*** rushiagr_away is now known as rushiagr  04:06
*** hemna has quit IRC  04:06
*** harlowja is now known as harlowja_away  04:08
*** Longgeek has quit IRC  04:10
*** BharatK has joined #openstack-cinder  04:17
*** Mandell has joined #openstack-cinder  04:21
*** Longgeek has joined #openstack-cinder  04:25
*** Longgeek has quit IRC  04:26
*** lcurtis has quit IRC  04:27
*** barra204 has joined #openstack-cinder  04:35
*** MasterPiece| has joined #openstack-cinder  04:38
*** shakamunyi has quit IRC  04:39
*** sileht has quit IRC  04:40
*** MasterPiece has quit IRC  04:40
*** lcurtis has joined #openstack-cinder  04:41
*** rushiagr is now known as rushiagr_away  04:48
*** harlowja_at_home has joined #openstack-cinder  04:49
*** deepakcs has joined #openstack-cinder  04:52
*** BharatK has quit IRC  04:52
*** barra204 has quit IRC  04:53
*** r-daneel has quit IRC  04:54
*** nellysmitt has joined #openstack-cinder  04:58
*** hflai has quit IRC  05:02
*** nellysmitt has quit IRC  05:03
<openstackgerrit> Tina Tang proposed openstack/cinder: Attach/detach batch processing in VNX driver
*** bkopilov has joined #openstack-cinder  05:06
*** barra204 has joined #openstack-cinder  05:07
*** BharatK has joined #openstack-cinder  05:08
*** lcurtis has quit IRC  05:12
*** harlowja_at_home has quit IRC  05:18
*** coolsvap has joined #openstack-cinder  05:19
*** coolsvap has quit IRC  05:21
*** coolsvap has joined #openstack-cinder  05:23
*** ankit_ag has joined #openstack-cinder  05:25
*** ankit_ag has quit IRC  05:25
*** ankit_ag has joined #openstack-cinder  05:26
<openstackgerrit> Xi Yang proposed openstack/cinder: EMC VNX Cinder Driver iSCSI multipath enhancement
*** Ilja has joined #openstack-cinder  05:32
*** lcurtis has joined #openstack-cinder  05:35
*** nikesh_vedams has joined #openstack-cinder  05:39
*** anshul has joined #openstack-cinder  05:41
*** anshul has quit IRC  05:41
*** anshul has joined #openstack-cinder  05:42
*** nkrinner has joined #openstack-cinder  05:46
*** nellysmitt has joined #openstack-cinder  05:50
*** MasterPiece| has quit IRC  05:51
*** BharatK has quit IRC  05:52
*** sileht has joined #openstack-cinder  06:04
*** Mandell has quit IRC  06:07
*** Longgeek has joined #openstack-cinder  06:07
<openstackgerrit> OpenStack Proposal Bot proposed openstack/cinder: Imported Translations from Transifex
*** lpetrut has joined #openstack-cinder  06:11
*** BharatK has joined #openstack-cinder  06:14
*** BharatK has quit IRC  06:18
*** BharatK has joined #openstack-cinder  06:30
*** Mandell has joined #openstack-cinder  06:30
*** boris-42 has joined #openstack-cinder  06:33
*** barra204 has quit IRC  06:36
<openstackgerrit> Liu Xinguo proposed openstack/cinder: Huawei driver check before associating LUN to a LUN group
*** ronenkat has joined #openstack-cinder  06:38
*** sgotliv has joined #openstack-cinder  06:44
<openstackgerrit> Liu Xinguo proposed openstack/cinder: Huawei driver fix problems under multipath
*** wanghao has joined #openstack-cinder  06:50
<openstackgerrit> Liu Xinguo proposed openstack/cinder: Huawei driver remove LUN controller change
<openstackgerrit> yogeshprasad proposed openstack/cinder: Fixes total_capacity_gb value in CloudByte driver.
*** nellysmitt has quit IRC  07:04
*** afazekas has joined #openstack-cinder  07:06
*** davechen_ has quit IRC  07:07
*** davechen has joined #openstack-cinder  07:07
*** vukcrni has quit IRC  07:12
*** vukcrni has joined #openstack-cinder  07:15
*** lcurtis has quit IRC  07:20
*** takedakn has joined #openstack-cinder  07:29
*** ronenkat has quit IRC  07:32
*** ronis has joined #openstack-cinder  07:34
<openstackgerrit> Amit Saha proposed openstack/cinder: Fixed typo
*** Miouge has joined #openstack-cinder  07:35
*** lpetrut has quit IRC  07:38
*** e0ne has joined #openstack-cinder  07:40
*** TobiasE has joined #openstack-cinder  07:47
*** deepakcs has quit IRC  07:48
*** deepakcs has joined #openstack-cinder  07:48
*** bkopilov has quit IRC  07:48
*** bkopilov has joined #openstack-cinder  07:48
*** coolsvap has quit IRC  07:48
*** coolsvap has joined #openstack-cinder  07:48
*** nikesh_vedams has quit IRC  07:48
*** nikesh_vedams has joined #openstack-cinder  07:48
*** anshul has quit IRC  07:48
*** anshul has joined #openstack-cinder  07:48
*** nkrinner has quit IRC  07:48
*** nkrinner has joined #openstack-cinder  07:48
*** BharatK has quit IRC  07:49
*** BharatK has joined #openstack-cinder  07:49
*** boris-42 has quit IRC  07:49
*** boris-42 has joined #openstack-cinder  07:49
*** afazekas has quit IRC  07:49
*** afazekas has joined #openstack-cinder  07:49
*** Longgeek has quit IRC  07:53
*** Longgeek has joined #openstack-cinder  07:53
*** sandywalsh_ has joined #openstack-cinder  07:53
*** Longgeek has quit IRC  07:53
*** sandywalsh has quit IRC  07:55
*** sandywalsh_ has quit IRC  08:04
*** Miouge_ has joined #openstack-cinder  08:05
*** sandywalsh has joined #openstack-cinder  08:06
*** Miouge has quit IRC  08:06
*** Miouge_ is now known as Miouge  08:06
*** nkrinner has quit IRC  08:11
*** nkrinner has joined #openstack-cinder  08:14
*** ronenkat has joined #openstack-cinder  08:15
*** ronenkat_ has joined #openstack-cinder  08:16
*** e0ne has quit IRC  08:17
*** karimb has joined #openstack-cinder  08:18
*** ronenkat has quit IRC  08:20
*** openstackgerrit has quit IRC  08:21
*** openstackgerrit has joined #openstack-cinder  08:22
*** dulek has joined #openstack-cinder  08:29
*** alecv has joined #openstack-cinder  08:39
<openstackgerrit> Michal Dulko proposed openstack/cinder: Fix allocated_capacity tracking when rescheduling
*** chlong has quit IRC  08:42
*** bkopilov has quit IRC  08:44
*** bkopilov has joined #openstack-cinder  08:45
*** Longgeek has joined #openstack-cinder  08:54
*** jistr|off is now known as jistr  08:54
*** ronenkat__ has joined #openstack-cinder  08:55
*** ronenkat_ has quit IRC  08:58
*** lpetrut has joined #openstack-cinder  08:59
*** sgotliv_ has joined #openstack-cinder  09:01
*** sgotliv has quit IRC  09:04
<openstackgerrit> Mike Perez proposed openstack/cinder-specs: Standard Capabilities
*** nellysmitt has joined #openstack-cinder  09:05
*** jpich has joined #openstack-cinder  09:06
*** jordanP has joined #openstack-cinder  09:06
*** Longgeek has quit IRC  09:07
*** jistr has quit IRC  09:08
*** nellysmitt has quit IRC  09:10
*** sgotliv__ has joined #openstack-cinder  09:10
*** jistr has joined #openstack-cinder  09:11
*** Mandell has quit IRC  09:11
*** sgotliv_ has quit IRC  09:13
*** nileshb has joined #openstack-cinder  09:15
*** BharatK has quit IRC  09:16
*** takedakn has quit IRC  09:24
*** nshaikh has joined #openstack-cinder  09:28
*** BharatK has joined #openstack-cinder  09:30
*** e0ne has joined #openstack-cinder  09:31
*** nileshb has quit IRC  09:34
*** xyang1 has quit IRC  09:34
*** Miouge has quit IRC  09:37
*** davechen_ has joined #openstack-cinder  09:39
*** davechen has quit IRC  09:41
*** Miouge has joined #openstack-cinder  09:43
*** alex_xu has quit IRC  09:47
*** yuriy_n17 has joined #openstack-cinder  09:47
*** Miouge has quit IRC  09:48
*** aarefiev has quit IRC  09:49
<nikesh_vedams> thingee: still online :)  09:51
*** aarefiev has joined #openstack-cinder  10:00
*** Miouge has joined #openstack-cinder  10:00
*** ronenkat has joined #openstack-cinder  10:03
*** ronenkat__ has quit IRC  10:07
<ondergetekende_> Can someone help me? One of my SANs is reported as down, even though its update_service_capabilities is properly delivered by the MQ.  10:14
<ondergetekende_> All of my other SANs are online, and their messages are extremely similar.  10:14
*** e0ne is now known as e0ne_10:21
nikesh_vedamswe addressed your reviews10:24
nikesh_vedamsso +1 ? :)10:24
aarefievnikesh_vedams: ok, thx, I'll see later10:25
nikesh_vedamsya sure thanks10:25
*** e0ne_ is now known as e0ne10:30
*** ndipanov has joined #openstack-cinder10:30
*** Longgeek has quit IRC10:38
*** leopoldj has joined #openstack-cinder10:40
openstackgerritVincent Hou proposed openstack/cinder: Set the request_spec with the value of the new type in migration
*** Longgeek has joined #openstack-cinder10:44
*** wanghao has quit IRC10:45
*** wanghao has joined #openstack-cinder10:45
openstackgerritAmit Saha proposed openstack/cinder: Fixed typo
*** nellysmitt has joined #openstack-cinder11:06
*** nellysmitt has quit IRC11:10
openstackgerritJordan Pittier proposed openstack/cinder: Fix Scality SRB driver security concerns
*** kmartin has quit IRC11:16
*** mdenny has quit IRC11:17
*** dobson has quit IRC11:27
*** pschaef has joined #openstack-cinder11:27
*** ndipanov has quit IRC11:29
*** smoriya has quit IRC11:30
*** BharatK has quit IRC11:30
*** dobson has joined #openstack-cinder11:34
*** markvoelker has quit IRC11:34
*** BharatK has joined #openstack-cinder11:35
*** ndipanov has joined #openstack-cinder11:41
*** e0ne is now known as e0ne_11:48
*** rodrigod` is now known as rodrigods11:49
*** asselin__ has quit IRC11:52
*** e0ne_ has quit IRC11:58
*** aix has quit IRC12:01
*** BharatK has quit IRC12:02
*** MasterPiece has joined #openstack-cinder12:05
openstackgerritRushi Agrawal proposed openstack/cinder-specs: Kilo: Support public snapshot
*** MasterPiece has quit IRC12:12
*** BharatK has joined #openstack-cinder12:17
*** TobiasE1 has joined #openstack-cinder12:19
*** karimb has quit IRC12:20
*** MasterPiece has joined #openstack-cinder12:22
*** TobiasE has quit IRC12:22
*** bswartz has quit IRC12:23
openstackgerritRushi Agrawal proposed openstack/cinder: Hitachi: Remove duplicate CHAP opts
openstackgerritRushi Agrawal proposed openstack/cinder: EQLX: Consolidate CHAP config options
*** nellysmitt has joined #openstack-cinder12:27
*** markvoelker has joined #openstack-cinder12:35
*** tellesnobrega_ has joined #openstack-cinder12:36
*** markvoelker has quit IRC12:40
*** deepakcs has quit IRC12:40
*** IanGovett has joined #openstack-cinder12:41
*** nkrinner has quit IRC12:49
*** nkrinner has joined #openstack-cinder12:53
*** markstur_ has joined #openstack-cinder12:55
*** markstur_ has left #openstack-cinder12:55
*** TobiasE has joined #openstack-cinder12:56
*** aix has joined #openstack-cinder12:57
*** TobiasE1 has quit IRC12:57
*** tellesnobrega_ has quit IRC12:58
*** bkopilov has quit IRC12:58
*** kaufer has joined #openstack-cinder12:59
*** markvoelker has joined #openstack-cinder12:59
*** bswartz has joined #openstack-cinder13:02
*** markvoelker has quit IRC13:05
*** e0ne has joined #openstack-cinder13:05
*** eharney has joined #openstack-cinder13:05
*** nkrinner has quit IRC13:07
*** lcurtis has joined #openstack-cinder13:08
*** alexpilotti has joined #openstack-cinder13:09
*** nkrinner has joined #openstack-cinder13:10
*** EmilienM|afk is now known as EmilienM13:11
*** tellesnobrega has quit IRC13:11
*** Mandell has joined #openstack-cinder13:12
*** lcurtis has quit IRC13:14
*** Mandell has quit IRC13:16
*** pschaef has quit IRC13:17
*** tbarron has joined #openstack-cinder13:20
*** bill_az has joined #openstack-cinder13:24
openstackgerritVipin Balachandran proposed openstack/cinder: VMware: Refactor initialize_connection unit tests
*** karimb has joined #openstack-cinder13:31
*** Ilja has quit IRC13:33
*** e0ne is now known as e0ne_13:35
*** timcl has joined #openstack-cinder13:35
*** tellesnobrega has joined #openstack-cinder13:37
*** eharney has quit IRC13:39
*** crose has joined #openstack-cinder13:39
*** akerr has joined #openstack-cinder13:40
*** nshaikh has left #openstack-cinder13:40
*** avishay has joined #openstack-cinder13:45
*** Yogi1 has joined #openstack-cinder13:48
*** e0ne_ is now known as e0ne13:48
openstackgerritVipin Balachandran proposed openstack/cinder: VMware:Use datastore selection logic in new module
openstackgerritVipin Balachandran proposed openstack/cinder: VMware:Use datastore selection logic in new module
openstackgerritxing-yang proposed openstack/cinder: EMC VMAX driver Kilo update
*** eharney has joined #openstack-cinder13:53
openstackgerritKallebe Monteiro proposed openstack/python-cinderclient: Add -d short option for --debug
*** anshul has quit IRC13:55
*** crose has quit IRC13:59
*** TobiasE1 has joined #openstack-cinder14:00
*** sigmavirus24_awa is now known as sigmavirus2414:01
*** TobiasE has quit IRC14:01
*** nkrinner has quit IRC14:05
*** primechuck has joined #openstack-cinder14:06
*** nkrinner has joined #openstack-cinder14:08
*** dalgaaf has joined #openstack-cinder14:13
duleke0ne: Still working on
openstackLaunchpad bug 1409012 in Cinder "Volume becomes in 'error' state after scheduler starts" [High,In progress] - Assigned to Ivan Kolodyazhny (e0ne)14:13
duleke0ne: ?14:13
e0nedulek: hi. looks like it needs some discussion in a mailing list or a meeting14:14
e0nedulek: i didn't find easy solution for it:(14:15
e0nedid you see my comment?14:15
dulekSo you don't mind if I take a look at it as I have some free cycles? :)14:15
dulekMaybe fresh look at it will help.14:15
e0nedulek: sure! thanks for help14:16
e0nedulek: do you have an idea how to fix it?14:17
duleke0ne: Not yet but I'll see if I can come up with an idea14:18
*** lpetrut has quit IRC14:24
*** david-lyle_afk is now known as david-lyle14:25
*** ankit8188 has joined #openstack-cinder14:27
*** alexpilotti has quit IRC14:28
*** alexpilotti has joined #openstack-cinder14:29
*** ankit_ag has quit IRC14:29
*** jungleboyj has quit IRC14:29
*** humble_ has joined #openstack-cinder14:31
*** boris-42 has quit IRC14:32
*** thangp has joined #openstack-cinder14:34
*** aagrawal has joined #openstack-cinder14:35
*** aagrawal has quit IRC14:36
*** r-daneel has joined #openstack-cinder14:36
*** ankit8188 has quit IRC14:39
*** mriedem has joined #openstack-cinder14:39
*** boris-42 has joined #openstack-cinder14:42
*** smcginni1 is now known as smcginnis14:42
*** jcru has joined #openstack-cinder14:44
*** nshaikh has joined #openstack-cinder14:46
*** mtanino has joined #openstack-cinder14:50
*** nshaikh has quit IRC14:51
*** tellesnobrega has quit IRC14:51
*** tellesnobrega has joined #openstack-cinder14:54
*** tellesnobrega_ has joined #openstack-cinder14:54
*** tellesnobrega_ has quit IRC14:55
*** ronenkat_ has joined #openstack-cinder15:00
*** ronenkat has quit IRC15:03
*** bkopilov has joined #openstack-cinder15:04
dulekthangp: ping15:05
openstackgerritSteven Kaufer proposed openstack/cinder: WIP Generic filter support for volume queries
*** nkrinner has quit IRC15:06
*** hemna has joined #openstack-cinder15:19
*** Mandell has joined #openstack-cinder15:19
smcginnisI noticed in cinder/ there is a note for Duplicate stating "EOL this exception!"15:20
smcginnisAny reason why we should get rid of that exception?15:20
smcginnisIt's only used in hp_3par_common, but I don't see why we need to get rid of it.15:20
smcginnisAnyone have any opinions? Seems like we should either updated hp_3par_common and remove it, or get rid of that comment.15:21
*** TobiasE has joined #openstack-cinder15:21
*** kaufer has quit IRC15:21
*** TobiasE1 has quit IRC15:22
*** e0ne is now known as e0ne_15:23
eharneysmcginnis: not sure on the note, but looking at the usage, i'd think it could be a VolumeBackendAPIException instead15:24
eharneysmcginnis: it is used as a base type for other exceptions...15:24
*** rushil has joined #openstack-cinder15:25
smcginniseharney: Do you think we should refactor to remove it? I don't really see any harm in having it.15:25
*** mdenny has joined #openstack-cinder15:28
*** karimb has quit IRC15:28
eharneysmcginnis: it's probably ok, unless there is is some special error handling for a more specific exception15:29
dulekthangp: Do you plan to implement more Cinder resources as versionedobjects for Kilo?15:30
*** asselin has joined #openstack-cinder15:30
*** jpich has quit IRC15:30
dulekthangp: I have some free cycles now that persistence of workflows was deferred.15:31
thangpdulek: not for kilo, since we kind of have a short time period to get it done15:31
thangpdulek: but for liberty...yes15:31
thangpdulek: but you are welcome to start any other resources :)15:31
thangpdulek: there are a bunch15:32
smcginniseharney: I guess I'll leave it for now. Primarily since other duplicate-related exceptions use it as their base. Wouldn't be hard to remove it, but I like the logic of them having that common base. And doesn't seem like worth doing a patch solely to remove that comment. :)15:32
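smcginnis's point about keeping the common base is the usual Python exception-hierarchy argument: a shared Duplicate base lets callers catch the whole family of "already exists" errors with one handler. A minimal sketch of the idea — the class names below are illustrative only, not cinder's actual exception module:

```python
# Illustrative sketch of a shared "duplicate" base class -- names here
# mirror the idea discussed above, not cinder's real exception.py.

class Duplicate(Exception):
    """Base for all 'already exists' style errors."""

class VolumeTypeExists(Duplicate):
    pass

class SnapshotExists(Duplicate):
    pass

def create_volume_type(existing, name):
    # Raise the specific subclass; callers may catch the base.
    if name in existing:
        raise VolumeTypeExists(name)
    existing.add(name)

types = {"gold"}
try:
    create_volume_type(types, "gold")
except Duplicate as e:   # one handler covers every duplicate-style error
    handled = str(e)
```

Removing the base would force every caller to enumerate the concrete subclasses instead.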
dulekthangp: I wonder if I could start with backups and try to get them into Kilo. What do you think?15:32
thangpdulek: sure, sounds fine15:32
dulekthangp: Or maybe there's better candidate?15:32
thangpdulek: backup sounds good15:32
thangpdulek: there's also service15:33
dulekthangp: Okay, great. :)15:33
thangpdulek: the largest "object" is probably volume...that's going to take a lot of time to do15:33
dulekthangp: Oh, I thought that you've implemented it in snapshots patch?15:34
thangpdulek: only the skeleton15:34
thangpdulek: i plan to finish it up15:34
thangpdulek: in liberty15:34
*** jungleboyj has joined #openstack-cinder15:34
dulekthangp: Okay, code still uses SQLAlchemy objects15:34
thangpdulek: that's because snapshots references volume15:34
thangpdulek: yup15:34
dulekthangp: I get it.15:34
*** e0ne_ is now known as e0ne15:35
thangpdulek: could you base it off of the snapshot object patch?
dulekthangp: Okay, another thing is if you plan to move to oslo.versionedobjects15:35
*** hemna has quit IRC15:35
thangpdulek: yes...i'm not sure of the state of it15:35
*** nellysmitt has quit IRC15:35
thangpdulek: I see the github repo15:35
thangpdulek: but there are still changes coming in15:36
thangpdulek: e.g. it still has NovaObject as a name in base.py15:36
thangpdulek: I can follow up with dansmith on it15:36
dulekthangp: Two of my colleagues are working on that repo.15:36
thangpdulek: would be good to know when they have an initial version ready15:37
dulekthangp: OpenStack project is ready, code is forked from Nova and guys are working to get it generic.15:37
*** kaufer has joined #openstack-cinder15:37
dulekthangp: They were able to make Heat objectification functional depending on versionedobjects package.15:38
dulekthangp: So I guess initial version is ready. :)15:38
dulekthangp: Let me find their patchset...15:38
*** hemna has joined #openstack-cinder15:40
dulekthangp: "ImportError: No module named oslo_versionedobjects", cool, they forgot to add it to requirements.txt ;)15:43
*** nellysmitt has joined #openstack-cinder15:43
dulekthangp: Anyway you can take a look, they claim this should work after this change.15:43
thangpdulek: :)15:43
thangpdulek: wow...the jenkins build failed15:44
dulekthangp: Yeah, I see they haven't added dependency for the package. Anyway they claim that this is working locally. You can monitor the patch. I'll try to keep you updated as well.15:45
thangpdulek: ok thx...the change should be pretty straightforward to import oslo_versionedobjects15:46
dulekthangp: I hope that too. Okay, thanks for clarifying, I'll start with another patch tomorrow. Have a nice day!15:48
thangpdulek: welcome15:48
*** hemna has quit IRC15:52
thangpdulek: so i pinged dansmith, and he says there is still on going work for oslo_versionedobjects15:54
thangpdulek: we should wait until they have a release15:54
*** BharatK has quit IRC15:55
dulekthangp: Yeah, definitely. I've also noticed that there's no pip package.15:55
dansmithwe have a bunch of work items to chew through before we'll be ready15:55
dansmithit's only been up for two days :)15:55
dulekdansmith: yeah, inc0 got me a little confused15:56
dulekdansmith: thanks for clarifying15:56
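For context, the pattern dulek and thangp are discussing looks roughly like this: an object declares a VERSION and a set of fields, and can round-trip through a plain dict so services running different versions can negotiate over RPC. This is a toy illustration of the idea only, not the oslo.versionedobjects API:

```python
# Toy model of the versioned-object pattern -- NOT oslo.versionedobjects.

class VersionedObject:
    VERSION = "1.0"
    fields = ()

    def to_primitive(self):
        # Serialize to a plain dict suitable for putting on the wire.
        data = {name: getattr(self, name) for name in self.fields}
        return {"version": self.VERSION, "data": data}

    @classmethod
    def from_primitive(cls, primitive):
        # Rebuild the object from the wire format.
        obj = cls()
        for name, value in primitive["data"].items():
            setattr(obj, name, value)
        return obj

class Backup(VersionedObject):
    VERSION = "1.0"
    fields = ("id", "status")

b = Backup()
b.id, b.status = "abc", "available"
wire = b.to_primitive()
restored = Backup.from_primitive(wire)
```

The real library adds typed fields, change tracking, and version compatibility rules on top of this basic shape.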
*** number80 has joined #openstack-cinder15:56
*** rajinir has joined #openstack-cinder16:00
*** rajinir_r has quit IRC16:01
*** marcusvrn1 has joined #openstack-cinder16:03
*** marcusvrn has quit IRC16:04
*** BharatK has joined #openstack-cinder16:07
*** rmesta has joined #openstack-cinder16:13
*** kmartin has joined #openstack-cinder16:14
openstackgerritSteven Kaufer proposed openstack/cinder: WIP Generic filter support for volume queries
*** laudo has joined #openstack-cinder16:16
*** nshaikh has joined #openstack-cinder16:17
*** rushil has quit IRC16:18
*** dulek has quit IRC16:20
*** rmesta1 has joined #openstack-cinder16:23
*** rmesta has quit IRC16:24
*** rmesta1 is now known as rmesta16:25
*** Mandell has quit IRC16:26
*** j_king has quit IRC16:29
*** j_king has joined #openstack-cinder16:29
*** xyang has joined #openstack-cinder16:29
*** markstur_ has joined #openstack-cinder16:30
*** markstur has quit IRC16:30
laudoCan I do snapshot between pools in Juno?16:31
*** jdurgin1 has joined #openstack-cinder16:31
jgriffithlaudo: you mean snap a volume in one pool, expecting the snap to be created in another pool?16:32
laudojgriffith: yes16:32
*** tsekiyama has joined #openstack-cinder16:33
jgriffithlaudo: No, I don't think you can do that, unless there are some backends that have some "trickery" going on16:33
openstackgerritAdrien Vergé proposed openstack/cinder: Pass region name to Nova client
jbernardlaudo, jgriffith: rbd is not one of those16:33
jgriffithjbernard: ahhh16:33
jgriffithjbernard: laudo didn't know we were talking RBD16:33
* jgriffith should always first ask "what backend" these days :)16:34
laudojgriffith: we do :-)16:34
*** dannywilson has joined #openstack-cinder16:35
jbernardlaudo: im not certain it couldn't be supported, but it isn't at the moment16:35
laudojgriffith: That means the snapshot has to be on the same backend as the actual snapshotted data, correct?16:36
jgriffithlaudo: yeah, the scheduler directs that16:36
jgriffithlaudo: even if your backend doesn't require it16:37
jgriffithlaudo: I used to allow retype on snap which would do what you want16:37
jgriffithlaudo: but DuncanT hated it and you can't do it any more16:37
*** tsekiyama has quit IRC16:37
*** tsekiyama has joined #openstack-cinder16:38
DuncanTWhat did I hate?16:38
*** dannywilson has quit IRC16:38
jgriffithlaudo: so curious though... what's the reasoning for ensuring it's not in the same pool?16:38
jgriffithDuncanT: LOL16:38
DuncanTOh, that. When it works on LVM, I'll hate it less16:38
jgriffithDuncanT: the snapshot to a different type16:38
jgriffithfair point16:38
*** dannywilson has joined #openstack-cinder16:38
*** tellesnobrega_ has joined #openstack-cinder16:39
jgriffithDuncanT: for the record, I've moved on and work around it quite easily.  So it's a "don't care" for me now16:39
*** tellesnobrega_ has quit IRC16:39
jgriffithDuncanT: but I'm lucky, I don't deal with that shit show they call pools :)16:39
*** patrickeast has joined #openstack-cinder16:39
jgriffithDuncanT: Pools are for swimming, we should keep it that way16:40
DuncanTjgriffith: Me neither :-) I really don't care about pools, just types16:40
DuncanTThey're good for fishing too16:40
*** ronenkat_ has quit IRC16:40
jgriffithfloating in an old inner tube as well16:40
laudojgriffith: basically to get around cinder backups. They want to have a live snapshot which they could restore, but don't want to put it on the same ceph backend in case they lose the backend.16:41
jgriffithlaudo: ahh... makes sense16:42
jgriffithlaudo: the old "snapshot's as backups"16:42
jgriffithlaudo: there's a long history there FWIW16:43
laudojgriffith: correct16:43
jgriffithlaudo: at the end of the day it always came back to exactly what you describe as the problem (same backend)16:43
jgriffithlaudo: so I know this might be an awful idea, but why not do something custom here?16:44
*** dustins has joined #openstack-cinder16:44
jgriffithlaudo: or cheat and do something like replicate to another pool?16:44
laudojgriffith: hmm, have not thought about that. But might be a good idea.16:45
jgriffithlaudo: depends on the customer... and depends who you ask16:45
*** erlon has joined #openstack-cinder16:45
jgriffithsome see that sort of thing as blasphemous :)16:45
*** afazekas has quit IRC16:46
*** Yogi1 has quit IRC16:46
*** anshul has joined #openstack-cinder16:48
*** tellesnobrega_ has joined #openstack-cinder16:52
*** leopoldj has quit IRC16:52
*** e0ne has quit IRC16:54
*** Apoorva has joined #openstack-cinder16:54
laudojgriffith: Can you replicate snapshots? Never did it.16:55
*** e0ne has joined #openstack-cinder16:55
laudo Is this implemented?16:56
*** blinky_ghost has joined #openstack-cinder16:57
blinky_ghosthi all, I have an openstack deployment with ceph for image and cinder storage. When I try to copy an image to a volume I get this error: "computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500" Any hint? thanks16:57
*** avishay has quit IRC16:58
openstackgerritYuriy Nesenenko proposed openstack/cinder: Fix comments style according to the Hacking Rules
*** alecv has quit IRC17:00
openstackgerritAdrien Vergé proposed openstack/cinder: Pass region name to Nova client
*** _cjones_ has joined #openstack-cinder17:04
*** e0ne has quit IRC17:05
*** e0ne has joined #openstack-cinder17:06
*** rajinir has quit IRC17:07
jgriffithblinky_ghost: usually means the service isn't running right, but you can do other things....17:09
jgriffithblinky_ghost: check cinder-api logs; might be your cinder node can't talk to your image store17:09
jgriffithblinky_ghost: but that's just a guess17:09
jgriffithcinder-scheduler logs would be next17:09
jgriffithblinky_ghost: search on "ERROR" or "TRACE"17:10
blinky_ghostjgriffith: I can create volumes, and launch instances, I just cannot copy from glance to cinder volume.17:10
blinky_ghostjgriffith: ok, checking17:10
jgriffithblinky_ghost: right, that's what I mean when I said "can do other things..."17:10
jgriffithblinky_ghost: make sure cinder node can access Glance endpoint too of course17:11
*** coolsvap is now known as coolsvap_17:12
*** fischerw has joined #openstack-cinder17:13
*** nshaikh has quit IRC17:14
blinky_ghostjgriffith: nothing gets to the logs, I think it fails before the request gets to the api17:16
jgriffithblinky_ghost: hmmm17:16
jgriffithblinky_ghost: you're doing "create --image-id xxxx"?17:17
blinky_ghostjgriffith: cinder --debug create --image-id 0a4240f5-37a0-44a2-bd80-2f3547090abb --volume-type iscsi 1017:17
jgriffithshould be something in the logs....17:17
guitarzanthen grep all over the place for that request id17:18
jgriffithsince 500 is "unknown" general cinder exception it's kinda useless17:18
guitarzanif you don't get one, then something is busted with the wsgi server17:18
jgriffithblinky_ghost: yeah.... what guitarzan said! :)17:18
guitarzanif you don't find it in cinder-api.log something is really busted :)17:18
jgriffithblinky_ghost: yeah, but if you can do regular create as you say that's what's odd about that17:19
jgriffithhas to be something in the logs for the request17:19
guitarzanI'd make that assumption too17:19
*** hemnafk is now known as hemna17:19
blinky_ghostwait I got something17:20
blinky_ghostjgriffith guitarzan
jgriffithblinky_ghost: this is where you're likely going to end up:
jgriffithblinky_ghost: 2.6?  Warlock?  What are you running?17:21
jgriffithblinky_ghost: regardless... it's clearly your glancecient that fails there17:22
blinky_ghostjgriffith: 2.6? cinder?17:22
jgriffithpython 2.617:22
jgriffithdoesn't matter17:22
jgriffithwell... assuming your on an older version of openstack17:22
jgriffithie not K17:22
guitarzanit's got the create volume api flow17:23
guitarzanso it's not super old17:23
jgriffithguitarzan: good point17:23
jgriffithblinky_ghost: bottom line, your glance client calls are failing17:23
guitarzanbut that's definitely an odd stack trace17:23
jgriffithblinky_ghost: for something in the 'warlock' module it appears to me17:23
guitarzanblinky_ghost: jgriffith or maybe a super old glance?17:23
jgriffithor something passed in to it17:23
blinky_ghostjgriffith, guitarzan maybe because I use ceph to store my images?17:24
jgriffithblinky_ghost: afraid I don't know17:24
jgriffithblinky_ghost: might want to try #openstack for support help17:24
blinky_ghostopenstack-glance-2014.1-2.el6.noarch python-glanceclient-0.12.0-1.el6.noarch17:24
guitarzanthe 'deleted' attribute not being there looks really suspicious to me17:25
guitarzanalthough even my ancient havana version has a default value for that getattr call17:25
*** EmilienM is now known as EmilienM|afk17:25
*** jistr has quit IRC17:27
guitarzanicehouse has the default value too17:27
guitarzanblinky_ghost: I have no idea what cinder version you're running :)17:28
guitarzanblinky_ghost: ah, I guess maybe the getattr default value was put in a later patch? maybe17:29
blinky_ghostguitarzan: I have an update available, testing17:29
guitarzan66afa155 (Josh Durgin             2012-08-12 02:21:0017:30
guitarzanthat's old :)17:30
guitarzanI'm a little confused17:30
blinky_ghostguitarzan: I use Icehouse, RDO repos17:30
guitarzanblinky_ghost: ok, cool, that's a start17:31
guitarzan1e488339 (Mike Perez              2014-04-17 18:46:36 -0700 434)17:32
guitarzanthingee had to fix it17:32
blinky_ghostguitarzan: it seems to work now :)17:32
guitarzanblinky_ghost: excellent17:32
jgriffithtiming on that was priceless17:32
blinky_ghostguitarzan: I've updated glance and cinder17:32
guitarzanhere you go:
guitarzanstill seems weird to not have a deleted attr :)17:33
blinky_ghostthanks for the help :)17:33
guitarzanblinky_ghost: you're welcome17:33
blinky_ghostnow I have to debug my horrible ceph performance :)17:34
guitarzanI am zero help there, good luck :)17:34
*** leeantho has joined #openstack-cinder17:34
*** rushil has joined #openstack-cinder17:37
jbernardblinky_ghost: the cmdline tools have some benchmark options baked in17:37
jbernardblinky_ghost: if your issues are with ceph itself17:38
blinky_ghostjbernard: I have terrible write performance17:38
jgriffithWTF.. that's less than useful: ERROR (Conflict): Cannot 'unpause' while instance is in vm_state active (HTTP 409)17:38
*** jdurgin1 has quit IRC17:38
*** Yogi1 has joined #openstack-cinder17:40
guitarzanjgriffith: oh? what else do you need to know?17:41
blinky_ghostjbernard: testing iscsi volume with this command: dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct I get about 97 MB/s with ceph I get about 7MB/s17:42
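For reference, a rough Python equivalent of that dd write test (1 MiB sequential blocks plus an fsync at the end). Unlike dd's oflag=direct this goes through the page cache, so treat the result as an upper bound rather than a device benchmark:

```python
# Rough stand-in for: dd if=/dev/zero of=tempfile bs=1M count=1024 oflag=direct
# (smaller count here, and buffered I/O + fsync instead of O_DIRECT).
import os
import tempfile
import time

def write_throughput(path, block_mib=1, count=64):
    block = b"\0" * (block_mib * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure data actually hit the disk
    elapsed = time.monotonic() - start
    return (block_mib * count) / elapsed   # MiB/s

with tempfile.NamedTemporaryFile() as tmp:
    rate = write_throughput(tmp.name)
```

The 97 MB/s vs 7 MB/s gap quoted above would show up the same way with either tool; the tool just needs to bypass (or at least flush past) the client-side cache.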
jgriffithguitarzan: well.... what I mean is;17:42
jgriffithsomething caused all my instance to "pause"17:42
guitarzanit doesn't *read* well for sure :)17:42
*** emagana has joined #openstack-cinder17:42
jgriffithno big deal17:42
jgriffithI'll just "unpause" them with this handy "nova unpause" command17:42
jgriffithOh... wait, that doesn't really work with paused instances17:42
guitarzanjgriffith: crazy hypervisor hijinks outside of nova perhaps?17:42
jgriffithguitarzan: yeah, I think I know what caused it :)17:43
jgriffithguitarzan: I ran virt-sparsify on an image17:43
guitarzanit sounds to me like nova doesn't know it's paused17:43
jgriffithand installed some virtfs tools17:43
guitarzanso it won't unpause it17:43
jgriffithguitarzan: Ahh....  ya know what I think you're right17:43
jgriffithguitarzan: lemme try something....17:43
guitarzandb state vs hypervisor reality is a nasty beast17:43
guitarzanso you should just unpause it in the hypervisor itself17:44
jbernardblinky_ghost: take a look at rados' bench flag17:44
jgriffithguitarzan: so my bad... I didn't realize the "real" workflow is to pause in KVM AND set Status to Paused17:44
jbernardblinky_ghost: you can benchmark each osd individually (i believe) as well as an entire pool17:44
jgriffiththat makes sense then17:44
jbernardblinky_ghost: this might give some context:
blinky_ghostjbernard: I can run this on my mon hosts, right?17:46
jbernardblinky_ghost: as long as the hostname resolves and the rados client is installed, i believe so17:47
blinky_ghostjbernard: thanks17:47
jbernardblinky_ghost: no problem, hope it helps.  let me know how it goes17:48
jbernardblinky_ghost: i only found those options recently, so im still learning as well17:48
*** mdbooth has quit IRC17:51
*** ndipanov has quit IRC17:51
*** humble_ has quit IRC17:53
*** tellesnobrega_ has quit IRC17:53
jgriffithguitarzan: huh... well that's cool.  I have to take back everything bad I ever said about Nova :)17:54
jgriffithguitarzan: ok, maybe that's an exaggeration17:54
*** vmtyler has quit IRC17:54
guitarzanjgriffith: haha!17:55
*** jwang has quit IRC17:55
*** coolsvap_ is now known as coolsvap17:55
guitarzanjgriffith: wouldn't the "real workflow" be to pause in nova?17:55
jgriffithguitarzan: yeah :)17:55
guitarzanok, just making sure I didn't miss something :)17:56
jgriffithguitarzan: so what I wasn't aware of (stupid me) was that it wasn't Nova that paused things17:56
jgriffithand that there was a vm_state of "paused" in Nova17:56
guitarzanto be honest, i was making an educated guess :)17:56
jgriffithso when I did a list and saw everything paused I was like "that's weird, ok, just unpause"17:56
jgriffithbut that failed17:56
guitarzanoff of that excellent error message :D17:57
*** mdbooth has joined #openstack-cinder17:57
jgriffithso I was like WTF?  why have an unpause if you can't unpause :)17:57
guitarzanoh, your nova list said something about paused?17:57
jgriffithNow I know better17:57
jgriffithguitarzan: yeah!17:57
guitarzanthat surprises me a bit17:57
jgriffithguitarzan: that was the whole thing17:57
guitarzanoh, the power state17:57
jgriffithSo PowerState was paused17:57
guitarzanya, got it17:57
jgriffithnow that I know there's an equiv vm_state it all makes sense17:58
*** yuriy_n17 has quit IRC17:58
guitarzannova could be smarter about that if it wanted to, but there are probably a lot of ugly edge cases17:58
blinky_ghostjbernard: I want to change my public network to a different Vlan/address in ceph. Is it safe to do it? Thanks17:58
jgriffithguitarzan: btw.. neat trick if you ever have the need:17:58
jgriffithguitarzan: just go through and pause all of the VM's through nova17:58
*** tellesnobrega_ has joined #openstack-cinder17:58
jgriffithit'll set the vm_state and power_state appropriately17:58
guitarzanjgriffith: yeah, that seems smart17:58
jgriffiththen you can just do unpause on all of them and voila17:58
jgriffithguitarzan: yeah... pretty cool17:59
jgriffithbut I'm easily impressed/entertained :)17:59
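The pause/unpause confusion above boils down to nova's vm_state (its db view) diverging from the hypervisor's power_state. A toy model of the behavior described — the names are illustrative, not nova's code:

```python
# Toy model: nova rejects unpause when its own vm_state says 'active',
# even if the hypervisor really has the guest paused.

class Instance:
    def __init__(self):
        self.vm_state = "active"       # nova's db view
        self.power_state = "running"   # hypervisor reality

    def pause(self):
        # The "real" workflow: pause the guest AND record it in vm_state.
        self.vm_state = "paused"
        self.power_state = "paused"

    def unpause(self):
        if self.vm_state != "paused":
            # The 409 Conflict from the log above.
            raise RuntimeError(
                "Cannot 'unpause' while instance is in vm_state %s"
                % self.vm_state)
        self.vm_state = "active"
        self.power_state = "running"

vm = Instance()
vm.power_state = "paused"   # paused behind nova's back (e.g. directly in KVM)
try:
    vm.unpause()
    recovered = True
except RuntimeError:
    recovered = False

vm.pause()     # jgriffith's trick: pause via nova first to sync both states...
vm.unpause()   # ...then unpause succeeds
```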
jbernardblinky_ghost: it should be, many deployments configure storage traffic (and others) on isolated networks17:59
*** _cjones_ has quit IRC17:59
blinky_ghostjbernard: I think I misunderstood the network concept, so I'm running the public network on the same network that I use for openstack management17:59
jgriffithand after that sparsify only took myimage from 5 Gig to 4 Gig17:59
jgriffithso it really wasn't worth the trouble AT ALL17:59
guitarzanyou have an extra gig now17:59
blinky_ghostjbernard: maybe that's why I'm in pain now :)18:00
jbernardblinky_ghost: i assume you've discovered the source of your poor write performance18:00
jgriffithguitarzan: is the eternal optimist!18:00
*** anshul has quit IRC18:00
*** jordanP has quit IRC18:00
*** aix has quit IRC18:00
jgriffithguitarzan: I wanted something under a gig, but not sure why I care TBH18:00
jgriffithjust something to obsess about this AM18:01
jgriffithgiven the fact now that I look at it and the binary I installed is 3Gig not sure WTF I was expecting :)18:01
blinky_ghostjbernard: I hope so :)18:02
guitarzanjgriffith: haha :D18:02
* jgriffith is silly18:02
guitarzanjgriffith: so now that we're all multiple volume managered, has there been any work on volume-manager maintenance mode stuff?18:03
guitarzanI haven't been able to pay much attention lately18:04
guitarzanI think it's been brought up for a while, but the ability to not perform any new actions, but finish anything in progress18:04
*** rushil has quit IRC18:06
jgriffithguitarzan: not really...18:06
jgriffithguitarzan: good deal of effort going on for things like objects and taskflow18:06
guitarzanjgriffith: task persistence too?18:07
jgriffithguitarzan: some craziness to bubble up all available extra-specs18:07
jgriffithguitarzan: that hit some snags18:07
jgriffithguitarzan: DuncanT I believe discovered it falls apart in a multi-node env18:07
jgriffithguitarzan: but I'm not completely in the loop on that conversation18:07
guitarzanmultiple nodes for the same 'host'18:07
jgriffithguitarzan: I believe so yes18:07
guitarzanah, well, initially I'm thinking much smaller18:08
jgriffithguitarzan: I made a simple proposal for persistence in Austin using a queue18:08
jgriffithguitarzan: but honestly I know I will not have time to work on that18:08
guitarzansimple signal handler to shutdown the incoming rabbit connections18:08
*** rushil has joined #openstack-cinder18:08
guitarzanI'll let you know how it works out :)18:08
jgriffithguitarzan: nice!18:08
guitarzanthat way we can do maintenance on the old managers, upgrade, bring up new ones18:09
guitarzanmight have to nuke some pidfiles or something18:09
guitarzanI'm not sure about that18:09
guitarzanbut restarting volume managers is a problem for us18:09
guitarzansince a lot of our things take a while18:10
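The drain-style shutdown guitarzan sketches — on a signal, stop taking new work but let in-flight operations finish — could look something like this. A hypothetical illustration only, not cinder's volume manager:

```python
# Hypothetical drain-on-SIGTERM sketch: the handler flips a flag so no new
# messages are accepted, while jobs already in progress run to completion.
import signal

class Worker:
    def __init__(self):
        self.accepting = True
        signal.signal(signal.SIGTERM, self._drain)

    def _drain(self, signum, frame):
        # Stop consuming from the incoming queue; don't kill running work.
        self.accepting = False

    def handle(self, job, in_progress):
        if not self.accepting:
            return False          # maintenance mode: refuse new work
        in_progress.append(job)
        return True

w = Worker()
jobs = []
ok_before = w.handle("create-volume", jobs)
signal.raise_signal(signal.SIGTERM)   # simulate the maintenance signal
ok_after = w.handle("new-request", jobs)
```

In a real service the flag would close the RabbitMQ consumer connections, which is exactly the "signal handler to shutdown the incoming rabbit connections" idea above.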
jgriffithguitarzan: yeah, I don't think you're alone on that18:10
*** sgotliv__ has quit IRC18:10
guitarzanwe sit and spin in our driver :)18:10
guitarzanour biggest problem is all of the stuff that happens *after* a create from image18:11
jgriffithguitarzan: there are very few who actually operate at real scale like you providers do18:11
guitarzanI'm not sure why that stuff doesn't happen before18:11
jgriffithguitarzan: oh?18:11
guitarzanwe can't just reset-state and be done with it18:11
guitarzanyeah, the bootable flag and glance meta happens *after* the driver call to clone image18:11
jgriffithguitarzan: that seemed reasonable to me, but I guess it jacks things up if you resart after the download eh?18:12
guitarzanor during18:12
guitarzanbut yeah, since we use clone and not the built in copy image18:12
jgriffithguitarzan: well... during you just need to hang it up and start over IMO18:12
guitarzanour driver is waiting for the backend to be done making the copy18:13
jgriffithguitarzan: but after that, we should be able to break out the remaining tasks18:13
guitarzaneverything ends up finishing in lunr, but the volume manager that's polling for us to be done dies off18:13
laudowhat version was retyping added to cinder? I can't find it in 1.0.918:13
patrickeasthemna: ping18:14
*** vilobhmm has joined #openstack-cinder18:14
patrickeasthemna: hey, have you had a chance to read through my replies on ?18:14
hemnanot yet18:15
patrickeasti wanted to make sure i'm on the same page as you wrt the 2 way chap18:15
hemnaok I'll take a look18:15
*** afazekas has joined #openstack-cinder18:17
*** _cjones_ has joined #openstack-cinder18:17
e0nehi all!18:17
hemnapatrickeast, so my initial -1 was based upon the names being used for the CHAP that you had planned on storing18:18
e0nedoes anybody know why we are using 'detailed=True' by default for api calls from client?18:18
hemnayou used initiator, when in fact what you were storing are target CHAP creds.18:18
hemnaI don't think we even support 2 way CHAP at the moment.  I could be wrong though18:18
patrickeasthemna: yea i don’t think we do, but i remember seeing it mentioned before18:19
hemnaI know nova doesn't send the initiator CHAP creds in the connector at initialize_connection time18:19
hemnait's something we could eventually do18:19
*** thingee has joined #openstack-cinder18:19
patrickeastlonger term it is something i would even be interested in looking into18:19
patrickeastso i figure its worth making sure this thing can handle it18:19
hemnaeven if we do, I'm not sure the initiator chap creds even need to be stored in that db table in cinder18:20
hemnathey should go into the array at initialize_connection time18:20
hemnaso I think short term for your patch, just change the names18:20
*** asmith_brcd has joined #openstack-cinder18:20
hemnaas you are trying to store the target CHAP creds18:20
*** harlowja_away is now known as harlowja18:21
patrickeasthmm ok, but then for the table the key still needs to be per initiator so it can be queried and passed in to the driver for initialize_connection, unless we want to grab them all and let the driver figure out what the target is18:22
*** EmilienM|afk is now known as EmilienM18:22
patrickeasti can change around the wording in the spec to be more clear that it is the target credentials in my chap use case18:23
hemnaok I posted a few comments18:23
*** MasterPiece has quit IRC18:23
hemnahopefully to clear things up.18:23
patrickeastcool, thanks!18:23
hemnanp!  thanks for working on this.18:23
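For readers following along: the table patrickeast and hemna are discussing can be pictured as a small key/value store of *target* CHAP credentials keyed per initiator, looked up again at initialize_connection time. A minimal sketch, with all names hypothetical (this is not the actual Cinder schema):

```python
# Hypothetical sketch of the driver-private-data table discussed above.
# Per hemna's naming point, these are TARGET CHAP creds (what the driver
# stored), not initiator creds.

driver_initiator_data = {}  # (initiator_iqn, namespace) -> {key: value}

def save_target_chap(initiator_iqn, username, password):
    """Persist target CHAP creds so the driver can retrieve them for the
    same initiator on a later initialize_connection call."""
    driver_initiator_data[(initiator_iqn, 'target_chap')] = {
        'chap_username': username,
        'chap_password': password,
    }

def get_target_chap(initiator_iqn):
    """Return the stored creds for this initiator, or None."""
    return driver_initiator_data.get((initiator_iqn, 'target_chap'))
```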
guitarzanwow, there's chap creds for the initiator?18:24
laudoany doc on how to set cinder policies for retyping?18:26
openstackgerritGloria Gu proposed openstack/cinder: HP 3par driver filter and evaluator function
*** emagana has quit IRC18:29
openstackgerritPatrick East proposed openstack/cinder-specs: Add DB table for driver private data
*** emagana has joined #openstack-cinder18:29
*** emagana has quit IRC18:30
*** emagana has joined #openstack-cinder18:30
*** tellesnobrega_ has quit IRC18:31
*** annegent_ has joined #openstack-cinder18:32
*** boris-42 has quit IRC18:32
kauferjgriffith: Do you have a minute to discuss your comments on this?
jgriffithkaufer: for you... certainly :)18:33
*** e0ne is now known as e0ne_18:33
laudodoes someone know when rbd is going to get implemented for retyping? Is there a roadmap?18:33
kauferjgriffith: Thanks, first did my rationale for breaking up the fix into 2 "phases" make sense?18:34
jgriffithNot following the two phases part?18:34
jgriffithkaufer: yeah18:35
jgriffithkaufer: so that addresses what i asked for18:35
jgriffithkaufer: you see my point though?  Without that second part this doesn't really "change" much of anything18:35
*** e0ne_ is now known as e0ne18:35
*** tellesnobrega_ has joined #openstack-cinder18:35
*** jaypipes has quit IRC18:36
jgriffithkaufer: sure it keeps you from looping on backends that don't support replication (that's a win)18:36
jgriffithbut it's kinda minor in the big scheme of things....18:36
kauferjgriffith: Yep, but I thought that there was still value in no-oping the periodic task for drivers that don't support it18:36
jgriffithsay I have a backend that supports it, but isn't using it18:36
jgriffithand I have 5000 volumes on that backend (not unusual)18:37
jgriffithI do a "for v in volumes_all:"18:37
jgriffiththat sucks18:37
jgriffiththrow in a db.volume_update for each one...18:37
jgriffiththat sucks even more18:37
kauferjgriffith: It seemed (to me at least) that doing an update to the DB API wasn't appropriate for a bug fix, hence the new BP and 2 commits for this bug18:38
jgriffithwell... that last part isn't the same but you see what I mean18:38
jgriffithkaufer: suppose that's fine, but I don't know if I agree on the "db change" not being acceptable for a bug fix18:38
jgriffithit surely won't be backported18:38
jgriffithbut I wouldn't bother backporting this anyway I don't think18:38
jgriffithas I said, I don't really see that it changes much18:38
jgriffithkaufer: now... what might be more interesting:18:39
kauferjgriffith: Yep, totally understand, I'm hoping that the "phase 2" work will help in that area, since then the DB API will filter out volumes that have replication disabled18:39
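kaufer's "phase 2" amounts to pushing the filter down so the periodic task never iterates volumes with replication disabled. A rough sketch of the idea (the status values and function name are illustrative, not the real cinder.db API):

```python
# Sketch of filtering before the poll loop, instead of
# "for v in volumes_all:" over all ~5000 volumes on the backend.
# 'disabled' as a sentinel value is an assumption for illustration.

def volumes_needing_replication_poll(volumes):
    """Return only volumes whose replication is actually enabled, so the
    periodic task (and the per-volume db.volume_update jgriffith objects
    to) skips everything else."""
    return [v for v in volumes
            if v.get('replication_status') not in (None, 'disabled')]
```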
kauferOk, sorry, maybe I created the BP too quickly18:39
*** asmith_brcd has quit IRC18:39
jgriffithmeh... forget that idea18:39
jgriffithkaufer: nah... you didn't18:39
jgriffithkaufer: you're good18:39
*** mriedem has quit IRC18:40
jgriffithkaufer: so my real problem here is the looping through all volumes and updating part18:40
jgriffithkaufer: that's "the bug" IMO18:40
jgriffithit *will* come back and bite us, I can pretty much guarantee it18:41
patrickeastso one thing slightly related to this i’ve been wondering about is if it would be worth adding a bulk get_replication_status method on the driver that takes a list of volumes… right now i’ve got some concerns that when we switch on replication if there are too many volumes it will just assault our rest api with 5000 requests where we could just do a single one18:41
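patrickeast's bulk idea, sketched: one backend round trip for the whole volume list instead of one REST request per volume. Method names here are hypothetical, not an existing driver interface:

```python
# Hypothetical bulk status call: the driver asks its backend once and
# the manager maps the answer onto the volumes it cares about.

def get_replication_status_bulk(driver, volumes):
    """Return {volume_id: status} using a single backend query instead
    of len(volumes) separate requests."""
    wanted = {v['id'] for v in volumes}
    # one round trip; the driver returns a {volume_id: status} map
    status_map = driver.query_replication_statuses()
    return {vid: status_map.get(vid, 'unknown') for vid in wanted}
```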
kauferjgriffith: Ok, so filtering on the DB query will help with that, but you don't think that that is sufficient?18:41
kauferjungleboyj: ^ FYI18:41
jgriffithkaufer: I think that is much better than what we have18:42
jgriffithkaufer: but I think the design is flawed when the manager needs to do this at all18:42
*** rushil_ has joined #openstack-cinder18:42
jgriffithkaufer: polling in a large scale distributed system isn't such a good thing IMO18:43
* jungleboyj scrolls back up.18:43
jgriffithkaufer: when you're doing that polling on a resource like "volumes"18:43
jgriffithkaufer: I guess I'm not offering a better suggestion (other than rewrite of the code)18:44
kauferjgriffith: Yep, it is doing this work for every volume in a periodic task that runs every min18:44
jgriffithso you probably shouldn't get too wrapped up in what i say here :)18:44
kauferjgriffith: :)18:44
*** e0ne has quit IRC18:44
jgriffithkaufer: exactly... that's a pretty bad idea IMO18:44
*** rushil has quit IRC18:44
jgriffithkaufer: I wonder if there's a way to convert this to some sort of a notification update?18:45
jgriffithso don't poll... but send a notification from the backend if something changes18:45
jgriffithassuming of course the backend should *know* this18:46
jgriffithkaufer: so let's think about this...18:46
kauferjgriffith: it's a good idea, i'll do some investigation, not sure what is possible18:46
jgriffithkaufer: the whole point of this periodic is to enable a fail over right?18:46
jgriffithkaufer: an "auto" fail over that is18:47
jgriffithkaufer: ping the driver, see if rep status has changed, acting as a message that says "hey, change the model info so you point to the other volume/backend"18:47
jgriffithkaufer: does that jive with you?18:48
jungleboyjjgriffith: That was not how I interpreted this.  Is that future work?  Right now the promotion is a manual process.18:49
kauferjgriffith: AFAIK, but TBH I don't have all of the background here, jungleboyj do you know?18:49
jungleboyjjgriffith: This is just to keep the status of the replica updated.  Is it consistent or not.18:49
*** MasterPiece has joined #openstack-cinder18:49
patrickeastright now iirc it just updates the status so if you look at the command line it is ok, and the promote command works18:49
jgriffithjungleboyj: so why is this check even here?18:49
jgriffithjungleboyj: in the model update?18:49
jgriffithjungleboyj: and honestly, if it's just a status update... who cares18:50
jgriffithjungleboyj: just provide it if/when asked for18:50
jgriffithwe don't poll volumes for anything else18:50
jgriffithwhy should we poll in this case?18:50
jungleboyjjgriffith: Good question.18:50
jungleboyjLet me look here.18:51
*** Longgeek has quit IRC18:51
laudowhat is the best way to download an image from cinder backend?18:51
*** Longgeek has joined #openstack-cinder18:52
hemnalaudo, image from cinder ?18:52
jgriffithjungleboyj: you're doing a bunch of shit in there BTW18:53
*** cdelatte has quit IRC18:53
jungleboyjjgriffith: What do you mean?18:53
laudothere is cinder upload-to-image is there download-to-disk?18:54
hemnathat image gets put into the image service configured in your deployment18:55
jgriffithjungleboyj: check out your driver18:55
jungleboyjjgriffith: Yeah, that is where I am looking now.18:55
hemnalaudo, cinder doesn't have it at that point.   most likely in glance18:55
hemnalaudo, nova image-list18:55
laudohemna: then export it from glance?18:55
hemnalaudo, or glance image-list18:55
jgriffithjungleboyj: extended2 Secondary copy status... and synchronized... sync progress is.....18:56
hemnalaudo, if you need the image file, then yah, you can download it from glance18:56
*** cdelatte has joined #openstack-cinder18:56
jgriffithshove that into a model update and stick it in the DB.... why?18:56
*** cdelatte has quit IRC18:56
*** cdelatte has joined #openstack-cinder18:56
*** Longgeek has quit IRC18:56
jgriffithjungleboyj: kaufer you shouldn't have asked me questions... now I'll file more bugs :)18:56
jgriffithkaufer: :)18:57
*** coolsvap is now known as coolsvap_18:57
jgriffithkaufer: jungleboyj you guys see what I'm saying though?18:58
jungleboyjYeah, that looks much more heavyweight than I realized.18:58
jgriffithkaufer: jungleboyj we're just polluting the DB every 60 seconds with free form strings18:58
hemnajgriffith, where ?  url?18:58
jgriffiththat honestly I have no idea "why" we even need it18:58
laudohemna: thanks18:59
hemnalaudo, no problemo!18:59
jgriffithI know you guys think I'm a total A-hole, but sorry18:59
jungleboyjjgriffith: I didn't say that, I think you have a valid concern here.18:59
*** _cjones_ has quit IRC19:00
kauferjgriffith: Nope, you make valid points, I just don't know how to address them at the moment19:00
jungleboyjjgriffith: Let us regroup here.19:00
*** blinky_ghost has quit IRC19:00
jgriffithhemna: which is called by a periodic here:
hemnaoh that's the periodic task for updating the db record from the backend19:00
jgriffithat line 163619:00
laudoCan someone point me to a doc which explains how to set up hints/policies for retyping?19:00
laudocant really find anything when googling19:01
jgriffithhemna: yeah... my first issue was iterating through all volumes on a periodic (bug 1)19:01
openstackbug 1 in Ubuntu Malaysia LoCo Team "Microsoft has a majority market share" [Critical,In progress] - Assigned to MFauzilkamil Zainuddin (apogee)19:01
*** eharney has quit IRC19:01
jgriffithhemna: but then looking at what's happening on the backend I am also concerned about what that "status" being sent in the update is19:01
*** dustins has quit IRC19:01
jgriffithhemna: and finally... I ask "why we even need it"19:01
patrickeastwe had talked about discussing replication at the next summit… do we have a etherpad or something to capture these kind of issues we want to tackle?19:01
jgriffithhemna: implement a "cinder show-repstatus xxxxxx"19:02
hemnaoh yah, it's looping through every volume associated with a host19:02
hemnaso can't we only loop through the volumes that are replicated instead ?19:02
*** katco has joined #openstack-cinder19:02
hemnaat a minimum19:02
jgriffithhemna: that was what kaufer and I have been talking about (see my comments and his in the patch)19:02
hemnaok sorry, wasn't following that.  was working on brick stuffs.19:03
hemnajust trying to catch up19:03
jgriffithwhile that's def better... I still question why this is even here?19:03
jungleboyjpatrickeast: We do!19:03
*** _cjones_ has joined #openstack-cinder19:03
katcoi need some assistance with parsing the storage wadl file. i'm not that familiar with the format, but the wadl file seems to contradict the specification? is anyone familiar?19:03
jgriffithand frankly using free form strings to store status isn't kosher either19:03
*** dustins has joined #openstack-cinder19:03
hemnaso I guess the question I have is, who even looks at the status19:04
jungleboyjpatrickeast: Actually, we have one for getting CGs and Replication together.19:04
jungleboyjNot one for just replication.  I should maybe start that.19:04
hemnajgriffith, except we use free form strings for volume['status']19:05
hemnaand volume['attach-status']19:05
hemnaanyway, that's probably a minor point though19:05
kauferjgriffith: Are you referring to the format of 'replication_extended_status'19:05
jgriffithhemna: not really19:05
hemnawho needs the replication status updated every n seconds ?19:05
jgriffithhemna: there's a big diff between: "Available"19:05
katcoi.e. the WADL spec states regarding response elements: "Zero or more param elements (see section 2.12 ) with a value of 'header' for their style attribute, each of which specifies the details of a HTTP header for the response"19:06
jgriffithand a custom joined string that's 30+ chars long and is actually a sentence19:06
hemnaisn't the replication_extended_status supposed to be a driver only thing ?19:06
jgriffithhemna: look at the code19:06
katcobut the response parameters for creating a volume have styles of "plain"19:06
jgriffithif it was "Replicated", "Good" "meh"19:06
jgriffithI probably wouldn't care19:06
katcofurther, some responses don't list their parameters?19:06
jungleboyjjgriffith: So, the part I expected to see in here is the sync_progress updates and status updates.19:06
jgriffithhemna: but really???  "Primary copy status xxx and synchronized: xxx"19:07
hemnajgriffith, ok I see what you are talking about19:07
kauferjgriffith: so 'replication_status' seems to be a single string (ie, 'error', 'active', 'copying'), it's the 'replication_extended_status' that is a bit crazy19:07
jgriffithkaufer: yes, and that's fine19:07
jgriffithkaufer: the extended status is stupid though19:07
jungleboyjjgriffith: Regardless, it doesn't seem that all this needs to be updated in a periodic task if we are not doing automatic promotion.19:08
hemnaaccording to the updated spec on replication19:08
hemnathe replication_extended_status is for each driver to consume19:08
jgriffithI don't want to rat hole on this, it's kind of a tangent19:08
hemnaand if that's the case, then why not store a dict instead of a big ass string?19:08
jungleboyjhemna: Right.19:08
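hemna's suggestion in code form: if replication_extended_status is driver-private anyway, a serialized dict round-trips cleanly where a 30-plus-character sentence does not. A minimal sketch; the keys are illustrative, modeled loosely on the strings jgriffith quotes above:

```python
# Store driver-private extended status as serialized structured data
# rather than a free-form sentence. Key names are assumptions.
import json

extended_status = {
    'primary_copy_status': 'online',
    'synchronized': True,
    'sync_progress': 83,
}

def to_db_value(status_dict):
    """Serialize for a text column."""
    return json.dumps(status_dict)

def from_db_value(raw):
    """Parse back into something the driver can actually consume."""
    return json.loads(raw)
```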
jgriffithhemna: in which case they should store it themselves IMO :)19:08
jgriffithbut wait... this goes back to free form data in the DB19:09
hemnabut still I haven't heard what's the point of the status periodic task is for19:09
hemnaok, it's updating some status19:09
jgriffithand the fact that you're doing it every minute for no apparent good reason19:09
jgriffithAND WTF!!!19:09
hemnabut who uses that? and for what reason?19:09
jgriffithIf it's for the driver to use, the driver is what gets called to build it19:09
jgriffithso why would you periodic and put it in the db?19:10
*** asmith_brcd has joined #openstack-cinder19:10
hemnawhy not fetch it when you need it19:10
jgriffithhemna: retype uses status19:10
jgriffithbut that's ugly as well19:10
hemnaand by that you mean the volume manager uses it ?19:10
jgriffithretype call in the volume manager yes19:10
*** lpetrut has joined #openstack-cinder19:11
jgriffithhemna: honestly I propose we just revert it19:11
jgriffithnobody implements it anyway19:11
*** alexpilotti has quit IRC19:11
hemnawe just looked at implementing replication19:11
hemnaand would like to do so for L19:12
hemnagary-smith, and I looked deeply into it19:12
jgriffithI would have liked to have seen us all do it for K19:12
jgriffithhemna: oh...19:12
hemnaand are still wrapping our heads around it19:12
jgriffithok, well then I guess it's good and works19:12
hemnabut I still didn't understand the point of the periodic task19:12
jungleboyjjgriffith: Lets break this into bite size pieces.19:12
hemnajgriffith, well I didn't say that :)19:12
jgriffithjungleboyj: sure19:12
*** bswartz has quit IRC19:13
jungleboyjjgriffith: If you don't like what storwize is doing we can address that; that, however, is not the point of the immediate discussion.19:13
jgriffithjungleboyj: honestly I don't have much more to say19:13
jgriffithjungleboyj: and you're correct that wasn't really my point19:13
jungleboyjjgriffith: The item under question is the efficacy of the periodic task.19:14
hemnaok so I see in the volume manager where it looks for a value in replication_status19:14
jgriffithjungleboyj: and yes, you're correct I made the mistake of pushing this through on that last code review after the meetup19:14
jungleboyjjgriffith: No, I didn't say that either.19:14
hemnaand looking at the storwize driver19:14
jungleboyjjgriffith: I need to figure out if that task makes sense.  If not, lets fix it.19:15
hemnaif it's broken it sets a status of 'error'19:15
hemnawhich is ignored by the retype code in the volume manager19:15
hemnait would allow that volume to be retyped19:15
hemnabecause the status isn't set to "disabled"19:15
hemnainstead of 'error', which is in the storwize driver19:15
hemnaso that's broken19:15
*** chenleji has quit IRC19:16
jgriffithjungleboyj: so here's my proposal....19:16
* jungleboyj is starting an etherpad19:16
jgriffithjungleboyj: we have one vendor with it implemented, and it seems to work fine for them19:16
jgriffithjungleboyj: so maybe we just leave it as is like that for now19:16
hemnanote: 'disabled'19:17
jgriffithjungleboyj: don't allow any more submissions using it (which based on timing of release we're not going to anyway)19:17
jgriffithand for K, first thing we do is rework the thing and fix it up19:17
hemnajgriffith, except I think it's not working for them.  in this case retype will be allowed in a broken replication status.19:17
jgriffithhemna: not my problem quite frankly, and if they need to test/fix they can19:18
hemnajgriffith, at a minimum we need to fix the periodic task19:18
*** eharney has joined #openstack-cinder19:18
jungleboyjjgriffith: I have no qualms with that statement at all.19:18
patrickeastfor reference we have an impl up for review right now, not saying it needs to go in, but primarily as a discussion point for another interpretation of the current state of things19:18
thingeejgriffith: I have a question for you in the standard capabilities spec
hemnafetching every volume for a host every 60 seconds19:18
thingeehemna, gary-smith:
hemnathat's bad mmmkay19:18
thingeereplied back19:18
jgriffithhemna: jungleboyj yeah, well there's only one backend (2) this affects so maybe I don't care19:18
jungleboyjjgriffith: That fits in line with what I have been trying to do.  Trying to get the documents fixed up right.  Find bugs and fix it.19:18
gary-smiththingee: will read...19:18
jgriffithhemna: it's not every host though, only those that report replication support in their capabilities19:18
jgriffithjungleboyj: no....19:19
jgriffithjungleboyj: don't mess with docs or anything19:19
jgriffithjungleboyj: the impl is broken IMO19:19
jungleboyjjgriffith: Also, this matches with what we said in Paris, poke at this and try to fix it.19:19
jgriffithjungleboyj: needs to be rewritten, shouldn't have more people try and use it or implement it in their drivers19:19
jungleboyjjgriffith: Ok ...19:19
hemnajgriffith, well if kaufer's patch lands then yes, it only does this for the 1 host that supports replication19:19
hemnawhich I think currently isn't the case.19:19
thingeegary-smith: spelling issue going to happen. I haven't been home for three weeks and traveling and had to deal with k-2 cut recently ;)19:20
jgriffithhemna: yeah... sorry, you're correct19:20
hemnaso, we need kaufer's patch to land or some form of it.19:20
jungleboyjhemna: That makes sense.19:20
hemnaI think we shouldn't even setup the periodic task19:20
jgriffithhemna: +119:20
hemnaright now there will be a periodic task that exits, every n seconds19:20
thingeehave a meeting, bbl19:22
hemnathingee, thanks19:22
jungleboyjhemna: We can't just disable the periodic task though.  That is going to mean that status for those of us that have this implemented will not be updated.19:22
hemnajungleboyj, well, what I'm trying to say (poorly) is that the periodic task should only be setup for those that are using it.19:23
jungleboyjhemna: Doesn't really matter if the task runs and skips doing anything.19:23
jungleboyjhemna: Ok.  That is better.19:23
hemnaotherwise it's just a task that runs every N seconds and does nothing.19:23
hemnathat has FAIL written all over it IMO19:23
*** eharney has quit IRC19:23
jungleboyjkaufer ^^ thoughts on that?19:24
thingeehemna, gary-smith, jgriffith: thanks for the help!19:24
thingeeDuncanT: you too!19:24
openstackgerritRodrigo Barbieri proposed openstack/cinder: Fix exception error on HNAS drivers
kauferYes, my patches addresses that.  It makes it no-op for drivers that do not support replication.19:25
*** emagana has quit IRC19:26
openstackgerritnikeshmahalka proposed openstack/cinder: Add iSCSI SCST Target support to cinder
jungleboyjkaufer: Right, but is there a way to back that out a level so that the task isn't set up at all?19:26
hemnakaufer, no19:26
hemnakaufer, your patch still allows the periodic task to get created19:26
*** emagana has joined #openstack-cinder19:26
jungleboyjThat is what hemna is asking.19:26
hemnawe need to prevent the task from being created if a driver doesn't support replication19:27
kauferjungleboyj, hemna: I see, I know that there is an enabled (or disabled) kwarg on the periodic task19:27
hemnaas there is no point in it running every N seconds just to test and exit19:27
gary-smiththingee: gotta run. Sorry, I'll have to get back to the review in the afternoon19:27
thingeegary-smith: np19:28
*** dustins has quit IRC19:28
kauferjungleboyj, hemna: I will investigate how to disable it19:28
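What hemna and jungleboyj are asking for, sketched: gate registration of the periodic task on the capability the driver reports, so non-replicating backends never get a wake-up-and-exit task. The 'replication' capability key follows jungleboyj's note that existing drivers report 'replication'=true; everything else here is illustrative, not the real volume manager code:

```python
# Hypothetical sketch: only register the replication status poller when
# the driver's reported capabilities say replication is supported
# (known after do_setup, per jungleboyj above).

def setup_periodic_tasks(capabilities):
    """Return the list of periodic tasks to register for this backend."""
    tasks = []
    if capabilities.get('replication'):
        def update_replication_status():
            # poll only replicated volumes here
            pass
        tasks.append(update_replication_status)
    return tasks
```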
*** Mandell has joined #openstack-cinder19:29
*** dustins has joined #openstack-cinder19:29
DuncanThemna: test and exit is way better than hitting the db, but still a band-aid at best19:29
hemnaDuncanT, yah it's better than what it is now19:30
hemnaI'm just saying that there isn't much point in a task that always tests and exits.19:30
hemnaunless by some magic replication is a capability that shows up later after the driver is up.19:30
kauferjungleboyj: ^ Happen to know?19:31
*** emagana has quit IRC19:31
DuncanThemna: In theory drivers are able to change their capabilities dynamically as the backend gets reconfigured19:31
hemnayah I get that19:32
DuncanThemna: We support that for everything else, it would be nice to support it for replication19:32
hemnaDuncanT, in general yes19:32
hemnabut currently19:32
hemnaonly 1 vendor supports replication19:32
DuncanThemna: But we can just check at status time and start the task if it isn't going, and yes, that can be future work19:32
hemnaso for everyone else, there will be a bogus periodic task setup19:32
DuncanThemna: I'm fine with whatever bandaids make sense now, for sure19:32
openstackgerritMitsuhiro Tanino proposed openstack/cinder: Refactoring for export functions in Target object
xyangDuncanT: This is updated based on the discussion at the meetup:
xyangDuncanT: can you please take a look?19:35
nikesh_vedamsjgriffith thingee : morning19:35
DuncanTxyang: Sure19:36
xyangDuncanT: thanks19:36
jungleboyjkaufer hemna : It isn't something magic. It is done as part of do_setup19:37
jungleboyjSo, that shouldn't change after the driver is initialized.19:38
kauferjungleboyj: k, i'll update the patch19:39
jungleboyjkaufer: Sounds good.  I am going to create an etherpad and document today's discussion.19:39
nikesh_vedamsjgriffith: any update on rootwrap  :)19:40
jungleboyjUse it as a starting point going forward. Put some proposals for how we tackle getting this fixed/re-written in K.19:40
jungleboyjIn the mean time we are going to focus on getting migration issues resolved to knock that piece out.19:41
DuncanTjungleboyj: Can you PM the etherpad address, please? I don't want to lose it in my scrollback, and my weekend has technically already started19:41
jungleboyjDuncanT: :-)  It doesn't exist yet.  I will send a note to the ML after I have it created.19:41
*** e0ne has joined #openstack-cinder19:41
jungleboyjDuncanT: You still want me to PM you DuncanT ?19:41
jungleboyjDuncanT: Will do.  Hope you are doing something fun this weekend!19:43
*** emagana has joined #openstack-cinder19:44
dannywilsonjungleboyj: Does this affect the tempest test work for replication?  Should we hold off until it settles down?19:44
jungleboyjdannywilson: :-)  Well, if we are talking about redesigning it ...19:45
dannywilsonjungleboyj: exactly, okay, will move on for now and looking forward to helping with trying to fix up the current problems19:46
jungleboyjdannywilson: Sounds good.19:46
jungleboyjdannywilson: Are you interested as you are also hoping to implement it?19:47
dannywilsonjungleboyj:  yes, we actually have an implementation based on the current design
dannywilsonjungleboyj: but from the sound of things it probably isn't going anywhere :(19:48
jungleboyjOh, you must work with patrickeast then?19:48
kauferdannywilson: If I'm reading correctly, it seems like the replication support can toggle after initialization -- is that correct?19:48
jungleboyjdannywilson: Excellent.19:48
*** emagana has quit IRC19:49
*** emagana has joined #openstack-cinder19:49
dannywilsonkaufer: It is possible but I think we can start with disabling the task.  We can document having to restart once replication is set up on the back end for now19:49
kauferok, thx19:50
jungleboyjGood deal.19:50
jungleboyjSo, we have three Vendors that are interested/have looked into this.19:51
jungleboyjdannywilson: Have you looked at my spec updates at all?19:51
dannywilsonjungleboyj: Yes.  They look good to me and things are working with our implementation19:51
*** _cjones_ has quit IRC19:52
jungleboyjdannywilson: Great news.  Thank you!19:52
*** Longgeek has joined #openstack-cinder19:53
dannywilsonjungleboyj: there might be a bug with the current way of reporting support for replication.  I think the spec says 'replication_support' but when I changed my code from 'replication' it stopped working, been meaning to bring that up19:53
*** emagana has quit IRC19:54
*** thangp has quit IRC19:58
*** Lee1092 has quit IRC20:00
*** rmesta has quit IRC20:02
*** Longgeek has quit IRC20:02
katcois anyone here responsible for the WADL specs? e.g.:
*** Miouge has quit IRC20:03
*** briancurtin has quit IRC20:07
*** Mandell has quit IRC20:08
*** zhiyan has quit IRC20:08
*** emagana has joined #openstack-cinder20:09
*** thingee has quit IRC20:10
*** ameade has quit IRC20:10
*** TobiasE has quit IRC20:12
*** nellysmitt has quit IRC20:14
*** _cjones_ has joined #openstack-cinder20:16
*** briancurtin has joined #openstack-cinder20:17
jungleboyjdannywilson: Yes, that is one of the things I need to fix in my next patch.20:17
*** zhiyan has joined #openstack-cinder20:18
jungleboyjdannywilson: ronenkat and xyang and I talked about that and since we already have drivers using 'replication'=true, we are going to stick with that.20:18
*** ameade has joined #openstack-cinder20:18
*** ronis_ has joined #openstack-cinder20:20
*** ronis has quit IRC20:21
jgriffithnikesh_vedams: I'm looking at that now20:22
*** Mandell has joined #openstack-cinder20:28
*** jgravel_ has quit IRC20:28
*** annegent_ has quit IRC20:32
*** nellysmitt has joined #openstack-cinder20:35
*** laudo has quit IRC20:37
*** jgravel has joined #openstack-cinder20:39
*** Mandell has quit IRC20:39
*** tellesnobrega_ has quit IRC20:41
*** markvoelker has joined #openstack-cinder20:41
*** rmesta has joined #openstack-cinder20:42
*** BharatK has quit IRC20:42
*** Mandell has joined #openstack-cinder20:45
*** mriedem has joined #openstack-cinder20:46
*** enterprisedc has quit IRC20:53
*** enterprisedc has joined #openstack-cinder20:55
*** sgotliv__ has joined #openstack-cinder20:55
openstackgerritSean McGinnis proposed openstack/cinder: WIP Dell Storage Center: Add retries to API calls
*** tbarron has quit IRC20:57
*** thingee has joined #openstack-cinder20:57
*** annegent_ has joined #openstack-cinder20:58
*** tbarron has joined #openstack-cinder20:59
*** sgotliv__ has quit IRC21:00
*** xyang has quit IRC21:00
*** tbarron has quit IRC21:01
*** tbarron has joined #openstack-cinder21:01
*** e0ne has quit IRC21:03
*** tbarron has quit IRC21:05
*** mriedem has quit IRC21:07
*** mriedem has joined #openstack-cinder21:07
*** timcl has quit IRC21:07
*** Apoorva has quit IRC21:07
*** boris-42 has joined #openstack-cinder21:09
*** tbarron has joined #openstack-cinder21:14
*** tbarron has quit IRC21:15
*** MasterPiece has quit IRC21:20
*** thrawn01 has joined #openstack-cinder21:24
*** Apoorva has joined #openstack-cinder21:26
*** sgotliv__ has joined #openstack-cinder21:30
*** emagana has quit IRC21:30
*** emagana has joined #openstack-cinder21:31
*** rushil_ has quit IRC21:31
*** eharney has joined #openstack-cinder21:33
*** emagana has quit IRC21:34
*** emagana has joined #openstack-cinder21:35
*** e0ne has joined #openstack-cinder21:45
*** Longgeek has joined #openstack-cinder21:47
*** jaypipes has joined #openstack-cinder21:49
*** nikesh_vedams has quit IRC21:50
*** lpetrut has quit IRC21:54
*** Yogi1 has quit IRC21:55
*** sigmavirus24 is now known as sigmavirus24_awa21:55
*** nikesh_vedams has joined #openstack-cinder21:56
*** vilobhmm has quit IRC21:58
*** emagana has quit IRC22:01
jgriffithnikesh_vedams: should probably just update the CommandFilter in volume filters to specify the path22:02
jgriffithnikesh_vedams: we *should* I believe pick up anything in system PATH, but that doesn't seem to work22:02
jgriffithnot sure if anybody here is better with rootwrap than me?22:03
*** nellysmitt has quit IRC22:03
jgriffithmaybe eharney ^^ ?22:03
smcginnisjgriffith: Do you mean keep what is in volume.filters but don't add the path in rootwrap.conf?22:05
jgriffithsmcginnis: kinda.... so it was pointed out that we shouldn't need to add things like "/usr/local/sbin|bin" as they should be in system PATH and just picked up22:06
jgriffithsmcginnis: but oddly, that doesn't seem to work for me22:06
jgriffithsmcginnis: I put an executable "" in /usr/local/sbin22:06
jgriffithremove local/sbin from rootwrap.conf22:07
jgriffithand add "" command filter in the volumefilters22:07
jgriffithand the good thing is the rootwrap doesn't fail....22:07
jgriffiththe bad thing is "executable not found"22:07
smcginnisThat seems odd. Like it's not picking up the addition of /usr/local/bin?22:08
jgriffithI know we did some work around this in the past because we had some distros installing things in one place, and others in another22:08
*** sgotliv__ has quit IRC22:08
jgriffithsmcginnis: so it's a path thing, yeah... my filter looks like:22:08
smcginnisAssuming that were to work, is there something wrong with adding /usr/local/bin to the search path?22:08
smcginniseharney has the patch to revert the other commit that added it.22:08
jgriffithtest-py: CommandFilter, /usr/local/sbin/, root22:08
jgriffithsmcginnis: yeah, he was one up to revert mine as well :)22:09
jgriffithsmcginnis: ahh... the adding /usr/local/bin22:09
smcginnisjgriffith: But is there a risk with adding it. Especially if it is at the end of the search paths?22:09
jgriffithso we used to do that... and I think it was actually eharney that fixed that all up22:09
jgriffithas some distros might install in diff location etc22:09
jgriffithso we just let it sort it out by looking in PATH22:10
jgriffithbut it doesn't seem like that's working for some reason22:10
smcginnisjgriffith: Seems like it makes it more flexible to handle that case of distros putting things in different locations though.22:10
jgriffithI guess I'll put a debug line in there and see what it thinks PATH is22:10
jgriffithsmcginnis: yeah, it's def the right idea IMO22:10
jgriffithjust wondering where things went wrong or broke :)22:11
jgriffithTo the git HUB!!!22:11
smcginnisYeah, in your case it smells like PATH isn't getting updated correctly somehow.22:11
* jgriffith doesn't have a bat cave22:11
smcginnisTo the horse barn just doesn't have the same ring to it.22:11
*** emagana has joined #openstack-cinder22:12
jgriffithso I wonder if there's something we missed when oslo's common rootwrap was added22:12
jungleboyjWe need to get jgriffith a bat cave.  :-)22:12
smcginnisSounds like a likely candidate.22:12
*** annegent_ has quit IRC22:13
smcginnisjgriffith: Could be overwriting the cinder PATH?22:13
thingeejgriffith: so I think we're blocked on the open question I still have for you in
thingeewith making the scheduler filtering simple22:14
thingeeI like the idea, but I'm unsure how it will work.22:14
jgriffithsmcginnis: mystery solved22:15
*** annegent_ has joined #openstack-cinder22:15
jgriffithsmcginnis: /usr/local/sbin is NOT in sys.path when running as Cinder22:15
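[editor's note: a quick illustration of the check jgriffith is describing, not code from any actual patch — printing the PATH the running service inherited and seeing whether the bare command name resolves:]

```python
import os
import shutil

# A service started by init often inherits a trimmed-down PATH that
# omits /usr/local/sbin, even when a login shell would have it.
print(os.environ.get("PATH", ""))

# shutil.which() mirrors the PATH-based lookup that a filter with only a
# bare command name relies on; it returns None when nothing is found.
print(shutil.which("scstadmin"))
```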
jgriffiththingee: ok... huh?  Checking22:16
*** _cjones_ has quit IRC22:16
jgriffiththingee: where's the question for "me"22:16
thingeejgriffith: last set of comments on the spec, starts with "john,"22:17
*** dalgaaf has quit IRC22:17
jgriffithhaha :)22:17
jgriffiththat's me22:17
*** harlowja_ has joined #openstack-cinder22:18
jgriffiththingee: ahh... yeah I see22:18
*** e0ne has quit IRC22:18
jgriffiththingee: but now I'm VERY confused by DuncanT 's comment??22:19
thingeejgriffith: we'll ignore DuncanT for now :)22:19
jgriffithactually, you know I forgot something....22:19
jgriffithif that key isn't scoped the scheduler will in fact just try and use it22:19
*** Longgeek has quit IRC22:19
*** _cjones_ has joined #openstack-cinder22:19
*** harlowja has quit IRC22:21
thingeejgriffith: not following22:21
nikesh_vedamsjgriffith: so since /usr/local/sbin is NOT in sys.path when running as Cinder, do we have to add this in rootwrap.conf or not?22:21
jgriffiththingee: I updated the spec comment... but in a nut shell22:22
jgriffiththingee: IIRC the scheduler will take any key you hand it, and if it's not scoped...22:22
jgriffiththingee: it tries to match it to the hosts capabilities22:22
jgriffiththingee: in other words if you typed in "3parspec" instead of "hp:3parspec" it will actually filter on that22:23
jgriffiththingee: it will say "ok, what hosts support "3parspec"22:23
jgriffiththingee: if you scope the key however it just says "I don't care about this in my decision making process"22:23
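[editor's note: a simplified sketch of the behaviour jgriffith describes — the real scheduler filter in Cinder is more involved, but the gist is that unscoped extra-spec keys are matched against host capabilities while scoped keys like `hp:3parspec` are skipped:]

```python
def host_passes(extra_specs, host_capabilities):
    """Simplified capability matching, per the discussion above.

    Unscoped keys (no ':') must match the host's reported capabilities;
    scoped keys such as 'hp:3parspec' are left for the driver and are
    ignored in the scheduling decision.
    """
    for key, required in extra_specs.items():
        if ':' in key:
            continue  # scoped key: "I don't care about this" in scheduling
        if host_capabilities.get(key) != required:
            return False
    return True

caps = {'compression': True}
assert host_passes({'compression': True}, caps)   # unscoped, matched
assert not host_passes({'3parspec': 'x'}, caps)   # unscoped, filtered on
assert host_passes({'hp:3parspec': 'x'}, caps)    # scoped, ignored
```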
thingeeI don't think the filter scheduler has the ability to handle nested capabilities like I'm doing right now. If you take a look at my example with compression under proposed change.22:23
thingeebut maybe that's a problem with how I'm structuring things as being nested22:24
jgriffiththingee: perhaps... I'll have to look22:24
*** xyang has joined #openstack-cinder22:25
thingeejgriffith: ok yeah I think I get what you're saying22:25
jgriffiththingee: yeah, that's probably not going to work without some mods somewhere22:25
jgriffiththingee: honestly that persona shit is a PITA :)22:25
jgriffiththingee: but it's just a POD at the end of the day22:25
jgriffiththingee: no reason the filter sched can't iterate through that stuff22:26
jgriffiththingee: that's why I was saying in Austin we need a solid header22:26
thingeein the compression and qos examples, do you agree?  So compression for example is an enum, and the driver can push up a list it supports. In addition, the driver can have additional settings that it can push up to go with compression22:26
jgriffiththingee: so "other" doesn't cut it, but like "VendorUnique"22:26
thingeeI have this example of a "super compression" as a proprietary feature22:26
jgriffithor "MyComplexCrazyShit"22:26
jgriffiththingee: and just have the filter check both sets of data22:27
jgriffiththingee: personally I like separating them out cleanly and clearly22:27
*** e0ne has joined #openstack-cinder22:27
jgriffiththingee: so there's no question whats "standard" capabilities and what's vendor specific22:27
jgriffiththingee: right, that's no different than what I was getting at22:28
jgriffithI'll see if I can write it up in the spec and check out the format you have22:28
thingeeok, so if I'm following, a driver can still say for compression enum ['raw', 'dedup', 'proprietary feature']...and then the rest of the settings for "proprietary feature" go to other.22:29
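[editor's note: the structure being discussed might look something like the sketch below — purely illustrative, with made-up key names; the point is a well-defined enum the scheduler can reason about plus a clearly separated vendor-specific bucket rather than a vague "other":]

```python
# Hypothetical capabilities report from a driver: standard keys up top,
# proprietary settings isolated under a "VendorUnique"-style bucket.
capabilities = {
    'compression': {
        'available': True,
        # enum of supported modes the driver pushes up
        'modes': ['raw', 'dedup', 'proprietary feature'],
    },
    'vendor_unique': {
        # settings that accompany the proprietary mode
        'super_compression_level': [1, 2, 3],
    },
}

# The filter can then check both sets of data, with no question what is
# a "standard" capability and what is vendor specific.
assert 'dedup' in capabilities['compression']['modes']
assert 'super_compression_level' in capabilities['vendor_unique']
```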
*** ronis_ has quit IRC22:32
*** jungleboyj has quit IRC22:33
gary-smiththingee: then for presentation purposes, all vendor-specific capabilities get thrown into a big bucket: "other". I thought adding categories was intended to address that22:35
gary-smithor at least, that was one of the motivations behind it22:35
*** Mandell has quit IRC22:35
jgriffiththingee: I gave it a +1 :)22:35
jgriffiththingee: and a short novel to go with it22:36
*** vilobhmm has joined #openstack-cinder22:38
*** tellesnobrega_ has joined #openstack-cinder22:39
*** Apoorva has quit IRC22:40
*** Apoorva has joined #openstack-cinder22:41
smcginnisthingee: If we could get this targeted for K-3 it would be nice :) -
*** e0ne has quit IRC22:43
thrawn01anyone know if there is a gerrit review "help I'm having trouble updating my contact info" email or help ticket or some such?22:46
thrawn01can't submit reviews because of it =(22:46
*** Mandell has joined #openstack-cinder22:48
patrickeastthrawn01: might want to ask over in #openstack-infra22:48
*** kaufer has quit IRC22:48
thrawn01patrickeast: ah, thx22:49
smcginnisthrawn01: I seem to remember if you didn't actually sign the CLA it gives that weird error.22:49
*** annegent_ has quit IRC22:49
thrawn01smcginnis: I signed the thing years ago, I log on today, and had to sign it again. perhaps data corruption of some sort =(22:51
smcginnisthrawn01: Wasn't my storage! :)22:51
thingeejgriffith: thanks22:53
jgriffiththingee: sure...22:53
openstackgerritDerrick Wippler proposed openstack/python-cinderclient: Avoid _get_keystone_session() if auth_plugin
jgriffiththingee: and BTW DuncanT may be mistaken.. and my posts about "just adding the capabilities" might not work22:54
jgriffiththingee: but I don't see a technical reason we can't make it work that way22:54
jgriffiththe good thing is if it doesn't work, we don't lose anything :)22:55
thrawn01patrickeast: they got me squared away thanks! apparently I had to click the "Apply to be a Foundation Member" button. now it works =/22:55
jgriffiththingee: so now that everyone side tracked me again :)22:55
jgriffithanybody have insight into rootwrap questions I was asking?22:56
jgriffithnikesh_vedams: so I need to figure out "why" it's not in system PATH22:56
jgriffithand add it22:56
*** primechuck has quit IRC23:01
*** thingee has quit IRC23:03
*** Apoorva_ has joined #openstack-cinder23:04
*** Apoorva has quit IRC23:08
jgriffithDuncanT: thingee, FYI I just verified it does in fact work the way I remembered WRT keys23:10
jgriffithnikesh_vedams: ok... now back to what I was doing with rootwrap23:11
*** xyang has quit IRC23:13
*** thingee has joined #openstack-cinder23:13
jgriffithnikesh_vedams: your stuff went to /usr/local/bin or sbin?23:14
* jgriffith is far too lazy to open the patch :)23:14
cebrunssmcginnis: :)23:14
*** akerr has quit IRC23:17
*** chlong has joined #openstack-cinder23:17
*** dustins has quit IRC23:17
openstackgerritDanny Wilson proposed openstack/cinder: Enabling volume replication on PureISCSIDriver
nikesh_vedamswhat happened to IRC chat23:18
jgriffithnikesh_vedams: I dunno... I've been here the whole time23:18
nikesh_vedamsjgriffith: /usr/local/sbin23:19
jgriffithMy recommendation is probably to just add the full path to the tool in the filter23:20
nikesh_vedamsscstadmin: CommandFilter, /usr/local/sbin/scstadmin, root23:24
nikesh_vedamslike this23:24
jgriffithnikesh_vedams: that's what I'm thinking yes23:24
jgriffithnikesh_vedams: but I'm sure as soon as I say that somebody will -1 patch23:24
jgriffithnikesh_vedams: if they do ask them for the solution23:24
nikesh_vedamsif they say to put in rootwrap.conf then23:25
jgriffithnikesh_vedams: say no :)23:25
nikesh_vedamsso shall i upload new patch23:26
nikesh_vedamswith those changes23:26
nikesh_vedamsremoving path from rootwrap.conf and adding it in command filters23:27
jgriffithnikesh_vedams: your command itself in the code isn't specifying the path is it?23:27
jgriffithnikesh_vedams: ie, in your code you just do "scstadmin" IIRC23:28
jgriffithNOT "/usr/local/sbin/scstadmin"23:28
*** scottda_ has joined #openstack-cinder23:35
*** EmilienM is now known as EmilienM|afk23:35
nikesh_vedamsin tgt and lio also we are doing like this23:35
nikesh_vedamsutils.execute('tgt-admin', '--show', run_as_root=True)23:36
nikesh_vedamsutils.execute('cinder-rtstool', 'delete', iqn, run_as_root=True)23:36
nikesh_vedamssimilarly, in scst also utils.execute('scstadmin', *args, run_as_root=True)23:37
*** tellesnobrega_ has quit IRC23:39
*** openstack has joined #openstack-cinder23:41
nikesh_vedamsjgriffith: yes we are doing just scstadmin and not /usr/local/sbin/scstadmin23:43
nikesh_vedamsjgriffith: but tgt and lio are also following the same trend23:44
nikesh_vedamsso like you recommended shall i upload a new patch with command path in volume.filters23:49
nikesh_vedamsby the way, what is the problem with putting it in rootwrap.conf? you said eharney has the patch to revert the other commit that added it23:50
vilobhmmhemna : ping23:51
vilobhmmin cinder + ceph (as a backing storage) how do we prevent user A from mounting the volume created by user B23:51
vilobhmmceph or for that matter any backing storage23:52
vilobhmmas of now there is no such prevention and seems like anyone who belongs to same tenant23:52
hemnauser A doesn't have visibility into user B's volumes23:52
*** scottda_ has quit IRC23:52
hemnathat's the definition of a tenant isn't it ?23:52
vilobhmmyup that is … so anyone who belongs to same tenant should be able to share (access one at a time since multiattach is not supported as of now) the volumes created23:53
*** smoriya has joined #openstack-cinder23:54
hemnayes I would presume so23:54
vilobhmmand does the idea of restricting access to volumes within a tenant make sense to you? I mean say user A and user B are part of tenant “xyz”, but restricting user B to volumes created by user B (say volB) and not letting user B access volA (which was created by user A)?23:56
hemnamaybe, I'm not sure what the use case is23:56
*** krtaylor has quit IRC23:57
*** asmith_brcd has quit IRC23:57
vilobhmmalrite thanks for the input23:58
*** annegent_ has joined #openstack-cinder23:59
*** _cjones_ has quit IRC23:59
vilobhmmuse case is simple: restricting a user to the volumes he created and not letting him/her access anyone else's volume even if both users belong to same tenant…also wanted to know if we have anything of that sort coming in…since i was reviewing your multi-attach thing i thought i would talk to you23:59
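[editor's note: per the exchange above, no such per-user restriction exists today — Cinder only enforces the tenant boundary. A purely illustrative sketch of the extra ownership check vilobhmm's use case would require (field names assumed):]

```python
def can_attach(volume, user_id, project_id):
    """Hypothetical per-user ownership check layered on the tenant check.

    Today only the project match is enforced; the user match is the
    additional restriction discussed in the channel.
    """
    if volume['project_id'] != project_id:
        return False                       # existing tenant boundary
    return volume['user_id'] == user_id    # proposed per-user restriction

vol = {'project_id': 'xyz', 'user_id': 'userA'}
assert can_attach(vol, 'userA', 'xyz')
assert not can_attach(vol, 'userB', 'xyz')  # same tenant, different user
```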
*** _cjones_ has joined #openstack-cinder23:59

Generated by irclog2html 2.14.0 by Marius Gedminas - find it at!