Friday, 2015-04-17

*** emagana has joined #openstack-cinder00:02
*** rongze has joined #openstack-cinder00:04
*** dims__ has joined #openstack-cinder00:04
*** dims_ has joined #openstack-cinder00:05
*** rongze has quit IRC00:05
*** rongze has joined #openstack-cinder00:06
*** ho has joined #openstack-cinder00:07
*** dims__ has quit IRC00:09
*** rongze has quit IRC00:11
*** rongze has joined #openstack-cinder00:11
*** rongze has quit IRC00:17
*** Mandell has joined #openstack-cinder00:20
*** zhenguo has joined #openstack-cinder00:38
*** primechuck has quit IRC00:39
*** thingee has joined #openstack-cinder00:44
*** patrickeast has quit IRC00:50
*** _cjones_ has quit IRC00:53
*** leeantho has quit IRC00:54
*** vilobhmm11 has quit IRC01:01
*** emagana has quit IRC01:02
*** emagana has joined #openstack-cinder01:03
*** emagana has quit IRC01:03
*** ozialien has joined #openstack-cinder01:03
*** ozialien has left #openstack-cinder01:12
*** krtaylor has joined #openstack-cinder01:13
*** rongze has joined #openstack-cinder01:26
*** adurbin_ has quit IRC01:29
*** rongze has quit IRC01:30
*** fanyaohong has quit IRC01:39
*** primechuck has joined #openstack-cinder01:41
*** harlowja is now known as harlowja_away01:47
*** theanalyst has quit IRC01:49
*** theanalyst has joined #openstack-cinder01:51
*** dims_ has quit IRC01:54
*** bill_az has quit IRC01:56
*** Apoorva_ has quit IRC02:03
*** jwang_ has quit IRC02:26
*** Apoorva has joined #openstack-cinder02:26
<openstackgerrit> wanghao proposed openstack/cinder: query volume detail support volume_glance_metadata
*** Apoorva has quit IRC02:30
*** Apoorva has joined #openstack-cinder02:31
*** Apoorva has quit IRC02:35
*** jwang_ has joined #openstack-cinder02:37
*** Mandell has quit IRC02:42
*** Mandell has joined #openstack-cinder02:42
*** liusheng has joined #openstack-cinder02:46
*** Longgeek has joined #openstack-cinder02:59
*** zhithuang has joined #openstack-cinder02:59
*** zhithuang is now known as winston-d_03:00
<winston-d_> jdurgin: ping 03:00
*** winston-1_ has joined #openstack-cinder03:03
*** winston-d_ has quit IRC03:03
*** Longgeek has quit IRC03:04
*** Longgeek has joined #openstack-cinder03:06
*** liusheng has quit IRC03:07
*** liusheng has joined #openstack-cinder03:07
*** Mandell has quit IRC03:11
*** vilobhmm1 has joined #openstack-cinder03:20
*** Lee1092 has joined #openstack-cinder03:23
<openstackgerrit> wanghao proposed openstack/cinder-specs: Add blueprint for support-force-delete-backup
*** mdenny has quit IRC03:24
*** heyun has joined #openstack-cinder03:24
<openstackgerrit> wanghao proposed openstack/cinder: query volume detail support volume_glance_metadata
<openstackgerrit> wanghao proposed openstack/cinder-specs: Add blueprint for support-force-delete-backup
*** ishant has joined #openstack-cinder03:45
<openstackgerrit> wanghao proposed openstack/cinder: Supprot for force-delete backups
*** _cjones_ has joined #openstack-cinder03:54
*** _cjones_ has quit IRC03:59
*** jungleboyj has joined #openstack-cinder04:04
*** thingee has quit IRC04:15
*** Mandell has joined #openstack-cinder04:22
*** heyun has quit IRC04:37
*** heyun has joined #openstack-cinder04:37
*** IanGovett1 has joined #openstack-cinder04:39
*** IanGovett has quit IRC04:40
*** haomaiwa_ has joined #openstack-cinder04:43
*** xyang1 has quit IRC05:07
*** openstackgerrit has quit IRC05:21
*** openstackgerrit has joined #openstack-cinder05:21
*** sks has joined #openstack-cinder05:27
*** harlowja_at_home has joined #openstack-cinder05:31
*** jseiler_ has quit IRC05:32
*** jseiler_ has joined #openstack-cinder05:32
*** nikesh has quit IRC05:33
*** nkrinner has joined #openstack-cinder05:36
*** rushiagr_away is now known as rushiagr05:38
*** deepakcs has joined #openstack-cinder05:57
*** winston-1_ has quit IRC06:02
*** vilobhmm1 has quit IRC06:06
*** Maike has joined #openstack-cinder06:07
*** Maike_ has joined #openstack-cinder06:08
*** Maike has quit IRC06:12
*** emagana has joined #openstack-cinder06:16
*** harlowja_at_home has quit IRC06:18
*** ankit_ag has joined #openstack-cinder06:18
*** zerda has joined #openstack-cinder06:21
*** IanGovett has joined #openstack-cinder06:26
*** IanGovett1 has quit IRC06:27
*** TobiasE has joined #openstack-cinder06:29
*** bnemec has quit IRC06:32
*** lpetrut has joined #openstack-cinder06:34
*** IanGovett1 has joined #openstack-cinder06:34
*** IanGovett has quit IRC06:36
*** Longgeek_ has joined #openstack-cinder06:37
*** Longgeek has quit IRC06:39
*** IanGovett1 has quit IRC06:43
*** IanGovett has joined #openstack-cinder06:44
*** Mandell has quit IRC06:46
*** dulek has joined #openstack-cinder06:48
*** IanGovett has quit IRC06:48
*** IanGovett has joined #openstack-cinder06:48
*** IanGovett has quit IRC06:53
*** IanGovett has joined #openstack-cinder06:55
*** ho_ has joined #openstack-cinder06:57
*** winston-d_ has joined #openstack-cinder06:58
*** ho has quit IRC06:59
<openstackgerrit> wanghao proposed openstack/cinder: Supprot for force-delete backups
*** e0ne has joined #openstack-cinder07:08
*** markus_z has joined #openstack-cinder07:14
*** svasheka has quit IRC07:21
*** e0ne has quit IRC07:23
*** ronis has joined #openstack-cinder07:26
*** e0ne has joined #openstack-cinder07:28
*** alexpilotti has joined #openstack-cinder07:30
*** haomaiw__ has joined #openstack-cinder07:32
*** haomaiwa_ has quit IRC07:32
*** chlong has quit IRC07:33
*** dims__ has joined #openstack-cinder07:33
*** ronis has quit IRC07:35
*** aarefiev_ has quit IRC07:38
*** dims__ has quit IRC07:38
*** ndipanov has quit IRC07:44
*** ankit_ag has quit IRC07:46
*** jistr has joined #openstack-cinder07:50
*** abhiram_moturi has quit IRC07:50
*** abhiram_moturi has joined #openstack-cinder07:50
*** e0ne has quit IRC07:57
*** e0ne has joined #openstack-cinder08:01
*** rushiagr is now known as rushiagr_away08:04
*** sks has quit IRC08:08
*** e0ne has quit IRC08:09
*** emagana has quit IRC08:12
*** e0ne has joined #openstack-cinder08:14
*** e0ne has quit IRC08:18
*** turul has joined #openstack-cinder08:18
*** e0ne has joined #openstack-cinder08:22
*** fanyaohong has joined #openstack-cinder08:26
*** Miouge has joined #openstack-cinder08:28
*** turul is now known as afazekas08:30
*** Miouge_ has joined #openstack-cinder08:30
*** Miouge has quit IRC08:34
*** Miouge_ is now known as Miouge08:34
*** c0m0 has joined #openstack-cinder08:35
*** ndipanov has joined #openstack-cinder08:40
*** jordanP has joined #openstack-cinder08:42
*** e0ne has quit IRC08:42
*** IanGovett1 has joined #openstack-cinder08:43
*** IanGovett has quit IRC08:44
<openstackgerrit> Tina Tang proposed openstack/cinder: Attach/detach batch processing in VNX driver
*** e0ne has joined #openstack-cinder08:46
*** aarefiev has joined #openstack-cinder08:47
*** pcaruana has joined #openstack-cinder08:50
*** e0ne has quit IRC08:51
*** e0ne has joined #openstack-cinder08:53
*** e0ne has quit IRC09:00
*** e0ne has joined #openstack-cinder09:04
*** ronis has joined #openstack-cinder09:04
*** zerda has quit IRC09:07
*** e0ne has quit IRC09:09
*** rushiagr_away is now known as rushiagr09:10
*** rongze has joined #openstack-cinder09:11
*** winston-d_ has quit IRC09:12
*** winston-d_ has joined #openstack-cinder09:14
*** ho_ has quit IRC09:14
*** ho has joined #openstack-cinder09:14
*** alecv has joined #openstack-cinder09:15
*** rongze has quit IRC09:15
*** rongze has joined #openstack-cinder09:15
*** lpetrut has quit IRC09:16
*** anshul has joined #openstack-cinder09:22
*** rongze_ has joined #openstack-cinder09:26
*** rongze has quit IRC09:26
*** winston-d_ has quit IRC09:26
<openstackgerrit> wanghao proposed openstack/cinder-specs: Support query volume filter by glance metadata
*** rongze_ has quit IRC09:28
*** rongze has joined #openstack-cinder09:28
*** rongze has quit IRC09:29
*** ho has quit IRC09:35
<openstackgerrit> TaoBai proposed openstack/cinder: Storwize driver should only report active wwpn port
<openstackgerrit> yogeshprasad proposed openstack/cinder: Add chap support to CloudByte cinder driver
<openstackgerrit> Dave Chen proposed openstack/cinder: set/unset volume image metadata
*** aix has joined #openstack-cinder09:45
*** heyun has quit IRC09:46
*** alecv has quit IRC09:48
*** jamielennox is now known as jamielennox|away09:49
*** e0ne has joined #openstack-cinder09:50
*** xyang has quit IRC09:57
*** winston-d_ has joined #openstack-cinder09:59
*** ozamiatin has quit IRC10:08
*** winston-1_ has joined #openstack-cinder10:10
*** winston-d_ has quit IRC10:10
*** ho has joined #openstack-cinder10:11
<openstackgerrit> wanghao proposed openstack/cinder: Supprot for force-delete backups
*** heyun has joined #openstack-cinder10:13
*** lpetrut has joined #openstack-cinder10:15
*** kmartin has quit IRC10:20
*** winston-1_ has quit IRC10:22
*** winston-d_ has joined #openstack-cinder10:22
*** heyun has quit IRC10:23
<openstackgerrit> Tina Tang proposed openstack/cinder: Create consistgroup from cgsnapshot support in VNX driver
*** e0ne is now known as e0ne_10:38
<openstackgerrit> wanghao proposed openstack/cinder: Supprot for force-delete backups
*** fanyaohong has quit IRC10:39
*** e0ne_ is now known as e0ne10:42
*** zhenguo has quit IRC10:50
*** dulek_ has joined #openstack-cinder10:59
*** dulek has quit IRC11:02
*** dims__ has joined #openstack-cinder11:02
*** sks has joined #openstack-cinder11:05
*** aix has quit IRC11:05
*** annegentle has joined #openstack-cinder11:07
*** winston-d_ has quit IRC11:15
*** ishant has quit IRC11:22
<openstackgerrit> Yuriy Nesenenko proposed openstack/cinder-specs: Checking the existence of volume
<openstackgerrit> Petrut Lucian proposed openstack/cinder: SMBFS: Fix retrieving total allocated size
<openstackgerrit> Petrut Lucian proposed openstack/cinder: Windows SMBFS: Fix image resize errors during volume creation
*** deepakcs has quit IRC11:40
*** julim has joined #openstack-cinder11:46
*** dulek_ has quit IRC11:46
*** ho has quit IRC11:48
*** dulek has joined #openstack-cinder11:49
*** anshul has quit IRC11:49
*** anshul has joined #openstack-cinder11:50
*** anshul has quit IRC11:50
*** anshul has joined #openstack-cinder11:51
*** cbits has joined #openstack-cinder11:52
*** cbits has left #openstack-cinder11:55
<openstackgerrit> Yuriy Nesenenko proposed openstack/cinder-specs: Checking the existence of volume
*** e0ne is now known as e0ne_11:58
*** aix has joined #openstack-cinder12:03
*** e0ne_ has quit IRC12:08
*** e0ne has joined #openstack-cinder12:10
*** annegentle has quit IRC12:12
*** dulek has quit IRC12:19
*** abehl has joined #openstack-cinder12:20
*** anshul has quit IRC12:21
*** dulek has joined #openstack-cinder12:22
*** sks has quit IRC12:25
*** timcl has joined #openstack-cinder12:31
*** dalgaaf has quit IRC12:37
*** jaypipes has joined #openstack-cinder12:38
*** akerr has joined #openstack-cinder12:38
*** primechuck has quit IRC12:38
*** Miouge has quit IRC12:48
*** anshul has joined #openstack-cinder12:50
*** abehl has quit IRC12:52
*** cbader has joined #openstack-cinder12:53
*** xyang has joined #openstack-cinder12:55
*** xyang has quit IRC12:57
*** cbader has quit IRC13:00
*** xyang1 has joined #openstack-cinder13:04
*** Miouge has joined #openstack-cinder13:05
*** c0m0 has quit IRC13:07
*** marcusvrn has quit IRC13:19
*** mwichmann has left #openstack-cinder13:22
*** dustins has joined #openstack-cinder13:23
*** rushiagr is now known as rushiagr_away13:23
*** bill_az has joined #openstack-cinder13:24
*** jungleboyj has quit IRC13:25
*** mriedem has joined #openstack-cinder13:26
*** timcl has quit IRC13:26
*** markus_z has quit IRC13:29
*** dulek has quit IRC13:33
*** rushil has joined #openstack-cinder13:35
*** dansmith is now known as superdan13:35
*** Miouge has quit IRC13:36
*** timcl has joined #openstack-cinder13:41
*** dulek has joined #openstack-cinder13:42
*** anshul has quit IRC13:45
*** Mandell has joined #openstack-cinder13:46
*** anshul has joined #openstack-cinder13:46
*** anshul has quit IRC13:46
*** anshul has joined #openstack-cinder13:47
*** eharney has joined #openstack-cinder13:50
*** winston-d_ has joined #openstack-cinder13:55
*** nkrinner has quit IRC13:58
*** jaypipes is now known as leakypipes13:59
*** winston-d_ has quit IRC14:00
*** winston-d_ has joined #openstack-cinder14:00
<openstackgerrit> Sean McGinnis proposed openstack/cinder: Logging not using oslo.i18n guidelines (scheduler)
*** bnemec has joined #openstack-cinder14:03
*** mriedem has quit IRC14:03
*** thangp has joined #openstack-cinder14:04
*** rushil has quit IRC14:05
*** abhiram_moturi has quit IRC14:06
*** timcl has quit IRC14:06
*** mriedem has joined #openstack-cinder14:06
*** rushil has joined #openstack-cinder14:07
*** rushiagr_away is now known as rushiagr14:08
<ameade> I have a fairly complicated fix for live vm migration with attached volumes for the NetApp E-Series driver that I want to get merged asap. Free beer to anyone who reviews it 14:10
*** Lee1092 has quit IRC14:10
*** anshul has quit IRC14:11
*** e0ne is now known as e0ne_14:11
*** bswartz has quit IRC14:13
*** e0ne_ is now known as e0ne14:13
*** deepakcs has joined #openstack-cinder14:17
<DuncanT> ameade: I'm looking now. I suspect a better fix in the long term is to fix nova not to do multi-attach during migrate.... there's no fundamental need for it. 14:18
<DuncanT> ameade: That isn't going to happen any time soon though 14:18
<DuncanT> ameade: Does this fix mean you can only have 256 attached volumes on your e-series if you want live migration to work though? 14:19
<ameade> DuncanT: that would be great, I had to do some really interested logic in the meantime. 14:19
<ameade> DuncanT: yes 14:19
<ameade> that limit should go up in later models so i have a plan to query for that number 14:19
*** Lee1092 has joined #openstack-cinder 14:20
<ameade> we tested the heck out of this fyi 14:20
<ameade> DuncanT: if that option is turned on, we hard limit to 256 total volumes so we can guarantee that any created volume could be attached 14:22
<DuncanT> ameade: Thanks for the clarification. PAtch looks good if a little messy. It's nicely restricted to the netapp drivers though, which is good 14:23
<ameade> DuncanT: thanks for having a look, i'll get you a beer in vancouver. 14:23
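[editor's note] A minimal sketch of the hard cap ameade describes above: with the multi-attach option on, total volumes are limited to the array's mapping limit so every created volume is guaranteed an attachment slot. The constant and function names here are illustrative, not the actual NetApp driver API.

```python
# Illustrative sketch, not the actual NetApp E-Series driver code.
MAX_MAPPABLE_VOLUMES = 256  # assumed per-array LUN-mapping limit


def can_create_volume(current_volume_count, multiattach_enabled):
    """Allow creation only while an attachment slot can be guaranteed."""
    if not multiattach_enabled:
        # Without the attach guarantee, normal capacity checks apply.
        return True
    return current_volume_count < MAX_MAPPABLE_VOLUMES
```

As discussed later in the log, on a brownfield backend that already exceeds the limit this cap blocks new creates but cannot make existing attachments deterministic.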
<deepakcs> eharney, ping 14:25
*** alexpilotti has quit IRC14:27
*** ronis has quit IRC14:28
<eharney> deepakcs: hi 14:28
<deepakcs> eharney, in the vol snap impr bp, do we really need to maintain old nova compatibility ? thats causing too much un-necessary code in cidner side 14:31
<eharney> deepakcs: i'm pretty sure we do, unless there's a good reason to break it 14:32
<eharney> at least for some overlap between releases 14:33
*** dims__ has quit IRC14:33
<deepakcs> eharney, whats the reason for nova to be old and cinder to be new ? I mean wouldn't a distro carry them hand in hand ? 14:33
<eharney> deepakcs: rolling upgrades for one 14:34
*** TobiasE has left #openstack-cinder14:34
<deepakcs> eharney, i am not much aware of that, hence these Qs... do you mean cinder would be upgraded w/o nova ? 14:35
*** annegentle has joined #openstack-cinder14:35
<eharney> i think in general we don't want upgrading one of them before the other to break functionality 14:35
<deepakcs> eharney, so we need to take care of new nova with old cinder too then ? 14:37
<openstackgerrit> Petrut Lucian proposed openstack/cinder: Windows SMBFS: Fix image resize errors during volume creation
<DuncanT> rolling upgrade, bug fix deployments in a live datacentre 14:38
<eharney> deepakcs: yes, we can generally do that i believe 14:39
<DuncanT> Old cinder new nova and old nova new cinder both need to work 14:39
<deepakcs> eharney, the bp spoke specifically abt old nova - new cinder only, but per DuncanT looks like we need to support both scenario, that means support for 4 combinations 14:40
<deepakcs> thats going to add a lot of "if old... else new.. " kind of stuff in both nova/cinder 14:41
<eharney> iirc you may be looking at more of that than is really necessary due to wanting to rename fields 14:41
*** mtanino has joined #openstack-cinder14:41
<winston-d_> deepakcs: yeah, fyi, we'll upgrade cinder first, then nova, 'cos cinder upgrade is easier and nova has more dependencies plus for nova, it's not just controller nodes, huge amount of hypervisors too. 14:42
<DuncanT> We have been known to upgrade in either order, depending on why we're upgrading 14:43
<DuncanT> Sometimes it's easier to add a second, new API and keep both working, rather than do something odd in one 14:43
<deepakcs> eharney, rename fields ? you mean progress --> compute_complete kind of thing ? 14:46
*** timcl has joined #openstack-cinder14:46
<eharney> i was thinking you wanted to rename some of them, if not, then not a concern 14:46
<deepakcs> eharney, if u rename, then one of the above 4 combination won't work 14:48
<deepakcs> depending on which side u rename 14:48
<eharney> yes, which is why i'm thinking you shouldn't do that :) 14:48
<deepakcs> eharney, and i am not doing it :) not sure why u got that impression 14:48
<eharney> i thought i saw something in one of the patchsets that did, but i'll have to go look in more depth 14:49
<deepakcs> eharney, np, i was only thinking on old nova - old/new cinder, but now need to look at both side old/new.. so will send more patches with that in mind 14:50
<deepakcs> s/thinking on/thinking only 14:50
<openstackgerrit> Petrut Lucian proposed openstack/cinder: Windows SMBFS: Fix image resize errors during volume creation
*** annegentle has quit IRC14:51
<openstackgerrit> Petrut Lucian proposed openstack/cinder: SMBFS: Fix retrieving total allocated size
<winston-d_> ameade: about e-series live migration patch, what would happen if admin turn multattach config option on for a backend that already have > 256 vols? 14:51
*** annegentle has joined #openstack-cinder14:51
<ameade> winston-d_: they wouldnt be able to create any more but could still do other operations, we just couldnt guarantee that they can attach them all until they get less that 256 14:53
<winston-d_> ameade: would some of the live migration fail? if so, is the failure pattern deterministic? 14:53
<winston-d_> ameade: ok, what if the backend has > 256 vols and all of them attached, what would happen turning that config option on? 14:54
<ameade> winston-d_: on a brownfield environment it isn't deterministic 14:54
<ameade> winston-d_: it would just mean that some live migrations would work and others wouldnt 14:54
<ameade> which is lame but no way around it 14:55
<winston-d_> ameade: first come first serve or stll non-deterministic? 14:55
<ameade> winston-d_: unfortunately, non-deterministic...since we can't change LUN ids on the fly in eseries we would have collisions 14:56
<winston-d_> ameade: ok, I'd say please document these 14:57
<ameade> winston-d_: yes definitely 14:57
<winston-d_> a DocImpact for your patch as well? 14:57
<winston-d_> what about other operations like snapshot? 14:58
<winston-d_> is # of snapshot also be limited in this case? 14:58
<ameade> probably not needed since it is only for the driver, I already have a patch to update our docs (which are linked to from openstack-manuals) 14:58
<winston-d_> ameade: ok, that's good enough 14:59
<ameade> winston-d_: no, other operations should remain unaffected 14:59
<ameade> winston-d_: thanks for having a look and great questions 14:59
*** markvoelker has joined #openstack-cinder15:00
<winston-d_> one thing I think the driver may do, is to have some logic change in reporting backend stats to scheduler. 15:00
*** markvoelker has quit IRC15:00
*** markvoelker has joined #openstack-cinder15:00
<winston-d_> i.e. if multiattach config option is on, and # vols >= 256 already, report 0 or negative free capacity to scheduler, so that scheduler won't place new vols to the backend 15:01
*** vilobhmm1 has joined #openstack-cinder15:01
<ameade> winston-d_: yes good idea, I guess currently it would have the same effect but would waste a scheduler retry 15:01
<winston-d_> the current behavior would be, schedule put a vol to your backend, and fail, then reschedule 15:01
<winston-d_> ameade: right, but I think driver can be more proactive in such case. 15:02
<ameade> sure i agree 15:02
<ameade> let me think on that for a sec 15:02
<DuncanT> winston-d_: No need to mess with the capacity, the filter function is designed to handle exactly this situation 15:03
<winston-d_> ameade: just a idea, not that current behavior breaks anything 15:03
<winston-d_> DuncanT: you mean backend supplied filter function? 15:03
<DuncanT> winston-d_: Just return a filter function that is 'backend.volumes < 256' 15:03
<DuncanT> winston-d_: Yeah 15:03
*** dims__ has joined #openstack-cinder 15:04
<akerr> can't use capacity 0 because it prevent volume extend 15:05
<ameade> it may cause issue if someone wants to extend a volume 15:05
<ameade> yeah ^^ 15:05
<ameade> would it make sense to add reporting for 'volume capacity'? in other words, how many more volumes can be placed on this backend? 15:05
<winston-d_> akerr, ameade scheduler was bypassed in vol extend case 15:05
<akerr> when was that done.  I remember coding up the extend function for NFS driver so that it had to check the free capacity and oversub ratios 15:06
<winston-d_> akerr, ameade, nobody knows how much capacity a backend has when doing extend, which is wrong 15:06
<ameade> ah, so the plan is to change that? 15:07
<akerr> nvm i answered my own question.  If the driver is doing the checking then it is bypassing the scheduler :) 15:07
<winston-d_> ameade: yeah, in Paris I talked about having all opertions go through scheduler, even if the 'scheduling logic' isn't needed. 15:08
<winston-d_> I gonna revisit that in Vancover 15:08
<ameade> DuncanT: ah i see what you are saying 15:08
<winston-d_> ameade: so DuncanT made a good point about using backend supplied filter/evaluate function for such case 15:09
<ameade> DuncanT: does that filter it out for all operations? may have the same problem? 15:09
*** Maike_ has quit IRC15:09
<winston-d_> ameade: only create new vols, retype, migrate, managing existing vols go through scheduler 15:10
<winston-d_> ameade: other operations bypass scheduler, unfortunately. 15:11
<akerr> winston-d_: but your idea would change that, correct? 15:11
<winston-d_> akerr: yeah 15:12
<ameade> wouldn't want retypes and migrate to not work if the backend is full either 15:12
<akerr> winston-d_: do we feel it has a strong chance of making it in?  I'd hate to modify the logic here to work with current implementation only to have to change it again when the scheduler gets involved 15:13
<winston-d_> akerr: otherwise, things like reserved_percentage become totally useless when extend vol, clone vol bypass scheduler 15:13
<ameade> if we wanted to represent the truth, we would need volume capacity. anything else is really slightly hacky 15:13
*** vilobhmm1 has quit IRC15:13
<ameade> it may be good to do that anyways as i'm sure some backends have limits on the number of volumes itself 15:14
<winston-d_> akerr: sorry, make what in? nothing bypasses scheduler anymore or ameade's live migration fix with scheduler tweaks? 15:14
<ameade> heck on lower end models, even without this patch the limit is 512 15:14
<akerr> winston-d_: your fix to push everything through the scheduler and prevent bypasses 15:15
<winston-d_> akerr: once we get consense, i'd do my best to make it in in L1 15:16
*** jistr is now known as jistr|mtg15:16
<winston-d_> akerr: functionality-wise, that change is transparent to drivers. 15:16
<ameade> k i have a plan 15:16
<ameade> how about this patch as-is, i'll make a bp for reporting 'volume_number_capacity', and we implement that? 15:17
<winston-d_> akerr: in other words, I don't expect driver to be changed even we let everything go through scheduler 15:17
<winston-d_> ameade: yeah, that'll work, breaking them down is actaully a good idea. 15:18
*** dannywilson has joined #openstack-cinder15:18
<ameade> yeah i think the real problem is outside of this patch, just a low limit makes it obvious 15:18
<winston-d_> after all, my suggestion is an enhancment, not hard requirement. 15:18
<ameade> i'll comment this plan on the patch with some explanation as well 15:19
<winston-d_> ameade: sounds good 15:19
<ameade> winston-d_: thanks for bringing that up, feels like productive conversation 15:19
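[editor's note] A sketch of DuncanT's suggestion in the exchange above: rather than faking zero free capacity, the backend advertises a filter expression (e.g. "total_volumes < 256") and the scheduler drops backends for which it evaluates false. Cinder's real driver filter uses a restricted expression evaluator; plain `eval()` here is only to keep the illustration short, and the stats dict shape is assumed.

```python
# Illustrative only: Cinder does NOT use eval() for filter functions.
def backend_passes(stats, filter_expression):
    """Evaluate a capacity filter expression against backend stats."""
    # stats mimics the dict a driver reports to the scheduler (assumed shape)
    return bool(eval(filter_expression, {"__builtins__": {}}, dict(stats)))


full_backend = {"total_volumes": 256, "free_capacity_gb": 500.0}
ok_backend = {"total_volumes": 10, "free_capacity_gb": 500.0}
```

The point raised by akerr still applies: a volume-count filter only gates operations that go through the scheduler, so extends that bypass it are unaffected either way.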
*** harlowja_at_home has joined #openstack-cinder15:19
<tbarron> winston-d_: DuncanT: I have a different scheduler question.  Is 'CONF.scheduler_max_attempts = 1' supposed to work? 15:21
<winston-d_> tbarron: yeah, I think so 15:21
<winston-d_> tbarron: i think that simply means no retries 15:22
<tbarron> I get four retries :-) and the attempt counter doesn't increment. 15:22
*** dims__ has quit IRC 15:22
<tbarron> This is with recent master. 15:22
<winston-d_> interesting, got a bug #? 15:23
<tbarron> I guess I'll file one.  Just wanted to make sure that I wasn't missing something. 15:23
<winston-d_> sure, I need to dig into the code, haven't touch that for quite a while. 15:25
<tbarron> It's the first time I tried to set this.  Did so because we have automated tests that run negative vol create cases. 15:25
<tbarron> When we see an exception, which is expected, we do a delete on the volume for cleanup. 15:25
<tbarron> But the delete is running in the middle of the 3-attempt sequence. 15:26
<tbarron> There is no lock around the whole sequence of three create attempts. 15:26
<tbarron> And the volume state gets set from creating back to error between each attempt. 15:27
<tbarron> So there is no exclusion of the delete by the overall create sequence. 15:27
<tbarron> That kinda sounds like another bug to me.  Opinions? 15:27
*** vilobhmm1 has joined #openstack-cinder15:28
<tbarron> I was in any case trying to work around that issue with 'scheduler_max_attempts = 1' and found that it doesn't seem to be working either. 15:29
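[editor's note] A hedged sketch of the retry semantics being debated: the scheduler counts total attempts in `filter_properties['retry']`, so `scheduler_max_attempts = 1` should mean no retries at all; the bug tbarron hit is that the counter did not increment. The option name matches Cinder's config; the helper itself is illustrative.

```python
# Illustrative sketch of scheduler retry accounting, not Cinder's code.
def should_reschedule(filter_properties, scheduler_max_attempts):
    """Return True if a failed create may be scheduled again."""
    retry = filter_properties.setdefault('retry', {'num_attempts': 0})
    retry['num_attempts'] += 1  # count the attempt that just failed
    return retry['num_attempts'] < scheduler_max_attempts
```

With max_attempts = 1 the first failure already exhausts the budget, so no reschedule should ever happen.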
*** xyang has joined #openstack-cinder15:29
*** vilobhmm1 has quit IRC15:30
*** Mandell has quit IRC15:31
*** jdurgin1 has joined #openstack-cinder15:31
*** timcl has quit IRC15:32
<winston-d_> tbarron: i think there is a bug, in the exact case, when max_attempts set to 1 15:35
<tbarron> winston-d_: yeah, I'll file it in Launchpad unless you think that has already been done. 15:36
<winston-d_> tbarron: please file a bug, and I will see if I can get you a quick fix to test 15:36
<openstack> Launchpad bug 1445561 in Cinder "cinder scheduler fails to handle CONF.scheduler_max_attempts = 1 " [Undecided,New] 15:42
<winston-d_> tbarron: thx 15:43
*** kmartin has joined #openstack-cinder 15:43
<tbarron> winston-d_: going to lunch, bbiab 15:43
<tbarron> winston-d_: thank you! 15:43
*** jistr|mtg is now known as jistr15:44
*** Longgeek_ has quit IRC15:44
*** jdurgin1 has quit IRC15:47
*** deepakcs has quit IRC15:49
*** ganso_ has joined #openstack-cinder15:49
*** esker has joined #openstack-cinder15:50
*** tsekiyama has joined #openstack-cinder15:51
*** mtecer has joined #openstack-cinder15:53
<openstackgerrit> Anton Arefiev proposed openstack/cinder: Add missing backups entry to default quota class
*** mtecer has quit IRC15:58
*** dulek has quit IRC15:58
*** thangp has quit IRC15:59
*** r-daneel has joined #openstack-cinder16:00
*** kbyrne has quit IRC16:00
*** hemna has joined #openstack-cinder16:01
*** annegentle has quit IRC16:05
*** adurbin_ has joined #openstack-cinder16:06
*** vilobhmm1 has joined #openstack-cinder16:07
*** harlowja_at_home has quit IRC16:09
*** pcaruana has quit IRC16:10
*** _cjones_ has joined #openstack-cinder16:11
*** e0ne has quit IRC16:12
*** vilobhmm11 has joined #openstack-cinder16:13
*** vilobhmm11 has quit IRC16:13
*** vilobhmm12 has joined #openstack-cinder16:13
*** vilobhmm12 has quit IRC16:13
*** vilobhmm11 has joined #openstack-cinder16:14
*** vilobhmm1 has quit IRC16:15
*** marcusvrn has joined #openstack-cinder16:17
*** mriedem1 has joined #openstack-cinder16:19
*** mriedem has quit IRC16:21
*** kmartin_ has joined #openstack-cinder16:21
*** kmartin has quit IRC16:21
*** marcusvrn1 has joined #openstack-cinder16:22
*** marcusvrn2 has joined #openstack-cinder16:22
*** marcusvrn has quit IRC16:23
*** marcusvrn3 has joined #openstack-cinder16:23
*** dims__ has joined #openstack-cinder16:24
*** marcusvrn1 has quit IRC16:26
*** marcusvrn2 has quit IRC16:26
*** patrickeast has joined #openstack-cinder16:27
*** thingee has joined #openstack-cinder16:27
*** timcl has joined #openstack-cinder16:29
*** leeantho has joined #openstack-cinder16:30
*** crose has joined #openstack-cinder16:37
*** mdbooth has quit IRC16:37
*** thingee has quit IRC16:37
*** lcurtis has joined #openstack-cinder16:38
<tbarron> So is it expected that a delete can run against a volume that is being created? 16:39
*** thingee has joined #openstack-cinder 16:39
<tbarron> ^^^ this question may not be quite as naive as it sounds :-) 16:39
<jgriffith> tbarron: no 16:39
<winston-d_> tbarron: no, if volume is in 'creating' status 16:39
<tbarron> winston-d_: +1 16:39
<winston-d_> tbarron: but if it's error already, then you can delete it. 16:40
<jgriffith> winston-d_: hey ya... I didn't quite follow your comment on that detach change? 16:40
<jgriffith> winston-d_: I mean... yeah, it's in begin_detaching 16:40
<tbarron> winston-d_: jgriffith: so when scheduler tries more than once, it goes to error between the schedule attempts. 16:40
<tbarron> If a delete comes in then, it gains its lock and tries to delete. 16:41
<jgriffith> tbarron: sorry, not catching what the question is? 16:41
<jgriffith> tbarron: well, that's valid if it's status is error (as winston-d_ stated) 16:41
<tbarron> jgriffith: but the create can run at the same time then, right? 16:42
<tbarron> creating -> error -> delete 16:42
<jgriffith> tbarron: usually if it's error it *should* be done 16:42
<tbarron> concurrent with creating -> error -> creating 16:42
<jgriffith> tbarron: if not that's a bug 16:42
<jgriffith> tbarron: we shouldn't go from creating->error->creating in the same call 16:43
<jgriffith> tbarron: that's bad 16:43
<jgriffith> if it's doing that 16:43
<tbarron> jgriffith: we by default retry the create 16:43
<jgriffith> tbarron: yeah, that's fine 16:43
<winston-d_> jgriffith: so begin_detaching should already set volume to 'detaching' status, which means, 'detach' call should only look for 'detaching' 16:43
<jgriffith> tbarron: but we shouldn't go to error until it's known that's not going to work 16:43
<jgriffith> tbarron: see what I mean? 16:43
<openstackgerrit> Huang Zhiteng proposed openstack/cinder: VolMgr: reschedule only when filter_properties has retry
<tbarron> jgriffith: but we do go to error 16:44
<ameade> winston-d_, DuncanT: commented on
*** mdbooth has joined #openstack-cinder16:44
<jgriffith> tbarron: and as I said, then that's a bug IMO 16:44
<winston-d_> tbarron: ^^ can you try this ifx? 16:44
<tbarron> jgriffith: I will feed you this then, taskflow sets us to error 16:44
<winston-d_> jgriffith: agree, vol status shouldn't go error until done all retries 16:44
<jgriffith> tbarron: then it's a bug in taskflow 16:45
<winston-d_> tbarron: then blame taskflow 16:45
<winston-d_> jgriffith: :) 16:45
<jgriffith> winston-d_: :) 16:45
<tbarron> talking to josh yesterday on this, he believes we should be putting a concurrency lock around the whole set of create attempts 16:45
<winston-d_> or those who migrate code to TF 16:45
<tbarron> rather than relying on volume state to do the exclusion 16:45
<ameade> more anti HA stuff 16:46
<jgriffith> tbarron: that's a whole different philosophical argument 16:46
<winston-d_> the whole datacenter should have one gaint lock 16:46
<jgriffith> tbarron: bottom line, setting state in the flow before it's *done* is wrong 16:46
<jgriffith> winston-d_: LOL 16:46
<winston-d_> s/one/only one/ 16:46
<jgriffith> we should just skip this whole cluster/cloud thing altogether 16:47
<winston-d_> every operation would require to accquire that lock to procceed 16:47
<jgriffith> everyting on just one thread 16:47
<ameade> but a global cloud lock 16:47
<winston-d_> yeah, cloud level GIL 16:47
<tbarron> winston-d_: the s/G/D/ GIL 16:47
<tbarron> winston-d_: there was no lock barring us from saying that at the same time 16:47
<jgriffith> tbarron: where's the flow manager setting the state to error? 16:48
<jgriffith> tbarron: LOL 16:48
<tbarron> jgriffith: gimme a minute 16:48
*** jistr has quit IRC16:48
* ameade goes back to lurking 16:48
<winston-d_> hey, i think that's world peace solution, we should get nominated by Nobel price 16:48
<winston-d_> save lives for 67.5% of software engineers all over the world 16:50
*** dims__ is now known as dimsum__16:50
jgriffithwinston-d_: haha!16:50
* thingee is not looking forward to decisions at the next summit16:50
jgriffithwinston-d_: so about that change... the problem is we seem to have some "bugs" upstream on detach16:50
jgriffithwinston-d_: where "begin_detach" isn't being called16:51
jgriffithwinston-d_: so we slip directly to that "detach" call16:51
*** leeantho has quit IRC16:51
jgriffithwinston-d_: then we get into hemna 's new multi-attach code that raises16:51
jgriffithwinston-d_: things go "boom"16:51
winston-d_jgriffith: which is fine, but 'detach' should assume a well-behaved detach operation always calls 'begin_detaching' first, which would then set volume status to 'detaching'.16:51
thingeejgriffith: :(16:52
winston-d_jgriffith: but i have to admit, i don't know what would happen when multi-attach comes into play.16:52
*** Mandell has joined #openstack-cinder16:52
thingeejgriffith: all I'm going to say is I was more than happy to merge that right at the beginning of Liberty, but an overwhelming amount of people didn't believe me on that and wanted to deal with bugs last minute in the development cycle.16:53
* thingee hides16:54
tbarronjgriffith: cinder.volume.flows.common.error_out_volume16:54
tbarronin my volume log, looks like this: 2015-04-17 10:48:28.341 DEBUG cinder.volume.flows.common [req-f2ab6271-98a4-4d38-85e1-c561fa7fe711 None None] Updating volume: c058894a-363a-4eb6-aa76-3ab20781fe69 with {'status': 'error'} due to: ??? from (pid=24510) error_out_volume /opt/stack/cinder/cinder/volume/flows/
tbarron    update = {16:56
tbarron        'status': 'error',16:56
tbarron    }16:56
tbarron    db.volume_update(context, volume_id, update)16:56
tbarronThen we schedule another create attempt.16:56
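The fix jgriffith and winston-d_ are arguing for can be sketched roughly like this (a minimal illustration, not the actual Cinder/taskflow code — `create_with_retries` and the dict-based volume are hypothetical): the volume stays in 'creating' across scheduling retries, and only flips to 'error' once the final attempt has failed.

```python
def create_with_retries(volume, attempt_create, max_attempts=3):
    """Try attempt_create() up to max_attempts times.

    The volume status stays 'creating' between attempts; it only
    becomes 'error' after the last attempt fails.
    """
    volume['status'] = 'creating'
    for attempt in range(1, max_attempts + 1):
        try:
            attempt_create()
            volume['status'] = 'available'
            return volume
        except Exception:
            if attempt == max_attempts:
                # Defeat acknowledged: now it is safe to error out.
                volume['status'] = 'error'
            # Otherwise leave status as 'creating' and reschedule.
    return volume
```

This also sidesteps the race tbarron is worried about: a delete arriving mid-retry never sees a misleading 'error' status.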
winston-d_tbarron: let's file a bug16:57
*** annegentle has joined #openstack-cinder16:58
jgriffiththingee: ?16:58
tbarronwinston-d_: kk, will do.  Just wanted to make sure I wasn't missing something obvious.16:58
thingeejgriffith: multi attach bugs16:58
jgriffiththingee: no... my question is more like WTF?16:58
thingeejgriffith: not following16:58
winston-d_tbarron: btw, if you have time, can you try
jgriffith"overwhelming amount of people didn't believe me on that" ???16:59
thingeeyeah, I said I wanted to merge it first thing in liberty16:59
jgriffiththingee: besides, it's actually not a bug in the multi-attach code16:59
jgriffiththingee: that code is doing exactly what it should expect16:59
tbarronwinston-d_: I will try it in a minute.16:59
jgriffiththingee: the bug is the fact that we're going into that routine without checking beforehand16:59
thingeewe just have some side effects on other projects, like the issue I raised yesterday with nova16:59
winston-d_tbarron: thx16:59
thingeethere hasn't been enough time to catch these issues16:59
jgriffiththingee: ok17:00
thingeewhich is why I wanted to delay to liberty17:00
thingeejgriffith: oh and also I figured out why my CI is just now seeing this issue :(17:00
jgriffithsorry, I'm not familiar with the "issues" you have found in particular with Nova etc17:00
thingeejgriffith: it's what we were discussing yesterday17:00
thingeejgriffith: with the raise of invalidvolume17:00
winston-d_hemna: mornin, early Saturday here. :)17:01
jgriffiththingee: " jgriffith: oh and also I figured out why my CI is just now seeing this issue :("17:02
jgriffith^^ ?17:02
hemnajgriffith, so was there a bug filed on this one?17:02
jgriffithhemna: I filed one on the API not checking yeah17:03
*** markvoelker has quit IRC17:03
hemnathe simple fix is to just ignore the bogus detach call and return no?17:03
thingeejgriffith: so my preseed image didn't have a reclone=true17:03
jgriffiththingee: ahhh...17:03
jgriffithpreseed images... bad17:03
*** Apoorva has joined #openstack-cinder17:04
thingeejgriffith: well, initially I thought I had my nightly preseed image creator running + git pull in /opt/stack/*17:04
thingeejgriffith: greatly speeds things up and works...just if you remember to do those things :(17:04
thingeejgriffith: last night I was spending time doing git bisect trying to figure out when things stopped working for me17:05
*** kmartin_ has quit IRC17:05
jgriffithhemna: yeah17:05
jgriffithhemna: probably so17:05
jgriffithhemna: but it bothers me to do that17:06
jgriffithhemna: so I'm totally cool with doing that of course17:06
jgriffithhemna: but it bugs me that we issue invalid RPC calls down the chain like that17:07
jgriffithhemna: we should be "smarter"17:07
jgriffithhemna: I abandoned what I had, if you want to submit something that just skips/returns that's cool17:07
jgriffithhemna: or if you want me to submit it that's cool too17:07
hemnait's a catch 22 kinda17:07
jgriffithhemna: yeah17:08
hemnait should throw invalid volume IMHO17:08
hemnaand anyone that calls cinder needs to be careful of calling APIs, instead of ignoring exceptions that might come back17:08
jgriffithhemna: true story :)17:09
*** leeantho has joined #openstack-cinder17:09
thingeehemna: we still need to catch invalid volume from nova's perspective, as discussed yesterday17:09
*** kmartin_ has joined #openstack-cinder17:09
thingeehemna: when doing a detach and there are no attachments that exist17:11
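The detach contract winston-d_ described earlier (16:51) can be sketched like this — hypothetical names and a plain-dict volume, not the real Cinder API: `begin_detaching()` moves the volume to 'detaching', and `detach()` rejects anything that skipped that step, raising `InvalidVolume` up front instead of sending a bogus RPC down the chain.

```python
class InvalidVolume(Exception):
    pass


def begin_detaching(volume):
    if volume['status'] != 'in-use':
        raise InvalidVolume("volume must be in-use to begin detaching")
    volume['status'] = 'detaching'


def detach(volume):
    if volume['status'] != 'detaching':
        # Reject here rather than letting things go "boom" downstream.
        raise InvalidVolume("begin_detaching was not called first")
    if not volume.get('attachments'):
        raise InvalidVolume("volume has no attachments to detach")
    volume['attachments'].pop()
    volume['status'] = 'available' if not volume['attachments'] else 'in-use'
```

As hemna says, callers (Nova included) then need to be careful to handle `InvalidVolume` rather than ignore it.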
openstackgerritWalter A. Boring IV (hemna) proposed openstack/os-brick: Update os-brick requirements
jgriffithtbarron: winston-d_ so that's annoying :(17:12
*** Mandell has quit IRC17:12
*** russellb has quit IRC17:14
tbarronwinston-d_: your fix works for me, i.e. I tested with it and now only see one create attempt when my driver throws an exception back to the manager.17:16
*** saltsa has left #openstack-cinder17:16
winston-d_tbarron: great17:16
*** harlowja_away is now known as harlowja17:17
*** russellb has joined #openstack-cinder17:17
winston-d_tbarron: thx for verifying. now I can get off and get some sleep17:19
winston-d_you guys have a good day and nice weekend17:19
*** winston-d_ is now known as winston-d_zZZ17:19
tbarronwinston-d_zZZ: sleep well!17:19
*** Mandell has joined #openstack-cinder17:20
*** Mandell has quit IRC17:21
*** Mandell has joined #openstack-cinder17:21
*** russellb has quit IRC17:25
openstackgerritPetrut Lucian proposed openstack/cinder: Windows: Improve vhdutils error messages
*** russellb has joined #openstack-cinder17:29
*** timcl has quit IRC17:30
harlowjatbarron how's your investigation going17:34
* harlowja let me know if u need any details about TF (or other)17:34
*** e0ne has joined #openstack-cinder17:37
*** timcl has joined #openstack-cinder17:37
*** mriedem1 is now known as mriedem17:38
tbarronharlowja: well, we found (and winston-d_zZZ fixed) a bug wherein CONF.scheduler_max_attempts = 1 was being treated as infinite instead of as 1 :-)17:41
tbarronharlowja: the remaining issue is what we were talking about yesterday.  Turns out there isn't a consensus about how to exclude deletes from starting in the course of a sequence of retrying creates.17:42
harlowjahmmm, ya, if a delete comes in while a rescheduled create is on the RPC bus, then its hard to lock it there :-/17:43
tbarronwinston-d_zZZ: and jgriffith: believe that it is wrong to reset the volume state from creating to error between create attempts.17:43
*** esker has quit IRC17:43
tbarronand suggested that I file a bug on that.17:43
*** jordanP has quit IRC17:43
harlowjahmmm, what about new state 'creating-errored'17:44
tbarronyesterday, you on the other hand, IIRC, asserted that we should be setting a lock around the sequence as a way of doing the exclusion.17:44
jgriffithharlowja: why change state in the middle of an operation?17:44
jgriffithharlowja: so what I mean is... until we acknowledge defeat shouldn't it just stay in "creating"17:44
harlowjajgriffith fair enough, maybe not useful17:44
jgriffithharlowja: since it's still trying?17:44
jgriffithharlowja: well.. I think substates would be TREMENDOUSLY useful17:44
harlowjanever acknowledge defeat soldier!!17:45
tbarronI don't have as much experience on this as you guys but my bias is for the simplest solution possible.17:45
*** emagana has joined #openstack-cinder17:45
*** e0ne is now known as e0ne_17:45
harlowjatbarron ya; i guess it becomes a question of which one is simplest, probably just not set the state  to error (until defeat is acked) is simplest17:46
tbarronIf there are independent reasons for substates, fine.  But otherwise, keep it simple and don't add more machinery.17:46
tbarronharlowja: so will anything break if we just pull those lines that reset the state to error?17:46
harlowjasure, it'd be nice to have create-attempt-1 or create-attempt-2 kind of states, but i guess thats more of a nice to have...17:47
harlowjalet me see17:47
* harlowja thought that was a shared function, so probably, lol17:47
*** annegentle has quit IRC17:50
*** e0ne_ is now known as e0ne17:52
*** Maike has joined #openstack-cinder17:53
harlowjatbarron so what i'd try is to use the information @ and have that conditionally stop error_out_volume from being triggered17:53
*** annegentle has joined #openstack-cinder17:54
harlowjaand maybe log a warning instead of activating error_out_volume and just let it be (and then when rescheduling stops this will really enter error)17:55
tbarronharlowja: that looks like a good approach to me.17:56
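harlowja's suggestion amounts to making the error-out step conditional. A hedged sketch, assuming the flow knows whether a reschedule is still coming (`will_reschedule` and this `error_out_volume` signature are illustrative, not the actual `cinder.volume.flows.common` code): log a warning while retries remain, and only set 'error' once rescheduling has stopped.

```python
import logging

LOG = logging.getLogger(__name__)


def error_out_volume(volume, will_reschedule):
    """Mark a volume failed, unless another create attempt is coming."""
    if will_reschedule:
        # Don't touch the status yet; the volume is still 'creating'.
        LOG.warning("create failed for volume %s; leaving status alone "
                    "pending reschedule", volume['id'])
        return volume
    volume['status'] = 'error'
    return volume
```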
*** e0ne is now known as e0ne_17:57
harlowjaif anyone is interested in y! stuff + ceph ( ) <--- current object store kind of stuff (volume storage post someday...)18:04
tbarronharlowja: I raised for this.18:04
openstackLaunchpad bug 1445601 in Cinder "cinder is putting volume state to error while retrying cinder creates" [Undecided,New]18:04
*** esker has joined #openstack-cinder18:04
harlowjatbarron cool18:04
*** aix has quit IRC18:06
*** crose has quit IRC18:08
*** lpetrut has left #openstack-cinder18:12
openstackgerritVilobh Meshram proposed openstack/cinder: Driver get_stats refresh arg is useless
*** annegentle has quit IRC18:21
*** annegentle has joined #openstack-cinder18:25
*** leakypipes has quit IRC18:25
*** rushiagr is now known as rushiagr_away18:26
*** dustins_ has joined #openstack-cinder18:26
*** timcl has quit IRC18:27
*** annashen has joined #openstack-cinder18:27
*** Mandell has quit IRC18:28
*** dustins has quit IRC18:29
*** jungleboyj has joined #openstack-cinder18:29
*** mtecer has joined #openstack-cinder18:30
*** e0ne_ is now known as e0ne18:30
*** Mandell has joined #openstack-cinder18:31
mtreinishthingee: so was the failure before with tempest detaching before deletes18:32
mtreinishthingee: hemna left the -1 to indicate that the detach was no longer needed18:32
thingeemtreinish: that's exactly the issue I'm seeing with the sec group in use18:33
mtreinishthingee: well the sec group thing is a follow on failure, because the server delete failed18:33
mtreinishso the sec groups are still in use when tempest goes to delete them in cleanup18:33
thingeemtreinish: alright, you wouldn't happen to have the related nova change to do the detach?18:34
mtreinishnope, sry. That's probably over an even bigger date range, because the lvm bug which caused us to add the skip was outstanding for some time18:35
mtreinishthingee: although since hemna left the comment I want to say it might have been related to the multi-attach stuff18:35
*** jungleboyj has quit IRC18:36
mtreinishthingee: yeah it's on the cinder side, the server delete fails when the call to cinder returns a 50018:39
mtreinishthingee: and then in the cinder logs it has:
thingeemtreinish: I'll add a catch in nova and see if that resolves the problem18:40
thingeestill not exactly sure why this is not reproducible in gate18:40
mtreinishthingee: I'm wondering if it's the multi-attach patches which changed the detach behavior if the volume isn't attached anywhere18:41
mtreinishlike what used to happen if we issued detach and it wasn't attached (pre multi-attach)18:41
thingeehemna: ^18:42
mtreinishbecause that cinder traceback is clearly from len(attachments) in cinder being 0, so getting attachment[0] will fail18:42
mtreinishalthough that might have been fixed since hemna left the original comment on the unskip patch18:42
thingeealright I'll test it in my environment and propose a patch if it helps18:42
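The traceback mtreinish describes is the classic unguarded `attachments[0]` on an empty list. An illustrative guard (hypothetical helper, not the real Cinder code) lets the API surface a clean `InvalidVolume` instead of an `IndexError` that becomes a 500:

```python
class InvalidVolume(Exception):
    pass


def pick_attachment(volume, instance_uuid=None):
    """Return a matching attachment, or raise cleanly if none exist."""
    attachments = volume.get('attachments') or []
    if not attachments:
        # Pre multi-attach this indexing would just have blown up.
        raise InvalidVolume("volume %s has no attachments" % volume['id'])
    if instance_uuid is None:
        return attachments[0]
    for attachment in attachments:
        if attachment.get('instance_uuid') == instance_uuid:
            return attachment
    raise InvalidVolume("no attachment for instance %s" % instance_uuid)
```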
*** winston-d_zZZ has quit IRC18:43
*** lpetrut has joined #openstack-cinder18:44
*** annashen has quit IRC18:47
thingeemtreinish: thanks for your help as always. I have relatives visiting so it might be later when I post results. unless hemna wants to post a patch to catch things on the nova side when invalidvolume is raised.18:51
*** jungleboyj has joined #openstack-cinder18:51
thingeethat's too bad multi attach caused issues in the tempest test though18:51
thingeethis has been quite a headache for me.18:52
mtreinishthingee: sure, np. I'm not sure if I'll be around later today or not18:52
*** akerr has quit IRC18:53
mtreinishbut I'll probably check my bouncer backlog at some point18:53
thingeemtreinish: no worries. we'll pick back up next week.18:53
thingeemtreinish: fwiw, I didn't want this merging late in kilo. there was just an overwhelming amount of push on multi-attach in late kilo, rather than merging first thing in liberty.18:54
thingeejust felt like there wasn't enough time to gate on it18:54
mtreinishthingee: sure, I can understand that18:54
mtreinishheh, let's just blame jgriffith because this test was only ever skipped because of an lvm setup bug :)18:55
thingeepoor jgriffith ... I'd rather blame david wang18:55
* thingee hopes to meet david wang some day18:55
thingeemystery person to propose summit sessions, we argue, and he never shows up. HA18:56
thingeealright bbl18:57
mtreinishheh, it's great when that happens18:57
*** e0ne has quit IRC18:58
*** jungleboyj has quit IRC19:00
*** marcusvrn3 has quit IRC19:00
*** Mandell has quit IRC19:10
*** jungleboyj has joined #openstack-cinder19:13
*** Maike has quit IRC19:13
*** hemna has quit IRC19:15
*** jungleboyj has quit IRC19:19
*** annegentle has quit IRC19:19
*** bnemec is now known as beekneemech19:22
*** dustins_ has quit IRC19:27
*** hemna has joined #openstack-cinder19:40
openstackgerrithadi esiely proposed openstack/cinder: Store volume encryption metadata on each volume
openstackgerrithadi esiely proposed openstack/cinder: Add test case for volume_encryption_metadata_get
openstackgerrithadi esiely proposed openstack/cinder: Remove unnecessary checks for encrypted types
*** Mandell has joined #openstack-cinder19:47
*** annashen has joined #openstack-cinder19:48
*** setmason has joined #openstack-cinder19:50
*** timcl has joined #openstack-cinder19:52
*** annashen has quit IRC19:53
*** patrickeast has quit IRC19:53
*** ronis has joined #openstack-cinder19:55
*** annashen has joined #openstack-cinder20:00
*** ndipanov has quit IRC20:13
*** Mandell has quit IRC20:16
*** openstackgerrit has quit IRC20:22
*** openstackgerrit has joined #openstack-cinder20:23
*** timcl has quit IRC20:24
*** ronis has quit IRC20:26
*** annashen has quit IRC20:28
*** Mandell has joined #openstack-cinder20:29
*** annashen has joined #openstack-cinder20:32
*** annashen_ has joined #openstack-cinder20:44
*** annashen has quit IRC20:44
*** emagana has quit IRC20:45
*** emagana has joined #openstack-cinder20:47
jgriffithmtreinish: haah... FTR, there were actually a BOAT load of issues with that test including networking.  Cinder/LVM just got tagged as the last straw :)20:47
*** Lee1092 has quit IRC20:50
openstackgerritJohn Griffith proposed openstack/cinder: Standardize logging in
*** Mandell has quit IRC20:55
*** geguileo has quit IRC20:56
openstackgerritWalter A. Boring IV (hemna) proposed openstack/os-brick: Brick: Fix race in removing iSCSI device
*** vilobhmm11 has quit IRC20:58
*** logan2 has quit IRC20:59
*** logan2 has joined #openstack-cinder21:00
*** vilobhmm1 has joined #openstack-cinder21:00
*** vilobhmm1 has quit IRC21:00
*** vilobhmm1 has joined #openstack-cinder21:01
*** esker has quit IRC21:02
-openstackstatus- NOTICE: Gerrit will be unavailable between 22:00 and 23:59 UTC for project renames and a database update.21:04
*** Apoorva has quit IRC21:06
*** mriedem has quit IRC21:07
*** openstackgerrit has quit IRC21:23
*** openstackgerrit has joined #openstack-cinder21:23
*** Mandell has joined #openstack-cinder21:26
*** kfox1111 has joined #openstack-cinder21:30
kfox1111got a weird case where I have a volume I can't detach. nova volume detach spits out a no volume found, though it's there, shows up all correctly in cinder list and is on /dev/vdb on the vm.21:31
*** erlon has quit IRC21:31
kfox1111Is there a way to force it into in-use state?21:31
kfox1111or should I force it available, then reattach it so I can detach it again?21:32
*** emagana has quit IRC21:33
*** annegentle has joined #openstack-cinder21:34
*** vilobhmm1 has quit IRC21:35
*** annashen_ has quit IRC21:36
kfox1111ERROR (NotFound): volume_id not found: 491a9df5-834b-41bd-8d5c-6fba288c8c53 (HTTP 404) (Request-ID: req-bdab05fd-7522-4cbc-9438-16e7c2a4a0ce)21:36
*** vilobhmm1 has joined #openstack-cinder21:37
*** annashen has joined #openstack-cinder21:37
kfox1111| 491a9df5-834b-41bd-8d5c-6fba288c8c53 |   in-use  | cybervis-volume-new |  5   |     None    |  false   | d305f6e2-ffd5-46b8-bd03-4601a4cc151e |21:37
kfox1111I don't get it. :/21:37
*** emagana has joined #openstack-cinder21:41
*** emagana has quit IRC21:41
*** emagana has joined #openstack-cinder21:42
*** setmason has left #openstack-cinder21:44
*** jamielennox|away is now known as jamielennox21:48
*** markvoelker has joined #openstack-cinder21:49
*** markvoelker_ has joined #openstack-cinder21:50
*** vilobhmm1 has quit IRC21:51
*** markvoelker has quit IRC21:54
*** mtecer has quit IRC21:55
*** markvoelker_ has quit IRC21:58
-openstackstatus- NOTICE: Gerrit is unavailable until 23:59 UTC for project renames and a database update.22:04
-openstackstatus- NOTICE: Gerrit is unavailable until 23:59 UTC for project renames and a database update.22:06
*** ChanServ changes topic to "Gerrit is unavailable until 23:59 UTC for project renames and a database update."22:06
*** annegentle has quit IRC22:06
*** vilobhmm1 has joined #openstack-cinder22:08
*** annashen has quit IRC22:15
*** annashen has joined #openstack-cinder22:16
kfox1111any ideas how to force it to disconnect properly?22:22
*** vilobhmm1 has quit IRC22:27
*** vilobhmm1 has joined #openstack-cinder22:28
*** bswartz has joined #openstack-cinder22:30
*** annegentle has joined #openstack-cinder22:38
*** Mandell has quit IRC22:41
*** winston-d_zZZ has joined #openstack-cinder22:41
*** annashen has quit IRC22:41
*** Mandell_ has joined #openstack-cinder22:43
*** winston-d_zZZ has quit IRC22:46
*** rushil has quit IRC22:46
*** lcurtis has quit IRC22:47
*** IanGovett1 has quit IRC22:48
*** annegentle has quit IRC22:49
*** annashen has joined #openstack-cinder22:54
*** ChanServ changes topic to "The OpenStack Block Storage Project Cinder | The New Kids On the Block |"23:04
-openstackstatus- NOTICE: Gerrit is available again.23:04
*** patrickeast has joined #openstack-cinder23:10
*** lpetrut has quit IRC23:14
*** annashen has quit IRC23:15
*** ganso_ has quit IRC23:17
*** jamielennox is now known as jamielennox|away23:30
*** patrickeast has quit IRC23:50
*** hemna has quit IRC23:52
*** annegentle has joined #openstack-cinder23:54

Generated by 2.14.0 by Marius Gedminas - find it at!