Thursday, 2016-07-28

*** thorst_ has joined #openstack-powervm00:02
*** svenkat has joined #openstack-powervm00:06
*** thorst_ has quit IRC00:10
*** svenkat has quit IRC00:11
*** svenkat has joined #openstack-powervm00:21
*** thorst_ has joined #openstack-powervm00:43
*** thorst_ has quit IRC00:44
*** thorst_ has joined #openstack-powervm00:44
*** thorst_ has quit IRC00:52
*** Ashana has joined #openstack-powervm01:10
*** Ashana has quit IRC01:14
*** arnoldje has joined #openstack-powervm01:18
*** esberglu has joined #openstack-powervm01:24
*** thorst_ has joined #openstack-powervm01:25
*** thorst_ has quit IRC01:34
*** svenkat has quit IRC01:37
*** esberglu has quit IRC01:45
*** esberglu has joined #openstack-powervm01:55
*** esberglu has quit IRC02:02
*** thorst_ has joined #openstack-powervm02:04
*** thorst_ has quit IRC02:10
*** esberglu has joined #openstack-powervm02:13
*** esberglu has quit IRC02:24
*** thorst_ has joined #openstack-powervm02:40
*** svenkat has joined #openstack-powervm02:41
*** thorst_ has quit IRC02:41
*** svenkat has quit IRC02:45
*** esberglu has joined #openstack-powervm03:11
*** esberglu has quit IRC03:34
*** jwcroppe_ has joined #openstack-powervm04:28
*** tsjakobs has joined #openstack-powervm05:07
*** tsjakobs has quit IRC05:29
*** jwcroppe_ has quit IRC05:36
*** arnoldje has quit IRC05:38
*** kotra03 has joined #openstack-powervm05:39
*** madhaviy has joined #openstack-powervm05:46
*** madhaviy has quit IRC06:45
*** madhaviy_ has joined #openstack-powervm06:45
*** madhaviy_ is now known as madhaviy06:45
*** madhaviy_ has joined #openstack-powervm06:47
*** madhaviy has quit IRC06:47
*** madhaviy_ is now known as madhaviy06:48
*** madhaviy_ has joined #openstack-powervm07:14
*** madhaviy has quit IRC07:14
*** madhaviy_ is now known as madhaviy07:14
*** svenkat has joined #openstack-powervm07:24
*** svenkat has quit IRC07:28
*** k0da has joined #openstack-powervm08:11
*** svenkat has joined #openstack-powervm10:24
*** svenkat has quit IRC10:28
*** thorst has joined #openstack-powervm10:56
*** Ashana has joined #openstack-powervm11:43
*** svenkat has joined #openstack-powervm11:48
*** seroyer has joined #openstack-powervm12:26
*** edmondsw has joined #openstack-powervm13:07
*** edmondsw has quit IRC13:08
<openstackgerrit> Drew Thorstensen proposed openstack/networking-powervm: Simplify host_uuid and gets  https://review.openstack.org/344399  [13:08]
*** mdrabe has joined #openstack-powervm13:09
*** arnoldje has joined #openstack-powervm13:09
*** tblakeslee has joined #openstack-powervm13:10
*** edmondsw has joined #openstack-powervm13:14
*** esberglu has joined #openstack-powervm13:35
<thorst> esberglu: I think we need two CI changes for the networking-powervm bits  [13:39]
<thorst> 1) Turn down the logging of the REST API bits (I had a 1.5 Gig log file because of that  EEK)  [13:39]
<thorst> 2) Set the 'heal_and_optimize_interval' to like 90000  [13:40]
*** seroyer has quit IRC  [13:41]
<thorst> and I think https://review.openstack.org/#/c/344399/ will be our friend for speed ups  [13:43]
<esberglu> esberglu: Okay. heal_and_optimize_interval is a config option for networking_powervm?  [13:47]
<thorst> esberglu: yep.  It won't solve the speed issue...but I think that leads to a few short delays on the longer runs.  [13:49]
*** apearson has joined #openstack-powervm  [13:49]
<thorst> and having multiple VMs on the same novalink hitting that at the same time...is too expensive  [13:49]
<thorst> and the other review should fix an issue where we're making WAY too many LPAR feed calls  [13:50]
*** seroyer has joined #openstack-powervm  [13:56]
<esberglu> thorst: Awesome. What section of the config does it go in?  [13:58]
<thorst> the agent section for powervm  [13:58]
<thorst> not sure we have anything in there yet  [13:58]
<esberglu> thorst: Doesn’t look like it. What’s the syntax for that?  [14:09]
<esberglu> For adding that section I mean  [14:13]
<thorst> let me find the file  [14:13]
*** efried has joined #openstack-powervm  [14:14]
<thorst> esberglu: See the roles / devstack-compute / templates / local.conf.j2  [14:15]
<thorst> the section is [AGENT]  [14:15]
<thorst> the post config is...  [14:15]
<thorst> [[post-config|$Q_PLUGIN_CONF_FILE]]  [14:16]
<esberglu> I think it might need a slash. [[post-config|/$Q_PLUGIN_CONF_FILE]]  [14:17]
*** tsjakobs has joined #openstack-powervm  [14:21]
<thorst> yeah...it had that initially...I removed it because the nova one didn't have it  [14:22]
<thorst> :-)  [14:22]
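[Sketch, for reference: a minimal form of the local.conf.j2 stanza being discussed, assuming the devstack post-config mechanism thorst and esberglu describe. Whether $Q_PLUGIN_CONF_FILE needs the leading slash is exactly the open question above, and the 90000-second value is thorst's suggestion rather than a tested default.]

    [[post-config|/$Q_PLUGIN_CONF_FILE]]
    [AGENT]
    # Stretch the SEA agent's heal-and-optimize loop so CI runs don't pay for it
    heal_and_optimize_interval = 90000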
*** mdrabe has quit IRC14:27
*** efried has quit IRC14:29
*** efried has joined #openstack-powervm14:30
*** mdrabe has joined #openstack-powervm14:34
*** jwcroppe has joined #openstack-powervm14:40
*** jwcroppe has joined #openstack-powervm14:40
<esberglu> thorst: How do I change the rest API logging?  [14:55]
*** tblakeslee has quit IRC  [14:55]
<thorst> where's the path to the local.conf.aio that we use?  [14:57]
<thorst> let me look in there  [14:57]
<esberglu> ci-management/templates/scripts/local.conf.aio  [14:57]
*** apearson has quit IRC  [14:59]
*** apearson has joined #openstack-powervm  [15:01]
<thorst> efried: remember that one time I tried to switch pypowervm to pretty_tox and you said no?  [15:01]
<efried> thorst, yeah, I remember.  See commit message.  [15:02]
<thorst> lol  [15:02]
<thorst> ahh, vindicated  [15:02]
<thorst> -1  [15:02]
<efried> balls.  [15:03]
<efried> thorst, done.  [15:04]
<efried> <sheepish>  [15:04]
<efried> Course, the thing won't build until dclain gets jenkins in gear.  [15:04]
<thorst> #devops  [15:05]
<esberglu> thorst: We told Stephen that we would increment by 8/2 if we are going to go to 2.0.3  [15:16]
<thorst> adreznec: you may want to listen in here  [15:17]
<thorst> efried: you too  [15:17]
<efried> sup  [15:18]
*** openstackgerrit has quit IRC  [15:18]
<efried> who's Stephen?  [15:18]
<thorst> so I don't think we need to increment ceilometer-powervm for Mitaka.  I don't think anything went back there...so we can keep the existing version.  Anyone disagree?  (I in fact assert we must keep the existing version)  [15:18]
*** openstackgerrit has joined #openstack-powervm  [15:18]
<efried> thorst, looks like there's a couple change sets on top of mitaka.  [15:19]
<thorst> efried: but how recent?  Was it after we tagged last?  [15:19]
<efried> | * 411bbb5 Fix package reference in version code  [15:19]
<efried> * | 45809e9 Add ceilometer-powervm spec dir and template  [15:19]
<efried> That second one is prolly fine - specs for newton/ocata  [15:20]
<adreznec> Yeah  [15:20]
<efried> But the first one, looking...  [15:20]
<thorst> are you thinking mitaka?  [15:20]
<thorst> I thought that was just master...  [15:20]
<adreznec> Is setuptools locked at <20.2 for Mitaka?  [15:20]
<efried> wait, are we diffing master with mitaka, or mitaka with liberty?  [15:20]
<thorst> esberglu: when was the last tag?  [15:20]
<adreznec> thorst: 5.19  [15:21]
<thorst> efried: we're looking to see if there was anything significant between last mitaka tag and now (in mitaka)  [15:21]
<thorst> nothing for master...  [15:21]
<adreznec> Translations were the last thing that went in  [15:21]
<efried> Okay, that other change set fixes broken docs builds.  [15:21]
<adreznec> I don't see anything critical  [15:21]
<thorst> so last merge to ceilometer-powervm was May 17.  We did our tag on May 19.  So no version bump  [15:22]
<thorst> agree?  [15:22]
<thorst> (again - this is all mitaka)  [15:22]
<adreznec> +21  [15:22]
<adreznec> *+1  [15:22]
<adreznec> Not quite that enthusiastic about it...  [15:23]
<thorst> networking-powervm is the same way  [15:23]
<thorst> last merge was May 17th...so I think we're fine there.  [15:23]
*** k0da has quit IRC  [15:24]
<adreznec> Is there anything that should go back that hasn't?  [15:24]
<thorst> nova-powervm has had many since May 17th...so we definitely need a tag there.  [15:24]
<adreznec> Do we care about the fix for https://bugs.launchpad.net/networking-powervm/+bug/1573180 going back into Mitaka? If not, then I agree on networking-powervm  [15:28]
<openstack> Launchpad bug 1573180 in networking-powervm "Agent makes frequent get_devices_details_list RPC calls with an empty list" [Undecided,Fix released] - Assigned to Sridhar Venkat (svenkat)  [15:28]
<adreznec> and yeah, nova-powervm definitely needs a tag  [15:28]
<thorst> adreznec: I don't worry too much  [15:30]
<thorst> it wasn't a huge impact  [15:30]
<thorst> so...consensus.  Increment the nova-powervm one?  And if something urgent comes in...then Stephen will have to react to it  :-(  [15:33]
<adreznec> Yup  [15:34]
<adreznec> Guess so  [15:34]
<thorst> esberglu: Rip it  [15:35]
<esberglu> thorst: Cool  [15:37]
<adreznec> esberglu: While you're tagging that for Mitaka, can you tag our newton-2 milestones for Newton?  [15:44]
<adreznec> I just realized we never did that  [15:44]
<adreznec> Just tag the latest commit on master for each of the repos as 3.0.0.0b2  [15:45]
<thorst> +1  [15:54]
<esberglu> adreznec: Yeah. What about for networking-powervm? Since it’s a stadium project.  [15:54]
<adreznec> Not anymore  [15:54]
<adreznec> We have control back  [15:54]
<esberglu> Oh yeah I remember you saying that  [15:54]
<adreznec> We can push our own tags now  [15:54]
<esberglu> I need to be added to the project release teams on gerrit I think  [15:55]
<adreznec> Ah  [15:55]
<adreznec> I think I can do that  [15:55]
<adreznec> esberglu: Done  [15:56]
*** apearson has quit IRC  [16:03]
*** apearson has joined #openstack-powervm  [16:04]
<esberglu> Alright everything is tagged  [16:09]
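[Sketch, for reference: the milestone tagging adreznec describes would look roughly like the following from a clone with release permissions on each repo. The remote name "gerrit" and the use of a signed tag are assumptions based on the usual OpenStack tag-push workflow (which is why the gerrit release-team membership above matters), not commands quoted from the channel.]

    git checkout master && git pull
    git tag -s 3.0.0.0b2 -m "newton-2 milestone"
    git push gerrit 3.0.0.0b2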
*** tblakeslee has joined #openstack-powervm16:19
<thorst> alright alright alright  [16:21]
*** k0da has joined #openstack-powervm16:45
*** burgerk has joined #openstack-powervm16:47
<thorst> burgerk: we've done MPIO with Ubuntu 16.04 LTS, right?  [16:55]
<burgerk> yes, not sure how extensively it was tested ... i.e. shutting down paths, etc.  [16:59]
<thorst> chmod6661rg is having trouble with it on his NL partition.  I thought we had hit it though in our test.  :-/  [17:00]
<thorst> might want to see if we can get the test team to take another look at it  [17:00]
<burgerk> 16.04 as the GuestOS?  or referring to the NovaLink partition?  [17:01]
<thorst> well, he's doing it for the NL partition.  But that's really just a specific type of Guest OS.  So lets start with guest os  [17:02]
<burgerk> ok  [17:02]
<thorst> thx!  [17:03]
*** madhaviy has quit IRC  [17:07]
<esberglu> thorst: Still not finding any options to turn down the networking logging  [17:09]
<thorst> https://github.com/openstack/nova-powervm/blob/master/devstack/local.conf.aio#L46-L49  [17:17]
<thorst> see in there where pypowervm is set to info  [17:17]
<thorst> I think something like that is needed for pypowervm in the neutron conf  [17:17]
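[Sketch, for reference: a hedged version of the logging override thorst is pointing at, mirrored onto the neutron side. The option shown is the standard oslo.log default_log_levels knob; the exact target file and the level chosen for pypowervm are assumptions, not a quote of local.conf.aio#L46-L49.]

    [[post-config|$NEUTRON_CONF]]
    [DEFAULT]
    # Quiet the pypowervm REST traffic that was producing the 1.5 GB agent logs.
    # (In practice this entry is appended to the existing default_log_levels
    # list rather than replacing it wholesale.)
    default_log_levels = pypowervm=WARN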
*** tblakeslee has quit IRC17:18
<efried> thorst, care to weigh in on https://review.openstack.org/#/c/348014/ ?  [17:31]
<openstackgerrit> Eric Fried proposed openstack/nova-powervm: Rebase on pypowervm.tasks.hdisk refactor  https://review.openstack.org/347889  [17:36]
<thorst> efried: weighed in  [17:39]
<efried> thx  [17:39]
*** k0da has quit IRC  [17:40]
<efried> thorst, mdrabe: can we take a moment to discuss?  [17:40]
<thorst> I'm debugging something...can chat, but distracted  [17:41]
<efried> thorst: looking at the code, I don't see any path where we don't destroy the disks.  (Except where they didn't exist in the first place.)  [17:42]
<efried> Do you?  [17:42]
<thorst> yeah  [17:42]
<thorst> let me get you link  [17:42]
<thorst> its live migration  [17:42]
<thorst> (also cold I think)  [17:43]
<thorst> on shared storage, like a SSP  [17:43]
<thorst> https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L659-L660  [17:43]
<kotra03> @thorst: This is regarding the bug 1602759  [17:45]
<openstack> bug 1602759 in nova-powervm "get_vm_qp in vm.py throws an exception if the instance no longer exists" [Undecided,In progress] https://launchpad.net/bugs/1602759 - Assigned to Ravi Kumar Kota (ravkota3)  [17:45]
<kotra03> @thorst: If we change 'get_vm_qp' method to not throw InstanceNotFound exception then that would break the existing code.  [17:45]
<kotra03> There are other methods such as 'instance_exists' which depend on the InstanceNotFound exception thrown by 'get_vm_qp' method.  [17:45]
<kotra03> All the methods that call get_vm_qp method are going to be affected if we don't throw that exception.  [17:45]
<thorst> kotra03: didn't we just not want it to log or something?  [17:46]
<efried> thorst, I see it now.  [17:46]
<efried> It juuuust may be worth adding a flag to the scrubber to get around that.  [17:46]
<thorst> efried: honestly...I kinda like reading the code the way it is  [17:47]
<thorst> it makes sense.  [17:47]
<thorst> it reads well...  [17:47]
<efried> Cause you're used to it.  [17:48]
<thorst> just saying 'blast all the things' leads me to question 'well is that getting all the disks'  [17:48]
<thorst> how can it just assume that  [17:48]
<efried> Because that's its job?  [17:48]
<thorst> yeah, agree.  Plus I did write chunks of it  [17:48]
<thorst> kotra03: specifically this block barfs if we get an instance not found exception  [17:50]
<thorst> https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L1999-L2001  [17:50]
<thorst> and clogs up the logs.  [17:50]
<thorst> one could argue 'well that makes sense, the instance isn't there.'  [17:50]
<thorst> but it doesn't make sense that I'm getting logs from the destroy path for instance not found....because of course the REST API sends me events about a VM that got deleted...but then we can't process them because that VM is gone.  [17:51]
<thorst> so we shouldn't chunk big log messages for that very normal scenario  [17:51]
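[Sketch, for reference: a small Python illustration of the direction thorst and kotra03 settle on for bug 1602759: leave get_vm_qp raising InstanceNotFound so instance_exists and the other callers keep working, and have the event/destroy path treat it as the normal "VM already deleted" case instead of logging loudly. The helper name and the exact get_vm_qp signature are illustrative, not the nova-powervm code at driver.py#L1999-L2001.]

    from nova import exception
    from oslo_log import log as logging

    from nova_powervm.virt.powervm import vm

    LOG = logging.getLogger(__name__)


    def _get_instance_power_state(adapter, instance):
        """Look up the partition state, tolerating an already-deleted VM.

        get_vm_qp still raises InstanceNotFound, so callers that rely on the
        exception are unaffected; only this event path downgrades it to a
        debug message instead of clogging the logs.
        """
        try:
            return vm.get_vm_qp(adapter, vm.get_pvm_uuid(instance),
                                'PartitionState')
        except exception.InstanceNotFound:
            LOG.debug("Ignoring event for instance %s; it no longer exists "
                      "on this host.", instance.name)
            return None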
<efried> thorst, So there's the other option, which is to rework https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/volume/vscsi.py#L371-L378 (again) so that it always removes the mappings, even if discover_hdisk fails.  I believe we've established this is safe for the "no ITL" error mdrabe is seeing.  [17:51]
<thorst> so scrub in there...  [17:52]
<thorst> or a form of scrubbing.  [17:52]
*** tblakeslee has joined #openstack-powervm  [17:52]
<thorst> what if there are multiple volumes on the scsi bus...but only one wasn't found?  [17:52]
<efried> That's the bug we're hitting (partially).  [17:53]
<efried> One isn't found, but that makes the method bail and ignore the rest.  [17:53]
<efried> Although even ignoring the first one is bad.  [17:53]
<thorst> so it just returns False...you're saying the other volumes don't get processed?  [17:54]
<efried> correct.  [17:54]
<efried> mdrabe, keep me honest.  [17:54]
<efried> oh, maybe I'm wrong.  [17:55]
<efried> But the point is, I think we want to reach line 413 even if that discover_hdisk fails hard.  [17:56]
*** apearson has quit IRC17:58
*** apearson has joined #openstack-powervm18:00
<mdrabe> efried, thorst: No, all the volumes are processed  [18:07]
<mdrabe> But each one returns False because none of the ITLs are there  [18:07]
<efried> Right.  [18:07]
<mdrabe> because it was ripped out by evacuation  [18:07]
<efried> So we can try to patch up the above such that it does the disconnect (but not the delete) when it gets that particular error (or really any error).  [18:08]
<efried> However, it would have to do the disconnect via a different mechanism, because the current mechanism (line 413) relies on having info about the disk, which we don't have.  [18:09]
<efried> I reckon we can figure that out.  [18:09]
<efried> My beef is that that chunk of code is already pretty convoluted.  [18:09]
<efried> So  [18:09]
<efried> thorst, mdrabe: Option 1: rework vscsi.py discon_vol_for_vio as described above.  Option 2: hit the main destroy flow with a bigger hammer, adding a "destroy_disks=destroy_disks" option to the LPAR scrubber.  [18:10]
<mdrabe> I honestly have no clue how the VIOS would respond to option 2  [18:11]
<efried> What do you mean?  [18:11]
<efried> We use that scrub already.  [18:11]
<efried> The VIOS responds just fine.  [18:11]
<mdrabe> You can just remove all those mappings/elements no problem  [18:11]
<mdrabe> ?  [18:12]
<efried> Yeah.  [18:12]
*** kotra03 has quit IRC  [18:12]
<mdrabe> Whereas there are designated tasks to remove them nicely, which seem to involve a lot of logic  [18:12]
<mdrabe> That's my concern, but I guess if it works for everything currently...  [18:12]
<efried> Correct.  And which rely on the mappings/elements behaving nicely.  [18:12]
<mdrabe> Wait though  [18:12]
<efried> which they're not in this case.  [18:12]
<mdrabe> That scrub is only being used in one place right?  [18:12]
<efried> Today?  [18:13]
<efried> I think two.  [18:13]
<mdrabe> Yes  [18:13]
<mdrabe> looking  [18:13]
<efried> nova_powervm/virt/powervm/slot.py:123:        pvm_tstor.add_lpar_storage_scrub_tasks([lpar_id], scrub_ftsk,  [18:13]
<efried> nova_powervm/virt/powervm/tasks/vm.py:98:        pvm_stg.add_lpar_storage_scrub_tasks([wrap.id], self.stg_ftsk,  [18:13]
<mdrabe> So the example in slot.py is kind of not a good one  [18:13]
<mdrabe> since that should be for stale mappings yes? Which doesn't happen _too_ often  [18:14]
<efried> It's almost identical to this situation we're discussing.  [18:15]
<mdrabe> I disagree because  [18:15]
<mdrabe> https://review.openstack.org/#/c/348014/ would apply to all deletes  [18:15]
<mdrabe> That's what you're talking about right? Doing the scrub there? Just wanna make sure we're on the same page  [18:16]
<efried> Yes, applies to all deletes.  [18:17]
<efried> Yes, doing the scrub during the main destroy flow.  [18:17]
<efried> So okay, yeah, the slot.py example is doing the scrub during the create flow.  So is the vm.py example.  [18:19]
<efried> This would be the first attempt to use it during a "normal" destroy flow.  [18:19]
*** k0da has joined #openstack-powervm  [18:20]
<mdrabe> Alright efried, I'm gonna talk about vscsi since I know at least something about that  [18:21]
<mdrabe> In the vscsi driver, the disconnect logic, the stuff that does the work is _add_remove_mapping and _add_remove_hdisk  [18:22]
<mdrabe> Actually I start getting confused right here  [18:23]
<mdrabe> _add_remove_mapping should remove the specific VSCSI mapping for volume X on VIOS A, which would include the PV as well no? So what does _add_remove_hdisk do?  [18:24]
<efried> mdrabe, no.  _add_remove_mapping removes the VSCSI mapping.  add_remove_hdisk deletes the PV.  [18:25]
<mdrabe> Got it ok. So the remove hdisk job essentially the opposite of LUA recovery?  [18:26]
<efried> er, not really.  [18:26]
*** k0da has quit IRC  [18:26]
<efried> Think of LUA recovery as "run cfgmgr, then try to find the disk I care about".  [18:27]
<efried> Remove hdisk is just rmdev  [18:27]
<mdrabe> Got it got it  [18:27]
<mdrabe> So does that scrub remove the hdisk?  [18:27]
<efried> yup.  [18:27]
<efried> uhhh.  [18:28]
<efried> shoot, no.  [18:29]
<mdrabe> K so that's one actual concrete concern  [18:29]
<efried> Only vopt and vdisk (LV)  [18:29]
<efried> Yeah.  This won't work at all.  [18:29]
<efried> Option 1 it is.  [18:29]
<efried> Dang, it *almost* would have worked as is.  [18:31]
<efried> If only [None] evaluated as False.  [18:31]
<mdrabe> Ok so efried: so before the return False in the disconnect in the vscsi driver, there'll be some remove mapping logic  [18:33]
<mdrabe> Which'll require some additional logic since we don't have the device name  [18:34]
*** catintheroof has joined #openstack-powervm  [18:34]
<efried> mdrabe, thorst: I'm actually thinking we remove *all* of the 'return False's; and then add logic around the _add_remove_*() calls such that, if device_name is None:  [18:37]
<efried> => _add_remove_mappings converts [None] to None or [] (I think this will actually wind up "deleting" the mappings more than once - discussion to follow)  [18:37]
<efried> => _add_remove_hdisk doesn't get run at all.  [18:37]
<efried> So I believe that first thing will trigger deletion of all the mappings for the LPAR in question.  Since this is in a generic disconnect flow, that may actually be a very bad thing.  If we're doing disconnect on a running instance (rather than under the auspices of a full-VM op like destroy, LPM, cold migration, etc.), it'll disconnect disks we didn't want to tough.  [18:38]
<efried> touch.  [18:38]
<efried> So - is there some other way we can identify just the mapping that's busted?  [18:39]
<mdrabe> We'll know there's busted mappings if we can't find the hdisk  [18:40]
<efried> Is any part of the mapping object itself broken?  [18:40]
<mdrabe> er well sorry, that doesn't really help  [18:40]
<efried> If it's missing a client adapter, we're golden.  [18:40]
<efried> If it's got a client adapter, but is missing the storage element, we *might* be able to use an existing scrubber - I would have to check whether we have a scrubber that identifies storage-less but client-having mappings.  [18:41]
<mdrabe> I don't _think_ it is  [18:41]
<efried> Else we could conceivably create such a scrubber.  [18:41]
<efried> Are you able to reproduce this problem at this point?  [18:41]
<efried> E.g. by pdbing and destroying the ITLs in an otherwise-normal flow?  [18:41]
<mdrabe> It's reproducible, I can't reproduce it right now for lack of systems  [18:41]
<efried> Oh, that doesn't help.  We need to see whether the mappings are broken.  [18:41]
<efried> brb  [18:41]
*** apearson has quit IRC18:46
<efried> I don't have a scrubber for storage-less mappings.  [18:46]
<efried> I'm not even sure if it's a valid thing to remove such mappings in general.  [18:47]
<efried> May be an apearson question.  [18:47]
<efried> mdrabe, can you do me this:  [18:47]
<mdrabe> Yes, in this case  [18:47]
<mdrabe> evacuated VM deletes have storage-less mappings  [18:47]
<efried> Create a disk.  Create a mapping to it.  Rip the disk out (in the same way it's happening in this scenario).  Then pull the mapping and see what it looks like.  [18:47]
<efried> oh, unless you already have some XML like that.  [18:48]
<efried> It's just...  [18:48]
<efried> I wonder what it looks like if the ghost of the storage is there before discover_hdisk.  [18:48]
<mdrabe> It shouldn't be just rip the disk out though  [18:48]
<efried> Whether it shows up in the mapping, and if so, if that's distinguishable from when the disk is valid.  [18:49]
<mdrabe> It should be rip the disk out (and I mean from the storage backend) and restart the VIOS OR run cfgmgr/cfgdev  [18:49]
<mdrabe> Because if the system is down, which it should be for evacuation and the VIOS comes back up...  [18:50]
<mdrabe> Then that's why I'm thinking the hdisk is no longer there  [18:50]
*** apearson has joined #openstack-powervm  [19:03]
<thorst> so..I was afk.  Should I catch up on that 40 minutes of convo?  [19:04]
<mdrabe> thorst: eh probably not  [19:07]
<mdrabe> unless you want to  [19:07]
*** tblakeslee has quit IRC  [19:07]
<thorst> mdrabe: totally good not to  [19:08]
*** tblakeslee has joined #openstack-powervm  [19:30]
<efried> thorst, the upshot is my proposal isn't going to work.  See my abandon comment.  [19:30]
*** apearson has quit IRC  [19:30]
*** k0da has joined #openstack-powervm  [19:30]
<efried> Now we're investigating ways to make the mapping disappear in this error scenario.  The problem is, since we have nothing with which to identify the backing storage, we need to figure out some other way to identify the mapping.  Current speculation is that, in this situation, the Storage and/or TargetDev will actually be absent from the mapping.  If so, we should be able to remove such mappings with impunity - though we'll  [19:31]
<efried> It'll basically be a new kind of scrubber.  [19:32]
<efried> Which we actually may as well institute in the general case (perhaps even as part of ComprehensiveScrub)  [19:32]
<efried> ...assuming it's valid.  [19:32]
<efried> We should consult with apearson about that.  [19:32]
<thorst> yeah...this won't clear out the storage though?  Just the mapping?  [19:33]
<efried> thorst, Right.  The storage is already gone.  That's the point.  [19:35]
<thorst> got it  [19:36]
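[Sketch, for reference: a rough Python outline of the "new kind of scrubber" efried speculates about above: find VSCSI mappings that still have a client adapter for the LPAR but no backing storage element, and drop only those. The attribute names (scsi_mappings, client_adapter, backing_storage, lpar_id) follow pypowervm's wrapper conventions but are assumptions rather than verified API, and whether deleting such mappings is always safe is precisely the question taken to apearson below.]

    def find_storageless_maps(vios_w, lpar_id):
        """Mappings for lpar_id whose backing storage element has vanished."""
        return [smap for smap in vios_w.scsi_mappings
                if smap.client_adapter is not None
                and smap.client_adapter.lpar_id == lpar_id
                and smap.backing_storage is None]


    def scrub_storageless_maps(vios_w, lpar_id):
        """Remove those mappings and push the updated VIOS back in one POST."""
        stale = find_storageless_maps(vios_w, lpar_id)
        for smap in stale:
            # Only the mapping is removed; the storage itself is already gone,
            # which is the whole point in the evacuation scenario discussed here.
            vios_w.scsi_mappings.remove(smap)
        return vios_w.update() if stale else vios_w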
*** apearson has joined #openstack-powervm19:54
*** dwayne has quit IRC20:08
<efried> apearson: Advice on scrubbing storage-less mappings...  [20:09]
<efried> The scenario is bringing a nova-powervm back up after crashing it and evacuating its instances to another host.  [20:11]
<efried> Before the source host comes up, the PVs associated with those evacuated instances have been ripped out at the storage provider level (mdrabe, help me with the terminology here if I screwed that up)  [20:12]
<efried> The nova compute comes up and runs a process that destroys evacuated instances.  [20:12]
<efried> As part of that, we're trying to delete mappings and storage associated with such instances.  [20:12]
<apearson> @efried - so in that case, you are trying to delete the lpar...and as part of that, absolutely you can clean up mappings.  Or you could just delete the lpar and then let your mapping 'fix' code handle it as it would now have only a server adapter and no client (since the client was associated w/ the lpar)  [20:14]
<mdrabe> apearson: If I delete the LPAR, does the client adapter for any associated mappings of that LPAR automatically get deleted?  [20:15]
<efried> (mdrabe, no)  [20:15]
<efried> apearson, okay, the quandary is that this code currently lives in the generic disconnect flow - meaning even if we're just trying to disconnect a (single) disk from a (live) VM that's running normally.  [20:15]
<efried> For a normal living VM, is it possible to cause harm by removing mappings that have no Storage element?  [20:16]
<apearson> no, not generally.  The case where it causes harm is if somebody is manually changing stuff...removing a mapping and replacing it with something else (for example).  [20:17]
<efried> Maybe the storage fabric is flaking and the disk is temporarily invisible so the mapping is missing its Storage?  Normally the storage would come back to life at some point?  In which case, if we deleted the mapping, we've actually disconnected a disk which was only temporarily sick?  [20:17]
<apearson> VIOS should still be showing the disk...it's just in defined state perhaps.  IE I believe you still see the mapping.  I'm still confused why you even need to be doing this.  [20:25]
<efried> apearson: In the case we're trying to fix, it's because we've failed to retrieve one or both of the disk name or UDID from discover_hdisk.  [20:26]
<efried> aka lua_recovery  [20:30]
<efried> ...to which we've provided the "volume_id", whatever that is (cause apparently it's not the UDID or the UUID)  [20:31]
<openstackgerrit> Adam Reznechek proposed openstack/nova-powervm: Fix package setup configuration  https://review.openstack.org/348544  [20:34]
*** apearson has quit IRC20:35
*** apearson has joined #openstack-powervm20:36
<thorst> adreznec: Curious if that passes devstack...  [20:38]
<thorst> efried: you should probably take a peek at that ^^  [20:38]
<efried> I am  [20:38]
<efried> I'm not sure why this change is needed.  [20:39]
<thorst> OSA  [20:39]
<thorst> eggs  [20:39]
<thorst> venvs  [20:39]
<adreznec> Not really OSA  [20:39]
<adreznec> The way it was before wasn't actually causing the shim driver files to get properly installed into /nova/virt/powervm if you did a package (e.g. pip) install of the driver  [20:39]
<adreznec> It would only install the files under nova_powervm, not those under nova/virt/powervm  [20:40]
<adreznec> Meaning that you couldn't actually do an import of nova.virt.powervm.driver  [20:40]
<adreznec> I'm doing a full OSA redeploy to ensure it works there as well, but that will take like an hour  [20:43]
<efried> adreznec, okay, if you say so.  [20:43]
<adreznec> Solved the problem in my smaller testing though  [20:44]
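[Sketch, for reference: a hedged illustration of the kind of pbr packaging stanza adreznec is describing for review 348544, i.e. listing both top-level trees in setup.cfg so a pip install ships the nova/virt/powervm shim alongside nova_powervm. The actual patch may differ; this shows the general mechanism, not the exact change.]

    [files]
    packages =
        nova_powervm
        nova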
<efried> thorst, I got the +1, nyah nyah.  [20:44]
<adreznec> It's ok, thorst is a PEP 420/python namespace extension pro  [20:45]
*** jwcroppe has quit IRC20:53
*** dwayne has joined #openstack-powervm20:54
<efried> adreznec, really?  Then why did I wind up stumbling through https://review.openstack.org/#/c/310610/ ??  [20:55]
<adreznec> efried: Uhhhhhhhhhhhhh...... learning experience?  [20:58]
<efried> Ticket to Barcelona?  [20:59]
<efried> I'll take it.  [20:59]
*** apearson has quit IRC21:04
<thorst> adreznec efried: svenkat wins the day...he found a way to cut out a bunch of our networking-powervm code.  [21:10]
<thorst> we should talk tomorrow  [21:10]
<thorst> I've got to start biking home.  [21:10]
<adreznec> Huh  [21:10]
<adreznec> cool  [21:10]
<efried> thorst, he figured out how to do it without monkey patching?  [21:10]
<adreznec> That's always a nice thing  [21:10]
<thorst> adreznec: yeah...  [21:10]
<thorst> efried: not that yet.  [21:10]
<thorst> this is our SEA code  [21:10]
<svenkat> yes! no more monkey patching  [21:10]
<thorst> he found a way to get the VLAN back to Nova-PowerVM  [21:10]
<thorst> that is proper  [21:10]
<efried> Cool, look forward to hearing about it.  [21:10]
<adreznec> Yay  [21:10]
<thorst> so we don't have to have our agent 'loop' and listen to events and do goofy stuff  [21:11]
<thorst> should probably help with CI too.  [21:11]
<adreznec> Less code to maintain!  [21:11]
<thorst> yeah...too bad wpward isn't around to see it get deleted  [21:11]
<thorst> I should tweet him so he knows  [21:11]
<thorst> lol  [21:11]
<efried> svenkat, is a review up yet?  [21:11]
<svenkat> not yet.  [21:12]
*** svenkat has quit IRC21:13
*** apearson has joined #openstack-powervm21:29
*** apearson has quit IRC21:33
*** thorst has quit IRC21:33
*** thorst has joined #openstack-powervm21:34
*** thorst has quit IRC21:38
*** burgerk_ has joined #openstack-powervm21:39
*** burgerk has quit IRC21:42
*** burgerk_ has quit IRC21:43
*** edmondsw has quit IRC21:46
*** apearson has joined #openstack-powervm21:46
*** seroyer has quit IRC21:56
*** esberglu has quit IRC21:57
*** Ashana has quit IRC21:57
*** Ashana has joined #openstack-powervm21:58
<apearson> @efried - but then aren't you trying to rip apart an existing mapping?  Just because you don't understand the disk associated w/ the mapping, doesn't mean you should clean it up.  [21:59]
<efried> apearson, exactly.  [21:59]
<efried> It's the volume driver that has this volume_id thingy.  [21:59]
<efried> It held onto that from the previous incarnation of the compute driver somehow.  [22:00]
<efried> Point is, we now can't find the associated volume on the VIOS.  [22:00]
<efried> Ergo we can't identify which of possibly several mappings associated with this LPAR actually held that volume.  [22:00]
*** tsjakobs has quit IRC  [22:01]
<efried> But the question is, will such a mapping show up without storage on it?  And is there harm in removing such mappings even when we don't know for sure they're associated with the disk we're trying to disconnect?  [22:01]
*** mdrabe has quit IRC22:02
*** Ashana has quit IRC22:03
<apearson> @efried - I don't believe so.  But you can easily test this:  1) Create a pv-based mapping.  2) Remove the disk on the SAN, but make sure it still shows up in the vios as 'defined'.  3) See how that mapping looks.  [22:13]
<efried> rgr, thx  [22:14]
*** jwcroppe has joined #openstack-powervm22:15
*** Ashana has joined #openstack-powervm22:16
*** Ashana has quit IRC22:20
*** Ashana has joined #openstack-powervm22:22
*** Ashana has quit IRC22:26
*** Ashana has joined #openstack-powervm22:28
*** Ashana has quit IRC22:32
*** k0da has quit IRC22:38
*** jwcroppe has quit IRC22:46
*** Ashana has joined #openstack-powervm22:57
*** tblakeslee has quit IRC22:58
*** Ashana has quit IRC23:01
*** Ashana has joined #openstack-powervm23:03
*** apearson has quit IRC23:04
*** Ashana has quit IRC23:07
*** Ashana has joined #openstack-powervm23:14
*** Ashana has quit IRC23:19
*** Ashana has joined #openstack-powervm23:20
*** Ashana has quit IRC23:25
*** Ashana has joined #openstack-powervm23:27
*** Ashana has quit IRC23:31
*** Ashana has joined #openstack-powervm23:33
*** Ashana has quit IRC23:37
*** Ashana has joined #openstack-powervm23:45
*** Ashana has quit IRC23:49
*** Ashana has joined #openstack-powervm23:50
*** Ashana has quit IRC23:55
