Thursday, 2016-07-07

*** Ashana has joined #openstack-powervm00:00
*** Ashana has quit IRC00:05
*** Ashana has joined #openstack-powervm00:06
*** Ashana has quit IRC00:11
*** Ashana has joined #openstack-powervm00:12
*** seroyer has joined #openstack-powervm00:14
*** Ashana has quit IRC00:16
*** Ashana has joined #openstack-powervm00:18
*** Ashana has quit IRC00:22
*** Ashana has joined #openstack-powervm00:24
*** thorst has joined #openstack-powervm00:25
*** Ashana has quit IRC00:29
*** Ashana has joined #openstack-powervm00:30
*** thorst has quit IRC00:32
*** Ashana has quit IRC00:34
*** Ashana has joined #openstack-powervm00:37
*** Ashana has quit IRC00:41
*** Ashana has joined #openstack-powervm00:43
*** Ashana has quit IRC00:47
*** Ashana has joined #openstack-powervm00:49
*** thorst has joined #openstack-powervm00:49
*** Ashana has quit IRC00:53
*** Ashana has joined #openstack-powervm00:57
*** Ashana has quit IRC01:02
*** Ashana has joined #openstack-powervm01:03
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Driver cleanup work  https://review.openstack.org/338573 01:03
*** tlian2 has quit IRC01:04
*** Ashana has quit IRC01:08
*** Ashana has joined #openstack-powervm01:09
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Driver cleanup work  https://review.openstack.org/338573 01:09
thorstefried: 338573 and 338536 when you get a chance....hoping to simplify that code.  Been bugging me for a while.01:11
*** thorst has quit IRC01:12
*** thorst has joined #openstack-powervm01:13
*** Ashana has quit IRC01:13
*** tlian has joined #openstack-powervm01:14
*** Ashana has joined #openstack-powervm01:15
*** Ashana has quit IRC01:19
*** Ashana has joined #openstack-powervm01:21
*** thorst has quit IRC01:22
*** Ashana has quit IRC01:25
*** Ashana has joined #openstack-powervm01:26
*** Ashana has quit IRC01:31
*** arnoldje has joined #openstack-powervm01:55
*** thorst has joined #openstack-powervm02:22
*** arnoldje has quit IRC02:26
*** kriskend has joined #openstack-powervm02:26
*** thorst has quit IRC02:27
*** thorst has joined #openstack-powervm02:34
*** thorst has quit IRC02:34
*** kriskend has quit IRC02:45
*** seroyer has quit IRC03:05
*** ManojK has joined #openstack-powervm03:22
*** jwcroppe has quit IRC03:24
*** jwcroppe has joined #openstack-powervm03:24
*** jwcroppe has quit IRC03:29
*** tlian has quit IRC03:33
*** thorst has joined #openstack-powervm03:34
*** thorst has quit IRC03:43
*** ManojK has quit IRC04:01
*** tjakobs has joined #openstack-powervm04:05
*** tjakobs has quit IRC04:07
*** jwcroppe has joined #openstack-powervm04:26
*** jwcroppe has quit IRC04:28
*** thorst has joined #openstack-powervm04:41
*** thorst has quit IRC04:48
*** tlian has joined #openstack-powervm05:35
*** tlian has quit IRC05:43
*** thorst has joined #openstack-powervm05:47
*** Ashana has joined #openstack-powervm05:48
*** Ashana has quit IRC05:53
*** thorst has quit IRC05:54
*** Ashana has joined #openstack-powervm05:54
*** Ashana has quit IRC05:59
*** Ashana has joined #openstack-powervm06:00
*** Ashana has quit IRC06:04
*** Ashana has joined #openstack-powervm06:06
*** jwcroppe has joined #openstack-powervm06:09
*** Ashana has quit IRC06:10
*** Ashana has joined #openstack-powervm06:12
*** Ashana has quit IRC06:16
*** Ashana has joined #openstack-powervm06:17
*** Ashana has quit IRC06:22
*** Ashana has joined #openstack-powervm06:23
*** jwcroppe has quit IRC06:24
*** Ashana has quit IRC06:28
*** Ashana has joined #openstack-powervm06:29
*** Ashana has quit IRC06:33
*** Ashana has joined #openstack-powervm06:35
*** Ashana has quit IRC06:39
*** Ashana has joined #openstack-powervm06:41
*** Ashana has quit IRC06:45
*** Ashana has joined #openstack-powervm06:47
*** Ashana has quit IRC06:51
*** thorst has joined #openstack-powervm06:52
*** Ashana has joined #openstack-powervm06:53
*** Ashana has quit IRC06:57
*** Ashana has joined #openstack-powervm06:59
*** thorst has quit IRC06:59
*** Ashana has quit IRC07:03
*** Ashana has joined #openstack-powervm07:04
*** Ashana has quit IRC07:09
*** Ashana has joined #openstack-powervm07:10
*** Ashana has quit IRC07:15
*** Ashana has joined #openstack-powervm07:16
*** Ashana has quit IRC07:21
*** Ashana has joined #openstack-powervm07:22
*** Ashana has quit IRC07:27
*** Ashana has joined #openstack-powervm07:28
*** jwcroppe has joined #openstack-powervm07:32
*** Ashana has quit IRC07:33
*** k0da has joined #openstack-powervm07:33
*** Ashana has joined #openstack-powervm07:34
*** Ashana has quit IRC07:39
*** Ashana has joined #openstack-powervm07:41
*** Ashana has quit IRC07:45
*** Ashana has joined #openstack-powervm07:47
*** Ashana has quit IRC07:51
*** Ashana has joined #openstack-powervm07:52
*** thorst has joined #openstack-powervm07:55
*** Ashana has quit IRC07:57
*** Ashana has joined #openstack-powervm07:58
*** Ashana has quit IRC08:03
*** thorst has quit IRC08:03
*** Ashana has joined #openstack-powervm08:04
*** jwcroppe has quit IRC08:05
*** Ashana has quit IRC08:09
*** Ashana has joined #openstack-powervm08:10
*** Ashana has quit IRC08:15
*** Ashana has joined #openstack-powervm08:16
*** openstackgerrit has quit IRC08:18
*** openstackgerrit has joined #openstack-powervm08:18
*** Ashana has quit IRC08:20
*** Ashana has joined #openstack-powervm08:25
*** Ashana has quit IRC08:29
*** Ashana has joined #openstack-powervm08:31
*** Ashana has quit IRC08:35
*** jwcroppe has joined #openstack-powervm08:35
*** jwcroppe has quit IRC08:37
*** jwcroppe has joined #openstack-powervm08:41
*** thorst has joined #openstack-powervm09:01
*** jwcroppe has quit IRC09:04
*** thorst has quit IRC09:09
*** thorst has joined #openstack-powervm10:07
*** thorst has quit IRC10:14
*** jwcroppe has joined #openstack-powervm10:26
*** jwcroppe has quit IRC10:27
*** jwcroppe has joined #openstack-powervm10:58
*** thorst has joined #openstack-powervm11:06
*** jwcroppe has quit IRC11:30
*** thorst_ has joined #openstack-powervm11:45
*** thorst has quit IRC11:49
*** jwcroppe has joined #openstack-powervm11:49
*** jwcroppe has quit IRC11:54
*** thorst_ is now known as thorst12:02
*** seroyer has joined #openstack-powervm12:20
*** mdrabe has joined #openstack-powervm12:35
*** Ashana has joined #openstack-powervm12:35
*** jwcroppe has joined #openstack-powervm12:52
*** jwcroppe has quit IRC12:57
*** ManojK has joined #openstack-powervm13:06
thorstefried adreznec: Any chance I could get you to look over 338573 and 338536 quick?13:10
*** tblakeslee has joined #openstack-powervm13:16
efriedthorst, on it.13:25
thorstthx dude13:25
efriedAny particular order?13:26
thorst536 first13:26
thorst573 depends on 536 13:26
efriedthorst, 536 atcha13:37
openstackgerritShyama proposed openstack/nova-powervm: Save valid udid on pre_live_migration of vscsi volumes using multi VIOS  https://review.openstack.org/336783 13:46
*** esberglu has joined #openstack-powervm13:48
*** lmtaylor1 has joined #openstack-powervm13:52
efriedthorst, 573 atcha13:53
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Refactor validate vopt media repo to pypowervm  https://review.openstack.org/338536 13:58
thorstefried: on 573 - I plan to do more in phases.  More of the host_uuid can be removed in a lot of places.  I guess this one isn't too big yet so I can keep iterating on that one14:00
efriedcool.14:00
efriedJust saw you get rid of it in one place, thought it would be cool to get rid of it everywhere while yer at it.14:01
*** apearson has joined #openstack-powervm14:04
thorstdon't think we can get rid of it everywhere unfortunately14:07
thorstsome things in pypowervm still take it as input14:07
thorstand that's a trickier thing to change...14:07
efriedthorst, the things in pypowervm that take it don't use it, do they?14:10
efriedAnd if they do, we should get rid of that too.14:10
*** ManojK has quit IRC14:11
thorstefried: I'm not necessarily asserting that the things in pypowervm need them14:18
thorstjust that its harder to change the API there.14:18
thorstthough we could just make that a legacy variable that could be None.14:18
efriedEasy to pass None, though14:18
efriedYuh14:18
thorstso we can/should identify those14:18
thorstbut getting through your comments first14:18
efriedYeah, I'm interested to see what in pypowervm still takes a host_uuid.14:18
efriedI hope it's minimal.14:18
thorstsome of the jobs14:19
efriedAnd if it's not minimal, I'm going to be making it my personal mission to squash those like cockroaches.14:19
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Driver cleanup work  https://review.openstack.org/338573 14:20
thorstefried: want to knock out 536 so that my dependency chain is nicer?14:20
efriedstand by14:20
efriedthorst, +2 14:21
thorstyay!14:21
thorstI did something today14:21
thorstlol14:21
thorstefried: the build_vscsi_mapping for instance takes in a host_uuid 14:23
thorstand I'm sure I'm the one that put it in there too.14:23
efriedthorst, well, I'll be ripping that apart when we have VSCSI mappings as first-class objects, and host_uuid won't even be *possible* to use since the ROOT will have to be the VIOS.14:24
thorstefried: makes sense...but we're in transition period for the time being14:25
thorstso we'll have to keep host_uuid around in some places14:25
thorstbut...definitely can minimize14:25
efriednod14:25
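For context, the "legacy variable that could be None" idea floated above looks roughly like this in Python. build_vscsi_mapping is the pypowervm task signature being discussed; the "after" shape is a sketch of the direction, not the actual patch:

    # Before: every caller has to thread host_uuid through.
    def build_vscsi_mapping(host_uuid, vios_w, lpar_uuid, storage_elem):
        ...

    # After: host_uuid becomes an optional legacy parameter.  New callers
    # pass None; the mapping is built from the VIOS wrapper and LPAR UUID.
    def build_vscsi_mapping(vios_w, lpar_uuid, storage_elem, host_uuid=None):
        ...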
*** kriskend has joined #openstack-powervm14:26
efriedthorst, 573 +2; one comment you can address if you wish.  Though the issue has become more and more embarrassing for me.14:27
thorstefried: if it is about xags...14:28
efriedheh, no.14:28
efriedNot THAT embarrassing.14:28
thorstefried: I'm going to change them back to setup and then update the doc string14:29
thorstunless you prefer set_up?14:29
openstackgerritMerged openstack/nova-powervm: Refactor validate vopt media repo to pypowervm  https://review.openstack.org/338536 14:29
efriedthorst, your call.  I can certainly tolerate the method name either way.14:29
thorstk14:29
*** ManojK has joined #openstack-powervm14:30
thorstefried: looks like we should be investigating python 3.5 as well.14:31
efriedthorst, what brings that up?14:34
*** tjakobs has joined #openstack-powervm14:34
efriedHas core openstack gotten anything working in py3?14:34
efriedLast I heard, you couldn't even tox the thing.14:34
adreznecefried: Most projects actually support it now... but not Nova or Swift14:35
adreznecOr one other I can't remember14:35
efriedOh, well, who needs those?14:35
adreznecWell, most of the main ones14:35
adreznecRight?14:35
openstackgerritShyama proposed openstack/nova-powervm: Save valid udid on pre_live_migration of vscsi volumes using multi VIOS  https://review.openstack.org/336783 14:35
thorstefried: They have a non-voting job going for it now14:36
thorstits passing for us14:36
thorstbut would be good to officially add it14:36
thorstand I think ubuntu 16.04 is what causes it.  They ship python 3.5 by default.14:37
adreznecthorst: Well that and py34 isn't going to get fixes beyond this year iirc14:37
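The tox change implied by "officially add it" is small; a sketch, assuming the usual layout (the project's real tox.ini may differ):

    [tox]
    envlist = py27,py34,py35,pep8
    # tox resolves the py35 factor to python3.5 automatically, so no
    # dedicated [testenv:py35] stanza is strictly required.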
*** seroyer has quit IRC14:37
thorstefried: a lot of the stuff in pypowervm's cna.py also has host_uuid...  Not as sure we can rip easily from there.14:38
thorstmaybe we can...14:38
thorstnot sure14:38
efriedCNA is a child of the LPAR.14:39
efriedSo we again should be *unable* to use host UUID.14:39
thorstnot just CNAs though...vNETs14:39
efriedah, where vnet is actually a CHILD of the managed system.  That makes more sense.14:39
efriedFor those, a host_uuid param would be appropriate insofar as it would allow us to save ManagedSystem GETs if the caller already has that cached.14:40
efriedBut are vnets also ROOTs these days in NovaLink?  apearson14:40
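A rough illustration of the ROOT/CHILD distinction driving this thread, in pypowervm's wrapper style (the exact .get kwargs here are assumed for illustration, not verbatim API):

    from pypowervm.wrappers import logical_partition as pvm_lpar
    from pypowervm.wrappers import managed_system as pvm_ms
    from pypowervm.wrappers import network as pvm_net

    # CNA is a CHILD of the LPAR: the parent is the LPAR, so a host_uuid
    # parameter buys nothing.
    cnas = pvm_net.CNA.get(adapter, parent_type=pvm_lpar.LPAR,
                           parent_uuid=lpar_uuid)

    # VNet is a CHILD of the ManagedSystem: a cached host_uuid identifies
    # the parent and saves a ManagedSystem GET.
    vnets = pvm_net.VNet.get(adapter, parent_type=pvm_ms.System,
                             parent_uuid=host_uuid)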
*** burgerk has joined #openstack-powervm14:43
thorstefried: just for you, I got rid of several more host_uuids that were completely unnecessary14:51
thorstthough, before we merge this, I'd like to run it through our CI14:51
thorstesberglu: Do you think we could get the CI running with just neo38/39 for now?14:51
thorstwhile the memory upgrades are running on the others?14:51
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Driver cleanup work  https://review.openstack.org/338573 14:52
esbergluthorst: Yeah. Any update on the upgrades? Still waiting on access14:52
*** jwcroppe has joined #openstack-powervm14:54
*** seroyer has joined #openstack-powervm14:55
thorstesberglu: nothing on the upgrades yet.  I suspect its a slow week due to July 4th.  A lot of people return on the 11th...14:56
thorstbut neo38/39 won't be getting any upgrades...so we're fine there.  They're already max'd14:56
*** ManojK has quit IRC14:58
*** jwcroppe has quit IRC15:00
*** svenkat has joined #openstack-powervm15:00
*** ManojK has joined #openstack-powervm15:01
svenkatthorst_, efried: this is about comments in nova-powervm sriov bp :  https://review.openstack.org/#/c/322203/7/specs/newton/powervm-sriov-nova.rst 15:01
svenkatcomment 1 is about compute driver changes. i am aware of vif driver invocation changes only.15:02
svenkatand comment 2 is about discussion on redundancy15:02
apearson@efried - no, ethernet adapters are always child objects - not root objects.15:04
thorstapearson: question was about vnets15:04
thorstnot eth adapters15:04
thorst(I think)15:04
thorstsvenkat: for comment 1, if its a vif driver change only - which I find a little suspect - then we can remove the bits about other driver changes.  I guess all that whitelist stuff is part of the vif driver then?15:05
svenkatyes. hold on, waiting for Eric to join15:06
*** efried has quit IRC15:06
*** efried has joined #openstack-powervm15:08
efriedthorst svenkat: ping?15:08
svenkat@efried, yes15:09
efriedokay, now that THAT senseless delay is out of the way.15:09
svenkatok!!15:09
efriedI haven't seen anything since "but neo38/39 won't be getting upgrades..."15:10
svenkatso, the changes in compute driver will be specific to vif invocation to attach nic to vm.15:10
efriedRight; I assumed that would entail changes in methods like spawn.15:10
efriedPerhaps not.15:10
efriedWe've got to parse the configs to detect that we need to use the SRIOV VIF driver.  Does that happen in nova driver init?15:11
svenkatspawn ends up invoking plug.. so i thought all needed changes are in plug.15:12
thorstsvenkat efried: where is this 'whitelist' stuff encapsulated?15:12
svenkatsimilar to community code, which branches off to invoke various vif drivers in plug in vif driver15:12
efriedWell, once we know we're using the SR-IOV VIF driver, I agree all that can be within the VIF driver itself.15:13
thorstefried: this is what svenkat is referring to: https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/vif.py#L46-L48 15:13
thorstused by the method at line 60 15:13
svenkatyes. the high level driver level vif15:13
svenkat@thorst: to answer your question about whitelist, it can be part of vif parameter15:14
thorstcan be or is a?15:14
thorst'can be' implies you need to massage the data somehow15:14
thorst'is part of' indicates it is already there...15:15
svenkatcan be . unless we want to prepare this data at a higher level and refer to it in plug.15:15
thorstso I believe what you're saying is that the whitelist can be completely encapsulated within the vif plug.15:15
thorstand the 'vif' as a parameter is enough to work off of.15:15
efriedthorst, I think I agree with that.15:16
svenkatthorst: yes.15:16
thorstok15:16
svenkatcontain all necessary data and logic in plug itself.15:16
thorstthen I think that there aren't really any driver changes15:16
svenkatok…15:16
efriedSo the first bullet in the blueprint (line 96)15:16
svenkatyes.15:17
efriedshould state that there are no compute driver changes, and provide the rationale as discussed above.15:17
svenkatok...15:17
svenkatwill do.15:17
efriedcool.15:17
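For reference, a sketch of the dispatch shape in nova_powervm/virt/powervm/vif.py that this conclusion leans on; the 'pvm_sriov' entry and driver class name are hypothetical placeholders for the new work, and the existing mapping entries are illustrative:

    from oslo_utils import importutils

    VIF_MAPPING = {
        'pvm_sea': 'nova_powervm.virt.powervm.vif.PvmSeaVifDriver',
        'ovs': 'nova_powervm.virt.powervm.vif.PvmOvsVifDriver',
        # hypothetical new entry:
        'pvm_sriov': 'nova_powervm.virt.powervm.vif.PvmVnicSriovVifDriver',
    }

    def _build_vif_driver(adapter, host_uuid, instance, vif):
        # The vif type from Neutron picks the driver; everything SR-IOV
        # specific, whitelist handling included, lives in that driver's plug().
        return importutils.import_object(
            VIF_MAPPING[vif['type']], adapter, host_uuid, instance)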
efriedNow the fun one.15:17
svenkatfor line 127 in bp, about redundancy15:17
efriedthorst and I had some discussions about this15:17
svenkatwe have a paragraph for redundancy.15:18
svenkatok...15:18
efriedAnd I can't freakin remember which chat service we were using, so it would take me a while to look it up.15:18
svenkatoh ok… np.15:18
efriedBut I do remember where we landed was that we would, at least initially, not expose any tunables for redundancy levels.15:18
svenkatok…15:18
thorst1 or 2...15:19
thorstthat's my understanding15:19
thorstif you have 2 available, go redundant.15:19
efriedWe would implement a default policy that would figure out an appropriate min_redundancy and max_redundancy based on the number of cards, number of VIOSes, number of ports.15:19
thorstif you only have 1...don't use redundancy15:19
thorstor...make it a conf option.15:19
thorstI'm fine with either.15:19
svenkatwhy stop at 2, can we call 1 or more instead of 1 or 2 15:19
thorstbut I don't want to go crazy on the config.15:19
thorstyou stop at 1 so you don't flood the ports.15:20
efriedRight, the goal was to keep the config simple initially15:20
thorstI mean at 2.15:20
efriedand only provide ability to tune if demanded by customer.15:20
svenkatok…15:20
thorstIf I have 100 VMs and 4 SR-IOV ports...I'd run out of ports super fast if I had them flooding all the ports15:20
efriedWe may be talking about redundancy on a couple of different levels here, thorst.15:20
efriedFirst of all - seroyer, correct me if I got this wrong - but redundant vNIC is only going to be running over one VF at a time.15:21
thorstthis is not the # of VFs backing a vNIC?15:21
thorstwell...server vNICs15:21
thorstI know my terminology is a bit off...15:21
efriedRight.  I'm pretty sure they don't all run at once.  I'm pretty sure only one is on at any given time.15:21
efriedSo we're not worried about flooding traffic.  The closest we come there is worrying about running out of VFs.15:21
thorstefried: yeah, that's what I meant about flooding15:22
thorsteven though only one is active at a time, the others are reserved and can't be used15:22
thorstyou can't over commit them15:22
efriedSo let's talk max redundancy.15:22
efriedWith two (multi-port) cards and two VIOSes, I say we should have 4 backing VFs.15:23
efriedThat eliminates SPoF.15:23
efriedActually eliminates two PoF.15:23
efriedIs that too much redundancy?15:23
thorstefried: Are you assuming two fabrics?15:23
thorstor a single fabric15:24
efriedShrug, that's outside our purview.15:24
thorstnot necessarily.  The reason FC does 4 ports is because they want two ports per fabric15:24
efriedIf they've got 2x2 ports hooked up to the same external net, it's reasonable to assume they've got switch redundancy.15:24
efriedRemember, this is all for sure on the same external net.15:24
efriedOtherwise they'd be different whitelist entries15:24
thorstI think we can make a case for 4.  I doubt that it should ever be the default for max redundancy15:24
efriedor they configured it wrong.15:24
thorstI think the desired is almost always going to be 2.15:24
thorstand 4 is going to be for your extremely paranoid workloads15:25
thorstand we shouldn't let the 1% of workloads dictate a desired default.15:25
efriedthorst, not saying you're wrong, but upon what are you basing your assumption?15:25
efried"desired almost always 2"15:25
thorstSEA today is typically only a failover between two cards15:26
efriedAcross two VIOSes.15:26
*** ManojK has quit IRC15:26
thorstports rarely fail.  The reason they do two ports on a given SEA (within a VIOS) is for perf.  I guess a port could fail...but I just don't see it making sense to default to 4 15:26
thorstalso because we can only get like 20 VFs per physical port.15:26
efriedOkay, so let's flip it around: do you see a need for us to *allow* the consumer to specify more than 2?15:27
thorstso if you default to 4...and you have two 4 port cards...you're getting a max of 40 workloads on that box15:27
thorstwhich is nothin15:27
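(Spelling out thorst's arithmetic: two 4-port cards at ~20 VFs per port gives 8 x 20 = 160 VFs; at 4 backing VFs per vNIC that caps the host at 160 / 4 = 40 VMs, versus 80 VMs at a default of 2.)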
*** ManojK has joined #openstack-powervm15:28
efriedthorst, ^^ - assuming we default to max of 2, do you think we need a config option to increase?15:28
*** k0da has quit IRC15:29
svenkatit will be a good idea to drive max via a configuration15:29
svenkatgives us room to eliminate future code changes15:29
efriedwhat about min?15:29
svenkatwhy not drive both via configuration changes15:29
efriedBy enforcing a min greater than 1, the user is saying they want deploy to fail if we can't achieve the desired minimum redundancy level.15:30
efriedIs that a likely use case?15:30
thorstefried: I'm OK with it being handled via a configuration change.  As noted earlier...I think at the OpenStack level its just a conf option of 'I want X VFs backing this thing'15:30
thorstno min...no max....just X 15:30
efriedsvenkat, sure, we *can* do that, but I'm trying to minimize conf clutter.15:30
efriedthorst, so we would set min = max = X?15:30
svenkatso one entry for both min and max in configuration15:31
thorstefried: that's what I'd start with15:31
efried(internally, as params to the anti-affinity algo)15:31
thorstyou can always make things more complex later15:31
svenkatok..15:31
efriedSo this is exactly what svenkat has described in the blueprint.15:32
svenkati will update BP with min/max configuration option15:32
efriedNo, I think this is exactly what you've already described.15:32
svenkatoh ok. got it… DRL15:32
efriedThough we should talk about the conf option names.15:32
svenkatdesired_redundnacy_level as mentioned in bp already15:33
svenkatdesired_redundancy_level 15:33
efriedthorst, maybe I misread, did you hint that openstack already has a conf option for this?15:33
thorstefried: I don't think it does15:33
efriedokay.15:33
efriedSo15:33
thorstsvenkat: change the parameter to required_redundancy_level 15:33
svenkatsure.15:33
efriedOur name needs to be somehow specific to PowerVM and to SR-IOV15:34
thorstwell, it'll be in the powervm section15:34
thorstso its specific that way15:34
svenkatok…15:34
efriedOkay, good.15:34
efriedThat should be specified15:34
efriedIn the blueprint15:34
thorstvnic_required_vf_count?15:34
svenkatok..15:34
thorstwith a default of 2?15:34
efriedvnic_required_vfs would be okay by me15:35
efriedagree default 2 15:35
svenkatthis is only if redundancy is needed.. if no redundancy is needed, this is not relevant.. did i understand this right15:35
efriedno15:35
efriedredundancy is implied by this setting.15:35
efriedIf you don't want redundancy, you set it explicitly to 1.15:35
thorstif they set it to 1...then no redundancy...15:36
efriedThere's no "redundancy on/off" switch.15:36
svenkatok…15:36
svenkati will describe it in bp for clarity15:36
svenkatok…15:36
svenkati will update bp and send it out for review soon.15:36
efriedSo to be clear, pci_passthrough_whitelist is an actual OpenStack setting.  It goes in whatever section it goes in today.15:36
svenkatok..15:36
efriedThis new vnic_required_vfs will go in the [powervm] section.15:36
svenkatok.15:37
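Pulling the thread together, the resulting nova.conf fragment would look roughly like this (values illustrative; pci_passthrough_whitelist shown in Nova's standard JSON form):

    [DEFAULT]
    pci_passthrough_whitelist = {"devname": "eth1", "physical_network": "default"}

    [powervm]
    # VFs backing each vNIC; internally min = max = this value.
    # Set to 1 for no redundancy -- there is no separate on/off switch.
    vnic_required_vfs = 2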
efriedThanks, guys.15:37
svenkatthanks15:38
thorstrockin15:38
thorstthx15:38
seroyerefried: Yes, vNIC failover is “failover”.  Active and some number of backups.  Only one physical link is used at a time.15:43
efriedthanks15:43
*** dwayne_ has quit IRC15:49
*** ManojK has quit IRC16:04
*** ManojK has joined #openstack-powervm16:24
thorstkriskend: got a sec?16:36
thorsthttps://review.openstack.org/#/c/339094/1/nova_powervm/virt/powervm/driver.py  <-- line 592.  Did you hit an error there when not using config drive?  Or did you proactively comment that out?16:37
kriskend@thorst sure16:41
kriskendwe commented that out and it fixed our problem16:42
kriskendthe problem we hit was updating the VG16:42
kriskendcuz REST support is not there for that yet16:42
kriskendWe did not make the nova.conf change to not use config drive16:43
kriskendWould you like us to do that instead???16:43
thorstkriskend: I understand...16:44
thorstOK - I can work with that.  Thx16:45
kriskendwe are currently updating our code and going to try with the changes you have put up16:46
kriskendand the one Taylor put up16:46
*** dwayne_ has joined #openstack-powervm16:48
thorsttjakobs: I posted a suggestion on how to get that through sonar.16:48
*** dwayne_ has quit IRC16:52
*** jwcroppe has joined #openstack-powervm16:58
*** dwayne_ has joined #openstack-powervm17:00
*** jwcroppe has quit IRC17:02
*** Ashana has quit IRC17:12
*** Ashana has joined #openstack-powervm17:14
*** Ashana_ has joined #openstack-powervm17:16
*** Ashana has quit IRC17:18
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Remove delete vopt if nothing to delete  https://review.openstack.org/339151 17:22
thorstkriskend tjakobs: ^^ See that.  I think if you set config drive off and use that...you should be good17:22
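The shape of that fix, sketched (nova.virt.configdrive.required_by is the real check that comes up below; the surrounding method is illustrative):

    from nova.virt import configdrive

    def _dlt_vopt(self, instance):  # illustrative name
        # If no config drive could have been built for this instance,
        # there is no vopt to delete -- skip the media repo / volume group
        # lookup entirely rather than letting it fail.
        if not configdrive.required_by(instance):
            return
        # ... proceed with the normal vopt delete ...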
*** catintheroof has joined #openstack-powervm17:53
*** jwcroppe has joined #openstack-powervm17:58
*** jwcroppe has quit IRC18:04
*** ManojK has quit IRC18:07
kriskendthorst: We tried changing force_config_drive=True to False 18:29
kriskendin nova.conf18:29
kriskendand pulled down the change you put up18:29
kriskendbut we are still failing in media.py18:30
kriskendlooking for the vopt18:30
thorststack trace?18:30
thorstcan you put it on pastebin or something?18:30
tjakobsthorst: http://pastebin.com/pzcvTd1P 18:32
thorsthttps://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/driver.py#L442 18:33
kriskendit is that same issue where it is looking for rootvg18:33
thorstkriskend tjakobs: can you put a breakpoint in there and see why configdrive.required_by is returning true?18:33
kriskendsure18:34
thorstwait18:35
thorststop18:35
thorstits not that..18:35
thorstis this a spawn?18:35
thorstor a destroy?  Cause that is definitely being called from destroy.18:35
tjakobswe are spawning18:36
tjakobsalso just put a breakpoint where you wanted and configdrive.required_by(instance) return False 18:36
thorstyeah...18:39
thorstso here's what's happening18:39
thorst1) Something else in spawn is breaking18:39
thorst2) It rolls back18:40
thorst3) Destroy is invoked18:40
thorst4) Destroy itself is failing18:40
thorstand thus the real error is hidden18:40
thorstthe ole double failure hiding the real failure18:40
thorsttjakobs kriskend: I'll have a new patch for you to try out soon...18:41
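The failure pattern thorst lays out is the classic one where rollback masks the root cause; the fix shape (method names illustrative) is to keep the original spawn exception visible even if destroy also fails:

    import logging

    LOG = logging.getLogger(__name__)

    def spawn(self, context, instance):
        try:
            self._do_spawn(instance)              # (1) something breaks here
        except Exception:
            try:
                self.destroy(context, instance)   # (2)-(3) rollback path
            except Exception:
                # (4) a failing destroy must not hide the real error.
                LOG.exception('Rollback failed for instance %s', instance)
            raise  # re-raise the original spawn failure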
kriskendyeah there is a different error before this18:42
kriskendthat we need to look at18:43
kriskendoh this other one is also on the delete I think... it is in unplug vif18:44
kriskendlooks like we have an issue there too18:44
kriskendbut still not the real problem18:44
tjakobsthorst: here was the first error message that happened: http://pastebin.com/SXV1sSaT 18:45
thorsteek...that's a good one.18:47
*** ManojK has joined #openstack-powervm18:50
thorstkriskend tjakobs: that one is in pypowervm...gross18:51
openstackgerritDrew Thorstensen proposed openstack/nova-powervm: Remove delete vopt if nothing to delete  https://review.openstack.org/339151 18:52
thorstyou'll need that patch....getting you something for above as well18:52
*** lmtaylor1 has left #openstack-powervm18:52
kriskendyeah we are still trying to update a vg cuz we are back to getting a REST error trying to do it18:54
kriskendI think when we do this...18:56
kriskend2016-07-07 13:37:52.260 INFO nova_powervm.virt.powervm.disk.localdisk [req-c978fd39-fcc2-4549-aa31-74b084058311 admin admin] Create disk. 18:56
kriskendick18:56
kriskend    <RequestURI kb="ROR" kxe="false">/rest/api/uom/VirtualIOServer/57DFF425-1FCB-4E2D-A7AE-022DAA684983/VolumeGroup/1ea243c7-08ef-375e-a0c7-8dbde3344d88</RequestURI> 18:58
kriskend    <ReasonCode kxe="false" kb="ROR">Unknown internal error.</ReasonCode> 18:58
kriskend    <Message kxe="false" kb="ROO">Current state of ResourceMonitoringControl does not allow to perform operation on VIOS with ID 1 in System 9119-MME*106CCC7 18:58
thorstkriskend: well, nonetheless...18:58
thorstthese are good errors18:58
kriskendyeah18:58
thorstthat other one needs to be taken up with changh18:58
kriskendShould we be able to create a disk on a lparHosting VG?18:59
kriskendI thought that should work...18:59
tjakobsthorst: tried your nova-powervm patch, no longer hitting the issue in that first pastebin18:59
kriskendadreznec ^18:59
thorstkriskend: yes...you should19:02
thorst(at least *I* think you should be able to)19:03
kriskendLooks to me like there is an RMC check in that path19:03
thorstyeah, but in REST...19:03
kriskendthat is keeping it from working in the VIOSless environment19:03
kriskendyep19:03
thorstso changh19:03
thorstor apearson19:03
*** jwcroppe has joined #openstack-powervm19:05
*** jwcroppe has quit IRC19:05
*** jwcroppe has joined #openstack-powervm19:05
apearsonkriskend - so this volume group is on a linux VIOS?  We'd need to get the FFDC log to see exactly where it was failing...@changh can help...19:12
kriskendyeah apearson it is19:12
kriskendswitched over to Slack to discuss... since it is REST19:13
*** k0da has joined #openstack-powervm19:24
*** mdrabe has quit IRC19:41
*** mdrabe has joined #openstack-powervm19:45
*** k0da has quit IRC19:50
*** jwcroppe has quit IRC20:03
*** apearson has quit IRC20:11
*** apearson has joined #openstack-powervm20:15
*** k0da has joined #openstack-powervm20:25
*** apearson has quit IRC20:30
*** Ashana has joined #openstack-powervm20:46
thorstefried kriskend tjakobs: 3543 for the cna fix20:49
*** Ashana_ has quit IRC20:49
*** Ashana has quit IRC20:50
efriedthorst, I really don't like that method.20:54
efriedIt's too overloaded.20:54
thorst_1 20:54
thorst+1 20:54
efriedWhere's the sonar rule for "ugly"?20:54
thorstI don't like that method either20:54
thorsttried to make it better20:54
thorstI guess the question is...did that method go into a release.20:54
efriedIt's in 1.0.0.3 20:55
efriedBut with totally different semantics.20:56
efriedSo having changed it thus is just as bad as completely refactoring it.20:56
efriedSorry, the completely different semantics is the same as what this patch was before you changed it.20:56
efriedSo I don't think what you've done here is better than ripping it apart and making it (probably into multiple methods that) make sense.20:57
efriedthorst ^^20:57
efrieddef get_partitions(adapter, get_lpars=True, get_vioses=True, get_mgmt=False): 21:00
efried    """Get a list of partitions, possibly including LPARs, VIOSes, and the management partition. 21:00
efried    :param get_lpars: If True, the result will include all LPARs. 21:00
efried    :param get_vioses: If True, the result will include all VIOSes. 21:00
efried    :param get_mgmt: If True, the result is guaranteed to include the management partition, even if 21:00
efried                     it would not otherwise have been included based on get_lpars/get_vioses. 21:00
efried    """ 21:00
efriedthorst something like ^^21:00
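A sketch of how that suggested helper might be implemented; the wrapper .get calls follow pypowervm's style, and the get_mgmt_partition helper location is assumed rather than confirmed:

    from pypowervm.tasks import partition as tpar
    from pypowervm.wrappers import logical_partition as pvm_lpar
    from pypowervm.wrappers import virtual_io_server as pvm_vios

    def get_partitions(adapter, get_lpars=True, get_vioses=True, get_mgmt=False):
        rets = []
        if get_vioses:
            rets.extend(pvm_vios.VIOS.get(adapter))
        if get_lpars:
            rets.extend(pvm_lpar.LPAR.get(adapter))
        if get_mgmt and not any(p.is_mgmt_partition for p in rets):
            # Management partition wasn't in the requested sets; fetch it
            # explicitly (helper assumed).
            rets.append(tpar.get_mgmt_partition(adapter))
        return rets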
*** svenkat has quit IRC21:00
*** kriskend has quit IRC21:06
thorstefried: let me see whats in 1.0.0.3...21:06
thorstcause I just assumed this weird semantic was there too...21:06
efriedthorst, it's the same as this was before you changed it.21:07
efriedMy point is, you changed it from one weird semantic to a completely different weird semantic.21:07
thorstseeing is believing21:07
*** apearson has joined #openstack-powervm21:07
thorstjust to be clear...we're talking get_all_lpars?21:08
efriedy21:08
thorstI don't actually see any code using that...21:08
thorstexcept in pypowervm itself21:08
thorstgiven the grossness of the method...I'm pretty inclined to just change it21:09
thorstat a minimum, make it private.21:09
thorstthoughts?  I know that's not ideal...21:09
efriedthorst, I'm on board.21:10
thorstripping it21:10
efriedWhat do you think of above suggestion?21:10
efried...plus making it private21:10
thorstyeah, I dig21:10
efriedk21:10
*** k0da has quit IRC21:12
efriedthorst, in other news, the options on SRIOV physical ports are way messed up in the schema.   Considering forcing them to change it.21:14
thorstnow is the time21:15
thorstefried: I put up a new patch.  It won't pass test...but would like to see if you think that's the right approach21:19
efriedthorst, beautiful.  I think this is much clearer in all ways.21:21
thorstyeah, me too21:21
thorstlol  - the unit tests still work21:24
thorstwhich do actually test that method I think21:24
thorstthat's kinda hilarious21:24
thorstI will be making a new test though...21:24
*** burgerk has quit IRC21:30
*** apearson has quit IRC21:31
*** apearson has joined #openstack-powervm21:37
thorstefried: posted a new version up21:46
efrieddone21:50
*** apearson has quit IRC21:52
thorstesberglu: http://184.172.12.213/73/338573/5/check/nova-powervm-pvm-dsvm-tempest-full/885f2c9/powervm_os_ci.html 21:54
thorstI'm not sure that those test_trunk things are valid for our networking-powervm agent21:54
thorsteither that or our openstack config...not sure21:54
thorstand I think that they're leaving artifacts around that screw up the other tests...21:55
esbergluOkay. There’s new qos tests too that I’m removing right now21:55
thorstesberglu: yeah...I don't think we wipe them all out...but just a select few21:55
thorsttrunks search criteria seems odd21:56
thorstwe should figure out what a 'network trunk' is to openstack...21:56
thorstany takers?21:56
esbergluthorst: Wipe all the qos tests though right?21:57
thorstesberglu: yep21:57
thorsthttps://wiki.openstack.org/wiki/Neutron/TrunkPort 21:57
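On the CI side, the mechanism for wiping those tests is a regex skip list fed to the tempest runner; a sketch of the entries implied above (file name and exact patterns assumed):

    # tempest skip list: one regex per line, passed via --blacklist-file
    .*test_trunk.*
    .*test_qos.*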
thorstalright...I've got to head out22:00
thorstsee ya22:00
*** ManojK has quit IRC22:02
*** thorst has quit IRC22:03
*** ManojK has joined #openstack-powervm22:04
*** mdrabe has quit IRC22:08
*** ManojK has quit IRC22:09
*** tjakobs has quit IRC22:12
*** apearson has joined #openstack-powervm22:12
*** esberglu has quit IRC22:14
*** apearson has quit IRC22:16
*** apearson has joined #openstack-powervm22:19
*** tblakeslee has quit IRC22:35
*** seroyer has quit IRC22:36
*** catintheroof has quit IRC22:40
*** thorst has joined #openstack-powervm22:50
*** thorst has quit IRC22:54
*** ManojK has joined #openstack-powervm23:22
*** ManojK has quit IRC23:30
*** thorst has joined #openstack-powervm23:31
*** ManojK has joined #openstack-powervm23:33
*** thorst has quit IRC23:36
*** apearson has quit IRC23:48
*** thorst has joined #openstack-powervm23:52
*** thorst has quit IRC23:57
