Thursday, 2016-10-20

*** esberglu has quit IRC  00:12
*** edmondsw has quit IRC  00:46
*** apearson has joined #openstack-powervm  00:58
*** thorst_ has quit IRC  01:24
*** thorst_ has joined #openstack-powervm  01:25
*** thorst_ has quit IRC  01:33
*** svenkat has joined #openstack-powervm  01:46
*** thorst_ has joined #openstack-powervm  02:14
*** thorst_ has quit IRC  02:14
*** thorst_ has joined #openstack-powervm  02:14
*** thorst_ has quit IRC  02:23
*** svenkat has quit IRC  02:26
*** svenkat has joined #openstack-powervm  02:27
*** thorst_ has joined #openstack-powervm  02:32
*** thorst_ has quit IRC  02:33
*** thorst_ has joined #openstack-powervm  02:39
*** thorst_ has quit IRC  02:39
*** svenkat has quit IRC  03:15
*** seroyer has quit IRC  03:29
*** thorst_ has joined #openstack-powervm  03:40
*** thorst_ has quit IRC  03:48
*** thorst_ has joined #openstack-powervm  04:48
*** thorst_ has quit IRC  04:54
*** thorst_ has joined #openstack-powervm  05:52
*** thorst_ has quit IRC  05:59
*** apearson has quit IRC  06:29
*** apearson has joined #openstack-powervm  06:34
*** thorst_ has joined #openstack-powervm  06:58
*** thorst_ has quit IRC  07:04
*** thorst_ has joined #openstack-powervm  08:03
*** thorst_ has quit IRC  08:09
*** k0da has joined #openstack-powervm  08:41
*** thorst_ has joined #openstack-powervm  09:07
*** thorst_ has quit IRC  09:14
*** Cartoon has joined #openstack-powervm  09:53
*** Cartoon has quit IRC  09:53
*** Cartoon has joined #openstack-powervm  09:55
*** thorst_ has joined #openstack-powervm  10:13
*** thorst_ has quit IRC  10:19
*** thorst_ has joined #openstack-powervm  11:17
*** thorst_ has quit IRC  11:17
*** thorst_ has joined #openstack-powervm  11:17
*** viclarson has joined #openstack-powervm  11:29
*** kylek3h_away has quit IRC  11:55
*** edmondsw has joined #openstack-powervm  12:08
*** thorst_ has quit IRC  12:37
*** esberglu has joined #openstack-powervm  12:54
*** tblakes has joined #openstack-powervm  12:54
*** mdrabe has joined #openstack-powervm  13:20
*** kriskend_ has joined #openstack-powervm  13:28
*** efried has joined #openstack-powervm  14:16
*** tjakobs has joined #openstack-powervm  14:21
*** seroyer has joined #openstack-powervm  14:24
*** seroyer has quit IRC  14:28
*** smatzek has joined #openstack-powervm  14:35
*** apearson has quit IRC  14:50
*** smatzek has quit IRC  14:51
*** smatzek has joined #openstack-powervm  15:06
*** smatzek has quit IRC  15:07
*** mdrabe has quit IRC  15:10
*** thorst_ has joined #openstack-powervm  15:11
*** thorst_ has quit IRC  15:11
*** thorst_ has joined #openstack-powervm  15:13
*** mdrabe has joined #openstack-powervm  15:18
*** seroyer has joined #openstack-powervm  15:21
*** kylek3h has joined #openstack-powervm  15:25
*** kylek3h has quit IRC  15:51
*** seroyer has quit IRC  16:00
*** AlexeyAbashkin has quit IRC  16:01
*** seroyer has joined #openstack-powervm  16:03
*** viclarson has quit IRC  16:12
<adreznec> esberglu: thorst_ if the SSP is full I'd expect errors like that  16:17
<adreznec> How much space do we have out there?  16:17
<esberglu> adreznec: thorst: SSP is not full  16:17
<thorst_> output from pvmctl?  16:17
<thorst_> and what's the lu output?  16:17
<esberglu> +-----------+----------+------------+----------+  16:17
<esberglu> |    Name   | Cap (GB) | Free Space | OC Space |  16:17
<esberglu> +-----------+----------+------------+----------+  16:17
<esberglu> | ssp_stage |  249.88  |   222.86   |   0.0    |  16:17
<esberglu> +-----------+----------+------------+----------+  16:17
<thorst_> efried: what is OC Space?  16:18
<esberglu> +-----------+---------------------------------------------------------------+----------+-------+------+---------------------------------------------------------------+--------+  16:18
<esberglu> |    SSP    |                              Name                             | Cap (GB) |  Type | Thin |                             Clone                             | In use |  16:18
<esberglu> +-----------+---------------------------------------------------------------+----------+-------+------+---------------------------------------------------------------+--------+  16:18
<esberglu> | ssp_stage | image_template_PowerVM_Ubuntu_Base_1476973324_c04c707c85afd5> |   60.0   | Image | True | image_template_PowerVM_Ubuntu_Base_1476973324_c04c707c85afd5> | False  |  16:18
<thorst_> that SSP is smaller than I thought it was.  16:18
<esberglu> |           | part10253365image_template_PowerVM_Ubuntu_Base_1476973324_c0> |   0.01   | Image | True | part10253365image_template_PowerVM_Ubuntu_Base_1476973324_c0> | False  |  16:18
<esberglu> +-----------+---------------------------------------------------------------+----------+-------+------+---------------------------------------------------------------+--------+  16:18
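For reference, the listings above appear to come from pvmctl; a rough pypowervm sketch of the same check follows. The wrapper and attribute names used (SSP.get, capacity, free_space, logical_units, lu.capacity) are assumed from memory of the pypowervm storage wrappers and may differ by release.

```python
# Rough sketch only, not taken from the CI scripts.  It mirrors what the
# pvmctl listings above show: pool capacity/free space, plus the LUs in
# the pool.  The pypowervm names here are assumptions, not verified.
import pypowervm.adapter as pvm_adpt
import pypowervm.wrappers.storage as pvm_stg

adap = pvm_adpt.Adapter(pvm_adpt.Session())  # local NovaLink-style auth
for ssp in pvm_stg.SSP.get(adap):
    pct_free = 100.0 * ssp.free_space / ssp.capacity
    print("%s: %.2f of %.2f GB free (%.1f%%)" %
          (ssp.name, ssp.free_space, ssp.capacity, pct_free))
    for lu in ssp.logical_units:
        print("  LU %s: %.2f GB" % (lu.name, lu.capacity))
```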
<thorst_> heh, we need to use more pastebin.  16:18
<efried> thorst_, not sure - ask clbush.  16:18
*** openstackgerrit has quit IRC  16:18
<esberglu> Ugh that looks awful. You guys need a pastebin for it?  16:19
<thorst_> nah  16:19
<adreznec> I got it, but yeah, IRC doesn't handle tables well  16:19
<thorst_> so if it can't find the right LU, I'd probably ask efried to take a peek?  16:19
*** openstackgerrit has joined #openstack-powervm  16:19
<thorst_> he's mr. SSP & OpenStack  16:19
<adreznec> Poor guy... err... lucky, I meant lucky  16:20
<thorst_> hey, I'm just saying that SSP has linked clones.  16:21
<adreznec> kriskend_: any more luck with testing the new glance code?  16:25
<efried> esberglu, what do you want me to look at?  16:26
<kriskend_> adreznec haven't yet figured out why it isn't running  16:26
<adreznec> Crud, ok  16:28
<esberglu> efried: Nodes aren't spawning on the staging CI. Seeing this in the FFDC log  16:40
<esberglu> http://paste.openstack.org/show/586606/  16:40
<esberglu> By aren't spawning I mean they are just hanging, not erroring out  16:40
<efried> esberglu, that FFDC stuff doesn't mean anything to me.  16:41
<efried> Have you asked someone in REST?  16:41
<efried> I can take a look at the compute logs if you like.  Hanging is weird, sounds like another marker LU problem.  16:42
<esberglu> Delete is hanging too  16:45
<esberglu> It's on neo14  16:45
<thorst_> efried: as you get a chance...can you look at launchpad bug 1634963 in networking-powervm.  It's due to a change you made and I've got an exploiter asking for status on it.  16:48
<openstack> Launchpad bug 1634963 in networking-powervm "provision_devices called with empty set of requests" [Undecided,New] https://launchpad.net/bugs/1634963  16:48
<efried> thorst_, looking.  16:48
*** seroyer has quit IRC  16:49
*** AlexeyAbashkin has joined #openstack-powervm  16:49
*** AlexeyAbashkin has quit IRC  16:49
<thorst_> efried: thx dude  16:52
<kriskend_> adreznec Hmm stepping thru this new glance code... apparently it is running...  16:54
*** AlexeyAbashkin has joined #openstack-powervm  16:54
*** k0da has quit IRC  16:55
*** dwayne has joined #openstack-powervm  16:57
<efried> thorst_, that bug is pretty bogus.  The overhead in provision_devices for an empty list is negligible.  16:57
<thorst_> does it call get_device_details at all?  16:58
<efried> no  16:58
<thorst_> then I agree.  16:58
<thorst_> I'm good to reject it  16:58
<thorst_> let's just let the exploiter know  16:58
<thorst_> I don't see arnoldje in here...  16:58
<efried> It builds a couple of empty dicts.  Other than that, it's just the method call overhead itself.  16:58
<efried> oh  16:59
<thorst_> the only reason I question it is because it's arnoldje...  his analysis is the best analysis  16:59
<thorst_> "the best"  16:59
<thorst_> (channeling donald)  16:59
<efried> I wasn't looking at the (non-networking-powervm) exploiter code.  16:59
<thorst_> ahh, lol  16:59
<efried> But I would challenge that we shouldn't call the method at all.  16:59
<thorst_> well, meh  16:59
<efried> Yeah, if they want to no-op on zero requests, they can check that at the beginning of their stuff.  17:00
<thorst_> yep.  17:00
<efried> How do I know whether every exploiter wants to act on an empty list or not?  17:00
<thorst_> you don't  17:00
<thorst_> reject it  17:00
<efried> nod  17:01
<efried> thorst_, all that said, I'm not sure I really see the value in calling the provision at all on an empty list.  17:04
<efried> So, meh, I wouldn't object to "fixing" it in the community.  17:04
<efried> thoughts?  17:04
<thorst_> It's minor, but yeah.  I'm fine either way  17:04
<thorst_> your call  17:04
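For context, the change being weighed here is tiny either way; a minimal sketch of the exploiter-side guard efried describes (only provision_devices comes from networking-powervm; the helper name and agent object are hypothetical):

```python
def provision_if_needed(agent, requests):
    """Hypothetical exploiter-side wrapper around provision_devices().

    Skips the agent call entirely when there is nothing to provision,
    rather than relying on networking-powervm to no-op cheaply on an
    empty request list.
    """
    if not requests:
        return
    agent.provision_devices(requests)
```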
<thorst_> adreznec: in our OSA install...I *think* I see the issue.  17:05
<thorst_> the neutron-agents-container...it's linking br-int to 'br-provider'.  Which doesn't seem to actually exist within the container  17:05
*** apearson has joined #openstack-powervm  17:07
*** apearson has quit IRC  17:07
*** apearson has joined #openstack-powervm  17:10
<thorst_> adreznec: I can manually throw it on...but we'd lose it with the OSA rebuild  17:15
<adreznec> thorst_: Sorry, was grabbing lunch  17:24
<adreznec> Hmm, ok. br-provider was actually a bit of post-setup I did, it wasn't part of the OSA build at all.  17:26
<adreznec> As far as I can tell the OSA OVCS setup doesn't actually connect br-int to anything  17:26
<adreznec> *OVS  17:26
<thorst_> adreznec: I think there is a bug in OSA for OVS  17:29
<thorst_> because it doesn't do that setup  17:29
<thorst_> :-)  17:29
<adreznec> Agreed. For now I think we can just set it up manually and doc the requirement  17:31
<adreznec> And push stuff up for fixes next week at the summit  17:31
<thorst_> bleh  17:31
*** seroyer has joined #openstack-powervm  18:07
-openstackstatus- NOTICE: The Gerrit service on review.openstack.org is being restarted now in an attempt to resolve some mismatched merge states on a few changes, but should return momentarily.  18:09
<thorst_> adreznec: can we try to have each VM connect to another?  18:13
<adreznec> Did you figure out that fw issue?  18:13
<thorst_> did you get the 10.0.0.x applied to another VM?  18:13
<thorst_> no, but I want to see if VM to VM is OK  18:13
<adreznec> No, I can do that quick  18:13
<adreznec> Was still looking at config drive stuff  18:14
*** tblakes has quit IRC  18:23
<adreznec> thorst_: Ok, I can ping 10.0.0.4 from 10.0.0.8  18:28
<thorst_> OK...so it's definitely this network node  18:28
<thorst_> I can't seem to get it working right.  18:28
<thorst_> I think we'll need to connect with automagically at Barcelona about it...but for now...I think we may want to redeploy with the network node using LB  18:29
<adreznec> Hmm ok, let me poke at it for a minute  18:29
<adreznec> What changes did you make so far to get traffic flowing?  18:29
<thorst_> FYI - I currently have the firewall driver turned to noop  18:29
<thorst_> adreznec: Changes to get it into the host.  I did:  18:33
<thorst_> 1) ovs-vsctl add-port br-provider eth1  18:33
<thorst_> 2) ifconfig eth1 up  18:33
<thorst_> 3) ifconfig eth1 0.0.0.0  18:34
<thorst_> Step 2 was a br-provider up...  18:34
<thorst_> 4) ifconfig br-provider <ip> netmask 255.255.252.0  18:34
<thorst_> 5) restart agent  18:34
<thorst_> and a few iterations have been messing with the conf files....but nothing drastic  18:34
<adreznec> Ok  18:35
*** apearson has quit IRC  18:41
*** tobias_ has joined #openstack-powervm  18:55
*** tobias_ has quit IRC  18:59
<thorst_> adreznec: anything I should be doing atm?  19:23
*** k0da has joined #openstack-powervm  19:23
<adreznec> thorst_: You could try digging into the config drive stuff I guess. I've been playing with the network node config and trying to recover our other OSA controller, haven't had time to dig any farther as to why the interfaces data isn't getting populated correctly  19:28
<thorst_> adreznec: can we deploy?  19:29
<thorst_> VIOSes are RMC busy, are they not?  19:29
<adreznec> Uhh... good question  19:30
<adreznec> Do we need to restart the vio daemons?  19:30
<thorst_> yeah...  19:30
<thorst_> did mmandell's upload ever finish?  19:31
<adreznec> Not sure  19:32
<kriskend_> thorst: any reason this code would be much faster?  19:52
*** seroyer has quit IRC  19:52
<thorst_> kriskend_: yes.  But how much faster are we talking  19:53
<thorst_> if it's like 99% faster, then I'm not sure it's copying the bits  :-)  19:53
<thorst_> if it's 50% faster...I could believe that  19:53
<kriskend_> not sure yet... we will see  19:53
<thorst_> kriskend_: the suspense...  19:54
<kriskend_> Task crt_disk_from_img completed in 217 seconds for instance KAx-3-c1da6a77-pvm  19:56
<kriskend_> 10G image  19:56
<adreznec> What was it before  19:56
*** seroyer has joined #openstack-powervm  19:57
<thorst_> and make sure it's still doing three at a time...and not one by one  19:58
<kriskend_> I think it was around 400 seconds or so  19:58
<kriskend_> it grew if you did a bunch...  19:59
<kriskend_> I am going to have to remove some breakpoints...  19:59
<thorst_> kriskend_: so naturally one will be faster than 3 at a time...so that's expected.  But another thing I'd expect...lower CPU usage in the nova-powervm process while it runs.  19:59
<kriskend_> I am doing 7 deploys I think 3 at a time are still happening and the total time for all of them is not going to be nearly as long as it was before  20:00
<kriskend_> half is probably about where it will land...  20:00
<thorst_> rockin  20:01
<thorst_> monitor the CPU utilization of the nova_powervm process too  20:01
<thorst_> what's that look like  20:01
<thorst_> (I believe it killed a core initially)  20:02
<kriskend_> like 5%  20:02
<thorst_> smells like a winner  20:03
<kriskend_> oh but I think it was past the copy part already  20:03
<thorst_> o bleh.  20:03
<kriskend_> last couple VMs are booting now  20:03
<thorst_> that's the part I care about  20:03
<kriskend_> so 20 minutes to do 7 VMs...not bad  20:04
<kriskend_> and there was a bit of extra delay cuz of my breakpoints  20:04
<kriskend_> I want to try this on the system that always has issues with multi deploys  20:04
<kriskend_> so going to load it there and try it out  20:04
<thorst_> kriskend_: sounds good  20:06
<thorst_> I made a bug for the other issue you pinged me on  20:06
<adreznec> Which system is that  20:06
<thorst_> https://bugs.launchpad.net/nova-powervm/+bug/1635385  20:06
<openstack> Launchpad bug 1635385 in nova-powervm "True error can get lost in getting partition state" [Undecided,New]  20:06
<kriskend_> hmm anyone know if Hsien's delete patch got merged...  20:10
<kriskend_> or how I would go about getting it?  20:11
<adreznec> the vopt delete on kriskend_ ?  20:12
<adreznec> *one  20:12
<adreznec> If so, that's merged  20:13
<kriskend_> yes  20:15
<openstackgerrit> Drew Thorstensen (thorst) proposed openstack/nova-powervm: Support additional responses for qp's  https://review.openstack.org/389339  20:19
<thorst_> efried: I consider that one ^^ to be a backport candidate to newton.  20:21
*** thorst_ has quit IRC  20:22
*** mdrabe has quit IRC  20:25
*** apearson has joined #openstack-powervm  20:26
*** apearson has quit IRC  20:29
*** apearson has joined #openstack-powervm  20:29
<kriskend_> 23 minutes to deploy and boot 10 10G VMs  20:49
<kriskend_> used to take about 45 minutes with this controller and compute  20:49
<adreznec> All successful?  20:49
<kriskend_> yeah but this is not on the system it used to fail on  20:49
<kriskend_> trying that next...  20:49
<adreznec> Ah ok  20:50
*** apearson has quit IRC  20:53
*** apearson has joined #openstack-powervm  20:53
*** thorst_ has joined #openstack-powervm  20:56
*** edmondsw has quit IRC  21:02
<efried> thorst_, we should never see 406.  21:08
<thorst_> o we do  21:09
<efried> thorst_ Tell me how.  406 should be if we send the wrong Accept header for the response payload.  If we're seeing it for any other reason, that's a REST bug.  21:12
<adreznec> Why would we see a 406? That seems like we're sending a bad request  21:13
<adreznec> Also do and should are very different things :P  21:13
<efried> adreznec, agree on both points.  21:14
<efried> If we're sending a bad request, we need to fix that bug.  21:14
<efried> If we *do* get a response we *should* not, REST needs to fix that bug.  21:15
<efried> But we *should* not be band-aiding the community code because one or both of the above bugs exists.  21:15
<adreznec> Right, unless REST is sending back 406s for things that really shouldn't be 406s then "handling" a 406 could lead to bad behavior down the road  21:15
*** tjakobs has quit IRC  21:49
<thorst_> efried: http://pastebin.com/RKqfBChf  21:51
<thorst_> try on your own.  I thought 406 may have applied due to the QuickProperty thing  21:51
<efried> thorst_, okay, I can buy that the REST server is doing "the right thing" there - because the URI is invalid, it wanted to send us an XML error response with a 400 or 500; but we special-case 'quick' to accept application/json because (when we get the URI right) that's what we get back.  21:54
<efried> So I suppose the same thing is happening with a 404.  21:54
<efried> And what we really ought to be doing is accepting either application/json or application/atom+xml.  21:55
<efried> Not sure if there's a way to get the Accept header to take more than one type...  21:55
*** thorst_ has quit IRC  21:56
*** thorst_ has joined #openstack-powervm  21:57
<efried> thorst_, did you catch any of that?  21:57
*** dwayne has quit IRC  22:00
<efried> thorst_, I have a fix in pypowervm that we really should have made a looong time ago.  22:04
*** thorst_ has quit IRC  22:05
*** thorst_ has joined #openstack-powervm  22:20
<thorst_> efried: I think I did  22:20
<thorst_> and I don't think there is a way to say both...  22:20
<efried> thorst_ - 10s  22:21
<efried> thorst_, 4308  22:21
<efried> thorst_, I tested this live, and it does The Right Thing.  I got 400 and 404 to come through as proper HttpErrors with 400/404 in 'em; but the real, valid qp response comes through when you get the URI right.  22:22
<efried> We should've fixed this three years ago.  22:22
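For reference, HTTP does let an Accept header carry more than one media type; a standalone illustration (using the requests library and a placeholder URL, not the actual pypowervm change) of a client that prefers JSON but can still accept the XML error payloads:

```python
import requests

# Illustration only: list both media types in Accept, so a 'quick' request
# that normally returns JSON can still negotiate the XML body the server
# sends back for 4xx/5xx errors.  The URL is a placeholder, not a real
# PowerVM REST endpoint.
resp = requests.get(
    "https://server.example:12443/rest/api/uom/SomeObject/quick",
    headers={"Accept": "application/json, application/atom+xml"},
    verify=False,
)
print(resp.status_code, resp.headers.get("Content-Type"))
```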
*** thorst_ has quit IRC  22:24
*** kriskend_ has quit IRC  22:34
*** k0da has quit IRC  22:53
*** dwayne has joined #openstack-powervm  23:10
*** thorst_ has joined #openstack-powervm  23:24
*** thorst_ has quit IRC  23:33
