Monday, 2016-11-14

*** dwayne_ has quit IRC  [00:01]
*** chas_ has joined #openstack-powervm  [00:15]
*** thorst has joined #openstack-powervm  [00:19]
*** chas_ has quit IRC  [00:20]
*** thorst has quit IRC  [00:26]
*** chas_ has joined #openstack-powervm  [01:16]
*** chas_ has quit IRC  [01:20]
*** thorst has joined #openstack-powervm  [01:25]
*** tjakobs has joined #openstack-powervm  [01:30]
*** tjakobs has quit IRC  [01:30]
*** thorst has quit IRC  [01:32]
*** thorst has joined #openstack-powervm  [01:51]
*** thorst has quit IRC  [01:56]
*** thorst has joined #openstack-powervm  [01:57]
*** thorst has quit IRC  [02:05]
*** chas_ has joined #openstack-powervm  [02:17]
*** chas_ has quit IRC  [02:21]
*** thorst has joined #openstack-powervm  [04:03]
*** thorst has quit IRC  [04:10]
*** chas_ has joined #openstack-powervm  [04:18]
*** chas_ has quit IRC  [04:23]
*** thorst has joined #openstack-powervm  [05:08]
*** thorst has quit IRC  [05:15]
*** thorst has joined #openstack-powervm  [06:15]
*** thorst has quit IRC  [06:20]
*** AlexeyAbashkin has joined #openstack-powervm  [06:51]
*** AlexeyAbashkin has quit IRC  [06:57]
*** thorst has joined #openstack-powervm  [07:19]
*** thorst has quit IRC  [07:25]
*** AlexeyAbashkin has joined #openstack-powervm  [07:56]
*** thorst has joined #openstack-powervm  [08:22]
*** thorst has quit IRC  [08:30]
*** k0da has joined #openstack-powervm  [08:54]
*** chas_ has joined #openstack-powervm  [08:57]
*** AlexeyAbashkin has quit IRC  [09:04]
*** madhaviy has joined #openstack-powervm  [09:23]
*** thorst has joined #openstack-powervm  [09:28]
*** thorst has quit IRC  [09:34]
*** openstackgerrit has quit IRC  [09:47]
*** openstackgerrit has joined #openstack-powervm  [09:48]
*** thorst has joined #openstack-powervm  [10:32]
*** thorst has quit IRC  [10:40]
*** AlexeyAbashkin has joined #openstack-powervm  [11:27]
*** chas_ has quit IRC  [11:29]
*** thorst has joined #openstack-powervm  [11:37]
*** thorst has quit IRC  [11:45]
*** smatzek has joined #openstack-powervm  [11:50]
*** seroyer has joined #openstack-powervm  [12:25]
*** AlexeyAbashkin has quit IRC  [12:35]
*** seroyer has quit IRC  [12:39]
*** seroyer has joined #openstack-powervm  [12:41]
*** thorst has joined #openstack-powervm  [12:50]
*** svenkat has joined #openstack-powervm  [12:50]
*** thorst_ has joined #openstack-powervm  [12:54]
*** apearson has quit IRC  [12:54]
*** thorst has quit IRC  [12:54]
*** kylek3h has quit IRC  [13:00]
*** edmondsw has joined #openstack-powervm  [13:03]
*** seroyer has quit IRC  [13:15]
*** chas_ has joined #openstack-powervm  [13:22]
*** apearson has joined #openstack-powervm  [13:24]
*** mdrabe has joined #openstack-powervm  [13:42]
*** seroyer has joined #openstack-powervm  [13:43]
*** AlexeyAbashkin has joined #openstack-powervm  [13:49]
*** kylek3h has joined #openstack-powervm  [13:51]
*** tblakes has joined #openstack-powervm  [13:54]
*** seroyer has quit IRC  [13:58]
*** AlexeyAbashkin has quit IRC  [14:02]
*** esberglu has joined #openstack-powervm  [14:07]
<thorst_> esberglu: where are we at CI wise?  [14:15]
<thorst_> cause see https://review.openstack.org/#/c/381772/  [14:16]
<openstackgerrit> Sridhar Venkat proposed openstack/networking-powervm: ProvisionRequest does not distinguish event source  https://review.openstack.org/396467  [14:16]
<esberglu> thorst_: I'm just looking at the runs from the latest deploy now. Still getting timeouts across the board on spawns  [14:17]
<esberglu> http://184.172.12.213/10/396510/6/silent/nova-pvm-dsvm-tempest-full/954b85a/  [14:17]
<thorst_> which is stuck here: "Waiting for in-progress upload(s) to complete."  [14:18]
<esberglu> Yep  [14:19]
<esberglu> efried: ^^  [14:19]
<thorst_> alright.  And the first VM hits that.  [14:19]
<thorst_> this has got to be leaving a bunch of junk in the env.  [14:19]
<thorst_> Is there a way we can clean the environment and just have a single test go through...and we can maybe step through that?  [14:20]
<thorst_> I basically want to see the logs where we create the marker...  [14:21]
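[Editor's note] The "Waiting for in-progress upload(s) to complete." message is the image-upload coordination step the channel is discussing: a small "marker" LU is placed in the Shared Storage Pool while an image is being streamed in, and other spawns block until the marker goes away. A minimal sketch of that wait pattern follows; the ssp_client object and its list_lus() call are illustrative assumptions, not the real pypowervm API.

```python
# Hypothetical sketch of the marker-LU wait pattern. An uploader creates a
# marker LU before streaming the real image LU; other hosts poll until the
# marker disappears. Names (ssp_client, list_lus, MARKER_PREFIX) are assumed.
import time

MARKER_PREFIX = "part"   # assumed marker naming convention
POLL_INTERVAL = 15       # seconds between polls
TIMEOUT = 30 * 60        # give up after 30 minutes


def wait_for_uploads(ssp_client, image_name):
    """Block until no marker LU for image_name remains in the SSP."""
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        markers = [lu for lu in ssp_client.list_lus()
                   if lu.name.startswith(MARKER_PREFIX + image_name)]
        if not markers:
            return  # upload finished (or marker cleaned up); safe to proceed
        print("Waiting for in-progress upload(s) to complete: %s"
              % [lu.name for lu in markers])
        time.sleep(POLL_INTERVAL)
    raise RuntimeError("Timed out waiting for upload marker(s) to clear; "
                       "a stale marker may be left over from a failed run.")
```

If an earlier run died mid-upload and never removed its marker, every later spawn blocks at this point, which fits the "junk left in the env" theory above.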
*** chas_ has quit IRC  [14:24]
*** chas_ has joined #openstack-powervm  [14:25]
<esberglu> thorst_: For a clean env. we would have to redeploy. And then I could just disable all of the projects. Then turn a project on just long enough to manually kick a run off  [14:25]
<thorst_> what about the staging env?  [14:26]
<thorst_> can we do it there?  [14:26]
<esberglu> It would wipe all of the OSA CI stuff.  Which is fine by me but idk if qing wu has anything on there right now he doesn't want to lose  [14:27]
<esberglu> Actually I may be able to just kick off a single CI run  [14:28]
<esberglu> The env. should be clean because it has just been OSA dev  [14:28]
<thorst_> yeah...that'd be neat.  or maybe if it has a ready node...I could hop on there quick?  [14:28]
<esberglu> And I can just enable a non-osa project for a second  [14:28]
<thorst_> well, if it has a ready node...with the patches  [14:29]
<thorst_> then I could hop on that and take a peek before you let something through  [14:29]
*** chas_ has quit IRC  [14:29]
<esberglu> We would have to rebuild the image and all that for the patches to get in  [14:29]
<esberglu> Or just apply the patches manually  [14:30]
<thorst_> esberglu: let's apply manually  [14:30]
<thorst_> just try to recreate ourselves  [14:30]
<esberglu> I'm just trying to think if there is anything from the OSA deploy that would cause issues  [14:32]
<esberglu> I don't think so  [14:32]
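[Editor's note] For the "apply the patches manually" route above, a minimal sketch of pulling an in-review Gerrit change onto a ready node's checkout is below. The repo path and the patchset ref are assumptions for illustration; the current ref should be taken from the review page.

```python
# Sketch of manually applying an in-review Gerrit change onto a ready node's
# checkout instead of rebuilding the node image. REPO_DIR and CHANGE_REF are
# placeholders (the "/5" patchset suffix is hypothetical).
import subprocess

REPO_DIR = "/opt/stack/nova-powervm"                  # assumed checkout path
GERRIT = "https://review.openstack.org/openstack/nova-powervm"
CHANGE_REF = "refs/changes/72/381772/5"               # hypothetical patchset ref


def cherry_pick_change(repo_dir=REPO_DIR, ref=CHANGE_REF):
    """Fetch a Gerrit change and cherry-pick it onto the current branch."""
    subprocess.check_call(["git", "fetch", GERRIT, ref], cwd=repo_dir)
    subprocess.check_call(["git", "cherry-pick", "FETCH_HEAD"], cwd=repo_dir)


if __name__ == "__main__":
    cherry_pick_change()
```

Whatever service consumes the code (e.g. the compute service on a devstack node) would still need a restart for the change to take effect.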
*** smatzek has quit IRC  [14:32]
<esberglu> thorst_: There are 2 ready nodes on neo14 that we can toy with  [14:38]
<thorst_> is efried1 in yet?  [14:38]
<thorst_> also, POK network is awful ATM  [14:38]
<thorst_> so I'm investigating that a bit.  [14:38]
<thorst_> nope, efried1 is out.  [14:42]
<thorst_> esberglu: PM me the private nodes?  [14:43]
*** chas_ has joined #openstack-powervm  [14:45]
*** kriskend has joined #openstack-powervm  [14:48]
*** seroyer has joined #openstack-powervm  [14:48]
*** chas_ has quit IRC  [14:50]
*** smatzek has joined #openstack-powervm  [14:56]
*** tjakobs has joined #openstack-powervm  [15:22]
<thorst_> esberglu: the pvmctl packages on these ready nodes are quite old  [15:32]
<esberglu> thorst_: I thought there was logic that got the newest in one of the scripts? Hold on I will check  [15:36]
<thorst_> I can confirm it's freezing in efried's code.  [15:42]
<thorst_> :-)  [15:42]
<esberglu> thorst_: It installs "neo-cli-latest" from GSA. I assumed that it was something that gets updated due to the name, but it does not. And yeah it's super old  [15:44]
<thorst_> lol  [15:44]
<esberglu> :q  [15:46]
<adreznec> Yeah...  [15:47]
<adreznec> We were supposed to have open sourced the cli code by now  [15:47]
<adreznec> Making that whole flow unnecessary  [15:47]
<adreznec> But it's never made it onto Chris' plate  [15:47]
<esberglu> I will get a new version up on GSA  [15:48]
<thorst_> esberglu: I need to make a new patch of efried1's...  [15:51]
<esberglu> thorst_: Okay. Let me know when it is ready  [15:52]
<thorst_> it'll be a bit  [15:54]
<thorst_> efried1 forgot how threads work  :-)  [15:54]
<thorst_> esberglu: something just kicked me off of the ready node  [16:05]
<thorst_> looks like something stole its IP  [16:05]
<esberglu> What? I disabled all of the projects on the staging CI, so nothing should be happening.  [16:07]
<esberglu> But even if runs were going through idk what would do that  [16:07]
<thorst_> I'm assuming that something from prod is overlapping the IPs?  [16:08]
<thorst_> I'm kinda blocked on testing till we figure it out  :-)  [16:11]
<esberglu> thorst_: Is this what you see if you try to log in again?  [16:15]
<esberglu> PowerVM_CI-PowerVM_DevStacked-25758  [16:15]
<thorst_> isn't that way too high a number for the staging env?  [16:17]
<thorst_> this is what it should be: powervm-ci-powervm-devstacked-13581  [16:17]
<esberglu> Yeah there's a prod node with the same IP  [16:17]
<thorst_> kill it please?  [16:17]
<thorst_> and we need to get you a different set of IPs for staging.  [16:17]
<thorst_> that'll lead to nightmares.  [16:17]
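[Editor's note] Since the collision came from the staging and production node pools drawing from overlapping address ranges, the IP-allocation fix amounts to keeping the two ranges disjoint. A small sanity-check sketch follows; the example networks are placeholders, not the CI's real ranges.

```python
# Quick check that the staging and production node IP ranges do not overlap.
# The example networks below are placeholders for illustration only.
import ipaddress

PROD_RANGE = ipaddress.ip_network("10.0.0.0/25")       # assumed production range
STAGING_RANGE = ipaddress.ip_network("10.0.0.128/25")  # assumed staging range


def assert_disjoint(net_a, net_b):
    """Raise if the two networks share any addresses."""
    if net_a.overlaps(net_b):
        raise ValueError("IP ranges overlap: %s and %s" % (net_a, net_b))


assert_disjoint(PROD_RANGE, STAGING_RANGE)
print("Staging and production ranges are disjoint.")
```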
<esberglu> Yeah I thought we did that when we very first made the staging env?  [16:18]
<esberglu> I will look into it  [16:18]
<thorst_> and I stand corrected... efried1 did have threads working right.  [16:20]
<thorst_> esberglu: where is the code we use to 'seed' the image into the SSP?  [16:22]
*** dwayne has joined #openstack-powervm  [16:22]
<thorst_> esberglu: OK - so efried1's patch works.  I'm thinking that we have environment clean out issues.  Do you think we can do a full environment rebuild here?  [16:27]
<thorst_> I want to make sure we've properly cleaned out the SSPs...  [16:27]
*** chas_ has joined #openstack-powervm  [16:27]
<thorst_> any way to clean, verify that everything is clean, then re-open?  [16:27]
<esberglu> We should just be able to run vm_cleaner.sh on all of the nodes to clean them  [16:32]
<thorst_> what about existing runs?  [16:32]
<esberglu> Zuul is disabled rn, no runs are going  [16:33]
<thorst_> ok  [16:35]
<thorst_> can you clean it out?  [16:35]
<thorst_> then we'll want to step through each one individually  [16:35]
<esberglu> Yep. I put up a change for the IP allocations  [16:36]
<thorst_> +2'd...can we roll that out quickly?  [16:37]
<thorst_> going to grab lunch....I'll check where we are when I get back.  But the intention is to go to each host and make sure there are no LUs there after we clean it out completely  [16:42]
<thorst_> not even ready nodes...  [16:42]
<thorst_> then open it back up  [16:42]
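[Editor's note] A rough sketch of the "clean everything, then verify no LUs remain" pass described above, assuming ssh access to the hosts. The host names, the vm_cleaner.sh path, and the pvmctl listing syntax are assumptions; adjust them to the real environment.

```python
# Run the cleanup script on every CI host, then confirm no logical units (LUs)
# are left behind in the Shared Storage Pool. Host list, script path, and the
# "pvmctl lu list" invocation are assumed for illustration.
import subprocess

HOSTS = ["neo14", "neo15"]                 # placeholder host list
CLEANER = "/opt/powervm-ci/vm_cleaner.sh"  # assumed path to vm_cleaner.sh


def run(host, cmd):
    """Run a command on a host over ssh and return its stdout."""
    return subprocess.check_output(["ssh", host, cmd]).decode()


for host in HOSTS:
    run(host, CLEANER)                              # clean the node
    leftover = run(host, "pvmctl lu list").strip()  # assumed pvmctl syntax
    if leftover:
        print("%s still has LUs:\n%s" % (host, leftover))
    else:
        print("%s is clean" % host)
```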
*** k0da has quit IRC  [16:50]
*** madhaviy has quit IRC  [17:02]
*** chas__ has joined #openstack-powervm  [17:10]
*** chas_ has quit IRC  [17:12]
<efried1> Hey thorst_, esberglu - I'm on now.  Anything urgent I can help with?  [17:23]
<esberglu> efried1: We found a couple issues with the CI. We had IPs overlapping in staging/production. And an outdated neo-cli  [17:24]
<esberglu> I'm redeploying with fixes for both  [17:24]
<efried1> cool  [17:24]
*** chas__ has quit IRC  [17:34]
*** chas_ has joined #openstack-powervm  [17:35]
*** chas_ has quit IRC  [17:39]
<thorst_> esberglu: did we get it cleaned out?  [18:16]
<esberglu> Yeah. And I confirmed that all the LUs were gone. I ended up doing a full cloud redeploy (still running)  [18:18]
<thorst_> esberglu: OK - cool  [18:22]
*** chas_ has joined #openstack-powervm  [18:30]
<thorst_> esberglu: we'll be able to get the ready nodes set up, but not accept jobs yet?  [18:32]
<thorst_> (and then maybe open it to just one job quick)  [18:32]
*** chas__ has joined #openstack-powervm  [18:34]
*** chas_ has quit IRC  [18:34]
*** chas__ has quit IRC  [18:38]
<esberglu> thorst_: Yep I can do that. Did you still have a change to that patch? Or did it end up being okay?  [18:40]
<thorst_> esberglu: it's OK  [18:41]
<thorst_> I think there are optimizations  [18:41]
<thorst_> but it shouldn't need it  [18:41]
<esberglu> Okay. About to kick off the mgmt playbook, then get some food  [18:42]
<thorst_> esberglu: k  [18:44]
<thorst_> thx  [18:44]
*** chas_ has joined #openstack-powervm  [19:28]
*** k0da has joined #openstack-powervm  [19:29]
*** mdrabe has quit IRC  [19:53]
*** mdrabe has joined #openstack-powervm  [19:53]
*** kylek3h is now known as kylek3h_away  [20:12]
*** mdrabe_ has joined #openstack-powervm  [20:30]
*** kylek3h_away is now known as kylek3h  [20:30]
*** mdrabe has quit IRC  [20:33]
*** mdrabe_ has quit IRC  [20:45]
<esberglu> thorst_: We've got jenkins nodes ready to go on production  [21:05]
<thorst_> can you fire one test job off?  [21:05]
<thorst_> preferably one on nova-powervm  [21:05]
*** kriskend_ has joined #openstack-powervm  [21:06]
*** kriskend has quit IRC  [21:06]
*** smatzek has quit IRC  [21:10]
<thorst_> esberglu: this run is looking pretty clean...  [21:49]
*** svenkat has quit IRC  [21:52]
<thorst_> although it does appear to be leaking LUs  [21:53]
*** kriskend_ has quit IRC  [21:55]
*** kriskend_ has joined #openstack-powervm  [21:56]
<esberglu> thorst_: Not getting stuck on the waiting for upload to complete thing though?  [21:58]
*** smatzek has joined #openstack-powervm  [22:00]
<thorst_> well, the 'part' isn't there anymore  [22:01]
<thorst_> but it is leaking a bunch of disks  [22:01]
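[Editor's note] One way to pin down the LU leak described above is to snapshot the LU names in the SSP before a test run and diff them afterwards. A sketch follows; the list_lu_names() helper is hypothetical and stands in for however the CI queries the Shared Storage Pool (REST API, pvmctl, etc.).

```python
# Snapshot-and-diff sketch for spotting leaked LUs after a tempest run.
def list_lu_names():
    """Hypothetical: return the LU names currently in the SSP."""
    raise NotImplementedError("query the SSP here")


def snapshot():
    return set(list_lu_names())


def report_leaks(before, after):
    """Print LUs that exist after the run but not before it."""
    leaked = sorted(after - before)
    if leaked:
        print("Leaked %d LU(s):" % len(leaked))
        for name in leaked:
            print("  %s" % name)
    else:
        print("No leaked LUs.")


# Usage: before = snapshot(); <run the test job>; report_leaks(before, snapshot())
```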
*** smatzek has quit IRC  [22:02]
*** smatzek has joined #openstack-powervm  [22:02]
*** edmondsw has quit IRC  [22:02]
*** mdrabe has joined #openstack-powervm  [22:17]
*** kylek3h has quit IRC  [22:23]
*** smatzek has quit IRC  [22:25]
*** thorst_ has quit IRC  [22:30]
*** tblakes has quit IRC  [22:37]
*** thorst has joined #openstack-powervm  [22:53]
*** kylek3h has joined #openstack-powervm  [22:56]
*** kylek3h has quit IRC  [22:56]
*** kylek3h has joined #openstack-powervm  [22:57]
*** thorst has quit IRC  [22:57]
*** seroyer has quit IRC  [23:09]
*** kylek3h_ has joined #openstack-powervm  [23:13]
*** kylek3h has quit IRC  [23:13]
*** kriskend_ has quit IRC  [23:15]
*** kylek3h_ has quit IRC  [23:16]
*** esberglu has quit IRC  [23:16]
*** kylek3h has joined #openstack-powervm  [23:16]
*** esberglu has joined #openstack-powervm  [23:16]
*** apearson has quit IRC  [23:17]
*** chas_ has quit IRC  [23:17]
*** chas_ has joined #openstack-powervm  [23:18]
*** chas_ has quit IRC  [23:20]
*** chas_ has joined #openstack-powervm  [23:20]
*** esberglu has quit IRC  [23:21]
*** chas_ has quit IRC  [23:25]
*** kylek3h is now known as kylek3h_away  [23:27]
*** esberglu has joined #openstack-powervm  [23:29]
*** tjakobs has quit IRC  [23:37]
*** k0da has quit IRC  [23:59]
