Friday, 2017-01-27

00:04 *** k0da has quit IRC
00:21 *** dwayne has joined #openstack-powervm
00:22 *** tblakes has joined #openstack-powervm
00:45 *** thorst_ has joined #openstack-powervm
00:55 *** thorst_ has quit IRC
01:04 *** tblakes has quit IRC
01:12 *** tblakes has joined #openstack-powervm
01:13 *** tjakobs has quit IRC
01:27 *** thorst_ has joined #openstack-powervm
01:31 *** edmondsw has joined #openstack-powervm
01:43 *** edmondsw has quit IRC
01:54 *** edmondsw has joined #openstack-powervm
02:18 *** thorst_ has quit IRC
02:20 *** thorst_ has joined #openstack-powervm
02:29 *** tblakes has quit IRC
02:42 *** edmondsw has quit IRC
02:43 *** edmondsw has joined #openstack-powervm
02:43 *** edmondsw has quit IRC
02:43 *** edmondsw has joined #openstack-powervm
02:52 *** edmondsw has quit IRC
03:00 *** Jay1 has joined #openstack-powervm
03:01 *** edmondsw has joined #openstack-powervm
03:05 *** Jay1 has quit IRC
03:09 *** edmondsw has quit IRC
03:10 *** Jay1 has joined #openstack-powervm
03:11 *** edmondsw has joined #openstack-powervm
03:14 *** edmondsw has quit IRC
03:14 *** edmondsw has joined #openstack-powervm
03:14 *** edmondsw has quit IRC
03:15 *** edmondsw has joined #openstack-powervm
03:15 *** Jay1 has quit IRC
03:40 *** thorst_ has quit IRC
04:17 *** edmondsw has quit IRC
04:17 *** edmondsw has joined #openstack-powervm
04:21 *** edmondsw has quit IRC
05:35 *** Jay1 has joined #openstack-powervm
05:56 *** thorst_ has joined #openstack-powervm
06:01 *** thorst_ has quit IRC
06:18 *** tjakobs has joined #openstack-powervm
06:37 *** tjakobs has quit IRC
07:57 *** thorst_ has joined #openstack-powervm
08:01 *** thorst_ has quit IRC
09:18 *** k0da has joined #openstack-powervm
09:58 *** thorst_ has joined #openstack-powervm
10:02 *** thorst_ has quit IRC
11:25 *** Jay1 has quit IRC
11:29 *** Jay1 has joined #openstack-powervm
11:33 *** Jay1 has quit IRC
11:52 *** smatzek has joined #openstack-powervm
11:59 *** thorst_ has joined #openstack-powervm
12:03 *** thorst_ has quit IRC
12:43 *** thorst__ has joined #openstack-powervm
12:46 *** dwayne has quit IRC
13:21 *** edmondsw has joined #openstack-powervm
13:23 *** edmondsw_ has joined #openstack-powervm
13:25 *** edmondsw has quit IRC
13:35 *** Jay1 has joined #openstack-powervm
13:40 <thorst__> efried: there?
13:42 *** thorst__ is now known as thorst_
13:50 <efried> thorst_ sup?
13:50 <efried> I've been looking at the code a bit.
13:50 <efried> What kind of partition was this?
13:51 <thorst_> type linux
13:51 <thorst_> but it's the blank image
13:51 <thorst_> so no OS
13:51 <thorst_> likely stuck in bootp
13:51 <thorst_> I think the issue is here...
13:51 <thorst_> https://github.com/powervm/pypowervm/blob/master/pypowervm/tasks/power.py#L228
13:51 <thorst_> I think that we need to increase the timeout
13:51 <thorst_> a force immediate should actually wait for it to finish.
13:52 <thorst_> in all scenarios.
13:52 <thorst_> and it looks like we do that in other places we escalate force_immediate, but not there.
13:53 <efried> So a couple of things.
13:53 <efried> Lines 248 and 252 are where we set the immediate flag
13:54 <efried> and 288
13:54 <efried> which is the one we hit if the request fails (rather than timing out).
13:55 <thorst_> so we've got to be looping through twice
13:55 <efried> We're supposed to, yes.
13:55 <thorst_> and I believe that it is line 248 where we set it
13:55 <thorst_> start the second loop
13:56 <thorst_> Force.TRUE sets the operation to shutdown/immediate
13:56 <thorst_> but I think we're just timing out in 60 seconds (which is kinda absurd...)
13:56 <thorst_> but I'd like to try to patch it with a higher value.  Like line 251
13:56 <efried> ohhh, you're saying the second loop, with shutdown/immediate, is actually timing out?
13:57 <thorst_> I think so, yes
13:57 <thorst_> well, let me rephrase
13:57 <thorst_> after code inspection of this awful logic, that's the one thing I can think of.
13:57 <efried> And of course this is happening intermittently.
13:58 <thorst_> right.
13:58 <efried> Well.
13:58 <efried> If we are indeed supplying shutdown/immediate, there should be no excuse for that to take 60s.  That sounds like a platform bug.
13:59 <thorst_> yeah, I've been talking to seroyer... it's not the hypervisor
13:59 <thorst_> once it gets to the hypervisor it's sub-second
13:59 <thorst_> the question is what's in between
13:59 <thorst_> :-)
13:59 <thorst_> yeah, I am becoming convinced this is a solid thing to try out.
14:00 <thorst_> async eventing and whatnot
14:00 <thorst_> mind if I just make it consistent between those two paths and we try again?
14:00 <efried> Sorry, you lost me.
14:01 <efried> above you said, "it looks like we do that in other places we escalate force_immediate, but not there"
14:01 <efried> do what?
14:01 <thorst_> let me show with a code example
14:01 <thorst_> it's easier that way
14:02 <efried> ight
14:02 <efried> Though I'm not convinced anything is easy when it comes to this beast.
14:02 <thorst_> if it were easy, it wouldn't be worth doing  ;-)
14:04 *** dwayne has joined #openstack-powervm
14:04 <thorst_> efried: 4761
14:04 <efried> ack
14:06 <efried> thorst_ Yes, I see - we're doing the same in the other paths where we set it to True.  This makes sense... sort of.
14:06 <thorst_> sort of.
14:06 <efried> It still doesn't make sense that we should need to do it.  If we say shutdown/immediate, it should be.... immediate.
14:06 <thorst_> mind pushing through and we'll see if it sort of fixes things?
14:07 <thorst_> yeah, I can talk to Hsien about that in the scrum today
14:07 <thorst_> it seems like we may have more problems with the async eventing.
14:07 <thorst_> but I also agree that in the case of force immediate, we should wait until it says it's done
14:07 <thorst_> (it just should be done really damn fast)
14:08 <efried> That said, if the original timeout was 1s, and we do the force in 1s, we can't reasonably expect even a force shutdown to get all the way through the REST server's Jobs module, to PHYP, and back in 1s, especially on a loaded system.
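[Editor's note: the retry/escalation pattern being discussed can be sketched as below. This is an illustration only — `run_shutdown`, `power_off_with_escalation`, and the timeout constants are hypothetical stand-ins, not the actual pypowervm API; the real logic lives in pypowervm/tasks/power.py.]

```python
# Illustrative sketch of the force_immediate escalation discussed above.
# All names here (run_shutdown, power_off_with_escalation, the timeout
# values) are hypothetical, not pypowervm's real API.

NORMAL_TIMEOUT = 60 * 30     # generous wait for a graceful, OS-assisted shutdown
IMMEDIATE_TIMEOUT = 60 * 2   # hypothetical: immediate should be fast, but must
                             # still cover the REST Jobs -> PHYP round trip

def power_off_with_escalation(run_shutdown):
    """Try a normal shutdown first; on timeout, escalate to shutdown/immediate.

    ``run_shutdown(immediate=..., timeout=...)`` stands in for the REST Job
    invocation and is assumed to raise TimeoutError when the Job does not
    finish in time.
    """
    try:
        run_shutdown(immediate=False, timeout=NORMAL_TIMEOUT)
    except TimeoutError:
        # The point made in the discussion: the escalated shutdown/immediate
        # retry must get its own reasonable timeout (not inherit a tiny one),
        # and must wait for the Job to report completion in all scenarios.
        run_shutdown(immediate=True, timeout=IMMEDIATE_TIMEOUT)
```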
14:08 <thorst_> right.
14:08 <efried> What are you talking about wrt "async eventing"?
14:08 <thorst_> when we shut down a VM, there is an 'async event' sent from the hypervisor to the REST server
14:08 <thorst_> and the REST server will finish the job when that comes in
14:08 <efried> btw, I'm not sure how this change is going to pass sonar.  Did we disable cyclomatic complexity for this module?
14:08 <thorst_> so if there are a lot of events...
14:09 <thorst_> efried: no idea... I hope we did
14:09 <efried> checking...
14:09 <efried> nooooo...
14:10 <thorst_> well, we can have a Slack "I hate Jenkins" fest again
14:11 <efried> But I remember we changed this thing recently, to add the Force enum - how tf did we pass sonar then?
14:11 <efried> Oh well, guess we'll see.
14:11 <efried> We should add UT for this.
14:12 <thorst_> I think we swore at it enough to make it feel bad and let it through
14:12 <thorst_> even the UT for power off is awful
14:14 <efried> I can see ways we could refactor this code, but it would be risky because of all the things consuming it.
14:14 *** mdrabe has joined #openstack-powervm
14:14 <thorst_> yep
14:14 <thorst_> nothing in the UTs checks the timeout now
14:14 <thorst_> great.
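[Editor's note: a test along the lines lamented above — nothing currently asserting on the timeout — might look like this minimal sketch. `power_off` and `DEFAULT_TIMEOUT` are hypothetical stand-ins, not pypowervm's real task signature or test suite.]

```python
# Minimal sketch of a UT that verifies the timeout is actually forwarded.
# power_off and DEFAULT_TIMEOUT are illustrative, not pypowervm's real API.
import unittest
from unittest import mock

DEFAULT_TIMEOUT = 1800  # hypothetical escalated timeout value

def power_off(run_job, timeout=DEFAULT_TIMEOUT):
    """Stand-in for the task under test; forwards its timeout to the Job."""
    run_job(timeout=timeout)

class TestPowerOffTimeout(unittest.TestCase):
    def test_timeout_forwarded(self):
        run_job = mock.Mock()
        power_off(run_job)
        # The missing assertion: the Job must be invoked with the escalated
        # timeout rather than a small default.
        run_job.assert_called_once_with(timeout=DEFAULT_TIMEOUT)
```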
14:15 *** kriskend has joined #openstack-powervm
14:16 *** kriskend has quit IRC
14:16 *** kriskend has joined #openstack-powervm
14:23 *** tjakobs has joined #openstack-powervm
14:27 *** tblakes has joined #openstack-powervm
14:27 *** kriskend has quit IRC
14:27 *** esberglu has joined #openstack-powervm
14:27 *** kriskend has joined #openstack-powervm
14:29 *** kriskend has quit IRC
14:29 *** kriskend has joined #openstack-powervm
14:30 *** apearson has joined #openstack-powervm
14:30 *** esberglu_ has joined #openstack-powervm
14:33 *** esberglu has quit IRC
14:33 <efried> thorst_ One thing: should a 2G image be taking 470s to "upload" on e.g. neo40 with nothing else going on?
14:33 *** Jay1 has quit IRC
14:34 <thorst_> efried: depends... comes from NovaLink to local VIOS?
14:34 <thorst_> probably not
14:34 <efried> Also regularly seeing lots and lots of the forced pipe close messages.
14:34 <efried> This is for the in-tree code.
14:34 <thorst_> uhhh
14:34 <thorst_> that seems wrong
14:35 <thorst_> updated pypowervm?
14:35 <efried> 1.0.0.4
14:39 *** tlian has joined #openstack-powervm
15:18 <efried> thorst_ Jenkins passed
15:18 <thorst_> well, rip it?
15:18 <efried> Not sure how, but gift horse, mouth, etc.
15:18 <thorst_> I'm going to rip it
15:18 <efried> ight
15:19 <thorst_> esberglu_: that should automatically get picked up being in pypowervm now right?
15:19 <thorst_> or do we need a new base image rebuild?
15:24 <openstackgerrit> Merged openstack/networking-powervm: Use neutron-lib portbindings api-def  https://review.openstack.org/422759
15:24 <esberglu_> New base image rebuild
15:25 <esberglu_> You're talking the power off change I'm assuming?
15:28 <efried> esberglu_ yes, 4761
15:28 <efried> now merged.
15:29 <thorst_> esberglu_: can we rebuild it now?
15:29 <thorst_> wipe the existing ready nodes and rebuild?
15:30 <thorst_> https://github.com/powervm/pypowervm/tree/develop
15:30 <thorst_> it's the latest commit in there
15:30 <esberglu_> Sure, I can just rebuild the mgmt node.
15:32 <esberglu_> ls
15:33 <thorst_> -la
16:12 *** kriskend has quit IRC
16:20 <esberglu_> thorst_: efried: adreznec: Yesterday I got a bunch of spawn tests to pass on an in-tree CI run.
16:20 <esberglu_> Instead of defining the networks in prep_devstack, allow the networks to be defined by os_ci_tempest.sh
16:20 <esberglu_> I have only tested this on one manual run, about to do a second and confirm
16:20 <esberglu_> I can pm you guys the test results if you want
16:21 <adreznec> esberglu_: How do the generated networks differ?
16:21 *** burgerk has joined #openstack-powervm
16:21 <esberglu_> os_ci_tempest sets some tempest conf stuff when creating the networks that wasn't getting set when doing it in prep_devstack
16:21 *** k0da has quit IRC
16:22 *** kriskend has joined #openstack-powervm
16:23 <efried> Yeah, I remember writing os_ci_tempest.sh to assume certain specific network names etc.
16:24 <esberglu_> I think the initial change was because devstack was creating networks, but not the way we wanted them. So we turned devstack network creation off
16:24 <esberglu_> And then defined them ourselves
16:24 <esberglu_> But I'm not 100% clear on why we did that instead of letting tempest create them
16:25 <esberglu_> os_ci_tempest that is
16:25 <esberglu_> I also let a run through with that change for OOT
16:26 <esberglu_> It didn't seem to cause any issues
16:26 <esberglu_> But I would like to test it with more runs before putting it in
16:30 <esberglu_> The only difference in the net creates
16:30 *** apearson has quit IRC
16:31 <esberglu_> There are some differences in the net-creates though
16:33 <esberglu_> The networks in prep_devstack are --shared, but not in os_ci_tempest
16:33 <esberglu_> And the public net in os_ci_tempest is defined with an external router
16:35 *** apearson has joined #openstack-powervm
16:40 <efried> Well I, for one, don't understand how any of that stuff works or how it affects anything or why it should matter.  Networking is thorst_'s bailiwick.
16:41 *** mdrabe has quit IRC
16:44 <esberglu_> For now I say leave OOT as is until we can test it further. And just make the change for IT if it is confirmed to work
17:00 *** apearson has joined #openstack-powervm
17:21 *** apearson has quit IRC
17:22 *** nbante has quit IRC
17:47 *** apearson has joined #openstack-powervm
18:55 *** k0da has joined #openstack-powervm
19:00 *** apearson has quit IRC
19:05 *** apearson has joined #openstack-powervm
19:37 *** apearson has quit IRC
19:47 *** apearson has joined #openstack-powervm
20:10 <esberglu_> thorst_: efried: adreznec: Confirmed that using the networks from os_ci_tempest.sh works for in tree
20:10 <thorst_> for now
20:13 <esberglu_> I created a whitelist based on the results of that run. Gonna redeploy the staging CI and test the whitelist and get a few more runs with this change through
20:32 <adreznec> Cool
20:58 *** smatzek has quit IRC
20:59 *** smatzek has joined #openstack-powervm
21:12 *** tblakes has quit IRC
21:22 *** smatzek has quit IRC
21:25 *** tblakes has joined #openstack-powervm
22:02 <thorst_> I revote that this security group test should just be skipped.
22:09 <esberglu_> I have a patch up for it, just responded to a question adreznec had on it
22:10 <thorst_> can I just rogue +2 that beast?
22:10 <adreznec> lol
22:13 <esberglu_> He was just asking if we want to merge the change to the skip list, or just pick up the change as a patch in production. I think it is better to just merge it
22:13 <thorst_> yeah, as long as we root cause it
22:13 <thorst_> and not just forget it later
22:15 <adreznec> Sure
22:15 <adreznec> thorst_: feel free to rogue +2 with that caveat
22:15 <thorst_> I did that like 5 minutes ago
22:15 <adreznec> lol
22:15 <thorst_> #rogue
22:15 <thorst_> I'm out for the weekend.  See ya!
22:16 *** thorst_ has quit IRC
22:23 *** edmondsw_ has quit IRC
22:23 *** edmondsw has joined #openstack-powervm
22:28 *** edmondsw has quit IRC
22:28 *** kriskend has quit IRC
22:36 *** esberglu_ has quit IRC
22:36 *** esberglu has joined #openstack-powervm
22:39 *** apearson has quit IRC
22:41 *** esberglu has quit IRC
22:50 *** burgerk has quit IRC
22:50 *** esberglu has joined #openstack-powervm
22:54 *** esberglu has quit IRC
22:58 *** dwayne has quit IRC
23:10 *** tblakes has quit IRC
23:14 *** mdrabe has quit IRC
23:32 *** tjakobs has quit IRC
23:43 *** edmondsw has joined #openstack-powervm
23:48 *** dwayne has joined #openstack-powervm
23:52 *** smatzek has joined #openstack-powervm
23:53 *** smatzek_ has joined #openstack-powervm
23:54 *** smatzek_ has quit IRC
23:55 *** smatzek_ has joined #openstack-powervm
23:57 *** smatzek has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!