Tuesday, 2017-08-01

*** smatzek has quit IRC00:04
*** edmondsw has joined #openstack-powervm00:21
*** edmondsw has quit IRC00:25
*** thorst_afk has joined #openstack-powervm00:46
*** thorst_afk has quit IRC00:54
*** thorst_afk has joined #openstack-powervm01:29
*** thorst_afk has quit IRC01:38
*** thorst_afk has joined #openstack-powervm01:38
*** thorst_afk has quit IRC01:43
*** thorst_afk has joined #openstack-powervm01:55
*** thorst_afk has quit IRC01:55
*** thorst_afk has joined #openstack-powervm02:11
*** thorst_afk has quit IRC02:25
*** esberglu has joined #openstack-powervm02:39
*** esberglu has quit IRC02:44
*** esberglu has joined #openstack-powervm02:49
*** apearson has joined #openstack-powervm03:10
*** apearson has quit IRC03:19
*** apearson has joined #openstack-powervm03:19
*** https_GK1wmSU has joined #openstack-powervm03:22
*** https_GK1wmSU has left #openstack-powervm03:23
*** apearson has quit IRC04:13
*** apearson has joined #openstack-powervm04:25
*** thorst_afk has joined #openstack-powervm04:26
*** thorst_afk has quit IRC04:31
*** esberglu has quit IRC06:06
*** esberglu has joined #openstack-powervm06:06
*** esberglu has quit IRC06:15
*** thorst_afk has joined #openstack-powervm06:28
*** thorst_afk has quit IRC06:32
*** thorst_afk has joined #openstack-powervm08:27
*** thorst_afk has quit IRC08:32
*** esberglu has joined #openstack-powervm09:48
*** esberglu has quit IRC09:53
*** thorst_afk has joined #openstack-powervm10:28
*** thorst_afk has quit IRC10:32
*** smatzek has joined #openstack-powervm11:04
*** edmondsw has joined #openstack-powervm11:09
*** edmondsw has quit IRC11:14
*** esberglu has joined #openstack-powervm11:37
*** thorst_afk has joined #openstack-powervm11:40
*** efried has quit IRC11:40
*** esberglu has quit IRC11:42
*** svenkat has joined #openstack-powervm11:49
*** efried has joined #openstack-powervm11:52
*** apearson has quit IRC11:53
*** svenkat_ has joined #openstack-powervm11:55
*** svenkat has quit IRC11:57
*** svenkat_ is now known as svenkat11:57
*** thorst_afk has quit IRC12:22
*** esberglu has joined #openstack-powervm12:24
*** openstackgerrit has joined #openstack-powervm12:35
<openstackgerrit> Eric Berglund proposed openstack/nova-powervm master: DNM: ci check  https://review.openstack.org/328315  12:35
<openstackgerrit> Eric Berglund proposed openstack/nova-powervm master: DNM: CI Check2  https://review.openstack.org/328317  12:35
*** edmondsw has joined #openstack-powervm12:38
*** esberglu has quit IRC12:39
*** esberglu has joined #openstack-powervm12:55
*** apearson has joined #openstack-powervm12:57
*** thorst_afk has joined #openstack-powervm12:57
*** kylek3h has joined #openstack-powervm13:01
*** jay1_ has joined #openstack-powervm13:01
<esberglu> #startmeeting powervm_driver_meeting  13:01
<openstack> Meeting started Tue Aug  1 13:01:56 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.  13:01
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  13:01
<openstack> The meeting name has been set to 'powervm_driver_meeting'  13:02
<mdrabe> o/  13:02
<esberglu> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda  13:02
<efried> \o  13:03
<thorst_afk> o/  13:03
<esberglu> #topic In Tree Driver  13:03
<esberglu> #link https://etherpad.openstack.org/p/powervm-in-tree-todos  13:03
<esberglu> efried: Any updates here?  13:04
<edmondsw> o/  13:04
<edmondsw> I think we're pretty much in a holding pattern here until pike goes out and we can start working on queens  13:04
<esberglu> Yeah that's what I thought as well  13:05
<esberglu> #topic Out of Tree Driver  13:05
<efried> sorry, yes, that's the case.  13:05
*** dwayne has quit IRC13:06
<edmondsw> what did we decide to do with mdrabe's UUID-instead-of-instance-for-better-performance change? any news there?  13:07
<mdrabe> We're gonna cherry-pick that internally for test  13:07
<thorst_afk> get burn in, then once we're clear that its solid, push through  13:08
<mdrabe> So I'm gonna finish out UT, do the merge, and we're currently getting the test cases reviewed  13:08
<edmondsw> by cherry-pick, you mean into pvcos so everyone has it, or just for a select tester to apply to their system?  13:08
<mdrabe> The former  13:08
<edmondsw> good  13:08
<edmondsw> we had a conversation this week about adding support for mover service partitions to NovaLink  13:09
<mdrabe> Yea that'd be good for queens  13:10
<edmondsw> PowerVC already has this for HMC, and we're going to start exposing it to customers via a new CLI command in 1.4.0, but we don't have this for NovaLink  13:10
<edmondsw> so we're investigating what it would take to support for NovaLink as well... yeah, queens  13:11
<edmondsw> anything else?  13:11
<mdrabe> On that...  13:12
<mdrabe> Could we still work it in regardless of platform support?  13:12
<edmondsw> not sure I follow...  13:12
<efried> "we" who, and what do you mean by "platform"?  13:12
<mdrabe> Well if NL doesn't have the support for specifying MSPs, can we still have all the plumbing in nova-powervm?  13:13
<thorst_afk> we need the plumbing in place before we do anything in nova-powervm.  We could start the patch, but we would never push it through until the pypowervm/novalink changes are through  13:14
<mdrabe> K that's what I was wondering, thanks  13:14
<esberglu> Anything else?  13:15
<edmondsw> I may have found someone to help with the iSCSI dev, but not sure there  13:17
<esberglu> #topic PCI Passthru  13:17
<edmondsw> that's it  13:18
<edmondsw> I don't have any news on PCI passthru... efried?  13:18
<efried> no  13:18
<edmondsw> next topic  13:18
*** cjvolzka has joined #openstack-powervm13:19
<edmondsw> esberglu?  13:19
<esberglu> #topic PowerVM CI  13:20
<esberglu> Just got some comments back on the devstack patches I submitted, need to address them  13:20
<edmondsw> I saw those  13:20
<edmondsw> do you know what he's talking about with meta?  13:20
<esberglu> Yeah I think there may be a way you can set tempest.conf options in the local.conf without using devstack options  13:21
<esberglu> Like put the actual tempest.conf lines in there instead of using devstack options mapped to tempest options  13:21
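For context on the local.conf approach described here: devstack's local.conf supports meta-sections that write settings straight into a service's config file, so tempest options can be set without a matching devstack variable. A minimal sketch, assuming the standard `[[test-config|$TEMPEST_CONFIG]]` meta-section; the option names below are illustrative placeholders, not the ones from the reviews under discussion.

```ini
# Hypothetical local.conf fragment: the meta-section writes these
# options directly into tempest.conf, with no devstack variable mapping.
# Section/option names are placeholders only.
[[test-config|$TEMPEST_CONFIG]]
[compute]
build_timeout = 600

[compute-feature-enabled]
resize = False
```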
<esberglu> Other than that I'm testing REST log copying on staging right now, should be able to have that on prod by the end of the day I think  13:22
<efried> Can you add me to those reviews?  I may not have any useful feedback, but want to at least glance at 'em.  13:23
<esberglu> efried: Yep  13:23
<edmondsw> efried they're all linked in 5598's commit message  13:23
<esberglu> The relevant rest logs are just the FFDC logs? Or are there other rest logs that we want  13:23
<efried> esberglu Certainly FFDC and Audit.  13:24
<efried> Not sure any of the others are relevant, lemme look real quick.  13:24
<efried> Yeah, that should be fine, assuming we're not turning on developer debug.  13:25
<mdrabe> Aren't there JNI logs? Would we want those?  13:25
<efried> Mm, don't know where those are offhand.  We seldom need them.  But probably not a bad idea.  13:26
<efried> Have to ask seroyer or nvcastet where they live.  13:26
<esberglu> There somewhere in /var/log/pvm/wlp I can find them  13:26
<esberglu> They're  13:27
<mdrabe> Actually one dir up  13:27
<esberglu> Yep  13:27
<efried> So esberglu This could wind up being a nontrivial amount of data.  Do we have the space?  13:28
<esberglu> efried: Let me look at the size of those files when zipped quick  13:29
<efried> talking maybe tens of MB per run.  13:29
<esberglu> I'll take a look and do some math after the meeting  13:29
<esberglu> If not we can add space or potentially change how long they stick around  13:30
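A quick way to do the size math mentioned here, assuming the REST/FFDC and JNI logs live under /var/log/pvm[/wlp] as discussed; the exact filenames and the log-server mount point are guesses.

```bash
# Rough size check for the REST logs discussed above; paths are
# assumptions based on the conversation, adjust to the real layout.
sudo du -sh /var/log/pvm /var/log/pvm/wlp
sudo du -ch /var/log/pvm/*.log /var/log/pvm/*.gz 2>/dev/null | tail -1
# ...and how much room the destination has (log-server path is a guess):
df -h /srv/static/logs
```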
<efried> Oh, hold on  13:30
<efried> We're talking about scping the REST (and JNI) logs from a neo that's serving several CI nodes across multiple runs?  13:31
<esberglu> efried: Yeah  13:31
<efried> Yeeeaaahhh, so that's not gonna work.  13:31
<efried> That's gonna be more than tens of megs.  13:31
<efried> And we'll be copying the same data over and over again.  13:32
<efried> I think we need to be a bit more clever.  13:32
<edmondsw> yeah...  13:32
<efried> We should make a dir per neo on the log server.  13:32
<edmondsw> what were we planning to use as the trigger for this?  13:32
<efried> And copy each neo's logs into it.  13:33
<edmondsw> and only if we see that the current logs there are not recent  13:33
<efried> And refresh (total replace) those periodically (period tbd)  13:33
<efried> And then link to the right neo's dir from the CI results of a given run.  13:33
<esberglu> efried: Should be able to just add a cron to each neo to scrub and copy  13:33
<efried> edmondsw Well, they'll always be out of date.  13:33
<esberglu> periodically  13:33
<edmondsw> efried what do you mean, always out of date?  13:34
<efried> Unless we have a period of time where zero runs are happening against that neo.  13:34
<efried> esberglu That's pretty rare, nah?  13:34
<esberglu> Eh it happens decently often  13:35
<efried> In any case, perhaps we look into rsync.  13:35
<esberglu> 14 neos, we are often running fewer runs than that  13:35
<esberglu> K. I will work on that today  13:36
<efried> Honestly don't know how it works trying to copy out a file while it's being written to.  13:36
<efried> but I'm sure people smarter than us figured that out decades ago.  13:36
<efried> ...which is why we should try to use something like rsync rather than writing the logic ourselves.  13:37
<efried> And a trigger to make sure we're synced should be a failing run.  13:38
<efried> With appropriate queueing in case a second run fails while we're still copying the logs from the first failing run.  13:38
<efried> And all that.  13:38
<esberglu> Just trying to figure out how we will handle the scrubbing  13:39
<efried> I think aging, not scrubbing.  13:39
<efried> The FFDC logs take care of their own rotation  13:40
<efried> How old do we let our openstack logs get before we scrub 'em?  13:40
<esberglu> Not sure off the top of my head  13:41
<esberglu> Looking  13:41
<esberglu> Anyway we can sort out the details post meeting  13:42
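On the aging question raised above (how long copied logs live on the log server before being removed), the usual approach is a periodic find-and-delete. A minimal sketch, with the retention window and log-server path as placeholders since neither was settled in the meeting:

```bash
#!/bin/bash
# Hypothetical aging job for REST logs copied to the log server.
# LOG_ROOT and RETENTION_DAYS are placeholders, not agreed values.
LOG_ROOT=/srv/static/logs/rest
RETENTION_DAYS=30

# Remove anything older than the retention window, then prune any
# empty per-neo directories left behind.
find "$LOG_ROOT" -type f -mtime +"$RETENTION_DAYS" -delete
find "$LOG_ROOT" -mindepth 1 -type d -empty -delete
```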
<edmondsw> anything else going on with the CI?  13:43
<esberglu> Haven't looked at failures today, but just the timeout thing  13:43
<esberglu> Need to touch base to get someone looking at the rest logs  13:44
<edmondsw> we still seeing a lot of timeouts?  13:44
<esberglu> Excuse me I was talking about the Internal Server Error 500 for rest logs  13:44
<edmondsw> I thought with the marker LUs and all fixed that would go back to an occasional thing  13:44
<esberglu> Yeah still seeing timeouts as well  13:44
<esberglu> edmondsw: The marker LU thing was causing the 3-4+ hour runs  13:45
<esberglu> These are timeouts on a specific subset of tests that hit intermittently  13:45
<edmondsw> k  13:46
<esberglu> #topic Driver Testing  13:47
<edmondsw> jay1_ anything here?  13:47
<jay1_> I haven't got any update from Ravi yet, seems like he still needs some more time to come back  13:48
<jay1_> The present issue is with the Iscsi volume attach.  13:49
<edmondsw> jay1_ is the issues etherpad up to date?  13:53
<edmondsw> https://etherpad.openstack.org/p/powervm-driver-test-status  13:53
<edmondsw> not a lot of information there  13:53
<jay1_> Yeah.. same issue with the volume attach, will try to add the log error message as well  13:55
<edmondsw> tx  13:55
<edmondsw> esberglu that's probably all there for today  13:55
<edmondsw> next topic  13:55
<esberglu> #topic Open Discussion  13:55
<esberglu> Any last words?  13:56
<esberglu> #endmeeting  13:57
<openstack> Meeting ended Tue Aug  1 13:57:10 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  13:57
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-08-01-13.01.html  13:57
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-08-01-13.01.txt  13:57
<openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-08-01-13.01.log.html  13:57
<edmondsw> I did have one thing I should have brought up in the OOT driver time...  13:58
<edmondsw> this nova-powervm tox issue that pushkaraj found  13:58
<edmondsw> I was able to reproduce in a fresh environment  13:58
<edmondsw> Maybe there was a recent nova change that started this... I need to look  13:59
<efried> edmondsw pvc or community-only?  13:59
<edmondsw> I reproduced with community-only  14:00
<edmondsw> pushkaraj found with pvcos... so it's both  14:00
<efried> And is the req in nova's [test-]requirements.txt?  14:00
<edmondsw> yes  14:00
<efried> that's weird.  That should not happen.  14:00
<efried> should be chaining the reqs properly.  14:00
<edmondsw> I suspect the way we list nova as a dependency doesn't also pull in nova's dependencies  14:00
<edmondsw> at least test dependencies  14:01
<efried> ah, yup.  14:02
<efried> we do it in tox.ini, not in requirements.  14:02
<edmondsw> right  14:02
<efried> edmondsw Does nova list it in requirements.txt or test-requirements.txt?  14:02
<efried> This is wsgi-intercept?  14:03
<efried> That's in test-requirements.  14:03
<efried> So yeah, we may need to do an extra -r in there.  14:04
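The "extra -r" being discussed would look roughly like the tox.ini fragment below, pulling nova's test-requirements.txt in addition to nova itself so imports like wsgi_intercept resolve in the py27 environment. This is a sketch of the shape of the fix, not the literal content of the change proposed later in this log (review 489645); the URLs and surrounding lines are illustrative.

```ini
# Illustrative nova-powervm tox.ini deps with the extra -r discussed
# here: nova's test-requirements are installed alongside nova itself.
[testenv]
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
       -egit+https://git.openstack.org/openstack/nova#egg=nova
       -rhttps://raw.githubusercontent.com/openstack/nova/master/test-requirements.txt
```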
<edmondsw> efried I really don't know where to begin here... if you do, can you take this?  14:04
<edmondsw> I did confirm that wsgi-intercept is not new for nova  14:05
<edmondsw> so I'm not sure why we haven't hit this before  14:05
<edmondsw> maybe they started using it in a new place  14:05
<efried> edmondsw Is there an email somewhere in my inbox that describes how to reproduce this locally?  14:07
<edmondsw> ah, yep, that's exactly it... https://github.com/openstack/nova/commit/fdf27abf7db233ca51f12e2926d78c272b54935b  14:07
<edmondsw> efried yes  14:07
<edmondsw> it's pretty simple... tox --recreate -epy27,pep8  14:07
<edmondsw> :)  14:07
<efried> Okay, I'll see if I can figure it out.  I'm not a tox expert.  14:08
<edmondsw> yeah, me either... not by a long shot  14:09
<edmondsw> tx  14:09
<edmondsw> efried just resent the email with the latest info I have  14:09
<efried> ack  14:11
*** jay1_ has quit IRC14:25
*** dwayne has joined #openstack-powervm14:54
esbergluefried: Sounds like the file may show up as corrupt if you run rsync while it's being written15:39
efriedboo15:42
efriededmondsw esberglu How has this wsgi-intercept thing not shown up in nova-powervm jenkins results?15:42
efriededmondsw And do we have a LP bug for it?15:43
esbergluefried: It is hitting the jenkins results now15:43
edmondswyeah, I was wondering that too... maybe we have nova installed in the same env, complete with test-requirements?15:43
<esberglu> https://review.openstack.org/#/c/328315/  15:43
<esberglu> http://logs.openstack.org/15/328315/56/check/gate-nova-powervm-python27-ubuntu-xenial/5c0b870/console.html  15:43
edmondswefried I don't think pushkaraj opened a LP bug15:43
edmondswefried yeah, I assume we have nova and nova-powervm installed together, including nova's test-requirements, hence no issue with jenkins15:44
efriededmondsw In the CI that'd be the case, but the regular jenkins part of a nova-powervm change set should be the same as a local tox -r run, more or less.15:45
efriedAnd as esberglu notes, that appears to be the case.15:45
edmondswoh, I see what you mean15:45
efriedCan someone open a LP bug please?15:45
efriededmondsw ?15:45
edmondswsure, I'll open15:45
efriedI have the fix.15:45
esbergluefried: Want to brainstorm other options for REST log copying since it doesn't sound like rsync is going to work?15:48
efriedesberglu Create a temp dir, do a local copy, scp/rsync the copy, blow away the temp dir?15:49
<edmondsw> efried https://bugs.launchpad.net/nova-powervm/+bug/1707951  15:49
<openstack> Launchpad bug 1707951 in nova-powervm "nova-powervm tox failing with ImportError for wsgi_intercept" [Undecided,New]  15:49
efriededmondsw Thanks.15:50
edmondswnp15:50
esbergluefried: local cp wouldn't have any issues with the file still being written right?15:50
efriedesberglu I believe that to be true.15:50
efriedWe *might* get into trouble if logrotate hits while we're doing the local copy.15:51
efriedBut probably not.15:51
esbergluNeed to wipe ips as part of it as well15:52
*** dwayne has quit IRC15:53
efriedesberglu Oh, that's gross.  Easy enough to do on the .logs, but the .log.gzs will have to be unzipped and rezipped.15:53
esbergluYeah. Annoying but easy15:53
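Putting together the temp-dir recipe suggested above (local copy first, then push, then clean up) with the IP scrubbing and gz handling, a sketch of what the per-neo sync step could look like. The log paths, the log-server destination, and the scrub pattern are assumptions for illustration; the real job belongs in the powervm-ci scripts.

```bash
#!/bin/bash
# Hypothetical per-neo REST log sync: snapshot locally so the remote
# copy never reads a file mid-write, scrub IPs (including inside .gz
# files), then push to a per-neo directory on the log server.
# Paths and the destination host are assumptions.
set -e
SRC=/var/log/pvm
TMP=$(mktemp -d)
DEST="logserver:/srv/static/logs/rest/$(hostname -s)/"

cp "$SRC"/*.log "$SRC"/*.gz "$TMP"/ 2>/dev/null || true

# Replace anything that looks like an IPv4 address.
SCRUB='s/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/x.x.x.x/g'
for f in "$TMP"/*.gz; do
    [ -e "$f" ] || continue
    gunzip "$f"
    sed -i "$SCRUB" "${f%.gz}"
    gzip "${f%.gz}"
done
for f in "$TMP"/*.log; do
    [ -e "$f" ] || continue
    sed -i "$SCRUB" "$f"
done

rsync -a "$TMP"/ "$DEST"
rm -rf "$TMP"
```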
<openstackgerrit> Eric Fried proposed openstack/nova-powervm master: Install nova test requirements for tox  https://review.openstack.org/489645  15:55
efriededmondsw esberglu ^^15:55
mdrabeefried: Should the topic point to the bug for that review?15:56
efriededmondsw Is Pushkaraj on openstack gerrit?15:57
efriedmdrabe Can do, sec.15:57
edmondswefried not sure I understand the question... he saw with pvcos, which is our own internal gerrit15:57
efriededmondsw I mean to add him to the review.15:58
edmondswoh, I don't know... good question15:58
edmondswefried githubusercontent.com? ?15:58
edmondswoh, nm... that's what I get when I go to the raw file as well16:00
efriededmondsw Swhat I got when I punched the 'raw' button from https://github.com/openstack/nova/blob/master/test-requirements.txt -- yeah.16:00
edmondsw+216:00
efriedthx16:00
edmondswty16:04
*** dwayne has joined #openstack-powervm16:14
mdrabeedmondsw: efried: stable nova-powervm pulls nova master?16:17
efriedmdrabe Shouldn't.  Find a bug?16:17
mdrabeI'm just wondering about that hard master URL16:18
efriedLook at tox.ini for e.g. stable/ocata.16:18
efriedWe've been manually editing those links when we cut a stable branch.16:18
efriedWhich is a pain16:18
efriedBut is one of the reasons to get integrated with the official releases process - they do that stuff for ya.16:19
efriedThough I'm not sure if they would fix the nova dep link too.16:19
efriedAnyway, yes, we frequently forget to update those things.16:19
mdrabeOk that's my concern16:19
<edmondsw> https://github.com/openstack/nova-powervm/blob/stable/ocata/tox.ini#L14  16:20
edmondswnot much we can do about it... if we forget when pike moves to stable, tox will break, and we'll notice and fix it... but we should see it when we're cutting over16:20
efriededmondsw tox won't break, though.16:20
efrieduntil it does.16:20
mdrabeCan tox.ini run a script?16:20
efriedmdrabe Dunno.  That's beyond my ken.16:21
edmondswefried i meant if we started pulling nova from stable/ocata but forgot to make the same change for test-req16:21
edmondswI assumed mdrabe was asking in referene to this fix16:22
efriedOh - we don't need to backport this fix, cause the thing that surfaced it was only in pike.16:22
efriedThe test req is in N+, though.16:22
efriedI mean, technically we could backport the fix, but I say if it ain't broke...16:22
mdrabeI'm thinking in the future, if we forget to update for queens16:22
efriedWell, this one won't break us.  But something else might.16:23
mdrabeRight some version incompatibility16:23
efriedAnyway, yeah, it's a hole.  We know about it, but we haven't been motivated to fix it f'real yet.  We just patch it up once per release whenever we think about it, or if it actually breaks something.16:23
efriedIf you have the time and inclination to figure out the mysterious swirling vortext of tox et al, feel free to make it right.16:24
efriedBut if you have that kind of time, I've got better things for you to do.16:24
mdrabeYea google isn't revealing much on getting a bash context in tox.ini16:25
efriedmdrabe There's 'commands', but that runs every time, whereas deps only get installed first time or with -r.16:26
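For reference, the hard-coded nova links discussed above are the kind that have to be flipped by hand when a stable branch is cut. Roughly the following shape, shown only to illustrate the master vs. stable difference, not the literal tox.ini contents:

```ini
# Illustration of the per-branch edit discussed above (not the real
# file): the nova links in tox.ini point at master until someone
# remembers to flip them on the stable branch.
#
# master:
#   -egit+https://git.openstack.org/openstack/nova#egg=nova
#   -rhttps://raw.githubusercontent.com/openstack/nova/master/test-requirements.txt
#
# stable/ocata:
#   -egit+https://git.openstack.org/openstack/nova@stable/ocata#egg=nova
#   -rhttps://raw.githubusercontent.com/openstack/nova/stable/ocata/test-requirements.txt
```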
<openstackgerrit> Eric Fried proposed openstack/nova-powervm master: Adopt new pypowervm power_off APIs  https://review.openstack.org/476274  17:09
edmondswthorst_afk if you'll +2 https://review.openstack.org/#/c/489645/ we can get nova-powervm passing jenkins again17:47
<openstackgerrit> Eric Fried proposed openstack/nova-powervm master: Adopt new pypowervm power_off APIs  https://review.openstack.org/476274  17:52
thorst_afkedmondsw: looking now18:01
thorst_afklooks like a pretty complex change18:01
edmondswoh?18:01
edmondswone line, right?18:02
thorst_afk:-)18:02
thorst_afksarcasm is lost in IRC18:02
edmondsw:)18:02
edmondswthought maybe you were looking at the wrong thing ;)18:02
efriededmondsw I rebased https://review.openstack.org/476274 on it to (further) prove that it works.18:04
edmondsw+218:04
thorst_afkedmondsw: I +2'd the earlier one18:20
edmondswthorst_afk tx18:20
<openstackgerrit> Merged openstack/nova-powervm master: Install nova test requirements for tox  https://review.openstack.org/489645  18:28
*** apearson has quit IRC19:00
*** apearson has joined #openstack-powervm19:03
*** jay1_ has joined #openstack-powervm19:20
edmondswesberglu can you check why the CI failed for https://review.openstack.org/#/c/476274/ ?19:45
edmondswtimeouts... just recheck?19:46
esbergluedmondsw: It also failed a few with this19:47
esbergluFailed to power off instance: 'module' object has no attribute 'power_off_progressive'19:47
edmondswesberglu ok yeah, that'd be a problem19:48
edmondswefried ^19:48
*** apearson has quit IRC20:12
efriedjeez20:27
efriedlooking.20:27
efriedesberglu Can you give me a pointer to that?20:27
<esberglu> http://184.172.12.213/74/476274/7/check/nova-powervm-out-of-tree-pvm/e829e6b/powervm_os_ci.html  20:28
efriedIt worries me that we're getting all these timeouts, need to dig into it a bit more, makes me think we may not be doing forced power-off when we oughtta.20:28
esbergluefried: See the ServerDiskConfigTestJSON tests there20:28
esbergluI didn't dig into any actual logs20:28
esbergluefried: edmondsw: thorst_afk: See 5632 for the first wave of rest log copying20:29
esbergluIt will keep roughly 24 hours of rest logs and rsync hourly20:29
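The hourly schedule described here would just be a cron entry on each neo invoking a sync script like the sketch earlier in this log; the script name and log path below are placeholders.

```bash
# Hypothetical crontab entry on each neo: push scrubbed REST logs to
# the log server once an hour (script name/path is a placeholder).
0 * * * * /usr/local/bin/sync_rest_logs.sh >> /var/log/rest-log-sync.log 2>&1
```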
*** apearson has joined #openstack-powervm20:30
efriededmondsw Do we log a pip freeze anywhere in our CI runs?20:30
edmondswesberglu ^20:30
efriedsorry, yeah20:31
efriedI've already complained about you two having nicks of the same length starting with e, right?20:31
edmondswbe careful, yours starts with e as well ;)20:31
efriedI mean, at least I have the decency to have two fewer chars in mine.20:31
*** svenkat has quit IRC20:31
edmondswso nice of you20:31
esbergluefried: Not seeing it anywhere20:32
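Capturing a pip freeze per run, as asked about above, would be a one-liner in the CI's log-collection step; the artifact directory below is a placeholder for wherever the run's logs are gathered.

```bash
# Hypothetical addition to the CI log-collection step so every run
# records exactly which package versions (e.g. pypowervm) it used.
pip freeze > "$LOG_DIR/pip-freeze.txt" 2>&1   # $LOG_DIR is a placeholder
```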
edmondswesberglu I guess you're wiping ips and domain names because this is stored in public? Why not store it somewhere private and not do that?20:33
edmondswnobody outside IBM is going to be interested in that data, are they?20:33
esbergluI thought it would be nice to have it all in one place20:33
edmondswefried, what do you think? Is wiping ips and such going to make it harder to debug things?20:34
esbergluBut yeah no one outside will likely be interested20:34
efriednono, we need to have the stuff public20:34
efriedwhy, that's how mriedem found the wait-for-compute bug last week.20:35
efriedAlso for accountability.20:35
efriedNow, do we really need to bother wiping IPs?20:35
efriedWe should ask someone who knows things about security.20:35
edmondswI'm not saying our logs shouldn't be public... I'm asking why the novalink REST logs need to be public20:35
edmondswspecifically20:35
efriedoh20:35
edmondswthe new stuff we're going to start grabbing that will be greek to anyone outside IBM20:35
efriedwell, yeah, they probably don't.20:35
edmondswas for security... yeah, if you're going to make it public you need to wipe IPs20:36
efriedWe're going to need *some* way to walk from a CI result to the appropriate REST logs.20:36
edmondswefried timestamp, no?20:36
edmondswand short hostname20:36
edmondswI think we leave that in the logs, right?20:36
efriededmondsw Yeah, it's the hostname.20:36
efriedSo far we've had trouble determining getting neo's hostname into the CI logs.  esberglu Did we solve that yet?20:37
esbergluYeah a while ago20:37
esbergluIts at the top of the console log20:37
esbergluefried: Do you think it's worth rsyncing on failures? Or just keeping the cron hourly?20:38
esbergluNot very often would we be ready to look at the rest logs within an hour of a specific failure20:38
efriedBe pretty frustrating not to find it.  But we can wait an hour.20:39
efriedSo hang back a sec.20:39
efriedWhy are we doing this?20:39
efriedIf we don't need to make the data public, and we know which neo the logs are on, and we know the time stamp of the failure (from the openstack logs), then we can get to the logs we need regardless.20:40
edmondswI thought we were grabbing things before they got overwritten / wrapped20:40
edmondswbut that was just my assumption, could be wrong20:41
efriedesberglu How long before that happens?20:41
efriedGimme an example of a neo that's been running a while.20:41
esbergluSo I was copying the last 20 FFDC logs which is about 24 hours20:41
esbergluI thought we were trying to get the rest logs set up somewhere so it was easy to get from a failed run to the rest logs20:42
esbergluSo we have the last 24 hours or so of rest logs syncing to the logserver20:42
efriedesberglu If we're not making the logs public, then we can't link 'em from CI results, so we're not getting any ease-of-get-to benefit.20:42
efriedIs neo7 a CI host?20:42
esbergluefried: Yeah that's why I put it on the logserver20:42
esbergluYes it is20:43
efriedThat guy has FFDC logs back to 7/27 a.m.20:43
efriedHow long do we keep CI results?20:43
esbergluefried: Quite a while20:44
esberglusec20:44
esbergluI thought the point of all of this was to make it easier to go from a failed run to the rest logs20:44
efriedesberglu If we're not making the logs public, then we can't link 'em from CI results, so we're not getting any ease-of-get-to benefit.20:44
esbergluRather than having to actually go to the neo, you can just click a link in the run logs that will take you to the rest logs20:44
esbergluYeah I'm saying why not make them public20:44
<esberglu> It doesn't hurt anything  20:44
efriedOkay, right.  Cause then you gotta scrub IPs, and that's a pain?20:45
esbergluefried: Not really, I just copied the log scrubbing that we use for everything else20:45
efriedAnd that works for .gzs as well?20:45
<esberglu> Yep. I unzip those, scrub, then rezip  15:45
efriedight.20:45
efriedHow big is 24h worth of logs?20:46
efriedLike 220MB?20:47
esbergluNah the neo I was using was 124 MB with everything zipped20:47
edmondswdo we need an ease-of-get-to benefit here? Is it hard to just go to the neo yourself?20:47
efriedSo then esberglu what's your strategy for aging these things?  Cause any time you recopy, you're going to be carving the window down to 24h.  Which is worse than what we've got on the neo.20:48
efriedI'm leaning towards edmondsw's view here.  We're doing a lot of stuff (writing code, consuming disk, chewing up bandwidth) to implement something that's severely limited and of negligible benefit.20:50
esbergluefried: Yeah. I thought that you guys just wanted them copied to the logserver for easy access20:51
efriedesberglu Plonk a paragraph into the README (do we have a README?) that describes how to find the REST logs (i.e. which log to find the neo name in, and what to search for in there).20:51
efriedAnd I think that solves it.20:52
edmondswsounds good to me20:52
efriedLooking back at what we were thinking when we put this on the to-do list, having not dug into it, we were thinking we could somehow have per-run REST logs.20:53
efriedBut having grabbed that tiger by the tail and ridden it, knowing what we know now, it doesn't seem like there's a benefit.20:53
efriedNow20:53
efriedIf we could filter the logs to get per-run entries only20:53
efriedThen I would be totally on board.20:53
efriedThat might be *theoretically* doable.20:54
efriedBut probably very tricky.20:54
*** apearson has quit IRC20:55
efriedCause we know the IP/hostname of the CI node, and that info should be in *some* of the REST requests.  Probably every Audit.log entry, in fact.  From there to transaction IDs; and from there to FFDC log entries.20:55
efriedTricky to pull out multiline entries, though.20:55
efriedBut we could also just base it on the timestamps of the lifespan of the CI run.  We'd get entries for all CI nodes running during that time, but that's okay.20:56
esbergluefried: Yeah but then we get into the problem of copying a ton of duplicate data20:57
efriedesberglu But a) we only do it on failures, and b) we're not duplicating *everything* every time - just things in that time window when runs are happening in parallel.20:57
efriedesberglu Anyway, I think this is a low-priority wishlist item.20:58
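If the per-run filtering idea is ever picked up, the timestamp-window variant described above could start as a simple range filter over each REST log. A sketch, assuming entries begin with a sortable ISO-style timestamp; continuation lines of multi-line entries would still need smarter handling, as noted in the discussion.

```bash
#!/bin/bash
# Hypothetical per-run REST log filter: keep lines whose leading
# timestamp falls inside one CI run's window. Assumes entries start
# with a sortable timestamp such as 2017-08-01T18:03:47; this naive
# version drops continuation lines of multi-line entries.
START="2017-08-01T17:55:00"
END="2017-08-01T18:45:00"

awk -v start="$START" -v end="$END" \
    '$1 >= start && $1 <= end' /var/log/pvm/*.log > /tmp/rest-logs-for-run.txt
```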
<efried> Right now we need to figure out why tf http://184.172.12.213/74/476274/7/check/nova-powervm-out-of-tree-pvm/e829e6b/logs/ isn't picking up pypowervm 1.1.6  20:58
<efried> esberglu This is a little weird: http://184.172.12.213/74/476274/7/check/nova-powervm-out-of-tree-pvm/e829e6b/console.html#_2017-08-01_18_03_47_656  20:59
efriedThis happens several times during stacking.21:00
efriedWhich makes a guy think we've got a git repo sitting around somewhere that might be (sometimes??) hijacking the 1.1.6.21:00
efriedesberglu Hm, so the CI node base image probably has a pypowervm sitting on it.21:01
esbergluefried: Yep thats where the 1.1.4 is from21:01
efriedesberglu But it's refusing to uninstall it.21:02
esbergluI thought stack would install over it though21:02
efriedThat would also answer why we're getting lots of timeouts - because we're running into the power_off bug from 1.1.421:02
efriedYeah, so how do you explain "found existing installation"?21:02
efriedesberglu Rebuilding the base image is a thing that takes a long time and is a pain in the ass, right?21:04
efriedesberglu Can you spin me up a pseudo-CI node, pre-stack?21:04
efriedI wanna see where this 1.1.4 is coming from.21:05
esbergluefried: Yeah. Would be easier to modify prep_devstack script to install properly21:05
esbergluWe install it so that the prepare_node_powervm.sh script works21:05
efriedesberglu Well, remember, we don't do that for a reason.21:05
efriedesberglu Remind me, is there a reason we need pypowervm before stack?21:05
esbergluYeah we need it for the image template and ready node scripts to work21:05
efriedoh, certainly now, since we're doing the remote hack before stack.21:05
efriedSo how are we getting it?  Based on the requirements.txt in the nova clone we're testing?21:06
esbergluefried: It's a variable in neo-os-ci21:06
esbergluFor the undercloud21:06
efrieduh21:06
efriedboo21:06
esbergluFor the ready nodes scripts I mean21:06
esbergluWe can just explicitly install the pypowervm version found in the u-c in the prep_devstack script21:08
esbergluefried: Just add an else block there that installs pypowervm_version21:11
<esberglu> https://github.com/powervm/powervm-ci/blob/master/devstack/prep_devstack.sh#L157-L182  21:12
efriedesberglu Dig.  Though I would rather get it from the requirements.txt of the repo of the change we're testing.21:13
efriedWe should be able to figure that out, nah?21:13
<esberglu> https://github.com/powervm/powervm-ci/blob/master/devstack/prep_devstack.sh#L147  21:14
esbergluYeah just need to change that21:14
efriedReason I say that is cause that allows us to "preview" a g-r bump by proposing a patch that bumps the requirements.txt version.21:14
esbergluWe can already preview a patch with the patching logic21:15
esbergluFor any openstack project21:15
esbergluefried: Oh I see what you mean21:21
esbergluWe use the prep_devstack script for projects that don't have pypowervm as a req21:22
esbergluneutron, ceilometer silent runs21:22
esbergluBut I could add logic to check the project21:22
*** jay1_ has quit IRC21:29
*** cjvolzka has quit IRC21:29
efriedesberglu Yeah, if the project in question doesn't have a pypowervm req, then get it from global.21:30
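The approach agreed on here, install pypowervm from the requirement declared by the project under test and fall back to the CI's pinned version for projects (neutron, ceilometer silent runs) that don't list it, could look roughly like this in prep_devstack.sh. The variable names, requirements path, and fallback format are guesses, not the script's real contents.

```bash
# Hypothetical sketch for prep_devstack.sh: take pypowervm from the
# project under test when it declares one, otherwise fall back to the
# CI's pinned version. PROJECT_DIR and PYPOWERVM_VERSION are assumed
# variable names, and the '==' fallback form is a guess.
REQ=$(grep -i '^pypowervm' "$PROJECT_DIR/requirements.txt" 2>/dev/null | cut -d'#' -f1 | tr -d ' ')
if [ -n "$REQ" ]; then
    sudo pip install "$REQ"
else
    sudo pip install "pypowervm==${PYPOWERVM_VERSION}"
fi
```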
*** cjvolzka has joined #openstack-powervm21:30
*** cjvolzka has quit IRC21:31
esbergluefried: Hmm same thing happened when I tried installing 1.1.6 pre-stack21:32
esbergluAnd trying to uninstall pypowervm prior to that gives the same message stack is seeing21:32
esbergluCan't uninstall 'pypowervm'. No files were found to uninstall.21:33
efriedesberglu Yeah, so where is that 1.1.4 coming from?21:33
efriedit's like the pip db is corrupted or something.21:33
<esberglu> /opt/stack/pypowervm  21:33
efriedoh, so it's not deleting files cause that guy was installed with -e.21:33
efriedWhich ought to be just fine.21:34
efriedI mean, I don't know that this is really the problem.21:34
efriedBut we're certainly getting a pypowervm other than 1.1.6.21:34
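For pinning down which pypowervm a CI node actually resolves, given the editable install sitting in /opt/stack/pypowervm, the usual checks are:

```bash
# Quick checks for which pypowervm the node really has; an editable
# (-e) install such as /opt/stack/pypowervm shows up as a git URL in
# the freeze output rather than a plain version pin.
pip show -f pypowervm
pip freeze | grep -i pypowervm
python -c 'import pypowervm; print(pypowervm.__file__)'
```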
*** smatzek has quit IRC21:42
*** apearson has joined #openstack-powervm21:47
*** esberglu has quit IRC21:58
*** smatzek has joined #openstack-powervm22:01
*** smatzek_ has joined #openstack-powervm22:02
*** apearson has quit IRC22:05
*** thorst_afk has quit IRC22:05
*** apearson has joined #openstack-powervm22:06
*** smatzek has quit IRC22:06
*** apearson has quit IRC22:06
*** apearson has joined #openstack-powervm22:07
*** apearson has quit IRC22:08
*** kylek3h has quit IRC22:12
*** esberglu has joined #openstack-powervm22:12
*** kylek3h has joined #openstack-powervm22:13
*** kylek3h has quit IRC22:13
*** esberglu has quit IRC22:17
*** thorst_afk has joined #openstack-powervm22:19
*** smatzek_ has quit IRC22:20
*** esberglu has joined #openstack-powervm22:23
*** thorst_afk has quit IRC22:24
*** edmondsw has quit IRC22:36
*** smatzek_ has joined #openstack-powervm22:36
*** svenkat has joined #openstack-powervm22:40
*** smatzek_ has quit IRC22:49
*** svenkat has quit IRC22:53
*** esberglu has quit IRC23:06
*** esberglu has joined #openstack-powervm23:12
*** cjvolzka has joined #openstack-powervm23:38
-openstackstatus- NOTICE: osic nodes have been removed from nodepool due to a problem with the mirror host beginning around 22:20 UTC. please recheck any jobs with failures installing packages.23:47

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!