Tuesday, 2017-05-16

00:10 *** thorst_afk has joined #openstack-powervm
00:17 *** chhavi has joined #openstack-powervm
00:31 *** mdrabe has quit IRC
00:33 *** svenkat has joined #openstack-powervm
00:47 *** chas has joined #openstack-powervm
00:53 *** chas has quit IRC
01:09 *** chas has joined #openstack-powervm
01:13 *** chas has quit IRC
01:19 *** thorst_afk has quit IRC
01:28 *** thorst_afk has joined #openstack-powervm
01:30 *** chas has joined #openstack-powervm
01:34 *** chas has quit IRC
01:35 *** thorst_afk has quit IRC
01:38 *** esberglu has quit IRC
01:50 *** chas has joined #openstack-powervm
01:55 *** lan has joined #openstack-powervm
01:55 *** chas has quit IRC
02:01 *** chhavi has quit IRC
02:03 *** thorst_afk has joined #openstack-powervm
02:04 *** thorst_afk has quit IRC
02:11 *** chas has joined #openstack-powervm
02:16 *** chas has quit IRC
02:31 *** esberglu has joined #openstack-powervm
02:32 *** chas has joined #openstack-powervm
02:33 *** lan1 has joined #openstack-powervm
02:35 *** esberglu has quit IRC
02:36 *** lan has quit IRC
02:37 *** chas has quit IRC
02:52 *** shyama has quit IRC
03:04 *** thorst_afk has joined #openstack-powervm
03:09 *** svenkat has quit IRC
03:15 <openstackgerrit> Gautam Prasad proposed openstack/nova-powervm master: Save management switch mac address on deploy and use on rebuild  https://review.openstack.org/464554
03:25 *** thorst_afk has quit IRC
04:15 *** chhavi has joined #openstack-powervm
04:33 *** shyama has joined #openstack-powervm
04:37 *** chas has joined #openstack-powervm
04:42 *** chas has quit IRC
04:58 *** chas has joined #openstack-powervm
04:58 *** lan1 is now known as lan
05:03 *** chas has quit IRC
05:15 *** esberglu has joined #openstack-powervm
05:19 *** esberglu has quit IRC
05:22 *** thorst_afk has joined #openstack-powervm
05:26 *** thorst_afk has quit IRC
06:53 *** edmondsw has joined #openstack-powervm
06:53 <openstackgerrit> Gautam Prasad proposed openstack/nova-powervm master: Save management switch mac address on deploy and use on rebuild  https://review.openstack.org/464554
06:55 *** chas has joined #openstack-powervm
06:57 *** edmondsw has quit IRC
06:59 *** k0da has joined #openstack-powervm
07:44 *** k0da has quit IRC
08:24 *** thorst_afk has joined #openstack-powervm
08:29 *** thorst_afk has quit IRC
08:57 *** YuYangWang has joined #openstack-powervm
09:11 <lan> Can anyone help with this problem: https://bugs.launchpad.net/nova-powervm/+bug/1691055  thanks!
09:11 <openstack> Launchpad bug 1691055 in nova-powervm "400 error when launching an instance" [Undecided,New]
09:11 <lan>
09:11 <lan> chhavi, are you around? Could you help me with this problem: https://bugs.launchpad.net/nova-powervm/+bug/1691055  thanks!
09:25 *** thorst_afk has joined #openstack-powervm
09:37 *** k0da has joined #openstack-powervm
09:44 *** thorst_afk has quit IRC
09:55 *** a1fisher has quit IRC
10:41 *** thorst_afk has joined #openstack-powervm
10:45 *** thorst_afk has quit IRC
11:12 *** thorst_afk has joined #openstack-powervm
11:20 *** smatzek has joined #openstack-powervm
11:35 <openstackgerrit> Arun Mani proposed openstack/nova-powervm master: Remove block_migration attribute from migration rollback call  https://review.openstack.org/465045
11:51 *** svenkat has joined #openstack-powervm
12:05 <thorst_afk> chhavi: I posted a comment on https://review.openstack.org/#/c/463720/4
12:05 <thorst_afk> let me know if I'm remembering that one wrong
12:05 *** thorst_afk is now known as thorst
12:20 <chhavi> thorst: yes, we will no longer need the vios_uuid changes; they will be handled in pypowervm
12:21 <thorst> cool
12:21 <thorst> so we'll need a new rev of pypowervm for the nova-powervm changes...but that should be OK
12:23 *** jpasqualetto has quit IRC
12:35 *** edmondsw has joined #openstack-powervm
12:37 <lan> thorst, thanks for your answer to my problem. I am using HMC, not NovaLink, since I am trying to manage a POWER7 or POWER6 system.
12:37 *** jpasqualetto has joined #openstack-powervm
12:37 <lan> thorst, is there a way to just call the HMC REST API without using NovaLink?
12:42 <lan> thorst, or other ways to manage older POWER systems?
12:42 *** kylek3h has joined #openstack-powervm
12:46 <thorst> lan: sorry for the delay.  The PowerVM drivers are only compatible with POWER8 (currently - later revs will of course be supported) and PowerVM NovaLink
12:46 <thorst> if P7 and P6 are requirements, you should look at PowerVC
12:47 <thorst> that supports HMC management and NovaLink together.
12:50 *** esberglu has joined #openstack-powervm
12:52 <lan> thorst, OK, can I just use IVM commands through ssh? I can find that kind of code in the Havana release of the nova code base.
12:53 <thorst> that driver was dropped back then and severely lacked features/support.  I would strongly advise against using it.
12:53 <lan> since some Power customers may not buy a PowerVC license, right?
12:55 <lan> what are the key features that driver doesn't support?
12:57 <thorst> The IVM driver?
12:57 <thorst> or the PowerVM NovaLink driver that is supported?
12:58 <lan> the IVM driver
12:58 <thorst> I'm not sure.  I think it could do a basic deploy.  Had no neutron integration.  Didn't have cinder support.  Minimal glance support.
12:59 <thorst> it's several years old, so it will not work on any new OpenStack...
12:59 <thorst> none of the supported OpenStack releases.
12:59 *** jay1_ has joined #openstack-powervm
12:59 <thorst> I'm not really sure, I didn't work on it...I reviewed the code at one point...but it was like a couple-hundred-line driver.  It was quite lacking.
13:00 <lan> yes, you are right.  How do I put PowerVC to use?  First set up a PowerVC env and then use the nova-powervc driver?
13:01 <thorst> well, a PowerVC environment deploys the drivers and everything for you
13:01 <thorst> it is an OpenStack
13:01 <thorst> (FYI - we have a driver meeting starting in a minute here)
13:01 <lan> OK, thank you so much!!! :)
13:01 <esberglu> #startmeeting powervm_driver_meeting
13:01 <openstack> Meeting started Tue May 16 13:01:52 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:02 <edmondsw> o/
13:02 <esberglu> #topic In Tree Driver
13:03 <efried> o/
13:03 <thorst> \o
13:03 <efried> No progress
13:03 <efried> obviously nothing was going to happen during the summit.
13:03 <esberglu> Yep that's what I thought. Anything to discuss here?
13:03 <efried> And at this point I wanna give everyone a bit to calm back down.
13:03 <efried> Nope.  I'll look for an opportune time to pester again.
13:04 <esberglu> #topic Out-of-tree Driver
13:04 <thorst> lots of reviews out.  The PowerVC team has been pushing up some solid patches
13:04 <thorst> I sent efried a list to review
13:05 <efried> #action efried to do those
13:05 <thorst> chhavi has also been driving a ton of the iSCSI work.  She found a ton of issues that needed to be worked, changes in the REST itself, etc...
13:05 <thorst> so great work chhavi...many many thanks
13:05 <efried> +1
13:07 <thorst> I'm basically working on core issues with the QCOW handler
13:07 <thorst> but they're transparent to the OpenStack driver
13:07 <efried> I have a couple of topics for the driver in general that came out of the summit.  Let's have a separate "Summit takeaways" topic later.
13:07 <thorst> k
13:07 <thorst> I think that's all I had.
13:07 <esberglu> I'm still continuing to work in-tree backports as able. Nothing else from me
13:08 *** edmondsw_ has joined #openstack-powervm
13:08 <efried> CI?
13:08 <esberglu> #topic PowerVM CI
13:08 <esberglu> My #1 priority this week is identifying/opening/fixing bugs that are causing tempest test failures
13:09 *** mdrabe has joined #openstack-powervm
13:09 *** edmondsw has quit IRC
13:09 *** edmondsw_ has quit IRC
13:09 *** edmondsw has joined #openstack-powervm
13:09 <esberglu> There are still some in-tree tests that fail occasionally because of networks left over from other tests that run at the same time
13:10 <esberglu> And a handful of problematic tests for OOT
13:10 *** YuYangWang has quit IRC
13:10 <esberglu> Other than that I'm still working the systemd change. It's looking good for the most part.
13:11 <efried> I'd still like a systemd'd setup to noodle around with.
13:11 <efried> I've got some definite comments queued up for 5245; but will have more once I've had a chance to play.
13:11 <esberglu> However the n-dhcp service looks like it is still getting started with screen, which doesn't work with the logserver changes
13:12 <esberglu> Haven't got into a node to play around with it yet.
13:12 <thorst> that systemd thing is weird
13:12 <thorst> I'm not sure I'm a fan of getting rid of screen.
13:12 <thorst> but that may just be that I'm so used to screen.
13:13 <efried> thorst Totally agree.  I'm old, and not a fan of change.  But I think once we get used to it, it'll be better.
13:13 <efried> The one thing I'm going to miss the most is the coloring.
13:13 <thorst> totes...but grumble in the mean time
13:13 <efried> Hopefully we figure out how to get colors.
13:13 <esberglu> efried: I can spin you up a node on the prod. cloud. I'm gonna be tearing the staging down a bunch this week
13:14 <efried> I don't see why we can't still throw color codes into journald - mebbe it doesn't allow them?
13:14 <thorst> I thought I was seeing it...
13:14 <efried> thorst Where?  We haven't produced anything with journald yet, have we?
13:15 <thorst> chhavi's env
13:15 <thorst> iscsi debug
13:15 <efried> Anyway, converged logs plus the "traveling request-id" stuff jaypipes is working on ought to make debugging much nicer.
13:16 <thorst> traveling request-id?  Sounds beautiful
13:16 <thorst> almost as nice as the service token stuff
13:16 <edmondsw> it's related, actually
13:17 <edmondsw> dependent on service tokens... so another reason to love those
13:17 <thorst> heh, you know how much I love security discussions...
13:18 <thorst> anywho...
13:18 <thorst> anything else here?
13:18 <esberglu> Backlog of small CI issues as always. But nothing noteworthy
13:19 <esberglu> #topic Driver Testing
13:19 <efried> thorst #link https://review.openstack.org/#/c/464746/
13:19 <efried> ^^ request-id spec
13:20 *** dwayne has joined #openstack-powervm
13:20 <esberglu> Any issues or updates on the manual driver testing?
13:21 <efried> jay1_ ^^ ?
13:23 <efried> FYI guys, I gotta jump in about 20 minutes.  My #3 has a track meet.  (He's running the hurdles, like his ol' dad used to do ;-)
13:23 <efried> Can we move on, and come back to jay1_ later?
13:23 <esberglu> Alright, moving on.
13:24 <esberglu> #topic Open Discussion
13:24 <efried> Okay, a couple of things came out of the summit that I wanted to document for the sake of having 'em in writing (read: before I forget all about 'em)
13:25 <efried> First, placement and SR-IOV.  Talked with jaypipes; he agrees PCI stuff is totally broken, and would definitely be in the corner of anyone "volunteering" to fix it.
13:26 <efried> He seemed at least superficially sympathetic to the fact that the existing model doesn't work at all for us (or anyone who's not running on the hypervisor - access to /dev special files, pre-creation of VFs, etc.)
13:26 <efried> So this would be an opportunity for us to a) fix our SR-IOV story; and b) look good in the community.
13:27 <edmondsw> +1
13:27 <edmondsw> that should definitely be on our TODO list
13:27 <efried> Second, resource groups.  Today, if we have 5 compute nodes all on the same SSP which has, say, 20GB of free space, and you ask the conductor how much storage you have in your cloud, it answers 5x20GB, because it doesn't know any better.
13:28 <jay1_> efried: the update on the iSCSI verification: the issue is there with attach/detach.
13:28 <jay1_> https://jazz07.rchland.ibm.com:13443/jazz/web/projects/NEO#action=com.ibm.team.workitem.viewWorkItem&id=174342
13:28 <efried> jay1_ Hold on, we'll come back to that.
13:28 <edmondsw> I thought there was a concept of shared resources... are we not signaling something correctly there?
13:28 <efried> There's a spec to define resource groups within placement; it seems to be mostly done.  This would allow you to register the fact that all of those computes are on the same SSP, and the math would fix itself.
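The double-counting efried describes can be sketched in a few lines; the host names and SSP UUID below are made up for illustration:

```python
# Five hypothetical compute nodes, all backed by the same 20 GB SSP.
# The "ssp-1234" UUID is illustrative, not a real pool.
computes = [
    {"name": "compute%d" % i, "ssp_uuid": "ssp-1234", "free_gb": 20}
    for i in range(1, 6)
]

# Naive accounting: each compute reports the shared pool's free space
# as if it were its own, so the conductor sums it five times.
naive_total = sum(c["free_gb"] for c in computes)   # 5 x 20 = 100 GB, wrong

# Shared-resource accounting: count each backing pool exactly once.
pools = {c["ssp_uuid"]: c["free_gb"] for c in computes}
shared_total = sum(pools.values())                  # 20 GB, correct

print(naive_total, shared_total)  # 100 20
```

Registering the SSP as a shared resource provider in placement is what makes the second calculation possible.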
13:29 <edmondsw> ah... so still in the works. Is the spec almost done, or the implementation?
13:29 <efried> bah, I'll have to find that spec later, I've got it here somewhere.
13:29 <efried> edmondsw I think the code is ready for use.
13:29 <efried> So the thing is:
13:30 <efried> The user can define the resource group by running commands.  In that picture, we don't have to do anything - but it's the responsibility of the user to get it right.
13:30 <efried> However, there's a way we can tie into the placement API with code and set up this resource group ourselves from within our driver.
13:31 <efried> Like, when get_inventory is called.
13:31 <edmondsw> yeah, we don't want users having to do that
13:32 <efried> Roughly, from get_inventory, we would check whether the SSP is registered (hopefully we're allowed to specify its UUID).  If not, register it from this host.  If so (and this might be a no-op), add this host to it.
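The check-then-register flow efried outlines could look roughly like this; the `PlacementClient` here is a stand-in for illustration only, not the real placement API:

```python
class PlacementClient:
    """Stand-in for a placement client; the real API differs."""
    def __init__(self):
        self._providers = {}  # provider uuid -> set of hosts sharing it

    def get_provider(self, uuid):
        return self._providers.get(uuid)

    def create_provider(self, uuid, host):
        self._providers[uuid] = {host}

    def add_host(self, uuid, host):
        self._providers[uuid].add(host)


def ensure_ssp_registered(client, ssp_uuid, host):
    """From get_inventory: register the SSP provider if it doesn't exist,
    otherwise just add this host to it (a no-op if already a member)."""
    if client.get_provider(ssp_uuid) is None:
        client.create_provider(ssp_uuid, host)
    else:
        client.add_host(ssp_uuid, host)


client = PlacementClient()
for host in ("compute1", "compute2", "compute3"):
    ensure_ssp_registered(client, "ssp-1234", host)
print(sorted(client.get_provider("ssp-1234")))  # ['compute1', 'compute2', 'compute3']
```

The point is that the first host to report wins the registration and every later host only attaches itself, so the operation is safe to run from each compute's periodic inventory update.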
13:32 <efried> jaypipes volunteered to show me where that code is, but he's in China this week, so I'll follow up next week.
13:33 <efried> Those were the two major takeaways from the summit.
13:33 <efried> from me.
13:33 <efried> to-dos, I should say.  I had much more takeaway than that.
13:33 <edmondsw> :)
13:33 *** smatzek has quit IRC
13:33 <edmondsw> thanks, efried
13:34 <edmondsw> esberglu did you get a chance to look at that etcd stuff and see whether it would impact us?
13:34 <edmondsw> or more likely, how?
13:35 <esberglu> That totally slipped my mind. Adding it to the list now
13:37 <efried> Also on the backlog of driver work to be done (these have been hanging out for a while, but I may as well mention them here to get them on record):
13:37 <efried> o Implement get_inventory().  This replaces get_available_resource().
13:37 <efried> o Make the compute driver mark the host unavailable if the REST API is busted.
13:37 <efried> o Subsume HttpNotFound exception - now available via pypowervm 1.1.4, which is through global-reqs.
13:38 <edmondsw> efried that's all for the IT driver, right?
13:38 <efried> Both.
13:38 <edmondsw> ah, true
13:39 <efried> Oh, also need to keep an eye on https://review.openstack.org/#/c/452958/ and remove that arg from our destroy method when it merges.
13:40 <edmondsw> efried why isn't that fixing the IT driver as part of the patch?
13:40 <thorst> I love the idea to mark the driver down if the REST API isn't working
13:40 <efried> edmondsw Was just about to -1 for that.
13:40 <edmondsw> good
13:40 *** shyama has quit IRC
13:41 <edmondsw> should we go back to the iSCSI testing now?
13:42 *** shyama has joined #openstack-powervm
13:42 <esberglu> Sure. I didn't have anything else
13:42 <edmondsw> #topic iSCSI testing
13:42 <efried> Gotta bounce - I'll have to catch up on the iSCSI stuff later.  (I think I may have some to-dos wrt scrubbers there.)
13:43 <edmondsw> bye efried
13:43 <edmondsw> jay1_ ready to talk about iSCSI testing now
13:44 <jay1_> sure; so far the progress is that we are able to do a successful deploy without NW
13:44 <jay1_> the attach/detach issue is still getting fixed
13:45 *** apearson has joined #openstack-powervm
13:45 <jay1_> the next target would be to try LPM.
13:46 <edmondsw> jay1_ this is with a CHAP secret configured but not using CHAP, is that right?
13:46 <thorst> multi attach has REST issues as well
13:46 <edmondsw> I spoke to gfm about the CHAP issues... sounds like a couple of different problems we're trying to track down and get fixed
13:46 <edmondsw> glad you were able to work around that in the meantime
13:47 <edmondsw> jay1_ is the bug you linked above the only other issue we're seeing?
13:47 <edmondsw> do we have a bug for the CHAP issues?
13:48 <jay1_> edmondsw: yes, we are not using CHAP yet
13:48 <chhavi> edmondsw: currently we have CHAP disabled for SVC
13:49 <chhavi> edmondsw: the reason I disabled CHAP is that we were having discovery issues with CHAP enabled
13:49 <thorst> work the multi attach, then LPM, then CHAP can come in
13:49 <edmondsw> thorst +1
13:49 <thorst> just chipping away at all the edge cases
13:49 <chhavi> edmondsw: the reason is that on neo34, iscsid.conf does not have CHAP configured
13:49 <edmondsw> I'm trying to work CHAP in parallel with the storage guys, but that shouldn't be the focus for jay1_ or chhavi right now
13:50 <chhavi> yeah, for us we have put CHAP to the side for now, and are just playing with attach/detach
13:50 <edmondsw> what's the latest on the attach issues?
13:50 <jay1_> https://jazz07.rchland.ibm.com:13443/jazz/web/projects/NEO#action=com.ibm.team.workitem.viewWorkItem&id=174342
13:50 <chhavi> currently the issue which I am seeing is: if I try to attach, say, 2 volumes to the VM, the second volume is not getting discovered correctly
13:50 <jay1_> we have another open defect as well
13:51 <thorst> jay1_: as noted earlier, please don't put links to internal IBM sites in here  :-)
13:51 <thorst> but that issue is for multi-attach
13:51 <chhavi> the problem which I suspect is: when we do the discovery we just do an iscsiadm login; I am trying to find where we can use the LUN id to identify the correct LUN on the same target
13:52 <chhavi> thorst: this is not multiattach
13:52 <thorst> ooo
13:52 <thorst> it's just straight detach
13:52 <jay1_> thorst: sure ..
13:52 <chhavi> multiattach means the same volume on multiple VMs
13:52 <thorst> ahh, sorry, I meant multiple volumes on the same VM
13:53 <edmondsw> chhavi if we only attach/detach one volume, things work, but if we attach/detach a second volume we have issues... is that correct?
13:53 <chhavi> yeah
13:54 <edmondsw> chhavi have you spoken to the owner of that bug? Do they have any ideas on how to fix it?
13:54 <chhavi> another use case which I have seen: we have 2 VMs on the same host, and if you try to attach one volume to each VM, it does not discover that as well
13:55 <thorst> edmondsw: changch is aware.  He's got a backlog himself that he's working through, unfortunately
13:55 <thorst> maybe nvcastet can help him out
13:55 <chhavi> I am waiting for hsien to come, and I am also checking in parallel how to do iSCSI discovery using the LUN id
13:55 <chhavi> in the meantime, I am updating my pypowervm review with exception handling; too many things in parallel :)
13:57 *** smatzek has joined #openstack-powervm
13:57 <thorst> chhavi: yeah, we probably need to just list out the active items there.  Maybe an etherpad would help keep track of it all
13:57 <thorst> but we know we'll at least need a new pypowervm rev when all said and done
13:59 <edmondsw> we're at the top of the hour... any last words?
14:00 <esberglu> Thanks for joining
14:00 <esberglu> #endmeeting
14:00 <openstack> Meeting ended Tue May 16 14:00:32 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-05-16-13.01.html
14:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-05-16-13.01.txt
14:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-05-16-13.01.log.html
14:05 *** jay1_ has quit IRC
14:34 *** tjakobs has joined #openstack-powervm
14:35 *** mdrabe has quit IRC
14:41 *** mdrabe has joined #openstack-powervm
15:01 *** chas has quit IRC
15:02 *** chas has joined #openstack-powervm
15:06 *** YuYangWang has joined #openstack-powervm
15:06 *** chas has quit IRC
15:22 *** chas has joined #openstack-powervm
15:27 *** chas has quit IRC
15:38 <esberglu> efried: Hitting an issue with power-off/destroy in the CI. Here's the error we are seeing
15:38 <esberglu> HTTP error 500 for method DELETE on path /rest/api/uom/LogicalPartition/72E42FB1-FE45-4A05-9DC9-B59FB0D809C1: Internal Server Error -- [PVME01050010-0056] This task is only allowed when the partition is powered off.
15:38 <esberglu> So I looked through the power-off flow in the logs
15:38 <efried> esberglu What version of pypowervm?
15:38 *** kriskend has joined #openstack-powervm
15:39 <efried> oh, never mind, reread the message.
15:39 <esberglu> efried: Whatever the upper constraint is right now; it pulls it each time (1.1.4?)
15:39 <efried> So yeah, look a little earlier and see if we caught and ignored PVME01050901
15:39 <efried> ...for the power-off
15:40 <esberglu> What I'm seeing for power off is that it goes through and tries to power off. The first attempt fails the os_shutdown because the RMC connection is not active
15:40 <esberglu> Then it goes through and tries power off again
15:41 <esberglu> At this point it says that power off is not required for the instance (I think the state at this point is "not activated")
15:42 <esberglu> So it seems that it is getting to a state that is considered "powered off" but where you can't destroy
15:42 <esberglu> efried: Not seeing PVME01050901 in the logs
15:42 <efried> esberglu I wonder if we're just not logging that.
15:42 <efried> Checking...
15:43 <efried> Okay, we should see a WARN like "Partition %s already powered off."
15:43 <efried> Do you see that?
15:43 <efried> I should print the code in there.
15:43 *** chas has joined #openstack-powervm
15:44 <efried> I'm paranoid that we just added PVME01050901 to the ignore list, but it may not have been appropriate to do so.
15:48 <esberglu> efried: This is running 1.1.2, not 1.1.4; I lied
15:48 *** chas has quit IRC
15:48 <efried> Okay, even so, if that code were a problem, it would be showing up as a failure on the power-off, not on the delete.
15:49 <efried> esberglu Are you seeing "Partition %s already powered off." ?
15:49 <esberglu> efried: No I'm not
15:50 <esberglu> http://184.172.12.213/87/435387/23/check/nova-out-of-tree-pvm/a7f8cdb/
15:50 <esberglu> If you look for 08:35:49.002
15:51 <esberglu> You can see where the power off flow starts
15:51 <esberglu> Instance ID: 72e42fb1-fe45-4a05-9dc9-b59fb0d809c1
15:51 <esberglu> Instance name: pvm4-tempest-Ser-72e42fb1
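The "ignore list" pattern efried is worried about - swallowing a specific PVME reason code on power-off and treating "already powered off" as success, while logging the code so it shows up later - can be sketched roughly like this. The exception class, helper names, and code set are illustrative, not the real pypowervm API:

```python
import logging

LOG = logging.getLogger(__name__)


class VMPowerOffFailure(Exception):
    """Illustrative stand-in for a pypowervm power-off failure."""
    def __init__(self, reason_code):
        super().__init__(reason_code)
        self.reason_code = reason_code


# Reason codes treated as "the partition is already powered off".
ALREADY_OFF_CODES = {"PVME01050901"}


def power_off(partition, do_power_off):
    """Attempt power-off; ignore (but log, with the code) 'already off'
    failures so the subsequent delete can proceed. Re-raise anything else."""
    try:
        do_power_off(partition)
    except VMPowerOffFailure as exc:
        if exc.reason_code in ALREADY_OFF_CODES:
            # Printing the code here is exactly efried's "I should print
            # the code in there" - otherwise the ignore is invisible in logs.
            LOG.warning("Partition %s already powered off (%s).",
                        partition, exc.reason_code)
            return
        raise


def fake_off(partition):
    raise VMPowerOffFailure("PVME01050901")


power_off("instance-1", fake_off)  # logs a warning, does not raise
```

The risk esberglu hit is visible in this shape: if the ignore set is too broad, the power-off "succeeds" silently while the partition is in a state the later DELETE still rejects.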
16:04 *** chas has joined #openstack-powervm
16:09 *** chas has quit IRC
16:25 *** chas has joined #openstack-powervm
16:29 *** jwcroppe has quit IRC
16:29 *** chas has quit IRC
16:37 *** jwcroppe has joined #openstack-powervm
16:37 <chhavi> thorst, efried: we need to have a call to discuss the iSCSI changes needed in the REST layer
16:44 *** k0da has quit IRC
16:46 *** chas has joined #openstack-powervm
16:50 *** chas has quit IRC
16:54 <efried> esberglu Ahahahahahahahaaaaa!
16:54 <efried> Add -a to journalctl and you get COLORS!
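For context on efried's discovery: journalctl escapes non-printable bytes, including the ANSI color escapes, by default; `-a`/`--all` passes them through raw. The unit name in the comment below is hypothetical; the snippet just illustrates the raw escape sequence involved:

```shell
# Log colors are ANSI escapes like ESC[32m ... ESC[0m. journalctl hides
# them unless -a/--all is given, e.g. (hypothetical unit name):
#   journalctl -a -u devstack@n-cpu.service
# Show the raw escape sequence surviving a round trip through a file:
printf '\033[32mINFO\033[0m nova-compute started\n' > /tmp/color_sample.log
pattern=$(printf '\033\\[32m')           # ESC + literal "[32m" as a grep BRE
grep -c "$pattern" /tmp/color_sample.log  # prints 1
```

Piping such output through a pager or `cat -v` shows why the default escaping exists: raw control bytes in a terminal can be misinterpreted.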
16:57 <chhavi> thorst, efried: I have started a new etherpad; how do I add members to view it?
16:57 <chhavi> https://etherpad.openstack.org/p/powervm-iscsi
16:58 <efried> chhavi Nothing special needed.  Just openID login.  I can read & write it.
16:59 <chhavi> are you able to view the discussion on the etherpad?
17:00 <efried> yes
17:03 <chhavi> so that's the summary of the discussion
17:07 *** chas has joined #openstack-powervm
17:11 *** chas has quit IRC
17:21 <esberglu> efried: Does that work for outputting to files too?
17:22 <efried> esberglu Yes... but it's not going to help us much (except with pypowervm/nova_powervm lines) because they switched off colors in devstack.  I'm working with sdague to fix it for the world.
17:26 <esberglu> efried: Alright, keep me posted
17:27 <efried> ack
17:27 <efried> esberglu Actually, do you have a way I can noodle with the devstack source before you stack, and then stack and test?
17:28 *** chas has joined #openstack-powervm
17:28 <esberglu> You can just unstack the one you are using, right? Otherwise I can spin you up a new one and not stack it
17:28 *** shyama has quit IRC
17:32 *** chas has quit IRC
17:34 *** chhavi has quit IRC
17:38 <efried> esberglu Okay, I'll try to unstack.
17:39 <efried> esberglu Do I do that as powervmci or jenkins?
17:39 <esberglu> jenkins
17:39 <efried> thx
17:52 *** chhavi has joined #openstack-powervm
18:08 <edmondsw> esberglu even if you assume "This task is only allowed when the partition is powered off" is the correct error message, it shouldn't be an HTTP 500... that should be an HTTP 400.
18:09 *** chas has joined #openstack-powervm
18:14 *** chas has quit IRC
18:15 *** kriskend has quit IRC
18:17 *** openstackgerrit has quit IRC
18:27 <thorst> efried: did you have a chance to go through (or will you have a chance to go through) that list of reviews I sent earlier?
18:27 <thorst> I'd like to get them through for PowerVC
18:27 <efried> thorst Haven't yet.  Distracted.  Can do now.
18:27 <thorst> thx
18:29 *** chhavi has quit IRC
18:29 *** kriskend has joined #openstack-powervm
18:29 <esberglu> efried: Is "Not activated" the state we would expect after powering off?
18:29 <efried> esberglu Yeah.
18:30 <efried> I looked through those logs you sent me earlier and I can't see where we went wrong.
18:30 <efried> We've seen this before, though, I believe.
18:30 <efried> This fails intermittently, not all the time, right?
18:30 <esberglu> Yeah, on and off
18:30 *** chas has joined #openstack-powervm
18:31 <esberglu> Every time we look into it, something larger seems to break in CI
18:31 <efried> Clearly a timing thing.  Wonder if it's a UUID conflict with the instance we're rebuilding vs. the one we're rebuilding from.
18:33 *** openstackgerrit has joined #openstack-powervm
18:33 <openstackgerrit> Merged openstack/nova-powervm master: Update host cpu util calculation to consider idle proc cycles  https://review.openstack.org/460078
18:35 *** chas has quit IRC
18:41 <efried> thorst Reviews done.
18:41 <thorst> hugs
18:41 <thorst> how was the summit?
18:48 <openstackgerrit> Merged openstack/nova-powervm master: Save management switch mac address on deploy and use on rebuild  https://review.openstack.org/464554
18:51 *** chas has joined #openstack-powervm
18:53 *** k0da has joined #openstack-powervm
18:56 *** chas has quit IRC
19:00 <efried> esberglu https://review.openstack.org/#/c/465147/
19:00 <efried> thorst Kick ass.  I got SO much more out of it this time.
19:01 <thorst> :-)
19:01 <thorst> good
19:08 <efried> thorst I got log colors back ;-)
19:09 <thorst> oo
19:12 *** chas has joined #openstack-powervm
19:16 *** chas has quit IRC
19:24 *** YuYangWang has quit IRC
19:27 <efried> thorst https://review.openstack.org/#/c/465147/
19:30 <esberglu> efried: Cool. Saw your comment about the systemd changes. Doing a manual run with the improved commands for sanity. Then I think that should be good to go
19:30 <efried> esberglu You're doing a manual run with the devstack change set?
19:30 <esberglu> Nah, just with your improved bash
19:31 <efried> oh, that.
19:31 <efried> Okay, good deal.
19:31 <esberglu> I can also do a run with your devstack change if you want
19:32 <efried> esberglu Actually, yeah, let's do that.  Cause one of my suggestions was to include -a in the journalctl command.  So including the devstack change set as well will give us end-to-end proof that the colors come through.
19:33 *** chas has joined #openstack-powervm
19:33 <esberglu> Yeah, I can get them both in one run
19:33 <esberglu> So no problem
19:37 *** chas has quit IRC
19:54 *** chas has joined #openstack-powervm
19:56 <efried> esberglu Have you started that yet?
19:57 <esberglu> efried: Yeah, it's stacking right now
19:57 <efried> k.  One more patch set coming into the devstack change, but should be a no-op.
19:57 <esberglu> ok
19:59 *** chas has quit IRC
20:06 <efried> esberglu Oh, hey, did you look at that `=-` thing too?  Was that bogus?
20:08 <esberglu> efried: You mean when setting EnvironmentFile?
20:08 <efried> esberglu Yeah
20:08 <esberglu> That's needed
20:08 <esberglu> Can't remember why off the top of my head, but I know it was needed
20:10 <efried> weird
20:10 <esberglu> At least I think it was needed. There was some note about it in whatever I was following to set that var
20:15 *** chas has joined #openstack-powervm
20:17 <esberglu> efried: Manual runs don't copy the logs to the logserver, so you won't have a link to logs like normal CI runs have
20:19 *** chas has quit IRC
20:22 <esberglu> efried: Ok, found it. The - makes it so that it doesn't error out if the path supplied doesn't exist
20:23 <esberglu> Which maybe we don't actually want? Because if that file doesn't exist in CI, remote is broken
20:24 <mdrabe> I assumed that "-" was alright since I saw it in other .service files
20:25 <mdrabe> Might be for files that are "to be created"
20:27 <esberglu> Shouldn't really matter for CI since we know /etc/environment exists, since we create it earlier
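For reference, the `-` prefix esberglu found is documented systemd behavior: a leading `-` on `EnvironmentFile=` makes systemd silently skip the file if it is missing, instead of failing the unit. A sketch of the kind of unit fragment in question; the service path and unit contents are illustrative, not the exact file devstack writes:

```ini
# Illustrative unit fragment, not the real devstack-generated unit.
[Service]
# With the leading "-", a missing /etc/environment is ignored.
# Without it, a missing file causes the unit to fail to start -
# which, per the discussion above, might actually be preferable in CI,
# where an absent /etc/environment means the setup is already broken.
EnvironmentFile=-/etc/environment
ExecStart=/usr/local/bin/nova-compute
```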
20:31 *** thorst has quit IRC
20:36 *** chas has joined #openstack-powervm
20:38 *** smatzek has quit IRC
20:40 *** kriskend has quit IRC
20:40 *** chas has quit IRC
20:46 <efried> esberglu Okay, either way is fine by me.
20:46 <efried> esberglu Can you copy the logs out manually?
20:50 <esberglu> efried: You can just scp from the node. Is that good enough for your purposes?
20:51 <efried> esberglu Yes; though I'd like to put it somewhere public (and perhaps even semi-permanent) so I can link it from the change set.
20:51 *** thorst has joined #openstack-powervm
20:51 <edmondsw> efried nice job with the nova patch adding colors back
20:52 <efried> edmondsw Thanks
20:52 <edmondsw> s/nova/devstack
20:55 *** kriskend has joined #openstack-powervm
20:55 *** thorst has quit IRC
20:56 *** chas has joined #openstack-powervm
20:59 *** thorst has joined #openstack-powervm
21:01 *** chas has quit IRC
21:02 *** thorst has quit IRC
21:04 *** kriskend_ has joined #openstack-powervm
21:04 *** kriskend has quit IRC
21:09 *** k0da has quit IRC
21:11 *** thorst has joined #openstack-powervm
21:12 *** apearson has quit IRC
21:13 *** svenkat has quit IRC
21:17 *** chas has joined #openstack-powervm
21:22 *** chas has quit IRC
21:25 *** k0da has joined #openstack-powervm
21:38 *** chas has joined #openstack-powervm
21:43 *** chas has quit IRC
21:45 *** dwayne has quit IRC
21:51 *** thorst has quit IRC
21:53 *** esberglu has quit IRC
21:57 *** mdrabe has quit IRC
21:57 *** k0da has quit IRC
21:59 *** chas has joined #openstack-powervm
21:59 *** edmondsw has quit IRC
22:00 *** edmondsw has joined #openstack-powervm
22:02 *** kriskend_ has quit IRC
22:03 *** chas has quit IRC
22:04 *** edmondsw has quit IRC
22:20 *** chas has joined #openstack-powervm
22:24 *** chas has quit IRC
22:41 *** chas has joined #openstack-powervm
22:45 *** chas has quit IRC
22:48 *** tjakobs has quit IRC
23:02 *** chas has joined #openstack-powervm
23:06 *** chas has quit IRC
23:22 *** dwayne has joined #openstack-powervm
23:22 *** chas has joined #openstack-powervm
23:27 *** chas has quit IRC
23:43 *** chas has joined #openstack-powervm
23:48 *** chas has quit IRC
23:51 *** thorst has joined #openstack-powervm
23:56 *** thorst has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!