Thursday, 2017-03-09

*** dwayne has joined #openstack-powervm00:14
*** kylek3h has joined #openstack-powervm00:18
*** thorst has joined #openstack-powervm00:25
*** k0da has quit IRC00:28
*** thorst has quit IRC00:29
*** thorst has joined #openstack-powervm01:02
*** thorst has quit IRC01:02
*** esberglu has joined #openstack-powervm01:10
*** esberglu has quit IRC01:11
*** seroyer has joined #openstack-powervm01:33
*** thorst has joined #openstack-powervm02:03
*** seroyer has quit IRC02:05
*** thorst has quit IRC02:08
*** kjw3 has joined #openstack-powervm02:23
*** thorst has joined #openstack-powervm02:54
*** thorst has quit IRC02:54
*** kjw3 has quit IRC03:12
*** thorst has joined #openstack-powervm03:16
*** thorst has quit IRC03:17
*** apearson has joined #openstack-powervm04:07
*** thorst has joined #openstack-powervm04:17
*** thorst has quit IRC04:22
*** k0da has joined #openstack-powervm05:05
*** thorst has joined #openstack-powervm05:18
*** thorst has quit IRC05:23
*** edmondsw has joined #openstack-powervm06:13
*** apearson has quit IRC06:15
*** edmondsw has quit IRC06:17
*** thorst has joined #openstack-powervm06:19
*** thorst has quit IRC06:24
*** k0da has quit IRC07:19
*** thorst has joined #openstack-powervm07:20
*** thorst has quit IRC07:24
*** thorst has joined #openstack-powervm08:21
*** thorst has quit IRC08:25
*** k0da has joined #openstack-powervm08:39
*** thorst has joined #openstack-powervm09:22
*** thorst has quit IRC09:31
*** edmondsw has joined #openstack-powervm09:50
*** edmondsw has quit IRC09:54
*** edmondsw has joined #openstack-powervm10:22
*** edmondsw has quit IRC10:26
*** thorst has joined #openstack-powervm11:28
*** thorst has quit IRC11:33
*** seroyer has joined #openstack-powervm12:25
*** smatzek has joined #openstack-powervm12:31
*** smatzek_ has joined #openstack-powervm12:32
*** smatzek has quit IRC12:36
*** thorst has joined #openstack-powervm12:37
*** dwayne has quit IRC12:37
*** tblakes has joined #openstack-powervm12:59
*** esberglu has joined #openstack-powervm13:06
*** edmondsw has joined #openstack-powervm13:12
esberglu#startmeeting powervm_ci_meeting13:31
openstackMeeting started Thu Mar  9 13:31:25 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.13:31
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.13:31
thorsto/13:31
openstackThe meeting name has been set to 'powervm_ci_meeting'13:31
efriedo/13:31
esberglu#topic In Tree CI13:32
esbergluIn tree CI is still looking good. One test that is hitting us every now and then13:33
esbergluhttp://184.172.12.213/85/442685/4/check/nova-in-tree-pvm/54cc6d5/powervm_os_ci.html13:34
esbergluWhere it can't find an image13:34
esbergluWhich is something I wanted to debug today13:34
esberglu#action esberglu: Debug failing IT CI test13:34
thorsthmm...that doesn't seem to be related to our driver13:34
thorst:-/13:35
esbergluOther than that, I had to redeploy the staging CI. But the plan there is13:35
esbergluto have a set of changes to the whitelist13:35
esbergluSo that as we start getting IT patches approved, we can have the updated whitelist ready13:36
thorstneat13:36
esberglu#topic OOT CI13:37
esbergluOOT also has a couple failures I wanted to debug today13:37
esbergluhttp://184.172.12.213/66/443266/1/check/nova-out-of-tree-pvm/c04198b/powervm_os_ci.html13:37
esbergluThese two tests have been causing problems on and off for a while13:38
esbergluIt seemed like they were a side effect of another test we disabled, as they stopped showing up13:39
esbergluafter we disabled another test from that same class13:39
thorstbut are still an issue now?  Just not as often?13:39
esbergluYeah still an issue now. They seem to have started hitting us more this week13:40
thorsthmm....could debug in staging if we're failing >20% or so13:41
esberglu#action esberglu: Debug 2 failing rebuild tests for OOT CI13:41
esbergluThen we have the newton/ocata failures13:42
esbergluThe fix has been merged for g-r master, just has to be cherry-picked to the other branches13:42
esbergluAnd the other part of the fix (in tempest) is still under review13:43
esbergluBut we have a fix incoming13:43
esberglu#topic OSA CI13:44
esbergluI finally got the OSA CI dev. env. deployed after some struggles.13:44
esbergluI need to increase the size of the SSP though, I forgot that OSA requires a larger flavor13:45
esbergluAnd didn't make it big enough13:45
thorststaging CI?13:45
esbergluNo, a 3rd environment13:45
thorstI just got a bunch of capacity for the v7k...so we can make all the SSPs larger at some point if needed.13:46
esbergluCool. I don't think we need to atm. But might have to later13:46
efriedYou can do that without rebuilding the whole SSP, as I'm sure you know.13:47
thorstyep, just add new disks13:47
esbergluOkay. Might have questions later, I don't think I've done that before13:48
esberglu#topic Other Items13:49
esbergluAt some point I want to move the undercloud from newton to ocata13:49
esbergluJust kind of waiting for a lull in the action to get that done13:49
thorstheh13:50
thorstlull13:50
thorstbut yeah, agree that'd be a good update to do13:50
esbergluAlso I've been working with bjayasan as he is trying to get tempest running for an SDE setup13:50
esbergluStill just trying to help him understand the flow13:51
esbergluWhat is the plan for that? Is that going to be a whole new CI setup? Or are we trying to integrate that into our CI eventually?13:51
thorstesberglu: no, he's just trying to run some tests with systems configured in a different way13:52
thorstso right now you run SSPs13:52
thorstso we have a coverage gap with anything not SSP13:52
thorsthe's trying to validate some of the other style configurations - or get to that point really13:52
esbergluOh okay. Sounded kind of like he was trying to get a full-fledged CI running when we were working13:54
esbergluI'll touch base with him again today13:54
thorstesberglu: hmmm...don't think so13:54
thorstI am 90% sure he only has one server.13:54
thorst:-)13:54
esbergluYeah that's why I brought it up. He was asking about the staging environment and how he could test stuff13:55
thorsthmm...I may touch base with him13:55
esbergluBut staging definitely can't support that13:55
thorstthe intention is simply to test different flavors of configurations13:55
thorstJay to test the in-tree stuff (initially) with tempest.  That'll eventually move to an SDE style env13:55
thorstnbante to test the oot stuff with tempest, on an SDE style env.13:56
thorstand we don't know how to bring SDE style envs into CI because they can't do the same tricks we're doing here...13:56
esbergluThat's all I had today13:57
esbergluThanks for joining13:58
esberglu#endmeeting13:58
openstackMeeting ended Thu Mar  9 13:58:51 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)13:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/powervm_ci_meeting/2017/powervm_ci_meeting.2017-03-09-13.31.html13:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/powervm_ci_meeting/2017/powervm_ci_meeting.2017-03-09-13.31.txt13:58
openstackLog:            http://eavesdrop.openstack.org/meetings/powervm_ci_meeting/2017/powervm_ci_meeting.2017-03-09-13.31.log.html13:58
*** apearson has joined #openstack-powervm14:01
efriedthorst esberglu We can test SSP and localdisk (and probably even LIO stuff?) on the same system.  Just have to twiddle the conf and restart the compute service each time, neh?14:04
thorstpotentially, yes14:09
thorstwould need LIO and VIOSes co-existing14:09
thorstI don't know that apearson would say that's supported if we find issues  :-)14:10
adreznecefried: thorst in general that should work (and we've had systems in similar configs in dev environments), though likely not for the way we do CI14:10
efriedSince we build and stack a separate node for each run, we would just build that node with the appropriate config.14:11
efried...except they share a nvl, riiight.14:11
thorstso it goes back to how we do the API14:12
apearsonthorst - I'd say that's supported...no reason for it not to be.  And it gives the user the ability to run with their traditional enterprise attach storage on the same system as they try out the cloud attach storage.  So with standard NovaLink + OpenStack, I think we could support that model.14:12
thorstwhich is the discussion from yesterday14:12
*** tjakobs has joined #openstack-powervm14:27
*** seroyer has quit IRC14:34
*** smatzek_ has quit IRC14:36
openstackgerritEric Fried proposed openstack/networking-powervm master: WIP: Event-driven heal_and_optimize for SR-IOV  https://review.openstack.org/43607814:44
*** waler has joined #openstack-powervm14:45
efriedthorst D'oh bug ^^14:47
thorstI like the change...UT of course14:49
thorstthat event is in 1.0.0.5?14:49
esbergluefried: thorst: I put up a change for the test that was failing IT CI14:49
esbergluIt was failing because snapshot isn't implemented yet14:50
*** alainfisher has joined #openstack-powervm14:50
esbergluWhat I don't understand (and we have seen this on other tests for IT)14:50
esbergluIs why this test is not failing every time14:50
thorstwhat's the change set number?14:50
efriedthorst 496814:51
esbergluYeah14:51
thorstLGTM, but yeah...no idea how that would ever pass14:52
*** seroyer has joined #openstack-powervm14:55
esbergluFailing and passing both have the same NotImplementedError in the logs. But it only gets picked up as a failure sometimes. Idk14:57
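
For reference, a nova virt driver reports an unsupported operation by raising NotImplementedError from the corresponding method; a minimal, illustrative stub (not the actual in-tree PowerVM driver code):

    class PowerVMDriver(object):  # illustrative stub class, not the real driver
        def snapshot(self, context, instance, image_id, update_task_state):
            # Snapshot hasn't been ported to the in-tree driver at this point, so the
            # driver simply reports the operation as unimplemented.
            raise NotImplementedError()
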
*** smatzek_ has joined #openstack-powervm14:57
*** jpasqualetto has joined #openstack-powervm14:57
*** smatzek_ has quit IRC14:57
thorstlol14:58
efriedI had an interesting conversation about this the other day in #openstack-nova - Just because we don't implement something doesn't mean it doesn't "succeed".14:58
*** smatzek_ has joined #openstack-powervm14:58
adreznecWell it's not like you want your tests to have consistent results, right?14:58
efriedAt least from horizon, the ops are asynchronous.  So horizon telling you it succeeded just means it successfully submitted the request to the (nova) API.14:58
efriedSo I believe the tempest tests are similarly submitting the request, and then instead of checking the return from that command, they're polling for certain VM states or other conditions.14:59
thorstyep, you're right15:00
efriedIn other words, us raising NotImplementedError doesn't necessarily cause a failure.15:00
efriedNow, if it's possible for a test to succeed sometimes, even when one of the critical ops in its path fails, I submit to you that it is a bad test.15:01
efriedNot that I'm necessarily in favor of submitting bugs every time we find one of these...15:02
efried...but that might be The Right Thing To Do.15:02
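
A minimal sketch of the polling pattern efried describes, assuming a generic compute client; this is illustrative, not the actual tempest waiter code:

    import time

    def wait_for_server_status(client, server_id, expected, timeout=300, interval=5):
        """Poll the compute API until the server reaches the expected status.

        The original request may have been "accepted" even though the driver later
        raised NotImplementedError; only the polled state can reveal that.
        """
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = client.show_server(server_id)['server']['status']
            if status == expected:
                return
            time.sleep(interval)
        raise RuntimeError('timed out waiting for %s to reach %s' % (server_id, expected))
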
walerMorning, sorry to disturb, I'm seeking some clarification regarding integrating a POWER8/PowerVM compute node, into a broader OpenStack environment. I was advised this was the best place to ask?15:03
efriedwaler Probably so.  What's up?15:03
waleram seeking some enlightenment as to how (deployable) images are handled/treated? I know from a traditional PowerVM background how LPARs are handled storage-wise. And I know from using PowerVC how you can "dd" images in and out. But it wasn't obvious from wiki.openstack.org/wiki/PowerVM how the images were handled when you don't have PowerVC in the middle15:05
walerI was presuming images would be "discovered", as opposed to pushed down from the controller node?15:06
*** thorst is now known as thorst_afk15:06
efriedWell, for starters, you'll have to have images that are specifically built for the ppc64le architecture.  So, like, you won't be able to use the same images you use for e.g. Intel.15:07
waleryep, appreciate that15:07
efriedWhen we do it, we stuff the images in glance just like any other.15:07
walerso it's not the discovered approach, like it used to be with ICM <> PowerVC?15:07
efriedSorry, I don't have the background to answer that.  Where would those guys "discover" from?15:08
efriedI believe PowerVC uses cinder for everything.  The native PowerVM compute driver can use ephemeral boot disks.15:09
efriedYou need to configure what kind of disk driver you want to use - either 'localdisk' or 'ssp'.15:09
efried'localdisk' creates LVs on a specified VG on your VIOS(es).  'ssp' uses a Shared Storage Pool that you've got set up.15:10
efriedDo you know which one of those you want to use?15:10
waleryes, I think we've got that far. In that our simple setup (one VIOS, one novalink, one SSP).15:11
efriedOkay.  And you've told the conf that you want to use the ssp disk driver?15:12
efriedShould just need that one line in the conf, and the driver will discover your SSP.15:12
efried[powervm]15:13
efrieddisk_driver = ssp15:13
waleryep, and our upstream controllers report correctly (RAM/CPU/Storage space)15:13
efriedCool.  So have you tried a deploy yet?15:13
walerit's then what to do with an image? I'd presumed from the VC ways that you'd "dd" the image into the SSP and OpenStack would somehow "discover" what was in the SSP for use?15:14
efriedOh, no, you don't have to do that.15:14
efriedIf the image is in glance, you just select that image when you deploy.15:14
efriedOur driver will upload it for you.15:15
efriedinto a shiny new LU that it creates for your VM in the SSP.15:15
efriedIt's actually slightly cooler than that: If it's the first time you've used that image, the whole thing gets uploaded into an "image LU", but each VM you subsequently create against that image - even if it's on a separate host on the same SSP - will get a "thin clone" disk LU backed by that image LU.  So the first one is slow, but subsequent ones are really fast.15:16
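
A rough sketch of that image-LU caching flow; the helpers passed in (find_image_lu, stream_upload, thin_clone) are hypothetical stand-ins for the real nova-powervm/pypowervm calls:

    def boot_lu_for(ssp, image_meta, find_image_lu, stream_upload, thin_clone):
        """Return a boot LU for a new VM, reusing a shared image LU when possible."""
        image_lu = find_image_lu(ssp, image_meta)
        if image_lu is None:
            # First deploy of this image on the SSP: stream the whole image into a
            # new image LU (slow).
            image_lu = stream_upload(ssp, image_meta)
        # Every later deploy against the same image (even from another host on the
        # same SSP) gets a fast, thin-provisioned clone backed by that image LU.
        return thin_clone(ssp, image_lu)
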
walerso where on the compute would it store the glance image, in the process? Does that transition through the novalink LPAR or the VIOS?15:17
walerah, an image LU, that's the special type of LU within the SSP, if I'm following you correctly15:17
efriedRight.15:17
efriedI didn't quite catch your question above.  If you've already put the image into glance (e.g. via `openstack image create`), then it'll have a name/UUID.15:18
walerso would that first time, transfer it through the novalink LPAR? or direct from controller to vios?15:19
efriedWhen you do your spawn, you pick that image (either in horizon, or via --image on the CLI)15:19
efriedOkay, I think I understand the question.15:19
efriedIt goes through the NovaLink partition to the VIOS.15:19
efriedBut glance is talking to the NovaLink partition, not to the controller node.15:20
efriedThe NovaLink partition is where the compute service runs.15:20
efriedThe compute process is what talks to the glance image API.15:20
waleryep, understand that. And from what you explained above, we need to treat an image like other images (i.e. it's not discovered like with VC - good to know). I'm now trying to see how the flow works, as some AIX/IBMi images could be on the larger side. If they get staged down to the SSP via the novalink LPAR, I presume I need to ensure the novalink LPAR has enough free disk space for temp storage? Or does the image just route through novalink, and it's not staged/copied there?15:24
efriedwaler - shouldn't need temp space on the NovaLink.15:27
efriedIt's streamed through.15:27
walerah great, streamed, thank you! Thanks for the enlightenment.15:28
efriedYou bet.  Good luck, and let us know if you have any issues.15:29
walerJust to check, my expectation is that manually creating an LPAR elsewhere (on another POWER8, with another SSP) and then using the documented process to dd that out of the SSP is a recommended route to creating images? I know it's the documented approach for backing up images when using VC, so hoped the same trick would be "good enough" here?15:34
thorst_afkwaler: that should work.  Just make sure you get the cloud-init on there15:39
thorst_afkand in a standard openstack install, swap to this interfaces.template on the powervm compute nodes15:39
thorst_afkhttps://ibm.ent.box.com/s/pjfy571tg6v5imu8kq59v32rdgmu7x3715:39
thorst_afkthat allows a mix of IPv4 and IPv6.  The IPv6 on a NovaLink POWER8 is used for RMC15:40
*** alainfisher has quit IRC15:40
walerthorst_afk, efried thanks. We've actually got some S822L's running Ubuntu+KVM happily hooked into our broader OpenStack environment already. We're just trying to do the same with S822's now for AIX and IBMi deployments15:43
walerappreciate the clarification and enlightenment...15:44
thorst_afkwaler: let us know how it goes15:45
thorst_afkand be sure to fill out the user survey too  :-)15:45
thorst_afkwe like having PowerVM show up in the openstack survey :-D15:46
*** tjakobs has quit IRC16:03
*** dwayne has joined #openstack-powervm16:16
esbergluthorst: efried: Getting REST errors on those 2 failing OOT CI tests trying to close vterm16:23
esbergluhttp://paste.openstack.org/show/602090/16:23
*** k0da has quit IRC16:35
thorst_afkesberglu: yikes.  efried any idea?  That URL looks right...and that code has been in there forever.16:38
*** thorst_afk is now known as thorst16:38
thorstwonder if that's what you'd get if the console was already open?16:39
*** tjakobs has joined #openstack-powervm16:39
esbergluI'm still looking through the logs to see if anything else happened before that16:41
*** Guest27328 has joined #openstack-powervm16:50
*** jpasqualetto has quit IRC17:05
*** alainfisher has joined #openstack-powervm17:12
*** Guest27328 has quit IRC17:13
*** alainfisher has quit IRC17:13
*** jpasqualetto has joined #openstack-powervm17:21
*** tjakobs has quit IRC17:50
*** alainfisher has joined #openstack-powervm18:28
*** tjakobs has joined #openstack-powervm18:29
efriedthorst esberglu That looks to me like the LPAR is already gone.18:39
efriedI thought we were ignoring that one.  Is it causing a failure?18:39
*** alainfisher has left #openstack-powervm18:39
esbergluWe were talking about ignoring them. We never came to a conclusion and they stopped showing up for a while18:41
efriedthorst eshaseth tested PS4 of https://review.openstack.org/#/c/436078/ and it works.  Should I keep those debug messages in there or rip 'em out?18:42
thorstefried: I think btang had thoughts/feels about what we do when the lpar doesn't exist.19:00
efriedDo tell.19:01
thorstefried: honestly I don't remember.  It was some use case they were driving where they wanted better messages19:08
thorstwe should probably ping him...he doesn't hang out in this channel19:08
thorstas for the messages...I do think those debugs are a bit aggressive.  If doing another rev, I'd say rip em19:09
thorstor at least prune them...19:09
efriedHad to do another rev for UT and one bugfix.19:11
esbergluefried: Thoughts on disabling those 2 tests? Or continue debugging19:15
efriedesberglu Debug.19:15
efriedThe logs should hopefully make it clear whether the LPAR was deleted (or never existed) before we got to that point.19:15
efriedIn either case, I think a 404 on CloseVTerm should be ignored.19:16
thorstefried: I agree.19:23
*** k0da has joined #openstack-powervm19:25
*** jpasqualetto_ has joined #openstack-powervm19:31
esbergluefried: It looks like the LPAR never actually existed19:34
*** jpasqualetto has quit IRC19:34
efriedesberglu Fine.19:34
efriedFeel like proposing that fix?19:34
*** jpasqualetto_ has quit IRC19:36
esbergluSure19:36
*** alainfisher has joined #openstack-powervm19:38
*** jpasqualetto has joined #openstack-powervm19:45
openstackgerritEric Fried proposed openstack/networking-powervm master: Event-driven heal_and_optimize for SR-IOV  https://review.openstack.org/43607819:49
efriedthorst ^^ before you leave.19:50
thorstanything for you efried19:51
efried:)19:51
thorstso we still poll on the heal and optimize, this just escalates it a bit19:51
thorstfor sr-iov19:51
efriedthough I guess svenkat has the power to approve.19:51
thorstwell, lets get two +2's19:52
efriedYes, the original h-a-o behavior hasn't changed.  This just triggers it extra on certain events.19:52
thorstsvenkat will be owning this more than me - he's the one catching defects for it19:52
efriedeshaseth tested it earlier (at PS4) and it works.19:52
efriedSo at least make sure I didn't break the logic with PS5 ;-)19:52
efriedNote bug in PS3 that made it not work at all.  Subtle.19:52
thorsthah19:53
thorstyeah, this looks solid to me19:54
thorstwish we could get a heal and optimize for the sea...but meh.19:54
thorstin terms of priorities...that's so low19:54
efriedWe gonna kick SEA to the kerb at some point anyway?19:55
efriedFor that matter, we'll need SR-IOV direct VIF.19:55
efried...with some kind of answer for mobility...19:55
efriedOkay, now back to trying to figure out how to break SSP into smaller change sets.19:55
thorstefried: SEA to the kerb...in like 50 years19:57
thorstI don't think we'll be enhancing it much, but its still super broadly used19:57
efriedSure.  IBM time.19:57
thorstlike Linux Bridge  :-)19:57
*** svenkat has joined #openstack-powervm19:57
efriedthorst Man, I can get rid of like 50LOC by ripping out the config option, but that hardly seems worth it.19:59
efriedThe only other gains I can make would be by ripping out test or doc.19:59
thorstsvenkat: I'm good with https://review.openstack.org/#/c/436078/520:00
efriedOr splitting out functionality - like create/connect in one change set, delete/disconnect in another - which I hate the idea of.20:00
thorstbut since your team is the one supporting that, I want you to be the second +2 on it20:00
svenkatthorst: looking…20:00
thorstefried: yeah...I hate the idea too, but it is a bit overwhelming to look through right now20:00
thorsteven to me...and I know the code.20:00
svenkatefried: looking good, is_hao_event filters out physical port change (anything on the physical port i guess, not just label) and triggers heal_and_optimize...20:03
svenkatlooking at UT20:03
efriedsvenkat Yeah, unfortunately it's as granular as we're able to get right now.20:04
efriedBut that should be okay.20:04
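
Roughly, the shape of that check (a hedged sketch; the event type strings and the detail-string match are assumptions, not the merged networking-powervm code):

    def is_hao_event(evt):
        """Decide whether a pypowervm event should trigger heal_and_optimize."""
        # Only react to add/modify events whose detail mentions a physical port;
        # nothing finer-grained (e.g. "only the label changed") is distinguishable.
        if evt.etype not in ('ADD_URI', 'MODIFY_URI'):
            return False
        return 'PhysicalPort' in (evt.detail or '')
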
svenkatyess… adding my +2.20:06
svenkatefried: done.20:07
efriedsvenkat Okay, merging.  Does this need an ocata cherry-pick, or are you going to carry it internally?20:07
svenkatis it possible to cherry pick into ocata?20:08
efriedI would think so.  thorst ?20:08
efriedWe have a bug for it.  Have had for a while.20:08
efriedhttps://review.openstack.org/#/c/443956/  Will let thorst dictate whether this goes in or not.20:10
svenkatafter pep8 and py27 gates i will add my +220:11
openstackgerritMerged openstack/networking-powervm master: Event-driven heal_and_optimize for SR-IOV  https://review.openstack.org/43607820:14
svenkatefried: there is an issue with ocata.. i think it is related to neutron_lib20:16
efriedsvenkat That shouldn't have happened, gr.  Means networking-powervm is probably entirely busted in ocata.20:21
efriedWill need a new fix for that.20:21
svenkatchecking20:23
svenkatefried : from neutron import context as ctx - this is supposed to work in ocata, isn't it? in master it is moved to neutron_lib. is that correct20:28
efriedsvenkat I believe that's right.  But this is a different error.20:29
svenkatfirst error I see is: from neutron import context as ctx    ImportError: cannot import name context.20:31
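
A common compatibility pattern for this kind of move, shown only as an illustration (the actual fix in 442540 and its backport may simply switch the import per branch):

    try:
        # Newer trees: context lives in neutron_lib.
        from neutron_lib import context as ctx
    except ImportError:
        # Older (ocata-era) trees: fall back to the original neutron location.
        from neutron import context as ctx
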
efriedsvenkat Are you going to open a bug and backport https://review.openstack.org/#/c/442540/ ?20:44
svenkatyes… few minutes20:44
*** k0da has quit IRC20:46
*** k0da has joined #openstack-powervm20:47
*** k0da has quit IRC20:54
*** alainfisher has left #openstack-powervm20:58
*** alainfisher has joined #openstack-powervm20:58
*** alainfisher has quit IRC20:58
svenkatefried : opened bug : https://bugs.launchpad.net/networking-powervm/+bug/167162821:00
openstackLaunchpad bug 1671628 in networking-powervm "Need to back port 442540 to coat" [High,New] - Assigned to Sridhar Venkat (svenkat)21:00
*** smatzek_ has quit IRC21:01
efriedsvenkat Roger that.  Cherry-pick away.  Then I'll rebase the HAO change set on top.21:01
svenkatsure… one moment21:01
svenkatdone : https://review.openstack.org/#/c/443960/21:02
*** k0da has joined #openstack-powervm21:07
efriedsvenkat I'll wait for jenkins.21:08
svenkatefried sure.21:08
thorstefried svenkat: https://bugs.launchpad.net/bugs/167162821:11
openstackLaunchpad bug 1671628 in networking-powervm "Need to back port 442540 to coat" [High,New] - Assigned to Sridhar Venkat (svenkat)21:11
thorstis that the right way to name the bug?21:11
thorstshouldn't the bug be a description of the issue?21:12
efriedYeah, that's not a good title.21:12
efriedI didn't read it, mahbad.21:12
thorstefried: I mean, I know you've classified me as a project manager, so I'm just doing my part21:12
efriedI'm actually betting there's already a bug for all the other stuff that got broke by that change.21:13
svenkatthorst: i added description in the details.  i was looking for a short title - i see now - ocata  became coat, my browser replaced it21:13
efriedWe should just add networking-powervm to that bug.21:13
svenkatI am rephrasing title21:15
svenkatcalled it Import context from neutron_lib21:17
svenkatcherry-picked into ocata also failed with a similar error21:19
efriedHm, passes locally.  Wonder what that's about.21:24
efriedI wonder if this is a global reqs problem.21:25
efriedthorst adreznec Should https://review.openstack.org/#/c/439805/2/requirements.txt be requiring neutron?21:26
efried...or python-neutronclient?21:26
thorstefried: so...for stable branches yeah21:26
thorstnot python-neutronclient21:26
thorstbut the issue is we need it to pull down the LATEST neutron21:27
thorstso we had that dependency in the tox.ini and setup.cfg I think?21:27
efrieduh, looking...21:27
efriedYeah.21:28
thorstit was in line with what some other neutron ml2 plugin was doing at the time21:28
efriedOne sec.21:28
efriedthorst svenkat Like this: https://review.openstack.org/#/c/385486/21:30
thorstcorrect21:30
efriedWe need to do that for ocata.  svenkat may as well do it in this change set.21:30
thorstfair enough21:30
thorstI wonder if that will solve some of esberglu's issues with CI there21:30
svenkatefried  ok…21:31
efriedthorst Yeah, we need to do that for the other projects.21:32
*** apearson has quit IRC21:32
svenkati will generate another review for 443960 .21:33
*** apearson has joined #openstack-powervm21:35
esbergluJust a heads up21:36
esbergluuntil21:36
esbergluhttps://review.openstack.org/#/c/443558/121:36
esberglugets in, ocata CI is failing21:36
*** edmondsw has quit IRC22:03
svenkatsecond patchset ready for https://review.openstack.org/#/c/443960/22:15
*** jpasqualetto has quit IRC22:19
svenkatjenkins job went through except out-of-tree-pvm. Error: IOError: [Errno 2] No such file or directory: '/opt/stack/tempest/.tox/tempest/lib/python2.7/site-packages/appdirs-1.4.2.dist-info/METADATA'22:31
svenkatbut py27 jobs are going through now.22:31
esberglusvenkat: Yep. https://review.openstack.org/#/c/443558/122:33
*** edmondsw has joined #openstack-powervm22:33
svenkatesberglu : thanks.22:33
esbergluNeeds to merge for ocata CI to pass22:34
openstackgerritEric Berglund proposed openstack/nova-powervm master: Except HttpError 404 close_vterm  https://review.openstack.org/44403122:36
*** edmondsw has quit IRC22:38
*** thorst has quit IRC22:39
*** apearson has quit IRC22:40
efriedesberglu Where's that pastebin with the 404 error?22:45
esbergluhttp://paste.openstack.org/show/602090/22:46
efriedAha.22:46
efriedOkay, so I don't agree with your solution.22:46
efriedWe're already trapping the 404 at the driver level, which is as it should be.22:46
efriedThe problem is, we're trapping it based on a specific REST path, which is horrible.22:47
efriedSee nova_powervm/virt/powervm/driver.py:68522:47
efriedArguably, based on the logic in the other methods in that module (e.g. power_on and power_off), we should be trapping and ignoring the 404 where you're doing it, and NOT at the driver level at all.22:50
efried...which would mean your change isn't necessarily wrong, just not the complete solution.22:51
esbergluOkay so you're proposing I remove the 404 from the driver level. And then add an exception for 404 on the lpar delete as well?22:59
esbergluAt the vm level23:00
*** apearson has joined #openstack-powervm23:01
*** dwayne has quit IRC23:01
efriedesberglu Yeah, something like that.23:02
efriedWouldn't mind getting thorst's nod, as that's a nontrivial change.23:02
efriedBut it should be okay.  We only use dlt_lpar in two places: the destroy flow, and the spawn revert.23:02
efriedIn both cases, I'm pretty sure we don't want to bubble up if the LPAR isn't there.23:02
efriedBUT23:02
*** thorst has joined #openstack-powervm23:03
efriedWe should first make sure that 404 on the CloseVTerm URI really means the LPAR ain't there.23:03
efriedThat should be easy enough to reproduce, I should think.23:04
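
A minimal sketch of the approach being discussed (trap and ignore the 404 in the vm-level delete path rather than matching a REST path in driver.py); the function layout here is an assumption, not the final change set:

    from pypowervm import exceptions as pvm_exc
    from pypowervm.tasks import vterm

    def dlt_lpar(adapter, lpar_uuid):
        """Delete an LPAR, tolerating the case where it is already gone."""
        try:
            vterm.close_vterm(adapter, lpar_uuid)
        except pvm_exc.HttpError as e:
            if e.response is not None and e.response.status == 404:
                # 404 on CloseVTerm: the LPAR (or its vterm) no longer exists, which
                # is fine for both the destroy flow and the spawn revert.
                pass
            else:
                raise
        # ... proceed with the actual LPAR delete, ignoring a 404 there as well ...
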
*** thorst has quit IRC23:07
*** tjakobs has quit IRC23:12
*** esberglu has quit IRC23:21
*** esberglu has joined #openstack-powervm23:33
*** esberglu has quit IRC23:37
*** tblakes has quit IRC23:46
*** seroyer has quit IRC23:48
*** apearson has quit IRC23:50
*** dwayne has joined #openstack-powervm23:59
