13:01:04 <esberglu> #startmeeting powervm_driver_meeting
13:01:05 <openstack> Meeting started Tue Jul 11 13:01:04 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:08 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:01:24 <efried> \o
13:02:06 <edmondsw> I will be presenting in another meeting, so can't follow the conversation here... will read back later
13:02:15 <mdrabe> o/
13:02:16 <esberglu> edmondsw: ack
13:02:33 <esberglu> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:02:41 <efried> I'm asking Jay & Chhavi if they can join.
13:02:59 <esberglu> ^ Let's try to start adding stuff to that agenda. I think it will cut some serious time off these meetings
13:03:17 <efried> Because I think our #1 priority at this point is getting iSCSI verified so we can cut a new pypowervm release.
13:03:28 <chhavi> hi efried,
13:03:51 <efried> Hi chhavi - seeing if Jay can join too.
13:04:29 <chhavi> Jay and I tried to do the verification yesterday and there were environment issues. For verification we just need a setup where at least the devstack is configured properly
13:04:49 <chhavi> jay1_: did you try anything further this morning?
13:05:21 <jay1_> Yeah.. still getting the stacking issue
13:05:28 <efried> esberglu Can you take an #action to stack the nodes jay1_ is using ASAP?
13:06:37 <esberglu> efried: jay1_: What is the current issue?
13:06:56 <jay1_> I have created a new etherpad for current test issues and status and linked it in our original etherpad
13:06:57 <esberglu> jay1_: Did adding those neutron dirs get you past the issue you were seeing yesterday?
13:07:36 <efried> #link https://etherpad.openstack.org/p/community_testing_issues_status
13:07:37 <jay1_> no, I even ran the whole prep script and I'm still getting the issue
13:08:44 <efried> esberglu At this point I think we need to take over jay1_'s system(s) and do whatever it takes to get it/them stacked and ready by his next morning.
13:09:10 <esberglu> efried: Sure. I can put the other stuff aside today and focus on that
13:09:24 <esberglu> jay1_: Can you ping me the creds in Slack?
13:09:42 <jay1_> I have put them in the same etherpad
13:11:50 <esberglu> jay1_: You can't put internal IPs in the external OpenStack etherpad
13:12:19 <efried> Just knowing it's neo32 with standard creds is enough to get you there.
13:12:34 <efried> jay1_ What's the status of neo34?
13:13:04 <jay1_> neo34 has a deploy issue
13:13:24 <jay1_> I will remove the IP from the etherpad
13:13:30 <esberglu> jay1_: I already did
13:13:36 <jay1_> sure
13:14:13 <esberglu> Anything else we need to talk about here? Or should we come back to it once the system is stacked?
13:15:01 <efried> jay1_ chhavi Just so we're clear:
13:15:22 <jay1_> For the deploy issue I will open a neo defect so that Hsien can have a look at it.
13:15:23 <efried> The urgent priority is signing off on the current state of pypowervm so we can cut a new release there and push it through global requirements.
13:16:20 <efried> If we need to continue to work bugs in REST or nova-powervm after that, that's fine.  We're just trying to avoid needing changes to pypowervm after we cut a new release - and we want that release to happen ASAP.
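For reference, "pushing it through global requirements" means proposing a version bump in the openstack/requirements repo once the release is tagged. A rough sketch of what that change typically looks like (the version number below is purely hypothetical):

    # global-requirements.txt -- raise the minimum to the new release (hypothetical version)
    pypowervm>=1.1.6  # Apache-2.0
    # upper-constraints.txt -- pin the exact version CI installs
    pypowervm===1.1.6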
13:18:46 <chhavi> yes, if the stacking issue is resolved, we can do a quick validation of pypowervm
13:19:32 <esberglu> Cool, sounds like we have a plan there. Any other driver topics that need to be discussed?
13:20:18 <jay1_> esberglu, can we have another etherpad for tracking the current CI issues and their workarounds?
13:20:33 <jay1_> it would help us, especially during our daytime,
13:20:45 <jay1_> instead of waiting for you until evening.
13:20:58 <esberglu> jay1_: https://etherpad.openstack.org/p/powervm_ci_todos
13:21:25 <esberglu> I don't want to add another etherpad on top of that
13:21:39 <esberglu> I can start adding you to CI reviews if you would like, so you can have an idea of what's happening
13:21:47 <jay1_> if it has that info, that is fine. let me have a detailed look.
13:22:10 <efried> esberglu I think what jay1_ is asking for is more of a cheat-sheet of things that are going to be generally useful to someone trying to stack.
13:22:49 <jay1_> efried: exactly, because most of the time that is what becomes the problem.
13:23:03 <efried> What you've got at the bottom of that etherpad (the stuff that's crossed out) is a large superset of that info.
13:23:14 <efried> but probably also missing a couple of items that you just saw and fixed real quick.
13:23:31 <esberglu> efried: Yeah that's why I was keeping the history. Your second point is also accurate
13:24:21 <efried> What would be ideal is a paste of a stacking error plus a description of, or pointer to, the fix for that error.
13:24:33 <efried> So someone could search for it.
13:25:55 <esberglu> I'll start posting stack traces on paste.o.o and linking them in when a fix is found
13:26:19 <efried> esberglu paste.o.o won't facilitate search
13:26:55 <efried> Like in this case I would want to be able to search for "tee: /etc/neutron/neutron.conf: No such file or directory" and it would get right to that error.
13:27:03 <esberglu> I don't want to clutter up that etherpad with traces though. And I want to minimize the number of locations that we are tracking things
13:27:28 <esberglu> I will provide enough info to find what you are looking for
13:27:33 <esberglu> Then link in the full trace
13:27:45 <esberglu> I will _try_ to do that
13:27:54 <efried> Okay, hopefully that'll be enough.  We'll try it and see how it flies.
13:28:41 <efried> You don't have to do it retroactively, I don't think, as long as we can get jay1_'s systems stacking now.
13:28:47 <efried> Then we can just maintain it going forward.
13:28:58 <esberglu> efried: Sure
13:29:18 <efried> During the transition period, us other poor schlubs may have to ask you one-off questions as we try to get up to currency.
13:29:37 <efried> but that's what the status quo has been up to this point :)
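As an illustration of the kind of cheat-sheet entry being proposed, here is the error quoted above paired with the workaround mentioned earlier in the meeting (pre-creating the missing neutron config directory). The exact commands are an assumption, not a verified fix:

    Symptom during stack.sh:
        tee: /etc/neutron/neutron.conf: No such file or directory
    Workaround (assumed): create the config dir before re-stacking, e.g.
        sudo mkdir -p /etc/neutron
        sudo chown $USER /etc/neutron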
13:30:15 <esberglu> Alright, sounds like a plan. Let's move on
13:30:27 <esberglu> Any other driver topics?
13:31:09 <mdrabe> Yea
13:31:09 <esberglu> Otherwise I can do a quick run-through of what's happening in CI
13:31:16 <esberglu> mdrabe: go ahead
13:31:25 <mdrabe> On the NVRAM stuff, the update is that I still have to get with Farnaz on the environment stuff
13:31:59 <mdrabe> Might involve going off on a tangent in pvc to get the scale tests running, but that's it, that's the update
13:32:52 <esberglu> Alright. I'll just run through CI quick then
13:32:57 <esberglu> #topic CI
13:33:31 <esberglu> So as we talked about, I've been working on the devstack-generated tempest.conf
13:33:39 <esberglu> This has spawned a bunch of other threads
13:33:59 <esberglu> Like ways to remove skip list tests, tempest bugs, missing tempest/devstack features, etc.
13:34:04 <BorD_> with Nutanix coming to Power, any chance we will see Nutanix with PowerVM / AIX / IBM i?
13:34:16 <BorD_> or will it strictly be Nutanix with Linux on Power
13:34:29 <esberglu> I'm planning on organizing the list of TODOs that have come up this afternoon
13:34:46 <esberglu> And continuing to work through them
13:34:51 <esberglu> That's all I had for CI
13:35:13 <efried> BorD_ That sounds like a thorst question.  Would you mind holding until we're done with this meeting?
13:36:33 <thorst> yeah, I'll respond on the nutanix stuff after the meeting  :-)
13:36:50 <esberglu> Any final discussion items? We already covered driver testing
13:38:32 <efried> We need to start queueing up PCI discussions, but edmondsw needs to be involved, so that can happen later.
13:39:52 <esberglu> Alright. Sounds like that's it for the day then. Thanks for joining
13:39:58 <esberglu> #endmeeting