13:01:15 #startmeeting powervm_driver_meeting
13:01:16 Meeting started Tue Jun 27 13:01:15 2017 UTC and is due to finish in 60 minutes. The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:19 The meeting name has been set to 'powervm_driver_meeting'
13:01:28 \o
13:01:33 o/
13:02:04 o/
13:02:12 AndyWojo Well, I wouldn't say what we've got in tree is usable at this point. Were there other third-party (out-of-tree) drivers in the survey?
13:02:33 #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:02:38 #topic In Tree Driver
13:02:50 #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:03:37 Anything to talk about here? Or still just knocking through the todos?
13:03:53 AndyWojo which survey is that? I thought the Ocata survey was closed a while back. We are working to get on the survey.
13:04:40 efried: what operations are ready for test in IT?
13:05:06 jay1_ That hasn't changed in a while. SSP disk was the last thing we merged.
13:05:16 ok..
13:05:22 jay1_ no network yet IT
13:05:51 that would be the next one to do?
13:06:03 jay1_ but you can deploy with SSP boot disk and do some things like stop, restart... see the support matrix
13:06:08 efried you have a quick link for that?
13:06:18 Config drive will probably be next, cause it's easy.
13:06:42 jay1_ network will be one of the priorities for queens, along with config drive
13:06:55 #link https://docs.openstack.org/developer/nova/support-matrix.html
13:06:58 okay
13:08:34 efried, in that matrix PowerVM refers to OOT only?
13:08:41 jay1_ IT only
13:08:52 ah ok.
13:08:59 OOT we have a lot more green check marks
13:09:01 http://nova-powervm.readthedocs.io/en/latest/support-matrix.html
13:09:05 That's OOT
13:09:48 esberglu we need to change the OOT version's checkmarks to green... would be much easier to read
13:09:55 I'll throw that on the TODO
13:10:17 +2
13:11:59 Alright, sounds like that's it for IT
13:12:13 #topic Out Of Tree Driver
13:13:21 when is the next ISCSI integration point?
13:13:28 is that integration done?
13:14:05 Have you heard anything from chhavi about the latest pypowervm + https://review.openstack.org/#/c/467599/ ?
13:14:28 no
13:14:29 She was going to sniff test that to make sure we didn't need any further pypowervm fixes so we can cut a new release. I want to get that done pretty quickly here.
13:14:53 efried agreed
13:15:24 jay1_ please talk to chhavi about this. I'll send a note as well to try to push this along
13:16:00 edmondsw: sure
13:18:47 note sent
13:18:52 If it turns out you do need pypowervm fixes let me know and I can push a run through CI with it when ready
13:19:32 Nvm, just clicked on the review
13:20:06 I don't think it would hit any changes going through our CI?
13:20:22 It what?
13:21:02 The pypowervm that's merged right now is copacetic. Last thing merged was the power_off_progressive change, and you already tested that.
13:21:13 The question is whether we're going to need anything else in 1.1.6
13:21:14 Well any pypowervm changes would be related to ISCSI right? Which isn't part of the CI
13:21:38 esberglu Well, right, but a regression test wouldn't be a bad thing.
13:21:41 So I don't know that the changed paths would get hit
13:21:50 Yeah I can push one anyways just to be safe
13:23:15 mdrabe efried should we talk about https://review.openstack.org/#/c/471926/ now?
13:23:35 okay
13:23:36 we've had emails flying back and forth... hash it out?
13:24:01 mdrabe you still here?
13:24:07 Yea I'm gonna whip that up this afternoon I think
13:24:28 what exactly does that whipping entail? ;)
13:24:36 With the caching, and evacuating on instance deletion events
13:25:05 how do you plan to demonstrate perf improvement to satisfy efried?
13:25:10 Respond to efried's comments and introduce the caching to event.py
13:25:35 Stop calling that instance object retrieval
13:25:38 I think we're out of runway to get arnoldje to test this.
13:25:52 right, I was afraid of that
13:26:06 Who's his replacement, and does said replacement have the wherewithal and time to do it?
13:26:31 I haven't heard of a replacement... I can ask
13:26:54 If anything I can test it myself, though I don't have any fancy performance tools
13:27:07 edmondsw: The OpenStack User Survey. Only PowerKVM was on the list; I selected other and filled in PowerVM, since I'm in the middle of implementing it
13:28:11 mdrabe Yeah, I'm obviously concerned that it *works*, but that's not sufficient for me to want to merge it. We have to have a demonstrable nontrivial performance improvement to justify the risk.
13:28:31 For the caching I'm still concerned in the pvc case around management of LPARs
13:28:52 When arnoldje validated the PartitionState change, he was able to produce hard numbers.
13:29:13 My fear is that this change is bigger & more pervasive, but will yield a smaller return.
13:29:49 I've no hard numbers, but he said something of a 10-12% deploy time improvement
13:30:00 AndyWojo I think the last user survey is closed. But I'm hoping to have PowerVM on the October one.
13:30:01 But there're fewer NVRAM events than PartitionState events
13:30:09 during deploy
13:30:43 edmondsw: they just sent an e-mail out saying the user survey is now open, and it's for June - Dec
13:30:57 mdrabe efried yeah, arnoldje had estimated something like 5% improvement for this
13:30:57 7.2% improvement was what he said for the PartitionState change.
13:31:01 OpenStack Operators List
13:31:35 AndyWojo ok, hadn't seen that yet... guess we missed the boat. Will shoot for the next one then
13:35:20 annasort gave me a couple names to do perf testing now, I'll ping them to you efried mdrabe
13:35:33 edmondsw Yea I got em
13:35:45 edmondsw Ping anyway, maybe your names are different than mine.
13:36:49 pinged you both on slack
13:37:40 K so I'll work on that. good?
13:37:50 Cool man.
13:38:46 Alright, let's move on to CI then
13:38:58 #topic PowerVM CI
13:39:34 The network issues caused quite a bit of inconsistency, so I redeployed last night
13:40:01 Then the control node's /boot/ dir filled up, which also caused a bunch of inconsistencies
13:40:14 Is the proper way to clean that out
13:40:21 Just can't get a break, can ya
13:40:26 apt-get autoremove?
13:40:53 esberglu what filled that partition? Ideas on how to prevent that in future?
13:41:31 edmondsw: I'm pretty sure you can just run apt-get autoremove and it cleans it out, however I'm no expert on apt
13:41:41 But since it was at 100% that command was also failing
13:41:56 So I had to manually go in and clean out the old ones
13:42:04 I wouldn't expect /boot to be affected by autoremove.
13:42:10 Do you have old kernels lying around?
13:42:16 I had that happen.
13:42:23 efried: Yeah
13:43:11 dpkg -l | grep linux-image
13:43:29 If you see more than one rev, you can *probably* apt-get remove all but the newest.
13:44:11 efried: That sounds scary, what happens if the newest errantly gets deleted?
13:44:23 You don't boot.
13:44:25 But don't do that.
13:44:33 :)
13:44:40 efried: Yeah we just have to make sure that the logic is really good
13:44:51 This is not something I would automate, dude.
13:44:57 Do it once to free up space.
13:45:10 Manually type in the full package names of the old ones.
13:45:14 efried: Yeah but I want to add a step that would clean this every time
13:45:18 you could automate detection of the problem... cron job that emails you if it sees things are getting filled up?
13:45:26 but right, don't automate cleanup
13:45:27 And I read something last night saying apt-get autoremove would do that
13:45:28 "every time" isn't a thing that should happen for old kernel images.
13:45:36 autoremove won't hurt.
13:45:43 But I don't think it's likely to help /boot most of the time.
13:47:12 efried: Okay. I'll try to find that article I was reading, but stick with manual cleanup for now
13:47:20 You could definitely work up a cron job to keep you informed of filling file systems.
13:47:49 That's all I had for CI
13:47:59 #topic Driver Testing
13:48:06 We kinda covered this above
13:48:21 Any other thoughts about it?
13:52:50 any tentative dcut as such, to close the pike changes?
13:53:56 jay1_ the stuff we're still working on for pike is mostly doc changes
13:54:58 I've got a change in progress for disabling the compute service if there's no VIOS or we can't talk to the NovaLink REST API
13:55:06 that's about it, I think
13:55:25 edmondsw: how about ISCSI merging, do we have any planned date?
13:56:22 jay1_ oh, I thought you were talking about IT... we're not doing iSCSI IT for pike, but yeah, we will be doing that OOT
13:57:02 efried, I think there are still some IT changes that we need to push to OOT for pike, right? anything else you can think of?
13:57:30 jay1_ you can look over the TODO etherpad: https://etherpad.openstack.org/p/powervm-in-tree-todos
13:57:48 edmondsw: sure
13:57:48 edmondsw Should all be in the etherpad, I hope.
13:57:56 yep
13:59:58 #topic Other Discussion
14:00:10 Any last words?
14:01:09 supercalifragilisticexpialidocious
14:02:07 lol
14:02:10 #endmeeting
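
Editor's note: as a follow-up to the CI discussion above, here is a minimal sketch of the /boot housekeeping that was discussed: a one-time manual kernel cleanup plus a detection-only cron check. It assumes a Debian/Ubuntu control node; the 85% threshold and the ci-admin@example.com recipient are illustrative and not from the meeting.

    #!/usr/bin/env bash
    # Sketch only: /boot housekeeping for the CI control node.
    # Assumptions: Debian/Ubuntu host; threshold and mail recipient are examples.

    # --- One-time, manual kernel cleanup ------------------------------------
    # List installed kernel image packages; the running kernel is the one from
    # `uname -r` and must never be removed.
    dpkg -l | grep linux-image
    uname -r
    # Then remove old revisions by hand, e.g.:
    #   apt-get remove linux-image-<old-version>-generic
    #   apt-get autoremove --purge

    # --- Automated detection only (cron-friendly) ---------------------------
    # Emails a warning when /boot crosses the threshold; cleanup stays manual.
    THRESHOLD=85
    USAGE=$(df -P /boot | awk 'NR==2 {gsub("%", "", $5); print $5}')
    if [ "$USAGE" -gt "$THRESHOLD" ]; then
        echo "/boot is at ${USAGE}% on $(hostname)" \
            | mail -s "PowerVM CI disk usage warning" ci-admin@example.com
    fi

The split mirrors the advice in the log: automate only the detection (the cron check), and keep the removal of kernel packages a deliberate manual step.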