13:31:38 #startmeeting powervm_ci_meeting
13:31:39 Meeting started Thu Feb 9 13:31:38 2017 UTC and is due to finish in 60 minutes. The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:31:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:31:43 The meeting name has been set to 'powervm_ci_meeting'
13:31:46 o/
13:31:49 o/
13:31:56 #help
13:32:16 o/
13:32:58 #topic Power-off issues?
13:33:48 Sure
13:34:43 So the good news (esberglu please confirm) is that PS11 of the power.py refactor ran through the CI with no appreciable difference in failure rate.
13:34:54 Yep
13:35:13 Which means it's at least working to the point of backward compatibility, at least in the code paths that are hit by the CI.
13:35:25 With the caveat that that doesn't include some things, including IBMi.
13:35:44 The bad news (well, neutral news, really) is that we're still seeing failures.
13:35:55 still seeing power off failures or general failures?
13:35:57 Which I really don't want to address in the current patch set, which I'd like to reserve for pure refactor.
13:36:05 power off failures, at least.
13:36:43 I'm still working on the whole "enumerating all the job options" thing. Should have a patch set up for a look by lunchtime, I believe.
13:37:05 OK - just to verify... with the patch, we still see power off failures?
13:37:10 yes
13:37:10 or just 'failures'
13:37:13 k.
13:37:21 Still power off failures
13:37:46 Before we merge that sucker, I would like to run some live tests on IBMi. nvcastet has volunteered to help me out in some fashion there. I think by sliding me a disk with an IBMi image on it.
13:38:23 so efried, was the refactor not supposed to help with power off failures?
13:38:26 just make it more manageable?
13:38:26 No.
13:38:39 Just make it easier to debug and fix said failures.
13:38:48 ok
13:38:52 confusion cleared.
13:39:15 So I think once I get the PowerOpts patch up, I'll first investigate those failures and try to put up a separate change set (on top of the refactor) that addresses them.
13:39:32 With esberglu's handy-dandy live patching gizmo, we ought to be able to run that through the CI fairly easily, yes?
13:39:39 Yep
13:39:46 #action efried to finish proposing PowerOpts
13:39:59 #action efried to investigate power-off failures and propose a fix on top.
13:40:20 #action efried to live test on IBMi (and standard, for that matter).
13:40:53 Anything else on the power-off issue for now?
13:41:05 esberglu Other topics?
13:41:21 #topic CI redeploy
13:41:45 Just wanted to say that the redeploy finished last night
13:42:04 jobs going through yet?
13:42:07 So now we are running 1.0.0.5 across the board
13:42:11 Yep.
13:42:13 neat
13:42:25 I haven't looked at any results yet though
13:42:34 that's good to know for the CI host server... CPU utilization on that sucker is like 10%
13:42:40 after we moved everything to the SAN
13:44:11 That's all I had for that, just wanted to update
13:44:27 #topic In Tree CI
13:45:11 I think we need to talk about how we want to handle moving the in-tree runs from silent to check when we are ready
13:45:44 Because if we start posting results, it will fail everything until PS1 is through
13:45:56 which is a lot of red coming from our CI
13:46:20 Can it be as simple as checking for the presence of, say, our driver.py?
13:46:40 Or do we not know that until too late in the process?
13:47:09 I guess we could inspect the commit tree and bail out if we don't see that first change set's commit hash / Change-Id.
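
(For reference: a minimal sketch of the check efried proposes just above, i.e. skip the in-tree run unless the checked-out nova tree already contains the in-tree driver. This is not the team's actual CI code; the NOVA_DIR default, the driver path, and FIRST_CHANGE_ID are illustrative placeholders, since the real first change set's Change-Id isn't given in the meeting.)

    #!/usr/bin/env python
    # Sketch only: bail out of the in-tree CI run if the in-tree PowerVM
    # driver isn't in this nova checkout yet. All paths/values below are
    # assumptions, not taken from the actual CI scripts.
    import os
    import subprocess
    import sys

    NOVA_DIR = os.environ.get('NOVA_DIR', '/opt/stack/nova')
    DRIVER_PATH = os.path.join(NOVA_DIR, 'nova', 'virt', 'powervm', 'driver.py')
    # Placeholder; would be the Change-Id of the first in-tree change set.
    FIRST_CHANGE_ID = 'I0000000000000000000000000000000000000000'

    def in_tree_driver_present():
        """Return True if the in-tree PowerVM driver is in this checkout."""
        # Cheapest check: is the driver file there at all?
        if os.path.isfile(DRIVER_PATH):
            return True
        # Fall back to searching the commit history for the first change set.
        log = subprocess.run(
            ['git', '-C', NOVA_DIR, 'log', '--oneline',
             '--grep=Change-Id: %s' % FIRST_CHANGE_ID],
            capture_output=True, text=True, check=True)
        return bool(log.stdout.strip())

    if __name__ == '__main__':
        if not in_tree_driver_present():
            print('In-tree PowerVM driver not present; skipping in-tree run.')
            sys.exit(0)  # exit quietly rather than posting a failing vote
        print('In-tree PowerVM driver found; proceeding with the run.')
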
13:47:49 efried: yep
13:47:52 that's what we should do
13:48:04 if the commit message (probably?) has the word powervm in it, we publish.
13:48:38 and for the oot driver, if the commit message has a set of files from the nova project and contains the word powervm, we should just not run (because we'll fail)
13:48:44 (due to duplicate options)
13:48:55 Or if the file list in the change set contains '/powervm/'?
13:49:19 Wait, why do we need to do something special for OOT?
13:49:35 The OOT driver will always fail on an IT driver change set.
13:49:37 Oh, you mean we don't run the *in-tree* CI on *out-of-tree* patch sets.
13:49:44 because the OOT driver has duplicate options
13:49:57 Gotcha. So it should be as simple as whether the change set is in the nova-powervm project, neh?
13:50:04 so if a patch set comes in that is in tree for PowerVM, we should avoid running the OOT driver change
13:50:12 otherwise we post a +1 and a -1 in the same patch
13:50:21 Sorry, yeah, I had it backwards.
13:50:26 k
13:50:40 once it merges, we can remove the opts from the oot.
13:50:44 Right.
13:50:45 and be happy again
13:50:59 So esberglu Do you know how to make all of that happen?
13:51:10 I can help out with the git commands if you need.
13:51:45 #action esberglu to set up mutually-exclusive running/publishing of CI results for in- and out-of-tree.
13:51:56 #action efried to assist as needed.
13:52:07 (that's not going to show up right in the minutes)
13:53:15 Cool. That's all I had for in-tree
13:53:24 Any other topics?
13:53:33 I'm assuming that once we get in-tree going, we flip back to ansible CI?
13:53:42 I know that the openstack-ansible team is still waiting there.
13:53:52 Yep
13:53:52 Yeah
13:54:04 FYI I discussed that a bit with Jesse last week
13:54:13 ok - yeah, that was my next question
13:54:22 do they understand we still are targeting that?
13:54:27 (seems like they do)
13:54:29 Gave him a bit of status on where we were with CI (the whole in-tree driver, etc)
13:54:34 Yeah, they do
13:54:40 k. Assume you'll connect up more at PTG?
13:54:44 Yep
13:54:46 That was the plan
13:55:03 rockin
13:55:08 that was the only other thing I had
13:55:27 Just curious - wangqwsh esberglu how much work do you think is left there?
13:56:05 I know the whole OVS thing needs to be solved...
13:56:40 openstack can be installed via osa, but we can't run tempest to test it yet
13:58:18 so we need to write some code to get tempest running for the powervm osa ci
13:59:08 Is that what Nilesh is supposed to be doing?
13:59:23 Nilesh is supposed to do some tempest tests with it, yeah
13:59:31 we know that other envs have gotten that running
14:00:27 Right, you can definitely run tempest against OSA with PowerVM. For the most part it really shouldn't be all that different than running it against a devstack AIO
14:00:34 Since it's just calling into the APIs
14:04:19 Cool. Sounds like we are starting to get that back on the radar, but we aren't too far away
14:04:37 Anything else?
14:05:28 yes, when can we continue work on the powervm osa ci? after in-tree is ready, right?
14:08:29 If anyone has free cycles they can go for it. I reserved systems for the infrastructure
14:08:31 a question related to converting an instance's uuid to a powervm uuid.
14:08:37 Otherwise yes, after in-tree
14:08:54 wangqwsh Is that a CI-related question, or should it wait til after the meeting?
14:09:27 not a ci question
14:09:29 ok
14:09:46 #endmeeting
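
(For reference: the run/publish rules discussed under the In Tree CI topic could be sketched roughly as below. This is an illustrative Python sketch, not the CI's actual scripting; the project-name test, the classify_change helper, and the 'publish'/'silent'/'skip' labels are all assumptions.)

    # Sketch of the change-set classification from the In Tree CI discussion:
    # publish in-tree results and skip the OOT run only for nova change sets
    # that touch PowerVM, so the CI never posts a +1 and a -1 on the same patch.
    def classify_change(project, commit_message, changed_files):
        """Return how the in-tree and OOT PowerVM CI should treat a change."""
        is_nova = project == 'openstack/nova'
        touches_powervm = (
            'powervm' in commit_message.lower()
            or any('/powervm/' in path for path in changed_files))
        if is_nova and touches_powervm:
            # In-tree driver change: vote with the in-tree run, skip the OOT
            # run (its duplicate config options would guarantee a failure).
            return {'in_tree': 'publish', 'oot': 'skip'}
        # Anything else: keep the in-tree run silent until PS1 merges, and let
        # the OOT run report as it does today.
        return {'in_tree': 'silent', 'oot': 'publish'}

    # Example: the first in-tree PowerVM patch set would publish in-tree
    # results and skip the OOT run.
    print(classify_change(
        'openstack/nova',
        'Add PowerVM driver skeleton',
        ['nova/virt/powervm/driver.py']))
    # -> {'in_tree': 'publish', 'oot': 'skip'}
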