14:00:06 #startmeeting PowerVM Driver Meeting
14:00:07 Meeting started Tue May 1 14:00:06 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 The meeting name has been set to 'powervm_driver_meeting'
14:00:13 efried esberglu mujahidali ^
14:00:17 \o
14:00:27 Merged openstack/ceilometer-powervm master: Trivial: Update pypi url to new url https://review.openstack.org/565438
14:00:30 ō/
14:00:37 #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:00:43 mujahidali: Greetings, nice to meetcha.
14:00:44 #topic In-Tree Driver
14:00:54 #link https://etherpad.openstack.org/p/powervm-in-tree-todos
14:01:07 esberglu walk us through the latest?
14:01:44 efried: thanks :)
14:01:45 We need scenario testing to move forward on snapshot and localdisk. I'll go into more detail during the CI section
14:02:06 vSCSI is blocked until we have some sort of volume testing
14:02:17 And cold mig/resize is waiting on multinode CI
14:02:29 That's pretty much it
14:02:39 esberglu I dropped some comments on localdisk
14:02:46 ack
14:02:50 of course CI is the priority atm
14:03:19 looks like z got back into a runway quickly after they resolved their CI issues... hopefully things go as well for us
14:04:36 #topic Out-of-Tree Driver
14:04:47 #link https://etherpad.openstack.org/p/powervm-oot-todos
14:05:25 I don't know of any progress made on OOT since the last meeting
14:06:04 I think everyone's been focused elsewhere
14:06:11 so, moving along...
14:06:23 #topic Device Passthrough
14:06:28 efried anything new here?
14:06:55 Looks like tetsuro is taking over the nrp-in-alloc-cands series, which is gooood. He proposed the microversion patch last night.
14:07:08 sweet
14:07:22 His bp/spec for including all resources in provider summaries got approved this morning, which is also gooood.
That was going to hold up the series.
14:07:46 I'm getting real close to having granular ready. Just a couple more tests to write and a bug to work out.
14:07:56 Course, we're going to be racing each other.
14:08:06 Whoever loses is going to have some serious rebasing hell.
14:08:17 But at least progress is being made.
14:08:20 end.
14:08:25 tx
14:08:37 #topic PowerVM CI
14:08:42 #link https://etherpad.openstack.org/p/powervm_ci_todos
14:08:49 welcome mujahidali!
14:09:06 3 major things going on right now for CI
14:09:16 Seems ill since last night
14:09:46 efried: Yep. We can start there. The queens cloud is seeing those same EOF errors we were seeing on staging
14:10:18 However, on staging we changed the NIC type from some weird protocol to virtio and everything was fine
14:10:32 oh, right, this is what we were thinking was the glance wsgi business
14:10:42 Production is already using virtio
14:10:42 'cept that should be in queens
14:10:57 efried: The glance wsgi thing was a real issue
14:11:06 edmondsw: thanks :)
14:11:13 I'm running glance without wsgi on production
14:11:51 So I will be doing more recon on the EOF errors today
14:12:09 I assume that's top priority
14:12:20 yes
14:12:29 and then next would be the scenario tests so we can resume IT efforts
14:12:32 2) Scenario CI
14:12:54 I've got working scenario tests for OOT, running the suite with the same changes IT right now
14:13:19 I will be running those tests as part of the base CI job
14:13:33 how can we tell they're working if the CI as a whole is not working?
14:13:41 edmondsw: Staging env.
14:13:44 ah
14:14:15 Still some things to clean up there, but I think that should be ready by the end of the week
14:14:22 got numbers on what that added to the OOT CI times?
14:14:38 edmondsw: I've only run twice all the way through
14:14:57 1 was 100s longer, the other actually ran faster than the base time I was comparing against
14:15:03 wow
14:15:24 that would be really nice if we find it's negligible
14:15:31 ++
14:15:51 edmondsw: Yep. Obviously I want to get a lot more runs going through to confirm first, but it seems that it may be
14:16:03 sure
14:16:18 3) Multinode CI
14:16:42 I know what I want to do here and am 90% sure it's gonna work, just need to finish scenario before I test
14:17:22 I learned that the subnodes are NOT always on the same underlying compute host
14:18:17 So we're going to have to define an AZ for each neo host. Then we can force the subnodes to be on the same neo host in nodepool
14:18:56 i.e. read the AZ of the parent, then specify that AZ for the subnode?
14:20:11 my concern with AZs is that if we split each host into a different AZ, don't we have to specify the AZ on any deploy?
14:20:22 edmondsw: Not exactly
14:20:25 and we don't care which AZ is used for the parent, so how do we pick one?
14:20:39 edmondsw: So we will have a bunch of providers in the nodepool conf, 1 for each AZ
14:20:57 And then we will say spawn 1 node and 1 subnode
14:21:14 Since the provider is a specific AZ, both node and subnode will be in it
14:21:37 The IP of the subnode is then saved in some nodepool files on the subnode
14:21:42 so we just tell nodepool we want 1 node and 1 subnode, and for the node it picks a provider, we don't have to?
14:21:44 *on the main node
14:21:52 edmondsw: Yep
14:21:56 ok nice
14:22:16 edmondsw: Only downside is that the config file is gonna be kinda gross, but oh well
14:22:22 so how does the subnode AZ get specified?
14:22:39 subnode will always come from the same provider?
14:22:47 edmondsw: Yes, you just have to config it right
14:22:50 k
14:22:58 That's all I have
14:23:00 work your magic :)
14:23:10 let's talk a bit about vSCSI CI
14:23:17 Sure
14:23:26 I assume that's next after the 3 we just covered
14:23:48 I thought we determined we didn't have the HW for it?
14:24:10 I think that depends on which solution we're talking about
14:24:25 in order to test vSCSI on every CI run, yeah, I think we would need more hardware
14:24:48 but mriedem suggested that it would be ok to just have a job that could be run on-demand
14:24:59 and I think we could probably do that with the hardware we have
14:25:44 I don't think we've ever done anything like that before, but I assume there are other examples out there we could look at
14:26:41 edmondsw: I can start thinking about it
14:26:41 esberglu thoughts?
14:26:44 tx
14:26:54 edmondsw: Idk how we will run on demand once jenkins goes away
14:27:10 Maybe zuul v3 has a way to do so, but I haven't seen anything like that ever
14:28:08 esberglu I'd probably start by asking mriedem if he knows of an example you could look at
14:28:12 We can sort through the details once I know more
14:28:35 and then go from there, talk to him or infra about the zuul v3 options
14:28:58 I wonder if that's one of the reasons zuul v3 isn't ready for 3rd party CI
14:29:45 mujahidali coming up to speed?
14:30:02 have a quick update on your status?
14:30:54 He's got access to all of the repos, etc. now. I've started adding him to CI reviews and will start giving him some tasks
14:32:11 alright tx
14:32:17 #topic Open Discussion
14:32:29 anything else?
14:32:34 esberglu: thanks for the update. edmondsw: yeah, I am looking into the code and reading the wiki
14:32:53 ok good
14:33:37 if there's nothing else, we can get some time back. Thanks!
14:33:41 #endmeeting
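
[Editor's note] The multinode nodepool scheme discussed under "3) Multinode CI" could be sketched as the config fragment below. This is a hypothetical illustration, not the team's actual file: the provider names, AZ names, cloud name, and image name are invented, and exact key support (e.g. `availability-zones`, `subnodes`) varies by nodepool version. The per-host AZs themselves would be defined on the underlying cloud (e.g. via nova host aggregates with one host each). The idea is that each provider is pinned to one AZ (one neo host), so a label requesting a node plus a subnode gets both from the same provider and therefore the same underlying host:

```yaml
# Hypothetical nodepool.yaml fragment (names invented; keys may
# differ by nodepool version). One provider per neo host, each
# pinned to that host's AZ.
providers:
  - name: neo1-provider
    cloud: powervm-ci          # assumed clouds.yaml entry
    availability-zones:
      - az-neo1                # AZ containing only the neo1 host
    max-servers: 4
    images:
      - name: devstack-image
        min-ram: 8192
  - name: neo2-provider
    cloud: powervm-ci
    availability-zones:
      - az-neo2                # AZ containing only the neo2 host
    max-servers: 4
    images:
      - name: devstack-image
        min-ram: 8192

labels:
  - name: devstack-multinode
    image: devstack-image
    min-ready: 1
    subnodes: 1                # 1 main node + 1 subnode; both come
                               # from whichever provider is chosen,
                               # hence the same AZ / neo host
    providers:
      - name: neo1-provider
      - name: neo2-provider
```

This is also why the config "is gonna be kinda gross": every neo host needs its own near-duplicate provider stanza.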