13:00:09 #startmeeting powervm_driver_meeting
13:00:11 Meeting started Tue Sep 26 13:00:09 2017 UTC and is due to finish in 60 minutes. The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:12 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:14 The meeting name has been set to 'powervm_driver_meeting'
13:00:24 https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:00:45 \o
13:01:06 o/
13:01:46 #topic In-Tree Driver
13:02:11 I'm planning on pushing up an updated config drive patch today
13:02:16 Then continuing work on OVS
13:02:49 That's all from my end
13:03:22 Oh yeah and slowly knocking out doc updates
13:04:28 Anything else?
13:05:17 Might want to push mriedem to approve our spec.
13:05:28 Actually, get sdague to +2 it first.
13:06:09 They might want more +1s from us first
13:06:34 But yeah I can bump them
13:08:20 #topic Out-of-Tree Driver
13:09:23 Anything to discuss here?
13:10:05 o/
13:10:38 ceph?
13:10:43 iscsi?
13:10:46 and iscsi..
13:11:05 taylor is working on both, with patches up
13:11:06 resize attached volumes?
13:11:33 I've given some comments on the iscsi pypowervm changes that he needs to resolve first, but I think we're closing in on that
13:11:43 and Gerald has also looked at that, which is good
13:11:59 I don't think Gerald has looked at the ceph changes yet, which I need to ping him about again
13:12:12 efried what about resize attached volumes?
13:12:39 It's out there, looks like it's getting reviews. Also a tjakobs thing.
13:12:51 I'm just listing open stuff in nova-powervm
13:13:10 k, it sounds familiar but I don't know if I've reviewed it yet or not
13:13:17 I'll look for it after this
13:13:36 I think the force resize work merged, right?
13:14:50 we should talk about the new request that thorst mentioned yesterday
13:15:08 host-level CPU util metrics ported down to pypowervm
13:15:19 anybody want to take that?
13:16:36 I nominate anyone with an 8-char nick starting with 'e'.
13:16:51 efried not just anyone starting with "e" ?
13:16:52 ;)
13:16:56 definitely not
13:17:07 pretty much what I figured, we're all busy... I'll throw it on the todo list
13:17:15 try to get to it if I can
13:17:28 that's probably all for OOT right now
13:17:52 #topic Device Passthrough
13:19:09 efried you up
13:19:26 OOT, no progress
13:19:36 IT, been keeping a close eye on placement specs
13:20:12 That's about it.
13:20:42 efried it would be good if you could glance at the cyborg stuff and how that fits in
13:21:09 I think I sent you a link about that the other day? If not, I'll go find it again
13:21:53 mdrabe have you been able to spend time on this since you got back?
13:22:23 Only a little bit
13:22:30 I have these links: https://wiki.openstack.org/wiki/Cyborg https://review.openstack.org/#/c/448228/
13:22:50 But
13:23:10 If nova doesn't call out to cyborg, how could it help us?
13:23:51 in boston it was very unclear how cyborg would fit into the picture. They were still very much trying to figure that out themselves
13:24:19 if that has started to gel at all now, and I've gotten the impression it has, we need to know how that is going to work
13:24:24 and then we can answer your question :)
13:24:36 I'll read up.
13:24:39 tx
13:25:38 mdrabe I'll ping you offline and we can talk about that
13:25:47 esberglu I think that's all here for now
13:25:49 k
13:25:56 #topic PowerVM CI
13:26:26 I ended up just reinstalling that whole problem SSP group last night
13:26:39 The image template just finished building, ready nodes are spawning now
13:26:47 So it should be back up within the hour
13:27:07 Nothing else to report
13:27:23 k, tx
13:27:24 esberglu No love from Veena or Uma?
13:27:48 Veena helped for a little bit, but the problem wasn't obvious. She said I could wait until today or just reinstall
13:28:20 I'll post here when it's back online
13:28:36 +1
13:28:42 #topic Open Discussion
13:29:31 efried here's another link for cyborg: https://etherpad.openstack.org/p/cyborg-queens-ptg
13:29:44 see the notes section there
13:30:02 esberglu Did you force-wipe the SSP disks?
13:30:25 Cause if you didn't, and the problem is in the cluster metadata, reinstalling the systems ain't gonna do you much good.
13:30:57 I created new ones
13:31:12 edmondsw ack
13:34:16 #endmeeting