14:31:48 #startmeeting PowerVM Driver Meeting
14:31:49 Meeting started Tue Apr 3 14:31:48 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:31:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:31:52 The meeting name has been set to 'powervm_driver_meeting'
14:32:00 https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:32:15 #topic In-Tree Driver
14:33:09 efried esberglu ready?
14:33:15 yep
14:33:16 hit it
14:33:25 so https://review.openstack.org/#/c/547169/ merged
14:33:33 but none of the actual function yet
14:33:49 we have one +2 on the network hotplug
14:34:18 and there is a runway queued up to look at that, vscsi, and snapshot
14:34:33 vSCSI, Snapshot, base DiskAdapter, and Localdisk are all ready for review
14:34:55 Only other one is resize/cold migrate which is still WIP
14:35:01 esberglu I think you said yesterday that you rebased something?
14:35:08 I didn't get a chance to look
14:35:27 edmondsw: I just made everything one commit chain, made the resize/cold migrate stuff easier
14:35:39 ah cool
14:36:19 vscsi -> snapshot -> diskadapter -> localdisk -> migr/resize
14:36:43 And responded to the couple comments you left on snapshot and disk adapter, those should be ready for your +1
14:36:57 esberglu will try to look this afternoon
14:37:07 tx
14:37:07 I did +1 the vscsi changes
14:37:16 yep
14:37:29 Resize is coming along
14:37:44 Need to figure out if boot from cinder volume works, some of the code paths depend on that
14:37:54 And then UT, Live test, and get CI rolling
14:38:32 esberglu testing boot from volume would also be good for the vscsi commit, right?
14:39:02 I don't recall whether there is a separate op for that in the support matrix that we need to mark "complete"
14:39:15 if it works
14:39:27 edmondsw: Yep I'm stacking right now to test it out
14:39:33 cool, let us know
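
For reference, one way to exercise the boot-from-cinder-volume path being tested above is sketched below against a devstack; the image, flavor, network, and volume names are placeholders, not values from this meeting:

    # create a bootable volume from a glance image (size in GB)
    openstack volume create --image cirros --size 2 boot-vol
    # boot a server from that volume instead of an image
    openstack server create --flavor m1.tiny --network private --volume boot-vol bfv-test

If that path works with the vSCSI commit, the boot-from-volume operation in the support matrix (if it has its own entry) could then be marked complete, as discussed above.
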
14:39:46 It for me
14:40:10 efried no word on core yet?
14:40:57 any day now I assume
14:40:58 nope. The one week "speak now or forever hold your peace" period technically ended last night. If melwitt is thinking about it, hopefully she'll flip the switch when she gets on today.
14:41:12 yep
14:41:16 ok moving on
14:41:23 #topic Out-of-Tree Driver
14:41:39 https://etherpad.openstack.org/p/powervm-oot-todos
14:42:10 we got pypowervm bump merged and the 2 commits waiting on that as well
14:42:42 chhagarw is working on iscsi stuff
14:42:59 yes
14:43:19 it should be mostly done by tomorrow, adding UTs
14:43:20 there is a commit about setting fuse_limit for fileio that I'm trying to dig into
14:43:27 chhagarw cool
14:43:55 got a response from thorst about fuse_limit that "fileio seems to max at 15…I think"
14:44:17 so I don't think they're doing that for perf reasons
14:44:44 so it might actually make sense in nova-powervm and not just in NovaLink... but I need to get more clarity there
14:45:19 tjakobs rebased the volume refactor but then it has another merge conflict now with other things having merged
14:45:47 I think that covers OOT unless anyone has anything to add?
14:45:49 edmondsw: "seems to max" leads me to think this is the result of the same kind of testing the other consumer was seeing.
14:46:01 edmondsw: noted
14:46:21 efried you and I should catch hsien today about that
14:46:34 and get to the bottom of it
14:46:41 We need to get all the players together to make sure we're really talking about two different things before we send hsien off to fix it one way and drop code on our layer to avoid the bug.
14:47:00 sure
14:47:11 edmondsw: Can you find out who in pvc observed the issue? I think svenkat referenced an RTC bug
14:47:12 I'm told the other player would be thorst
14:47:25 thorst who?
14:47:36 lol
14:47:36 Oh THAT guy - haven't seen him in ages.
14:48:01 alright, moving on
14:48:05 Oh one other thing, haven't figured out how to handle duplicate opts yet, but also haven't looked into it at all since last week
14:48:15 #topic Device Passthrough
14:48:35 this has been languishing from my perspective... been caught up in other things
14:48:39 how's it going with you efried?
14:48:48 update on nova?
14:49:09 Designs and patches churning along frantically.
14:49:23 It is becoming clear that we've got way, way, way too much content for Rocky.
14:49:40 across Nova, or re: placement?
14:49:45 yes
14:49:49 :)
14:49:51 upt is starting to merge.
14:49:59 \o/
14:50:02 Still verrry slow progress on nrp-in-alloc-cands.
14:50:31 I'm thinking I'm going to start grinding on the granular work, because everybody is depending on it; even though I won't be able to tie it off until we get nrp-in-alloc-cands.
14:50:55 can/should you help more on nrp-in-alloc-cands?
14:51:19 double-edged sword. I want to be able to +2 that stuff.
14:51:30 yeah
14:51:40 One of our main cores who looks at placement stuff is out for 2w
14:51:45 (gibi on honeymoon)
14:52:14 Progress is being made; it's just slower than I would like.
14:52:26 k
14:52:30 BTW, there's also been some argument over a design point for granular.
14:52:52 Per design, if you specify two separate numbered request groups, there's no restriction on whether they will or will not land on the same provider.
14:53:09 Now people are demanding that separate numbered groups *must* land on separate providers.
14:53:23 which I think is a mistake for several reasons. But I'm being outvoted.
14:53:24 why?
14:53:39 not why you think it's a mistake
14:53:48 why others disagree?
14:54:10 I'm actually not terribly sure, to be honest. I think it's mainly to satisfy NUMA use cases.
14:54:22 hmmm
14:54:28 I mean, I concede that we have to be able to handle both things eventually.
14:54:45 could you add a way to say THESE providers have to go together but others don't?
14:55:00 Yeah. And we're going to have to do that eventually.
14:55:27 maybe need to accelerate that
14:55:38 It's a question of whether that option is gonna be called allow_same_provider= or separate_providers=
14:55:40 or at least articulate it
14:55:52 oh, believe me, I've been articulating my ass off.
14:55:58 :)
14:56:19 I'm not sure it's going to affect us, to be honest.
14:56:33 We can make dev passthrough work whichever way the default shakes out.
14:56:48 I'm just arguing from a purely architectural standpoint. And I'm right. But I'm gonna lose.
14:56:59 boo
14:57:05 them's the breaks.
14:57:10 yeah
14:57:12 anything else?
14:57:23 The only thing I still have on my side is that the spec, which is approved, already has it my way.
14:57:31 so someone is going to have to propose a delta to it.
14:57:42 -2 it ;)
14:57:42 And I'm conspicuously dragging my feet on that leetle work item.
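
For context, the granular (numbered) request group syntax being argued about above looks roughly like the sketch below, using the flavor extra spec form from the approved spec; the flavor name and CUSTOM_* traits are made-up placeholders, while SRIOV_NET_VF is a standard resource class:

    # two numbered request groups: resources1/required1 and resources2/required2
    openstack flavor set vf-flavor \
      --property resources1:SRIOV_NET_VF=1 \
      --property required1:CUSTOM_PHYSNET_PUB \
      --property resources2:SRIOV_NET_VF=1 \
      --property required2:CUSTOM_PHYSNET_PRIV

Under the approved spec as written, nothing prevents both groups from being satisfied by the same resource provider; the disagreement is over whether separate numbered groups should instead be forced onto separate providers, or whether that behavior should be an explicit option along the lines of the separate_providers= / allow_same_provider= names mentioned above.
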
14:58:01 #topic: PowerVM CI
14:58:12 https://etherpad.openstack.org/p/powervm_ci_todos
14:58:14 esberglu?
14:58:22 Upgraded all the systems last week
14:58:30 Everything is running really smoothly
14:58:38 yay
14:58:53 Only failure that is hitting with any sort of consistency is the vopt naming conflict thing
14:58:59 But even that is really low %
14:59:42 I'm gonna have to join another mtg in a min... esberglu take over this one?
14:59:44 I got some multinode CI runs going for OOT, all manual, nothing automated yet
14:59:50 edmondsw: Sure
14:59:51 http://184.172.12.213/manual/2node/
15:00:03 But it's mostly passing, hitting a few errors on resize I haven't looked into yet
15:00:10 :)
15:00:19 "Specified UUID is already used by other partitions"
15:01:15 edmondsw: Wanted to talk about my move to PowerAI in open discussion, but we can do it later today so you can be there
15:01:26 nice
15:01:29 I just got a 5 min reprieve so go ahead
15:01:39 Do we want to do that here or internal?
15:01:39 That's all for CI
15:02:00 Actually internal may be better
15:02:23 esberglu any more progress on the move to queens?
15:02:38 edmondsw: Haven't touched it, working multinode has taken priority
15:02:48 sure, agreed on that
15:03:07 #topic Open Discussion
15:03:08 anything?
15:03:18 nada
15:03:53 alright, thanks guys
15:03:54 #endmeeting