14:00:08 #startmeeting PowerVM Driver Meeting
14:00:09 Meeting started Tue Mar 20 14:00:08 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:12 The meeting name has been set to 'powervm_driver_meeting'
14:00:28 \o
14:00:40 @/
14:00:46 agenda: https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:00:57 #topic In-Tree Driver
14:01:06 https://etherpad.openstack.org/p/powervm-in-tree-todos
14:01:24 esberglu I just gave you a few easy comments on the vscsi commit
14:01:37 then that should be good for efried to look at
14:02:06 I'm still ploughing my way through my review backlog, and the in-tree patches are on it.
14:02:06 we're stacking things up for the nova cores to look at, with none of them actually looking for a while
14:02:13 +1
14:02:15 edmondsw: Saw that. Yep once I fix those it's ready for review
14:02:32 We ought to have runways kicked off "soon" - like maybe this week
14:02:33 efried I would look at that one next of the IT patches
14:02:33 I live tested attach/detach again which looks good and also got extend_volume working
14:02:43 next on my list is snapshot, then disk adapter
14:02:50 edmondsw: Cool
14:03:04 esberglu anything else you want to say for IT?
14:03:11 in which case we can queue up the couple we have ready
14:03:28 I just had a question about microversions
14:03:28 efried sorry I didn't follow that
14:03:39 ask away
14:04:23 When I was issuing commands using the cinder CLI it was defaulting to 3.0
14:04:48 In cinder/api/openstack/api_version_request.py
14:05:02 I changed the _MIN_API_VERSION = "3.42"
14:05:14 The level required for extending attached volumes
14:05:29 Is that the proper way to go about changing the min?
14:05:31 no
14:05:49 For the CLI? Yes. It isn't?
14:05:50 I couldn't find a clear answer in my googling
14:05:59 wait... maybe I'm not following what you're trying to do
14:06:15 are you working on a cinder change?
14:06:26 edmondsw: I got this.
14:06:35 edmondsw: No I was testing extend_volume in the vSCSI change
14:06:39 esberglu: Different CLIs are different.
14:06:49 then you just need to set an env var, not alter code
14:07:11 * edmondsw letting efried explain
14:07:16 Some default to minimum version. Some default to a specific version. Some negotiate latest version with the server per call. Clearly cinder is the first one.
14:07:34 And yes, there's an env var and/or flag you can specify to the CLI to get a different version.
14:07:49 AND the CLI has to know how to handle whatever operation you're trying to execute.
14:08:00 yep
14:08:09 So sometimes it may not be sufficient just to increase the version.
14:08:27 I think in this case I checked and it should be, but that was last week so... :)
14:08:27 But it sounds like in this case you determined that the CLI *does* support it, and that you need to use 3.42 to get that support switched on.
14:08:30 So you're doing the right thing.
14:09:04 efried by altering code? why would you say that over using an env var?
14:09:14 eh? I never said altering code.
14:09:33 esberglu said he changed _MIN_API_VERSION in cinder/api/openstack/api_version_request.py
14:09:35 Oh, I didn't follow that esberglu was actually changing code. My bad.
14:09:47 or at least that's how I read it
14:09:47 Yeah I should have used an env. var but the end result is the same
14:09:50 esberglu: There'll be an env var and/or CLI option to set the microversion.
14:09:54 Do that instead.
14:10:04 But if you're just testing locally, meh.
14:10:07 sure
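(For reference: the approach efried describes, requesting microversion 3.42 without patching cinder's source, can be done from the CLI with the OS_VOLUME_API_VERSION environment variable or the --os-volume-api-version flag. A minimal python-cinderclient sketch of the same idea follows; the endpoint, credentials, and volume ID are illustrative placeholders, not values from the meeting.)

    # Request a specific cinder microversion via python-cinderclient
    # instead of editing cinder/api/openstack/api_version_request.py.
    from cinderclient import client
    from keystoneauth1 import loading
    from keystoneauth1 import session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',  # placeholder endpoint
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # '3.42' is the microversion that allows extending an in-use volume;
    # the default ('3.0') rejects extend for attached volumes.
    cinder = client.Client('3.42', session=sess)
    cinder.volumes.extend('<volume-id>', 20)  # grow the volume to 20 GB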
14:10:15 edmondsw: Runways. In Nova. Basically a mechanism to promote more equitable distribution of review time. See https://etherpad.openstack.org/p/nova-runways-rocky
14:10:40 Anyways, point is that extend_volume works for attached vSCSI volumes if using the right microversion
14:10:49 cool
14:11:32 That's all I had
14:11:40 Just need reviews on snapshot and vscsi
14:12:05 efried we need to get the spec approved...
14:12:26 efried: Saw that you asked about fast approve there, hopefully someone will push it through
14:12:53 I pinged melwitt the other day and she said just to put it on https://etherpad.openstack.org/p/rocky-nova-priorities-tracking which it was, but no activity yet
14:13:36 Yeah. Re spec approval, when we talked about it the other day, I mentioned that we shouldn't be too strict about the "whole approved blueprint" aspect of this.
14:13:58 So like, I fully intend to put the spec down as part of the runway if it's not approved by the time we get this going.
14:14:10 esberglu: btw, are you planning to go on vacation any time soon?
14:14:20 or be otherwise unavailable to address review comments quickly?
14:14:36 efried: Nothing more than a day or 2 until July
14:14:45 cause that would be the main thing stopping us from queuing up a runway.
14:15:32 we won't want to wait for "The code for the blueprint must be 100% ready for review"
14:16:54 I read the "If a blueprint is too large" following sentence as saying it still has to be 100% ready, but it doesn't have to all be reviewed in the same runway... but I hope I'm wrong there
14:17:32 jgwentworth was supposed to leave the discussion in a separate etherpad.
14:17:42 oh.
14:17:44 It's at the bottom.
14:18:01 cool, I'll read over that later
14:18:08 and add comments as appropriate
14:18:10 tx for the link
14:18:22 anything else for IT?
14:18:28 nope
14:18:52 #topic Out-of-Tree Driver
14:18:59 https://etherpad.openstack.org/p/powervm-oot-todos
14:19:23 I did some more reviewing on the refactor... need to finish that and post the comments
14:19:50 I did some reviewing there. Did I post comments?
14:19:55 I can't remember whether I saved 'em.
14:20:04 yeah
14:20:11 yep, I see 'em
14:20:20 I'm still -0.5 until we get some live testing. Which is gonna suck.
14:20:45 yeah, I think we're all agreed this will need some good testing
14:21:01 just wanted to get comments addressed first, so we're not doing that multiple times
14:21:18 yuh
14:21:28 and I uncovered something interesting while digging through this... we don't have iSCSI live migration implemented
14:21:49 so I added that to the TODO list. PowerVC wants it
14:22:52 the other big thing OOT is https://review.openstack.org/552172
14:23:15 hope to see a pypowervm release today or tomorrow so we can propose a requirements bump and unblock that
14:23:27 anything else to discuss OOT?
14:23:52 actually
14:23:59 have we released pypowervm yet? Or tagged it or whatever?
14:24:04 no
14:24:05 Nope
14:24:17 Cause I had one comment in the refactor about something that we should be doing in pypowervm instead of nova-powervm.
14:24:21 I asked about that yesterday, and you and hsien both said sure, but it hasn't been done
14:24:28 So if we could get that into the release, that would help.
14:24:38 absolutely
14:24:47 let's drive that today
14:25:17 efried: edmondsw: IIRC that was a vscsi comment that will apply IT as well
14:25:25 https://review.openstack.org/#/c/530816/6/nova_powervm/virt/powervm/volume/fcpvvscsi.py@48
14:25:45 ah, yeah, that would apply to the IT commit as well
14:26:03 cool
14:26:18 anything else?
14:26:25 Well, first we need someone to confirm that always caching it is The Right Thing.
14:26:57 I don't know from WWPNs, so that someone ain't me.
14:27:16 certainly... you want to discuss that here or after the mtg?
14:27:33 I'd rather do after so I can stop and think about it
14:27:52 ight
14:27:58 #topic Device Passthrough
14:28:12 efried and I went through use cases yesterday and took some notes
14:28:51 I need to work that up and get a mtg on the calendar with the NovaLink guys
14:29:05 efried you have the floor
14:29:41 Nothing to add
14:29:48 k
14:29:58 #topic PowerVM CI
14:30:05 https://etherpad.openstack.org/p/powervm_ci_todos
14:30:10 esberglu ?
14:30:49 Since the start of the weekend CI has not been looking great, a bunch of new failures and long runs
14:31:06 I've been taking inventory of failures here https://etherpad.openstack.org/p/powervm_tempest_failures
14:31:18 That's what I'll be working on today
14:32:03 esberglu sure... focus on that and hold off on the vscsi IT commit respin while we work out the question of whether to move that into pypowervm
14:32:38 edmondsw: Yep I'm gonna just sit IT until we start getting some movement so I don't have to rebase the world
14:32:46 Other than that the CI management upgrade is ongoing, facing some roadblocks upgrading nodepool
14:32:52 We're currently nodepool 0.3.0
14:33:24 Starting in 0.4.0 they don't allow the flow of taking an image, doing some stuff, taking a snapshot of it and spawning from the snapshot
14:33:36 And instead you have to use diskimage-builder
14:33:48 Which from what I've read so far only supports 14.04
14:33:58 And I really don't want to go back to 14.04 from 16.04
14:34:28 I hope that's not right... someone you can catch on IRC to talk about that?
14:34:29 But there may be a way around that, I need to do some more recon
14:34:46 not sure who... maybe tonyb?
14:35:18 edmondsw: I really haven't looked too much into it, just saw a blurb about it. I'm sure there are people using new nodepool with 16.04
14:35:26 So there's got to be a solution
14:35:34 good
14:36:02 The other thing I need to do is update the CI firmware
14:36:21 getting the CI stable again is obviously the priority, but you might want to shoot off a couple feelers so that you have suggestions ready to try when you can get back to this
14:36:48 So I will need to find a good time to take the CI down for a day or so
14:37:20 that related to the undercloud moving to queens, or just normal need to apply security updates and such?
14:37:25 Security updates
14:37:33 ok good
14:37:49 maybe do that on a Friday?
14:38:07 edmondsw: Yeah that was my plan, probably not until next week though
14:38:23 That's it for me
14:38:36 #topic Open Discussion
14:38:36 I'll try to time it so we don't have much in the pipeline getting blocked
14:38:43 +1
14:38:54 I had one thing to bring up here
14:39:07 we talked a little about the logo the other day: http://eavesdrop.openstack.org/irclogs/%23openstack-powervm/%23openstack-powervm.2018-03-14.log.html#t2018-03-14T19:03:19
14:39:17 but I don't think I saw esberglu chime in
14:39:25 any thoughts?
14:39:37 or if anyone else is lurking and wants to throw something out there...
14:39:42 edmondsw: +1 on gorilla, thought that was a great idea
14:39:48 cool
14:40:12 then barring something unexpected, I'll ask for gorilla
14:40:29 that's it from me
14:40:36 nothing else from me
14:40:44 wwpns?
14:40:58 sure we can talk about that... let me go back to your comment
14:41:03 def get_physical_wwpns(adapter):
14:41:03     """Returns the active WWPNs of the FC ports across all VIOSes on system.
14:41:03     :param adapter: pypowervm.adapter.Adapter for REST API communication.
14:41:03     """
14:41:03     vios_feed = vios.VIOS.get(adapter, xag=[c.XAG.VIO_STOR])
14:41:03     wwpn_list = []
14:41:03     for vwrap in vios_feed:
14:41:05         wwpn_list.extend(vwrap.get_active_pfc_wwpns())
14:41:05     return wwpn_list
14:41:11 def get_active_pfc_wwpns(self):
14:41:11     """Returns a set of Physical FC Adapter WWPNs of 'active' ports."""
14:41:11     # The logic to check for active ports is poor. Right now it only
14:41:11     # checks if the port has NPIV connections available. If there is a
14:41:11     # FC, non-NPIV card...then this logic fails.
14:41:12     #
14:41:12     # This will suffice until the backing API adds more granular logic.
14:41:12     return [pfc.wwpn for pfc in self.pfc_ports if pfc.npiv_total_ports > 0]
14:42:10 So the question is: does the list of pfc_ports, or their number of npiv_total_ports, never* change? (*without, like rebooting)
14:42:39 * efried has GOT to figure out how to turn off emojis in this IRC client)
14:42:44 physical WWPNs shouldn't change unless you hotplug an adapter... but I think you can do that?
14:43:21 no idea
14:43:31 so this may have been an oversight when the code was first written and we've just never hit an issue because folks don't hotplug FC adapters on a regular basis
14:43:32 If you can, then the nova-powervm code is wrong.
14:43:39 yeah, that.
14:43:43 we should have this conversation with seroyer
14:43:48 ight.
14:44:10 let's move that to slack so we can pull him in
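(For reference: a minimal sketch of the caching trade-off just discussed. It assumes a fetch helper shaped like get_physical_wwpns() from the paste above; the class and method names here are hypothetical, not nova-powervm's actual API. The point is that a cache is only correct if the physical port list is static, hence the explicit invalidation hook for the hotplug case.)

    # Hypothetical illustration of the caching question above, not
    # nova-powervm code. Assumes get_physical_wwpns(adapter) is available
    # as pasted earlier in the meeting.

    class PhysFCWwpnCache(object):
        """Lazily caches physical FC WWPNs, with an invalidation hook.

        Caching saves a REST round trip per VIOS on every volume
        operation, but it is only safe if the set of physical ports is
        static. If an FC adapter can be hotplugged, callers must
        invalidate (or skip caching and re-read the VIOS feed each time).
        """

        def __init__(self, adapter):
            self._adapter = adapter
            self._wwpns = None  # not fetched yet

        @property
        def wwpns(self):
            if self._wwpns is None:
                self._wwpns = get_physical_wwpns(self._adapter)
            return self._wwpns

        def invalidate(self):
            """Drop the cache, e.g. after a (suspected) adapter hotplug."""
            self._wwpns = None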
14:44:14 anything else before we close here?
14:44:42 nothing from me.
14:44:50 #endmeeting