13:04:01 #startmeeting powervm_driver_meeting
13:04:02 Meeting started Tue Aug 29 13:04:01 2017 UTC and is due to finish in 60 minutes. The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:04:06 The meeting name has been set to 'powervm_driver_meeting'
13:04:18 \o
13:04:31 edmondsw is at VMWorld.
13:04:41 thorst_afk - you going to be here?
13:04:48 o/
13:04:52 not really
13:05:02 will be catching up on the feed periodically
13:05:16 #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
13:05:24 #topic In Tree Driver
13:05:37 #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:05:40 Okay. I guess it'll be mostly a status update. esberglu when done, send link to minutes out so others can catch up.
13:06:45 I tested the config drive patch with FORCE_CONFIG_DRIVE=True
13:06:56 Everything seemed to be working as expected
13:07:06 Sweet. Have you seen my email?
13:07:20 efried: Yep. Ran it last night, 0 failures
13:07:29 Well, so here's the funky thing...
13:07:29 Need to port that to the IT patch
13:07:46 https://review.openstack.org/#/c/498614/ <== this passed our CI.
13:07:59 It oughtn't to have.
13:08:42 efried: Weird...
13:08:56 We would have been sending down AssociatedLogicalPartition links like http://localhost:12080/rest/api/uom/ManagedSystem/None/LogicalPartition/
13:09:16 But... perhaps the REST side is just parsing the end off without looking too closely at the rest of it.
13:09:39 The only side effect I can think of would have been that we would be ignoring fuse logic in mappings
13:09:48 which just means every mapping ends up on its own bus.
13:09:57 which wouldn't manifest any problems in the CI, really.
13:10:20 efried: Want me to post the CI logs from the manual CI run to see if there's anything different going on?
13:10:32 No, if it passes, we won't be able to see anything useful in there.
13:11:00 Anyway, yeah, esberglu you want to pick up those two changes and run with 'em?
13:11:06 finish up UT and whatnot?
13:11:16 efried: Sure
13:11:33 #action esberglu: Port host_uuid change to IT config drive patch
13:11:52 #action esberglu: Finish UT for OOT host_uuid patch
13:12:03 That
13:12:08 that's it for IT?
13:12:39 For completeness, the pypowervm side is 5818; the nova-powervm change to be finished up and ported to the in-tree cfg drive change is https://review.openstack.org/#/c/498614/
13:13:09 #action esberglu UT for 5818 too
13:13:24 It's passing right now, but needs some extra testing for the stuff I changed.
13:13:35 efried: ack
13:13:55 Oh, and following up on vfc mappings. I don't have anything fibre-channely to test with. Do you?
13:14:38 Don't think so
13:14:57 Need to make sure the same logic (posting ROOT URIs to create mappings) also works for vfc, then make the same change in the vfc mapping bld methods.
13:15:06 Can be a follow-on change, I suppose
13:15:26 For the moment, we're just using it to trim down the cfg drive stuff, which is always vscsi.
13:16:00 efried: Ok. We can loop back to that after this first wave is done and either add it in or start a new change
13:16:26 yuh. Perhaps someone from pvc can lend us a fc-havin system for a day or two.
13:16:36 mdrabe you got anything like that?
13:16:52 Yes
13:17:02 nice
13:17:07 But as far as lending I'm not sure :/
13:17:28 I actually would literally need it for half an hour
13:17:44 They'll all be consumed for pvc stories atm, in a few days they should be free I think
13:17:45 and would need a free fc disk I could assign to an LPAR (even the nvl)
13:18:11 Assuming that disk is free, the testing would be nondestructive.
13:18:24 I just need to create a mapping in a certain way and make sure it works.
13:18:40 Testing with devstack though right?
13:18:43 no
13:18:45 just pypowervm
13:18:49 don't even care what level
13:18:55 oh mkay
13:19:09 That's good then, dm me
13:19:14 rgr
13:19:46 #action efried to validate vfc mappings work the same way (with ROOT URIs for AssociatedLogicalPartition) using mdrabe's setup.
13:19:49 aaaand...
13:20:03 #action efried to continue thorough review of cfg drive change
13:20:22 At some point I reckon we're gonna need thorst_afk to review it too.
13:20:54 All three of us have had hands on it, so approval is going to be like supermajority consensus.
13:21:07 Oh, wait, this is in tree.
13:21:17 So we all just get to +1 it anyway.
13:21:25 And community approval is going to entail...
13:21:48 #action esberglu to drive pvm in-tree integration bp for q.
13:22:07 efried: Yep
13:22:14 Not sure if you caught my parting shot yesterday on that, but: may want to ask mriedem in -nova whether he wants a fresh bp or just re-approve existing.
13:22:31 efried: Yep saw that was planning on putting that in motion today
13:22:37 coo
13:23:11 #topic Out Of Tree Driver
13:23:28 https://review.openstack.org/#/c/471926 passed functional testing
13:23:49 There's one issue uncovered from the testing left to be ironed out, but it's unrelated to the change
13:24:34 What was the issue?
13:24:38 Love it when test finds bugs.
13:24:49 It kinda justifies the whole existence of testing as a thing.
13:25:21 One evacuation failed with the dreaded vscsi rebuild exception...
13:25:45 got bug?
13:26:09 Can't link here, dming, sec
13:26:17 If it's RTC, I don't care.
13:26:28 Heh k
13:26:28 Is it not a bug in the community code?
13:26:49 If so, we ought to have a lp bug for it.
13:26:54 It's been some time since I've looked at it
13:27:05 oh, is it an old bug?
13:27:09 * efried confused
13:27:23 No, I just mean within a week's timeframe
13:27:28 I forget things quickly, sorry
13:28:18 mdrabe Okay, well, I'm not in a huge hurry to get a lp bug opened, but if the changes are going into nova-powervm, that should happen eventually (before we merge it).
13:28:47 efried: The exception that was raised was this one: https://github.com/powervm/pypowervm/blob/develop/pypowervm/tasks/slot_map.py#L665
13:29:00 For 1 out of 5 evacuations
13:29:32 As in, we couldn't find one of the devices on the target system?
13:29:37 Right
13:29:42 Uhm.
13:29:47 So first of all, 1/5 ain't good.
13:29:58 And I _think_ I recall seeing LUA recovery failures in the logs
13:30:02 Second, upon what are you basing your assertion that this is unrelated to your change?
13:30:16 Because it's not related to the slot map
13:31:21 even though ten out of the 13 or so LOC leading up to that exception have 'slot' in 'em?
13:31:49 but
13:31:52 ok
13:32:04 I'll -1 WF until we resolve it
13:32:35 efried: fair?
13:32:46 I had put a +2 on it, but yeah, I think we should follow up first.
13:34:21 Reminder that pike official release is tomorrow
13:34:30 That it for OOT?
13:34:48 other than pci stuff, I think so.
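(For reference on the ROOT-vs-CHILD URI discussion from the in-tree topic above, a minimal sketch of the two AssociatedLogicalPartition href shapes in question. The partition UUID is made up, and the ROOT form is inferred from the usual PowerVM REST uom layout rather than quoted from the meeting.)

    # Hypothetical partition UUID, for illustration only.
    lpar_uuid = '12345678-ABCD-0000-0000-000000000000'

    # CHILD form: scoped under the ManagedSystem. With host_uuid=None this is
    # the malformed link the driver was sending down.
    child_href = ('http://localhost:12080/rest/api/uom/ManagedSystem/None/'
                  'LogicalPartition/' + lpar_uuid)

    # ROOT form (assumed layout): not scoped under the ManagedSystem, so no
    # host_uuid is needed to build it.
    root_href = ('http://localhost:12080/rest/api/uom/'
                 'LogicalPartition/' + lpar_uuid)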
13:35:10 #topic PCI Passthrough
13:35:18 okay
13:35:28 Lots to catch up on here since last week.
13:36:11 First of all, last week I got a prototype successfully claiming *and* assigning PCI passthrough devices during spawn.
13:36:38 Were any of y'all in the demo on Friday?
13:36:58 Yeah
13:36:58 nope
13:37:32 The nova-powervm code is here: https://review.openstack.org/#/c/496434/
13:38:09 And I'm actually not sure ^ relies on any pypowervm or REST changes, as currently written.
13:38:19 despite what the commit message says.
13:38:29 now
13:39:12 REST has merged the change that lets us assign slots on LPAR PUT. Which means I can remove the hack here: https://review.openstack.org/#/c/496434/3/nova_powervm/virt/powervm/vm.py@573
13:40:37 Also the much-debated PCI address spoofing I think I'm gonna keep in nova-powervm (abandoned 5755 accordingly) because...
13:40:43 All of this is going to be temporary
13:40:50 It may not even survive queens, gods willing.
13:41:02 efried: I forget, through what API do we assign PCI devices after spawn?
13:41:46 mdrabe Before that REST fix? IOSlot.bld and append that guy to the LPAR's io_config.io_slots. Then POST the LPAR.
13:42:11 Eric Berglund proposed openstack/nova-powervm master: DNM: ci check https://review.openstack.org/328315
13:42:43 efried: And that's triggered by an interface attach from an openstack perspective?
13:43:16 mdrabe No, actually, I'm not sure what happens during interface attach - should probably look into that.
13:43:40 No, in openstack the instance object we get passed during spawn contains a list of pci_devices that have been claimed for us.
13:43:49 Eric Berglund proposed openstack/nova-powervm master: DNM: CI Check2 https://review.openstack.org/328317
13:44:24 Via the above change sets, we're culling that info and sending it into LPARBuilder (curse him).
13:44:51 mdrabe Is that what you were looking for?
13:45:12 I'm just trying to understand the flows affected
13:45:38 Sure, definitely worth going over in more detail, let's do that.
13:46:38 Yea, I've been meaning to take some time to stare at this stuff, I'll probably ask better questions after I do that
13:46:45 Nova gets PCI dev info from three places:
13:46:52 => get_available_resource (in the compute driver - code we control) produces a list of pci_passthrough_devices as part of the json object it dumps.
13:47:35 => The compute process looks in its conf for [pci]passthrough_whitelist, which it intersects with the above to filter down to only devices you're allowed to assign to VMs.
13:48:16 => The nova API process looks in its conf (which may not be the same .conf as the compute process - took me a while to figure THAT one out) for [pci]alias entries, which it *also* uses to filter the above.
13:48:55 The operator sets up a flavor. In the flavor extra_specs he sets a field called pci_passthrough:alias whose value is a comma-separated list of <alias_name>:<count> entries.
13:49:48 The names come from the [pci]alias config, and are how the op identifies what kinds of devices he wants on his VM. Those [pci]alias entries just map the alias name to a vendor/product ID pair.
13:49:57 And the <count> is how many of that kind of dev you want.
13:50:01 So
13:50:56 When you do a spawn with that flavor, nova looks at the pci_passthrough:alias in the flavor, maps it to the vendor/product ID, and then goes and looks in the filtered-down pci_passthrough_devices list for devices that match.
13:51:12 Meanwhile it's keeping track of how many of those kinds of devices it has claimed and whatnot.
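(To make the three pieces above concrete, a hedged example of the configuration involved. The vendor/product IDs, alias name, and flavor name below are invented for illustration, not taken from the meeting or any real environment.)

    # nova.conf on the compute node: which host devices may be passed through.
    [pci]
    passthrough_whitelist = {"vendor_id": "10df", "product_id": "e228"}

    # nova.conf for the API process (possibly a different file): maps a friendly
    # alias name to a vendor/product ID pair (device_type is optional).
    [pci]
    alias = {"name": "myfc", "vendor_id": "10df", "product_id": "e228", "device_type": "type-PCI"}

(The flavor side would then be something like: openstack flavor set <flavor> --property "pci_passthrough:alias"="myfc:2" to request two devices matching that alias.)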
13:51:36 Ok so adding/removing PCI devices is triggered through resize
13:51:38 So assuming it finds suitable devices, it decrements their available count and assigns 'em to your instance.
13:51:50 Yes, I believe that's the case, though I haven't explicitly tried it yet.
13:52:17 That makes me wonder how this works with SR-IOV
13:52:32 To come full circle: nova puts the specific devices it claimed into your instance object that it passes to spawn, which is where our code again gets control.
13:52:50 Yeah, SR-IOV is going to be a different story
13:52:59 Especially since we're not doing the same thing nova does with SR-IOV.
13:53:16 But much of the flow is the same.
13:53:54 pci_passthrough_devices is *supposed* to register each VF as a child of its respective PF.
13:54:05 So you could claim a VF and the matching is done based on the parent.
13:54:32 But when you're doing that as part of network interface setup, things go off the rails a bit.
13:55:02 Now it starts looking for a physical_network tag on your device and trying to bind a neutron port with that network and all that jazz.
13:55:40 In the rest of the world, you have to pre-create VFs, and they're passed through explicitly one by one and assigned directly to the VM.
13:56:01 In our world... we don't have the VFs until we need 'em, and even then, they're not assigned directly to the VM.
13:56:49 So we have to fool the pci manager by spoofing "fake" VFs in our pci_passthrough_devices list. We just create however many entries according to the MaxLPs on the PF.
13:57:23 Right okay, I'm stuck in the PowerVM perspective
13:58:12 Yeah, so when we do a claim with SR-IOV, nova actually hands us one of those fake VFs, but we ignore it and just create our VNIC on the appropriate PF.
13:58:55 This is probably enough historical treatise. The aforementioned PoC code gives me confidence that we can make this work in q without community involvement. Which is not bad.
13:59:01 But it also ain't pretty.
13:59:16 The main ugliness is that we have to spoof our PCI addresses.
13:59:41 Because nova refuses to operate without a Linuxy PCI address in <domain>:<bus>:<slot>.<function> format.
13:59:54 Our devices don't have those. We have DRC index and location code.
14:00:09 Linuxy PCI addresses are 32-bit. DRC index is 64-bit.
14:00:42 What determines the DRC index for us?
14:00:46 PHYP
14:00:46 phyp?
14:01:25 So I started down a path of suggesting some changes to nova's pci manager that would allow us to use our DRC index (or location code, or whatever we wanted) to address and identify devices.
14:01:42 https://review.openstack.org/497965
14:02:38 It was basically shot down as being an interim hackup that would be superseded by the move to placement and resource providers.
14:03:04 Which is really what I was going for in the first place. I wanted to garner some attention and discussion that would get us moving in that direction.
14:04:07 The upshot is that we (I believe Jay is the nova core most invested in this) want to make devices (not just PCI - any devices) managed through the placement and resource provider framework.
14:04:36 In that nirvana, our compute driver provides a get_inventory method, which replaces get_available_resource.
14:05:24 The information contained therein is able to represent any resource generically, and the nova code doesn't try to introspect values and do stuff with 'em like it is doing today for PCI addresses and whatnot.
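(A rough sketch of the get_inventory shape, for orientation. The field names follow the placement inventory fields; the quantities are invented, and whether devices would eventually appear as custom resource classes here is exactly the open design question, so the commented-out entry is pure speculation.)

    # Sketch of what a compute driver's get_inventory(nodename) might return.
    def get_inventory(nodename):
        return {
            'VCPU': {'total': 16, 'reserved': 0, 'min_unit': 1, 'max_unit': 16,
                     'step_size': 1, 'allocation_ratio': 16.0},
            'MEMORY_MB': {'total': 65536, 'reserved': 512, 'min_unit': 1,
                          'max_unit': 65536, 'step_size': 1,
                          'allocation_ratio': 1.5},
            'DISK_GB': {'total': 2048, 'reserved': 0, 'min_unit': 1,
                        'max_unit': 2048, 'step_size': 1,
                        'allocation_ratio': 1.0},
            # 'CUSTOM_PCI_DEVICE_FOO': {...},  # speculative: device as a
            #                                  # custom resource class
        }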
14:05:54 That sounds like the way to go
14:06:10 That work is off the ground at this point in nova, for resources like vcpu, mem, and disk.
14:06:18 There's also some support for custom resource classes.
14:06:23 So
14:06:57 Jay and I are working up content for discussion at the PTG toward making devices managed by the same setup.
14:07:18 Cool
14:08:22 Good discussion. We ready to move on?
14:08:33 A resource provider would describe the devices it has available; those devices would have qualitative and quantitative properties. Nova would get a spawn request asking for a device with certain qualitative and quantitative properties. Placement and scheduler and claims and family would just match those values (again, blindly, not introspecting the values) and give us the resources.
14:08:51 And we get the helm back in our driver and do whatever we want with those claimed resources.
14:09:29 I feel much more informed than I did an hour ago
14:09:53 Same
14:10:02 So my action this week is going to be collating some of these notes and stuff, creating an etherpad for the PTG, and perhaps putting some of it down in a blueprint https://blueprints.launchpad.net/nova/+spec/devices-as-resources whose spec is here: https://review.openstack.org/#/c/497978/
14:10:51 efried: Is the resource provider change targeted for q?
14:11:02 Well, that's what I don't know.
14:11:09 I'm sure it will be targeted for q.
14:11:15 Whether it will get done in q is another question.
14:11:19 So
14:11:31 We need to be prepared to move forward with our hacked version
14:11:44 And we can transition over as able.
14:11:48 It's a big piece of work.
14:12:09 So I suspect that even if it gets done in q, it'll get done late in the cycle, possibly too late for us to exploit it fully ourselves.
14:12:37 The really good news here is that Jay is very invested in this, and it fits with the overall direction nova is moving wrt placement and resource providers, so I don't doubt it's going to get done... eventually.
14:12:57 It's not just us whining "we need this for PowerVM".
14:13:40 Cool
14:13:50 Okay, I think that's probably enough of that for now. Any further questions, or ready to move on?
14:14:09 #action efried to write etherpad and/or spec content for nova device management as generic resources.
14:14:09 I might have questions later, I need to look through the code still
14:14:32 #topic PowerVM CI
14:14:37 Not much to report here
14:15:10 Still waiting for the REST change for the serialization issue
14:15:25 esberglu It's been prototyped, though?
14:15:33 And run through CI?
14:15:58 efried: Prototyped and run through CI, but not with the latest version of the code
14:16:10 5775?
14:17:38 efried: I think it requires the related changes as well. Not 100% sure though, hsien deployed it
14:18:59 Other than that the compute driver was occasionally failing to come up on CI runs. The stacks on the undercloud for a few systems were messed up
14:19:09 I redeployed, haven't seen it since, gonna keep an eye out
14:20:00 Those were the only failures hitting CI consistently, so failure rates should be pretty low now
14:20:16 Well not now, once that rest fix is in
14:20:30 That's all I had for CI
14:20:52 #topic Driver Testing
14:21:05 Jay isn't on. But he was having problems stacking last week
14:21:24 I got his system stacked, not sure if any further testing has been done on it yet
14:22:14 Nothing else to report there
14:22:34 #topic Open Discussion
14:22:45 That's it for me
14:22:52 nothing else here
14:23:40 Alright. See you here next week
14:23:50 #endmeeting