Monday, 2016-06-13

*** thorst has joined #openstack-powervm01:01
*** thorst has quit IRC01:04
*** thorst has joined #openstack-powervm01:05
*** thorst has quit IRC01:13
*** thorst has joined #openstack-powervm01:26
*** thorst has quit IRC01:26
*** thorst has joined #openstack-powervm01:27
*** thorst has quit IRC01:31
*** jwcroppe has quit IRC01:38
*** jwcroppe has joined #openstack-powervm01:39
*** thorst has joined #openstack-powervm02:30
*** thorst has quit IRC02:30
*** thorst has joined #openstack-powervm02:30
*** thorst has quit IRC02:36
*** jwcroppe_ has joined #openstack-powervm02:41
*** jwcroppe has quit IRC02:44
*** jwcroppe_ has quit IRC03:03
*** jwcroppe has joined #openstack-powervm03:08
*** jwcroppe has quit IRC03:13
*** thorst has joined #openstack-powervm03:34
*** thorst has quit IRC03:41
*** Cartoon_ has quit IRC03:49
*** Cartoon_ has joined #openstack-powervm04:29
*** thorst has joined #openstack-powervm04:39
*** thorst has quit IRC04:46
*** Cartoon_ is now known as Cartoon05:20
*** thorst has joined #openstack-powervm05:43
*** thorst has quit IRC05:51
*** thorst has joined #openstack-powervm06:50
*** thorst has quit IRC06:56
*** k0da has joined #openstack-powervm07:28
*** Cartoon_ has joined #openstack-powervm07:40
*** Cartoon has quit IRC07:43
*** thorst has joined #openstack-powervm07:53
*** thorst has quit IRC08:01
*** thorst has joined #openstack-powervm09:00
*** thorst has quit IRC09:06
*** thorst has joined #openstack-powervm10:04
*** thorst has quit IRC10:11
*** thorst has joined #openstack-powervm11:08
*** thorst has quit IRC11:15
*** tlian has joined #openstack-powervm11:25
*** thorst has joined #openstack-powervm11:33
*** Cartoon has joined #openstack-powervm12:03
*** Cartoon_ has quit IRC12:06
*** thorst has quit IRC12:11
*** thorst has joined #openstack-powervm12:15
*** jwcroppe has joined #openstack-powervm12:16
*** kriskend has joined #openstack-powervm12:32
<thorst> adreznec: Looks like CI is having issues.  Something seems up with keystone  12:32
*** kriskend has quit IRC12:44
*** Ashana has joined #openstack-powervm12:46
*** Ashana has quit IRC12:49
*** Ashana has joined #openstack-powervm12:50
*** Ashana has quit IRC12:50
*** Ashana has joined #openstack-powervm12:51
*** burgerk has joined #openstack-powervm13:01
*** tblakeslee has joined #openstack-powervm13:06
*** Cartoon has quit IRC13:07
*** kriskend has joined #openstack-powervm13:11
*** edmondsw has joined #openstack-powervm13:15
*** jwcroppe_ has joined #openstack-powervm13:16
*** mdrabe has joined #openstack-powervm13:17
*** jwcropp__ has joined #openstack-powervm13:17
*** jwcroppe has quit IRC13:18
*** burgerk has quit IRC13:19
*** jwcroppe_ has quit IRC13:20
*** lmtaylor1 has joined #openstack-powervm13:27
*** burgerk has joined #openstack-powervm13:30
<thorst> Ashana: adreznec and I are debating the approach to move forward.  13:34
<thorst> we're basically at a crossroads, because this can't be something unique to our env.  So we're thinking we either missed a configuration step, or...something else.  13:34
*** esberglu has joined #openstack-powervm13:34
<thorst> so the debate now is - do we rebuild the systems from the snapshots or do we investigate on the existing systems?  13:35
<adreznec> thorst: Yeah, unless this is an LXC issue specific to Power/16.04  13:35
<thorst> one of the questions is - how long does it take you to get the environment stood back up if we restore the snapshots?  13:35
<thorst> adreznec: I thought Ashana was also seeing this on the x86 controller node tho?  13:35
<adreznec> Ah right, so this would be a Xenial issue  13:35
<adreznec> If anything  13:36
<thorst> esberglu: side note: Run a quick CI run.  Something is up in keystone.  Mind debugging?  13:36
<thorst> Ashana: so let us know what the rebuild time is.  If high, we'll debug the existing environment.  13:40
<adreznec> thorst: Ashana: I'm going to do a bit more comparison between our setup and the gate environment just to look for obvious issues. If I don't find anything we should consider trying from the base images to see if we hit this same issue  13:40
<Ashana> Ok great, I'll keep looking at my cfg files to see if I see anything as well  13:41
<adreznec> Ashana: If we do go down that route, how long would it take to get back to this point again?  13:41
<adreznec> A few hours? Days?  13:41
<Ashana> a few hours, because when I had to redo the config the last time, I was finished with it by lunch. And wouldn't the network stuff still be there, because we did the capture after I did the network stuff?  13:43
<thorst> Ashana: network stuff would be there.  Everything up to the point of capture is there.  13:43
<thorst> May need to update the OSA components (in case you had any downloaded pre-capture)  13:44
<Ashana> Alright  13:45
*** apearson has quit IRC13:49
<thorst> adreznec: Let me know if I can help with the debug.  Otherwise, I'll wait to see what you come up with and assist with the snapshot restore (if needed)  13:52
*** efried has joined #openstack-powervm13:53
<adreznec> thorst: Will do  13:54
<thorst> efried adreznec: May want to see the discussion here at some point:  https://review.openstack.org/#/c/294596/  13:56
<thorst> esberglu: Let's also get your SSP set up today.  13:57
<thorst> I've got some time to do it now.  What is your system?  Neo-14?  13:58
<esberglu> @thorst: Yep  13:58
<thorst> and it does look like the failure was on stable/mitaka  13:58
<thorst> I know there are issues around that...  13:58
<esberglu> Yeah. All of the stable/mitaka builds are failing with that same keystone error  13:58
<thorst> OK - you've got that in your debug backlog?  13:59
<esberglu> yep  13:59
<thorst> rockin  13:59
<thorst> esberglu: Looks like you have some old volumes attached to your neo.  Can I wipe those out?  14:04
<esberglu> Yeah  14:04
<thorst> cool  14:05
*** apearson has joined #openstack-powervm14:23
<thorst> esberglu: SSP is created (ci_stage_ssp) for the staging env  14:34
<esberglu> thorst: Cool. I’ll reinstall today  14:35
*** apearson_ has joined #openstack-powervm14:40
*** mdrabe has quit IRC14:41
*** apearson has quit IRC14:43
<thorst> adreznec: Any updates?  Should I start with the rebuild of the system?  14:45
*** arnoldje has joined #openstack-powervm14:45
<adreznec> thorst: Yeah, though I really am thinking we're going to have to obey the device name limits... looking in the gate job for xenial, they actually are staying under the 16-char length limit  14:48
<adreznec> e.g. lxc.network.veth.pair = enstack1_eth0  14:48
<adreznec> So we're going to have to be picky with our device names to keep them at <7 chars  14:49
<adreznec> Unless we can find a way around it...  14:49
<adreznec> We can either try renaming properly or rebuilding and redeploying  14:50
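
(The constraint behind the renaming discussion above: Linux caps interface names at 15 visible characters, so a veth pair name built from a bridge/container prefix plus a suffix only fits when the prefix stays short. A minimal sketch of the check; the second example name is purely hypothetical:)

    IFNAMSIZ = 16  # kernel limit, including the trailing NUL => 15 visible chars

    def fits(ifname):
        """True if a Linux interface name stays within the kernel length limit."""
        return len(ifname) < IFNAMSIZ

    print(fits("enstack1_eth0"))          # True  - the 13-char gate example quoted above
    print(fits("br-storage_2230_eth12"))  # False - hypothetical 21-char name from a long prefix
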
*** kriskend_ has joined #openstack-powervm14:50
*** tblakeslee has quit IRC14:55
*** apearson_ has quit IRC14:57
*** Ashana has quit IRC14:58
*** mdrabe has joined #openstack-powervm14:58
*** Ashana has joined #openstack-powervm15:00
*** apearson_ has joined #openstack-powervm15:00
<thorst> adreznec: which names do we need to update?  the br-storage?  15:05
*** jwcroppe has joined #openstack-powervm15:07
*** jwcropp__ has quit IRC15:10
*** tblakeslee has joined #openstack-powervm15:12
*** jwcroppe_ has joined #openstack-powervm15:20
*** k0da has quit IRC15:22
*** jwcroppe has quit IRC15:23
*** apearson_ has quit IRC15:24
*** apearson_ has joined #openstack-powervm15:27
*** miltonm has joined #openstack-powervm15:31
<thorst> adreznec: Do we also need to change the vlan interfaces?  15:33
<thorst> esberglu: http://184.172.12.213/15/328315/1/check/nova-powervm-pvm-dsvm-tempest-full/f8a07d5/powervm_os_ci.html  15:36
<thorst> I think that this needs an actual code change in nova-powervm  15:36
<thorst> the four failures there  15:37
<adreznec> thorst: Yeah, I think we should try and keep all the interface names <=7 chars until we figure out if this is the base issue  15:37
<thorst> adreznec: OK - I'll work with Ashana to make this happen  15:38
<esberglu> thorst: Yeah, it looks like it’s expecting a different parameter when adding the host  15:38
<thorst> esberglu: Want to propose the change there?  15:38
<thorst> I think it's just byte typing  15:39
*** Ashana has quit IRC15:51
*** Ashana has joined #openstack-powervm15:52
*** Ashana has quit IRC15:56
*** Ashana has joined #openstack-powervm16:03
*** Ashana has quit IRC16:07
<esberglu> thorst: Huh? Not that familiar with nova-powervm yet.  16:08
*** Ashana has joined #openstack-powervm16:26
*** mdrabe has quit IRC16:30
*** mdrabe has joined #openstack-powervm16:30
*** apearson_ has quit IRC16:30
*** Ashana has quit IRC16:30
*** Ashana has joined #openstack-powervm16:32
*** Ashana has quit IRC16:36
*** Ashana has joined #openstack-powervm16:38
*** Ashana has quit IRC16:42
*** tblakeslee has quit IRC16:42
*** Ashana has joined #openstack-powervm16:44
*** apearson_ has joined #openstack-powervm16:45
*** tblakeslee has joined #openstack-powervm16:47
*** Ashana has quit IRC16:48
*** Ashana has joined #openstack-powervm16:49
*** Ashana has quit IRC16:52
*** Ashana has joined #openstack-powervm16:52
<thorst> esberglu: I know.  Let's get you familiar  :-D  16:54
<thorst> I think it's a simple change, which is why I'm recommending it  16:54
*** tblakeslee has quit IRC16:57
*** kriskend has quit IRC17:02
*** kriskend_ has quit IRC17:03
*** Ashana has quit IRC17:06
<thorst> adreznec: Ashana and I are trying again from my office...we changed br-storage to br-st and br-vxlan to br-vx  17:10
<thorst> so far...so good...  17:10
*** Ashana has joined #openstack-powervm17:11
<thorst> nevermind...  17:11
*** k0da has joined #openstack-powervm17:32
<adreznec> thorst: Where did it break down this time?  17:36
<thorst> same spot  17:37
<thorst> but I see why - it was the ens0.2230.  I had no idea why that one mattered, but it makes sense now  17:38
<thorst> so I swapped those onto vlan 30, 31 and 32...which I think will meet the requirements  17:39
<thorst> I know that was something tried earlier, but the VLANs weren't strung out for 30, 31 and 32...so I just did that  17:39
*** kriskend has joined #openstack-powervm17:54
*** kriskend_ has joined #openstack-powervm17:54
*** tblakeslee has joined #openstack-powervm17:55
<thorst> adreznec: Looks like you have to wipe the original containers out to retry  17:56
<adreznec> thorst: Ah yeah, it doesn't really handle cleanup  17:57
<adreznec> That's why I'd wiped them out by hand before  17:57
<thorst> adreznec: well, I hadn't had that tidbit of info  ;-)  18:01
<adreznec> thorst: Ah sorry, I thought I mentioned that in Slack earlier  18:01
*** apearson_ has quit IRC18:02
<thorst> adreznec: probably, depends on if I listened  18:02
*** apearson_ has joined #openstack-powervm18:03
*** k0da has quit IRC18:07
*** apearson_ has quit IRC18:34
*** apearson_ has joined #openstack-powervm18:43
*** tblakeslee has quit IRC19:04
<thorst> esberglu: I think the error would be from here:  https://github.com/openstack/nova-powervm/blob/master/nova_powervm/virt/powervm/host.py#L75  19:08
<thorst> perhaps in how we're returning the name.  19:08
*** tblakeslee has joined #openstack-powervm19:09
<thorst> arnoldje: Did you see any benefits from the loghandler change?  19:14
<esberglu> thorst: Can you explain how you came to that?  19:21
<thorst> esberglu: Well, mostly I know that this is where we set the host name.  And given the 'AggregatesAdminTestJSON' tests failing on something around the host name...that seems likely  19:25
<thorst> now with that said... the error is: "Invalid input for field/attribute host. Value: 8247-22L*212E5DA. u'8247-22L*212E5DA' does not match '^[a-zA-Z0-9-._]*$'"  19:26
<thorst> so it almost looks like the test is bad?  19:26
<thorst> it looks like it's a regex that's not being evaluated as a regex?  19:26
<thorst> or maybe it's the presence of the * in our name that throws it off...  19:27
<esberglu> I think the regex is expecting an underscore in our name  19:30
<thorst> expecting or allowing?  19:31
<thorst> it looked to me like it allows any characters a-z, A-Z, 0-9, periods and _'s  19:32
<efried> and hyphen  19:32
<efried> But not splat.  19:32
<thorst> right.  19:32
<efried> I thought we sanitized our hostnames.  19:32
<thorst> so the splat in our name, I think, is throwing it off  19:32
<efried> Agree.  19:32
<thorst> efried: we don't.  19:32
<efried> Then that's a bug on our part.  19:32
<thorst> instance names we do, but not the server host name  19:32
<efried> it would seem.  19:32
<thorst> efried: I don't know that I agree...  19:33
<efried> or a regression in community code  19:33
<thorst> efried: I think this is just a new test  19:33
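
(For reference, the failure is easy to reproduce with the pattern quoted verbatim in thorst's error message above; a quick sketch, where the second, splat-free value is only illustrative:)

    import re

    # Pattern quoted in the error above, applied to the raw managed-system name.
    pattern = re.compile(r'^[a-zA-Z0-9-._]*$')

    print(bool(pattern.match('8247-22L*212E5DA')))  # False - '*' (splat) is not in the allowed set
    print(bool(pattern.match('8247-22L-212E5DA')))  # True  - letters, digits, '-', '.', '_' all pass
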
*** seroyer has joined #openstack-powervm19:33
<efried> svenkat, thorst, erlarese: Would like to brainstorm ideas for nova/networking-powervm implementation for SRIOV.  19:35
<efried> [1] https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking  19:35
*** svenkat has joined #openstack-powervm19:36
<efried> This assumes a couple of things.  1) That the adapter is assigned to the "hypervisor"; 2) the "hypervisor" is a linux partition; 3) You have to carve out the VFs beforehand.  19:36
<efried> I think we can do better than that.  19:36
<efried> But  19:36
<efried> I think we can use the existing pci_passthrough_whitelist to do it.  19:36
<efried> Here's what I'm thinking:  19:36
<efried> pci_passthrough_whitelist is already set up to allow a (list of) dict(s) that map "devname" to "physical_network".  19:37
<efried> Where 'devname' (or 'address') is supposed to represent a "PF".  19:38
<thorst> efried: So I haven't read through all of this.  So I apologize if my questions are absurd  19:38
<thorst> NovaLink == a linux partition that 'kinda' represents the hypervisor  19:38
<efried> But the SRIOV adapters aren't attached to the Novalink.  19:38
<thorst> right...  19:38
<efried> They belong to the platform.  19:38
<thorst> so can we just obfuscate that?  19:38
<efried> That's where I'm going.  19:38
<efried> What we need in order to implement SRIOV - both direct VF-to-VM and vNIC - is a way to map physical ports to physical networks.  19:39
<thorst> agree.  19:39
<efried> In [1], 'devname' is e.g. 'eth0' - the PF as represented in the Linux hypervisor; or 'address' is a PCI domain:bus:slot.function spec.  19:40
<efried> So I'm thinking we piggyback on pci_passthrough_whitelist.  The "physical_network" semantic stays as is.  We need a way to refer to the physical port within the same dict.  19:41
<efried> So we could a) override "devname", b) override "address", c) introduce a new key  19:41
<efried> And the value would identify the pport via x) its location code, or y) some parseable combo of its adapter ID / phys port ID.  19:42
<thorst> efried: the physical_network is something that maps to neutron.  I'd argue devname is probably what we want to override  19:42
<efried> The dict should still use physical_network.  19:43
<thorst> efried: yeah.  19:43
<efried> I thought that part was a given.  19:43
<thorst> yep yep  19:44
<thorst> address seems very specific.  So I don't think we do that one...  19:44
<svenkat> I am back. so we will have the whitelist in nova conf?  19:44
<efried> Arguably use physloc in there.  It may also be possible for us to translate our physloc to/from a PCI spec.  19:44
<efried> svenkat, that's the idea.  19:45
<svenkat> ok.  19:45
<svenkat> if i do lspci on novalink, will i be able to list sriov card details on it?  19:46
<erlarese> are administrators required to manually set this up in nova conf, or would the code generate the mapping under any circumstances?  19:46
<efried> svenkat, no, because the cards aren't assigned to the novalink.  19:46
<efried> If you assigned them to the novalink, I imagine you could.  19:46
<thorst> erlarese: I'd assume they set it; now if openstack-ansible were to set it for them...that's another thing  19:46
<efried> erlarese, admins will be required to set up *something*.  Current proposal under discussion is to allow them to set it up in a relatively familiar way in a familiar place.  19:47
*** k0da has joined #openstack-powervm19:47
<erlarese> right, so we can't generate the mapping because we have no knowledge of what physical ports are plugged into what physical networks, right?  19:47
<efried> Correct.  They will have to give that to us.  And this is what we're discussing how to do via pci_passthrough_whitelist.  19:47
<svenkat> also, we need to discuss supported_pci_vendor_devs in ml2_conf_sriov.ini once we are done with the whitelist.  19:48
<efried> The user will use that to tell us "this physical port maps to that physical network".  And we can do the rest.  19:48
<efried> So a) means we're overriding the semantic of "devname" a bit - it's a physical port identifier, not a PF dev name on the hypervisor.  19:48
<efried> b) means we're either overriding the semantic of "address", OR doing what may be a nontrivial amount of work to translate to/from physloc;  19:48
<efried> c) means we're introducing a new key that would be outside the comfort zone of existing admins.  19:48
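
(To make options a/b/c concrete, a rough sketch of how such whitelist entries would read: the first is the documented KVM-style format, the second reuses "devname" for a PowerVM physical-port location code per option a); the location code value is illustrative only:)

    import json

    # Documented KVM-style entry: "devname" names the PF on the hypervisor.
    kvm_entry = json.loads('{"devname": "eth0", "physical_network": "physnet1"}')

    # Option a) from the discussion: same keys, but "devname" carries a physical-port
    # identifier instead of a Linux PF name.  The location code below is made up.
    pvm_entry = json.loads(
        '{"devname": "U78C7.001.WZS0001-P1-C2-T3", "physical_network": "physnet1"}')

    for entry in (kvm_entry, pvm_entry):
        print("%s -> %s" % (entry["devname"], entry["physical_network"]))
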
<efried> There's another option: instead of identifying pports, we identify pport *labels*.  19:50
<efried> Options a) or c) above would still pertain.  19:50
<efried> But  19:50
*** tblakeslee has quit IRC19:50
<efried> A label would actually point to N pports, which need to have been preconfigured with that label.  19:51
<efried> So instead of having one entry per physical port, I would have one entry per label.  (Usually one label per phys net, but I suppose it's possible to have multiple labels on the same phys net.)  19:52
<erlarese> so if an OpenStack cloud had an environment where POWER systems w/ SRIOV and x86 systems w/ SRIOV co-existed, we would need some format that accommodates both?  Or would that not matter, since each host has its own conf?  19:53
<thorst> efried: I prefer mapping as close to one-to-one with KVM.  Which means a)  19:53
<thorst> erlarese: wouldn't "matter" because each has their own  19:54
<efried> thorst, brings up an interesting tangent, though: we may be able to use both drivers at once.  19:54
<thorst> both 'drivers'?  19:55
<thorst> KVM must be able to co-exist with PowerVM (and vice versa)  19:55
<efried> Those would be on separate hosts.  19:56
<efried> I'm talking about pvm_vf and pvm_vnic  19:56
<efried> But before we go there, let's walk through how the whitelist info gets used for each.  19:56
<esberglu> thorst: So how can I sanitize the name to remove the *? Won’t that cause problems elsewhere when trying to use the sanitized value instead of the original one?  19:56
<efried> First, pvm_vf (direct-attach VF to VM): User requests VM connection to a particular physical network by name.  pvm_vf driver looks up which devname(s) are associated with that phys net.  Driver does PUT LogicalPartition/{uuid}/SRIOVEthernetLogicalPort where payload contains adapter ID + pport ID.  Done.  19:58
<thorst> esberglu: I think we should do two things.  1) Ignore those tests for now.  2) Have you propose a change to Tempest that includes the * in valid host names for those tests  19:59
<efried> In this case, I think it's up to the user to decide, by associating 1 or N pports with a phys_net, whether he gets 1 or N VFs created.  If N, he's agreeing that the VM is responsible for NIB.  And in either case, no mobility with pvm_vf.  20:01
<svenkat> efried: fine. also close on how this VF will be attached to the VM. i think what you described is only creation of the VF?  20:01
<efried> svenkat, above PUT does the create *and* the attach.  20:01
<efried> it's one op.  20:01
<svenkat> efried: ok… so we need to provide VM details during the PUT op…  20:02
<svenkat> oh sorry. got it  20:02
<efried> That's the LogicalPartition/{uuid} part.  20:02
<svenkat> yes..  20:02
<efried> thorst, svenkat: any reaction to the 1/N VFs for direct attach thing?  20:03
<svenkat> nope, setting up NIB on the vm will be outside of the scope..  20:03
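
(A minimal sketch of the pvm_vf flow efried describes above - look up the pports mapped to the requested physical network, then one PUT per port, which both creates the VF and attaches it to the LPAR. The rest_client helper and payload keys are hypothetical stand-ins, not the real REST schema:)

    def plug_direct_vf(rest_client, whitelist, lpar_uuid, physical_network):
        """Hypothetical helper, not real driver code.

        whitelist: list of dicts like {"devname": <pport id>, "physical_network": <name>}
        rest_client.put: stand-in for the PowerVM REST call; payload keys are illustrative.
        """
        # Every physical port the operator mapped to the requested network.
        pports = [e["devname"] for e in whitelist
                  if e["physical_network"] == physical_network]
        if not pports:
            raise ValueError("no physical port mapped to %s" % physical_network)

        # One PUT per port: the operation both creates the VF and attaches it to the LPAR.
        for pport in pports:
            rest_client.put(
                "LogicalPartition/%s/SRIOVEthernetLogicalPort" % lpar_uuid,
                payload={"physical_port": pport})
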
<efried> Here's where we should maybe look back at the discussion of whether the whitelist entry is for a pport or a label.  20:06
<thorst> efried: I feel one entry per pport, not label (personally)  20:08
<efried> If it's pport, we're stuck with "create VFs for all N pports on this phys net" (or "create the VF on one randomly-selected one", which I don't think we want).  I don't think there's another way for the user to specify extra specs for network creation.  20:08
<efried> Whereas if it's label, the user can provide different labels for the same physnet.  20:08
<efried> uhh, cancel.  That would be bad.  20:09
<thorst> right...  20:09
<thorst> it's not so much that it would be randomly selected...  20:09
<svenkat> labels - they already exist for pports? who creates them  20:09
<thorst> we'd have to figure out which to assign it to, I guess...  20:09
<thorst> I guess a label would allow us to potentially solve the redundancy issue  20:09
<thorst> (where we need two ports)  20:09
<thorst> (but one vnic)  20:10
<efried> thorst, that's for pvm_vnic, which is what I'm getting to next.  20:10
<efried> svenkat, label (and sublabel) are fields available on the physical port.  20:10
<svenkat> ok…  20:10
<efried> Originally the thought was that the user would preconfigure them outside of the auspices of the community code.  20:10
<efried> akin to SEA setup  20:10
<efried> but  20:10
<efried> I'm thinking we may not need this.  20:10
<efried> It would be neat if the user could do *only* the nova.conf config of pci_passthrough_whitelist, with no other preconfig - and I think we can accomplish that.  20:11
<efried> viz:  20:11
<efried> For pvm_vnic, the user associates 1..N pports with each phys net via the whitelist.  20:12
<efried> When the user asks to attach a phys net to a VM, we go find all the pports associated with that physnet, and create the vNIC via PUT LogicalPartition/{uuid}/VirtualNICDedicated with a payload of N [VIOS + adapter ID + pport ID].  20:13
<efried> I think we once again use all available pports in this scenario, creating a dot-product across all available VIOSes, distributing cards across VIOSes as much as possible.  20:14
<efried> Anti-affinity algorithm.  Should be relatively straightforward to code up.  20:14
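
(One simple way to code up the anti-affinity rotation mentioned above, assuming the inputs are plain lists rather than real pypowervm wrappers:)

    from itertools import cycle

    def pair_pports_with_vioses(pports, vioses):
        """Pick a backing VIOS for each physical port, spreading cards across VIOSes.

        pports: list of (adapter_id, pport_id) tuples mapped to the physical network.
        vioses: list of VIOS identifiers.
        """
        # Sort so ports on the same adapter are adjacent, then rotate through the
        # VIOSes so those ports land on different VIOSes where possible.
        vios_cycle = cycle(vioses)
        return [(next(vios_cycle), adpt, port) for adpt, port in sorted(pports)]

    print(pair_pports_with_vioses([(1, 0), (1, 1), (2, 0)], ["vios1", "vios2"]))
    # [('vios1', 1, 0), ('vios2', 1, 1), ('vios1', 2, 0)]
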
<efried> So labels:  20:15
<efried> We want to be able to use these for migration  20:16
<thorst> +1  20:16
<efried> Question is (thorst): Can we count on the phys net having the same name on the dest as the source?  20:16
<thorst> efried: Yes.  20:16
<svenkat> so labels should match between source and destination for migration? (like vswitchnames in SEA today)  20:16
<efried> Perfect.  20:16
<efried> Then the user doesn't even need to be aware of labels.  We can consume the whitelist and set the labels automatically to (some derivation of) the phys net name.  20:17
<efried> Or, you know, not.  20:17
<efried> Like, I'm not even sure we *need* to use the labels.  20:17
<efried> Cause the user would have had to associate the destination's phys ports with the phys nets in the same way.  20:17
<efried> svenkat, that's what I'm trying to brainstorm right now.  At one time, we thought we were going to need to use the labels kinda like we use WWPN:fabric_name mappings for NPIV.  But that may not be necessary in this case - see above.  20:18
<thorst> efried: Yeah, I'm not sure we need them...but I kinda like the idea of setting them based on the OpenStack config  20:19
<thorst> makes operator debug easier  20:19
<efried> thorst, I can dig that idea.  20:19
<efried> So let me go back just one more time to the idea of using labels as the key in the whitelist.  20:19
<efried> Cons: 1) Less like existing impls; 2) Requires user preconfig (user has to set labels outside of the auspices of OpenStack).  20:20
<efried> But  20:20
<efried> The major pro is that the user could then conceivably twiddle around the phys ports on the fly, without having to change the config and restart the drivers.  20:20
<efried> E.g. if I have to replace a card  20:21
<efried> or if I want to change my redundancy profile.  20:21
<erlarese> I'm not sure adding that extra layer of abstraction will be valuable for most users, but perhaps that's just me trying to simplify the implementation  20:22
<efried> erlarese, agree KISS.  It wouldn't be out of the question to support both mechanisms at once, but that's also complicated.  20:23
<efried> So stake in the ground: support pport only for now; consider the other for future enhancement if called for.  20:24
<efried> So (thorst) back to how we identify the pport.  20:24
<thorst> efried: good question.  20:25
<thorst> we can't follow the pattern they have in KVM  20:25
<efried> UNLESS we come up with a way to translate to a PCI spec-style "address".  20:26
<efried> ...which we would have to spit out in pvmctl sriov list.  20:26
<efried> ...next to physloc  20:26
<efried> seroyer: is there a way to map physloc to standard PCI-style domain:bus:slot.function?  20:27
<thorst> efried: damn, you're running through these items in style  20:27
<efried> Almost seems like MTMS-P1-C2-T3 would map to MTMS:1:2.3  20:28
<efried> ...except we wouldn't use the .3 because that would be the VF.  20:28
<thorst> MT:MS:1:2?  20:29
<efried> thorst, not sure.  20:33
<efried> I thought the MTMS was the enclosure (I/O drawer), P was the I/O planar, and C was the slot thereon (1:1 with the card); so T would be the phys port.  20:34
<efried> So you may be right.  seroyer, yt?  20:35
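
(For concreteness, the speculative physloc breakdown being discussed could be prototyped as below; the sample location code is made up, and the caution later in the discussion about parsing location codes applies, so treat this purely as illustration:)

    import re

    # Speculative parse of the trailing planar/slot/port components of a location code.
    LOC_RE = re.compile(r'^(?P<enclosure>.+)-P(?P<planar>\d+)-C(?P<slot>\d+)-T(?P<port>\d+)$')

    def parse_physloc(physloc):
        m = LOC_RE.match(physloc)
        return m.groupdict() if m else None

    print(parse_physloc("U78C7.001.WZS0001-P1-C2-T3"))
    # {'enclosure': 'U78C7.001.WZS0001', 'planar': '1', 'slot': '2', 'port': '3'}
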
<thorst> efried svenkat: Please document ALL of this in the blueprint as well  20:36
<efried> For sure.  20:36
<thorst> this whole discussion seems important to answering many of my q's in the bp  20:36
*** tlian2 has joined #openstack-powervm20:39
<svenkat> sure.. i will update the BP with this information  20:39
<svenkat> efried: anything else on this topic? i will pick these up, consolidate, and update the nova-powervm bp  20:41
<efried> thorst, svenkat, I believe I talked myself out of being able to use both pvm_vf and pvm_vnic at the same time.  I can't see how we would differentiate.  20:41
<thorst> efried: I believe sticking to one or the other is ideal initially  20:42
<thorst> you can always add more later...but for now...I'm OK with just one at a time  20:42
<efried> svenkat, yes, open question still on how we key the physical port in the whitelist entry.  Need to find out whether there's a deterministic mapping from physloc to PCI-style address.  20:42
*** tlian has quit IRC20:42
<thorst> as long as we have both options  20:42
<efried> thorst, dig.  20:42
<efried> svenkat, thorst, another open question is whether we should override the existing 'direct' option with this thing we've been calling 'pvm_vf'.  20:43
<thorst> efried: I think so?  Cause that is a VF in KVM right?  20:44
<seroyer> efried: Not really, no.  20:44
<efried> thorst, I think so.  I'm far from expert here, but I believe that use case maps as close as we're ever going to get.  20:44
<efried> Then we're only adding one driver name, pvm_vnic.  20:45
<efried> seroyer, dangit, no help there?  20:45
<seroyer> You don’t want to do anything with parsing location codes.  They are extremely reliable on IBM branded Power systems.  They are completely unreliable on OpenPower systems.  20:45
<efried> seroyer, What happens if I assign a whole pport to the novalink partition and say lspci?  20:46
<efried> Not that I necessarily think we should do this, but out of curiosity...  20:46
<seroyer> Whole port or fraction of a port makes no difference.  It’s a VF.  20:47
<seroyer> Whole card is different.  20:47
<efried> But presumably lspci would give me something deterministic to work with at that point?  20:47
*** k0da has quit IRC20:47
<seroyer> Yes.  But (sorry, not caught up in the history), it is not at all clear to me how that really helps you.  20:48
<efried> By deterministic, I mean that any time I assign the same pport to the NL partition, it would always show up the same in lspci.  20:48
<seroyer> No idea, sorry.  20:48
<efried> seroyer, sure, let me recap/filter so you don't have to wade through the gorp.  20:48
<efried> We want to use the existing pci_passthrough_whitelist specification in the nova.conf file to map SRIOV physical ports to physical network names.  20:48
<efried> We're trying to figure out a good way to identify the physical port.  20:49
<efried> We'd like to do it in a way that's familiar to existing nova users, but also usable for Power people.  20:49
<efried> Using the physloc is one idea, but then we either have to override the semantic of "devname" (which in KVM is the device name of the PF on the hypervisor - e.g. "eth0"); override the semantic of "address" (by using physloc therein); or use "address" as it's defined, with a PCI-style domain:bus:slot.function spec.  20:50
<efried> Or use a brand new key.  20:51
<esberglu> thorst: efried: Either of you know where that regex is actually defined? Having trouble finding it  20:51
<efried> The "PCI style" would have been neat from a user point of view because we wouldn't have to redefine existing fields or introduce new ones, but we would be able to spit out that value alongside the physloc in pvmctl sriov list.  20:51
<efried> esberglu, regex for the test case, or regex for an MTMS?  20:52
<esberglu> test case  20:52
<thorst> esberglu: Not off hand.  20:52
<efried> esberglu, Lemme see if I can find it.  20:52
<efried> esberglu, what's the name of the test case?  20:53
<thorst> esberglu efried: wait a sec...this may be a nova thing.  20:53
<seroyer> efried, let me think on it and confer with someone.  I have an idea that is more reliable than location codes.  20:53
<thorst> nova/api/validation/parameter_types.py - line 204  20:53
<thorst> I need to run...but will check back later.  If that is the case, we may need to change how we name our hosts.  20:54
<seroyer> efried (sneak peek: use DRC index instead)  20:55
<thorst> DRC indexes...my favorite...  20:56
<efried> seroyer, that doesn't actually help unless there's a way to map that to the PCI-style address.  20:56
<seroyer> Yep.  DRC index has a bus component and a slot component for PCI slots.  20:56
<efried> I don't have a problem using physlocs as an opaque string.  I think that would be the most useful thing for Power users.  20:56
<efried> oo, cool.  20:56
*** thorst has quit IRC20:56
<efried> seroyer, if we can nail that down, I'm greatly interested.  20:57
<seroyer> Ok.  20:57
<efried> lmtaylor1, you're doing something with "PCI specs"?  20:57
*** apearson_ has quit IRC21:01
*** thorst has joined #openstack-powervm21:03
*** apearson_ has joined #openstack-powervm21:04
*** svenkat has quit IRC21:05
*** thorst has quit IRC21:07
*** k0da has joined #openstack-powervm21:10
<lmtaylor1> @efried: not currently, but thorst mentioned something about working on the PCI spec with you  21:22
*** lmtaylor1 has quit IRC21:25
*** apearson_ has quit IRC21:35
*** thorst has joined #openstack-powervm21:35
*** apearson_ has joined #openstack-powervm21:36
*** thorst_ has joined #openstack-powervm21:39
*** thorst has quit IRC21:39
*** thorst_ has quit IRC21:43
*** seroyer has quit IRC21:44
<efried> lmtaylor1, do you have any background on what that means?  21:49
<efried> mm, never mind.  21:49
*** mdrabe has quit IRC21:58
*** seroyer has joined #openstack-powervm22:00
*** k0da has quit IRC22:02
<efried> esberglu, yt?  22:05
<esberglu> Yep  22:05
<efried> It looks to me like nova-powervm itself doesn't rely on cross-referencing the system hostname anywhere.  22:07
*** edmondsw has quit IRC22:07
<efried> If I put up a change set, as things stand right now, it'll get run through the CI with that test enabled, right?  22:07
<esberglu> Yep  22:09
<esberglu> Until that skip test change goes in  22:10
<openstackgerrit> Eric Fried proposed openstack/nova-powervm: WIP: Sanitize Managed System Hostname  https://review.openstack.org/329205  22:10
<efried> thorst, esberglu: ^^  22:10
<efried> esberglu, once that guy finishes CI, see if it passes that particular test.  22:11
<efried> Also look for any additional failures beyond previous "normal" runs.  22:11
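
(As a rough illustration of the kind of sanitization being proposed in the WIP change above - not the actual patch - replacing anything the nova host-name pattern rejects could look like:)

    import re

    def sanitize_hostname(name):
        """Illustrative only: replace characters the nova host-name pattern rejects."""
        return re.sub(r'[^a-zA-Z0-9.\-_]', '-', name)

    print(sanitize_hostname('8247-22L*212E5DA'))  # 8247-22L-212E5DA
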
*** arnoldje has quit IRC22:22
*** seroyer has quit IRC22:26
*** burgerk has quit IRC22:30
*** seroyer has joined #openstack-powervm22:35
*** seroyer has quit IRC22:38
*** k0da has joined #openstack-powervm22:55
*** Ashana has quit IRC22:58
*** Ashana has joined #openstack-powervm23:05
*** Ashana has quit IRC23:09
*** kriskend has quit IRC23:09
*** kriskend_ has quit IRC23:10
*** Ashana has joined #openstack-powervm23:10
*** Ashana has quit IRC23:15
*** k0da has quit IRC23:16
*** Ashana has joined #openstack-powervm23:16
*** tlian2 has quit IRC23:20
*** Ashana has quit IRC23:21
*** Ashana has joined #openstack-powervm23:22
*** Ashana has quit IRC23:27
*** Ashana has joined #openstack-powervm23:28
*** Ashana has quit IRC23:33
*** Ashana has joined #openstack-powervm23:34
*** Ashana has quit IRC23:38
*** Ashana has joined #openstack-powervm23:40
*** Ashana has quit IRC23:44
*** Ashana has joined #openstack-powervm23:46
*** Ashana has quit IRC23:51
*** Ashana has joined #openstack-powervm23:51
*** Ashana has quit IRC23:56
*** Ashana has joined #openstack-powervm23:57
