19:03:14 #startmeeting PCI passthrough
19:03:15 Meeting started Mon Nov 25 19:03:14 2013 UTC and is due to finish in 60 minutes. The chair is sadasu. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:19 The meeting name has been set to 'pci_passthrough'
19:03:30 #chair baoli
19:03:31 Current chairs: baoli sadasu
19:03:41 hi
19:03:43 hi everyone, welcome to the meeting
19:04:19 now that we have a google doc: https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs/edit?usp=sharing
19:04:35 #topic design google doc
19:04:42 can we go through the following? 1: any questions with the current nova-passthrough 2: define the PCI assignable list on compute nodes: -- associate a device with a 'group' -- define pci-devfilter -- device 'feature' auto-discovery -- pci alias no longer defined on the controller 3: scheduling -- based on 'group' -- pci requester -- pci passthrough only
19:05:34 first of all, any questions with the current nova-passthrough implementation?
19:06:15 good summary in the doc, no questions from me
19:08:37 thanks, irenab. Looks like we are all on the same page with the current nova-passthrough
19:11:16 regarding what is currently being proposed, it appears everyone wants pci-alias
19:11:23 to be pci-group
19:11:35 is that correct?
19:11:42 let's move on to the pci assignable list on the compute node
19:12:05 yes, pci-groug seems to be the term to go with
19:12:11 sorry, pci-group
19:12:25 I am confused about the terms used for compute and controller
19:12:48 we use pci_alias for the controller and pci-group for compute?
19:13:10 the compute node runs nova-compute, and the controller runs nova-scheduler/nova-api
19:14:01 we are suggesting that the PCI alias is no longer defined on the controller
19:14:37 @baoli: no longer defined at all?
19:15:20 pci-group will be used instead?
19:15:33 yes
19:15:58 an assignable list per compute node:
19:16:18 pci-group has cloud-wide significance
19:17:02 @baoli: just to be sure, it's up to the admin to manage this properly on each compute node?
19:18:04 pci-group (the name, and its underlying implementation) is defined cloud-wide; the mapping from devices on each compute node is defined by the assignable list.
19:20:29 @baoli: so pci-group will be used for scheduling purposes, while pci-devfilter will be used for getting the list of suitable devices on the compute node?
19:21:56 That sounds about right to me. pci-group is used for both scheduling and actual allocation on the compute node that is to run the instance. It's also used by the feature to map to the physical network in the case of neutron
19:22:59 @baoli: can you please define what you mean by feature?
19:23:24 is it something like device-type in the current implementation?
19:24:33 I cannot find the right term for it. But yes, it is similar to the device-type. On the other hand, it means neutron (for network), and possibly cinder (for storage), for example
19:26:20 @baoli: so there is a predefined list of 'features' as you stated.
19:27:17 That's the current thinking, if device 'feature' auto-discovery is possible. It's possible for network devices since they are all mapped to ethernet interfaces on a linux host
19:28:13 I think it would be correct to say they are mapped to network interfaces, but not necessarily ethernet.
19:29:06 we have Infiniband interfaces if the NIC is connected to an IB physical network
19:29:31 with mellanox, you mentioned an infiniband type of interface. how are they represented in the linux device hierarchy?
19:29:50 irenab: yes, network interfaces and not a specific type of network interface
19:30:25 @baoli: I'll put an example in the google doc tomorrow once I have access to the machine
19:31:04 @baoli: You mean under /sys/class/net?
19:31:18 #itzikb, yes
19:31:39 @baoli: ib0, ib1, etc.
19:32:12 @baoli: with a link to the actual device, like the ethernet interface
19:33:21 That sounds ok to me because following the symbolic link leads to a pci device, and it's within the class 'net'
19:33:36 @baoli: correct
19:34:22 @baoli: So the thinking is to go over /sys/class/net for the network 'feature'?
19:35:30 So if libvirt does it, we can use it, as yl has commented on the doc. Otherwise, I don't know why we can't, as long as '/sys/class/net' is a portable implementation
19:36:36 I think libvirt has it - but someone can correct me if I'm wrong
19:37:01 #baoli: I think both ways should work as long as there is no assumption of a specific (Ethernet) network type
19:37:45 #itzikb, I didn't find it in the version of libvirt I have. Maybe the latest libvirt version supports it. Regardless, it's possible to figure out the 'class' of a pci device
19:38:13 And subsequently use that device for the feature
19:39:39 my feeling is that the community will favor libvirt, but since we have more than one way to solve this, shall we continue to the next item?
19:40:20 cool
19:40:42 let's discuss pci-devfilter
19:41:36 does the name/term sound right to you guys?
19:42:16 #baoli: seems fine
19:43:45 I think we're missing yongli here, who had some concerns
19:44:35 #irenab, I'm hoping to find a meeting time that he can join
19:45:19 Any questions on scheduling?
19:45:52 before that, I think there were concerns about the pci-devfilter format
19:46:21 @baoli: can we discuss the request part a bit before scheduling?
19:46:29 Sure
19:47:25 the goal here is to avoid having the admin keep entering repetitive fields
19:48:01 #sadasu, agreed
19:48:22 you can use * or a range for each field, and the later parts can be missing
19:48:51 I did not understand yongli's concern here, or even whether he has a concern
19:49:15 Ian does not seem to like the format
19:49:55 #sadasu, it's just an option. it's open for better ideas
19:50:08 #sadasu: Does he have a suggestion?
19:50:25 baoli: agreed. I am just trying to understand the concerns so we can make it better
19:51:18 @baoli: can you please explain how you see the admin request for a VM with a PCI device as a vNIC on net1 on physical network=phynet?
19:51:47 @irenab: not sure.
19:52:41 @baoli: I mean, does he define any extra-spec on a flavor, is the neutron port precreated, etc.
19:52:53 @irenab, sure. Assuming it's using VLAN
19:53:28 when you create a neutron network, it's assigned a vlan and thus mapped to the physical net, agreed?
19:53:47 @baoli: that does not matter. I mean what management operations the admin does
19:54:24 he created a network on phynet and it was allocated a VLAN.
19:54:52 Then he creates a port with vnic_type SRIOV?
19:55:35 --nic net-id=, vnic-type=sriov
19:56:10 --nic net-id=, vnic-type=sriov, pci-group=net1
19:56:30 @baoli: great. That's all? Or should he also define an extra-spec on some flavor?
19:56:37 no
19:57:10 @baoli: meaning, that's all?
19:57:32 yes, no flavor is needed
19:57:45 I mean no flavor is needed for networking purposes
19:57:51 @baoli: great, thanks.
19:58:16 #irenab, would it work for mellanox?
19:59:21 @baoli: for what we have discussed till now, it seems very generic and will work.
19:59:53 I think that our time is up shortly.
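[Editor's note: a minimal sketch of the /sys/class/net approach discussed above. Each entry under /sys/class/net has a 'device' symlink that resolves to the backing PCI device, which is how an interface (eth0, ib0, ...) can be tied to a PCI address and the 'net' class. The function names are illustrative only and are not from the nova code or the meeting.]

    import os

    SYSFS_NET = "/sys/class/net"

    def pci_address_for_interface(ifname):
        """Return the PCI address backing a network interface, or None.

        Follows /sys/class/net/<ifname>/device, a symlink to the PCI
        device directory (e.g. .../0000:03:00.0) for PCI-backed NICs.
        """
        dev_link = os.path.join(SYSFS_NET, ifname, "device")
        if not os.path.exists(dev_link):
            # virtual interfaces (lo, bridges, ...) have no device link
            return None
        return os.path.basename(os.path.realpath(dev_link))

    def network_pci_devices():
        """Map each PCI-backed interface (ethernet, ib0/ib1, ...) to its PCI address."""
        result = {}
        for ifname in os.listdir(SYSFS_NET):
            addr = pci_address_for_interface(ifname)
            if addr:
                result[ifname] = addr
        return result

    if __name__ == "__main__":
        for ifname, addr in network_pci_devices().items():
            print("%s -> %s" % (ifname, addr))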
#itzikb, please let me know how to retrieve the device 'class' or 'type' from libvirt if you find out.
20:00:04 #irenab, are you able to edit the google doc?
20:00:04 @baoli: for the rest described in the doc regarding libvirt and neutron, we need to elaborate more
20:00:38 @irenab, I will try, or if you can edit, go ahead and edit it with what you think would clarify it
20:01:18 #endmeeting
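[Editor's note: regarding baoli's open question on retrieving a device's 'class' from libvirt, the node-device API does group host devices by capability ('pci', 'net', ...). A minimal sketch, assuming the libvirt-python bindings are available; whether the libvirt version deployed at the time exposed this was left unresolved in the meeting, and the helper names here are illustrative.]

    import libvirt

    def list_net_node_devices(uri="qemu:///system"):
        """Return (device name, parent device) pairs for host devices in the 'net' class."""
        conn = libvirt.open(uri)
        try:
            pairs = []
            # listDevices(cap, flags) filters node devices by capability type
            for name in conn.listDevices("net", 0):
                dev = conn.nodeDeviceLookupByName(name)
                # the parent of a 'net' node device is typically the PCI
                # device it belongs to, e.g. pci_0000_03_00_0
                pairs.append((name, dev.parent()))
            return pairs
        finally:
            conn.close()

    if __name__ == "__main__":
        for name, parent in list_net_node_devices():
            print("%s (parent: %s)" % (name, parent))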