15:05:57 #startmeeting libvirt
15:05:58 Meeting started Tue Oct 28 15:05:57 2014 UTC and is due to finish in 60 minutes. The chair is danpb. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:05:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:06:01 The meeting name has been set to 'libvirt'
15:06:10 * danpb curses DST shifts
15:06:26 :)
15:06:54 anyone here for the libvirt meeting besides vladikr
15:06:58 o/
15:07:11 s1rp: sew are around as well
15:07:13 o/
15:07:24 o/
15:08:56 ok, so only 3 topics in the agenda, all from vladikr so far
15:09:19 #topic Multiple vnic drivers per guest
15:09:26 vladikr: go ahead
15:09:30 thanks
15:09:39 I was trying to figure out what would be the best way to select drivers for vnics (when there is more than one in a guest)
15:09:48 Currently, we have only an image property to select the hw_vif_model.
15:09:54 So, there is no way to set both virtio and vhost_user, for example, on the same guest... unless I'm missing something.
15:10:12 I was thinking that the easiest way would be to set it as part of nova boot --nic, as you've suggested to do with the vhost queues,
15:10:26 but I'm not sure if people will be happy with the end users being able to select vnic drivers from the api...? :/
15:10:33 If not, maybe settings in the extra_specs would do?
15:10:42 I was thinking about:
15:10:50 * hw:net_devices=NN - number of network devices to configure
15:10:50 * hw:net_device.0=name - driver for device 1
15:10:51 * hw:net_device_opt.0= - list of options for device 1
15:10:51 * hw:net_device.1=name - driver for device 2
15:10:51 * hw:net_device_opt.1= - list of options for device 2
15:11:05 or maybe setting it in the neutron binding would be better?
15:11:05 binding:vif_model = 'e1000'?
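
[Illustrative sketch, not part of the log: the flavor extra_specs layout proposed above, written out as the dict a flavor would carry. The hw:net_device* keys come straight from the proposal in this discussion and are not an existing Nova interface; the driver names and option values shown are hypothetical.]

    # Hypothetical flavor extra_specs following the layout proposed above.
    # Keys are from the meeting proposal; values are made-up examples.
    proposed_extra_specs = {
        "hw:net_devices": "2",               # number of network devices to configure
        "hw:net_device.0": "virtio",         # driver for device 1
        "hw:net_device_opt.0": "queues=4",   # options for device 1 (hypothetical)
        "hw:net_device.1": "vhost_user",     # driver for device 2
        "hw:net_device_opt.1": "",           # options for device 2
    }
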
15:11:13 don't know if it makes sense
15:11:27 why would we ever want to support multiple drivers
15:12:06 danpb, actually it came from people who are interested in ivshmem based nics
15:12:11 i've never seen anyone use anything other than the "best" nic for their needs
15:12:21 and apparently they need virtio as well
15:13:07 afaik the ivshmem based NICs are something that is implemented outside the scope of libvirt/qemu
15:13:24 from libvirt/qemu's POV you are just providing an ivshmem device to the guest
15:13:46 the fact that they run a networking protocol over this shared memory device is invisible to libvirt/qemu (and thus to Nova too)
15:15:57 in general i think the ivshmem integration for nova will require a blueprint + spec before we can consider it
15:16:10 so probably isn't something we need to get into details for here
15:16:45 currently yes, but if i'm not mistaken there was something new from 6wind, not sure, but they were asking about the multinic approach and I couldn't figure out what would be the best
15:16:52 ok
15:17:48 if you do see any blueprint/spec submitted about it just point me to it
15:18:05 #topic "preferred" NUMA policy
15:19:13 danpb, yes, this came up recently as well; I was wondering if it makes sense to configure it now, considering the recent work
15:19:36 so (by accident) we weren't setting any memory policy
15:19:50 i submitted a patch yesterday to fix that by setting a strict policy
15:20:08 yea
15:20:17 the problem with allowing a preferred policy is that nova's accounting for memory usage is based on what we configured for the guest
15:20:42 so if we set a "preferred" policy and the kernel then allocates from a non-local NUMA node, nova's accounting of allocation is going to be wrong
15:20:56 so the scheduler will think a node has free space when it does not in fact have space
15:21:05 and thus make bad scheduling placement decisions
15:24:07 right, I was curious whether this is something we should try solving or it's not worth the effort? I don't really know what the use case is
15:24:38 personally i'd not bother with it unless someone appears with a compelling use case for why we need it
15:24:56 danpb, ok :) thanks
15:25:04 the numa stuff is already fairly complex, so we should try to minimize adding extra features unless clearly needed
15:25:29 #topic transparent spice proxy
15:26:00 ok
15:26:07 A while ago it was discussed how to enable the spice/vnc native clients to connect to the hosts without using web sockets.
15:26:14 i dunno if you've spoken to them already, but about 6 months back the spice upstream devs did propose some changes for this
15:26:23 oh
15:26:24 no
15:26:49 basically the spice client has built-in ability to do http tunnelling
15:26:58 so they were wondering how to just enable use of that directly
15:27:22 we had some disagreements about the design at the time, and then i think they had other higher priority things to look at
15:27:30 so might be worth talking to them again about it
15:27:45 i think they might actually be at the summit next week
15:28:13 Christophe Fergeau and Marc-Andre are the people to speak with
15:28:29 danpb, I see, yea, I'll definitely ping someone about it
15:28:42 ah, doubt that I'll be there
15:29:00 I wrote an extension to the current spice proxy that reserves a dedicated port, provides it to the client and sets up the iptables (dnat, snat)/firewalld forwarding
15:29:00 ok, well just mail them or the spice mailing list
15:29:12 to the guest's host port.
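
[Illustrative sketch, not part of the log: one way the "dedicated port plus iptables DNAT/SNAT forwarding" described above could be expressed. The function name and arguments are assumptions; it only builds the rule strings and does not reproduce the actual extension being discussed.]

    # Build iptables NAT rules so a native SPICE client connecting to the
    # reserved proxy port is forwarded to the guest's SPICE port on the
    # compute host. Purely illustrative; argument names are assumptions.
    def build_spice_forwarding_rules(proxy_ip, proxy_port, host_ip, spice_port):
        dnat = ("iptables -t nat -A PREROUTING -p tcp -d {pip} --dport {pport} "
                "-j DNAT --to-destination {hip}:{sport}").format(
                    pip=proxy_ip, pport=proxy_port, hip=host_ip, sport=spice_port)
        snat = ("iptables -t nat -A POSTROUTING -p tcp -d {hip} --dport {sport} "
                "-j SNAT --to-source {pip}").format(
                    hip=host_ip, sport=spice_port, pip=proxy_ip)
        return [dnat, snat]
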
15:29:12 I was wondering whether it would be useful for me to try to push it upstream
15:29:37 but if they already have something, it's probably better to go with their solution
15:30:02 #topic Open Discussion
15:30:20 s1rp: you mentioned the NoopMounter patch on review
15:30:29 was there a previous posting of this?
15:30:42 i could have sworn there was something like this posted before but your link is patchset 1
15:30:46 there was
15:30:58 I believe s1rp's is a refresh of that patch
15:31:53 ok, i'll try to find it again
15:32:34 anything else people want to talk about?
15:32:54 danpb I'd be interested in discussing this bug again https://bugs.launchpad.net/nova/+bug/1375868
15:33:33 I did a small amount of research into it when it first popped up, but it wasn't as straightforward as I expected
15:34:10 danpb: yeah there was
15:34:25 i'll dig that up
15:34:57 i think apmelton proposed it originally so i couldn't revive it (don't have the perms)
15:35:26 this is the original https://review.openstack.org/#/c/106405/
15:35:49 apmelton: thanks
15:36:15 mjturek: ok
15:36:30 so I emailed you a while back but it probably got buried
15:36:53 what I'm wondering is whether or not nova is already tracking the information that's currently coming from the libvirt xml
15:37:16 yeah, possibly missed it as i've been travelling a lot
15:37:29 I dug into the db a little bit but didn't see fields that line up with it. This was a while ago though so I'm a bit fuzzy on the details
15:37:32 yeah no worries
15:38:37 so IIRC the thing we were interested in was distinguishing image based disks from cinder based disks
15:38:52 yep
15:38:52 i would expect (hope) we have info on the cinder based disks
15:39:15 but possibly not about the image based disks, though that could be inferred by virtue of them not being cinder based disks
15:40:47 I see, if I remember correctly the image based disk information was pulled directly from the xml
15:41:34 last time i looked at this, i wasn't even sure the callpath leading up to the _get_instance_disk_info method was sane
15:41:54 ie i couldn't help thinking the caller should be working in a totally different way
15:42:07 but i never got into investigating it in detail either
15:42:18 I see, so the issue might be a little deeper than removing this race
15:42:32 as i got side tracked on cleaning up the resource tracker to make it clearer to understand wtf was going on
15:44:11 alright, well since it might be a little deeper than I expected I might move away from it. But if I do any investigating, cool if I ping you?
15:45:15 sure
15:45:29 great, thanks!
15:46:23 ok, let's call this meeting done
15:46:35 danpb, thanks
15:46:44 thanks danpb, have a good one
15:49:35 #endmeeting
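
[Illustrative sketch, not part of the log: the inference danpb describes above — treat any disk in the libvirt domain XML that is not a cinder volume as image based. The helper name and the volume_target_devs input are hypothetical; in practice Nova's block device mappings would supply that set.]

    import xml.etree.ElementTree as ET

    def classify_disks(domain_xml, volume_target_devs):
        """Split a domain's disks into cinder-backed vs image-backed.

        volume_target_devs: target device names (e.g. {'vdb'}) belonging to
        the instance's cinder volumes; any other disk is assumed to be
        image based, mirroring the inference discussed in the meeting.
        """
        image_based, cinder_based = [], []
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            if disk.get("device") != "disk":
                continue  # skip cdrom/floppy devices
            dev = disk.find("target").get("dev")
            (cinder_based if dev in volume_target_devs else image_based).append(dev)
        return image_based, cinder_based
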