15:05:57 <danpb> #startmeeting libvirt
15:05:58 <openstack> Meeting started Tue Oct 28 15:05:57 2014 UTC and is due to finish in 60 minutes.  The chair is danpb. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:05:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:06:01 <openstack> The meeting name has been set to 'libvirt'
15:06:10 * danpb curses DST shifts
15:06:26 <vladikr> :)
15:06:54 <danpb> anyone here for the libvirt meeting  besides vladikr
15:06:58 <apmelton> o/
15:07:11 <apmelton> s1rp and sew are around as well
15:07:13 <mjturek> o/
15:07:24 <sew> o/
15:08:56 <danpb> ok, so only 3 topics in the agenda, all from vladikr so far
15:09:19 <danpb> #topic Multiple vnic drivers per guest
15:09:26 <danpb> vladikr: go ahead
15:09:30 <vladikr> thanks
15:09:39 <vladikr> I was trying to figure out what would be the best way to select drivers for vnics (when there is more than one in a guest)
15:09:48 <vladikr> Currently, we have only an image property to select the hw_vif_model.
15:09:54 <vladikr> So, there is no way to set both virtio and vhost_user, for example, on the same guest... unless I'm missing something.
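(For reference, the per-image mechanism vladikr is referring to is the hw_vif_model image property, which applies a single vif model to every NIC of guests booted from that image. A minimal illustration with the glance CLI of the time; the image name is hypothetical:)

    # apply one vif model to all NICs of guests booted from this image
    glance image-update --property hw_vif_model=virtio fedora-20-cloud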
15:10:12 <vladikr> I was thinking that the easiest way would be to set it as part of nova boot --nic, as you've suggested to do with the vhost queues,
15:10:26 <vladikr> but I'm not sure if people will be happy with the end users being able to select vnic drivers from the api ..? :/
15:10:33 <vladikr> If not, maybe setting it in the extra_specs would do?
15:10:42 <vladikr> I was thinking about :
15:10:50 <vladikr> * hw:net_devices=NN - number of network devices to configure.
15:10:50 <vladikr> * hw:net_device.0=name - Driver for device 1
15:10:51 <vladikr> * hw:net_device_opt.0=<options-list> - List of options for device 1
15:10:51 <vladikr> * hw:net_device.1=name - Driver for device 2
15:10:51 <vladikr> * hw:net_device_opt.1=<options-list> - List of options for device 2
15:11:05 <vladikr> or maybe setting it in neutron binding would be better?
15:11:05 <vladikr> binding:vif_model = 'e1000'?
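(A sketch of how the flavor-based proposal above might look in practice if it were implemented. The hw:net_device* keys are vladikr's proposed extra_specs, not an existing feature, and the flavor name is hypothetical; only the nova flavor-key CLI itself is real:)

    # hypothetical: first NIC uses virtio, second uses vhost_user
    nova flavor-key m1.dualnic set \
        hw:net_devices=2 \
        hw:net_device.0=virtio \
        hw:net_device.1=vhost_user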
15:11:13 <vladikr> don't know if it makes sense
15:11:27 <danpb> why would we ever want to support multiple drivers
15:12:06 <vladikr> danpb, actually it came from people who are interested in ivshmem based nics
15:12:11 <danpb> i've never seen anyone use anything other than the "best" nic for their needs
15:12:21 <vladikr> and apparently they need virtio as well
15:13:07 <danpb> afaik  the ivshmem based NICs are something that is implemented outside the scope of libvirt/qemu
15:13:24 <danpb> from libvirt/qemu's POV you are just providing an ivshmem  device to the guest
15:13:46 <danpb> the fact that they run a networking protocol over this shared memory device is invisible to libvirt/qemu (and thus to Nova too)
15:15:57 <danpb> in general i think the ivshmem integration for nova will require a blueprint + spec before we can consider it
15:16:10 <danpb> so probably isn't something we need to get into details for here
15:16:45 <vladikr> currently yes, but if i'm not mistaken there was something new from 6wind, not sure, but they were asking about the multinic approach and I couldn't figure out what would be the best
15:16:52 <vladikr> ok
15:17:48 <danpb> if you do see any blueprint/spec submitted about it just point me to it
15:18:05 <danpb> #topic "preferred"  NUMA policy
15:19:13 <vladikr> danpb, yes, this came up recently as well. I was wondering if it makes sense to configure it now, considering the recent work
15:19:36 <danpb> so (by accident) we weren't setting any memory policy
15:19:50 <danpb> i submitted a patch yesterday to fix that by setting  a strict policy
15:20:08 <vladikr> yea
15:20:17 <danpb> the problem with allowing a preferred policy is that nova's accounting for memory usage is based on what we configured for the guest
15:20:42 <danpb> so if we set a "preferred" policy and the kernel then allocates from a non-local NUMA node,  nova's accounting of allocation is going to be wrong
15:20:56 <danpb> so the scheduler will think a node has free space when it does not in fact have space
15:21:05 <danpb> and thus make bad scheduling placement decisions
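(For context, the policy being discussed corresponds to libvirt's <numatune> memory mode in the guest XML. The strict fix danpb mentions pins allocations roughly as below, whereas mode='preferred' would let the kernel fall back to other host nodes, which is exactly what would silently break the scheduler's accounting. The nodeset value is just an example:)

    <numatune>
      <!-- strict: guest memory must come from host NUMA node 0;
           mode='preferred' would allow spill-over to other nodes -->
      <memory mode='strict' nodeset='0'/>
    </numatune>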
15:24:07 <vladikr> right, I was curious if this is something we should try solving or if it's not worth the effort? I don't really know what the use case is
15:24:38 <danpb> personally i'd not bother with it unless someone appears with a compelling use case for why we need it
15:24:56 <vladikr> danpb, ok :) thanks
15:25:04 <danpb> the numa stuff is already fairly complex, so we should try to minimize adding extra features unless clearly needed
15:25:29 <danpb> #topic transparent spice proxy
15:26:00 <vladikr> ok
15:26:07 <vladikr> A while ago it was discussed how to let the native spice/vnc clients connect to the hosts without using websockets.
15:26:14 <danpb> i dunno if you've spoken to them already, but about 6 months back the spice upstream devs did propose some changes for this
15:26:23 <vladikr> oh
15:26:24 <vladikr> no
15:26:49 <danpb> basically the spice client has built-in ability to do http tunnelling
15:26:58 <danpb> so they were wondering how to just enable use of that directly
15:27:22 <danpb> we had some disagreements about the design at the time, and then i think they had other higher priority things to look at
15:27:30 <danpb> so might be worth talking to them again about it
15:27:45 <danpb> i think they might actually be at the summit next week
15:28:13 <danpb> Christophe Fergeau and Marc-André are the people to speak with
15:28:29 <vladikr> danpb, I see, yea, I'll definitely ping someone about it
15:28:42 <vladikr> ah, doubt that I'll be there
15:29:00 <vladikr> I wrote an extension to the current spice proxy that reserves a dedicated port, provides it to the client, and sets up the iptables (dnat, snat)/firewalld forwarding
15:29:00 <danpb> ok, well just mail them or the spice mailing list
15:29:12 <vladikr> to the guest's host port.
15:29:12 <vladikr> I was wondering whether it would be useful if I tried to push it upstream
15:29:37 <vladikr> but if they already have something, it's probably better to go with their solution
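(Roughly the kind of forwarding vladikr's extension sets up, expressed as raw iptables rules for illustration; the addresses and ports are made-up examples and the real code may use firewalld instead:)

    # on the proxy node: DNAT the reserved public port to the guest's
    # SPICE port on its compute host (all values are examples only)
    iptables -t nat -A PREROUTING -p tcp --dport 6100 \
        -j DNAT --to-destination 192.168.0.42:5900
    # SNAT so return traffic flows back through the proxy
    iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.42 --dport 5900 \
        -j SNAT --to-source 192.168.0.10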
15:30:02 <danpb> #topic Open Discussion
15:30:20 <danpb> s1rp: you mentioned about the NoopMounter  patch on review
15:30:29 <danpb> was there a previous posting of this ?
15:30:42 <danpb> i could have sworn there was something like this posted before but your link is patchset 1
15:30:46 <apmelton> there was
15:30:58 <apmelton> I believe s1rp's is a refresh of that patch
15:31:53 <danpb> ok, i'll try to find it again
15:32:34 <danpb> anything else people want to talk about?
15:32:54 <mjturek> danpb I'd be interested in discussing this bug again https://bugs.launchpad.net/nova/+bug/1375868
15:33:33 <mjturek> I did a small amount of research into it when it first popped up, but it wasn't as straightforward as I expected
15:34:10 <s1rp> danpb: yeah there was
15:34:25 <s1rp> ill dig that up
15:34:57 <s1rp> i think apmelton proposed it originally so i couldn't revive it (don't have the perms)
15:35:26 <apmelton> this is the original https://review.openstack.org/#/c/106405/
15:35:49 <danpb> apmelton: thanks
15:36:15 <danpb> mjturek: ok
15:36:30 <mjturek> so I emailed you a while back but it probably got buried
15:36:53 <mjturek> what I'm wondering is whether or not nova is already tracking the information that's currently coming from the libvirt xml
15:37:16 <danpb> yeah possibly missed it as i've been travelling alot
15:37:29 <mjturek> I dug into the db a little bit but didn't see fields that line up with it. This was a while ago though so I'm a bit fuzzy on the details
15:37:32 <mjturek> yeah no worries
15:38:37 <danpb> so IIRC the thing we were interested in was distinguishing image based disks from cinder based disks
15:38:52 <mjturek> yep
15:38:52 <danpb> i would expect (hope) we have info on the cinder based disks
15:39:15 <danpb> but possibly not about the image based disks, though that could be inferred by virtue of them not being cinder based disks
15:40:47 <mjturek> I see, if I remember correctly the image based disk information was pulled directly from the xml
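(A rough Python sketch of the direction danpb hints at: deriving the cinder-backed devices from the block_device_info Nova already passes around, so only the remaining disks need to be treated as image based. This is illustrative only, not what _get_instance_disk_info currently does, and cinder_backed_devices is a made-up helper name:)

    # illustrative sketch, not current Nova code
    from nova.virt import driver

    def cinder_backed_devices(block_device_info):
        """Return guest device names (e.g. /dev/vdb) backed by cinder volumes."""
        mapping = driver.block_device_info_get_mapping(block_device_info)
        return set(vol['mount_device'] for vol in mapping)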
15:41:34 <danpb> last time i looked at this, i wasn't even sure the callpath leading up to the _get_instance_disk_info method was sane
15:41:54 <danpb> ie i couldn't help thinking the caller should be working in a totally different way
15:42:07 <danpb> but i never got into investigating it in detail either
15:42:18 <mjturek> I see, so the issue might be a little deeper than removing this race
15:42:32 <danpb> as i got sidetracked on cleaning up the resource tracker to make it clearer to understand wtf was going on
15:44:11 <mjturek> alright well since it might be a little deeper than I expected I might move away from it. But if I do any investigating, cool if I ping you?
15:45:15 <danpb> sure
15:45:29 <mjturek> great, thanks!
15:46:23 <danpb> ok, lets call this meeting done
15:46:35 <vladikr> danpb, thanks
15:46:44 <mjturek> thanks danpb, have a good one
15:49:35 <danpb> #endmeeting