14:00:32 <tdurakov> #startmeeting Nova Live Migration
14:00:34 <openstack> Meeting started Tue Nov 29 14:00:32 2016 UTC and is due to finish in 60 minutes.  The chair is tdurakov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 <openstack> The meeting name has been set to 'nova_live_migration'
14:00:41 <tdurakov> hi everyone
14:00:59 <raj_singh> o/
14:01:28 <tdurakov> so, let's start
14:01:32 <tdurakov> #topic ci
14:02:09 <tdurakov> since it seems hard to merge new tests into tempest, I've started moving tests to a tempest plugin
14:02:17 <tdurakov> will push that tomorrow
14:02:26 <raj_singh> yes, that is the feedback I got from tempest cores
14:02:28 <wznoinsk> hi
14:02:38 <wznoinsk> tdurakov, ping me a review of that when you have it up
14:02:46 <tdurakov> wznoinsk: sure
14:03:13 <raj_singh> yes let me know as well, then I will move my patch too
14:03:18 <davidgiluk> o/
14:03:25 <tdurakov> so there will be a bit more control, and I hope we can move forward on the test coverage goal
14:03:54 <tdurakov> another thing: https://review.openstack.org/#/c/347471/
14:04:02 <tdurakov> hook works on master
14:04:10 <tdurakov> but grenade job is failing
14:04:15 <tdurakov> pkoniszewski: ^
14:04:35 <tdurakov> I suspect we need to backport the same fix to stable branch
14:05:07 <pkoniszewski> i will take a look at that
14:05:21 <tdurakov> thanks
14:06:17 <tdurakov> do we have anything else on ci?
14:06:52 <tdurakov> ok, next topic
14:06:56 <tdurakov> #topic bugs
14:07:29 <tdurakov> https://bugs.launchpad.net/nova/+bug/1638625
14:07:29 <openstack> Launchpad bug 1638625 in OpenStack Compute (nova) "Nova fails live migrations on dedicated interface due to wrong type of migrate_uri" [High,In progress] - Assigned to Pawel Koniszewski (pawel-koniszewski)
14:07:34 <tdurakov> there is a fix on review
14:07:41 <tdurakov> https://review.openstack.org/#/c/398956/
14:08:02 <tdurakov> my understanding is that fix is OK
14:08:06 <tdurakov> thoughts?
14:09:01 <pkoniszewski> well
14:09:05 <pkoniszewski> bauzas raised a good point
14:09:15 <pkoniszewski> I need to change it because it is not compatible with Python3
14:10:03 <bauzas> and not compatible with non-ascii hostnames :)
14:10:11 <tdurakov> got it
14:10:29 <pkoniszewski> but that's libvirt thing
14:10:55 <tdurakov> once it's merged let's backport it to stable branches
14:11:10 <pkoniszewski> migrate uri in libvirt is just a char array
14:11:28 <pkoniszewski> I don't know whether we can backport it
14:11:39 <pkoniszewski> it's not a single fix, those are two separate patches
14:11:42 <bauzas> pkoniszewski: I'm more interested by what the libvirt python binding is accepting
14:11:43 <pkoniszewski> can such thing be backported?
14:11:56 <pkoniszewski> python binding accepts anything, validation is on libvirt's side
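The bytes-vs-str issue under discussion can be sketched as follows — a hypothetical helper (name and shape are illustrative, not the actual Nova patch), assuming the destination host may arrive as bytes from serialization while the libvirt binding passes the migrate URI straight through to a C char array and therefore expects a native string:

```python
# Hypothetical sketch of normalizing a migrate URI for Python 2/3.
# Not Nova's actual fix; it only illustrates the type problem raised
# in the discussion (bytes hostname vs. native-str migrate_uri).

def to_native_migrate_uri(host, scheme='tcp'):
    """Build a migrate URI as a native str.

    `host` may arrive as bytes (e.g. from RPC/serialization); decode
    it so libvirt receives a plain string, since validation happens
    on libvirt's side and the binding accepts anything.
    """
    if isinstance(host, bytes):
        host = host.decode('utf-8')
    return '%s://%s' % (scheme, host)
```

Note this also covers bauzas' point about non-ascii hostnames: decoding as UTF-8 keeps the URI a valid text string on Python 3 instead of producing a `b'...'` repr inside it.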
14:12:03 <bauzas> oh ok
14:12:05 <tdurakov> pkoniszewski: well, the first is kind of a partial fix, right?
14:12:20 <pkoniszewski> binding is just autogenerated thing, there's not much code in it
14:13:26 <pkoniszewski> a partial fix that introduces a regression
14:13:33 <pkoniszewski> and then a fix for the regression
14:13:42 <pkoniszewski> bauzas: is it possible to backport something like that? ^^
14:13:53 * bauzas shrugs
14:14:23 <bauzas> I don't know why it couldn't be acceptable, as it's a bugfix
14:14:41 <bauzas> but I guess it comes to the discussion by which libvirt version we're tied to
14:14:52 <pkoniszewski> any
14:14:55 <tdurakov> I'd create a case for that
14:14:56 <pkoniszewski> it's not version related
14:15:13 <tdurakov> I mean we have a functionality that doesn't work
14:15:18 <tdurakov> and it's worth fixing
14:15:39 <tdurakov> even if we haven't merged such patches before
14:15:43 <pkoniszewski> okay, we can try
14:16:21 <tdurakov> I'll ask on nova weekly meeting for that case
14:16:54 <tdurakov> any other bugs/fixes to discuss?
14:16:59 <johnthetubaguy> it sounds like a reasonable backport to me, at a first glance
14:17:59 <tdurakov> johnthetubaguy: the point is that ideally we need to squash 2 patches while backporting
14:18:14 <johnthetubaguy> why do you need to squash them?
14:18:24 <bauzas> agreed with johnthetubaguy
14:19:07 <tdurakov> > pkoniszewski: partial fix that introduces regression
14:19:07 <tdurakov> > and then fix for the regression
14:19:16 <tdurakov> johnthetubaguy: ^
14:19:41 <tdurakov> so the initial fix breaks things totally, while the second one mitigates that regression
14:20:04 <johnthetubaguy> I am not against that being two separate patches still, we just need to not do a release in the middle
14:20:18 <tdurakov> johnthetubaguy: ok
14:20:33 <johnthetubaguy> anyways, I wouldn't worry about that too much, either way should work
14:20:50 <tdurakov> johnthetubaguy: thanks
14:21:00 <tdurakov> both work for me too)
14:21:56 <tdurakov> any other bugs?
14:23:39 <tdurakov> #topic open discussion
14:24:01 <scsnow> ready for review: https://review.openstack.org/#/c/355805/
14:25:00 <tdurakov> scsnow: ok, will take a look
14:25:12 <tdurakov> a bit worried that there is no CI for that
14:25:16 <tdurakov> or do we have some?
14:25:54 <scsnow> tdurakov, we're adapting Virtuozzo CI for multinode to run live migration tempest tests
14:26:26 <tdurakov> scsnow: acked
14:27:03 <tdurakov> don't want to block that work because of the lack of CI
14:27:16 <scsnow> and one more question. why we always update and pass new domain xml [1]? [1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5941
14:28:25 <scsnow> I can imagine only a single case when it's needed. That's when the source and destination hosts have vnc/spice consoles bound to different addresses.
14:29:11 <scsnow> When consoles are disabled or bound to local or catch-all addresses (0.0.0.0), the xml update is not needed
14:29:21 <tdurakov> afair that's one of the cases
14:29:24 <scsnow> Any other cases?
14:30:00 * tdurakov trying to remember
14:31:51 <scsnow> I'm asking because the libvirt vz driver doesn't support passing a destination xml, and potentially that's something we may want to improve in nova code
14:32:32 <pkoniszewski> that's the case we might want to handle
14:32:43 <pkoniszewski> I mean the case that we might not want to update domain XML as it is not needed
14:32:53 <pkoniszewski> however, updating causes no harm
14:33:31 <pkoniszewski> unless libvirt does not support it for some reason
14:33:34 <scsnow> pkoniszewski, no harm for qemu driver
14:33:54 <tdurakov> according to the code
14:34:05 <tdurakov> that xml update handles volumes too
14:34:24 <tdurakov> and perf events
14:34:47 <tdurakov> scsnow, pkoniszewski ^
14:35:14 <tdurakov> I'd expect these to be the case
14:35:22 <scsnow> tdurakov, can you imagine a case when we need to update a disk device?
14:37:06 <scsnow> tdurakov, at least the iscsi target should be the same
14:37:11 <tdurakov> scsnow: not ready to answer right now, need to spend more time recalling that code
14:37:50 <scsnow> tdurakov, ok, let's talk about it again next meeting
14:38:07 <pkoniszewski> I think that encrypted volumes might require some special handling
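The graphics-listen case scsnow describes above can be illustrated with a rough sketch — a hypothetical helper, not Nova's driver code — that checks whether a domain XML has a console bound to a host-specific address and therefore actually needs rewriting for the destination:

```python
# Hypothetical sketch of the "do we need to rewrite the XML?" check
# discussed above. Only the vnc/spice listen-address case is shown;
# Nova's real driver also rewrites volume and perf-event elements.
import xml.etree.ElementTree as ET

# Addresses that migrate safely without rewriting the XML: consoles
# disabled (no listen attribute) or bound to wildcard/local addresses.
SAFE_LISTENS = ('0.0.0.0', '::', '127.0.0.1', '', None)

def needs_graphics_update(domain_xml):
    """Return True if any console listen address is host-specific.

    A concrete source-host address would be invalid on the destination
    and must be rewritten; wildcard/local/disabled consoles need not be.
    """
    root = ET.fromstring(domain_xml)
    for graphics in root.findall('./devices/graphics'):
        if graphics.get('listen') not in SAFE_LISTENS:
            return True
    return False
```

A driver that cannot accept a destination XML (as scsnow notes for the vz driver) could use a check like this to skip the update when it is provably unnecessary.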
14:41:02 <tdurakov> scsnow: https://review.openstack.org/#/c/137466/
14:42:47 <tdurakov> perf events https://review.openstack.org/#/c/329339/
14:42:54 <tdurakov> okay, let's move on
14:43:04 <scsnow> tdurakov, thanks, will take a look
14:43:33 <tdurakov> nice cleanup for boot-from-volume https://review.openstack.org/#/c/382024/
14:43:36 <tdurakov> please review
14:43:52 <tdurakov> johnthetubaguy, bauzas ^
14:44:42 <tdurakov> especially previous patch that removes duplicate check
14:44:52 <johnthetubaguy> can we get that in the priority etherpad? maybe its there already?
14:45:29 <scsnow> johnthetubaguy, can you post a link to that etherpad?
14:45:53 <tdurakov> johnthetubaguy: do you mean ocata etherpad?
14:47:52 <tdurakov> scsnow: https://etherpad.openstack.org/p/ocata-nova-priorities-tracking
14:48:00 <tdurakov> johnthetubaguy: done
14:48:43 <tdurakov> any other open discussions?
14:48:49 <johnthetubaguy> thats the one, thanks
14:50:20 <tdurakov> if we don't have any
14:50:25 <tdurakov> thanks everyone
14:50:34 <tdurakov> #endmeeting