13:01:08 #startmeeting hyper-v
13:01:08 Meeting started Wed Sep 21 13:01:08 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:12 The meeting name has been set to 'hyper_v'
13:01:24 hello hello
13:01:43 hello
13:01:46 waiting for a bit, so people can join
13:04:19 Hi All
13:04:36 hello :)
13:04:46 sorry, late today...
13:04:53 it's fine :)
13:05:04 anyone else joining us?
13:05:08 hi all, sorry, internet is a little slow today
13:05:20 we can start
13:05:31 sonu may not join
13:05:36 ok
13:06:00 #topic performance test results
13:06:16 abalutoiu: hello. :) can you share with us some of your results?
13:06:46 claudiub: hello, sure
13:07:42 here are some performance comparison results between KVM, Hyper-V on WS 2012 R2, and Hyper-V on WS 2016 TP5, with all the latest improvements that we've been working on: http://paste.openstack.org/show/fzCXXHcLrk2L0SsrJgx9/
13:08:11 the test consists of booting a VM, attaching a volume, and deleting the VM
13:08:42 100 total iterations, with 20 iterations in parallel
13:09:50 we are close to kvm... nice
13:10:05 but... only with TP5?
13:10:19 can't we achieve the same with 2012 R2?
13:10:32 another test (which includes nova boot, testing the ssh connection to the VM, and deleting the VM): http://paste.openstack.org/show/iRwQHKku6CCz6PoX0kRi/
13:10:37 i see in the paste there are also 2012 r2 results
13:11:08 the host used is written above each table
13:11:17 and i see that we actually perform better than kvm
13:11:18 claudiub: yes... but the results are not as good as kvm or TP5
13:12:16 yeah, by 3 seconds in total, or 1%
13:12:43 load durations for the test: 250 sec on WS 2012 R2, 244 sec on WS 2016 TP5, and 280 sec on KVM
13:14:07 the difference between kvm and hyper-v is even greater in the 2nd paste abalutoiu sent
13:14:08 OK
13:14:14 looks good...
13:14:31 is the new version of pyMI available on pypi?
13:14:55 and the difference with and without the latest pymi patches can be seen
13:15:11 not yet, there are still 2 pull requests that we have to merge on pymi
13:15:29 claudiub: ok.. not an issue
13:15:32 but it'll be released by next week
13:15:53 one question... what were the tenant VMs... windows or linux?
13:15:59 * clarkb asks drive-by question. Are you booting the same image on both?
13:16:41 i'm assuming it's a cirros image. abalutoiu?
13:16:50 yes, it's a cirros image
13:17:04 ok
13:17:37 clarkb: yep, it's the same image on both, the only difference is the disk format
13:18:10 abalutoiu: for kvm are you using qcow2? and if so, are you booting qcow2 or is nova converting to raw? (I just recently discovered it does this and makes things slow)
13:18:58 (basically, if you are going to have nova boot raw, it's best to upload raw to glance)
13:19:19 will there be any difference if we use some other image, other than cirros... or will the results be the same?
13:20:39 I'm using qcow2 for KVM and vhdx for Hyper-V
13:20:51 I'm not sure if nova is converting the image to raw
13:22:57 clarkb: i assume there is a nova.conf option for this behaviour, right?
13:23:32 claudiub: there are 2! this is why we were so confused about it and it took a while. But yes, you have to set use_raw_images to false and the libvirt image type to qcow2, iirc
13:24:01 abalutoiu: but with infracloud we saw it was causing long boot times, because the qcow2 was copied to the compute host and then converted to raw before being booted
13:24:10 so we turned it off and just boot off the qcow2 now
13:25:06 (anyways, I just learned this stuff yesterday and saw it might be relevant to your performance discussion earlier, since we were also tuning boot times)
13:25:42 it seems force_raw_images is set to True by default
13:26:14 claudiub: yup, and it's a fine default if you also upload raw images to glance
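For reference, a minimal sketch of the nova.conf settings being described, assuming Newton-era nova with the libvirt driver; force_raw_images and images_type are nova's documented option names (the "use_raw_images" mentioned above appears to be the same force_raw_images option that is later confirmed to default to True):

    [DEFAULT]
    # Don't convert cached qcow2 images to raw on the compute host;
    # boot instances directly from the qcow2 backing file instead.
    force_raw_images = False

    [libvirt]
    # Keep cached instance disks in qcow2 format.
    images_type = qcow2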
13:26:19 clarkb: yeah, thanks for the tips. :)
13:26:50 it'll be something we'll take into account next time we do some performance tests
13:28:16 clarkb: also, do you know about any specific kvm performance fine-tuning we can do? it would be helpful to know we've applied all the best practices for performance for both kvm and hyper-v when we compare them.
13:29:31 claudiub: the only other thing we have done is set the writeback setting to unsafe, because all of our instances are ephemeral for testing
13:29:37 this gives us better io performance
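A sketch of the corresponding nova.conf setting, assuming the "writeback setting" here maps to nova's documented [libvirt] disk_cachemodes option; "unsafe" disables host-side flushing, so it is only appropriate where instance data loss is acceptable (e.g. disposable test VMs):

    [libvirt]
    # Unsafe disk caching: better I/O at the cost of durability
    # if the host crashes. Fine for ephemeral test instances only.
    disk_cachemodes = file=unsafe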
13:31:13 i see. thanks for your input!
13:32:28 so, it seems we'll have to apply those suggestions next time we do some testing
13:32:56 yep, thanks clarkb for the tips
13:34:07 hey guys. I think that the force_raw_images config option would not affect those results, as in each scenario the image was already cached. correct me if I'm wrong, Alin
13:35:14 lpetrut: if it's already cached on all compute hosts, it shouldn't affect it
13:35:40 (which happens by booting an instance of that image on all compute hosts beforehand)
13:36:17 i see. so the impact is quite small anyway
13:36:50 still worth setting the config option to false, imo
13:37:08 or use a raw image to start, or make sure it's cached across the board, or something.
13:37:17 we used only one host for each hypervisor, and the image was already cached before running the tests
13:38:36 please let us know if you find any other performance tips & tricks on KVM, if you don't mind clarkb
13:38:45 can do
13:39:11 thanks clarkb!
13:39:42 #topic release status
13:40:16 soo, newton is going to be released in approximately 2 weeks
13:40:32 we discovered an issue with ceilometer-polling on windows
13:40:51 there is a new dependency in ceilometer, which crashes the agent
13:40:59 cotyledon
13:41:15 we will have to send some pull requests to fix that
13:41:44 other than that, nothing new
13:42:29 #topic open discussion
13:42:57 what happened to the monasca patches?
13:43:13 for the following weeks, testing will be our priority
13:43:33 ok
13:43:50 will we miss newton? for monasca
13:43:53 we have a couple more common testing scenarios in mind, regarding performance
13:44:27 like live / cold migration, cold resize, different volume types, etc.
13:45:08 sagar_nikam: unfortunately, nothing. it got a bit too late for them. :(
13:45:13 they're still up for review
13:45:28 ok
13:45:37 hopefully O will have it
13:45:43 hope so too.
13:45:55 other than that, we've reproposed blueprints to Ocata
13:45:56 for nova
13:46:10 ok
13:46:21 currently, the hyper-v UEFI VMs spec and the os-brick in nova spec are approved
13:47:13 the vNUMA instances and hyper-v storage qos specs are up. since they've been previously approved, they'll get in fast.
13:47:44 and there are a few other specless blueprints that should be reapproved, like the Hyper-V OVS vif plug blueprint
13:47:56 all patches have already been rebased on master, ready for review.
13:48:35 ok
13:48:53 the Hyper-V PCI passthrough spec should be up in the following weeks, as well as the code.
13:50:02 that's pretty much all that comes to mind.
13:50:31 any further news on magnum support?
13:50:45 since we last discussed it a few weeks back
13:51:02 atuvenie: hi. :)
13:51:41 well, we won't make it in newton, obviously
13:51:56 the work on k8s is kind of slow at the moment, but it is going
13:52:15 ok
13:52:39 we have had some problems with networking on TP5 for some reason; I'm trying to figure out what has changed in their model, if anything
13:55:15 ok
13:55:32 sagar_nikam: any news from your side?
13:55:33 claudiub: nothing much from my side...
13:55:47 no... the scale tests will take longer
13:55:54 we are not getting a free slot
13:56:30 not sure when we will get it..
13:56:43 we are planning to run it on Mitaka
13:57:45 i see
13:57:59 well, let us know when you start. :)
13:58:09 other than that, I think we can end the meeting here
13:58:15 sure
13:58:22 yes... we can end
13:58:27 thank you ....
13:58:27 thanks folks for joining! see you next week!
13:58:37 #endmeeting