13:01:18 <claudiub|2> #startmeeting hyper-v
13:01:18 <openstack> Meeting started Wed Mar 23 13:01:18 2016 UTC and is due to finish in 60 minutes.  The chair is claudiub|2. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:21 <openstack> The meeting name has been set to 'hyper_v'
13:01:27 <sagar_nikam> Hi
13:01:38 <c64cosmin> hello all
13:01:59 <claudiub|2> ok, I'll try to keep it short.
13:02:08 <claudiub|2> #topic mitaka releases
13:02:46 <claudiub|2> so, mitaka is going to be released in early april, meaning that any and all dependencies will have to be released by then too.
13:02:59 <claudiub|2> that includes networking_hyperv, compute_hyperv, and os-win
13:03:21 <claudiub|2> I'm going to cut the stable/mitaka branches on friday / saturday
13:03:32 <claudiub|2> #action claudiub to cut the stable/mitaka branches
13:04:15 <claudiub|2> which means, that if there are any more issues or bugs, this week is going to be a good chance to fix them. :)
13:04:58 <claudiub|2> if there's anything, let me know
13:05:08 <claudiub|2> ok, moving on.
13:05:23 <claudiub|2> #topic hyper-v neutron vif plug event listener on spawn
13:06:19 <claudiub|2> so, this was suggested some time ago by the HP folks. The main idea was that the Hyper-V Driver was not waiting for the vNICs to be bound, which was leading to missed DHCP requests.
13:06:52 <claudiub|2> we've addressed that issue in compute_hyperv, and it will have to be included in nova as well
13:06:56 <claudiub|2> #link https://review.openstack.org/#/c/292615/
13:07:27 <claudiub|2> #action propose the hyper-v neutron vif plug event listener commit on nova
13:08:25 <claudiub|2> from experiments, it seemed to be behaving properly, with little to no decrease in performance, but a Rally run would be nice to confirm this.
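For context, the mechanism under discussion can be sketched roughly as follows, modeled on the pattern nova's libvirt driver uses for the same problem. This is a minimal sketch assuming a driver class with self.virtapi (nova.virt.virtapi); helper names such as _get_neutron_events, _create_and_start_vm, and _neutron_failed_callback are illustrative, not the actual compute_hyperv code:

    # Sketch: have spawn() wait for Neutron's "network-vif-plugged"
    # notifications so the guest's first DHCP requests are not sent on
    # an unbound vNIC. Helper names are illustrative.
    from eventlet import timeout as etimeout
    from oslo_config import cfg

    CONF = cfg.CONF  # nova registers vif_plugging_timeout / vif_plugging_is_fatal

    class DriverSketch(object):
        def _get_neutron_events(self, network_info):
            # one expected event per VIF that Neutron has not yet activated
            return [('network-vif-plugged', vif['id'])
                    for vif in network_info
                    if vif.get('active', True) is False]

        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            events = self._get_neutron_events(network_info or [])
            try:
                # block until Neutron sends network-vif-plugged for every
                # vNIC (or the deadline passes), so the guest's first DHCP
                # requests are not dropped on an unbound port
                with self.virtapi.wait_for_instance_event(
                        instance, events,
                        deadline=CONF.vif_plugging_timeout,
                        error_callback=self._neutron_failed_callback):
                    self._create_and_start_vm(context, instance, image_meta,
                                              network_info, block_device_info)
            except etimeout.Timeout:
                # Neutron never confirmed the VIF plug
                if CONF.vif_plugging_is_fatal:
                    raise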
13:08:58 <claudiub|2> ok, moving on.
13:09:10 <claudiub|2> #topic Rally performance tests
13:09:27 <claudiub|2> abalutoiu: hello. do you have any good news for us? :)
13:09:33 <abalutoiu> hello
13:10:50 <abalutoiu> so last week we had results for the spawn / destroy scenario; here are the results for the same scenario including ssh guest access http://46.101.88.127:8888/PyMI_KVM_ssh_Mitaka.html
13:11:25 <abalutoiu> Liberty ones, for comparison: http://46.101.88.127:8888/PyMI_KVM_ssh_Liberty.html
13:12:40 <claudiub|2> #link Mitaka Hyper-V vs KVM http://46.101.88.127:8888/PyMI_KVM_ssh_Mitaka.html
13:12:57 <sagar_nikam> avg time is more in HyperV
13:13:03 <claudiub|2> #link Liberty Hyper-V vs KVM http://46.101.88.127:8888/PyMI_KVM_ssh_Liberty.html
13:13:19 <sagar_nikam> kvm - nova.boot_server: min 19.502, median 34.971, 90%ile 66.859, 95%ile 74.398, max 78.741, avg 42.074 (sec), success 100.0%, count 100
13:13:33 <sagar_nikam> hyperv - nova.boot_server: min 32.596, median 120.705, 90%ile 147.194, 95%ile 155.103, max 177.713, avg 109.565 (sec), success 98.0%, count 100
13:14:01 <abalutoiu> sagar_nikam: those are the results for Liberty, the old ones, have a look over the Mitaka results
13:14:11 <sagar_nikam> has anything changed? last week we saw almost the same result for hyperv and kvm
13:14:32 <sagar_nikam> ok
13:14:53 <abalutoiu> the results from last week didn't include ssh guest access
13:15:01 <sagar_nikam> ok, this looks good
13:15:07 <sagar_nikam> kvm and hyperv almost same
13:15:25 <sagar_nikam> the avg time includes ssh access ?
13:15:31 <claudiub|2> yep.
13:15:35 <sagar_nikam> ok
13:15:47 <sagar_nikam> and how many NICs ?
13:15:51 <sagar_nikam> per VM
13:16:05 <abalutoiu> only one NIC
13:16:11 <sagar_nikam> and are we also trying attaching volumes?
13:17:06 <abalutoiu> it's a simple scenario, spin up a VM, add floating ip to it, wait until it becomes active via ssh, then destroy the VM
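For reference, that flow matches Rally's stock VMTasks.boot_runcommand_delete scenario (boot a server, attach a floating IP, wait for SSH, run a command, delete the server). A minimal task sketch; the image, flavor, and network names here are placeholders, not necessarily what was used for these runs:

    {
        "VMTasks.boot_runcommand_delete": [
            {
                "args": {
                    "flavor": {"name": "m1.small"},
                    "image": {"name": "cirros"},
                    "floating_network": "public",
                    "username": "cirros",
                    "command": {"interpreter": "/bin/sh",
                                "script_inline": "echo alive"}
                },
                "runner": {"type": "constant", "times": 10, "concurrency": 2},
                "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
            }
        ]
    }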
13:17:33 <sagar_nikam> ok
13:17:49 <sagar_nikam> these results are good
13:17:53 <sagar_nikam> nice work
13:18:18 <sonu> can we continue our tests without destroying the VM?
13:18:32 <abalutoiu> we also have some results for Hadoop workloads
13:18:42 <claudiub|2> sonu: any reason why?
13:18:46 <abalutoiu> #link Mitaka Hyper-V vs KVM Hadoop cluster tests http://46.101.88.127:8888/Cluster_Results_Mitaka.html
13:19:19 <sonu> The only reason being, the larger the number of ACLs in MI, the more load on the neutron agent.
13:19:43 <sonu> and the more time it takes. And we can measure how the time varies with the number of ACLs.
13:20:11 <abalutoiu> Flavor for VMs: 2 vCPUs, 6GB RAM, 25GB disk; number of parallel clusters for the multi-cluster test: 3; input data set for tera jobs: 5.000.000; clusters consist of 4 nodes (1 master + 3 slaves)
13:20:53 <claudiub|2> sonu: That also depends on the rally scenario. For example, if the scenario says 100 VMs, with 100 VMs per round, for 1 round, it will spawn all 100 VMs. Then destroy them.
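In Rally task terms, that behaviour is controlled by the runner section; with times equal to concurrency, all iterations run in parallel, so every VM exists before any is destroyed. A sketch of the fragment being described:

    "runner": {"type": "constant", "times": 100, "concurrency": 100}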
13:21:15 <sonu> sure. then we are good.
13:21:37 <claudiub|2> anyways, looking at abalutoiu's hadoop results, it seems hyper-v is more favorable in this scenario
13:21:50 <sonu> and we use default security group rules for our tests.
13:22:14 <claudiub|2> sonu: default + ssh. :)
13:22:25 <sonu> Thank you. We will consume this coming next week and run some scale tests.
13:22:43 <claudiub|2> sure, sounds good. :)
13:23:04 <claudiub|2> anything else on this topic?
13:23:20 <claudiub|2> ok, moving on.
13:23:20 <sonu> Any chance we can get these improvements backported to L
13:23:29 <sonu> we are on L release
13:23:55 <sagar_nikam> it would be nice if these are backported to L
13:24:01 <sonu> we are trying to backport on our local branch - native threading and PYMI and ERPC
13:24:11 <claudiub|2> sonu: any improvement related to spawning / creating / destroying the vm etc. has been included in compute_hyperv on stable/liberty
13:24:24 <sonu> thank you
13:24:59 <claudiub|2> for the networking-hyperv side, I'm afraid not. networking-hyperv is an official project under neutron's governance. The improvements we've done in Mitaka were marked as blueprints, meaning that they can't be backported.
13:25:52 <claudiub|2> ok, moving on.
13:26:04 <claudiub|2> #topic OVS agent on Hyper-V
13:26:28 <claudiub|2> atuvenie: hello. I suppose the agent works fine for Mitaka, right?
13:26:52 <atuvenie> yeah, we just merged the last patch that was causing some issues, all is fine now
13:27:28 <claudiub|2> cool.
13:27:47 <claudiub|2> seems that we're almost done with the work for Mitaka. :)
13:27:52 <sonu> atuvenie: Do we know the feature parity between OVS for Linux and OVS for Windows? Are all features supported as-is in OVS?
13:28:45 <atuvenie> sonu: I'm not quite sure what you mean. Are you talking about the OVS Agent or OVS itself?
13:28:51 <sonu> Who from our side participates in the OVS forums and can bring us this info?
13:28:55 <sonu> OVS itself.
13:29:40 <atuvenie> sonu: I do not work on OVS directly so I'm not up to speed on the topic
13:30:11 <claudiub|2> AFAIK, no, there shouldn't be any disparity. all network types work the same on Linux and Windows.
13:30:44 <sonu> Thanks. May be we start attending OVS IRC chats to get more info.
13:31:06 <claudiub|2> but as far as OVS agent is concerned, it works the same on Windows and Linux.
13:31:24 <sonu> And is there a CI for Microsoft HyperV when a change is done in neutron-openvswitch-agent?
13:31:47 <sonu> because it is shared b/w Linux and Windows now :)
13:32:03 <claudiub|2> not yet, but we're working on it. :)
13:32:17 <sonu> Great. Thank you.
13:32:40 <claudiub|2> ok, moving on
13:32:56 <sagar_nikam> claudiub|2: i have a topic
13:33:07 <claudiub|2> #topic OpenStack Summit
13:33:07 <sagar_nikam> on certs
13:33:49 <claudiub|2> sagar_nikam: sure, I'll get to that soon. :)
13:34:21 <claudiub|2> anyways, we've requested a worksession at the next OpenStack summit.
13:35:14 <claudiub|2> in which we can discuss any further improvement and development of os-win and other Hyper-V / Windows related workloads.
13:35:15 <sonu> Will OVN be part of design discussion?
13:35:21 <claudiub|2> including new features and so on.
13:36:04 <claudiub|2> sonu: sure, we are looking into it as well.
13:36:18 <claudiub|2> so, question: who is going to attend the summit?
13:37:11 <sonu> I will have one representative from networking team.
13:38:05 <claudiub|2> ok, cool.
13:38:47 <claudiub|2> ok, moving on
13:38:57 <claudiub|2> #topic certificates
13:39:16 <sagar_nikam> based on the meeting discussion last week
13:39:29 <sagar_nikam> we tried using certs on the hyperv hosts
13:39:39 <sagar_nikam> we copied the .crt file from controller
13:39:48 <sagar_nikam> to the hyperv host
13:40:04 <sagar_nikam> and had the correct https and cafile entry in nova.conf
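For readers following along, the kind of nova.conf entries being described look roughly like this; the exact sections depend on which services are secured over TLS, and all hosts and paths below are placeholders, not the configuration actually used:

    [glance]
    api_servers = https://controller:9292

    [neutron]
    url = https://controller:9696
    cafile = C:\OpenStack\etc\ca.crt

    [keystone_authtoken]
    auth_uri = https://controller:5000
    cafile = C:\OpenStack\etc\ca.crt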
13:40:28 <sagar_nikam> we are using python 2.7.10 64 bit on the hyperv host
13:40:32 <sagar_nikam> we hit an issue
13:41:00 <sagar_nikam> which is exactly same as described here
13:41:10 <sagar_nikam> http://stackoverflow.com/questions/33140382/troubleshooting-ssl-certificate-verify-failed-error
13:41:39 <sagar_nikam> the issue was urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
13:41:55 <sagar_nikam> this looks like an issue in python 2.7.10
13:42:15 <claudiub|2> interesting
13:42:22 <sagar_nikam> since it uses openssl 1.0.2a
13:42:35 <sagar_nikam> and we need 1.0.2b or greater
13:42:44 <sagar_nikam> which is available in python 2.7.11
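A quick way to confirm which OpenSSL the interpreter was built against, and to point urllib2 at the copied CA file explicitly, for anyone reproducing this (Python 2.7.9+; the URL and path are placeholders):

    # Diagnostic sketch for the CERTIFICATE_VERIFY_FAILED error.
    import ssl
    import urllib2

    # e.g. "OpenSSL 1.0.2a 19 Mar 2015" on the affected 2.7.10 build
    print(ssl.OPENSSL_VERSION)

    # since 2.7.9 urlopen verifies HTTPS certificates by default;
    # cafile points it at the CA certificate copied from the controller
    urllib2.urlopen("https://controller:5000/v3",
                    cafile="C:\\OpenStack\\etc\\ca.crt")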
13:42:49 <sagar_nikam> now my question
13:42:59 <sagar_nikam> how were your tests done using certs?
13:43:07 <sagar_nikam> did you not hit this issue?
13:43:22 <sagar_nikam> what version of python do you use ?
13:43:33 <sagar_nikam> on hyperv host
13:45:06 <claudiub|2> the hyper-v compute installers come with python 2.7.9
13:45:20 <claudiub|2> so, we're using that. the person that answered your previous email is not here at the moment, but he'll be back later and will be able to send you an answer.
13:45:46 <sagar_nikam> ok
13:46:01 <sagar_nikam> thanks
13:46:14 <sagar_nikam> not sure if python 2.7.9 can solve the issue
13:46:22 <sagar_nikam> but i will wait for the mail
13:46:37 <claudiub|2> sagar_nikam: although, i am a bit curious, can you check what python version you use on your Openstack controller?
13:46:48 <sagar_nikam> python 2.7.9
13:47:00 <sagar_nikam> we had some issues with python 2.7.9 on windows
13:47:08 <sagar_nikam> and hence used 2.7.10
13:47:26 <sagar_nikam> nova code was not getting compiled on 2.7.9
13:47:33 <claudiub|2> what kind of issues did you have with python 2.7.9 on windows?
13:47:33 <sagar_nikam> some dependent packages
13:47:44 <sagar_nikam> when we run setup.py
13:47:49 <sagar_nikam> of nova
13:47:59 <sagar_nikam> packages were not getting compiled
13:48:05 <claudiub|2> sure, nova and a lot of other projects have linux specific dependencies, which cannot be installed on windows.
13:48:21 <claudiub|2> which is why we recommend the installer, as we package those dependencies as well
13:48:59 <sagar_nikam> the compiled packages -- are they available for others to use?
13:49:11 <sagar_nikam> whatever you got compiled?
13:49:32 <claudiub|2> plus, there are other dependencies like numpy that have to be compiled. windows doesn't typically come with a compiler and you wouldn't want one on nano or bare hyper-v hosts.
13:50:03 <sagar_nikam> we use mingw for compiling
13:50:30 <sagar_nikam> and that solved a lot of compilation issues
13:50:36 <sagar_nikam> in 2.7.10
13:51:31 <claudiub|2> yeah, ofc, but there is no need to compile anything when using the installer, everything is already packaged in the Python folder it installs.
13:51:31 <sagar_nikam> one question -- all the packages which you have compiled
13:51:45 <sagar_nikam> is that available for others to use
13:52:13 <sagar_nikam> we are not using that installer
13:52:21 <sagar_nikam> hence the question
13:52:34 <claudiub|2> i'm not against it. :)
13:53:26 <sagar_nikam> if the compiled files are available, we can use them while we run setup.py
13:53:47 <sagar_nikam> and that would solve a lot of the compilation issues we faced in 2.7.9
13:54:06 <claudiub|2> but we still recommend using the installer. it can be installed in unattended mode, which is perfect for automation.
13:54:56 <claudiub|2> #topic open discussion
13:55:34 <claudiub|2> so, the Newton branch is open, which means we will start merging patches on master again
13:55:52 <claudiub|2> one pending patch that we want is the os-win in os-brick patch
13:56:48 <claudiub|2> that is based on henma's patch
13:57:05 <sagar_nikam> upstream branch open ?
13:57:09 <claudiub|2> sagar_nikam: his patch is still WIP. Any news from him?
13:57:16 <sagar_nikam> for newton
13:57:23 <claudiub|2> sagar_nikam: yeah, for most projects.
13:57:24 <sagar_nikam> i can check
13:57:32 <sagar_nikam> can you let me know which patch set
13:57:58 <sagar_nikam> can we start submitting FC and cluster BPs and patches in N
13:58:00 <claudiub|2> #link https://review.openstack.org/#/c/275943/5
13:58:11 <sagar_nikam> we can get review time
13:58:13 <sagar_nikam> now
13:58:17 <claudiub|2> the blueprints have been reapproved for N.
13:58:32 <sagar_nikam> can we re-submit code as well
13:58:40 <sagar_nikam> both for cluster driver and FC
13:58:57 <sagar_nikam> it would be nice to get it merged upstream
13:58:59 <sagar_nikam> in N
13:59:04 <claudiub|2> anyways, we'll also have to recreate the famous etherpad queue of 3 hyper-v patches that need to be reviewed and are ready to merge.
13:59:32 <claudiub|2> ofc. having os-win in os-brick will greatly help on this subject.
13:59:53 <claudiub|2> #action claudiub to create the nova patches queue on etherpad.
14:00:06 <claudiub|2> Seems that our time is over
14:00:16 <claudiub|2> thanks all for attending!
14:00:23 <claudiub|2> #endmeeting