13:01:25 #startmeeting hyper-v
13:01:26 Meeting started Wed Nov 30 13:01:25 2016 UTC and is due to finish in 60 minutes. The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:29 The meeting name has been set to 'hyper_v'
13:01:39 hello
13:01:43 anyone else joining us?
13:01:57 no... we can start
13:02:27 cool.
13:02:40 #topic OVS
13:03:03 so, first things first
13:03:13 #link OVS 2.6 announcement: https://twitter.com/cloudbaseit/status/803575092005347328
13:03:22 OVS 2.6 has been released
13:03:58 just fyi
13:04:02 checking
13:04:06 in case you want to try it out :)
13:04:41 yes.. sure... need to discuss with kvinod and sonu
13:05:06 it should improve network throughput
13:05:20 so, it's worth checking out
13:05:41 ok
13:05:50 secondly, I've been working on the nova hyper-v + ovs ci
13:06:10 all of the configs are in place, the setup is fine
13:06:20 there are a few things i'm testing out at the moment
13:06:42 will OVS 2.6 work with mitaka?
13:07:00 i've noticed that neutron-ovs-agent freezes after ~10 hours or so
13:07:15 and I'll have to see why and how to address this
13:07:30 sagar_nikam: yep. it is independent of openstack releases
13:07:38 ok
13:07:51 sagar_nikam: did you try neutron-ovs-agent?
13:07:58 did you have any issues like this?
13:08:15 freezing after 10 hours... at what scale? does a high number of VMs and networks cause it?
13:08:31 not sure yet
13:08:42 it's not an easy case to reproduce
13:08:53 it is not deterministic
13:09:03 i think sonu's team was planning it since we wanted VxLAN support. I will mail them about the new release
13:09:31 but, from what i can see, the neutron-ovs-agent still sends its heartbeat and it processes rpc calls, but the main thread is frozen
13:09:40 meaning that it won't bind ports anymore
13:10:24 OK
13:11:04 so yeah. that's what i'm currently working on.
13:11:43 any questions?
13:12:29 no... i have mailed the networking team about it
13:12:48 will check with them tomorrow
13:13:03 cool
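A side note on the neutron-ovs-agent freeze described above (heartbeats and RPC still flowing while the main loop stops binding ports): a snapshot of every greenthread's stack is usually the quickest way to see where the stuck loop is blocked, since the heartbeat and RPC greenthreads keep running on their own. The helper below is a generic eventlet/greenlet debugging sketch, not something the agent ships with; it assumes the process uses greenlet, and how you trigger the dump (a Unix signal, a debug console, or a watchdog greenthread) is up to your setup. Services that wire up oslo.reports can get a similar dump from a Guru Meditation Report.

    # dump_greenthreads.py: minimal debugging sketch (hypothetical helper,
    # not part of neutron or os-win). Call dump_greenthreads() inside the
    # suspect process to print the stack of every live greenlet, including
    # the one running the agent's main loop.
    import gc
    import traceback

    import greenlet


    def dump_greenthreads():
        """Print the stack of every live greenlet except the caller's."""
        current = greenlet.getcurrent()
        for obj in gc.get_objects():
            # bool(greenlet) is True only for started, not-yet-dead greenlets;
            # the current greenlet has no saved frame, so it is skipped.
            if isinstance(obj, greenlet.greenlet) and obj and obj is not current:
                print('--- greenlet %r ---' % obj)
                traceback.print_stack(obj.gr_frame)

Writing the output to the agent log rather than stdout is the obvious variation for a long-running service.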
13:13:09 #topic open discussion
13:13:20 so, no new updates on any nova patches
13:13:30 ok
13:14:06 have you considered supporting scvmm?
13:14:07 we're currently trying out the windows server 2016 networking, and planning to integrate it in networking-hyperv
13:14:30 ok
13:14:44 it will be able to support vxlan network types
13:15:21 sagar_nikam: we don't see any benefits from supporting scvmm. it's basically hyper-v + clustering
13:15:34 ok
13:15:36 which we already have in openstack
13:15:57 you mean using the cluster driver?
13:16:01 yep
13:16:06 ok
13:16:34 sagar_nikam: anyways. I still haven't received any email from sonu
13:16:42 the reason i asked... scvmm instances can be bursted to azure
13:17:00 was thinking if cloudbase has thought of this use case
13:20:01 sagar_nikam: not particularly familiar with hybrid cloud solutions, but from my point of view, it can be a tricky subject, as you don't have a lot of control over which vms are bursted to azure
13:20:14 as it has its own internal ha
13:20:35 ok
13:20:51 i was thinking of user-triggered bursting... not automated
13:20:55 and it could break some affinity rules that nova has been configured with
13:21:08 the user provisions a VM using OpenStack+SCVMM
13:21:30 and then if required.. using some other component... burst to azure
13:21:41 i mean some other python component
13:21:48 you mean like coriolis?
13:21:49 outside of openstack
13:21:57 yes.. possible
13:22:50 any idea if pyMI supports SCVMM ... or should i say can MI be used for SCVMM?
13:23:28 well, in this case, i think coriolis might work better, as it also handles the tooling / bootstrapping of the instances when migrating to a public / private cloud, or to a particular hypervisor
13:24:41 coriolis.. supports SCVMM to azure?
13:24:53 sagar_nikam: theoretically, yes, as long as there is a WMI interface for SCVMM objects and so on, like almost anything else on Windows, including Hyper-V objects, Cluster objects, system objects, etc.
13:25:12 you can even manipulate Excel tables through WMI objects
13:25:22 ok
13:26:27 sagar_nikam: scvmm is basically hyper-v, and hyper-v instances can be migrated to azure
13:26:55 ok
13:27:31 i was thinking... how do we create VMs in SCVMM programmatically .. hopefully it is possible using MI
13:27:37 and hence pyMI
13:27:48 your opinion?
13:28:31 as a rule of thumb, if there's a powershell cmdlet for it, it can be done programmatically
13:28:46 and most probably with pymi as well.
13:29:23 yes ... but using python to trigger the powershell cmdlets that create VMs may be the best solution
13:29:31 anyhow.... will read more
13:29:37 thanks for this info
13:30:30 no problem. but still, from my point of view, there aren't any advantages to adding support for scvmm in openstack. :)
13:30:55 ok
13:30:58 agree
13:31:45 any updates from your side?
13:32:09 not as of now.. we are slowly trying to move to Mitaka
13:32:19 work is starting this week
13:32:46 but it may take time as we have some issues in getting mitaka in our downstream branch
13:33:05 as of now mostly planning stage
13:33:17 are you doing some changes to the upstream branches?
13:33:32 we had done a POC to check with Mitaka a few months back
13:33:41 and it had worked
13:33:55 but that was just a POC... manually checking out mitaka
13:34:24 no changes to upstream branch... at least on nova.. it is quite stable
13:34:26 why not run some tempest tests, in order to make sure it all works fine?
13:34:54 we have not found any issues in nova... so no changes to be submitted upstream
13:35:06 yeah, mitaka is currently in phase 2 support, which means that only high / critical importance bugfixes are backported
13:35:11 yes... we will run tempest
13:35:29 but first we need to get mitaka in our downstream branch
13:35:41 some work to be done by ops team
13:36:06 we have requested... it will be done... but no time frame yet
13:36:17 ok :)
13:36:30 what we are starting with is manually cloning nova and starting testing
13:36:45 but that is more from a developer perspective
13:36:54 not much for the product itself
13:37:01 so no scale tests...
13:37:17 which will only happen when we get the downstream repo sorted out
13:37:36 we will also need os-win in our downstream repo
13:37:45 there is some process involved in getting it
13:37:51 might take time
13:38:02 well, it should work out of the box
13:38:20 correct.. that is what i am thinking
13:38:21 we haven't done any backwards-incompatible changes to os-win, as far as I know
13:38:31 hence just my developer testing should be good enough
13:38:35 plus, os-win has a mitaka branch as well
13:38:38 is what i feel
13:39:00 we will use os-win in mitaka
13:39:05 branch
13:39:10 so we should be fine
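On the pyMI / WMI question above (13:22 to 13:28), here is a minimal sketch of what the WMI route looks like for plain Hyper-V, assuming a Windows host with the Hyper-V role and PyMI installed (PyMI ships a drop-in wmi module). Whether SCVMM exposes an equivalent WMI/MI provider is exactly the open question from the discussion, so this only illustrates the Hyper-V side.

    # list_vms_wmi.py: sketch only; assumes PyMI's wmi compatibility module.
    import wmi

    # Connect to the Hyper-V virtualization namespace on the local host.
    conn = wmi.WMI(moniker='//./root/virtualization/v2')

    # Msvm_ComputerSystem covers the host itself as well as its VMs;
    # the VMs are the instances whose Caption is "Virtual Machine".
    for vm in conn.Msvm_ComputerSystem(Caption='Virtual Machine'):
        print(vm.ElementName, vm.EnabledState)

Actually creating a VM this way goes through Msvm_VirtualSystemManagementService rather than a single call, so for SCVMM-specific operations, driving the SCVMM PowerShell cmdlets from python, as suggested above, may well be the more practical route.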
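And on the "os-win should work out of the box" point: nova's hyper-v driver consumes os-win only through its public utilsfactory entry point, which is why an unchanged (or mitaka-branch) os-win is expected to keep working underneath a mitaka nova. A tiny developer smoke check along those lines is sketched below, assuming os-win is installed on a Hyper-V compute node; the method names are believed to match the mitaka-era API, but double-check them against the branch you actually pin.

    # oswin_smoke.py: a quick developer check, not a real test suite (sketch).
    from os_win import utilsfactory

    hostutils = utilsfactory.get_hostutils()
    vmutils = utilsfactory.get_vmutils()

    # Hyper-V support requires Windows Server / Hyper-V Server 2012 (6.2) or newer.
    assert hostutils.check_min_windows_version(6, 2)

    # List the instances Hyper-V knows about; this is the same call the
    # driver relies on when enumerating VMs.
    print(vmutils.list_instances())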
13:39:38 cool. well, i hope it goes as smoothly as possible. :)
13:39:57 yes ... even i hope so
13:40:09 it is mostly the process which takes time
13:40:17 code wise i think we will be fine
13:40:43 i have asked my teammate who will be working on hyperv to also try the cluster driver
13:40:50 when we tried it a few months back
13:40:57 it all worked perfectly
13:41:04 manually cloning mitaka
13:41:11 and applying your patch
13:41:19 i think the same will work now as well
13:41:34 i think we merged the cluster driver in compute-hyperv in mitaka
13:41:40 if i remember correctly
13:41:49 ok
13:43:04 anyways, one more thing. i propose we don't have a hyper-v meeting next week, and have it next-next week instead. the thing is that I probably won't have many updates next week, as today and tomorrow are public holidays in Romania, which means that we won't do a lot during these days. :)
13:43:39 if there's something, you can email me
13:43:48 is that ok with you?
13:43:51 sure no problem
13:43:58 same from my end as well
13:44:04 cool :)
13:44:06 we will be busy trying to get onto mitaka
13:44:13 and it will take some time
13:44:16 so not an issue
13:44:24 i don't expect to hit too many issues
13:44:33 we can do it 2 weeks from now
13:44:46 great, thanks. :)
13:44:54 well then, that's it from me
13:45:02 thanks
13:45:07 we can close the meeting
13:45:22 thanks for joining, see you in 2 weeks!
13:45:26 #endmeeting