13:01:25 <claudiub> #startmeeting hyper-v
13:01:26 <openstack> Meeting started Wed Nov 30 13:01:25 2016 UTC and is due to finish in 60 minutes.  The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:29 <openstack> The meeting name has been set to 'hyper_v'
13:01:39 <claudiub> hello
13:01:43 <claudiub> anyone else joining us?
13:01:57 <sagar_nikam> no... we can start
13:02:27 <claudiub> cool.
13:02:40 <claudiub> #topic OVS
13:03:03 <claudiub> so, first things first
13:03:13 <claudiub> #link OVS 2.6 announcement: https://twitter.com/cloudbaseit/status/803575092005347328
13:03:22 <claudiub> OVS 2.6 has been released
13:03:58 <claudiub> just fyi
13:04:02 <sagar_nikam> checking
13:04:06 <claudiub> in case you want to try it out :)
13:04:41 <sagar_nikam> yes.. sure... need to discuss with kvinod and sonu
13:05:06 <claudiub> it should improve network throughput
13:05:20 <claudiub> so, it's worth checking out
13:05:41 <sagar_nikam> ok
13:05:50 <claudiub> secondly, I've been working on the nova hyper-v + ovs ci
13:06:10 <claudiub> all of the configs are in place, the setup is fine
13:06:20 <claudiub> there are a few things i'm testing out at the moment
13:06:42 <sagar_nikam> will OVS 2.6 work with mitaka?
13:07:00 <claudiub> i've noticed that neutron-ovs-agent freezes after ~10 hours or so
13:07:15 <claudiub> and I'll have to see why and how to address this
13:07:30 <claudiub> sagar_nikam: yep. it is independent from openstack releases
13:07:38 <sagar_nikam> ok
13:07:51 <claudiub> sagar_nikam: did you try neutron-ovs-agent?
13:07:58 <claudiub> did you have any issues like this?
13:08:15 <sagar_nikam> freezing after 10 hours ... at what scale? does a high number of VMs and networks cause it?
13:08:31 <claudiub> not sure yet
13:08:42 <claudiub> it's not an easy case to reproduce
13:08:53 <claudiub> it is not deterministic
13:09:03 <sagar_nikam> i think sonu's team was planning to try it, since we wanted VxLAN support. I will mail them about the new release
13:09:31 <claudiub> but, from what i can see, the neutron-ovs-agent still sends its heartbeat and processes rpc calls, while the main thread is frozen
13:09:40 <claudiub> meaning that it won't bind ports anymore
13:10:24 <sagar_nikam> OK
13:11:04 <claudiub> so yeah. that's what i'm currently working on.
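A minimal sketch of the failure mode claudiub describes above: the main greenthread blocks while a periodic heartbeat keeps firing, so the agent looks alive but stops binding ports. This is hypothetical illustration code assuming an eventlet-based agent, not the actual neutron-ovs-agent; the real cause of the freeze is still unknown at this point in the meeting.

    # Hypothetical illustration (not neutron-ovs-agent code): the main greenthread
    # blocks forever while a heartbeat greenthread keeps reporting the agent as alive.
    import eventlet
    from eventlet.semaphore import Semaphore

    eventlet.monkey_patch()

    def heartbeat():
        # Keeps running as long as the event hub itself is not blocked.
        while True:
            print("heartbeat: agent reports alive")
            eventlet.sleep(2)

    def main_loop():
        # Models the port-binding loop getting stuck waiting on something
        # that is never released, so no further ports would be bound.
        blocker = Semaphore(0)
        print("main loop: waiting on a resource that never becomes available")
        blocker.acquire()  # never returns
        print("main loop: never reached")

    eventlet.spawn(heartbeat)
    worker = eventlet.spawn(main_loop)
    eventlet.sleep(7)  # observe a few heartbeats while the main loop stays frozen
    print("main loop frozen but greenthread still alive:", not worker.dead)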
13:11:43 <claudiub> any questions?
13:12:29 <sagar_nikam> no... i have mailed the networking team about it
13:12:48 <sagar_nikam> will check with them tomorrow
13:13:03 <claudiub> cool
13:13:09 <claudiub> #topic open discussion
13:13:20 <claudiub> so, no new updates on any nova patches
13:13:30 <sagar_nikam> ok
13:14:06 <sagar_nikam> have you considered supporting scvmm ?
13:14:07 <claudiub> we're currently trying out the windows server 2016 networking, and planning to integrate it in networking-hyperv
13:14:30 <sagar_nikam> ok
13:14:44 <claudiub> it will be able to support vxlan network types
13:15:21 <claudiub> sagar_nikam: we don't see any benefits from supporting scvmm. it's basically hyper-v + clustering
13:15:34 <sagar_nikam> ok
13:15:36 <claudiub> which we already have in openstack
13:15:57 <sagar_nikam> you mean using the cluster driver ?
13:16:01 <claudiub> yep
13:16:06 <sagar_nikam> ok
13:16:34 <claudiub> sagar_nikam: anyways. I still haven't received any email from sonu
13:16:42 <sagar_nikam> the reason i asked... scvmm instances can be bursted to azure
13:17:00 <sagar_nikam> was thinking if cloudbase has thought of this use case
13:20:01 <claudiub> sagar_nikam: not particularly familiar with hybrid cloud solutions, but from my point of view, it can be a tricky subject, as you don't have a lot of control over which vms are bursted to azure
13:20:14 <claudiub> as it has its own internal ha
13:20:35 <sagar_nikam> ok
13:20:51 <sagar_nikam> i was thinking of user triggered bursting... not automated
13:20:55 <claudiub> and it could break some affinity rules that nova has been configured with
13:21:08 <sagar_nikam> the user provisions a VM using Openstack+SCVMM
13:21:30 <sagar_nikam> and then if required.. using some other component... burst to azure
13:21:41 <sagar_nikam> i mean some other python component
13:21:48 <claudiub> you mean like coriolis?
13:21:49 <sagar_nikam> outside of openstack
13:21:57 <sagar_nikam> yes.. possible
13:22:50 <sagar_nikam> any idea if pyMI supports SCVMM ... or should i say can MI be used for SCVMM ?
13:23:28 <claudiub> well, in this case, i think coriolis might work better, as it also handles the tooling / bootstrapping the instances when migrating to a public / private cloud, or to a particular hypervisor
13:24:41 <sagar_nikam> coriolis.. supports SCVMM to azure ?
13:24:53 <claudiub> sagar_nikam: theoretically, yes, as long as there is a WMI interface for SCVMM objects and so on, like almost anything else on Windows, including Hyper-V objects, Cluster objects, system objects, etc.
13:25:12 <claudiub> you can even manipulate excel tables through WMI objects
13:25:22 <sagar_nikam> ok
13:26:27 <claudiub> sagar_nikam: scvmm is basically hyper-v, and hyper-v instances can be migrated to azure
13:26:55 <sagar_nikam> ok
13:27:31 <sagar_nikam> i was thinking... how do we create VMs in SCVMM programmatically .. hopefully it is possible using MI
13:27:37 <sagar_nikam> and hence pyMI
13:27:48 <sagar_nikam> your opinion ?
13:28:31 <claudiub> as a rule of thumb, if there's a powershell cmdlet for it, it can be done programmatically
13:28:46 <claudiub> and most probably with pymi as well.
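As a rough sketch of what "doable with pyMI" means in practice, the snippet below enumerates Hyper-V objects through PyMI's wmi-compatible module on a Hyper-V host. Whether SCVMM exposes equivalent WMI classes is exactly the open question in this exchange, so nothing SCVMM-specific is shown; the namespace and class names are the standard Hyper-V ones.

    # Sketch only: enumerate Hyper-V VMs via WMI using PyMI's wmi-compatible module.
    # Assumes a Windows host with the Hyper-V role and the PyMI package installed;
    # SCVMM would need its own namespace/classes, which are not shown here.
    import wmi

    # root/virtualization/v2 is the Hyper-V WMI namespace.
    conn = wmi.WMI(moniker='root/virtualization/v2')

    # Msvm_ComputerSystem covers the host itself plus every VM defined on it.
    for system in conn.Msvm_ComputerSystem():
        if system.Caption == 'Virtual Machine':
            print(system.ElementName, system.EnabledState)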
13:29:23 <sagar_nikam> yes ... but using python to trigger a powershell cmdlet to create VMs may be the best solution
13:29:31 <sagar_nikam> anyhow.... will read more
13:29:37 <sagar_nikam> thanks for this info
13:30:30 <claudiub> no problem. but still, from my point of view, there aren't any advantages to adding support for scvmm in openstack. :)
13:30:55 <sagar_nikam> ok
13:30:58 <sagar_nikam> agree
13:31:45 <claudiub> any updates from your side?
13:32:09 <sagar_nikam> not as of now.. we are slowly trying to move to Mitaka
13:32:19 <sagar_nikam> work is starting this week
13:32:46 <sagar_nikam> but it may take time as we have some issues in getting mitaka in our downstream branch
13:33:05 <sagar_nikam> as of now mostly planning stage
13:33:17 <claudiub> are you doing some changes to the upstream branches?
13:33:32 <sagar_nikam> we had done a POC to check with Mitaka a few months back
13:33:41 <sagar_nikam> and it had worked
13:33:55 <sagar_nikam> but that was just a POC... manually checking out mitaka
13:34:24 <sagar_nikam> no changes to upstream branch... at least on nova.. it is quite stable
13:34:26 <claudiub> why not run some tempest tests, in order to make sure it all works fine?
13:34:54 <sagar_nikam> we have not found any issues in nova... so no changes to be submitted upstream
13:35:06 <claudiub> yeah, mitaka is currently in phase 2 support, which means that only high / critical importance bugfixes are backported
13:35:11 <sagar_nikam> yes... we will run tempest
13:35:29 <sagar_nikam> but first we need to get mitaka in our downstream branch
13:35:41 <sagar_nikam> some work to be done by ops team
13:36:06 <sagar_nikam> we have requested... it will be done... but no time frame yet
13:36:17 <claudiub> ok :)
13:36:30 <sagar_nikam> what we are starting with is manually cloning nova and starting testing
13:36:45 <sagar_nikam> but that is more from developer perspective
13:36:54 <sagar_nikam> not much for the product itself
13:37:01 <sagar_nikam> so no scale tests...
13:37:17 <sagar_nikam> which will only happen when we get the downstream repo sorted out
13:37:36 <sagar_nikam> we will also need os-win in our downstream repo
13:37:45 <sagar_nikam> some process trying to get it
13:37:51 <sagar_nikam> might take time
13:38:02 <claudiub> well, it should work out of the box
13:38:20 <sagar_nikam> correct.. that is what i am thinking
13:38:21 <claudiub> we haven't done any backwards-incompatible changes to os-win, as far as I know
13:38:31 <sagar_nikam> hence just my developer testing should be good enough
13:38:35 <claudiub> plus, os-win has a mitaka branch as well
13:38:38 <sagar_nikam> is what i feel
13:39:00 <sagar_nikam> we will use os-win from the mitaka branch
13:39:10 <sagar_nikam> so we should be fine
13:39:38 <claudiub> cool. well, i hope it goes as smoothly as possible. :)
13:39:57 <sagar_nikam> yes ... even i hope so
13:40:09 <sagar_nikam> it is mostly the process which takes time
13:40:17 <sagar_nikam> code wise i think we will be fine
13:40:43 <sagar_nikam> i have asked my teammate who will be working on hyperv to also try the cluster driver
13:40:50 <sagar_nikam> when we tried it a few months back
13:40:57 <sagar_nikam> it all worked perfectly
13:41:04 <sagar_nikam> manually cloning mitaka
13:41:11 <sagar_nikam> and applying your patch
13:41:19 <sagar_nikam> i think the same will work now as well
13:41:34 <claudiub> i think we merged the cluster driver in compute-hyperv in mitaka
13:41:40 <claudiub> if i remember correctly
13:41:49 <sagar_nikam> ok
13:43:04 <claudiub> anyways, one more thing. i propose we don't have a hyper-v meeting next week, and have it the week after instead. the thing is that I probably won't have many updates next week, as today and tomorrow are public holidays in romania, which means that we won't get much done these days. :)
13:43:39 <claudiub> if there's something, you can email me
13:43:48 <claudiub> is that ok with you?
13:43:51 <sagar_nikam> sure no problem
13:43:58 <sagar_nikam> same from my end as well
13:44:04 <claudiub> cool :)
13:44:06 <sagar_nikam> we will be busy trying to get onto mitaka
13:44:13 <sagar_nikam> and it will take some time
13:44:16 <sagar_nikam> so not an issue
13:44:24 <sagar_nikam> i don't expect to hit too many issues
13:44:33 <sagar_nikam> we can do it 2 weeks from now
13:44:46 <claudiub> great, thanks. :)
13:44:54 <claudiub> well then, that's it from me
13:45:02 <sagar_nikam> thanks
13:45:07 <sagar_nikam> we can close the meeting
13:45:22 <claudiub> thanks for joining, see you in 2 weeks!
13:45:26 <claudiub> #endmeeting