13:04:18 <claudiub> #startmeeting hyper-v
13:04:19 <openstack> Meeting started Wed Feb  1 13:04:18 2017 UTC and is due to finish in 60 minutes.  The chair is claudiub. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:04:22 <openstack> The meeting name has been set to 'hyper_v'
13:04:54 <claudiub> hello
13:04:59 <claudiub> sorry for being late
13:05:09 <claudiub> anyone still here?
13:05:22 <armstrong> yes
13:05:43 <claudiub> oh hi, new people. :D
13:05:58 <armstrong> yea second time attending
13:06:11 <claudiub> hm, when was the other time?
13:07:06 <armstrong> last week
13:07:54 <claudiub> i see. just a quick tip, all of the openstack meetings are logged, and people present at meetings are considered "present" if they say at least one thing. :)
13:08:19 <claudiub> so, that would be useful for future meetings
13:08:19 <armstrong> is this the sahara wg?
13:08:36 <claudiub> sorry, this is the hyper-v meeting. :)
13:08:53 <armstrong> oh my mistake
13:08:58 <claudiub> armstrong: https://wiki.openstack.org/wiki/Meetings/Hyper-V
13:09:18 <armstrong> thanks for the link
13:09:32 <claudiub> no problem :)
13:10:01 <claudiub> anyways, i'll be continuing the meeting. seems sagar already left, but he'll read the logs later
13:10:17 <claudiub> #topic nova patches status
13:11:21 <claudiub> so, the last hyper-v feature to merge in nova in Ocata is the PCI passthrough.
13:11:42 <claudiub> #link Hyper-V PCI passthrough devices: https://review.openstack.org/#/c/420614/
13:12:04 <claudiub> i'll have to update the openstack docs for this as well.
13:12:20 <claudiub> i'll probably start doing that next week.
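For context, a minimal sketch of how the PCI passthrough feature is consumed on the Nova side (the vendor/product IDs and alias name are hypothetical, and option names/sections vary by release, so check the docs claudiub mentions):

    # nova.conf -- whitelist on the Hyper-V compute node, alias on the controller
    [DEFAULT]
    pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "1520"}
    pci_alias = {"name": "example_nic", "vendor_id": "8086", "product_id": "1520"}

    # flavor extra spec requesting one such device:
    #   openstack flavor set my.flavor --property "pci_passthrough:alias"="example_nic:1"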
13:13:01 <sagar_nikam> Hi... I am back
13:13:05 <claudiub> oh hi. :)
13:13:07 <sagar_nikam> anybody there ?
13:13:18 <sagar_nikam> i got connected few mins early
13:13:18 <claudiub> just us it seems.
13:13:22 <sagar_nikam> and then lost network
13:13:32 <claudiub> yeah, i was late for the meeting, sorry. :)
13:13:44 <sagar_nikam> network issues with my data provider
13:13:55 <sagar_nikam> of late... getting disconnected often
13:14:03 <sagar_nikam> we can start
13:14:07 <claudiub> don't pay them then. :D
13:14:19 <claudiub> anyways, the current topic is nova patches status
13:14:22 <sagar_nikam> yes.... i think i should do that
13:14:28 <sagar_nikam> ok
13:14:43 <claudiub> and I was saying that the Hyper-V PCI passthrough patch merged
13:14:51 <claudiub> and it is the last feature to merge in nova in Ocata.
13:14:52 <sagar_nikam> nice
13:15:15 <claudiub> everything else is frozen, except bugfixes, if any.
13:15:21 <sagar_nikam> ok
13:15:27 <sagar_nikam> any bug fixes planned ?
13:15:31 <domi007> oh hi all :D
13:15:53 <sagar_nikam> Hi domi007:
13:16:25 <claudiub> haven't started the testing phase yet, so don't know. :) but the Hyper-V CI is up and green.
13:16:30 <claudiub> domi007: hello. :D
13:16:55 <sagar_nikam> ok
13:17:14 <domi007> sorry for not being that active, currently swamped with including HyperV bootstrapping in an Ansible playbook
13:17:57 <claudiub> anyways. I was asked a few things about Hyper-V SR-IOV and how it works. Just FYI, we don't have support for that yet, but I'm planning to add support for that as soon as possible.
13:18:17 <sagar_nikam> ok
13:18:30 <claudiub> domi007: no problem, wb. :) would be interesting to hear about that in the open discussion. :D
13:18:42 <domi007> sure :)
13:19:22 <claudiub> that's pretty much it for this topic.
13:19:24 <claudiub> any questions?
13:19:51 <domi007> no
13:20:02 <sagar_nikam> no
13:20:12 <claudiub> cool, moving on.
13:20:16 <claudiub> #topic Ocata release status
13:20:35 <claudiub> so, the next deadline is tomorrow, to have the stable/ocata branches cut.
13:20:46 <claudiub> I've already sent requests for this for networking-hyperv and os-win
13:20:53 <claudiub> should be done by then.
13:21:41 <claudiub> with those stable branches, we'll also have release candidates for those projects
13:21:42 <sagar_nikam> what are the changes in os-win ?
13:22:16 <claudiub> sagar_nikam: there isn't any change to os-win, it is just a request to create the stable/ocata branch: https://review.openstack.org/#/c/427662/
13:22:32 <sagar_nikam> ok
13:23:41 <claudiub> #topic open discussion
13:23:58 <claudiub> just a few things here from me
13:24:15 <claudiub> first of all, I'll have to update the openstack docs for the nova PCI passthrough stuff.
13:24:32 <claudiub> to include details about Hyper-V and how they can be used.
13:25:28 <sagar_nikam> ok
13:25:31 <claudiub> secondly, I'll be going to the Atlanta OpenStack PTG, so if anyone there wants to meet up, I'd be more than happy to. :)
13:26:36 <claudiub> domi007: so, you were saying about ansible Hyper-V playbooks.
13:26:44 <claudiub> can you share some details with us?
13:27:44 <domi007> sure
13:27:55 <domi007> so basically we are in the transition to Newton
13:28:10 <domi007> and we decided to use the Openstack-Ansible project
13:28:29 <domi007> which apparently has proven to be great in many production systems around the world
13:28:55 <domi007> so we decided to automate everything using Ansible including HyperV compute hosts with OVS
13:29:51 <domi007> currently it seems there are no issues with this approach; Ansible's Windows support is quite basic, but with some PowerShell scripts we can overcome that
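As a rough illustration of that setup (hostnames and credentials are placeholders), a static Ansible inventory driving Windows/Hyper-V hosts over WinRM could look like:

    # inventory/hyperv -- all values are examples
    [hyperv_compute]
    hv-node-01.example.com
    hv-node-02.example.com

    [hyperv_compute:vars]
    ansible_connection=winrm
    ansible_port=5986
    ansible_user=Administrator
    ansible_password=ExamplePassword
    ansible_winrm_transport=ntlm
    ansible_winrm_server_cert_validation=ignore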
13:30:02 <claudiub> domi007: interesting. are you planning to send those playbooks upstream?
13:30:29 <domi007> we are not sure yet
13:31:01 <domi007> they aren't as pretty as they could be, for example Windows hosts are defined in a separate inventory file instead of using openstack-ansible's dynamic inventory
13:31:16 <domi007> so there is definitely room for improvement, but it should work in the end
13:31:45 <domi007> also I guess this is the time to point out that, documentation-wise, HyperV and OVS are not that mature yet :P
13:32:14 <domi007> the Newton HyperV guide is a good starting point, but it isn't complete at all, and some config options are deprecated or wrong
13:32:37 <claudiub> domi007: hmm, is there any documentation missing? we have some guides on Windows OVS.
13:33:06 <domi007> http://docs.openstack.org/newton/config-reference/compute/hypervisor-hyper-v.html
13:33:11 <domi007> I meant this firstly
13:33:36 <domi007> Your guides are good actually, although some config options needed to be adjusted
13:34:36 <domi007> Actually I wanted to get some input from you on some questions we had regarding our architecture
13:35:14 <claudiub> domi007: yeah, that openstack docs page should get an update. :) I haven't looked at it in... a while. :D
13:35:27 <domi007> makes sense :)
13:35:45 <claudiub> as for windows OVS, there's this guide which should help: https://cloudbase.it/open-vswitch-2-5-hyper-v-part-1/
13:36:08 <domi007> So the questions: is it considered good practice to use ScaleOut Failover Cluster File server for cinder-volumes?
13:36:31 <domi007> We are planning on using 2 storage heads serving SMB
13:36:59 <domi007> should we install cinder-volume on both, and connect them to openstack, or use haproxy? The heads will connect to the same shared storage appliance
13:38:56 <claudiub> hmm..
13:39:20 <claudiub> soo, the best idea would be to register that cinder-volume service as a cluster service
13:39:40 <domi007> meaning that haproxy should manage which head gets the request?
13:39:50 <claudiub> if a host containing that service goes down, it will automatically be restarted on another node.
13:39:58 <domi007> oh I see
13:40:13 <domi007> so a single cinder-volume service that can migrate between the heads
13:40:16 <domi007> interesting
13:40:56 <claudiub> if you have 2 cinder-volume services running on the same shared filesystem, you will get inconsistencies in the reported storage left
13:41:09 <domi007> makes sense, we need to avoid that for sure
13:41:31 <claudiub> pretty much like how nova-compute nodes have the same issue on shared storage.
13:41:47 <claudiub> at least this issue is being fixed in nova
13:41:49 <domi007> this is actually good, because Cinder will be HA, but the HyperV servers can use SMB multipath power
13:42:13 <domi007> is being right now? :)
13:42:13 <sagar_nikam> i have tried a POC some years back on cinder-volume HA using failover cluster
13:42:18 <sagar_nikam> worked very well
13:42:32 <domi007> thanks for the feedback, we'll go down that route
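A sketch of the clustered-service idea discussed above, assuming cinder-volume is already installed as a Windows service on each head (the role name and IP are illustrative):

    # PowerShell, on one of the cluster nodes
    Import-Module FailoverClusters
    # register the existing cinder-volume Windows service as a clustered role,
    # so it fails over to the other head if this one goes down
    Add-ClusterGenericServiceRole -ServiceName "cinder-volume" `
        -Name "cinder-volume-role" -StaticAddress 10.0.0.50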
13:42:35 <domi007> also what's the current status on Security Groups and OVS in Newton?
13:43:20 <claudiub> domi007: yeah, that sounds good. :)
13:44:40 <claudiub> security groups on OVS.. so, if you are using OVS 2.5, you'll have to use the HyperVSecurityGroupsDriver. if you are using 2.6, you can use either that, or the normal OVS one.
13:44:48 <lpetrut> hi guys
13:44:53 <domi007> nice
13:44:57 <domi007> we are using 2.6 for sure
13:45:12 <claudiub> then both should work. :)
13:45:15 <domi007> performance-wise the OVS one is better right?
13:45:23 <claudiub> anyways. lpetrut knows a lot more about cinder than me. :)
13:45:30 <domi007> :) I see
13:45:37 <lpetrut> about Cinder HA, there are a few ways to achieve this
13:45:45 <domi007> I'm all ears
13:45:52 <domi007> err, eyes? I guess :D
13:46:13 <lpetrut> as you guys were saying, Cinder is now aiming to provide in-built HA capabilities, defining concepts such as clusters
13:47:00 <lpetrut> AFAIK, this is still a work in progress, which leads us to the second option:
13:47:11 <lpetrut> defining the Cinder service as a Windows clustered service
13:47:19 <domi007> exactly
13:48:10 <lpetrut> the advantage of the Cinder native HA feature will be that it works as an Active-Active cluster
13:48:41 <domi007> I see, but I guess it is not a true requirement, since the VHDX access goes through SMB anyways
13:49:02 <domi007> but I understand naturally the intention behind it
13:49:50 <lpetrut> yeah, but even if the vhdx themselves remain available, you want your Cinder service to be as highly available as possible as well
13:49:57 <domi007> naturally
13:50:00 <domi007> one last thing
13:50:12 <lpetrut> sure
13:50:13 <domi007> HyperV Clusters. If I'm correct you implemented a driver for them
13:50:21 <lpetrut> yep
13:50:32 <domi007> so in theory we can offer clients highly available instances
13:50:56 <domi007> is it included in Newton? Is it stable for production?
13:51:58 <claudiub> domi007: one little thing to keep in mind. make sure that the cluster-registered cinder-volume service has a "cluster-available" hostname, meaning that it would be bad if the cinder-volume service moves from one node to another and its hostname changes. cinder would see it as a different cinder-volume service.
13:52:36 <claudiub> domi007: I think we've added the Cluster support since Mitaka
13:52:37 <domi007> of course, in my head I was already thinking of VIP/hostnames
13:52:49 <domi007> I see
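Concretely, the stable-hostname idea can be pinned in cinder.conf; a sketch combining it with the Windows SMB volume driver (the host value and paths are placeholders):

    # cinder.conf on the clustered cinder-volume nodes
    [DEFAULT]
    # keep this identical across failovers so Cinder sees one service
    host = cinder-cluster
    enabled_backends = smb

    [smb]
    volume_driver = cinder.volume.drivers.windows.smbfs.WindowsSmbfsDriver
    smbfs_shares_config = C:\OpenStack\smbfs_shares.txt
    smbfs_mount_point_base = C:\OpenStack\_mnt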
13:53:07 <lpetrut> yep, it's included in Newton. we have some customers that are now going to use it in production
13:53:17 <domi007> nice!
13:53:35 <domi007> it is great to see this project mature so much
13:53:37 <claudiub> also, since you are moving to newton, there is a small detail to keep in mind when upgrading
13:53:51 <domi007> yes?
13:55:28 <claudiub> nova "broke" support for out-of-tree drivers, meaning that until mitaka, you would have in the nova.conf file the driver "hyperv.nova.driver.HyperVDriver" or "hyperv.cluster.driver.HyperVClusterDriver". starting with newton, it has to be "compute_hyperv.driver.HyperVDriver" or "compute_hyperv.cluster.driver.HyperVClusterDriver"
13:55:45 <claudiub> just a small fyi, there were questions about this a few times in the past. :)
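Concretely, the nova.conf change claudiub describes is:

    # nova.conf on the Hyper-V compute node
    [DEFAULT]
    # Mitaka and earlier:
    #   compute_driver = hyperv.nova.driver.HyperVDriver
    # Newton and later (compute-hyperv):
    compute_driver = compute_hyperv.driver.HyperVDriver
    # or, for the cluster driver:
    #   compute_driver = compute_hyperv.cluster.driver.HyperVClusterDriver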
13:55:46 <domi007> hahhahaha
13:55:52 <domi007> I had that issue just 2 days ago
13:56:02 <domi007> I struggled for 2 hours :D
13:56:05 <claudiub> :D
13:56:24 <domi007> then found in importutils.py that it prepends the config string with nova.driver :D
13:56:59 <domi007> so yeah, got the experience first hand :D thanks for the heads up anyways :)
13:57:23 <claudiub> yeah, I found out about that change during the Newton OpenStack Summit, when I was under fire to fix it asap. :D
13:57:41 <domi007> haha, better late than never I guess
13:57:56 <domi007> but other than these little annoying things Newton seems really good to me
13:58:15 <claudiub> nice, I'm glad. :D
13:58:17 <domi007> so I have high hopes :)
13:58:34 <domi007> huh, I took up all the time, sorry :(
13:58:41 <claudiub> anyways. any questions, news?
13:58:45 <claudiub> domi007: no problem. :)
13:59:09 <claudiub> if you do decide to submit those Hyper-V ansible playbooks upstream, let us know. :D
13:59:16 <domi007> for sure
14:00:00 <claudiub> anyways. our time is up.
14:00:08 <claudiub> thanks folks for joining
14:00:14 <claudiub> see you next week! :D
14:00:21 <domi007> see you
14:00:29 <claudiub> #endmeeting