13:00:21 <claudiub|2> #startmeeting hyper-v
13:00:23 <openstack> Meeting started Wed Jul 27 13:00:21 2016 UTC and is due to finish in 60 minutes.  The chair is claudiub|2. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:25 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:27 <openstack> The meeting name has been set to 'hyper_v'
13:00:45 <sagar_nikam> Hi
13:00:48 <claudiub|2> damn, I have a bad nickname
13:00:52 <claudiub|2> hello. :)
13:01:19 <sagar_nikam> how was the mid cycle meetup ?
13:01:42 <claudiub|2> well, tiring. :)
13:01:54 <claudiub|2> we'll get to the details shortly. :)
13:02:06 <sagar_nikam> ok
13:02:15 <claudiub|2> anyone else joining us today?
13:02:34 <claudiub|2> lpetrut won't be with us today, as he's on vacation
13:02:42 <claudiub|2> anyways, let's get started
13:02:51 <claudiub|2> #topic os-brick status
13:03:03 <claudiub|2> sooo, we have a voting CI on os-brick
13:03:06 <claudiub|2> yeay. :)
13:03:12 <sagar_nikam> good ...
13:03:37 <claudiub|2> buut, currently, it only tests iscsi and smb. fibre channel will be added soon, probably by next week.
13:03:55 <atuvenie_> hi guys!
13:03:56 <claudiub|2> lpetrut has worked diligently on it. :)
13:04:04 <claudiub|2> atuvenie_: hi. :)
13:04:05 <abalutoiu> hello guys
13:04:26 <sagar_nikam> thanks lpetrut:
13:04:53 <itoader> hi
13:04:53 <claudiub|2> for now, I'll bug hemna and smcginnis to review the smb patch on os-brick.
13:05:03 <sagar_nikam> hi atuvenie_: abalutoiu:
13:05:13 <sagar_nikam> ok
13:05:17 <claudiub|2> that's mergeable, and they shouldn't have any more complaints regarding ci for it
13:05:28 <sagar_nikam> i will also try to get hemna's review
13:05:39 <claudiub|2> sagar_nikam: cool, thanks. :)
13:05:51 <claudiub|2> #topic designate status
13:06:16 <claudiub|2> abalutoiu: the patch didn't merge yet. abalutoiu has been addressing comments
13:06:54 <claudiub|2> hopefully it'll get merged soon.
13:07:28 <claudiub|2> #topic shielded VMs
13:08:05 <claudiub|2> sooo, we've said this in the past meetings: Hyper-V 2016 comes with a new feature called shielded vms
13:08:32 <claudiub|2> it's a pretty neat feature, the instance is fully encrypted and safe
13:08:49 <claudiub|2> it is merged at the moment in compute-hyperv
13:09:04 <sagar_nikam> planned for "o" ? upstream
13:09:11 <sagar_nikam> BP approved ?
13:09:22 <claudiub|2> if you guys are planning to use it, that would be great. :)
13:09:38 <sagar_nikam> we will try it ....
13:09:43 <sagar_nikam> nice feature to have
13:10:19 <claudiub|2> sagar_nikam: the blueprint was approved in the past, but it wasn't approved in newton, since there were some changes in how shielded vms were implemented, so we had to rewrite some parts of the spec.
13:10:32 <sagar_nikam> ok
13:11:08 <claudiub|2> itoader: can you share some links on this topic?
13:11:22 <claudiub|2> on how to use them / how to create the env for it?
13:11:43 <itoader> This explains the concept and how to do the setup: https://cloudbase.it/hyperv-shielded-vms-part-1/
13:12:22 <itoader> And this is the link on shielded vms in openstack https://cloudbase.it/hyper-v-shielded-vms-part-2/
13:12:54 <claudiub|2> cool, thanks. :)
13:13:18 <itoader> I think everything needed is explained in the blog posts, but if you have any questions, I'll gladly answer them :)
13:13:42 <sagar_nikam> we will check
13:13:54 <sagar_nikam> was reading the blog now... sounds very interesting
13:14:28 <claudiub|2> cool, moving on. :)
13:14:42 <claudiub|2> #topic OpenStack Summit presentations
13:14:53 <sonu> claudiub: Can you give reference to use cases of using shielded vms with openstack?
13:15:26 <claudiub|2> soo, just a short topic, the voting for presentations in barcelona is open.
13:15:30 <claudiub|2> #link https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/presentation/16466
13:15:43 <claudiub|2> sonu: just a sec.
13:15:51 <sonu> Is it Telco use case.
13:16:33 <claudiub|2> soo, there are a few presentations regarding windows and hyper-v that would be nice to have there. for that, they need votes.
13:17:09 <claudiub|2> unfortunately, the vote-for-presentations link is a bit... bad, and I cannot link the exact presentations directly
13:17:26 <sagar_nikam> ok
13:18:07 <claudiub|2> but, a quick search for Alessandro, Samfira, Vladu, and Sonu will reveal those presentations. :)
13:19:09 <claudiub|2> so, if you could, please vote on those presentations. :)
13:19:32 <sagar_nikam> sure
13:20:00 <claudiub|2> sonu: ok, so now answering your question: the usecase is whenever security is a huge concern
13:20:14 <claudiub|2> sonu: and the VMs and their data needs to be protected
13:20:24 <sonu> I got my answer Claudiu Thanks
13:20:52 <claudiub|2> e.g.: vms related to banks, financial transactions, personal data / info, etc.
13:20:57 <sonu> Yes
13:21:15 <claudiub|2> k
13:21:33 <claudiub|2> #topic nova midcycle meeting status
13:21:48 <claudiub|2> ok, so this is going to take a while
13:22:18 <claudiub|2> it'll be hard to compress 3 days worth of discussions into 35 mins.
13:22:53 <sagar_nikam> ok
13:23:06 <claudiub|2> for a comprehensive view on the topics discussed at the midcycle, there's an etherpad
13:23:09 <claudiub|2> #link https://etherpad.openstack.org/p/nova-newton-midcycle
13:23:11 <sagar_nikam> how was the discussion on clusterdriver
13:24:15 <claudiub|2> soo, on the cluster driver, the nova folks reaaaallly don't like the fact that failover migration can occur without nova's consent
13:24:56 <claudiub|2> they say that ideally, the hyper-v cluster driver should call nova's api to actually do the failover
13:25:14 <sagar_nikam> oh ....
13:25:30 <claudiub|2> so that the claims, affinity, and other scheduling rules will be applied to the failover
13:25:33 <sagar_nikam> driver calling api ?
13:25:53 <claudiub|2> yeah, don't like it either. :)
13:26:18 <claudiub|2> but the thing is, the whole scheduler / claiming / placement logic is being heavily refactored.
13:26:33 <claudiub|2> right now, the claims are being made locally, on each compute-node
13:26:50 <claudiub|2> and for now, the claims are correct with the cluster driver
13:27:05 <claudiub|2> but all the claims and all its logic are going to be moved to the scheduler
13:28:01 <claudiub|2> so, unless we do the failover "via the api" as they say, the resource claims won't be correct when a failover occurs.
13:28:37 <claudiub|2> right now, I'm thinking how we can manually do the failover...
13:28:37 <sagar_nikam> but ... driver calling api ... may not be good
13:29:58 <claudiub|2> so, there's a field on the MSCluster_Resource object that says how many times it can fail over. wondering if we can set it to 0, detect whenever a failover needs to happen, and then call the api for it
13:30:21 <claudiub|2> that's going to require some experiments.
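[editor's note] The manual-failover flow proposed above (disable automatic failover, detect when a VM needs to move, route the move through the Nova API so scheduler claims stay correct) can be illustrated with a short, hypothetical Python stand-in. Every name below (the event shape, `instances_needing_failover`, `FakeNovaAPI`) is an invented stub, not the compute-hyperv implementation or the actual Nova client API:

```python
# Hypothetical sketch of the proposed flow: a listener stand-in spots
# instances whose host went down, and each move is routed through a
# (faked) Nova API call instead of letting the cluster service fail
# the VM over on its own.

def instances_needing_failover(events):
    """Stand-in for the WMI listener: yield instances whose host went down."""
    for event in events:
        if event["type"] == "host_down":
            yield event["instance"]

class FakeNovaAPI:
    """Stub for the Nova-side call; in the real proposal this is where
    nova would apply claims, affinity, and other scheduling rules."""
    def __init__(self):
        self.evacuated = []

    def evacuate(self, instance):
        self.evacuated.append(instance)
        return "rescheduled:%s" % instance

events = [
    {"type": "heartbeat", "instance": "vm1"},
    {"type": "host_down", "instance": "vm2"},
]
api = FakeNovaAPI()
results = [api.evacuate(i) for i in instances_needing_failover(events)]
print(results)  # -> ['rescheduled:vm2']
```

The point of the sketch is only the shape of the control flow: failover becomes an explicit, nova-mediated operation rather than something the cluster service does behind nova's back.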
13:30:55 <sagar_nikam> you mean nova driver detects when the failover has to happen ?
13:31:07 <claudiub|2> yep
13:31:20 <atuvenie_> claudiub|2: but that kind of defeats the purpose of failover
13:31:23 <claudiub|2> so, unless we can do something like this, they won't say yes to the spec.
13:31:31 <claudiub|2> atuvenie_: i know
13:31:47 <claudiub|2> atuvenie_: it is going to slow the failover a lot. :)
13:31:54 <sagar_nikam> how will hardware failures on another node be detected ?
13:32:17 <atuvenie_> claudiub|2: also, if we set that field to 0, it will never fail over to another node. that field means how many times the vm can be moved around
13:32:32 <atuvenie_> claudiub|2: that field I mean
13:32:58 <sagar_nikam> i think setting the field to 0 defeats the purpose of HA
13:33:18 <claudiub|2> sagar_nikam: so, cluster resources are available at the cluster level. the cluster service detects whenever a failover needs to occur and does it. at the moment, in the compute-hyperv we have a wmi listener, which detects whenever a cluster resource changes its host.
13:34:10 <claudiub|2> atuvenie_: i know, that's the point. basically to disable the automatic hyper-v failover, so we can attempt to do it manually.
13:34:10 <sagar_nikam> and where is the wmi listener running ?
13:34:41 <claudiub|2> sagar_nikam: on all nodes. they all listen to the event "when a cluster resource changes its host to me"
13:34:46 <sagar_nikam> doing it manually can mean the VM can get powered off... which may not be a good solution
13:34:57 <sagar_nikam> oh...
13:35:17 <atuvenie_> claudiub|2: yeah, wait, if that value is 0 and the system triggers a failover, then the vm will be in error state
13:35:41 <atuvenie_> claudiub|2: so you mean we should detect this and move it manually then?
13:35:44 <sagar_nikam> you mean nova-compute running on all the nodes in the cluster will listen to this wmi listener ?
13:36:10 <claudiub|2> atuvenie_: pretty much, yeah.
13:36:12 <atuvenie_> claudiub|2: error state in hyperv, not in nova I mean
13:36:46 <claudiub|2> atuvenie_: not sure it is going to be explicitly in error state
13:36:58 <atuvenie_> claudiub|2: then how is this different than a cold migration? I don't even know if we can recover it from that state
13:37:01 <claudiub|2> we'll have to see what exactly happens if failover count is 0
13:37:13 <atuvenie_> claudiub|2: I think we can actually, but what about hardware failure?
13:37:48 <claudiub|2> atuvenie_: if there's a hw failure, the vm will be in off state anyways
13:37:59 <atuvenie_> claudiub|2: no it will not
13:38:10 <sagar_nikam> agree with atuvenie_: we need to handle hardware failures
13:38:13 <atuvenie_> claudiub|2: it will be restarted on another node pretty fast
13:38:21 <claudiub|2> the hyper-v cluster documentation says that it will not guarantee that the failover vms will have the same state as before failover
13:38:22 <atuvenie_> claudiub|2: from a saved state
13:38:24 <sagar_nikam> currently ... the mscluster handles it
13:38:47 <atuvenie_> claudiub|2: it's not the exact same state, but pretty close, and certainly not from off state
13:39:17 <atuvenie_> claudiub|2: it's the closest saved state the hyper-v cluster has
13:39:57 <claudiub|2> if there is such a saved state, we can restore that state on another host then.
13:40:37 <atuvenie_> claudiub|2: also, if we do this, we're not taking advantage of any of the clustering features in hyper-v. we could just make our own cluster manually and be done with it, because this way we use the hyper-v cluster for nothing if we don't use any of its features
13:40:52 <atuvenie_> claudiub|2: I don't think we can access that state
13:41:52 <claudiub|2> why not? why don't we have access to it, but the cluster service magically has access to it?
13:42:40 <atuvenie_> claudiub|2: it's how the hyper-v cluster works. I assume we don't have access there, but we can check
13:43:37 <atuvenie_> claudiub|2: still, this sounds like a pretty nasty hack to be honest.
13:44:37 <claudiub|2> well, nova core didn't offer any other solution
13:45:32 <claudiub|2> anyways.
13:45:42 <claudiub|2> remains to be seen how we can address this
13:46:01 <claudiub|2> as for other news, there are a couple of them
13:46:09 <claudiub|2> live-resize is going to be a thing, finally
13:46:13 <claudiub|2> yeay
13:46:31 <claudiub|2> but I'll have to do a blueprint beforehand, for a new api
13:46:59 <sagar_nikam> ok
13:46:59 <claudiub|2> which can basically tell you "what you can do", given your permissions as a user and the capabilities of your cloud.
13:47:41 <claudiub|2> the live-resize will be implemented for all drivers, there are volunteers for each of the drivers.
13:47:59 <claudiub|2> as for host capabilities, there's going to be a new project called os-capabilities
13:48:34 <claudiub|2> and there will be all sorts of capabilities, for cinder and neutron, not only for nova.
13:49:05 <claudiub|2> so, we'll have to handle the hyper-v related capabilities in the near future on that project.
13:49:11 <sagar_nikam> ok
13:49:40 <claudiub|2> multiple ephemerals: nova supports them, but the nova folks have no idea why they were introduced in the first place
13:49:50 <claudiub|2> and they might be deprecated in the future.
13:50:22 <claudiub|2> then there's the new placement api
13:50:34 <claudiub|2> which will be the next step for the scheduler
13:50:54 <claudiub|2> and which will be outside of nova, as it will also be used by nova, neutron, cinder.
13:51:21 <sagar_nikam> so where is this new API coming from ?
13:51:24 <sagar_nikam> if not nova
13:51:36 <claudiub|2> although there's still plenty of work to be done on that, plus, they want to make sure the host capabilities fits very well in it.
13:51:59 <claudiub|2> sagar_nikam: it is going to be a separate project
13:52:11 <sagar_nikam> ok
13:52:16 <sagar_nikam> os-capabilities ?
13:52:33 <claudiub|2> no, another
13:52:39 <claudiub|2> no name yet
13:53:18 <sagar_nikam> ok
13:53:45 <claudiub|2> and yeah, there were a lot of talks about how to evolve the nova api in the future
13:53:59 <claudiub|2> as they want to get rid of most of the api extensions
13:54:14 <claudiub|2> as you know, they already removed the legacy v2.0 api
13:54:33 <claudiub|2> and how to finally get rid of nova-network
13:54:55 <claudiub|2> by constantly breaking bits of it, until people finally move to neutron.
13:55:28 <claudiub|2> anyways.
13:56:32 <claudiub|2> there were other topics as well, you can read them in the etherpad
13:56:52 <claudiub|2> but those were the major things, and things that we had an interest in.
13:57:02 <claudiub|2> #topic open discussion
13:57:20 <claudiub|2> anything here?
13:57:49 <sagar_nikam> back to cluster driver in compute-hyperv... any further discussion on iscsi support ?
13:58:34 <claudiub|2> atuvenie_: ^
14:00:42 <claudiub|2> hm, she got disconnected
14:00:43 <atuvenie_> sagar_nikam: if you want to go ahead with the idea of having each node login all targets you can propose a patch
14:01:02 <claudiub|2> hm, i was wrong. :)
14:01:11 <sagar_nikam> ok
14:01:15 <claudiub|2> anyways... need to end the meeting
14:01:27 <sagar_nikam> thanks
14:01:27 <claudiub|2> thanks folks for joining, see you next week!
14:01:30 <claudiub|2> #endmeeting