13:00:59 #startmeeting hyper-v
13:01:00 Meeting started Wed Apr 20 13:00:59 2016 UTC and is due to finish in 60 minutes. The chair is alexpilotti. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:03 The meeting name has been set to 'hyper_v'
13:01:19 hi folks
13:01:21 Hi
13:01:30 hi all
13:01:35 hi
13:01:43 hmm, isn't the neutron_qos meeting here at this time?
13:02:08 To begin, a quick announcement: we'll skip the meeting next week as most of us are going to be at the Austin summit
13:02:30 hello
13:02:37 ajo: I also thought that :)
13:02:55 ajo: looks like there's some timezone issue here :)
13:02:56 slaweq_, ours is later: http://time.is/es/UTC
13:02:58 sorry :)
13:03:02 aha
13:03:03 ok
13:03:14 DST
13:03:46 ajo slaweq_: the beauty of having meetings in UTC is that around this time of the year it becomes a mess :)
13:04:00 hehehehe alexpilotti yes ':)
13:04:40 yep
13:04:46 sorry for disturbing you :)
13:05:12 slaweq_: np! :)
13:05:41 #topic tpool for mixed threads / greenlets
13:06:12 We identified some race conditions around WMI events which are handled in native threads
13:06:47 there's a new patch that just merged in os-win, for master and Mitaka
13:07:01 what symptoms can be observed? random WMI operations failing?
13:07:10 basically, we have to avoid using eventlet primitives (or monkey-patched ones, such as locks, events, etc) when running code in a different native thread
13:07:38 lpetrut: do you have a trace at hand or even better the link to the bug?
13:07:44 the event listener was dying unexpectedly with a greenlet error, complaining that it cannot switch to a different thread
13:07:48 sure
13:08:43 https://bugs.launchpad.net/os-win/+bug/1568824
13:08:43 Launchpad bug 1568824 in os-win "Event listeners die unexpectedly" [Undecided,Fix released]
13:08:46 thanks
13:10:01 I just noticed that it was unprioritized; I've set the priority now
13:11:40 talking about MI and WMI, it's important to note that the Hyper-V WMI v2 provider in Windows Server 2012 has some issues
13:11:50 just added a trace to the bug report as well
13:12:04 so for 2012 we fall back to the old WMI when needed
13:12:50 this means that, since the old WMI does not support parallel operations, we limit the workers to 1 in networking-hyperv
13:13:10 important: this does NOT apply to 2012 R2, only 2012
13:13:21 alexpilotti: if using 2012, we need to use WMI instead of PyMI ?
13:13:44 I think this is all PyMI's internal magic, isn't it?
13:13:45 just for some operations, os-win takes care of that
13:13:54 I meant os-win :)
13:14:23 basically os-win selects the proper WMI module based on the OS
13:14:48 ok
13:14:52 what is annoying is that we still need to ship pywin32 to support 2012
13:15:12 as the old WMI depends on it
13:15:31 that's indeed annoying, but at least everything works everywhere
13:15:38 nothing critical but it's important to note it
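To illustrate the tpool discussion above, here is a minimal, hypothetical Python sketch of the general pattern (not the actual os-win/PyMI fix): code running in a native OS thread avoids eventlet/monkey-patched primitives, and blocking work is handed back to green threads through eventlet.tpool. The simulated event source and all names are illustrative only.

    # Hypothetical sketch only; assumes Python 3 and eventlet installed.
    import eventlet
    eventlet.monkey_patch()

    from eventlet import patcher, tpool

    # Use the *unpatched* modules on the native-thread side; the monkey-patched
    # ones would try to switch greenlets from a foreign thread, which is the
    # "cannot switch to a different thread" error mentioned above.
    native_threading = patcher.original('threading')
    native_queue = patcher.original('queue')

    events = native_queue.Queue()

    def native_event_listener():
        # Stands in for a WMI-style event callback running in a native thread:
        # it only touches plain (unpatched) primitives.
        for i in range(3):
            events.put('event-%d' % i)

    def green_consumer():
        # Green-thread side: tpool.execute runs the blocking get() in
        # eventlet's native thread pool, so the hub keeps scheduling greenlets.
        for _ in range(3):
            print('handled', tpool.execute(events.get))

    listener = native_threading.Thread(target=native_event_listener)
    listener.start()
    eventlet.spawn(green_consumer).wait()
    listener.join()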
13:16:18 #topic Mitaka release
13:16:58 today we're done with all bugs and backports
13:17:39 a new PyMI 1.0.2 release is under test and will be released later today if nothing new shows up on the radar
13:18:18 which means that final tests for our MSI will start immediately afterwards as well
13:18:39 what are the fixes in 1.0.2 as compared to the previous version ?
13:18:57 alexpilotti: once the MSI tests are done, is it safe to say that upgrading to Mitaka should be painless?
13:19:35 sagar_nikam: mostly minor things: https://github.com/cloudbase/PyMI/commits/master
13:19:56 ok
13:20:28 the main "feature" is adding a field to the previous WMI instance object during events
13:20:33 this is needed for clustering
13:20:47 ok
13:21:26 besides this, better error reporting when trying to retrieve a non-existing WMI instance property / method
13:23:33 the current version is 1.0.2.dev2, which will become a signed 1.0.2 release after tests (unless regressions show up, of course)
13:24:05 next
13:24:28 #topic console support specs
13:24:47 after the millionth rebase we have been asked to add a spec for this BP
13:25:17 bp link ?
13:25:22 the reason is that we are also working on objects, so it's not a so-called trivial BP anymore
13:25:36 lpetrut: do you have a link at hand?
13:25:39 basically, the serial console implementation can go in; they need a spec just for adding the image props
13:25:47 sure
13:26:24 here's the BP: https://blueprints.launchpad.net/nova/+spec/hyperv-serial-ports
13:26:44 but for the image props, we'll need a spec, which I have not submitted yet
13:27:01 thanks lpetrut
13:27:41 to be clear, console support has been part of compute-hyperv for ages, but since it needs to merge into nova eventually, this is the status
13:28:25 those were also the first patches in the Newton review queue, so we are now bumping up the next in line
13:29:29 #topic OVS 2.5
13:30:15 we have a release-blocker bug for 2.5: when using LACP, the switch drops packets in some conditions
13:30:30 oh I see
13:30:38 ATM this is the last blocker before release
13:30:53 this can be worked around using NetLBFO
13:31:22 but it's obviously much better if LACP can be handled directly by the vmswitch + OVS
13:31:48 especially considering that the user can just apply the same OpenFlow rules on Linux and Windows
13:32:12 sagar_nikam: did you guys do tests with the OVS 2.5 beta?
13:32:57 i think not yet. sonu: team was planning, but as per my understanding it is not yet done
13:33:01 it is planned
13:33:06 ok thanks
13:33:30 #topic Cinder SMB3 / iSCSI driver release
13:33:44 we would do testing, but without documentation we are not sure what to do... should we try to do it based on your previous write-up?
13:34:09 oops, sorry, got caught in the middle of the topic change :) I'll leave my questions for the end
13:34:11 domi007: good point, we're still writing the updated docs, let me ping a colleague
13:34:23 thanks :)
13:36:17 domi007: I'm starting an email thread
13:36:40 all right, looking forward to it :)
13:37:02 so you don't have to wait for the public docs to be available
13:37:21 good, I'm grateful
13:37:39 so getting back to the Cinder driver, it's also getting released
13:38:32 nothing major to report: there are various bug fixes, but core functionality is retained from Liberty
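Relating to the "better error reporting when trying to retrieve a non-existing WMI instance property / method" mentioned under the Mitaka release topic above, here is a purely illustrative Python sketch of that kind of improvement; it is not PyMI's actual implementation, and the wrapper class, fake instance and message wording are made up.

    # Illustrative only; not PyMI code. Shows the general idea of turning a
    # bare AttributeError into a message that names the WMI class involved.
    class DescriptiveInstanceWrapper(object):
        """Wraps a WMI-like instance and raises descriptive AttributeErrors."""

        def __init__(self, instance, class_name):
            self._instance = instance
            self._class_name = class_name

        def __getattr__(self, name):
            try:
                return getattr(self._instance, name)
            except AttributeError:
                raise AttributeError(
                    "WMI class %s has no property or method named %r"
                    % (self._class_name, name))

    class _FakeInstance(object):
        Name = 'instance-00000001'

    wrapped = DescriptiveInstanceWrapper(_FakeInstance(), 'Msvm_ComputerSystem')
    print(wrapped.Name)            # works: 'instance-00000001'
    try:
        wrapped.ElementName
    except AttributeError as exc:
        print(exc)                 # descriptive error naming the WMI class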
13:38:55 #topic open discussion
13:39:51 alexpilotti: have you tried Windows NLB ?
13:39:52 Most topics today were centered on the Mitaka release, as that's where most of our attention went in the last few days
13:39:57 network load balancer
13:40:04 sagar_nikam: in what context?
13:40:18 i am thinking of using it for freerdp
13:40:29 run freerdp on 3 machines
13:40:40 have a cluster IP using NLB
13:41:07 any load balancer that can preserve client affinity and supports websockets is good
13:41:10 and then provide that cluster IP in nova.conf
13:41:34 so we have this weird issue I already put up on ask, and Alin was nice and started looking into it - basically if a network security group is applied to a VM and the group is then modified, the modification isn't applied to the VM somehow. I couldn't get any traces of it yet, but as far as I know the issue still exists
13:42:24 sagar_nikam: haproxy could maybe be used; we will need some kind of FreeRDP clustering as well, good that you mentioned this
13:42:26 alexpilotti: i had some questions on NLB, does anybody from your team have knowledge of it ?
13:42:49 sagar_nikam: we are using haproxy as well, much more than NLB
13:43:04 domi007: haproxy for windows ? i think it is not available, hence i am thinking of using NLB
13:43:29 oh I see, I thought of it because we already have one in our system that balances the openstack management APIs :)
13:43:32 alexpilotti: haproxy on windows ? how does it work
13:43:46 we use NLB as well, but since it's not that popular we don't use it as a load balancer in test deployments
13:43:53 sagar_nikam: linux
13:44:18 it's just sitting in front of the Windows cluster
13:44:26 also, IIS has a nice LB feature
13:44:33 as an alternative
13:44:49 alexpilotti: since i am running freerdp on windows machines, i thought NLB was the only option
13:44:51 it's called ARR
13:45:28 alexpilotti: can you/your team help me with using haproxy and a windows cluster
13:45:49 If you want a pure Windows deployment, NLB or ARR are the only choices
13:46:14 if not, any LB frontend that meets the above requirements will do
13:46:56 domi007: abalutoiu says he tested on Mitaka and all is good, he's switching to Liberty now
13:47:02 in your environments you use haproxy on linux and it balances the windows cluster ?
13:47:16 do you also have the issue of wsgate sometimes simply not working, but a restart of the service fixes it?
13:47:33 sagar_nikam: the FreeRDP hosts, not the Windows cluster itself
13:48:01 sagar_nikam: in our env we have Linux VMs running OpenStack's management services, and between them all API calls are routed through haproxy
13:48:04 alexpilotti: right i meant freerdp hosts
13:48:40 alexpilotti: and what IP is given in nova.conf for freerdp on hyperv hosts ?
13:49:17 domi007: it happened for us on previous versions due to a lock, do you have a dump that we can analyze?
13:49:54 alexpilotti: by dump you mean a log file? does wsgate keep logs at all? haven't really checked this
13:49:56 domi007: if not, if it happens again, get in touch with cmunteanu on our team
13:50:15 oh, okay, that's good
13:50:31 domi007: no, a Windows memory dump. You can generate one when the service becomes unresponsive
13:50:58 oh, all right... but there is a newer version available if I understood correctly?
13:51:03 domi007: Windows can generate dumps on BSODs and application crashes, but you can also take one anytime
13:51:14 makes sense, sure
13:51:28 domi007: the last one was in December AFAIK
13:51:57 okay, I think our installation is newer than that, so we will cluster and dump as well
13:52:03 :)
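Since haproxy in front of the FreeRDP-WebConnect farm came up above, here is a hedged configuration sketch of the kind of setup described (source-IP affinity so WebSocket sessions stick to one wsgate instance, plus HTTP health checks). The backend names, addresses and port are placeholders, not a tested deployment.

    # Hypothetical haproxy sketch; addresses and ports are placeholders.
    frontend freerdp_ws
        bind *:8000
        mode http
        default_backend freerdp_farm

    backend freerdp_farm
        mode http
        balance source              # keep client affinity for WebSocket sessions
        option httpchk GET /        # simple HTTP health check per wsgate host
        server rdp1 10.0.0.11:8000 check
        server rdp2 10.0.0.12:8000 check
        server rdp3 10.0.0.13:8000 check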
13:52:36 alexpilotti: so the haproxy VIP is given in nova.conf on the hyperv hosts ?
13:53:21 sagar_nikam: yes correct
13:53:30 ok
13:53:35 will try that
13:53:39 since that's what horizon passes to the user
13:54:00 proxies to the user :)
13:54:22 then the LB connects to the FreeRDP farm
13:54:35 ok
13:54:41 and it works well ?
13:54:42 and those connect to the individual Hyper-V hosts based on the Nova token
13:55:01 sagar_nikam: the only issues we had were with websocket affinity
13:55:04 never tried using haproxy running on linux to connect to windows machines
13:55:28 we do it all the time, mostly for web apps
13:55:36 ok
13:55:52 it's a very simple and lightweight solution
13:55:56 alexpilotti: will mail you if i need any help
13:55:59 since HTTP is universal, haproxy simply pings based on HTTP
13:56:09 this seems much simpler than using NLB
13:56:19 so it should work... we will try this as well, so if you need help sagar_nikam, email us
13:56:22 sure, but yeah as domi007 says it's just HTTP :)
13:56:25 as well
13:56:37 sure domi007:
13:57:08 NLB is not limited to HTTP, but in this case that's all you need
13:57:26 -3'
13:57:31 alexpilotti: agree
13:57:35 any other topics?
13:58:17 Timing out... :)
13:58:24 alexpilotti: 4 meetings scheduled at the summit: freezer, monasca, magnum and designate. let me know if you need any help
13:58:46 Good luck in Austin guys, see you in two weeks! Hopefully the Mitaka MSIs will be done with testing and out for the public :)
13:59:13 alexpilotti: till then we'll look into OVS
13:59:36 sagar_nikam: thanks for all your help there, we're still waiting for an answer from the Magnum PTL AFAIK, but all the rest of the meetings are planned
13:59:57 thanks guys!
14:00:01 #endmeeting
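As a follow-up to the nova.conf question above, a hedged example of the relevant settings on the Hyper-V compute nodes: the RDP console options live in nova's [rdp] section, and the base URL would point at the load balancer VIP rather than at an individual FreeRDP-WebConnect host. The VIP and port below are placeholders; only the option names come from nova.

    # Hypothetical values; option names are from nova's [rdp] section.
    [rdp]
    enabled = True
    # VIP of the haproxy/NLB frontend sitting in front of the FreeRDP farm
    html5_proxy_base_url = http://192.0.2.10:8000/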