13:00:39 #startmeeting hyper-v
13:00:40 Meeting started Wed Feb 3 13:00:39 2016 UTC and is due to finish in 60 minutes. The chair is lpetrut. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:45 The meeting name has been set to 'hyper_v'
13:01:10 Hi guys. Alessandro will join us in a few minutes, but we can start the meeting now
13:01:28 Hi
13:01:29 Hi
13:02:51 Hi All
13:02:51 sagar_nikam: anyone else from HP to join the meeting?
13:03:07 lpetrut: me and vinod
13:03:11 we can start
13:03:16 Hi sagar_nikam
13:03:39 hi guys
13:04:21 great, sure. so, as I previously said, Alessandro will join us in a few minutes. Would you like to propose a topic to begin with?
13:04:41 yes, cluster driver
13:04:54 #topic Hyper-v clustering
13:04:55 I have the following topics from the neutron front
13:05:08 Native thread fixes in the liberty branch
13:05:08 we started that discussion last week towards the end of the meeting
13:05:20 Enhanced RPC merge?
13:05:33 OVS kernel panic
13:05:50 yep, so, on the clustering topic: Adelina was working on this. She said that she got in touch with you
13:06:10 yes, i did ping Adelina yesterday
13:06:22 we are setting up the hyperv cluster
13:06:29 should be ready in a couple of days
13:06:47 once that is ready, we can start testing the cluster driver
13:07:17 only issue, our setup is liberty, so it would be good if we have a cluster driver which is liberty based
13:07:23 that's great. unfortunately, she's not in the meeting right now, otherwise it would've been useful if she could've provided some updates on the status
13:07:33 sure
13:07:58 she mentioned to me that liberty based code can be provided
13:08:03 so waiting for that
13:08:03 hey guys
13:08:14 hi alexpilotti:
13:08:25 Hi all, sorry, my internet is acting up
13:09:10 alexpilotti: we are discussing the cluster driver
13:09:25 cool, was just back-reading
13:09:33 as mentioned a few mins back, we can start testing it
13:09:49 currently setting up the hyperv cluster on 2 machines
13:10:01 sagar_nikam: great!
13:10:02 i did chat with adelina yesterday
13:10:09 on this topic
13:10:19 atuvenie told me that you guys spoke
13:10:33 our setup is liberty based, so it would be good if that patch is liberty based
13:10:40 spoke about it to adelina
13:10:56 once she provides it, we will start
13:11:00 I am currently working on master, a liberty patch may be a bit tricky given there's no os-win there
13:11:08 sagar_nikam: they introduced changes (versioned objects) afterwards that are not in Liberty
13:11:25 so adapting the patch to Liberty requires extra work
13:11:41 ok
13:11:52 all our setups are liberty based
13:12:05 difficult to find a mitaka setup
13:12:21 will it be very difficult to rebase?
13:12:36 or, we had a liberty patch sometime back
13:12:40 does it work?
13:12:47 sagar_nikam: not dramatically difficult, but it requires work
13:12:52 we can pick that
13:13:12 you can take the mitaka patches and backport them, sure
13:13:37 on our side we're evaluating if we are going to put them in compute-hyperv liberty
13:13:46 ok
13:14:11 i can pick up from compute-hyperv liberty whenever it is ready (if you decide that)
13:14:15 but we surely won't backport on Nova (as part of a community effort, at least)
13:14:51 not an issue, if not backported to nova
13:15:05 we just need a patch which works in liberty
13:15:07 anyway, first we need to get it top notch on Mitaka :-)
13:15:16 right
13:15:38 i remember that there was a patch in Liberty, does it not work?
13:15:51 even kilo
13:16:04 we did it initially in kilo
13:16:07 if that works, we can start with it
13:16:44 we can definitely share those patches as well
13:16:57 sure
13:17:06 although back then it was a separate effort, as it was not merged in compute-hyperv
13:17:17 let us know which will be better to test, liberty-nova or compute-hyperv-liberty
13:17:24 we can wait
13:17:54 We'll send you an update as soon as we are done with the Mitaka one, if that works for you
13:18:09 sure
13:18:23 can we move to the next topic?
13:18:29 question on networking in the cluster driver
13:18:40 sure
13:18:47 if the instance is moved from host H1 to host H2
13:18:56 done outside of nova
13:18:56 by
13:19:03 failover cluster
13:19:15 the instance will retain the same ip
13:19:26 we just need the switch on both hosts
13:19:33 of the same name
13:19:43 is my understanding correct?
13:20:15 well, the networking is fairly transparent
13:20:29 and similar to what happens in live migration
13:20:50 ok
13:20:59 the networking-hyperv agent (or the OVS one) picks up the new port(s)
13:21:29 and IP and security rules will work as before?
13:21:29 the cluster is used for the VMs, not for IPs or other resources
13:21:53 sure, IP and sec groups are still managed by Neutron
13:22:01 ok
13:22:23 kvinod: do you have any questions on networking in the cluster driver?
13:22:25 our goal is to make cluster support almost "invisible"
13:22:43 alexpilotti: that's good
13:23:04 cool
13:23:06 next
13:23:07 sonu: kvinod: any questions from your end?
13:23:19 #topic RPC networking-hyperv patch
13:23:23 The case of OVS will be a little different
13:23:37 yes, on the neutron front the neutron agent will have to start with binding as if a port was created?
13:23:54 Sorry, I am a bit slow on networking.
13:24:28 i mean after migration to H2 the neutron agent will get a new port to process, for which it will have to do the binding?
13:24:29 Sonu, kvinod: correct, the port itself is created differently between OVS and networking-hyperv
13:24:41 but even in that case it does not affect the cluster itself
13:24:53 it's very similar to the live migration use case
13:25:20 ok, so that means networking will not have any impact?
13:25:35 and no work required for neutron
13:25:38 ?
13:25:39 Yes. Nova vif must create an OVS port. Neutron will bind it in the normal way
13:26:00 fully transparent
13:26:10 k
13:26:32 getting back to the new topic
13:26:36 RPC patch
13:26:57 Claudiu was at the midcycle meetup last week and is getting back to the office on Monday
13:27:08 but he told me that unit tests are missing on the patch
13:27:40 while abalutoiu tested the patch and results are all good
13:27:55 We covered some tests post Claudiu's comment
13:27:55 last week I uploaded a patch set with some UT code
13:28:31 Request you to review the same and advise if more is needed.
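For context on the failover networking exchange above: when the Microsoft Failover Cluster moves a VM from host H1 to host H2 outside of Nova, the cluster driver's job is to notice the move and update the instance's host; networking then follows the live-migration path, with the L2 agent on the new host picking up the port and Neutron rebinding it, so the instance keeps its IP and security group rules. Below is a minimal polling sketch of that detection step. The get_vm_owner_nodes() helper is hypothetical, and the real compute-hyperv driver may detect moves differently (e.g. via os-win); this illustrates the concept, not the driver's actual code.

```python
import time


def get_vm_owner_nodes():
    """Hypothetical helper: return a dict mapping each clustered VM name
    to the cluster node that currently owns it (in practice this would
    come from the Failover Cluster APIs, e.g. via os-win on Windows).
    """
    raise NotImplementedError("replace with a real cluster query")


def watch_failovers(on_failover, poll_interval=5):
    """Poll the cluster and report VMs whose owner node has changed.

    on_failover(vm_name, old_host, new_host) is called once per move;
    in the real driver this is roughly where Nova's record of the
    instance host would be updated, after which the networking-hyperv
    (or OVS) agent on the new host sees the port and Neutron binds it.
    """
    owners = get_vm_owner_nodes()
    while True:
        time.sleep(poll_interval)
        current = get_vm_owner_nodes()
        for vm_name, new_host in current.items():
            old_host = owners.get(vm_name)
            if old_host is not None and old_host != new_host:
                on_failover(vm_name, old_host, new_host)
        owners = current
```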
13:29:32 Thanks for testing
13:29:43 * alexpilotti looks for the patch link..
13:29:44 alexpilotti: could you please convey the message to Claudiu to have a look again and post a comment about what additional tests he is expecting
13:30:00 #link https://review.openstack.org/#/c/263865/
13:30:47 I see that you amended the commit msg and uploaded a BP, which is great
13:31:35 hello, just dropping in for a moment. the unit test coverage is not 100%,
13:31:42 approved the BP and set it to "needs review"
13:31:56 also, I've suggested adding some unit tests with some raw enhanced SG rules.
13:32:03 patchset 4 had UT code
13:32:04 Thank you very much
13:32:10 abalutoiu just confirmed that he tested it again on top of the latest patchsets, so seems all good
13:32:26 hyper-v CI is happy as well
13:32:33 good
13:33:00 woot
13:33:09 kvinod: typically, you need to add a unit test per new method / function, covering all code branches
13:33:32 or more, if there are mutually exclusive code branches..
13:34:49 Vinod, do you want to recheck this please.
13:34:53 claudiub: welcome back! :)
13:35:03 o/
13:35:25 claudiub: will have a check again and do the needful
13:35:47 claudiub: to speed up the communication, could you please tell kvinod which are the methods / code paths that still need test coverage?
13:36:06 k, will re-review once the unit tests are submitted.
13:36:12 that would be great, thanks
13:36:48 claudiub: I hope you saw patchset 4
13:37:12 on top of it, do let me know what additional UT is required
13:37:56 yeah, _generate_rules, _select_sgr_by_direction, _split_sgr_by_ethertype, update_security_group_rules are uncovered
13:38:41 ok, will cover those also
13:41:04 kvinod: in general we try to achieve max coverage
13:41:18 the idea is that if there's a code path, a test needs to run it
13:41:27 next topic?
13:42:08 OVS core dump
13:42:38 We faced that on Windows 2012 R2.
13:42:43 great
13:42:50 #topic OVS core dump
13:43:14 so, the dump shows that the error is in the underlying driver
13:44:30 and if it doesn't show up on our test environments (Mellanox, Chelsio, etc), we need to find a way to access a similar environment
13:44:44 or at least get a couple of Emulex cards
13:45:06 alexpilotti: I'm looking into who to reach out to for that as well
13:45:29 Ok
13:45:37 Sonu: any chance you could let us access it or send a few boxes to Cambridge so that primeministerp could add them to the test racks?
13:46:29 Sonu: is that going to be your reference architecture or just some "random hardware"?
13:46:36 Sonu, we may only need the HBAs
13:46:42 Random
13:47:05 I will find that out.
13:47:07 Sonu: do you have some other NICs around? E.g. Mellanox or Intel?
13:47:26 Sonu: this was basically what I was asking in the previous email exchange
13:47:27 how are the NICs configured? e.g. teaming, etc.
13:47:38 I will find that out.
13:48:06 Sonu: since the error is in the driver, we might need to speak with the Emulex driver team
13:48:09 No team
13:48:21 at Emulex I mean :-)
13:48:56 sonu: did you mean no NIC teaming?
13:49:03 I understand. Let me try with another NIC model.
13:49:20 it's a closed source driver, so we have no idea what happens in it, so debugging options are limited :-)
13:49:58 Sagar, we did not team the NICs yet for OVS
13:50:06 It could be that: 1) we send some perfectly fine packets that the driver is not handling properly, or 2) the packets that we send have issues
13:50:35 It is very consistent, so most likely it is 2
13:51:04 Sonu: consistent on different blades?
13:51:04 Sonu: consistent across different NIC models?
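To make claudiub's coverage guideline above concrete (one unit test per method, exercising every code branch), here is a minimal sketch. The _split_sgr_by_ethertype body shown is a simplified stand-in written for illustration; the real method lives in networking-hyperv's enhanced security group driver, and the rule dicts here are assumptions, not the project's actual fixtures.

```python
import unittest


def _split_sgr_by_ethertype(rules):
    # Simplified stand-in: partition security group rules by ethertype.
    ipv4 = [r for r in rules if r.get('ethertype') == 'IPv4']
    ipv6 = [r for r in rules if r.get('ethertype') == 'IPv6']
    return ipv4, ipv6


class TestSplitSgrByEthertype(unittest.TestCase):
    def test_split_mixed_rules(self):
        # Branch where both ethertypes are present.
        rules = [{'ethertype': 'IPv4', 'direction': 'ingress'},
                 {'ethertype': 'IPv6', 'direction': 'egress'}]
        ipv4, ipv6 = _split_sgr_by_ethertype(rules)
        self.assertEqual([rules[0]], ipv4)
        self.assertEqual([rules[1]], ipv6)

    def test_split_no_rules(self):
        # Branch where the rule list is empty.
        self.assertEqual(([], []), _split_sgr_by_ethertype([]))


if __name__ == '__main__':
    unittest.main()
```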
13:51:08 With Windows 2012 R2 Datacenter edition we faced this issue.
13:51:25 Sonu: can you try on another blade
13:51:54 Sure.
13:52:11 Tried only on Emulex.
13:52:30 Should get some QLogic adapters and test.
13:52:36 I already asked aserdean on my OVS team to look into this, so we'll try to replicate it ASAP, but since it seems like a fairly straightforward use case, I'm not expecting much
13:53:02 Thx
13:53:59 anything else on this topic?
13:54:09 we have 7 minutes to go
13:54:24 in last week's meeting
13:54:25 lpetrut: did you speak about FC already?
13:54:48 we discussed thala using the liberty based patches for his scale testing
13:55:01 he is still waiting for those patches
13:55:26 alexpilotti: well, I have no updates on the FC topic, other than the fact that the Nova patches have been -2ed because of the feature freeze
13:55:36 sagar_nikam: yeah, claudiub is working on that, but as I was saying, he was off the last two weeks
13:56:17 alexpilotti: sure, we will wait and test it when available
13:57:05 also, thala was not able to proceed with testing due to difficulty in porting the native thread and enhanced RPC patches on top of stable liberty
13:57:42 kvinod: couldn't Thala just run a test without those patches for now?
13:58:23 No, the idea was to run the test with all updates and available patches
13:58:27 alexpilotti: i think he was waiting based on the discussion we had last week
13:58:29 kvinod: also, the enhanced RPC one, if I'm not mistaken, requires a Neutron patch, which merged in master
13:58:37 and that is what we discussed in the last IRC
13:59:12 the neutron dependent patch was also taken
13:59:35 kvinod: did you backport it to liberty already?
13:59:44 so with all patches on top of liberty it is causing an exception
13:59:54 ah ok
14:00:17 no, I thought of discussing the plan to backport today
14:00:36 backporting to liberty
14:00:54 the neutron patch can already be backported, I guess
14:01:06 do we have a plan for backporting the patches to liberty?
14:01:08 should this topic be discussed in the hyperv channel now?
14:01:19 since we are almost done with time
14:01:22 to stable/liberty in upstream Neutron, I mean
14:01:32 uh oh, out of time folks :)
14:01:40 #endmeeting