13:03:21 #startmeeting hyper-v
13:03:21 o/
13:03:22 Meeting started Wed Dec 9 13:03:21 2015 UTC and is due to finish in 60 minutes. The chair is alexpilotti. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:03:23 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:03:25 The meeting name has been set to 'hyper_v'
13:03:40 morning folks
13:03:43 o/
13:03:44 Hi Everybody
13:03:49 Hi
13:03:53 Hello
13:03:58 Hello everyone
13:04:28 o/
13:04:40 hi
13:04:59 sagar_nikam: anybody we are waiting for on the HP side?
13:05:17 no, everybody is present
13:05:21 we can start
13:05:47 cool, primeministerp is a bit late, everybody else is here
13:05:53 so
13:06:00 #topic mitaka patches
13:06:42 #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
13:06:45 based on ^
13:07:05 the current 3 top patches are
13:07:08 #link https://review.openstack.org/#/c/237593/
13:07:16 #link https://review.openstack.org/#/c/175479/
13:07:24 #link https://review.openstack.org/#/c/246298/
13:07:49 sagar_nikam: you asked me about OVS, the 3rd one is one of them
13:08:29 this one? https://review.openstack.org/#/c/246298/
13:08:33 got a review yesterday from Nithin, who works with us in the OVS Hyper-V driver community
13:08:40 that's not networking
13:08:55 https://review.openstack.org/#/c/140045/
13:09:10 #link https://review.openstack.org/#/c/179727/
13:09:14 this one
13:09:43 my bad, I copied the next 3 patches :)
13:09:44 yes
13:09:50 that is the correct one
13:10:04 so, again, the 3 patches are:
13:10:15 #link https://review.openstack.org/#/c/237593/
13:10:21 Sonu: this is of higher priority, can you give details?
13:10:23 #link https://review.openstack.org/#/c/184038/
13:10:36 #link https://review.openstack.org/#/c/179727/
13:11:09 the only networking-related one is #link https://review.openstack.org/#/c/179727/
13:11:22 what about https://review.openstack.org/#/c/140045/ ?
13:11:26 this is a priority for us too. Since this code is not in the driver, either we get it in Nova or we need to cherry-pick it in a fork
13:12:03 so the whole OVS thing is something we want to get in ASAP, based on Nova's review bandwidth
13:12:09 Sagar: for us to consume it, we need it in Nova.
13:12:42 I can add some details about the patch that Sonu is referring to
13:12:47 Sonu: those are the 3 top patches, there's only one for OVS, as stated above
13:12:48 Sonu: fine, got it
13:13:12 atuvenie_: please go ahead
13:13:32 so, it's work in progress. It doesn't account for live migration
13:13:43 by the end of the day there is going to be a new patchset
13:14:06 it's tested on compute-hyperv and I will cherry-pick it and test it on nova as well
13:14:16 atuvenie_: there was some refactoring from Kilo / Liberty related to live migration, if I recall correctly
13:15:14 sagar_nikam Sonu: anything you'd like to add on this patch?
13:15:31 where will HP consume the content of this patch from?
13:15:36 yes, when is live migration support planned?
13:15:40 from openstack/nova or compute-hyperv?
13:16:34 Sonu: if it does not get merged in nova, we cherry-pick the patch in review
13:16:42 fine
13:17:10 Sonu: we don't get your comment in the review
13:17:11 alexpilotti: my question on live migration
13:17:27 sagar_nikam: atuvenie_ said she's uploading a new patch
13:17:29 regarding live migration, if we apply OVS flow rules on the br-int bridge, will live migration work as expected?
13:17:49 ok, that supports live migration. got it
13:17:54 Sonu: of course
13:18:11 Sonu: I saw you left a -1 on the patch, but all you have is a question
13:18:26 nope
13:18:27 Sonu: in general, if you have questions please leave a neutral comment.
13:18:37 I have given a code comment
13:19:15 atuvenie_: did you see Sonu's comment?
13:19:58 Sonu: I totally don't get what you mean
13:20:08 yes, I don't think the cache should be moved, but investigating options before answering
13:20:12 with "Not a good idea to put the caching requirement on client. If the implementation demands a single instance, it must be handled in this function."
13:20:44 this is a cache of classes implementing the behaviours by vif type
13:20:53 the vif driver instance need not be maintained in a cache in vmops.
13:21:24 could you explain why not? :)
13:21:50 _get_vif_driver() is a form of factory method
13:22:06 this is just to avoid loading the same class over and over all the time
13:22:11 and a factory method generally defines when to create a new instance etc.
13:22:27 not necessarily
13:22:35 calling a factory method can still return a single instance, provided the factory caches it.
13:22:39 those drivers are stateless
13:23:06 there's no reason to create new instances all the time
13:23:41 that said, it's a relatively minor performance improvement
13:23:45 there's a comment:
13:23:49 # with instantiated classes will cause tests to fail on
13:23:56 # non windows platforms
13:25:06 Please do comment on my review comment, and I shall take appropriate action.
13:25:08 my main reason for not moving the cache would be that for live migration I have to call the driver post_start method. That would imply importing the vif, which would mean I have two copies of the cache
13:25:13 hmm, looks like adelina dropped off, waiting for her to join back
13:26:08 I'm back. comment above
13:26:33 hmm, please do put the same comment in the review, I shall take a look at it.
13:26:39 ok
13:26:51 alexpilotti: on the earlier question by Sonu: OVS flows on br-int, will it work in the cluster driver as well?
13:26:56 Sonu: did you ever run this code?
13:27:13 I am running OVS and Hyper-V with VLAN and security groups enabled
13:27:42 sagar_nikam: we already said that whatever we do in this area will support the current and planned feature sets
13:27:49 basically we are trying to check how it works with live migration triggered by Nova as well as triggered by the failover cluster manager
13:27:53 this includes live migration and clusters of course
13:27:58 oh, good
13:28:00 ok
13:28:07 that's great. I will try that and let you know.
13:28:28 Sonu: you had a question on security groups
13:28:38 is that answered?
13:28:43 Sonu sagar_nikam: you saw that we are rebasing the cluster BP patches?
13:29:07 yes. br-int will host all the security group rules for Hyper-V using the OVS firewall driver
13:29:10 alexpilotti: we added review comments on it yesterday
13:29:44 and as per alexpilotti, live migration will migrate these rules as well, as was the case with the native HV vSwitch
13:29:55 Sonu: what OVS firewall driver? the conntrack based one?
13:30:13 live migration will be done by the Failover Cluster Manager.
13:30:13 you mean live migration triggered by the failover cluster?
13:30:23 will also migrate these rules
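
As an aside on the _get_vif_driver() exchange above (13:19-13:26), here is a minimal sketch of the cached-factory pattern being debated, assuming illustrative class names and a hypothetical registry; this is not the code under review in https://review.openstack.org/#/c/179727/.

    # Minimal sketch of a cached vif driver factory (illustrative only;
    # the class names and registry are assumptions, not the patch code).

    class HyperVNeutronVIFDriver(object):
        """Stateless driver for the 'hyperv' vif type."""
        def plug(self, instance, vif):
            pass

    class HyperVOVSVIFDriver(object):
        """Stateless driver for the 'ovs' vif type."""
        def plug(self, instance, vif):
            pass

    # The map holds classes, not instances, so merely importing the module
    # does not instantiate Windows-only code (the concern behind the
    # "tests fail on non windows platforms" comment quoted above).
    _VIF_DRIVER_CLASSES = {
        'hyperv': HyperVNeutronVIFDriver,
        'ovs': HyperVOVSVIFDriver,
    }

    # Instances are created lazily and cached, one per vif type: the
    # factory still decides when to create an instance, it just creates
    # each stateless driver at most once.
    _VIF_DRIVER_CACHE = {}

    def _get_vif_driver(vif_type):
        driver = _VIF_DRIVER_CACHE.get(vif_type)
        if driver is None:
            driver = _VIF_DRIVER_CACHE[vif_type] = _VIF_DRIVER_CLASSES[vif_type]()
        return driver
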
13:31:11 For our ESX-based solution, we have an OVS firewall driver with connection tracking using learn flows. We are evaluating the driver with OVS Hyper-V
13:31:50 Sonu: you might want to wait for conntrack to be implemented in Hyper-V for this :)
13:32:35 alexpilotti: this is the patch we reviewed and gave some comments on: https://review.openstack.org/#/c/199037/
13:33:13 don't see any rebase of it yet
13:33:41 were you referring to some other patch when you mentioned the cluster patch was rebased?
13:34:27 * alexpilotti checking
13:35:07 your colleague put a -1 asking questions
13:35:54 as a general rule, if you don't want to add further delays on reviews, it's better to put neutral reviews when you don't understand how things work :)
13:36:30 is snraju taking part in the meeting by any chance?
13:36:38 no, i think what he meant was: how will the DB update work, and also if the glance image gets downloaded to the CSV and concurrent downloads happen on 2 hosts in a cluster for the same image, things will not work
13:37:02 i'm going to answer the comments today. as for the rebase, the cluster utils will have to be submitted to os-win
13:37:17 sagar_nikam: yeah, it won't be a problem
13:37:33 claudiub_: thanks
13:37:53 also, any plans for supporting multiple CSVs?
13:37:53 sagar_nikam: there is a lock for the image path, so the same image won't be downloaded twice
13:37:58 sagar_nikam: CSV itself, being a shared resource, is not handled properly by Nova ATM
13:38:05 we already talked about this
13:38:40 ok, the lock makes sense
13:38:50 last time I synced with claudiub_ about this, jaypipes AFAIK said he'd like to implement support for similar cases
13:38:57 support for multiple CSVs?
13:39:15 is that planned?
13:39:17 the idea is that the CSV storage will be seen as a single storage by all Nova nodes in the cluster
13:39:30 agree
13:39:42 say that you have 1000GB on the CSV volume and that we have 4 hosts
13:39:43 but the host can also have multiple CSVs
13:39:51 each of them will report 1000GB
13:40:15 sagar_nikam: multiple CSV support for now is just a secondary extension
13:40:16 which i think is fine. any issue with it?
13:40:45 sagar_nikam: of course: the scheduler thinks that there's 4 * 1000 GB of space
13:41:37 at the same time, when we deploy an instance with, say, a 100GB disk flavor
13:41:43 ok
13:41:45 agree
13:41:56 all nodes will see 900GB free after allocating the space
13:42:17 let's now get to the point where the cluster has 50GB
13:42:34 Nova sees 4 * 50 GB
13:42:54 but if we try to spin up 4 instances with 40GB disks it will fail
13:43:10 although the scheduler will think it's perfectly fine :)
13:43:31 makes sense?
13:43:43 the vmware cluster driver also has the same issue, though it is for CPU and memory
13:43:58 got it. so what is the fix planned?
13:44:15 you are asking me like we own Nova :)
13:44:33 this is a Nova problem, not a driver problem
13:44:46 ok, you mean the fix is outside of the driver?
13:44:48 we need a BP at the Nova resource tracker level
13:44:53 ok
13:45:01 sagar_nikam: lolz
13:45:28 sagar_nikam: as written above:
13:45:45 ok
13:45:59 jaypipes said he wanted to take a stab at it
13:46:08 but that was at the last midcycle meeting
13:46:14 alexpilotti: https://review.openstack.org/#/c/225546/1
13:46:15 we don't have updates
13:46:41 will it stop the cluster patch from getting merged?
13:46:46 alexpilotti: I am in the process of pushing a decomposition of that spec into three smaller chunks.
13:46:47 jaypipes: sweet, tx!!
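
As an aside on the image path lock mentioned at 13:37:53, here is a hedged sketch of how a per-image lock can serialize concurrent downloads, using oslo.concurrency, which Nova's synchronized helpers build on. The function name, the lock naming scheme, the hypothetical _download_from_glance helper, and the choice of keeping the lock file next to the image (so that hosts sharing the CSV also share the lock) are all assumptions, not the actual compute-hyperv code.

    # Hedged sketch (not the actual compute-hyperv code) of a per-image
    # download lock based on oslo.concurrency.
    import os

    from oslo_concurrency import lockutils

    def _download_from_glance(context, image_id, image_path):
        # Hypothetical helper standing in for the real Glance fetch.
        pass

    def fetch_image(context, image_id, image_path):
        lock_name = 'image-%s' % os.path.basename(image_path)
        # external=True makes this a file-based, inter-process lock;
        # placing the lock file on the shared volume (an assumption here)
        # would serialize downloads of the same image across hosts, while
        # different images can still be fetched in parallel.
        with lockutils.lock(lock_name, external=True,
                            lock_path=os.path.dirname(image_path)):
            if not os.path.exists(image_path):
                _download_from_glance(context, image_id, image_path)
            return image_path
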
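To make the capacity arithmetic above concrete, here is a toy illustration using the numbers from the discussion. It is not Nova code; it merely models the accounting problem that the generic resource pools spec linked just below is meant to address.

    # Toy model of the shared-CSV accounting problem (numbers from the
    # conversation above; illustrative only, not Nova code).

    csv_free_gb = 50                  # actual free space on the shared CSV
    hosts = ['node1', 'node2', 'node3', 'node4']

    # Each Nova node reports the shared volume as if it owned it outright...
    reported_free = {host: csv_free_gb for host in hosts}

    # ...so the scheduler believes there is 4 * 50 = 200 GB available.
    scheduler_view_gb = sum(reported_free.values())
    print(scheduler_view_gb)                      # 200

    # Spinning up 4 instances with 40GB disks passes the scheduler's check
    # (160 <= 200) but exceeds the real pool (160 > 50), so the boots fail.
    requested_gb = 4 * 40
    print(requested_gb <= scheduler_view_gb)      # True  (scheduler's view)
    print(requested_gb <= csv_free_gb)            # False (reality)
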
13:47:18 alexpilotti: here is the one most relevant to you: https://review.openstack.org/#/c/253187/2/specs/mitaka/approved/generic-resource-pools.rst
13:47:29 sagar_nikam: no it won't, it will just be a note
13:47:45 jaypipes: Hi, nice to see you
13:48:07 sagar_nikam: the patch that jaypipes just linked will help us in a ton of situations
13:48:24 jaypipes: thank you
13:48:36 np guys
13:48:41 including Gen2 VMs, RemoteFX and in general all the compute resources that the driver exposes
13:48:47 jaypipes: thank you
13:48:54 jaypipes: do you have a timeline for this to merge?
13:49:13 jaypipes: as in N, O...
13:49:29 I guess Mitaka is quite impossible
13:49:38 alexpilotti: sorry, I do not know the answer to that. we will work on it in Mitaka. no idea when it would merge though
13:50:15 jaypipes: fair enough, please let me know if you need resources to work on it / review / etc
13:50:53 jaypipes: among the features outside of the driver, it's possibly our main pain point
13:51:26 sagar_nikam: any other questions on this?
13:51:40 no
13:51:49 ok, changing topic
13:51:53 #topic FC
13:52:17 things are progressing very well
13:52:24 lpetrut: want to add something?
13:53:33 alexpilotti: sure, the implementation for passthrough-attached FC disks is almost ready
13:54:23 the issue which we discussed last time, which required help from the MS Storage team, is it resolved?
13:55:00 do you mean the issue we had in retrieving the physical disk path for a specific volume?
13:55:17 yes
13:55:40 that got solved by using native APIs instead of WMI
13:55:57 yes, there seemed to be an issue with the WMI API, so I've rewritten that using the hbaapi functions. The only issue with that is that it did not work remotely, so I had to do a small refactoring on live migration
13:56:01 so that we don't break that
13:56:26 ok
13:57:01 one more point made by Kurt, on using os-brick
13:57:10 next, I'll take care of the other scenario, where we expose virtual HBA ports to the instance directly; that should be the last step
13:57:32 time is almost up
13:57:44 #topic open discussion
13:57:47 sure, but for the beginning, I was considering merging this in Nova, next to the other volume drivers we currently have, and moving those out as a later effort
13:58:13 alexpilotti: Regarding the Microsoft official support statement for the OVS extension in Hyper-V, I did not get any response from MSFT. Can this be done for OVS 2.4?
13:58:29 Just checking.
13:58:47 primeministerp: would you like to answer this one?
13:59:26 Sonu: I won't make official statements on behalf of MSFT :-)
13:59:37 sure. Not a problem.
13:59:56 Sonu: your question on building OVS on Windows
14:00:02 Sonu: what I can tell you is that the current plan is to submit 2.5 to WHQL
14:00:06 can you check that?
14:00:12 this does not mean any support from MSFT
14:00:52 Cloudbase supports, and will continue to support, OVS on Hyper-V commercially
14:01:04 I get that. And when is OVS 2.5?
14:01:11 tentatively? FY16?
14:01:20 for the rest, the upstream code is OSS, so anybody can take it and compile it
14:01:36 I get it.
14:01:59 Sonu: we're waiting for the OVS TPL to branch
14:02:16 so it will happen very soon
14:02:28 Thanks.
14:02:35 more details in the OVS community: we have a weekly meeting on Tuesdays there
14:02:42 we need to start the neutron_qos meeting guys :)
14:02:45 hi
14:02:53 bye everyone.
14:02:53 #endmeeting