13:03:00 #startmeeting hyper-v
13:03:00 Meeting started Wed Jan 27 13:03:00 2016 UTC and is due to finish in 60 minutes. The chair is alexpilotti. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:03:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:03:03 The meeting name has been set to 'hyper_v'
13:03:05 Hi
13:03:08 o/
13:03:10 Hi
13:03:18 morning folks
13:03:24 o/
13:03:29 o/
13:03:33 o/
13:03:45 kvinod: sonu and sagar are joining us today?
13:03:47 hi all
13:04:07 alexpilotti: Sagar will join in 10 min and Sonu will not be able to make it
13:04:12 Hi All
13:04:22 oh Sagar is in
13:04:23 just a bit late
13:04:48 hi all!
13:04:54 #topic FC
13:05:05 sagar_nikam: any updates on the reviews by any chance?
13:05:47 alexpilotti: Kurt and Hemna are at the Cinder midcycle meetup
13:05:54 hence reviews are delayed
13:06:07 will ping them again
13:06:14 tx
13:07:04 about midcycles, claudiub is at the Nova one. Any updates already worth sharing here?
13:07:22 hello
13:07:33 a loooot of talk about the scheduler and resource pools basically
13:07:47 #topic Nova midcycle updates
13:07:56 apparently, in the future the scheduler might become a separate project itself
13:08:16 wow, right about time, if you ask me :)
13:08:24 as there is talk about networking resource pools and volume resource pools, basically cinder and neutron using the same scheduler
13:08:47 sweet
13:09:01 anyways, using jaypipes' resource pools, the storage problem on the cluster will be solved.
13:09:07 any target release? e.g. N or O?
13:09:32 but the thing is, it's a 7-part spec, so for M they say the first 3 will be implemented
13:09:48 the rest will be for N.
13:10:03 ok, so our nephews will use it, got it :)
13:10:50 claudiub: are those first 3 realistically merging in M?
13:11:10 any discussion on drivers ? separate project for nova drivers ? like we have in neutron
13:11:13 other than that, a lot of discussion on libvirt's live migration; they are going to move away from the current implementation with ssh, which is slow
13:11:33 alexpilotti: yeah, that is the plan
13:11:58 claudiub: hyper-v has by far the best live migration, hopefully that won't require big changes on our side
13:12:03 alexpilotti: but i don't think those 3 will be enough to solve the problem, as the last part is the scheduler part, which is the most important for solving this issue
13:12:25 * alexpilotti sobs
13:12:37 as for splitting the drivers, no, same thing, unfortunately
13:12:48 ok
13:12:50 although, there's a new project on the block, called os-vif
13:12:50 claudiub: any other relevant updates from the midcycle front?
13:13:07 which is going to refactor the vif stuff from nova and neutron into a separate project
13:13:26 so, I think any ovs vif plugging we have in nova would go there.
13:13:42 ok
13:13:53 from which release ? N or O ?
13:14:02 as for other updates, specs for N are open, going to try to get live-resize approved.
13:14:51 sagar_nikam: you mean os-vif? there are still a couple of issues that have to be addressed before it officially replaces the vifs in nova
13:15:07 ok
13:15:16 so, it's a work in progress at the moment
13:16:08 as for other things, the second day has been mostly cross-project topics, with neutron and cinder mostly
13:16:53 i guess one thing that we'll have to take care of in the future is the neutron hyper-v ml2 driver
13:17:13 as it will have to be changed to return an os-vif versioned object
13:17:33 from what I understood.
13:17:59 questions?
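For context on the "os-vif versioned object" mentioned above: the idea is that VIF data is carried as an object tagged with a version, so producer and consumer can negotiate formats. The sketch below is purely illustrative with a hypothetical class name; it is not the actual os-vif or oslo.versionedobjects API, just a stdlib approximation of the pattern.

```python
# Hypothetical sketch of a versioned VIF object of the kind the hyper-v
# ml2 driver might one day return. HyperVVIF and its fields are invented
# for illustration; the real API lives in os-vif / oslo.versionedobjects.
from dataclasses import dataclass, asdict


@dataclass
class HyperVVIF:
    """Illustrative stand-in for an os-vif style versioned object."""
    VERSION = "1.0"  # class-level version tag, not a serialized field

    id: str
    address: str
    plugin: str = "hyperv"  # which plugging backend should handle this VIF

    def obj_to_primitive(self):
        # Versioned objects serialize to a primitive dict tagged with
        # their version, so both sides can check compatibility.
        return {"versioned_object.version": self.VERSION,
                "versioned_object.data": asdict(self)}


vif = HyperVVIF(id="port-uuid", address="fa:16:3e:00:00:01")
prim = vif.obj_to_primitive()
```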
13:19:33 if there are no other questions, I'd go with the next topic
13:19:34 none from my side
13:19:37 cool
13:20:02 #topic networking-hyperv scalability
13:20:35 the pymi + threading patches are pretty good to go
13:20:55 ok
13:21:05 we're keeping the original BP that kvinod created
13:21:22 since anyway it does not refer to implementation details
13:21:24 tested for all scenarios ? will we now have a new version of pyMI ?
13:21:33 cool
13:22:15 what I understand from claudiub is that the existing rst file will be modified with the implemented approach
13:22:15 sagar_nikam: we are testing all the scenarios that we already discussed
13:22:34 ok
13:22:41 kvinod: yep, claudiub is rewriting the spec to match the current implementation
13:22:47 kvinod: yeah, I knew I was forgetting something last night. :(
13:23:25 sagar_nikam: do you think you could test the patches in the meantime?
13:23:38 we're also testing sonu's patch
13:23:48 I have already conveyed this in the review link but wanted to bring this up again
13:23:55 kvinod: thala: testing, can you add these patches
13:24:15 that way we can check the perf with pyMI and the patches
13:24:34 sagar_nikam: sure, will ask thala, he is on leave till Thursday
13:25:02 kvinod: ok, please let him know that the patches need to be included
13:25:22 his tests are on liberty, so the patches need to be liberty based
13:25:43 kvinod: some work needs to be done before it is given to thala
13:25:47 alinb tested it, found some issues, but most probably due to the fact that sonu's patch was not there on the neutron side
13:26:00 my comment on native threads was about using the existing patches which were already committed rather than creating a new one
13:26:25 so we'll provide a complete update ASAP, as soon as we test with the updated neutron master
13:26:56 kvinod: which patches?
13:27:30 alexpilotti: thala will be testing liberty + pyMI, since that is what we requested of him last meeting. does the same plan still hold today ?
13:28:13 claudiub: we can use networking-hyperv master on liberty, correct?
13:28:15 alexpilotti: https://review.openstack.org/#/c/235793/
13:28:42 we could have used this one rather than creating a new one
13:29:00 alexpilotti: Don't really think so, requirements are different, so it could conflict with nova requirements.
13:30:03 kvinod: the implementation differs
13:30:39 yes, agreed, but the concept and a few file contents were the same
13:30:43 claudiub: since they are testing on liberty, does it make sense to prepare a backport now?
13:31:20 we could have uploaded the new patches on top of the existing one with the new implementation approach
13:31:39 backport what?
13:31:56 sorry, my attention is split in 3 directions. :(
13:31:57 the changes were already in master
13:32:04 the native threads ones?
13:32:14 the liberty native threads patches
13:32:54 well, the native threads implementation is fairly straightforward and simple, we could.
13:33:40 claudiub: agreed, the implementation is straightforward and good
13:33:43 since HP already has a scheduled test run, we can add it on top, w/o having to wait for another test run
13:34:38 sagar_nikam: when is thala going to run the tests?
13:35:58 alexpilotti: probably on friday, we should be able to delay by one or two days
13:36:19 is monday a good day to start the tests ? india monday morning
13:36:21 alexpilotti: Thala scheduled the test and went on leave, we will log in to his setup and collect the results
13:37:08 kvinod: can we add some bits to networking-hyperv or is it "sealed"?
13:37:13 kvinod: the next tests -- with the patches, can they be re-run on monday ? setup available on monday
13:37:22 we did not have his setup details so we were not able to collect the results
13:38:42 alexpilotti: Anyways the test might have completed by now, if required we can bring in the changes and schedule the test again
13:39:02 Probably on Friday or Monday
13:39:05 kvinod: that will be good
13:39:28 kvinod: some Hyper-V Windows updates came in very recently that apparently fixed the VLAN issue
13:39:38 alexpilotti: are there any patches which you want to apply for the next run of tests ?
13:40:01 alinb has to come back with details, in case it'd be very important to have it in your setup
13:40:21 sagar_nikam: the networking-hyperv liberty backports, if we have enough time to prepare them :-)
13:40:27 alexpilotti: so apart from the native threads patch you want to pull the recent windows update?
13:40:49 kvinod: yes, the goal is to always run on 100% updated OSs
13:40:58 ok
13:41:06 alexpilotti: we should be able to wait for some time, by when will those patches be available ?
13:41:07 there were a few prereqs that I sent to thala
13:41:35 alexpilotti: ok
13:41:59 claudiub or alinb will backport them, by early next week we can have them
13:42:35 alexpilotti: so thala already knows which windows update to pull and how
13:42:35 kvinod: i think we should delay thala's next tests till early next week
13:42:38 depends really on when you want to run the tests
13:43:04 we run scenario tests very often, so it does not really matter
13:43:27 i think the next tests should have the latest windows updates and the patches
13:43:33 sagar_nikam: then in that case let us know when to trigger the test
13:43:34 on the other side, since your planning takes more time, we prefer to be flexible around your scheduling
13:44:09 kvinod: and the high perf powercfg scheme
13:44:18 kvinod: let's wait to hear from alexpilotti when the patches are ready and then use them
13:44:49 alexpilotti: sagar_nikam: fine
13:44:53 sagar_nikam: cool!
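The "native threads" patches discussed in this topic replace eventlet green threads with real OS threads, so blocking WMI/MI calls made through PyMI don't stall the whole agent. A minimal stdlib sketch of that worker-pool idea follows; `treat_port` and `process_ports` are hypothetical names standing in for the agent's real per-port work, not the actual networking-hyperv code.

```python
# Sketch of fanning port processing out to native threads. Each worker
# pulls port IDs off a queue and runs the (potentially blocking) per-port
# work, so slow calls on one port don't serialize the rest.
import queue
import threading


def treat_port(port_id):
    # Placeholder for the blocking per-port WMI/MI work in the real agent.
    return port_id


def process_ports(port_ids, workers=4):
    work = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                port_id = work.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            out = treat_port(port_id)
            with lock:
                results.append(out)

    for pid in port_ids:
        work.put(pid)
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The queue-then-join shape keeps the example deterministic: all work is enqueued before any worker starts, so `get_nowait` raising `Empty` reliably means the batch is done.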
13:45:09 next topic
13:45:18 alexpilotti: as soon as the patches are ready, let us know
13:45:22 #topic Hyper-V cluster
13:45:40 alexpilotti: wanted to know when you are planning to merge the enhanced RPC changes done by Sonu
13:46:05 kvinod: as soon as we have test results from alinb
13:46:15 the blueprint spec is created and the commit message is modified
13:46:31 ok, in the meantime I will upload the .rst file
13:46:36 I wrote above that we had issues, but most probably due to the missing neutron patch that just merged
13:46:47 ok
13:46:52 kvinod: no need for an RST for this one
13:47:15 it's quite straightforward
13:47:16 alexpilotti: fine, will not upload it
13:47:24 i mean the rst file
13:47:36 thanks
13:47:47 kvinod: also, unit tests should be added on the enhanced rpc patch.
13:47:47 back to the topic
13:48:04 sonu wrote that he's adding them
13:48:40 claudiub: I added one to test the newly added function and committed
13:49:00 folks: we have a different topic now :-)
13:49:17 claudiub: please review and let me know if anything additional is required
13:49:35 alexpilotti: how is the cluster driver progressing ?
13:49:40 here we go
13:49:40 sorry, please continue with the topic
13:49:46 thanks
13:50:13 atuvenie is porting it from the original kilo implementation to mitaka
13:50:23 some important areas
13:51:02 volumes: ensuring that passthrough volumes are logged in on the target host during a failover is quite tricky
13:51:22 the first implementation will support SMB3 volumes
13:51:46 which is also MSFT's recommended approach
13:52:29 for iSCSI / FC, we're investigating how to trigger an event on the target and block the VM start until all LUNs are mounted
13:52:57 in short, replicating what happens for live migration
13:53:50 on the other side, the interaction with Nova became easier since we don't have to explicitly call into the conductor
13:54:11 thanks to the new versioned objects, this becomes more transparent
13:54:46 the patches will be ready in the short term (February)
13:55:23 i got disconnected
13:55:26 i am back
13:55:35 welcome back
13:55:37 ;)
13:55:46 alexpilotti: what about iscsi and FC volumes ?
13:56:02 sagar_nikam_: talked about it while you were disconnected :-)
13:56:04 for the cluster driver
13:56:08 sagar_nikam_: for iSCSI / FC, we're investigating how to trigger an event on the target and block the VM start until all LUNs are mounted
13:56:08 ok
13:56:28 sagar_nikam_: in short: SMB3 now, iSCSI / FC later
13:56:42 ok
13:57:03 later as in Mitaka or N ?
13:57:11 in compute-hyperv ?
13:57:13 we also need to keep the patches contained if we want to hope for merging them in N or O
13:57:35 they will go straight into compute-hyperv in Mitaka of course
13:57:58 and we might even think about a Liberty backport if customers ask, but that's a separate topic
13:58:19 alexpilotti: in Mitaka, will compute-hyperv have support for iscsi and FC for the cluster driver ?
13:58:28 we're almost out of time
13:58:44 sagar_nikam_: depends if we find a feasible solution
13:58:50 ok
13:58:57 sagar_nikam_: the issue is with how the clustering works
13:59:14 ok
13:59:21 the VM gets migrated before we can trigger an event and mount the LUNs on the target
13:59:25 let's discuss in detail in the next meeting
13:59:31 resulting in I/O issues
13:59:37 we are almost out of time now
13:59:43 we have a few ideas in an embryonic state
13:59:52 can we have this topic as the first topic next meeting ?
13:59:58 which require some C/C++ native components on the cluster side
14:00:11 sagar_nikam_: sure
14:00:24 ok guys, time's up!
14:00:31 thanks for joining!
14:00:33 thanks, bye
14:00:34 bye
14:00:37 #endmeeting
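The iSCSI/FC failover gating discussed in the cluster topic (block the VM start on the target until all LUNs are mounted, since the VM migrates before the LUNs can be attached) could be sketched as a bounded polling wait. This is only an illustration of the idea still "in an embryonic state" in the meeting; `is_mounted` is a hypothetical placeholder for whatever real LUN check the driver would use.

```python
# Sketch: don't let a failed-over VM start until every LUN it needs
# reports mounted on the target host, with a timeout so a broken LUN
# can't block forever.
import time


def wait_for_luns(lun_paths, is_mounted, timeout=300, interval=1.0):
    """Poll until every LUN is mounted or `timeout` seconds elapse.

    Returns True when the VM start can proceed, False on timeout.
    `is_mounted` is a callable taking one LUN path (hypothetical hook).
    """
    deadline = time.monotonic() + timeout
    pending = set(lun_paths)
    while pending:
        pending = {p for p in pending if not is_mounted(p)}
        if not pending:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
    return True


# usage: gate the VM start on the failover target, e.g.
#   if not wait_for_luns(luns, lun_is_mounted):
#       raise RuntimeError("LUNs not ready, refusing to start VM")
```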