16:00:15 #startmeeting hyper-v
16:00:16 Meeting started Tue Jun 11 16:00:15 2013 UTC. The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 The meeting name has been set to 'hyper_v'
16:00:26 hi everyone
16:00:31 hi ...
16:00:40 hi everyone
16:00:45 hi tavi
16:00:49 pnavarro: hi pedro
16:01:16 hmm, was hoping lluis would show up
16:01:31 let's wait a couple of minutes before we get started
16:02:17 alexpilotti: i'm waiting a couple in hopes that lluis will join
16:02:30 morning!
16:02:35 hehe
16:02:41 morning!
16:03:07 hello!
16:03:13 pnavarro: hola!
16:03:22 pedro!
16:03:27 ok
16:03:34 that was a couple
16:03:48 #topic status update wmiv2
16:04:17 alexpilotti: how is the wmiv2 code migration progressing?
16:04:21 we are almost done
16:04:28 the idea is simple:
16:04:36 V2 utils classes inherit from V1
16:04:52 provide the new namespace and different implementations whenever needed
16:05:17 a factory method (like the one we have for volume utils) takes care of instantiating the right one based on the OS
16:05:24 *factory
16:05:25 alexpilotti: so this will allow the backward compatibility for 2008 R2 to remain intact
16:05:35 yes
16:05:49 alexpilotti: perfect
16:05:55 everything that used to work on 2008 R2 will still work
16:06:04 only new features will be applied to V2 only
16:06:07 nice
16:06:11 alexpilotti: although obviously this is there for "legacy" purposes
16:06:51 only
16:06:51 yeah, the idea is to eventually remove the V1 classes
16:07:12 in a few years, which are geological ages in this domain :-)
16:07:19 our goal, to reiterate, is to use 2012 and onward as the default platform for development
16:07:55 ok
16:08:01 most meaningful features, including OpenVSwitch and Ceph,
16:08:08 will be on 2012+
16:08:15 when will the bits be ready for testing?
16:08:20 re: wmiv2
16:08:47 maybe we'll manage to have them up for review this week already
16:08:58 great
16:09:15 Claudiu, one of our guys, is working on it
16:09:38 ok, shall we move on then?
16:09:43 sure
16:09:50 alexpilotti: shared mem?
16:10:20 oki
16:10:30 #topic hyper-v shared memory support
16:10:46 shared memory is a fairly easy thing
16:10:55 but we'll need 2 options
16:11:19 actually it's "dynamic memory"
16:11:29 sorry
16:11:39 np, I was also going on with it :-)
16:11:41 ballooning
16:11:43 yep
16:11:49 re ballooning
16:12:02 so we need one option: "enable_dynamic_memory", defaulting to false
16:12:09 I knew what i was talking about too ;)
16:12:16 primeministerp: is that supported with regular LIS or just on Windows VMs?
16:12:16 and "dynamic_memory_initial_perc"
16:12:26 schwicht: yes
16:12:43 schwicht: in newer releases i believe
16:12:44 schwicht: yep, including Linux now
16:13:06 ok, good .. we had that discussion the other day, and I was not sure about the RHEL 6.4 situation ...
16:13:16 the latter option is needed to tell Hyper-V how much memory to allocate initially, as a percentage
16:14:31 let's say that a flavor states 10GB RAM: if the above option's value is 60, the initial memory will be 6GB
16:14:35 as easy as that
16:15:08 there's of course the risk of overcommitting, and the usual ballooning considerations apply to OpenStack as well
16:15:15 obviously
16:15:28 in particular the scheduler can get confused by the amount of free memory
16:16:14 considering that the host reports to Nova the current free memory, w/o considering VM requirements
16:16:29 any comment on this?
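(A minimal sketch of the two options proposed above, assuming oslo.config-style registration as used in Nova at the time; the option names come straight from the discussion, but the default for dynamic_memory_initial_perc, the helper name and its placement are illustrative assumptions, not existing driver code.)

```python
# Sketch only: the two dynamic memory options discussed above, plus the
# percentage-based startup allocation. Names other than the two options
# proposed in the meeting are hypothetical.
from oslo.config import cfg

dynamic_memory_opts = [
    cfg.BoolOpt('enable_dynamic_memory',
                default=False,
                help='Enable Hyper-V dynamic memory (ballooning) for '
                     'new instances. Disabled by default.'),
    cfg.IntOpt('dynamic_memory_initial_perc',
               default=100,
               help='Percentage of the flavor RAM that Hyper-V should '
                    'allocate as startup memory when dynamic memory is '
                    'enabled.'),
]

CONF = cfg.CONF
CONF.register_opts(dynamic_memory_opts)


def get_initial_memory_mb(flavor_memory_mb):
    """Return the startup memory Hyper-V should allocate for an instance."""
    if not CONF.enable_dynamic_memory:
        return flavor_memory_mb
    return flavor_memory_mb * CONF.dynamic_memory_initial_perc // 100
```

With the example from the meeting, a 10 GB (10240 MB) flavor and a percentage of 60 would yield a 6144 MB (6 GB) startup allocation, while the remaining memory is assigned on demand by Hyper-V's ballooning.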
16:16:41 hmm
16:16:43 i'm thinking
16:17:00 the scheduler issues can be dangerous, leading to cascading failure
16:17:01 it's a very useful feature, but misuse can create huge issues
16:17:19 alexpilotti: exactly
16:17:33 that's why I vote for disabling it by default
16:17:43 +1
16:17:44 > so we need one option: "enable_dynamic_memory", defaulting to false
16:17:58 pnavarro: what's your opinion?
16:18:05 *what
16:18:36 the same functionality exists in VMware... I was thinking about how they are doing that...
16:18:36 * primeministerp pokes pnavarro
16:18:50 in the esxi plugin
16:18:51 pnavarro: VMWare uses shared paging
16:19:11 pnavarro: the implementation differs from the Hyper-V way
16:19:34 since we are using SLAT on Hyper-V, shared pages make little sense
16:20:01 ok, but from an API point of view, you set % and intervals too..
16:20:09 pnavarro: sure
16:20:27 pnavarro: I'm not familiar with their settings, let me take a look
16:21:07 about overcommitment there are some flags to make the scheduler aware
16:22:23 pnavarro: could we theoretically reuse those?
16:22:28 pnavarro: I'm looking at the VMWare driver, but I don't see anything
16:22:47 pnavarro: do you have a link to the code line by any chance?
16:23:58 no, I've used the vmware API in the past, I didn't know how the esxi was using that
16:24:15 I was just wondering...
16:24:46 pnavarro: are you sure it's not the scheduler's overcommit?
16:25:12 well, I'm not sure...
16:25:18 sorry
16:25:24 ok
16:25:26 no worries
16:25:42 pnavarro: ram_allocation_ratio
16:25:54 that's the option name
16:26:52 ok, if any additional ideas come up, please add them as comments to the BP whiteboard
16:27:02 perfect
16:27:15 I was hoping luis would be here
16:27:33 i'll skip the puppet discussion as I need his input
16:27:39 primeministerp: can you please add Ceph to the topics?
16:27:42 o
16:27:43 yes
16:27:53 #topic ceph
16:28:30 alexpilotti: you want to begin or do you want me to start?
16:28:30 So, we're preliminarily discussing the porting of BRD to Hyper-V with Inktank
16:28:43 alexpilotti: RBD
16:28:59 my initial idea was to "simply" port the linux kernel driver to Windows
16:29:22 primeministerp: yep sorry, mixing up acronyms is a passion of mine :-)
16:29:31 alexpilotti: i'm here to keep you honest
16:29:33 ;)
16:29:35 lol
16:29:48 back to RBD,
16:29:49 alexpilotti: although that could totally be taken out of context
16:30:05 ;)
16:30:11 the problem is that the kernel driver is way behind librbd
16:30:26 which is the userspace library used by qemu, for example
16:30:52 for example, cloning is not implemented in the kernel module, only in the userspace lib
16:31:13 in Hyper-V we have no choice, we need a kernel driver
16:31:17 alexpilotti: meaning that it's consumed more via VMs through qemu than by native kernel drivers?
16:31:42 primeministerp: yep, the main business case is KVM
16:32:07 my initial idea is to write a userspace filesystem driver using librbd
16:32:18 the kernel drivers were more for the clustered storage replacement
16:32:44 moving to a full kernel one once Inktank (or the community) brings the kernel module to parity
16:32:56 advantages of this approach:
16:33:11 security: we are in userspace, no blue screen
16:33:39 alexpilotti: i'm assuming it's quicker development too
16:33:41 fast development: we avoid all the kernel debugging nightmares
16:33:47 hehe
16:33:48 yep :)
16:34:12 so are we talking H timeframe?
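(For context on the userspace approach discussed above, a minimal sketch using the existing python-rados / python-rbd bindings to librados/librbd; it only illustrates the kind of functionality, such as cloning, that librbd already exposes and the in-kernel RBD driver lacked at the time. The pool and image names, ceph.conf path, sizes and feature flag are placeholder assumptions.)

```python
# Sketch only: what a userspace RBD consumer gets from librbd, shown via
# the Python bindings. All names/paths below are placeholders.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # pool name is an assumption
try:
    rbd_inst = rbd.RBD()
    # Format 2 + layering (features=1) is required for cloning.
    rbd_inst.create(ioctx, 'base-image', 10 * 1024 ** 3,
                    old_format=False, features=1)

    image = rbd.Image(ioctx, 'base-image')
    try:
        image.create_snap('gold')
        image.protect_snap('gold')
    finally:
        image.close()

    # Clone: implemented in librbd, but not in the in-kernel driver
    # at the time of this meeting.
    rbd_inst.clone(ioctx, 'base-image', 'gold',
                   ioctx, 'instance-disk', features=1)
finally:
    ioctx.close()
    cluster.shutdown()
```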
16:34:20 we can use threads, work in C++, no IRQ dispatch issues, etc etc
16:34:32 disadvantages:
16:34:55 context switches might affect the performance side quite a bit
16:35:02 sorry, I was late
16:35:10 had to prep for HostingCon panel next week
16:35:27 zehicle_at_dell: ok
16:35:31 zehicle_at_dell: and you are?
16:35:41 although all the traffic will go over TCP/IP
16:35:41 zehicle_at_dell: care to introduce yourself?
16:35:57 zehicle_at_dell: nice to see you here :-)
16:36:21 haha
16:36:22 rob
16:36:27 zehicle_at_dell: hi rob
16:36:32 :)
16:36:40 zehicle_at_dell: nice name
16:36:48 zehicle_at_dell: hi Rob
16:36:51 This is Rob Hirschfeld, I'm on the Dell OpenStack team
16:36:58 zehicle_at_dell: we're talking about ceph on hyper-v
16:37:06 Cool!
16:37:25 zehicle_at_dell: alexpilotti is talking about the userspace part being ported to hyper-v
16:37:33 zehicle_at_dell: can you see the previous msgs in the chat?
16:37:42 yy
16:37:50 zehicle_at_dell: I'd like to hear your opinion on this
16:38:38 zehicle_at_dell: but we can talk about it later if you need time
16:38:40 interesting, so are you saying that the Hyper-V Cinder integration cannot use block store?
16:39:04 zehicle_at_dell: only the iSCSI wrapper for now
16:39:14 ok, that should be OK
16:39:31 zehicle_at_dell: fairly ok, not as good as native Ceph of course
16:39:53 *native RBD
16:39:57 the goal would be to have the RBD driver, I'm assuming
16:40:00 client
16:40:01 yep
16:40:08 that's what we're talking about
16:40:56 alexpilotti: move on to ovs?
16:41:04 sure
16:41:15 #topic OpenVSwitch
16:41:16 I just wanted to introduce the topic
16:41:24 (i mean RBD)
16:41:29 alexpilotti: yep
16:41:57 alexpilotti: should we do the same now for ovs, considering your recent announcement?
16:42:01 (closing on RBD, I think it's interesting for us. Will have to reflect on it some more, it's not our first use case)
16:42:13 zehicle_at_dell: I don't know if the news reached you, we're porting OVS to Hyper-V
16:42:41 I think that's awesome
16:43:01 do you have any specific requirements for OVS?
16:43:11 like specific tunnelling protocols, etc
16:44:49 zehicle_at_dell: ^^
16:44:56 alexpilotti: ideally the same ones as the core project
16:44:58 We're looking at GRE tunnels as the integration approach
16:45:16 ok, tx
16:45:39 talking in general about OVS, it's a fairly big effort
16:45:57 lots of kernel code to be ported (including GRE and VXLAN)
16:46:15 lots of userspace posix -> Windows migration work
16:46:35 we're ramping up a "swat" team for it
16:46:49 what features of OVS are being planned?
16:47:02 for example, is VLAN support higher priority than GRE?
16:47:24 zehicle_at_dell: well, VLAN already works with OVS on Hyper-V now
16:48:21 zehicle_at_dell: for the rest: OpenFlow, ovsdb, all userspace tools, tunnelling (GRE, VXLAN)
16:48:41 those are roughly the main features involved
16:49:20 alexpilotti: all the usual suspects
16:49:25 ;)
16:49:31 among the goals, we want to use the Quantum OVS agent on Hyper-V
16:49:46 which means that the CLI tools must be completely ported as well
16:50:00 do you have a feel for the relative priorities?
16:50:16 because GRE would be high on our list, since we've built infrastructure to work with that
16:50:59 zehicle_at_dell: is the switch integration stronger now?
16:51:02 zehicle_at_dell: we're collecting customer-based priorities now
16:51:16 good question -> the choice for GRE allowed us to bypass that for now
16:51:34 zehicle_at_dell: good to know
16:51:40 so, it was a good first pass because it was easier to get adoption started
16:51:50 VLAN would have required more switch config work
16:52:03 zehicle_at_dell: understood
16:52:06 zehicle_at_dell: GRE is anyway already ahead of VXLAN in our Gantt chart
16:52:10 downside is that GRE will have performance impacts
16:52:13 perfect
16:52:36 VXLAN introduction in OVS is very recent
16:52:55 and I expect some big changes in the tunnelling world soon
16:53:19 so it's hard to predict now how much traction VXLAN will get in Quantum
16:53:45 zehicle_at_dell: do you have specific topics to add?
16:53:46 (I keep on calling it Quantum, if you don't mind for now) ;-)
16:54:08 pfkaq
16:54:18 lol
16:54:19 lol !
16:54:48 alexpilotti: any final words on ovs?
16:55:12 for the current working set, you said VLAN was enabled?
16:55:17 against Grizzly?
16:55:17 yep
16:55:19 yep
16:55:20 zehicle_at_dell: yes
16:55:37 that was our goal
16:55:52 ok, I'll need to see where we stand on VLAN. I think we've got it in place, but it's not our primary test path
16:56:15 using the Hyper-V Quantum agent with 100% OVS compatibility was the main use case
16:56:24 on VLAN, I mean
16:56:49 primeministerp: should we move to the next topic?
16:56:54 alexpilotti: yes
16:57:00 alexpilotti: is there one?
16:57:20 alexpilotti: puppet bits are on hold until luis and i sync
16:57:21 if nexttopic == NULL -> endmeeting() ;-)
16:57:26 ok
16:57:35 zehicle_at_dell: anything you want to add?
16:58:18 looks like frank left
16:58:19 ok
16:58:24 #endmeeting
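(For reference on the GRE integration approach discussed in the OpenVSwitch topic: a minimal sketch of how a GRE tunnel port is added to an OVS bridge with ovs-vsctl, shelling out from Python much like the Quantum OVS agent does on Linux. Bridge name, port name and remote IP are placeholders, and it assumes a host with Open vSwitch installed and sufficient privileges; this is the stock Linux workflow the Hyper-V port would need to reproduce.)

```python
# Sketch only: create a tunnel bridge and a GRE port via ovs-vsctl.
# Names and the remote IP are placeholders.
import subprocess

def ovs_vsctl(*args):
    """Run an ovs-vsctl command and raise on failure."""
    subprocess.check_call(('ovs-vsctl',) + args)

ovs_vsctl('--may-exist', 'add-br', 'br-tun')
ovs_vsctl('--may-exist', 'add-port', 'br-tun', 'gre-1',
          '--', 'set', 'interface', 'gre-1',
          'type=gre', 'options:remote_ip=192.0.2.11')
```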