16:01:05 #startmeeting hyper-v
16:01:06 Meeting started Tue May 21 16:01:05 2013 UTC. The chair is primeministerp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:10 The meeting name has been set to 'hyper_v'
16:01:17 Hi Everyone
16:01:22 hi!
16:01:29 hi
16:01:32 hi
16:01:35 wow
16:01:36 hello
16:01:39 lots of new folk
16:01:50 great
16:01:51 hi
16:01:55 ociuhandu: where's alex?
16:02:42 let's wait a minute for those who are late
16:02:52 luis_fdez: thanks for the pull requests
16:03:05 hi, this is Xiao Pei from China
16:03:05 luis_fdez: we should talk about this more offline
16:03:13 liuxpei: hello
16:03:16 primeministerp: entering now
16:03:17 hi all
16:03:20 ociuhandu: great
16:03:23 primeministerp, ok, I have some suggestions/ideas to discuss
16:03:31 luis_fdez: we need to coordinate
16:03:42 luis_fdez: so you can understand what I have already done
16:03:52 luis_fdez: and what's not included in those bits yet
16:03:53 hi there!
16:03:58 alexpilotti: hi alex
16:04:01 full house today
16:04:05 hi alexpilotti !
16:04:05 primeministerp, ok
16:04:18 ok let's begin then
16:04:30 liuxpei: hi, thanks for joining us!
16:04:41 #topic open issues
16:04:56 so there have been some new bugs
16:05:00 thanks, I will try to join as much as I can :)
16:05:07 I can start on this one
16:05:14 alexpilotti: please
16:05:23 There's an annoying bug on snapshot management
16:05:38 related to the size of the image
16:06:01 liuxpei: thanks for your help with that one
16:06:26 I attempted the easy way, consisting of trying to convince Glance that the size of the image is the virtual one, not the VHD file size
16:06:37 but that's a dead end
16:07:09 alexpilotti: please link?
16:07:11 the only other way to get this done is to consider the VHD file size
16:07:23 pnavarro: right away :-)
16:07:24 https://bugs.launchpad.net/nova/+bug/1177927
16:07:25 Launchpad bug 1177927 in nova "VHD snapshot from Hyper-V driver is bigger than original instance" [Undecided,In progress]
16:07:27 thanks
16:07:30 tx liuxpei
16:07:38 yw
16:07:53 #link https://bugs.launchpad.net/nova/+bug/1177927
16:08:18 so when we spawn an image, we could simply resize the VHD to instance['root_gb'] - vhd_header_size
16:08:54 this way we can guarantee that the file size will always be less than the flavor size
16:09:07 alexpilotti: and I'm assuming this is currently the easiest way to overcome the size difference
16:09:13 we have to be consistent with this during resize as well
16:09:36 yep. The shortest path after trying to trick Glance failed :-)
16:09:38 alexpilotti: does this make it easier to be consistent?
16:09:50 we have no choice
16:09:54 alexpilotti: ok then
16:10:05 I mean, thin disks are working now
16:10:12 alexpilotti: then let's do it
16:10:16 because the file size limit is not capped
16:10:23 a question: for VHDs with different sizes, is vhd_header_size always the same?
16:10:34 note: we also have to backport it to grizzly
16:10:42 liuxpei: yep
16:11:02 we have to remember to check also the VHDX header size ;-)
16:11:04 alexpilotti: does it increase for vhdx?
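The resize approach discussed above (shrink the root VHD slightly below the flavor size so the file can never exceed it) can be sketched as follows; the exact margin, covering both VHD and VHDX headers, is settled in the exchange that follows. This is a minimal illustration only, not the actual Hyper-V driver code, and the helper name and margin value are assumptions.

```python
# Sketch only: the snapshot-size fix idea from the meeting, not real driver code.
# The margin constant and helper name are assumptions for illustration.

GB = 1024 ** 3
# Assumed fixed margin, chosen larger than both the VHD and VHDX header overheads.
HEADER_MARGIN = 1024 * 1024


def target_internal_size(instance):
    """Virtual size to resize the root VHD to at spawn time, so that the
    resulting file size stays at or below the flavor's root disk size."""
    root_bytes = instance['root_gb'] * GB
    return root_bytes - HEADER_MARGIN


if __name__ == "__main__":
    # Example: a 10 GB flavor leaves the VHD slightly under 10 GB, so
    # file size = disk size <= flavor size even once the header is added.
    print(target_internal_size({'root_gb': 10}))
```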
16:11:09 hehe
16:11:19 it's a different value, but still fixed
16:11:29 but different
16:11:36 i would assume it would increase
16:11:40 since we are talking about a few KB
16:11:53 ok
16:12:02 we can just add a fixed value > max(vhd_header, vhdx_header)
16:12:23 i understood
16:12:24 and solve this issue
16:12:25 that works
16:12:42 I'm all for taking this approach, I'm assuming the amount of code is trivial
16:12:43 the alternative would be to try to add a "tolerance" in manager.py
16:13:03 but I have to convince the rest of the Nova world for that to happen ;-)
16:13:12 alexpilotti: we don't want to have to do that
16:13:18 ;)
16:13:44 I think we have enough with the rest there :-)
16:13:52 I agree
16:13:58 ok, comments on this issue or should we move on?
16:13:58 ok, are we good with this topic?
16:14:03 lol
16:14:05 pnavarro: ?
16:14:11 +1 !
16:14:11 another question: is the file size for a VHD its actual size?
16:14:56 liuxpei: right now?
16:15:00 liuxpei: I believe so
16:15:02 liuxpei: can you please define "actual size"?
16:15:11 disk size
16:15:19 I think she means what's currently being used as disk size
16:15:23 yep, file size = disk size
16:15:55 ok
16:16:17 #topic H1
16:16:18 after adding a fixed value > max(vhd_header, vhdx_header), will the file size continue to be the disk size?
16:16:36 yep
16:16:53 and file size = disk size <= flavor size :-)
16:17:03 ok, I am ok with that now~
16:17:08 great
16:17:14 cool
16:17:18 alexpilotti: H1
16:17:24 ouch
16:17:27 alexpilotti: hehe
16:17:33 I thought you were going to say that
16:17:34 let me fetch the link with the blueprints
16:17:40 lol
16:17:41 I know dust is still settling
16:18:13 pnavarro: while he's mentioning blueprints
16:18:14 #link https://blueprints.launchpad.net/nova?searchtext=hyper-v
16:18:24 pnavarro: are there any specific to cinder we will require?
16:18:24 so here's the list
16:18:54 I added "only" nova and ceilometer so far
16:19:12 cinder and quantum are missing (the latter for a good reason) :-)
16:19:14 primeministerp: I'd add some to complete the missing features that were added in G
16:19:25 pnavarro: great
16:19:31 pnavarro: thank you
16:19:41 alexpilotti: yes understood
16:19:58 but, I won't have time for H1
16:20:06 pnavarro: fair enough
16:20:09 H1 is close
16:20:21 H1 is just for waking up
16:20:28 hahaha
16:20:36 H2 is the real deal and H3 is for the last minute panic
16:21:01 so there's plenty of time :-)
16:21:08 ok
16:21:22 alexpilotti: do you want to update on the status of the clustering discussion
16:21:26 kidding, we'll have a lot of stuff in on H2
16:21:38 we missed that whole thing last week
16:21:40 ohh yeah, clustering
16:21:43 maybe for the record
16:21:46 yep
16:21:53 #topic clustering
16:22:09 I want to hear the IBM opinion here too :-)
16:22:18 alexpilotti: I know they have one
16:22:34 so the idea is that we got tons of requests for supporting Hyper-V host level clusters
16:22:56 aka old school MSCS clusters with CSV storage etc
16:22:59 primeministerp: he he
16:22:59 **hyper-v cluster as compute node***
16:23:02 yes. this is a hot topic
16:23:07 schwicht: ahh you joined us
16:23:16 schwicht: glad you made it, Frank
16:23:29 sorry it took so long ...
16:23:32 np
16:23:37 schwicht: nice to meet you
16:23:54 alexpilotti: please continue, we have proper IBM representation in the channel
16:24:23 the idea is that most Nova core guys are totally against clustering at the Nova level
16:24:50 their main argument is that it simply doesn't belong to OpenStack
16:25:17 it's not particularly easy from a technical standpoint, but feasible
16:25:36 on the other side, support for vCenter and SCVMM might be on the roadmap
16:25:40 one thing I want, for VM HA, is to evacuate a VM from a failed Hyper-V host to another
16:25:41 by using cells
16:25:53 If I am not mistaken Xen clusters were supported by OpenStack. At least I heard that it should work.
16:25:58 liuxpei: that's our goal as well
16:26:09 in Havana?
16:26:30 gokrokve: they are using a specific Nova feature for "grouping" the servers
16:26:52 gokrokve: but they still have a single scheduler on top (nova-scheduler)
16:27:16 alexpilotti: is that something we could take advantage of?
16:27:23 the main issue in using any type of failover clustering solution is that nova-scheduler will have a second scheduler in front
16:27:51 which is understandable
16:27:55 alexpilotti: in this case the hyper-v cluster resource mgr
16:28:17 an alternative would be to consider the cluster as a single macro-host
16:28:35 leaving the actual host scheduling to the cluster
16:28:46 but that doesn't work for a lot of reasons
16:28:59 alexpilotti: i can see how
16:28:59 alexpilotti: can you name a few?
16:29:01 it doesn't
16:29:04 well
16:29:08 you need a single point
16:29:11 of entry
16:29:19 if it's handling the host scheduling
16:29:23 which means
16:29:27 one node would have to be that
16:29:58 issues:
16:29:59 because it would make sense
16:30:12 to have a nova compute on each individual node
16:30:16 nova-scheduler would see a single host with e.g. 200GB memory free
16:30:31 which actually is separated into e.g. 20GB on 5 hosts
16:30:36 ok that does not work, I agree ...
16:30:43 exactly
16:30:43 it makes sense to see how vmware solves that
16:31:01 at this point it could try to boot a VM with 40GB, which is not going to work
16:31:10 they have a total capacity and the largest deployable capacity
16:31:42 still, the largest deployable can be above the current limit
16:31:54 schwicht: they handle it by having it talk to the higher vsphere layer
16:32:00 guys, I have to leave, I'll read the logs later
16:32:08 pnavarro: thanks again
16:32:13 pnavarro: bye!
16:32:59 besides that, the stats would be very coarse, but that's a trivial issue
16:33:28 another problem is related to manual failover
16:33:35 or live-migration
16:33:49 schwicht: makes sense
16:34:07 being a single host from the nova perspective, there's no way to interact with each individual host
16:34:27 I proposed an alternative, while waiting for the cell based approach to take shape
16:35:07 there's no particular reason for being forced to use MSCS
16:35:16 as long as we can achieve proper HA
16:35:47 Why not expose all cluster components individually but provide a hint to the scheduler that it is a cluster?
16:35:50 the idea is to add a heartbeat service on each compute host
16:36:11 gokrokve: they ditched that idea
16:36:27 gokrokve: it was proposed for baremetal initially
16:36:39 also, in theory you can reach the same result of VM availability without the complexity of a cluster underneath
16:37:00 anyway, to finish with the potential solution
16:37:13 we can provide agents for heartbeat
16:37:20 alexpilotti: primeministerp: I think you need both .. a single point of entry for the scheduling and VM management, and to enumerate cluster members to be able to set a node in maintenance mode
16:37:24 and provide failover on top of nova itself
16:37:46 Nova has a feature called "evacuate"
16:38:06 we can use that to failover in case of missing heartbeat reports
16:38:24 and handle the usual live-migration for manual failovers as we already do
16:38:52 IMO this would be a fairly simple approach, working with any hypervisor and available in a few days
16:39:20 alexpilotti: you would miss MS System Center co-existence .. that you may get with Cells, or did I misunderstand that?
16:39:21 think about HALinux or similar solutions as a reference
16:39:37 schwicht: you would get that w/ a cell
16:39:38 schwicht: correct
16:40:03 ok
16:40:04 schwicht: but for that we need to wait for cells to get to the stage to support it
16:40:28 schwicht: and even with all of VMWare's pressure on the subject, I doubt it will happen for Havana
16:40:49 so to recap we have 3 choices:
16:40:58 1) fork and do our cluster
16:41:11 2) a clustering project on top of Nova
16:41:15 * primeministerp doesn't support forks
16:41:16 3) wait for cells
16:41:36 primeministerp: it was just for completeness ;-)
16:41:43 I'll throw a +1 for #3
16:41:47 alexpilotti: i know
16:42:16 any other votes or comments?
16:42:27 I like #3 best because it seems clean
16:42:35 ok
16:42:40 gokrokve: ?
16:43:03 Number 2 looks like a more general approach and might work not only for Hyper-V
16:43:30 While 1 and 3 look like workarounds
16:43:57 ideally help and not just wait :-)
16:44:15 russellb: agreed
16:44:19 russellb: hehe
16:44:26 schwicht:
16:44:43 ok
16:44:45 moving on
16:44:54 russellb: do you think it's feasible to get it done for Havana?
16:45:04 doesn't look like it, nobody is working on it yet
16:45:18 schwicht:
16:45:25 russellb: that was my feeling as well ;-)
16:45:33 yep, just an idea so far
16:45:39 primeministerp: I heard the first one - gesundheit
16:45:48 schwicht: ;)
16:46:00 russellb: do you know of anybody else interested in this?
16:46:15 russellb: we could set up a cross-driver team, so to speak :-)
16:46:19 plenty of interest in the idea ... nobody saying they want to help write it
16:46:31 ideally anyone interested in the affected systems ... so the VMware sub-team
16:46:48 We are looking for clustering support in OpenStack.
16:47:07 russellb: ok, besides us and VMWare, nobody else?
16:47:19 russellb: just to know who I should reach out to
16:47:31 openstack-dev list in general
16:47:36 Mirantis will join development.
16:47:41 some core nova folks that need to be involved in the architecture/design of it
16:47:49 gokrokve: cool!
16:48:14 russellb: I guess I'm going to bug dansmith as usual then :-D
16:48:15 those that seem most interested seem to be me, dansmith, and comstud
16:48:52 russellb: do you know the IRC nick of the VMWare-sub lead maintainer?
16:48:55 but probably mostly in an advisory capacity
16:49:02 hartsocks, IIRC
16:49:04 hartsocks: <---
16:49:15 dansmith: tx guys
16:49:45 russellb: thanks
16:49:50 maybe he's around and can join us
16:50:10 alexpilotti: so additional issues to address?
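The HA alternative proposed above (a heartbeat agent on each compute host, with Nova's "evacuate" used as the failover action when a host stops reporting) could look roughly like the sketch below. This is a minimal illustration under assumed names and timeouts, not the actual agent; the evacuate call is left as a placeholder rather than a real Nova API invocation.

```python
# Sketch only: a minimal heartbeat watchdog along the lines discussed above.
# The timeout value, data structures, and failover() placeholder are assumptions.
import time

HEARTBEAT_TIMEOUT = 60  # seconds without a report before a host is considered down
last_seen = {}          # compute host name -> timestamp of its last heartbeat


def record_heartbeat(host):
    """Called whenever a compute host's heartbeat agent reports in."""
    last_seen[host] = time.time()


def find_dead_hosts(now=None):
    """Return hosts whose last heartbeat is older than the timeout."""
    now = now or time.time()
    return [h for h, ts in last_seen.items() if now - ts > HEARTBEAT_TIMEOUT]


def failover(host):
    # Placeholder: a real service would call Nova's "evacuate" for each
    # instance on the failed host, as mentioned in the meeting, and fall back
    # to ordinary live-migration for manual failovers.
    print("host %s missed its heartbeat, evacuating its instances" % host)


if __name__ == "__main__":
    record_heartbeat("hyperv-01")
    # With a 60s timeout, nothing is reported as dead immediately.
    print(find_dead_hosts())
```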
16:50:48 I'd love to see if the VMWare guys are around as we have a pretty good group online to talk about this briefly
16:50:54 but it doesn't seem so
16:51:01 alexpilotti: I would reach out via the list
16:51:06 alexpilotti: as a start
16:51:10 cool
16:51:19 ping dan wendlant too
16:51:20 VMware sub team meets tomorrow
16:51:27 kirankv: thdx
16:51:29 er thx
16:51:31 kirankv: tx
16:51:37 primeministerp: a common approach is best for the openstack consumers (like us) as well
16:51:48 primeministerp: you wanted to introduce the vhdx subject? :-)
16:51:55 o
16:51:59 did i?
16:52:07 #topic vhdx
16:52:15 alexpilotti: that work?
16:52:30 * primeministerp might be missing something
16:52:55 alexpilotti: actually I've forgotten, was there something specific to vhdx we needed to discuss?
16:52:58 primeministerp: it was just a follow up on your "thdx" typo :-)
16:53:02 haha
16:53:03 ok
16:53:06 which had perfect timing
16:53:15 VMware subteam meeting time is 1700 UTC
16:53:27 we are working on doing the V2 WMI API support
16:53:33 that will unlock VHDX
16:53:46 VHDX itself is a fairly easy feature
16:53:46 alexpilotti: H2 timeframe?
16:53:54 primeministerp: sure
16:53:56 hehe
16:54:06 actually most blueprints depend on V2
16:54:33 which means that we won't have new features on V1
16:54:43 aka 2008 R2
16:54:58 I just wanted to make this clear
16:55:07 and hear if somebody has different ideas about it
16:55:15 alexpilotti: I think we've been clear about that for some time
16:55:28 cool
16:55:30 alexpilotti: the platform is 2012
16:55:43 cool
16:55:53 should we move to the RDP console?
16:56:01 #topic RDP Console
16:56:36 schwicht: are you guys planning to add graphical console support on top of Hyper-V?
16:57:03 this is important to us, let's say
16:57:43 ok, I'm just interested to understand the extent of the interest
16:57:43 I would imagine it's important to anyone planning on using Hyper-V
16:58:14 that was my point as well, until I met a lot of people who simply didn't care :-)
16:58:23 for the product consuming it, we try to get it into the release, but need to see if the code is solid
16:58:31 anyway, from a technical standpoint we are there
16:59:02 but it requires a bit of work to add a few simple nova changes
16:59:24 alexpilotti, is it feasible for Havana?
16:59:28 that will impact all of Nova, not only the Hyper-V driver
16:59:37 luis_fdez: technically for sure
16:59:50 luis_fdez: all we need is a single REST API call
17:00:08 luis_fdez: and rename any reference to VNC in Nova ;-)
17:00:21 ok
17:00:26 unless we want to add a "get_rdp_console" method
17:00:44 which would add on top of get_vnc_console and get_spice_console
17:01:02 the interface is getting crowded and IMO needs some cleanup
17:01:14 alexpilotti: primeministerp: we will follow up offline on the topic, need to check the latest
17:01:24 schwicht: ok
17:01:26 schwicht: sure
17:01:30 guys we're out of time
17:01:44 let's end it and make a note to pick up the rdp discussion next week
17:01:53 schwicht: tx, let me know if you'd like to schedule a meeting for that
17:01:59 #endmeeting
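For the "get_rdp_console" idea floated near the end of the meeting, a rough sketch of what such a driver method might look like, by analogy with the get_vnc_console and get_spice_console methods mentioned above, is shown below. The class, return format, helper method, and hard-coded host are assumptions for illustration, not the real Nova driver interface.

```python
# Sketch only: a hypothetical get_rdp_console, modelled on the shape of the
# existing console methods discussed in the meeting. Not the actual Nova API.


class HyperVConsoleSketch(object):
    """Illustration only, not the real Hyper-V driver."""

    def _get_host_address(self, instance):
        # Stand-in: a real driver would look up the Hyper-V host that is
        # currently running the instance.
        return "hyperv-host.example.com"

    def get_rdp_console(self, instance):
        # Returns enough information for a console proxy to reach the VM:
        # the Hyper-V host, the console port, and an instance identifier.
        return {
            'host': self._get_host_address(instance),
            'port': 2179,  # assumed Hyper-V VM console (VMConnect) port
            'internal_access_path': instance['uuid'],
        }


if __name__ == "__main__":
    print(HyperVConsoleSketch().get_rdp_console({'uuid': 'fake-uuid'}))
```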