17:02:59 #startmeeting VMwareAPI
17:03:00 Meeting started Wed Jul 3 17:02:59 2013 UTC. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:03:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:03:03 The meeting name has been set to 'vmwareapi'
17:03:13 Who's around for VMwareAPI subteam meeting time!
17:03:18 Name, company this time.
17:03:22 Just because...
17:03:24 :-)
17:03:27 hello! Dan Wendlandt, vmware
17:03:43 Hi! Kiran, HP
17:03:57 Hi Eustace, HP
17:04:02 though technically people should feel free to not indicate a company, if they prefer not to. In OpenStack, people are also free to contribute as individuals
17:04:04 Hi all
17:04:16 yaguang, Canonical
17:04:40 @danwent thank you. yes.
17:04:42 If you don't want to name a company, you don't have to.
17:05:23 I'm Shawn Hartsock from VMware tho' and this is the part of the meeting where we talk bugs...
17:05:31 #topic bugs
17:05:46 Anyone have a pet bug that needs attention?
17:06:31 The silence is deafening.
17:06:36 I have one that solves an incompatibility issue with PostgreSQL
17:07:13 Hrm. Well, I meant bugs that are related to VMware's APIs and drivers specifically. :-)
17:07:26 oh, sorry
17:07:28 it's related to vmware :)
17:07:32 Is it?
17:07:38 https://bugs.launchpad.net/nova/+bug/1195139
17:07:41 Launchpad bug 1195139 in nova "vmware Hyper doesn't report hypervisor version correctly to database" [Undecided,In progress]
17:07:44 the version issue
17:08:13 My apologies.
17:09:09 no worries
17:09:15 Hmm… I will look more closely at this one later… but IIRC you can have versions like 5.0.0u1
17:09:29 Not sure how that would work.
17:09:51 yes, I would suggest moving to a String/Text field type in the database
17:10:19 well, but the field that is being retrieved gives the numerals only and never the update versions u1, u2... not sure if that has changed now
17:10:33 the nova libvirt driver uses an integer version to do a version compare
17:11:06 would that affect VMware drivers?
17:11:37 I mean the column is set to integer for that use case
17:12:43 I haven't yet seen code in the VMware driver that has such a use case; maybe moving to String wouldn't harm
17:12:51 Interesting… version numbers are one of those things that most systems treat as strings, so I'm surprised this is an issue.
17:12:57 hmmm
17:13:36 well, if I have both libvirt and vmware then changing it to string would break libvirt, so I'd prefer not doing a db change
17:13:54 agree with kirankv
17:13:57 oh yeah, I almost forgot that point
17:14:07 okay.
17:14:12 I see your point.
17:14:14 understood
17:14:32 #action hartsocks to follow up on https://bugs.launchpad.net/nova/+bug/1195139
17:14:34 Launchpad bug 1195139 in nova "vmware Hyper doesn't report hypervisor version correctly to database" [Undecided,In progress]
17:14:58 I'll figure out what the right triage actions are after the meeting.
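
[Aside: the integer column exists because the libvirt driver packs its dotted version into a single comparable integer (major * 1000000 + minor * 1000 + micro). A minimal sketch of that packing, assuming update suffixes like "5.0.0u1" are simply stripped; the suffix handling is a guess, not something the bug fix confirms:]

    import re

    # Sketch only: pack a dotted hypervisor version into the integer the
    # database column expects, following the libvirt-style convention of
    # major * 1000000 + minor * 1000 + micro. Dropping a trailing update
    # suffix such as 'u1' is an assumption made here for illustration.
    def version_to_int(version_str):
        match = re.match(r'(\d+)\.(\d+)\.(\d+)', version_str)
        if match is None:
            raise ValueError('unrecognized version: %s' % version_str)
        major, minor, micro = (int(g) for g in match.groups())
        return major * 1000000 + minor * 1000 + micro

    assert version_to_int('5.0.0u1') == 5000000  # '5.0.0u1' and '5.0.0' collide

[Note that the update level is lost in the packing, which is why a String column was floated above.]
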
17:15:23 Any other bugs to bring up?
17:15:31 https://bugs.launchpad.net/nova/+bug/1190515
17:15:32 Launchpad bug 1190515 in nova "disconnected ESXi Hosts cause VMWare driver failure" [High,In progress]
17:15:56 There were a couple of bugs related to the fix I am working on for this issue.
17:16:10 It would be better to raise the priority of this bug
17:16:43 It's already rated as "high" ...
17:16:49 you think this is critical?
17:17:08 Question: does a patchset for a higher-priority bug get reviewed faster?
17:17:28 no. not really.
17:17:35 Sorry, I thought I saw a different priority on the bug.
17:17:41 oh!
17:17:44 It's just a priority helper for us to decide.
17:18:01 ok
17:18:22 I'm on the bug triage team though and this might help explain the priorities...
17:18:25 #link https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29
17:18:47 Critical means it prevents a key feature from working properly
17:19:08 If there's a workaround then it can't be "Critical"
17:19:24 Just FYI.
17:20:09 Any other bugs we need to discuss?
17:20:59 Okay, moving on to blueprints in ...
17:21:02 3...
17:21:03 2...
17:21:13 #topic blueprints
17:21:37 #link https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
17:21:47 This is the FC support blueprint.
17:22:16 I think this is not even set for Havana right now.
17:22:28 @kirankv I think this is one of yours
17:22:33 yes,
17:23:01 working on this, refactoring the iSCSI code so that it can be used for FC as well
17:23:17 this week a WIP patch should get posted
17:23:20 are you trying for Havana-3 for this?
17:23:48 (I don't see a series goal)
17:23:50 will initially post it as for Havana-2
17:24:02 will set it when I post the patch
17:24:43 okay, you can try…
17:24:50 ok
17:25:07 (lots of reviews for the core team to get through, so H2 will be hard)
17:25:20 Let's see what else (before I get to the big ones)
17:25:43 #link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:25:50 My blueprint turned out to be pretty simple.
17:26:11 I've posted code but it is "work in progress"
17:26:36 I've got a bug in my gerrit account… my "work in progress" won't show up
17:26:42 So just FYI
17:26:49 isn't the clone strategy something to be decided at instance creation time rather than by the image itself?
17:27:16 That's why there's a patch this early.
17:27:57 This is one strategy that was easy. Decide that this "type" of machine performs best as a linked-clone or as a full-clone.
17:28:19 Considering that you don't turn a web-server image into a database-server image using "nova boot", this seems reasonable to me.
17:28:46 ok, let me see if there are options that can be specified for nova boot
17:29:32 This feeds @yaguang 's work...
17:29:39 there is metadata that can be used to describe the instance
17:29:59 and we can get it from the db
17:30:18 and how do we put it into the db?
17:30:33 hmm, the only other option would be scheduler hints
17:30:43 you just need to specify it as nova boot options
17:31:02 #link http://docs.openstack.org/cli/quick-start/content/nova-cli-reference.html
17:31:27 the section on "Insert metadata during launch"
17:31:31 gives the details
17:32:00 okay. just like the glance CLI
17:32:19 yes
17:32:28 So the real questions are:
17:32:41 1. what should the default be, "full" or "linked"?
17:32:54 2. where should you be able to override the default?
17:33:23 but I think we can set it as an image property: full clone or linked
17:33:43 I like that. (that's in the patch up right now)
17:33:55 I *also* like putting it at nova boot
17:33:59 in the instance.
17:34:17 I think I can do both. Letting nova boot's metadata override what is in the image.
17:34:26 Is that too much freedom?
17:34:29 Is that confusing?
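
[Aside: a minimal sketch of the precedence being proposed here — the image property supplies a per-image default, and instance metadata passed via "nova boot --meta" overrides it. The key name and the default value are hypothetical; neither was settled in the meeting:]

    # Hypothetical key name and default; the meeting left both open.
    CLONE_TYPE_KEY = 'vmware_clone_type'
    DEFAULT_CLONE_TYPE = 'linked'

    def resolve_clone_type(image_properties, instance_metadata):
        # The image property sets the per-image default...
        clone_type = image_properties.get(CLONE_TYPE_KEY, DEFAULT_CLONE_TYPE)
        # ...and metadata supplied at boot time wins over that default.
        return instance_metadata.get(CLONE_TYPE_KEY, clone_type)

    # e.g. the image defaults to linked clones, but this boot asks for a full clone:
    print(resolve_clone_type({CLONE_TYPE_KEY: 'linked'},
                             {CLONE_TYPE_KEY: 'full'}))  # -> 'full'
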
17:35:09 we might have to see how it is being done and used in kvm today
17:35:13 but the customer would have to load 2 images into glance when they can just load 1 and then tell the instance how to use it
17:35:25 that might give us insights on how admins use it
17:35:26 that seems simpler from a user point of view to me
17:35:28 in kvm
17:35:46 @tjones that's why I like the idea of letting the image control a "default" but letting nova boot control an override.
17:35:48 we can configure in nova.conf what kind of image type to use
17:35:50 kvm = libvirt
17:35:59 qcow2 or raw
17:36:01 For a while I was looking at ways to specify options at boot time. At least, I thought that the metadata service can only be used after the instance is created. I am not sure if that information will be passed along to the driver. Just curious if someone knows whether that can be done.
17:36:16 a raw image is just like a full clone
17:36:22 ok
17:36:27 @hartsocks - I agree with that and I don't think it's too much freedom
17:37:36 I'll post a patch in time for next Wednesday that shows both image and "nova boot" metadata switches. Then solicit your feedback again. I don't think we should worry if this is different from what KVM does.
17:37:45 (at least on this one small point)
17:38:09 But I will look at raw vs qcow2 to help me figure out how best to write this.
17:38:17 Next topic?
17:38:31 #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:38:42 This is yaguang's BP
17:38:49 I am working on it
17:39:08 I noticed a setting "use_linked_clone" that defaults to true.
17:39:18 This is already in the driver code on master.
17:39:31 yes, I also see it
17:39:39 So editing localrc for your devstack should let you work.
17:40:16 How will you deal with "linked-clone" disks? I noticed the instance has a link back to the image that made it.
17:40:25 That's why I was working to put "linked clone" in the image.
17:41:06 please take a look at this https://etherpad.openstack.org/vmware-disk-usage-improvement
17:42:05 okay… that's interesting...
17:42:43 you're still planning on using linked clones, just off of new resized copies?
17:42:45 yaguang: does this mean that when we use different flavors the image is copied over again just to be resized?
17:44:22 @yaguang hello?
17:44:30 @kirankv, yes if we use linked clone,
17:45:23 @yaguang could you resize a full-clone VMDK in place?
17:45:25 @hartsocks, yes I want to get confirmation from you guys
17:45:32 ok, since it's a local copy it would be much faster than transferring from glance
17:45:36 if this is ok
17:46:25 this is also how nova with kvm caches images to speed up instance build time
17:47:49 The only one that confuses me is the "linked clone" … the first part, steps 1 to 4
17:47:51 @hartsocks, a full-clone vmdk is first copied to the instance dir, and then resized
17:48:22 @yaguang That's not the bit that confuses me.
17:49:01 @yaguang the "full clone" steps 1 to 3 make sense and that's what I thought the blueprint would do...
17:49:16 let me explain, the idea is when using linked clone, we cache a base image first,
17:50:10 because there may be different flavors of instance created on the same VMware hypervisor
17:50:29 can't the next copy from glance be skipped as the original image is still there? Just copy and resize it?
17:50:31 and they have different root disk sizes
17:51:08 the download from glance happens just once
17:51:21 yes - great
17:51:25 Okay I get it…
17:51:38 download once and copy/resize after that
17:51:49 when a flavor of instance is to be created, we first check in the local cache dir
17:52:04 image-a gets image-a-small, image-a-large in the image cache
17:52:07 ?
17:52:10 if the resized image disk is there
17:52:42 if it isn't, we will do a copy and resize
17:52:55 so the steps should be - 1 - check if the image is in the local cache and if not, download
17:53:05 yes
17:53:24 ok to make the edit in the etherpad to make sure it's clear and we are all on the same page?
17:53:31 so if a request for image-abcdef-small is there we use it and do a linked clone from that point.
17:53:59 no
17:54:11 Should we put some notes on the Blueprint to explain this?
17:54:23 that's what i was getting at :-D
17:54:29 we do a full copy
17:54:50 and resize it to image--abdfsad_10
17:55:07 this new image disk is used as the base for a linked clone of the instance
17:56:02 Let's be sure to write this down in the blueprint.
17:56:06 each flavor will have its own base image, but the base image is not copied over from glance every time; instead the local cached image is copied and resized
17:56:15 so different flavors of instances on the same server have different linked-clone base images
17:56:44 @kirankv, exactly
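
[Aside: a minimal sketch of the cache flow just described, so the blueprint notes have something concrete to react to. The cache location, file naming, and helper bodies are assumptions; only the shape — download from glance once, copy and resize locally per root-disk size, then linked-clone off the resized disk — comes from the discussion above:]

    import os
    import shutil

    CACHE_DIR = '/tmp/vmware_image_cache'  # assumed location, not from the meeting

    def download_from_glance(image_id, dest):
        # Placeholder: the real driver would stream the VMDK from glance.
        open(dest, 'wb').close()

    def resize_vmdk(path, size_gb):
        # Placeholder: the real driver would grow the virtual disk to size_gb.
        pass

    def get_linked_clone_base(image_id, root_gb):
        # One resized copy per (image, root size) pair, e.g. 'image-abcdef_10'.
        os.makedirs(CACHE_DIR, exist_ok=True)
        base = os.path.join(CACHE_DIR, image_id)
        resized = '%s_%d' % (base, root_gb)
        if not os.path.exists(resized):
            if not os.path.exists(base):
                download_from_glance(image_id, base)  # glance transfer happens once
            shutil.copyfile(base, resized)  # local copy, no second glance download
            resize_vmdk(resized, root_gb)   # grow the copy to this flavor's root size
        return resized  # instances are then linked-cloned off this disk

[Under this layout, two images booted at three root sizes each would leave two base disks plus six resized disks in the cache, matching the 2 x 3 arithmetic discussed next.]
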
17:57:17 Just to be clear, each flavor + image … since you can have different images with different flavors… right?
17:57:31 yaguang: with custom flavors having the same base size and different ephemeral sizes, is the same procedure used?
17:57:33 yes
17:57:49 For example… image Debian and image Win32 each with flavors small, medium, large means 2 x 3 images.
17:58:24 I think the ephemeral disk handling is independent from the root disk
17:58:43 are ephemeral disks linked or not?
17:59:29 does it make sense to use linked clones for ephemeral disks?
17:59:44 #action yaguang to document blueprint https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage based on https://etherpad.openstack.org/vmware-disk-usage-improvement
18:00:16 We're out of official meeting time.
18:00:35 The room at #openstack-vmware is open for discussion.
18:00:43 yaguang: I will have to check more on ephemeral disks
18:01:59 okay. See you all next week.
18:02:04 I'll open next week on this topic.
18:02:09 #endmeeting