17:02:59 <hartsocks> #startmeeting VMwareAPI
17:03:00 <openstack> Meeting started Wed Jul  3 17:02:59 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:03:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:03:03 <openstack> The meeting name has been set to 'vmwareapi'
17:03:13 <hartsocks> Who's around for VMwareAPI subteam meeting time!
17:03:18 <hartsocks> Name, company this time.
17:03:22 <hartsocks> Just because...
17:03:24 <hartsocks> :-)
17:03:27 <danwent> hello!  Dan Wendlandt, vmware
17:03:43 <kirankv> Hi! Kiran, HP
17:03:57 <Eustace> Hi Eustace, HP
17:04:02 <danwent> though technically people should feel free to not indicate a company, if they prefer not to.  In openstack, people are also free to contribute as individuals
17:04:04 <yaguang> Hi all
17:04:16 <yaguang> yaguang, Canonical
17:04:40 <hartsocks> @danwent thank you. yes.
17:04:42 <hartsocks> If you don't want to name a company, you don't have to.
17:05:23 <hartsocks> I'm Shawn Hartsock from VMware tho' and this is the part of the meeting where we talk bugs...
17:05:31 <hartsocks> #topic bugs
17:05:46 <hartsocks> Anyone have a pet bug that needs attention?
17:06:31 <hartsocks> The silence is deafening.
17:06:36 <yaguang> I have one that solves an incompatibility issue with PostgreSQL
17:07:13 <hartsocks> Hrm. Well, I meant bugs that are related to VMware's API's and drivers specifically. :-)
17:07:26 <yaguang> oh, sorry
17:07:28 <kirankv> it's related to vmware :)
17:07:32 <hartsocks> Is it?
17:07:38 <yaguang> https://bugs.launchpad.net/nova/+bug/1195139
17:07:41 <uvirtbot> Launchpad bug 1195139 in nova "vmware Hyper  doesn't report hypervisor version correctly to database" [Undecided,In progress]
17:07:44 <kirankv> the version issue
17:08:13 <hartsocks> My apologies.
17:09:09 <kirankv> no worries
17:09:15 <hartsocks> Hmm… I will look more closely at this one later… but IIRC you can have versions like 5.0.0u1
17:09:29 <hartsocks> Not sure how that would work.
17:09:51 <sabari_> yes, i would suggest moving to a String/Text field type in the database
17:10:19 <kirankv> well, but the field that is being retrieved gives the numerals only and never the update versions u1,u2... not sure if that has changed now
17:10:33 <yaguang> the nova libvirt driver uses an integer version to do version comparisons
17:11:06 <sabari_> would that affect VMware drivers ?
17:11:37 <yaguang> I mean the column is set to integer for that use case
17:12:43 <sabari_> I haven't yet seen code in the VMware driver with such a use case; maybe moving to String wouldn't harm
17:12:51 <hartsocks> Interesting… version numbers are one of those things that most systems treat as strings, so I'm surprised this is an issue.
17:12:57 <sabari_> hmmm
17:13:36 <kirankv> well, if I have both libvirt and vmware then changing it to string would break libvirt, so I'd prefer not doing a db change
17:13:54 <yaguang> agree with kirankv
17:13:57 <sabari_> oh yeah, I almost forgot that point
17:14:07 <hartsocks> okay.
17:14:12 <hartsocks> I see your point.
17:14:14 <sabari_> understood
17:14:32 <hartsocks> #action hartsocks to follow up on https://bugs.launchpad.net/nova/+bug/1195139
17:14:34 <uvirtbot> Launchpad bug 1195139 in nova "vmware Hyper  doesn't report hypervisor version correctly to database" [Undecided,In progress]
17:14:58 <hartsocks> I'll figure out what the right triage actions are after the meeting.
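For context on why an integer column loses information here: the libvirt driver packs a dotted version into a single integer for comparisons, and a sketch of that scheme (a hypothetical helper, not the actual Nova code) shows how an ESXi update suffix like "u1" is silently dropped:

```python
import re

def version_to_int(version: str) -> int:
    """Pack a dotted version string into one comparable integer,
    the way an integer hypervisor_version column requires.
    Any update suffix such as "u1" is simply discarded, which is
    the information-loss problem discussed in bug 1195139."""
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", version)
    if not match:
        raise ValueError(f"unparseable version: {version!r}")
    major, minor, micro = (int(g) for g in match.groups())
    return major * 1_000_000 + minor * 1_000 + micro

print(version_to_int("5.0.0"))    # 5000000
print(version_to_int("5.0.0u1"))  # also 5000000 -- the "u1" is lost
```

This is why keeping the column as an integer (so libvirt's comparisons keep working) while living with the lost update suffix ended up being the preferred direction in the discussion above.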
17:15:23 <hartsocks> Any other bugs to bring up?
17:15:31 <sabari_> https://bugs.launchpad.net/nova/+bug/1190515
17:15:32 <uvirtbot> Launchpad bug 1190515 in nova "disconnected ESXi Hosts cause VMWare driver failure" [High,In progress]
17:15:56 <sabari_> There were couple of bugs related to the fix I am working on this issue.
17:16:10 <sabari_> It would be better to raise the priority of this bug
17:16:43 <hartsocks> It's already rated as "high" ...
17:16:49 <hartsocks> you think this is critical?
17:17:08 <kirankv> Question: does a patchset for higher priority bug get reviewed faster?
17:17:28 <hartsocks> no. not really.
17:17:35 <sabari_> Sorry, I thought I saw a different priority on the bug.
17:17:41 <kirankv> oh!
17:17:44 <hartsocks> It's just a priority helper for us to decide.
17:18:01 <kirankv> ok
17:18:22 <hartsocks> I'm on the bug triage team though and this might help explain the priorities...
17:18:25 <hartsocks> #link https://wiki.openstack.org/wiki/BugTriage#Task_2:_Prioritize_confirmed_bugs_.28bug_supervisors.29
17:18:47 <hartsocks> Critical means prevents a key feature from working properly
17:19:08 <hartsocks> If there's a work-around then it can't be "Critical"
17:19:24 <hartsocks> Just FYI.
17:20:09 <hartsocks> Any other bugs we need to discuss?
17:20:59 <hartsocks> Okay moving on to blueprints in ...
17:21:02 <hartsocks> 3...
17:21:03 <hartsocks> 2...
17:21:13 <hartsocks> #topic blueprints
17:21:37 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver
17:21:47 <hartsocks> This is the FC support blueprint.
17:22:16 <hartsocks> I think this is not even set for Havana right now.
17:22:28 <hartsocks> @kirankv I think this is one of yours
17:22:33 <kirankv> yes,
17:23:01 <kirankv> working on this, refactoring the iSCSI code so that it can be used for FC as well
17:23:17 <kirankv> this week a WIP patch should get posted
17:23:20 <hartsocks> are you trying for Havana-3 for this?
17:23:48 <hartsocks> (I don't see a series goal)
17:23:50 <kirankv> will initially post it as for Havana2
17:24:02 <kirankv> will set it when I post the patch
17:24:43 <hartsocks> okay, you can try…
17:24:50 <kirankv> ok
17:25:07 <hartsocks> (lot of reviews for the core to get through so H2 will be hard)
17:25:20 <hartsocks> Let's see what else (before I get the big ones)
17:25:43 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:25:50 <hartsocks> My blueprint turned out to be pretty simple.
17:26:11 <hartsocks> I've posted code but it is "work in progress"
17:26:36 <hartsocks> I've got a bug in my gerrit account… my "work in progress" won't show up
17:26:42 <hartsocks> So just FYI
17:26:49 <kirankv> isn't the clone strategy something to be decided at instance creation time rather than by the image itself?
17:27:16 <hartsocks> That's why there's a patch this early.
17:27:57 <hartsocks> This is one strategy that was easy. Decide that this "type" of machine performs best as a linked-clone or as a full-clone.
17:28:19 <hartsocks> Considering that you don't turn a web-server image into a database server image using "nova boot" this seems reasonable to me.
17:28:46 <kirankv> ok, let me see if there are options that can be specified for nova boot
17:29:32 <hartsocks> This feeds @yaguang 's work...
17:29:39 <yaguang> there is metadata that can be used to describe the instance
17:29:59 <yaguang> and we can get it from the db
17:30:18 <hartsocks> and how do we put it into the db?
17:30:33 <sabari_> hmm, the only other option would be scheduler hints
17:30:43 <yaguang> you just need to specify it as nova boot options
17:31:02 <kirankv> #link http://docs.openstack.org/cli/quick-start/content/nova-cli-reference.html
17:31:27 <kirankv> section on Insert metadata during launch
17:31:31 <kirankv> gives the details
17:32:00 <hartsocks> okay. just like the glance CLI
17:32:19 <kirankv> yes
17:32:28 <hartsocks> So the real questions are:
17:32:41 <hartsocks> 1. what should the default be "full" or "linked"?
17:32:54 <hartsocks> 2. where should you be able to override the default?
17:33:23 <yaguang> but I think we can set it as an image property, full clone or linked
17:33:43 <hartsocks> I like that. (that's in the patch up right now)
17:33:55 <hartsocks> I *also* like putting it at nova boot
17:33:59 <hartsocks> in the instance.
17:34:17 <hartsocks> I think I can do both. Letting nova boot's meta data override what is in the image.
17:34:26 <hartsocks> Is that too much freedom?
17:34:29 <hartsocks> Is that confusing?
17:35:09 <kirankv> we might have to see how it is being done and used in kvm today
17:35:13 <tjones> but the customer would have to load 2 images into glance when they can just load 1 and then tell the instance how to use it
17:35:25 <kirankv> that might give us insights on how admins use it
17:35:26 <tjones> that seems simpler from a user point of view to me
17:35:28 <yaguang> in  kvm
17:35:46 <hartsocks> @tjones that's why I like the idea of letting the image control a "default" but letting nova boot control an override.
17:35:48 <yaguang> we can configure nova.conf to set what kind of image type to use
17:35:50 <kirankv> kvm = libvirt
17:35:59 <yaguang> qcow2 or raw
17:36:01 <sabari_> For a while I was looking at ways to specify options at boot time. At least, I thought that the metadata service can only be used after the instance is created. I am not sure if that information will be passed along to the driver. Just curious if someone knows whether that can be done.
17:36:16 <yaguang> a raw image is just like a full clone
17:36:22 <kirankv> ok
17:36:27 <tjones> @hartsocks - I agree with that and I don't think it's too much freedom
17:37:36 <hartsocks> I'll post a patch in time for next Wednesday that shows both image and "nova boot" meta data switches. Then solicit your feedback again. I don't think we should worry if this is different from what KVM does.
17:37:45 <hartsocks> (at least on this one small point)
17:38:09 <hartsocks> But I will look at raw vs qcow2 to help me figure out how best to write this.
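The precedence agreed on above (nova boot metadata overrides the image property, which overrides a driver-wide default) could be sketched like this; the key name "vmware_linked_clone" and the helper are hypothetical illustrations, not the actual property names:

```python
# Hypothetical sketch of the clone-strategy precedence discussed above:
# instance (nova boot) metadata wins over the glance image property,
# which wins over a driver-wide default.
def pick_clone_strategy(instance_metadata: dict,
                        image_properties: dict,
                        default: str = "linked") -> str:
    # Check the boot-time metadata first, then the image property.
    for source in (instance_metadata, image_properties):
        value = source.get("vmware_linked_clone")
        if value is not None:
            truthy = str(value).lower() in ("true", "1", "yes")
            return "linked" if truthy else "full"
    return default

# Image asks for a full clone, but the boot request overrides it:
print(pick_clone_strategy({"vmware_linked_clone": "true"},
                          {"vmware_linked_clone": "false"}))  # linked
```

The image property supplies a sensible per-image default (a database-server image can insist on full clones), while the `nova boot --meta` override covers the one-off cases without uploading a second image to glance.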
17:38:17 <hartsocks> Next topic?
17:38:31 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:38:42 <hartsocks> This is yaguang's BP
17:38:49 <yaguang> I am working on it
17:39:08 <hartsocks> I noticed a setting "use_linked_clone" that defaults to true.
17:39:18 <hartsocks> This is already in the driver code on master.
17:39:31 <yaguang> yes, I also see it
17:39:39 <hartsocks> So editing localrc for your devstack should let you work.
17:40:16 <hartsocks> How will you deal with "linked-clone" disks? I noticed the instance has a link back to the image that made it.
17:40:25 <hartsocks> That's why I was working to put "linked clone" in the image.
17:41:06 <yaguang> please  take a look at  this  https://etherpad.openstack.org/vmware-disk-usage-improvement
17:42:05 <hartsocks> okay… that's interesting...
17:42:43 <hartsocks> you're still planning on using linked clones, just off of new resized copies?
17:42:45 <kirankv> yaguang: does this mean that when we use different flavors the image is copied over again just to be resized?
17:44:22 <hartsocks> @yaguang hello?
17:44:30 <yaguang> @kirankv, yes, if we use a linked clone
17:45:23 <hartsocks> @yaguang could you resize a full-clone VMDK in place?
17:45:25 <yaguang> @hartsocks, yes, I want to get confirmation from you guys
17:45:32 <kirankv> ok, since it's a local copy it would be much faster than transferring from glance
17:45:36 <yaguang> if this is ok
17:46:25 <yaguang> this is also what nova with kvm does to cache images and speed up instance build time
17:47:49 <hartsocks> The only one that confuses me is the "linked clone" … the first part steps 1 to 4
17:47:51 <yaguang> @hartsocks, a full clone vmdk is first copied to the instance dir, and then resized
17:48:22 <hartsocks> @yaguang That's not the bit that confuses me.
17:49:01 <hartsocks> @yaguang the "full clone" steps 1 to 3 make sense and that's what I thought the blueprint would do...
17:49:16 <yaguang> let me explain: the idea is that when using linked clones, we cache a base image first,
17:50:10 <yaguang> because there may be different flavors of instance created on the same VMware hypervisor
17:50:29 <tjones> can't the next copy from glance be skipped since the original image is still there? Just copy and resize it?
17:50:31 <yaguang> and they have different root disk sizes
17:51:08 <yaguang> the download from glance happens just once
17:51:21 <tjones> yes - great
17:51:25 <hartsocks> Okay I get it…
17:51:38 <tjones> download once and copy/resize after that
17:51:49 <yaguang> when a flavor of instance is to be created, we first check the local cache dir
17:52:04 <hartsocks> image-a gets image-a-small and image-a-large in the image cache
17:52:07 <hartsocks> ?
17:52:10 <yaguang> if the resized image disk is there
17:52:42 <yaguang> if it isn't, we will do a copy and resize
17:52:55 <tjones> so the steps should be - 1 - check if the image is in local cache and if not download
17:53:05 <yaguang> yes
17:53:24 <tjones> ok to make the edit in etherpad to make sure it's clear and we are all on the same page?
17:53:31 <hartsocks> so if a request for image-abcdef-small is there we use it and do a linked clone from that point.
17:53:59 <yaguang> no
17:54:11 <hartsocks> Should we put some notes on the Blueprint to explain this?
17:54:23 <tjones> that's what i was getting at :-D
17:54:29 <yaguang> we do a full copy
17:54:50 <yaguang> and resize it to image--abdfsad_10
17:55:07 <yaguang> this new image disk is used as the linked-clone base for the instance
17:56:02 <hartsocks> Let's be sure to write this down in the blueprint.
17:56:06 <kirankv> each flavor will have its own base image, but the base image is not copied over from glance every time; instead the local cached image is copied and resized
17:56:15 <yaguang> so different flavors of instances on the same server have different linked-clone base images
17:56:44 <yaguang> @kirankv, exactly
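The caching flow just summarized (download from glance once, then one local copy + resize per flavor, which becomes the linked-clone base) could be sketched as follows; the function and parameter names are hypothetical, for illustration only:

```python
import os
import shutil

def get_flavor_base_disk(cache_dir, image_id, root_gb,
                         download_from_glance, resize_disk):
    """Sketch of the per-flavor image cache discussed above:
    1. ensure the pristine image is cached locally (glance download once)
    2. ensure a per-flavor resized copy exists (local copy + resize)
    3. return that copy, which serves as the linked-clone base."""
    base = os.path.join(cache_dir, image_id)
    if not os.path.exists(base):
        download_from_glance(image_id, base)   # only on first use
    resized = f"{base}_{root_gb}"              # e.g. image-abc_10
    if not os.path.exists(resized):
        shutil.copyfile(base, resized)         # local copy, not glance
        resize_disk(resized, root_gb)          # grow to the flavor's root size
    return resized
```

Booting a small and a large flavor from the same image would then hit glance once and leave two resized bases (`…_10`, `…_80`) in the cache, matching the "each flavor has its own base image" summary above.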
17:57:17 <hartsocks> Just to be clear, each flavor + image … since you can have different images with different flavors… right?
17:57:31 <kirankv> yaguang: do custom flavors having the same base size and different ephemeral sizes use the same procedure?
17:57:33 <yaguang> yes
17:57:49 <hartsocks> For example… image Debian and image Win32 each with flavors small, medium, large means 2 x 3 images.
17:58:24 <yaguang> I think the ephemeral disk handling is independent from the root disk
17:58:43 <kirankv> are ephemeral disks linked or not?
17:59:29 <yaguang> does it make sense to use linked clones for ephemeral disks?
17:59:44 <hartsocks> #action yaguang to document blueprint https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage based on https://etherpad.openstack.org/vmware-disk-usage-improvement
18:00:16 <hartsocks> We're out of official meeting time.
18:00:35 <hartsocks> The room at #openstack-vmware is open for discussion.
18:00:43 <kirankv> yaguang: I will have to check more on ephemeral disks
18:01:59 <hartsocks> okay. See you all next week.
18:02:04 <hartsocks> I'll open next week on this topic.
18:02:09 <hartsocks> #endmeeting