17:09:15 <hartsocks> #startmeeting VMwareAPI
17:09:16 <openstack> Meeting started Wed Jun  5 17:09:15 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:09:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:09:19 <openstack> The meeting name has been set to 'vmwareapi'
17:10:10 <hartsocks> #salutations
17:10:11 <hartsocks> Shawn Hartsock, VMware here.
17:10:27 <ivoks> Ante Karamatic, Canonical
17:10:32 <tjones1> Tracy  Jones, VMware
17:10:53 <kirankv> Kiran KV, HP
17:11:47 <hartsocks> Anyone else want to say hi?
17:12:12 <zhiyan> hi Zhi Yan
17:12:24 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:12:39 <Eustace> Hi Eustace here
17:12:48 <rsacharya_> Srinivas from HP
17:13:14 <hartsocks> I'm alternating each meeting ...
17:13:44 <hartsocks> some meetings I start bugs first… some I start blueprints first...
17:13:48 <Sabari_> Hi Sabari here, I recently started contributing to VMware driver
17:14:04 <hartsocks> Anybody else?
17:14:22 <hartsocks> #topic Bugs
17:14:35 <hartsocks> Critical bugs...
17:14:46 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1180044
17:14:47 <uvirtbot> Launchpad bug 1180044 in nova "nova boot fails with multiple vCenter managed datacenters" [Critical,In progress]
17:15:06 <hartsocks> I'm going to work on this today and tomorrow, I hope to put up a patch this week on it.
17:15:22 <hartsocks> I've changed the name to reflect a better triage of the problem.
17:15:41 <hartsocks> It seems when you have multiple datacenters in some setups the OpenStack driver gets confused.
17:16:01 <hartsocks> The other high priority bug is:
17:16:14 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1183192
17:16:16 <uvirtbot> Launchpad bug 1183192 in nova "VMware VC Driver does not honor hw_vif_model from glance" [Critical,In progress]
17:16:24 <hartsocks> Is there anyone around who can comment on this?
17:16:45 <ivoks> all I can see is that the commit just requires a test case
17:17:35 <ivoks> i'll ping yaguang this week to provide some info about it
17:17:51 <ivoks> s/this week/tomorrow
17:18:09 <hartsocks> okay
17:18:21 <hartsocks> #action ivoks to follow up with yaguang
17:18:46 <hartsocks> Are there any other bugs that are critical/blockers that we should be putting developer effort on right away?
17:19:45 <hartsocks> What about patches in need of reviews?
17:19:48 <hartsocks> I have one:
17:20:18 <hartsocks> https://review.openstack.org/#/c/30036/
17:20:22 <hartsocks> Any others?
17:20:28 <hartsocks> #link https://review.openstack.org/#/c/30036/
17:20:36 <kirankv> I've got a couple:
17:20:37 <kirankv> https://review.openstack.org/#/c/29396/
17:20:52 <kirankv> https://review.openstack.org/#/c/29552/
17:21:09 <kirankv> https://review.openstack.org/#/c/30282/
17:21:21 <Sabari_> I have one: https://review.openstack.org/#/c/30822/
17:21:41 <hartsocks> looks like pep8 changed on me. My local pep8 passes.
17:22:17 <hartsocks> hmmm...
17:23:02 <hartsocks> Let's try and review each other's changes then and watch for new formatting rules.
17:23:41 <hartsocks> Anything else on bugs before we move to blueprints?
17:23:44 <Sabari_> okay, I didn't see that the Jenkins build failed for the latest patchset. Should look into it.
17:24:30 <hartsocks> @sabari_ yeah, watch that.
17:25:09 <hartsocks> #link https://review.openstack.org/#/q/status:open+project:openstack/nova,n,z
17:25:21 <hartsocks> lists all open reviews for the openstack/nova project
17:25:46 <hartsocks> I look for "vmware" in the subject line in these for patches to pay attention to reviewing.
17:26:26 <hartsocks> Okay anything else?
17:26:38 <Sabari_> @kirankv I'd like to know about https://review.openstack.org/#/c/30628/
17:27:28 <hartsocks> BTW: if folks would like to have me look at a patch add my username "hartsocks" as a reviewer and it will come up on my list of things todo.
17:27:52 <kirankv> ok, so the change being done is to pick up the *shared* datastore that has enough space to contain the VM disk
17:28:16 <kirankv> by shared, it means shared across all the hosts in the cluster
17:29:25 <Sabari_> okay, I am interested in the review because I may have a dependency on it. I am working on the fix for bug https://bugs.launchpad.net/nova/+bug/1104994
17:29:26 <uvirtbot> Launchpad bug 1104994 in nova "Multi datastore support for provisioning of instances on ESX" [High,In progress]
17:30:18 <Sabari_> the basic goal was just to give the admin a way to use some datastores for OpenStack but not others, and a regex seems like an easy way for the admin to pick out a set of datastores while excluding others.
17:32:06 <kirankv> it's like having two different filters for the datastores: the regex way, or considering all shared datastores
17:32:13 <kirankv> both can co-exist
17:32:59 <kirankv> and one of them is configured for the driver
17:33:15 <hartsocks> Would the order be:
17:33:35 <hartsocks> find all datastores (use regex if present)
17:33:46 <hartsocks> then find datastore in this list with the most space
17:33:47 <hartsocks> ?
17:34:00 <Sabari_> @kirankv This change can complement your fix to address the capacity issue. But depending on what goes in first, the other has to resolve conflicts.
17:34:34 <Sabari_> I had some irc issues reading the latest comments. Hold on.
17:34:36 <kirankv> yes, there would be rebasing and merge conflict resolution that needs to be done based on who goes in first
17:34:57 <hartsocks> That's going to happen.
17:35:06 <hartsocks> Whoever goes last has to manage the merge.
17:36:01 <Sabari_> If the regex is not specified, the default behavior kicks-in. And that would be the shared datastore filter, if merged.
17:36:12 <Sabari_> Yes, that would be the resolution
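A minimal sketch of the datastore selection order discussed above: apply the regex filter when it is configured, otherwise fall back to the shared-datastore filter, then take the candidate with the most free space. The function name, the `datastore_regex` parameter, and the `name`/`shared`/`free_space` attributes are illustrative assumptions, not the code under review.

```python
import re


def pick_datastore(datastores, datastore_regex=None):
    """Hypothetical selection order: regex filter if set, else shared datastores only."""
    if datastore_regex:
        # Admin-supplied regex includes some datastores and excludes others.
        candidates = [ds for ds in datastores
                      if re.match(datastore_regex, ds.name)]
    else:
        # Default behavior: only datastores shared across all hosts in the cluster.
        candidates = [ds for ds in datastores if ds.shared]
    if not candidates:
        raise ValueError("no usable datastore found")
    # Among the candidates, pick the one with the most free space.
    return max(candidates, key=lambda ds: ds.free_space)
```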
17:36:45 <hartsocks> Okay, so are we clear on what to do with 1104994?
17:37:13 <hartsocks> I would like to move on to blueprints.
17:37:17 <Sabari_> Yes, I am now.
17:37:22 <hartsocks> #topic blueprints
17:37:35 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:37:54 <hartsocks> Firstly, thanks for getting those pesky copyright issues taken care of!
17:38:24 <kirankv> must thank Eustace for following it up
17:39:02 <kirankv> just submitted a patch today for the bp; couldn't work on it earlier
17:39:06 <hartsocks> Thanks Eustace!
17:39:21 <hartsocks> #link https://review.openstack.org/#/c/30282/
17:39:26 <hartsocks> Is that the one?
17:39:45 <hartsocks> Looks like someone put it in my reviewer queue
17:40:01 <kirankv> yes
17:40:24 <Eustace> welcome..
17:40:40 <hartsocks> so, this is working? I see tests. I like tests.
17:41:36 <hartsocks> Okay moving on...
17:42:00 <hartsocks> There has been some discussion over in #openstack-vmware about
17:42:03 <hartsocks> #link https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images
17:42:06 <kirankv> yes, but I had to trim some code to make the patchset smaller; will follow it up with other patchsets once this gets through
17:42:40 <kirankv> once the current patchset is through, I will submit a new one
17:42:54 <hartsocks> From my reading this blueprint is about making glance use vCenter as a kind of image storage server.
17:43:02 <kirankv> that handles the templates bp
17:43:18 <hartsocks> So am I reading correctly?
17:43:49 <hartsocks> I think @ivoks had some comments on this.
17:44:03 <ivoks> i can just repeat what i said...
17:44:09 <kirankv> there wouldn't be a code change in glance for this; I figured out that the image metadata can be used. The changes will be in nova to deploy the template
17:44:55 <hartsocks> @ivoks probably a good idea to summarize things here for the record.
17:45:01 <ivoks> sure
17:45:21 <ivoks> so, i don't think we should have image type (template) that can be used only by one hypervisor
17:46:01 <ivoks> instead, if possible, i think that having another storage backend for glance, that would communicate with vcenter, would be a better approach
17:46:41 <ivoks> but i don't have any strong feelings about this; it just crossed my mind that hypervisor specific things should be contained within nova
17:47:48 <hartsocks> my thought here was that if the template is in vCenter, then it should be possible to modify the OpenStack + VMwareAPI driver to make use of that fact.
17:48:12 <hartsocks> If the template and the VM are both in the same vCenter Glance can do less work.
17:48:31 <hartsocks> If the template and the VM are under different hypervisors … glance still needs to stream the image.
17:48:39 <kirankv> ivoks: for this specific bp, the changes are contained in nova; however, the drawback is that the template is restricted to a single vCenter
17:50:03 <kirankv> the other problem is that linked clone deployment does not happen for templates, while it does happen for other glance images
17:50:16 <ivoks> exactly
17:50:31 <kirankv> but it gives the admin the ability to use the existing templates
17:51:27 <hartsocks> so… how does the glance change work?
17:51:33 <hartsocks> Can we document that?
17:52:24 <kirankv> yes, the metadata that needs to be set needs to be documented; I will send that along with the patch for the bp
17:52:52 <ivoks> so, in this case you would still have regular glance images + templates (vsphere specific)
17:52:55 <ivoks> ?
17:53:40 <kirankv> yes, the regular glance images co-exist with the templates; in the case of templates no image is uploaded, just an OVF
17:53:42 <ivoks> (excuse me if i sound silly)
17:53:53 <ivoks> right...
17:54:22 <tjones1> not silly - i was wondering the same thing
17:54:46 <hartsocks> @kirankv is it possible to make your change in a way that allows vCenter templates to potentially serve other hypervisors?
17:54:54 <ivoks> hartsocks: in that case, yaguang's BP does make sense; vsphere needs to handle images better
17:55:30 <ivoks> but we'll get to that one later
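For context on the templates blueprint, a rough sketch of how a vCenter template might be registered in glance purely through image metadata, as described above (no glance code change, no disk bits uploaded). The endpoint, token, and the `vmware_template*` property names are placeholders, not the keys that will ship with the patch.

```python
from glanceclient import Client

# Placeholder endpoint and token; the property names below are assumptions.
glance = Client('1', endpoint='http://glance-host:9292', token='...')

glance.images.create(
    name='ubuntu-template',
    disk_format='vmdk',
    container_format='ovf',
    is_public=True,
    properties={
        # Marks the entry as a vCenter template; nova would clone it at boot
        # time instead of streaming an image from glance.
        'vmware_template': 'true',
        'vmware_template_name': 'ubuntu-template',
    },
)
```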
17:55:46 <hartsocks> the other BP is here:
17:55:55 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:56:20 <hartsocks> This one isn't slated as a high priority, but I put it on the agenda because it seems related.
17:56:36 <kirankv> hartsocks: that would be a nice feature to have, but I'm not sure how that would be done
17:57:12 <ivoks> it's more a blueprint for vsphere than for nova :)
17:57:57 <hartsocks> Well, I'm going to publicly admit I have no idea what an "ephemeral disk" is.
17:58:08 <hartsocks> It sounds like our "thin provision" disk
17:58:13 <hartsocks> But it might not be.
17:58:34 <hartsocks> We're short on time.
17:59:00 <ivoks> it's a disk that's not kept after the instance is terminated
17:59:12 <ivoks> you can call it temporary disk
17:59:20 <hartsocks> ah… well… we *do* have an API for that.
17:59:33 <hartsocks> You just have to put driver code in that calls it.
18:00:14 <tjones1> LOL - I just googled it too.  Sounds like a RAM disk concept
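For reference, ephemeral space is requested through the flavor, and the driver is expected to create the extra disk at boot and discard it when the instance is terminated. A quick sketch with python-novaclient; the credentials, names, and sizes are made up.

```python
from novaclient.v1_1 import client

# Placeholder credentials.
nova = client.Client('demo', 'secret', 'demo', 'http://keystone-host:5000/v2.0')

# A flavor with 10 GB of ephemeral (temporary) disk in addition to the
# 20 GB root disk; the ephemeral disk is deleted along with the instance.
nova.flavors.create(
    name='m1.ephemeral',
    ram=2048,       # MB
    vcpus=1,
    disk=20,        # root disk, GB
    flavorid='auto',
    ephemeral=10,   # ephemeral disk, GB
)
```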
18:00:20 <hartsocks> I would like @kirankv and @ivoks, and whomever is interested to head over to #openstack-vmware to continue discussion on this design issue.
18:00:29 <ivoks> sure
18:01:18 <kirankv> ivoks: what time zone are you in? I'm at UTC+5:30, so it's a little late for me today
18:01:19 <hartsocks> I would also like to encourage any of you who have issues to make use of #openstack-vmware to hold impromptu meetings and discussions and then report those back here when it's warranted.
18:01:48 <ivoks> kirankv: utc +2
18:02:39 <ivoks> kirankv: we can catch up during the day tomorrow
18:02:42 <kirankv> ivoks: will discuss on #openstack-vmware tomorrow/early next week
18:02:46 <ivoks> ok
18:02:58 <hartsocks> okay, we're out of time
18:03:04 <kirankv> ok
18:03:27 <hartsocks> We kind of got shorted on time by the other team. But I think we made good use of our time today.
18:03:40 <hartsocks> #endmeeting