17:02:38 <hartsocks> #startmeeting VMwareAPI
17:02:40 <openstack> Meeting started Wed Jul 17 17:02:38 2013 UTC.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:43 <openstack> The meeting name has been set to 'vmwareapi'
17:02:48 <kirankv> hi Shawn
17:03:00 <hartsocks> @kirankv hey
17:03:02 <Eustace> Hi All
17:03:39 <hartsocks> anybody else lurking about?
17:03:53 <hartsocks> I know a number of folks were traveling this week?
17:04:08 <tjones> Hi shawn - i am here now
17:04:19 <hartsocks> groovy
17:04:20 <danwent> hi
17:05:09 * hartsocks listens for a moment
17:05:34 <hartsocks> Okay. Let's get started.
17:05:39 <hartsocks> #topic bugs
17:06:01 <hartsocks> So if you aren't "done" with a bug by now, I'd say you've missed Havana-2 with that fix.
17:06:10 <hartsocks> Here's what's open...
17:06:13 <hartsocks> #link https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:06:27 <tjones> i'm so excited - i have one +2 and three +1s on my 1st patch :-)
17:06:34 <garyk> hi guys
17:06:37 <hartsocks> nice!
17:06:54 <hartsocks> hey
17:07:10 <hartsocks> So… I've got some bug triage work to catch up on.
17:07:28 <hartsocks> Does anyone have a pet bug they want to talk about?
17:07:46 <hartsocks> Any new blockers or any new bugs I should look at right away?
17:08:09 <garyk> hartsocks - i am going to push a patch addressing comments from https://review.openstack.org/#/c/36411/
17:08:43 <hartsocks> Nice.
17:09:23 <hartsocks> I'll be sure to cycle back around to this for review.
17:09:35 <garyk> it would also be nice if people can look at https://review.openstack.org/#/c/37389/ (quantum is broken with vmware)
17:10:01 <hartsocks> ah. This hasn't been triaged.
17:10:10 <hartsocks> So this is a new issue.
17:11:02 <hartsocks> I'd say this is Critical since there's no work-around. Is that fair?
17:11:16 <garyk> hartsocks: yes, that is correct
17:11:29 <hartsocks> okay then
17:12:02 <hartsocks> done
17:12:20 <garyk> thanks
17:12:23 <hartsocks> I'm glad to have you working on this.
17:12:57 <hartsocks> Does anyone else have a bug that needs attention?
17:13:30 <garyk> hartsocks: i'd like to talk about configuration variables
17:13:43 <hartsocks> okay...
17:13:51 <hartsocks> What do you have?
17:14:16 <hartsocks> Is this a bug or is this a blueprint?
17:14:37 <garyk> at the moment i find it very confusing with all of the variables with the prefix vmwareapi_. i'd like to suggest that we have a vmware section. wanted to know if others have opinions regarding this.
17:14:48 <garyk> i have made changes and would like to push a patch if others agree
17:14:51 <tjones> i like that idea
17:15:03 <hartsocks> I think that's more of a blueprint.
17:15:38 <hartsocks> If no one has any other bugs we can talk blueprints now.
17:15:48 * hartsocks listens
17:15:54 <garyk> ok, i'll open a bp
17:16:14 <kirankv> @garyk: This idea was suggested during a review when the VCDriver was initially posted
17:16:28 <hartsocks> #topic blueprints
17:16:37 <hartsocks> since that's what we're talking about anyway...
17:17:02 <hartsocks> So… yes. There was talk about shifting the way we handle configurations in previous meetings/reviews.
17:17:06 <garyk> kirankv: ok thanks. i'll open the bp and push the code in a few minutes
17:17:23 <garyk> hartsocks: the code will be backward compatible - oslo config has options for this
17:17:31 <hartsocks> wow.
17:17:33 <hartsocks> nice.
17:18:07 <hartsocks> #action garyk posting a new blueprint
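To make the backward-compatibility point concrete, here is a minimal sketch (not the patch under review) of how oslo.config lets options move into a [vmware] section while the old [DEFAULT] names keep working; the option and group names are illustrative.

```python
# Minimal sketch of a backward-compatible [vmware] section via oslo.config;
# option names are illustrative, not from the actual patch.
from oslo_config import cfg  # "from oslo.config import cfg" in the Havana era

vmware_group = cfg.OptGroup(name='vmware', title='VMware driver options')

vmware_opts = [
    cfg.StrOpt('host_ip',
               deprecated_name='vmwareapi_host_ip',
               deprecated_group='DEFAULT',
               help='Hostname or IP address of the vCenter server.'),
]

CONF = cfg.CONF
CONF.register_group(vmware_group)
CONF.register_opts(vmware_opts, group=vmware_group)

# Old deployments that still set "vmwareapi_host_ip" under [DEFAULT]
# keep working; new deployments set "host_ip" under [vmware].
```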
17:18:41 <hartsocks> One suggestion I saw was to add a configuration validation phase to the driver.
17:18:58 <hartsocks> This would check to see if you have specified a good cluster name,
17:19:24 <tjones> and a password if vnc is enabled :-D
17:19:24 <hartsocks> have the proper datastores, etc.
17:19:33 <hartsocks> @tjones yep.
17:19:37 <Sabari__> But those validations already exist today (ref to cluster name)
17:20:13 <hartsocks> The idea would be to move this up to some initial session setup so that you could error out before work began.
17:20:36 <Sabari__> Currently they happen when the driver is initialized. Maybe we just need to reorganize the code.
17:20:42 <hartsocks> This was also coupled with an idea to make the user specify a datacenter.
17:20:48 <Sabari__> oh okay
17:20:58 <kirankv> if only one of the validations fails, what action do you intend to take? say the conf has two clusters specified, one exists and one doesn't - do you log an error and continue?
17:21:25 <hartsocks> hmm...
17:21:37 <hartsocks> what would be wrong with dying out completely?
17:22:16 <hartsocks> I can see error and continue could lead someone to believe they had a "working" config when they had a bad one.
17:22:16 <tjones> dying is safer and wastes less time when you are trying to debug
17:22:37 <Sabari__> With multiple clusters managed by one service, it's still okay not to die completely, but it's important not to start the VCDriver for a cluster that fails validation.
17:23:35 <hartsocks> So there's some debate on the right behavior then.
17:23:37 <Sabari__> because the admin may choose to add clusters on the fly or something like that. *thinking*
17:23:52 <kirankv> ok.... startup failure is fine
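A hypothetical sketch of the fail-fast behavior just agreed on, assuming a helper elsewhere has already pulled the cluster names out of the vCenter inventory:

```python
# Hypothetical fail-fast validation at driver startup; the inventory
# lookup that produces found_clusters is assumed to exist elsewhere.

class ConfigurationError(Exception):
    """Raised when nova.conf references inventory that vCenter lacks."""


def validate_clusters(found_clusters, configured_names):
    """Die at startup rather than log-and-continue on a bad config."""
    missing = sorted(set(configured_names) - set(found_clusters))
    if missing:
        raise ConfigurationError(
            "Clusters not found in vCenter inventory: %s" % ", ".join(missing))


# e.g. the conf lists two clusters but only one exists:
try:
    validate_clusters({'clusterA'}, ['clusterA', 'clusterB'])
except ConfigurationError as exc:
    print(exc)  # the service would exit here instead of continuing
```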
17:24:25 <hartsocks> So I'm thinking this is an icehouse (next release 2014.1) timeframe blueprint
17:24:43 <hartsocks> I have also seen review comments on how we could alter some of the code around session...
17:24:48 <hartsocks> and how that's handled.
17:25:00 <tjones> @hartsocks - i was thinking that too.
17:25:00 <kirankv> today there is no dynamic update of the conf without a service restart, but if/when that changes, config checks can be tricky
17:25:28 <hartsocks> weird idea: could we have a command to add a cluster?
17:25:33 <hartsocks> That might be nuts.
17:25:57 <hartsocks> I'm afraid the idea of adding clusters on the fly feels dangerous.
17:26:22 <danwent> hartsocks: i don't think that's too nuts :)
17:26:28 <kirankv> well dynamic addition would be required since you don't want to bring down the service to add another cluster
17:26:53 <Sabari__> Just start a new service with the new cluster :)
17:26:55 <kirankv> the existing clusters' status should stay smileys (up)
17:27:19 <hartsocks> ah… so it is a use case then?
17:27:30 <danwent> in fact, i was thinking a bit about a slightly different model, where one uses tags in VC to indicate whether a cluster should be managed by Nova.  So you'd only need to tweak nova.conf when adding a new VC
17:27:32 <kirankv> well if that's the approach then config checks at startup are fine, as is stopping the service if config failures are detected
17:27:51 <Sabari__> That's a good idea, more like a filter
17:28:53 <hartsocks> @danwent interesting idea… and that makes me feel we need to get better inventory integration in the VCDriver
17:29:02 <Sabari__> It depends on tagging api's in VC. Need to check.
17:29:33 <hartsocks> @Sabari_ I'm going to saddle you with following up on that :-)
17:29:39 <danwent> anyway, didn't want to throw the discussion off track, but it's an interesting option, as updating nova.conf with info for each new cluster seems painful.
17:29:58 <Sabari__> @hartsocks No issues :)
17:29:59 <garyk> hartsocks: https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
17:30:19 <hartsocks> #action Sabari_ follow up on how to have VCDriver interact with vCenter using "tags" or other means (maybe regex again?)
17:30:31 <garyk> danwent: i think that oslo config now has a feature that enables configuration updates without restarting services
17:30:32 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section
17:31:17 <danwent> garyk: interesting, but it still requires a manual step of updating nova.conf. It would be nice if the person could simply identify the cluster as a Nova cluster when they were creating it in vCenter.
17:31:33 <garyk> danwent: agreed
17:32:15 <kirankv> regex seems to be the easiest approach
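A sketch of the regex idea, purely illustrative: match cluster names from the inventory against a configured pattern so a new matching cluster is picked up without editing nova.conf.

```python
# Illustrative regex filter over the vCenter inventory: any cluster
# whose name matches the configured pattern is managed by Nova, so a
# newly created matching cluster needs no config edit.
import re


def managed_clusters(all_cluster_names, pattern):
    matcher = re.compile(pattern)
    return [name for name in all_cluster_names if matcher.match(name)]


print(managed_clusters(
    ['nova-prod-01', 'nova-prod-02', 'backup-cluster'], r'nova-.*'))
# -> ['nova-prod-01', 'nova-prod-02']
```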
17:32:36 <hartsocks> #action hartsocks rough draft of VMware configuration validation blueprint
17:33:39 <garyk> hartsocks: danwent: it may be worth discussing a sync between nova and vCenter - enabling us to try and have as little configuration as possible
17:34:07 <danwent> yeah, I'd at least like to flesh out a long-term goal for that, even if we don't get to it in havana
17:34:45 <garyk> i'd be happy to try and investigate and propose something moving forwards
17:34:54 <hartsocks> Either way, I'd like the driver to work with the inventory in vCenter more intelligently.
17:35:07 <hartsocks> I don't like having to push everything to configurations.
17:35:41 <hartsocks> okay.
17:36:13 <hartsocks> There's a new blueprint up...
17:36:25 <hartsocks> #link https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
17:36:39 <hartsocks> "The goal of this blueprint is to implement VMDK driver for cinder."
17:37:12 <garyk> hartsocks: configuration section support - https://review.openstack.org/37539
17:37:17 <hartsocks> Just so people are aware this is underway.
17:37:48 <hartsocks> okay, on my list
17:38:14 <garyk> the bp was very detailed and informative
17:39:26 <hartsocks> as far as I'm concerned… if you're going to make a mistake on a BP put more detail rather than less. At least people will have some idea of what you're up to.
17:39:55 <Sabari__> I would like to discuss https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler sometime. When is this targeted?
17:39:56 <garyk> :)
17:40:23 <hartsocks> Anyone have comment on this?
17:40:51 <hartsocks> @kirankv I think that's yours
17:41:17 <garyk> Sabari__: i'll go over the BP. There is a regular scheduling meeting. It may be worthwhile bringing that up there too.
17:41:31 <Sabari__> @garyk Sure
17:42:01 <garyk> Sabari__: i think that there are some scheduling features currently being worked on that may cover part of this (it is worth checking too)
17:42:18 <hartsocks> #action follow up on https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler with scheduler team
17:43:45 <hartsocks> Is there anything else on that blueprint?
17:43:58 <kirankv> well implementation of this bp requires changes in scheduler core
17:44:11 <Sabari__> @garyk I will also check the new scheduling features and see if we have something in common
17:44:23 <kirankv> this is applicable today, even for libvirt
17:44:28 <garyk> Sabari__: thanks.
17:45:08 <Sabari__> Yes, I also like the idea of booting instances into specific resource pools (at runtime). Maybe it can be added to the bp
17:45:51 <kirankv> rp/clusters isn't much of a difference
17:46:26 <hartsocks> hmm...
17:46:26 <Sabari__> yes, but the configuration is with the driver. What if certain instances need to be booted in a particular resource pool? This is a use case in VC
17:46:42 <garyk> Sabari__: https://wiki.openstack.org/wiki/Meetings/Scheduler
17:46:56 <Sabari__> This cannot be done dynamically until the scheduler supports forced provisioning on certain resource pools.
17:47:17 <Sabari__> garyk: Thanks
17:47:17 <kirankv> the scheduler just sees them as compute nodes, so we would have to do the same when we want to boot an instance to a specific compute node(s)
17:48:28 <hartsocks> so… why not a resource pool == a compute node … since everything has at least a default resource pool
17:48:35 <hartsocks> ?
17:49:06 <hartsocks> Then we just shift things to talking about resource pools everywhere...
17:49:47 <ssurana> I like that idea, as that would eliminate the confusion around which host/cluster to use for provisioning the instances
17:50:12 <Sabari__> I think scheduler already supports hints like force_hosts and the user can specify the compute node
17:50:14 <kirankv> that's how we got rp support done; had to pull it out of the multi-cluster patch since reviewers said the patchset was too big
17:50:30 <kirankv> @Sabari: yes it does
17:50:36 <garyk> Sabari__: correct, the user can specify a specific node by a hint
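As a usage sketch of that hint: force_hosts is driven by an admin passing the "zone:host" form of the availability zone at boot, and with the VC driver each cluster (or resource pool) surfaces as one such compute node. The call below matches the python-novaclient API of the Havana era; the credentials, IDs, and node name are placeholders.

```python
# Hedged example of pinning a boot to one compute node via the
# "zone:host" availability-zone form that drives force_hosts.
# Import path matches python-novaclient of the Havana era; all names
# and IDs below are placeholders.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'demo',
                     'http://keystone.example.com:5000/v2.0')
nova.servers.create(
    name='pinned-vm',
    image='IMAGE_UUID',
    flavor='1',
    availability_zone='nova:cluster1',  # cluster1 = the target compute node
)
```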
17:51:16 <Sabari__> It all boils down to which part of the inventory the driver is looking at in VC
17:51:29 <hartsocks> I think the shift to resource pools is pretty important. It might be more important than the shift to multi-cluster. Here's why I think that...
17:51:40 <Sabari__> May be we need to align our strategy towards that.
17:51:52 <kirankv> in the initial multi-cluster driver patch, if you specify a resource pool name it would show up as a compute node, similar to how it works for clusters today
17:52:10 <hartsocks> Many of the bugs I've traced have to do with identifying where a particular part of the inventory is in relationship to another.
17:52:31 <ssurana> I would say most of the bugs are because of that..
17:52:39 <Sabari__> What is the basic unit of compute in VC mapped to a nova driver ? Currently, we have different drivers handling this problem
17:53:06 <hartsocks> I see host == None in most situations
17:53:17 <hartsocks> That forces the vCenter to make a guess at placement.
17:53:28 <hartsocks> If we switch to Resource Pool ...
17:53:49 <hartsocks> the structure of the driver shifts so this isn't a problem.
17:54:08 <hartsocks> (or as frequent a problem)
17:55:19 <hartsocks> @kirankv reviewing your multiple-clusters-using-a-single-compute-service patch is my next priority this week.
17:55:32 <ssurana> The only caveat is that resource pools can be nested, so we need a proper way to represent them in the configuration so that we can uniquely identify them
17:56:04 <hartsocks> @kirankv I will try and get you something in by this time next week as a review, but I think you've got a good comment to consider right now.
17:56:34 <hartsocks> @ssurana so something like PoolA.PoolB ?
17:56:41 <kirankv> with resource pools, especially the ones configured to use the parent's capacity, providing the resource stats to the scheduler becomes complicated
17:56:51 <hartsocks> @ssurana or PoolA->PoolB
17:56:59 <ssurana> well i think we will need to use the inventory path format in the config
17:57:16 <hartsocks> ah
17:57:33 <kirankv> @ssurana: the full path support was in the multi cluster patch
17:57:49 <hartsocks> @kirankv if we just use full inventory paths for resource pools we can avoid a large number of bugs.
17:57:49 <kirankv> follows the pattern dc/folder/cluster/rp1/rp2
17:58:09 <ssurana> yes thats correct
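A small sketch of the full-path idea: treat the configured string as a unique inventory path and split it, so nested pools in the dc/folder/cluster/rp1/rp2 pattern are unambiguous. The layout is only what kirankv described above; no real inventory is consulted.

```python
# Sketch: uniquely identify a (possibly nested) resource pool by its
# full inventory path, following the dc/folder/cluster/rp1/rp2 pattern
# mentioned above. Purely illustrative.
def parse_inventory_path(path):
    parts = [p for p in path.split('/') if p]
    if len(parts) < 3:
        raise ValueError("expected at least datacenter/folder/cluster: %r" % path)
    datacenter, folder, cluster = parts[0], parts[1], parts[2]
    resource_pools = parts[3:]  # empty means the cluster's default pool
    return datacenter, folder, cluster, resource_pools


print(parse_inventory_path('dc1/host/prod-cluster/rp1/rp2'))
# -> ('dc1', 'host', 'prod-cluster', ['rp1', 'rp2'])
```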
17:58:15 <hartsocks> @garyk you can do more than one patch per blueprint right?
17:58:34 <garyk> hartsocks: yes.
17:58:54 <garyk> hartsocks: it is just a matter of updating the commit message to contain the bp
17:58:54 <hartsocks> @kirankv so perhaps you could provide multiple smaller patches to accomplish your goal?
17:59:25 <hartsocks> @kirankv this would allow you to give all your features but also break up the patch so people wouldn't be over-whelmed in reviewing it.
17:59:51 <kirankv> well i'm waiting for the first one to get through so that i can push the remaining ones; adding them as dependent patches will cause me to do a lot of rebasing
17:59:54 <hartsocks> @garyk thanks
18:00:16 <hartsocks> @kirankv you can make a series of patches dependent on each other…
18:00:26 <hartsocks> I started to sketch a tutorial on it here...
18:00:43 <hartsocks> #link http://hartsock.blogspot.com/2013/06/the-patch-dance-openstack-style.html
18:01:09 <kirankv> ok, will check; hopefully it's simple to do :)
18:01:21 <hartsocks> I didn't break it down to specific commands… but if you know enough about git, the pattern will be enough to make rebasing less painful.
18:01:59 <hartsocks> We're out of time.
18:02:11 <garyk> have a good one guys
18:02:15 <hartsocks> The room at #openstack-vmware is always open for impromptu discussion.
18:02:18 <kirankv> @hartsocks: thanks
18:02:36 <hartsocks> Sorry the meeting was a little chaotic this week… but I feel we had a great discussion.
18:02:43 <hartsocks> Thanks to everyone for your contributions.
18:02:48 <hartsocks> See you back here next week.
18:02:52 <hartsocks> #endmeeting