17:01:47 <tjones> #startmeeting vmwareapi
17:01:48 <openstack> Meeting started Wed Jun 25 17:01:47 2014 UTC and is due to finish in 60 minutes.  The chair is tjones. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:51 <openstack> The meeting name has been set to 'vmwareapi'
17:01:53 <browne> hi
17:01:58 * mdbooth waves
17:02:00 <arnaud> hi
17:02:05 <kirankv> hi
17:02:14 <KanagarajM> hi
17:02:16 <garyk> hi
17:02:32 <tjones> hi guys - let's get started with approved BP status - we have more BPs approved (hurrah)
17:02:40 <tjones> #link https://blueprints.launchpad.net/openstack?searchtext=vmware
17:02:56 <KanagarajM> do we have an agenda link for this meeting?
17:03:06 <tjones> lets skip vui for now cause i don't see him here
17:03:37 <tjones> garyk you have a couple - lets start with you
17:04:11 <garyk> tjones: ok
17:04:23 <garyk> tjones: the first is the vmware hot plug.
17:04:27 <garyk> there are 3 patches:
17:04:52 <garyk> https://review.openstack.org/#/c/90689/8 - cleanup
17:05:02 <garyk> https://review.openstack.org/#/c/91005/9 - fix tests
17:05:13 <garyk> https://review.openstack.org/#/c/59365/ - actual hot plug
17:05:40 <garyk> this code has been languishing since 2013
17:06:09 <garyk> tjones: in addition to this there is the start of the ESX deprecation. Not a BP though
17:06:22 <tjones> i see that last one still does not have any reviews from this team
17:06:25 <garyk> thats me. go nigeria!
17:07:12 <tjones> i'm wondering if we should have our own review day for vmwareapi patches to get all of us to sit down and review this stuff.  what do you guys think?
17:07:28 <garyk> tjones: no, i don't think so.
17:07:39 <garyk> i just would hope that people would review the code as they do with the other code.
17:08:00 <garyk> reviews are a real issue with no vmware cores
17:08:04 <tjones> well i've been asking people to review that patch for 3 weeks and we (including me) are not doing it
17:08:09 <tjones> so……
17:08:40 <garyk> at the moment we are also blocked by minesweeper
17:08:51 <vuil> hi
17:08:55 <mdbooth> tjones: I've been wondering if it's worth having a maximum queue length beyond which there's no point in anybody doing anything other than review
17:08:58 <arnaud> apparently not anymore
17:08:58 <tjones> yeah i was going to mention that - it's slowly getting back on its feet
17:09:00 <garyk> so i guess we have a fairly decent excuse at the moment
17:09:12 <garyk> mdbooth: there are about 40 vmware patches
17:09:25 <mdbooth> garyk: I'm looking at a list. Needs some tidying.
17:09:27 <garyk> about 30 are trivial
17:09:40 <garyk> what do you mean tidying?
17:09:47 <mdbooth> garyk: Perhaps we need a triage exercise, too
17:09:59 <garyk> there are critical bugs that we need to beg guys to review
17:10:04 <garyk> what more do we need?
17:10:12 <mdbooth> I mean, if we can only get patches reviewed at rate X, there's no point in producing patches faster than rate X
17:10:16 <tjones> lets hold this topic until open discussion ok?
17:10:31 <mdbooth> tjones: Sure, not important for this meeting anyway
17:10:32 <tjones> vui - how's your work going?
17:10:36 <garyk> mdbooth: i do not agree with that. we might as well stop developing then
17:10:41 <mdbooth> garyk: +1 ;)
17:10:55 <garyk> i will spend tomorrow reviewing….
17:11:04 <vuil> the early patches are waiting on minesweeper.
17:11:19 <vuil> one more phase 2 patch is getting updated.
17:11:20 <tjones> you have refactor, ova, and vsan approved as well as oslo.vmware - you have a ton on your plate
17:11:21 <mdbooth> Where's minesweeper's queue?
17:11:26 <vuil> phase 3 as well.
17:11:53 <tjones> mdbooth: i think it is only an internal link
17:11:56 <vuil> oslo.vmware integration was refreshed, hopefully others can put some review eyes on it, or help with updating it too.
17:12:04 <garyk> mdbooth: at the moment it is doing about 4 reviews a day :(
17:12:17 <vuil> ova/vsan are heavily dependent on the refactoring and oslo.vmware
17:12:22 <garyk> there is an issue on the internal cloud.
17:12:24 <tjones> yeah i think you may need a hand.  are there specific things people can help with?
17:12:47 <mdbooth> tjones: I'm happy to take on maintenance of the refactor patches
17:13:05 <mdbooth> And/or oslo.vmware integration
17:13:13 * mdbooth has been looking at both of those a fair bit anyway
17:13:24 <vuil> oslo.vmware is one that we held off for the refactoring, but in the interest of time, I think we need to get back to reviewing them.
17:13:59 <vuil> That will need some fixing. I think mdbooth mentioned something I noticed as well re: session management.
17:14:09 <tjones> ok so just review review review then ?
17:14:18 <mdbooth> vuil: Yeah, that's not a big fix iirc.
17:14:28 <mdbooth> tjones: And rebase, rebase, rebase
17:14:30 <vuil> mdbooth: yeah.
17:14:55 <tjones> who was taking over the ephemeral disk BP from tang? https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:15:10 <garyk> i posted the bp
17:15:18 <garyk> i do not recall seeing any comments
17:15:22 <garyk> i'll do the dev
17:15:28 <tjones> can you change the assignee to you??  it's approved btw
17:15:37 <garyk> ah, i did not see that
17:15:41 <garyk> i am on it then
17:15:44 <tjones> thanks
17:15:55 <tjones> arnaud: how is https://blueprints.launchpad.net/glance/+spec/storage-policies-vmware-store going?
17:16:16 <arnaud> will start to work on it today
17:16:28 <arnaud> I need to do a few modifications
17:16:39 <tjones> ok anything else about approved BP before moving on?
17:16:46 <mdbooth> arnaud: Can we try to ensure that work is based on oslo.vmware?
17:16:54 <arnaud> yep
17:16:58 <mdbooth> I know it's a bit of a moving target atm
17:17:02 <arnaud> definitely
17:17:07 <tjones> #topic BP in review
17:17:09 <arnaud> I posted
17:17:10 <tjones> #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:17:12 <arnaud> a couple of patches
17:17:14 <arnaud> to oslo.vmware
17:17:26 <arnaud> like https://review.openstack.org/#/c/102409/
17:17:32 <tjones> #undo
17:17:32 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0x2e3b350>
17:17:42 <arnaud> which is needed so we don't have to specify the wsdl location manually ....
17:18:09 <tjones> great
17:18:26 <mdbooth> arnaud: You'll want to rebase that on top of rgerganov's oslo.vmware pbm patch
17:18:56 <mdbooth> arnaud: https://review.openstack.org/#/c/102480/
17:18:57 <tjones> hartsocks was working on a wsdl repair BP #link https://blueprints.launchpad.net/nova/+spec/autowsdl-repair should this be included or different?
17:18:58 <arnaud> there is no overlap
17:19:04 <arnaud> afaik
17:19:32 <arnaud> different tjones
17:19:48 <tjones> ok i didn't open it up and look :-)
17:19:51 <mdbooth> tjones: That's only relevant to older vsphere iirc, right?
17:19:55 <tjones> ok lets go to BP in review
17:20:10 <tjones> #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:20:20 <tjones> kirankv: how's yours going?
17:20:37 <kirankv> got some reviews and have responded to them
17:21:14 <kirankv> We are addressing the race condition as well using the same locks used by the image cache cleanup
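[Editor's note: a sketch of the locking pattern kirankv describes, under the assumption that the fetch path and the image cache aging task serialize on the same per-image external lock; the lock name format and the two callables are hypothetical, not the driver's actual API.]

```python
# Sketch only: serialize image-cache writes against the cleanup task by
# taking the same per-image lock it uses. exists_func/fetch_func are
# hypothetical callables supplied by the caller.
from oslo_concurrency import lockutils


def ensure_cached_image(image_id, exists_func, fetch_func):
    lock_name = 'vmware-image-cache-%s' % image_id
    with lockutils.lock(lock_name, external=True):
        # While we hold the lock, the aging task cannot delete the cached
        # copy out from under us, and two fetches cannot race each other.
        if not exists_func(image_id):
            fetch_func(image_id)
```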
17:21:15 <tjones> ok i see you are in discussion with jaypipes on this one
17:21:58 <kirankv> are there concerns about using the config option for cross datastore linked clones?
17:22:22 <mdbooth> kirankv: link?
17:22:28 <kirankv> https://review.openstack.org/#/c/98704/
17:22:36 <jaypipes> what did I do? :)
17:23:10 <mdbooth> Did we get an answer on the performance impact of that?
17:23:26 <kirankv> @jaypipes: do you have further concerns based on my response
17:23:42 <tjones> lol - we were discussing https://review.openstack.org/#/c/98704/5
17:24:50 <jaypipes> kirankv: gimme a few...will respond within 10 mins
17:24:56 <jaypipes> (on the review)
17:25:07 <tjones> ok any other BP we need to discuss?
17:25:21 * mdbooth would still be nervous about creating a performance bottleneck, but not enough to make a fuss about it
17:25:42 <garyk> mdbooth: can you please elaborate about the issue
17:25:52 <garyk> i have not been following the review discussion
17:25:55 <mdbooth> garyk: Later
17:26:02 <mdbooth> But sure
17:26:13 <KanagarajM> if no other BP, i would like to discuss the review https://review.openstack.org/#/c/99623/
17:26:15 <tjones> if we run out of time we can move over to -vmware
17:26:23 <vuil> garyk: it was the uncertainty/impact of having lots of VMs from other datastores using a base disk from a single datastore.
17:27:01 <vuil> seems like the BP change is to not make the behavior of link-cloning across datastores the default.
17:27:25 <mdbooth> Right. And if you find that you've accidentally created a mess, can you clean it up afterwards?
17:27:35 <kirankv> mdbooth: since no write operations happen, the impact on performance should be minimal
17:28:46 <kirankv> vuil: yes the default will be today's behavior, the only improvement will be to do a local copy from another datastore's cache
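[Editor's note: a minimal sketch of the behavior kirankv describes, assuming an oslo.config boolean that defaults to today's behavior; the option name and helper callables are hypothetical, not the names proposed in the spec.]

```python
# Hypothetical option: off by default, so linked clones never span
# datastores unless the operator opts in.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('cross_datastore_linked_clone',
                default=False,
                help='Allow a linked clone whose base disk lives in the '
                     'image cache of a different datastore.'),
], group='vmware')


def choose_clone_source(local_ds, cached_ds, copy_cache_locally):
    """Return the datastore whose cached base disk should back the clone."""
    if cached_ds == local_ds or CONF.vmware.cross_datastore_linked_clone:
        return cached_ds
    # Default path: copy the cached image onto the instance's datastore so
    # the clone's base disk is never read across datastores.
    copy_cache_locally(cached_ds, local_ds)
    return local_ds
```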
17:29:11 <KanagarajM> tjones: i think only 2 mins are left; i want to discuss https://review.openstack.org/#/c/99623/ - shall i bring it to -vmware?
17:29:25 <tjones> KanagarajM: we have 30 more minutes
17:29:57 <tjones> as soon as we are done with the BP we can discuss it
17:30:16 <KanagarajM> tjones: sure thanks
17:30:25 <mdbooth> kirankv: How often is _get_resource_for_node() called, and is vim_util.get_about_info() expensive?
17:31:17 <mdbooth> Sorry, above was for KanagarajM
17:31:27 <KanagarajM> mdbooth: _get_resource_for_node is called on every instance action and during driver initialization
17:32:56 <mdbooth> vim.get_service_content().about
17:33:08 <mdbooth> That's not going to involve a round trip, is it?
17:33:17 <mdbooth> Because the service content object is already cached
17:33:41 <kirankv> KanagarajM: _get_resource_for_node gets called every minute during stats reporting too, doesn't it?
17:34:02 <mdbooth> If it's just plucking an already-cached value, I don't see that it matters
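[Editor's note: a minimal sketch of the point mdbooth is making, with illustrative class and method names: because the ServiceContent object is fetched once at session creation and cached, reading its about info while building the node name is an attribute access rather than another server round trip.]

```python
# Illustrative only -- not the actual driver API. The about info (including
# the vCenter instanceUuid used to make node names unique) is read once and
# memoized, so per-instance-action calls stay cheap.
class NodeNameHelper(object):
    def __init__(self, session):
        self._session = session
        self._about = None

    def _get_about_info(self):
        if self._about is None:
            # get_service_content() returns the object retrieved at login,
            # so .about does not trigger an extra vSphere API call.
            self._about = self._session.vim.get_service_content().about
        return self._about

    def get_node_name(self, cluster_mor):
        # e.g. "domain-c7.<vcenter instance uuid>"
        return "%s.%s" % (cluster_mor.value,
                          self._get_about_info().instanceUuid)
```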
17:37:02 <mdbooth> KanagarajM: Do I recall some interference with 1 cluster per Nova?
17:38:22 <KanagarajM> mdbooth: yea sure, i was proposing 1 cluster per nova-compute  by means of multi backend support
17:38:54 <kirankv> garyk: can you share your proposal for 1 cluster per nova-compute
17:39:40 <garyk> kirankv: at the moment it is just to ensure that we do not have more than one configured
17:39:55 <tjones> are we done with KanagarajM review?
17:39:56 <garyk> the idea was to try and drop the nodename but that would break the driver
17:40:04 <kirankv> no
17:41:04 <kirankv> garyk: so the conf will only have one cluster, is that the correct interpretation
17:41:06 <KanagarajM> tjones: right now i see three +1s from reviewers and am not sure who to ask for a +2 - would anyone please have a look and provide a +2?
17:41:14 <garyk> kirankv: yes :(
17:41:18 <mdbooth> Ok, let's step back. Let's not hold up KanagarajM for something which doesn't even have an approved BP yet :)
17:41:25 <garyk> this will mean you will need to run n processes for n clusters
17:41:40 <tjones> we can't - none of us are core reviewers.  You can go on #openstack-nova and ask for cores to take a look at it
17:41:44 <garyk> it has lots of storage implications and i do not like it. but that is what the community wanted
17:41:58 <tjones> wait until you get a minesweeper run on it though
17:42:07 <kirankv> ok with KanagarajM's bp, we can get the same result and also have the nova computes running as different processes on a single machine
17:42:17 <kirankv> just like how cinder does
17:42:28 <garyk> i have yet to look at the bp
17:42:33 <mdbooth> So, putting the vcenter uuid in the node name makes sense to me
17:43:06 <mdbooth> I'm +1 on the design
17:43:14 <KanagarajM> mdbooth: sure
17:44:08 <arnaud> could we switch to another problem? :)
17:44:33 <KanagarajM> i am done with the review https://review.openstack.org/#/c/99623/, but like to continue on the one cluster per nova-compute
17:44:51 <tjones> yeah i feel like we have lost momentum - what do you have arnaud?  we can continue the one-cluster-per-compute discussion in open discussion
17:45:34 <arnaud> how to run a periodic task at the driver level :)
17:45:39 <KanagarajM> in one line: multi backend would suit the proxy nova-computes - it would let you configure as many drivers as you want in a single nova.conf and run one nova-compute for each driver configured there
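[Editor's note: an illustrative sketch of the cinder-style multi-backend idea KanagarajM outlines, using oslo.config; the option and group names are hypothetical, not the spec's actual schema. One nova.conf lists several cluster backends, and the launcher starts one nova-compute (and one driver instance) per configured backend.]

```python
# Hypothetical config schema, mirroring cinder's enabled_backends pattern:
#   [DEFAULT]  enabled_clusters = cluster1,cluster2
#   [cluster1] cluster_name = ... / host_ip = ...
#   [cluster2] cluster_name = ... / host_ip = ...
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.ListOpt('enabled_clusters', default=[],
                help='Backend sections to start one nova-compute for each.'),
])

cluster_opts = [
    cfg.StrOpt('cluster_name', help='vCenter cluster managed by this backend'),
    cfg.StrOpt('host_ip', help='vCenter server for this backend'),
]


def register_cluster_backends(conf=CONF):
    # The service launcher would iterate these groups and spawn one compute
    # service per backend, much like cinder-volume does per storage backend.
    for backend in conf.enabled_clusters:
        conf.register_opts(cluster_opts, group=backend)
    return conf.enabled_clusters
```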
17:45:51 <mdbooth> arnaud: +1
17:46:12 <arnaud> that's the last issue I need to solve to have the iscsi stuff reliable
17:46:26 <mdbooth> arnaud: Can we kick it off when the driver is initialised?
17:46:58 <tjones> we have a periodic task that does cache cleanup
17:46:59 <KanagarajM> ok, it's more a question of how we create the nova.service() for a driver with compute_manager
17:47:06 <arnaud> last time I talked to Dan Smith, we ended the discussion by saying you should hook into the image cache clock
17:47:09 <mdbooth> tjones: That's not ours iirc
17:47:20 <mdbooth> arnaud: sigh. That's evil.
17:47:24 <arnaud> oh yeah
17:47:25 <arnaud> :)
17:47:27 <tjones> yeah
17:47:30 <vuil> mdbooth:+1
17:47:42 <arnaud> that's why I think we need to come up with a solution...
17:47:51 <arnaud> the problem
17:47:52 <vuil> the cleanup task is common to whoever wants to use it, but can't be used for *this* :-)
17:48:16 <arnaud> being that we seem to have this problem because we are managing a cluster
17:48:28 <mdbooth> arnaud: This needs to be discussed on the mailing list
17:48:29 <tjones> i was not saying we should use the same thread - but we can follow the same pattern right?
17:48:38 <mdbooth> I don't think there's any other way we can get consensus on this
17:48:50 <arnaud> ok let's try that
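[Editor's note: a minimal sketch of one way to run a driver-level periodic task without hooking into the image cache manager's clock, as discussed above: start a fixed-interval looping call from the driver's init_host(). At the time nova carried this code as nova.openstack.common.loopingcall; today it lives in oslo_service. The cleanup callback name is hypothetical.]

```python
# Sketch: a periodic task owned by the driver itself, on its own interval.
from oslo_service import loopingcall


class DriverPeriodicTask(object):
    def __init__(self, volumeops, interval=600):
        self._volumeops = volumeops
        self._interval = interval
        self._timer = None

    def start(self):
        # Called from the driver's init_host(); runs in its own green thread
        # instead of piggybacking on the image cache manager's period.
        self._timer = loopingcall.FixedIntervalLoopingCall(self._run)
        self._timer.start(interval=self._interval,
                          initial_delay=self._interval)

    def _run(self):
        # Hypothetical cleanup: reap stale iSCSI targets left behind when
        # instances move between hosts in the cluster.
        self._volumeops.cleanup_stale_iscsi_targets()
```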
17:49:05 <tjones> ok lets do open discussion cause we already are
17:49:11 <tjones> #topic open discussion
17:49:23 <tjones> 11 minutes - go
17:49:30 <mdbooth> tjones: Can you give us a status on Minesweeper?
17:49:41 <mdbooth> That's an 'all hands to the pumps' problem imho
17:49:54 <tjones> it's up and running 5 slaves - has a ton of reviews to get through
17:50:08 <mdbooth> So it's at full tilt, just backlogged because it's been down?
17:50:25 <tjones> it was derailed by some checkins in tempest, devstack, and some internal stuff
17:50:30 <tjones> yep
17:50:45 <mdbooth> Can it temporarily borrow some resources?
17:51:02 * mdbooth wonders if anybody knows anything about cloud provisioning resources for it ;)
17:51:04 <tjones> not easily - but in the future
17:51:09 <tjones> lol
17:51:34 <mdbooth> Ok
17:51:45 <tjones> we are getting more gear for it but it has to be wired in
17:51:58 <mdbooth> Cool
17:52:00 <tjones> not very cloudy i know
17:52:07 <tjones> what else folks?
17:52:12 <mdbooth> The other one I had was about review consolidation
17:52:30 <garyk> minesweeper is back - https://review.openstack.org/#/c/59365/
17:53:37 <mdbooth> If something is going to be touched by PBM and/or oslo.vmware and/or spawn, can we please make sure we make an explicit and deliberate decision about which comes first
17:53:42 <mdbooth> And rebase accordingly
17:54:05 <garyk> mdbooth: i am not sure i follow
17:54:16 <garyk> these three touch all of the files in the code base for the driver
17:54:17 <vuil> I think the default is to base on spawn,
17:54:28 <vuil> unless there is strong reason to go other way.
17:54:28 <mdbooth> vuil: works for me
17:54:34 <vuil> depends on what patch touches.
17:54:48 <garyk> i am in favor, but it would be nice to know if there were eyes on these patches.
17:55:03 <mdbooth> The respawn patches?
17:55:08 <mdbooth> s/re//
17:55:08 <garyk> i think that there will be conflicts and we just need to be pragmatic.
17:55:18 <garyk> if the conflicts are going to be very big then we should rebase
17:55:24 <mdbooth> garyk: +1
17:55:31 <garyk> but if it is a line or 2 then that is not really needed
17:55:35 <mdbooth> The PBM stuff was a classic example
17:56:08 <garyk> other than the first patch, that code can live in parallel with the rest (i think)
17:56:30 <tjones> gotta get these refactors merged…..
17:56:36 <mdbooth> tjones vuil: Would you like me to own the refactor patches, btw?
17:56:52 * mdbooth will aggressively update and push for review
17:57:05 * mdbooth isn't nearly as loaded as vuil
17:57:10 <garyk> can you push for review without being the owner?
17:57:11 <vuil> That would be helpful for the phase 2 stuff.
17:57:40 <vuil> I am hoping they are mostly ready to go if not for minesweeper.
17:57:52 <tjones> mdbooth: thanks
17:58:15 <vuil> https://review.openstack.org/#/c/87002/ is the last in the phase 2 chain so we need more eyes (hands?) on it
17:58:21 <tjones> mdbooth: can you send that script you have that shows the dependency chain (tired of clicking through…)
17:58:30 <mdbooth> Didn't I post it?
17:58:41 <tjones> don't recall seeing it
17:58:53 <mdbooth> mdbooth/heatmap.git on github
17:58:56 <tjones> vuil: jenkins barfed on that one
17:59:39 <tjones> oh it needs a rebase
17:59:45 <mdbooth> vuil: I'm going to monitor those patches and assume that I can respond/update as required without stomping on your toes
17:59:45 <tjones> sigh///
17:59:47 <vuil> not related
17:59:59 <tjones> ok 1 more minute
18:00:05 <vuil> mdbooth: yep. thanks
18:00:10 <tjones> anything else? let's move over to -vmware
18:00:34 <tjones> #endmeeting