17:01:47 #startmeeting vmwareapi
17:01:48 Meeting started Wed Jun 25 17:01:47 2014 UTC and is due to finish in 60 minutes. The chair is tjones. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:51 The meeting name has been set to 'vmwareapi'
17:01:53 hi
17:01:58 * mdbooth waves
17:02:00 hi
17:02:05 hi
17:02:14 hi
17:02:16 hi
17:02:32 hi guys - let's get started with approved BP status - we have more BPs approved (hurrah)
17:02:40 #link https://blueprints.launchpad.net/openstack?searchtext=vmware
17:02:56 do we have a meeting agenda link for this meeting?
17:03:06 let's skip vui for now cause i don't see him here
17:03:37 garyk you have a couple - let's start with you
17:04:11 tjones: ok
17:04:23 tjones: the first is the vmware hot plug.
17:04:27 there are 3 patches:
17:04:52 https://review.openstack.org/#/c/90689/8 - cleanup
17:05:02 https://review.openstack.org/#/c/91005/9 - fix tests
17:05:13 https://review.openstack.org/#/c/59365/ - actual hot plug
17:05:40 this code has been languishing since 2013
17:06:09 tjones: in addition to this there is the start of the ESX deprecation. Not a BP though
17:06:22 i see that last one still does not have any reviews from this team
17:06:25 that's me. go nigeria!
17:07:12 i'm wondering if we should have our own review day for vmwareapi patches to get all of us to sit down and review this stuff. what do you guys think?
17:07:28 tjones: no, i don't think so.
17:07:39 i just would hope that people would review the code as they do with the other code.
17:08:00 reviews are a real issue with no vmware cores
17:08:04 well i've been asking people to review that patch for 3 weeks and we (including me) are not doing it
17:08:09 so……
17:08:40 at the moment we are also blocked by minesweeper
17:08:51 hi
17:08:55 tjones: I've been wondering if it's worth having a maximum queue length beyond which there's no point in anybody doing anything other than review
17:08:58 apparently not anymore
17:08:58 yeah i was going to mention that - it's slowly getting back on its feet
17:09:00 so i guess we have a fairly decent excuse at the moment
17:09:12 mdbooth: there are about 40 vmware patches
17:09:25 garyk: I'm looking at a list. Needs some tidying.
17:09:27 about 30 are trivial
17:09:40 what do you mean tidying?
17:09:47 garyk: Perhaps we need a triage exercise, too
17:09:59 there are critical bugs that we need to beg guys to review
17:10:04 what more do we need?
17:10:12 I mean, if we can only get patches reviewed at rate X, there's no point in producing patches faster than rate X
17:10:16 let's hold this topic until open discussion ok?
17:10:31 tjones: Sure, not important for this meeting anyway
17:10:32 vui - how's your work going?
17:10:36 mdbooth: i do not agree with that. we might as well stop developing then
17:10:41 garyk: +1 ;)
17:10:55 i will spend tomorrow reviewing….
17:11:04 the early patches are waiting on minesweeper.
17:11:19 one more phase 2 patch is getting updated.
17:11:20 you have refactor, ova, and vsan approved as well as oslo.vmware - you have a ton on your plate
17:11:21 Where's minesweeper's queue?
17:11:26 phase 3 as well.
17:11:53 mdbooth: i think it is only an internal link
17:11:56 oslo.vmware integration was refreshed, hopefully others can put some review eyes on it, or help with updating it too.
17:12:04 mdbooth: at the moment it is doing about 4 reviews a day :(
17:12:17 ova/vsan are heavily dependent on the refactoring and oslo.vmware
17:12:22 there is an issue on the internal cloud.
17:12:24 yeah i think you may need a hand. are there specific things people can help with?
17:12:47 tjones: I'm happy to take on maintenance of the refactor patches
17:13:05 And/or oslo.vmware integration
17:13:13 * mdbooth has been looking at both of those a fair bit anyway
17:13:24 oslo.vmware is one that we held off for the refactoring, but in the interest of time, I think we need to get back to reviewing them.
17:13:59 That will need some fixing. I think mdbooth mentioned something I noticed as well re: session management.
17:14:09 ok so just review review review then ?
17:14:18 vuil: Yeah, that's not a big fix iirc.
17:14:28 tjones: And rebase, rebase, rebase
17:14:30 mdbooth: yeah.
17:14:55 who was taking over the ephemeral disk BP from tang? https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:15:10 i posted the bp
17:15:18 i do not recall seeing any comments
17:15:22 i'll do the dev
17:15:28 can you change the assignee to you?? it's approved btw
17:15:37 ah, i did not see that
17:15:41 i am on it then
17:15:44 thanks
17:15:55 arnaud: how is https://blueprints.launchpad.net/glance/+spec/storage-policies-vmware-store going?
17:16:16 will start to work on it today
17:16:28 I need to do a few modifications
17:16:39 ok anything else about approved BPs before moving on?
17:16:46 arnaud: Can we try to ensure that work is based on oslo.vmware?
17:16:54 yep
17:16:58 I know it's a bit of a moving target atm
17:17:02 definitely
17:17:07 #topic BP in review
17:17:09 I posted
17:17:10 #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:17:12 a couple of patches
17:17:14 to oslo.vmware
17:17:26 like https://review.openstack.org/#/c/102409/
17:17:32 #undo
17:17:32 Removing item from minutes:
17:17:42 which is needed to not have to specify the wsdl location manually ....
17:18:09 great
17:18:26 arnaud: You'll want to rebase that on top of rgerganov's oslo.vmware pbm patch
17:18:56 arnaud: https://review.openstack.org/#/c/102480/
17:18:57 hartsocks was working on a wsdl repair BP #link https://blueprints.launchpad.net/nova/+spec/autowsdl-repair should this be included or different?
17:18:58 there is no overlap
17:19:00 afail
17:19:04 afaik
17:19:32 different tjones
17:19:48 ok i didn't open it up and look :-)
17:19:51 tjones: That's only relevant to older vsphere iirc, right?
17:19:55 ok let's go to BPs in review
17:20:10 #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:20:20 kirankv: how's yours going?
17:20:37 had reviews and have responded back
17:21:14 We are addressing the race condition as well using the same locks used by the image cache cleanup
17:21:15 ok i see you are in discussion with jaypipes on this one
17:21:58 are there concerns on using the config option for cross datastore linked clones?
17:22:22 kirankv: link?
17:22:28 https://review.openstack.org/#/c/98704/
17:22:36 what did I do? :)
17:23:10 Did we get an answer on the performance impact of that?
17:23:26 @jaypipes: do you have further concerns based on my response
17:23:42 lol - we were discussing https://review.openstack.org/#/c/98704/5
17:24:50 kirankv: gimme a few...will respond within 10 mins
17:24:56 (on the review)
17:25:07 ok any other BPs we need to discuss?
17:25:21 * mdbooth would still be nervous about creating a performance bottleneck, but not enough to make a fuss about it
17:25:42 mdbooth: can you please elaborate on the issue
17:25:52 i have not been following the review discussion
17:25:55 garyk: Later
17:26:02 But sure
17:26:13 if no other BPs, i would like to discuss the review https://review.openstack.org/#/c/99623/
17:26:15 if we run out of time we can move over to -vmware
17:26:23 garyk: it was the uncertainty/impact of having lots of VMs from other datastores using a base disk from a single datastore.
17:27:01 seems like the BP change is to not make the behavior of link-cloning across datastores the default.
17:27:25 Right. And if you find that you've accidentally created a mess, can you clean it up afterwards?
17:27:35 mdbooth: since no write operations happen, the impact on performance should be less
17:28:09 vuil
17:28:09 ;
17:28:46 vuil: yes the default will be today's behavior, the only improvement will be to do a local copy from another datastore's cache
17:29:11 tjones: i think only 2 mins are left, i want to discuss https://review.openstack.org/#/c/99623/, shall i bring it to -vmware ?
17:29:25 KanagarajM: we have 30 more minutes
17:29:57 as soon as we are done with the BPs we can discuss it
17:30:16 tjones: sure thanks
17:30:25 kirankv: How often is _get_resource_for_node() called, and is vim_util.get_about_info() expensive?
17:31:17 Sorry, above was for KanagarajM
17:31:27 mdbooth: get_resource_for_node is called on every instance action and during driver initialization
17:32:56 vim.get_service_content().about
17:33:08 That's not going to involve a round trip, is it?
17:33:17 Because the service content object is already cached
17:33:41 Kanagaraj: get_resource_for_node gets called every minute during stats reporting too, doesn't it?
17:34:02 If it's just plucking an already-cached value, I don't see that it matters
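A minimal sketch of the caching point in the exchange above, assuming (as the discussion suggests) that the session fetches the service content once and caches it; the class and helper names here are illustrative, not the actual driver or oslo.vmware code:

    # Illustrative only: a Vim-style session wrapper that caches the
    # service content, so reading 'about' is a local attribute lookup.
    class VimSession(object):
        def __init__(self, soap_client):
            self._client = soap_client
            self._service_content = None

        @property
        def service_content(self):
            # One server round trip the first time; cached afterwards.
            if self._service_content is None:
                self._service_content = self._client.RetrieveServiceContent(
                    'ServiceInstance')
            return self._service_content


    def get_about_info(session):
        # No extra round trip, so calling this from _get_resource_for_node on
        # every instance action and stats report should be cheap.
        return session.service_content.about

If that holds, the per-minute stats reporting path only pays the cost of the first call per session.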
17:37:02 KanagarajM: Do I recall some interference with 1 cluster per Nova?
17:38:22 mdbooth: yea sure, i was proposing 1 cluster per nova-compute by means of multi backend support
17:38:54 garyk: can you share your proposal for 1 cluster per nova-compute
17:39:40 kirankv: at the moment it is just to ensure that we do not have more than one configured
17:39:55 are we done with KanagarajM's review?
17:39:56 the idea was to try and drop the nodename but that would break the driver
17:40:04 no
17:41:04 garyk: so the conf will only have one cluster, is that the correct interpretation
17:41:06 tjones: right now i see 3 reviewers +1 and not sure whom to ask for +2, would anyone please have a look and provide +2
17:41:14 kirankv: yes :(
17:41:18 Ok, let's step back. Let's not hold up KanagarajM for something which doesn't even have an approved BP yet :)
17:41:25 this will mean you will need to run n processes for n clusters
17:41:40 we can't - none of us are core reviewers. You can go on #openstack-nova and ask for cores to take a look at it
17:41:44 it has lots of storage implications and i do not like it. but that is what the community wanted
17:41:58 wait until you get a minesweeper run on it though
17:42:07 ok with Kanagaraj's bp, we can get the same result and also have the nova-computes running as different processes on a single machine
17:42:17 just like how cinder does
17:42:28 i have yet to look at the bp
17:42:33 So, putting the vcenter uuid in the node name makes sense to me
17:43:06 I'm +1 on the design
17:43:14 mdbooth: sure
17:44:08 could we switch to another problem? :)
17:44:33 i am done with the review https://review.openstack.org/#/c/99623/, but would like to continue on the one cluster per nova-compute
17:44:51 yeah i feel like we have lost momentum - what do you have arnaud. we can continue in open discussion on the one cluster per compute
17:45:34 how to run a periodic task at the driver level :)
17:45:39 in one line: multi backend will be suitable for the proxy nova-computes and it will make it possible to configure as many drivers in a nova.conf and run one nova-compute for each of the drivers configured in nova.conf
17:45:51 arnaud: +1
17:46:12 that's the last issue I need to solve to have the iscsi stuff reliable
17:46:26 arnaud: Can we kick it off when the driver is initialised?
17:46:58 we have a periodic task that does cache cleanup
17:46:59 ok, it's more a question of how we create the nova.service() for a driver with compute_manager
17:47:06 last time I talked to Dan Smith, we ended the discussion by saying you should hook into the image cache clock
17:47:09 tjones: That's not ours iirc
17:47:20 arnaud: sigh. That's evil.
17:47:24 oh yeah
17:47:25 :)
17:47:27 yeah
17:47:30 mdbooth: +1
17:47:42 that's why I think we need to come up with a solution...
17:47:51 the problem
17:47:52 the cleanup task is common to whoever wants to use it, but can't be for *this* :-)
17:48:16 being that it seems we have the problem because we are managing a cluster
17:48:28 arnaud: This needs to be discussed on the mailing list
17:48:29 i was not saying we should use the same thread - but we can follow the same pattern right?
17:48:38 I don't think there's any other way we can get consensus on this
17:48:50 ok let's try that
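For the driver-level periodic task question above, a sketch only of the "kick it off when the driver is initialised" idea, not an agreed design (the team deferred this to the mailing list): it assumes the loopingcall helper nova already uses, shown here under its later oslo_service packaging, and _periodic_sync is a placeholder method name.

    # Hypothetical: start a driver-owned periodic task when the driver is
    # initialised, instead of piggy-backing on the image cache manager's clock.
    from oslo_service import loopingcall


    class VMwareVCDriver(object):
        def init_host(self, host):
            # Runs in a green thread; the 60s interval is illustrative.
            self._periodic_timer = loopingcall.FixedIntervalLoopingCall(
                self._periodic_sync)
            self._periodic_timer.start(interval=60)

        def _periodic_sync(self):
            # Placeholder for the reconciliation work mentioned above, e.g.
            # keeping iSCSI-related state in sync with vCenter.
            pass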
17:49:05 ok let's do open discussion cause we already are
17:49:11 #topic open discussion
17:49:23 11 minutes - go
17:49:30 tjones: Can you give us a status on Minesweeper?
17:49:41 That's an 'all hands to the pumps' problem imho
17:49:54 it's up and running 5 slaves - has a ton of reviews to get through
17:50:08 So it's at full tilt, just backlogged because it's been down?
17:50:25 it was derailed by some checkins in tempest, devstack, and some internal stuff
17:50:30 yep
17:50:45 Can it temporarily borrow some resources?
17:51:02 * mdbooth wonders if anybody knows anything about cloud provisioning resources for it ;)
17:51:04 not easily - but in the future
17:51:09 lol
17:51:34 Ok
17:51:45 we are getting more gear for it but it has to be wired in
17:51:58 Cool
17:52:00 not very cloudy i know
17:52:07 what else folks?
17:52:12 The other one I had was about review consolidation
17:52:30 minesweeper is back - https://review.openstack.org/#/c/59365/
17:53:37 If something is going to be touched by PBM and/or oslo.vmware and/or spawn, can we please make sure we make an explicit and deliberate decision about which comes first
17:53:42 And rebase accordingly
17:54:05 mdbooth: i am not sure i follow
17:54:16 these three touch all of the files in the code base for the driver
17:54:17 I think the default is to base on spawn,
17:54:28 unless there is a strong reason to go the other way.
17:54:28 vuil: works for me
17:54:34 depends on what the patch touches.
17:54:48 i am in favor, but it would be nice to know if there were eyes on these patches.
17:55:03 The respawn patches?
17:55:08 s/re//
17:55:08 i think that there will be conflicts and we just need to be pragmatic.
17:55:18 if the conflicts are going to be very big then we should base
17:55:24 garyk: +1
17:55:31 but if it is a line or 2 then that is not really needed
17:55:35 The PBM stuff was a classic example
17:56:08 other than the first patch, that code can live in parallel with the rest (i think)
17:56:30 gotta get these refactors merged…..
17:56:36 tjones vuil: Would you like me to own the refactor patches, btw?
17:56:52 * mdbooth will aggressively update and push for review
17:57:05 * mdbooth isn't nearly as loaded as vuil
17:57:10 can you push for review without being the owner?
17:57:11 That would be helpful for the phase 2 stuff.
17:57:40 I am hoping they are mostly ready to go if not for minesweeper.
17:57:52 mdbooth: thanks
17:58:15 https://review.openstack.org/#/c/87002/ is the last in the phase 2 chain so we need more eyes (hands?) on it
17:58:21 mdbooth: can you send that script you have that shows the dependency chain (tired of clicking through…)
17:58:30 Didn't I post it?
17:58:41 don't recall seeing it
17:58:53 mdbooth/heatmap.git on github
17:58:56 vuil: jenkins barfed on that one
17:59:39 oh it needs a rebase
17:59:45 vuil: I'm going to monitor those patches and assume that I can respond/update as required without stomping on your toes
17:59:45 sigh///
17:59:47 not related
17:59:59 ok 1 more minute
18:00:05 mdbooth: yep. thanks
18:00:10 anything else - let's move over to -vmware
18:00:34 #endmeeting