17:01:43 #startmeeting VMwareAPI
17:01:44 Meeting started Wed Jun 12 17:01:43 2013 UTC. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:45 or all hartsocks :)
17:01:46 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:48 The meeting name has been set to 'vmwareapi'
17:01:53 #topic salutations
17:02:08 Greetings programmers! Who's up for talking VMwareAPI stuff and nova?
17:02:16 I am :)
17:02:34 anyone else around?
17:02:38 HP in the house?
17:02:41 Canonical?
17:02:58 Hi!
17:03:05 I'm here
17:03:05 man, it's like we are trying to get reviews for our nova code :)
17:03:12 Hi
17:03:16 *lol*
17:03:47 ivoks, are you around?
17:03:52 looks like Sabari_ is here now
17:04:08 Hi, this is Sabari here.
17:04:36 Okay...
17:04:44 #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:04:51 Here's our agenda today.
17:04:55 Kind of thick.
17:05:11 Since we started last week with bugs, I'll open this week with blueprints.
17:05:23 #topic blueprint discussions
17:05:46 I have a special note from the main nova team...
17:05:51 They would like us to look at:
17:06:03 #link https://blueprints.launchpad.net/nova/+spec/live-snapshots
17:06:17 This is a new blueprint to add live-snapshots to nova
17:06:20 hartsocks, Canonical is lurking
17:06:34 @med_ hey!
17:06:53 * hartsocks gives everyone a moment to scan the blueprint
17:07:25 Do we have folks working on or near this blueprint? Can anyone speak to how feasible it is to get this done?
17:08:49 * hartsocks listens to the crickets a moment
17:09:31 No comments on the "live-snapshots" blueprint?
17:10:03 note: I need to talk to the main nova team about this tomorrow and say "yes we can" or "no we can't"
17:10:06 from a technical feasibility it is :)
17:11:05 What about person-power? Do we have someone who can take this on?
17:12:46 #action hartsocks to follow up on "live-snapshots" to find an owner for the vmwareapi implementation
17:12:52 Okay, moving on...
17:12:59 hartsocks: would be good to check with our storage pm + eng folks on this.
17:13:08 do you know alex? if not, I can introduce you to him.
17:13:29 @danwent we should definitely follow up then…
17:13:52 Next blueprint:
17:14:00 #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:14:09 How is this coming along?
17:15:00 uploaded a new patch set, got rid of many other minor improvements to make the patch set small for review
17:15:21 Looks like we need to look at your patch-set 7 collectively.
17:15:23 kirankv: great
17:15:35 kirankv: are unit tests added yet?
17:15:51 last I checked we were in WIP status waiting for those, but that was a while ago
17:16:09 yes, I will run the coverage report, check, and try adding tests if we have missed any for the new code added
17:16:53 great
17:17:26 I am looking to see coverage on any newly added code… in general, if you add a new method I want to see some testing for it.
17:17:27 I'll mention here that if you want me to track, follow up, or review your changes … add me as a reviewer: http://i.imgur.com/XLINkt3.png
17:17:44 This is chiefly how I will build weekly status reports.
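[Editor's note: for readers following along, here is a minimal sketch of the idea behind the multiple-clusters-managed-by-one-service blueprint discussed above: the driver reports one nodename per configured cluster, and nova then tracks each cluster as its own compute node for the scheduler to choose from. The option name, class name, and nodename format below are illustrative only, not necessarily what kirankv's patch set uses.]

```python
# Minimal sketch only -- not the actual patch set under review.
# Assumes oslo.config (already a nova dependency); a real driver would wire
# this into VMwareVCDriver rather than a standalone class.
from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.MultiStrOpt('cluster_name',
                    default=[],
                    help='vCenter cluster names managed by this single '
                         'nova-compute service (illustrative option).'),
])


class MultiClusterNodeSketch(object):
    """Sketch of the node-reporting side of the blueprint."""

    def get_available_nodes(self, refresh=False):
        # Nova's resource tracker creates one compute node record per
        # nodename returned here, so the scheduler can place instances
        # on any of the configured clusters.
        return list(CONF.cluster_name)
```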
17:17:57 Next up:
17:18:12 #link https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images
17:18:43 This is the VMware Templates as Glance images blueprint.
17:18:50 I'm waiting for the multi-cluster one to go through before submitting this one
17:18:57 It is currently slated for H-2
17:19:01 Hm...
17:19:17 You can submit a patch and say Patch A depends on Patch B
17:19:22 There is a button..
17:19:58 I'll see about writing a tutorial on that for us.
17:20:01 I'm only concerned about rebasing two patch sets
17:20:23 You can cherry-pick Patch A for the branch you use for Patch B
17:20:43 It's a bit more than I want to go into in a meeting, but … there's a "cherry pick" button in Gerrit
17:20:48 even now I haven't rebased the current patch set, and on the openstack-dev mailing list I noticed that the preferred way is to rebase every patch set and submit
17:21:07 Sure.
17:21:17 These are not mutually exclusive activities.
17:21:20 Both are possible.
17:21:28 Both are preferably done together.
17:21:34 Our reviews are taking a long time.
17:21:42 Let's try and do reviews regularly.
17:21:58 I will start sending emails Monday and Friday to highlight patches that need review.
17:22:07 ok, let me see if I can submit a patch this week,
17:22:15 reviews are going through from a +1 perspective
17:22:24 hartsocks: yes, and we also need to get more nova core developers reviewing our patches
17:22:35 we need to get core reviewers' attention
17:22:47 Let's make sure that we can say:
17:23:01 "If *only* more core developers gave their approval we would be ready"
17:23:08 Right now, this is not always the case.
17:23:43 hartsocks: agree. we need to make life as easy as possible for the core reviewers by making sure the obvious comments have already been addressed before they spend cycles.
17:24:01 med_: who does Canonical have as a nova core dev?
17:24:04 Aren't all the nova reviews being monitored by nova core reviewers?
17:24:18 @Divakar they are but...
17:24:35 @Divakar it's like twitter… so much is happening it's easy to lose the thread
17:24:39 Divakar: in theory, there are just a LOT of them, so sometimes areas of the codebase that fewer people are familiar with get less love
17:24:58 also is dansmith around and listening?
17:25:18 I think he is a nova core who has attended the vmwareapi discussions before
17:25:37 hartsocks: am I thinking of the right person?
17:25:46 @russellb are you there?
17:25:47 We need to see if we can talk to russellb on how to get core reviewers to look into VMware-related blueprints and bug fixes
17:25:56 @danwent I've talked with Russell the most.
17:26:06 hartsocks: yes, but PTLs are very busy :)
17:26:30 so definitely let's encourage him, but we also need to make sure others are paying attention to vmwareapi reviews as well.
17:26:42 @danwent yeah, we should probably bring him in only rarely.
17:27:13 I think if we have 8 +1 votes and we are waiting for two +2 votes that looks pretty clear.
17:27:32 yes, but there's a reason we have core devs :)
17:27:34 I think it will also look like we are a concerted and coordinated effort.
17:27:56 anyway, I think we all agree on the need for more core reviews.. I am continuing to encourage people, and I'd appreciate help from anyone else who can do the same
17:28:01 I was not asking for russellb's time.. as PTL he can direct his core reviewers' attention to these
17:28:55 Okay… let's agree that our followups should be to...
17:29:00 1. get more core reviewers
17:29:17 2. be more vigilant on reviews/feedback cycles ourselves
17:29:18 sending a mail with the link to the review asking for +2 might be another option when things are not working
17:29:41 @Divakar that has not worked favorably for me…
17:30:37 Let's table this topic since we can't do more.
17:30:55 #action solicit participation in reviews by core developers
17:31:14 #action get regular +1 reviews to happen more frequently
17:31:19 These are for me.
17:31:25 Last blueprint...
17:31:34 #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:31:46 Has anyone followed up with the developer working on this?
17:32:47 Anyone from Canonical follow up with Yaguang Tang?
17:33:08 hartsocks: hey
17:33:42 @Daviey hey, how are we doing? Will we meet the H-2 deadline for this?
17:34:15 Remember: it can take *weeks* for the review process to work out.
17:34:36 That makes July 18th our H-2 deadline kind of "tight" at that rate of speed.
17:34:41 Ugh
17:35:07 question on the blueprint: by "ephemeral disks", does Tang mean thin provisioned, or something else?
17:35:07 I will follow up with him
17:35:25 I solicited some help on "ephemeral disks"
17:35:38 I have two different understandings...
17:35:45 hartsocks: help a newbie out :)
17:35:53 1. it's a disk that "goes away" when the VM is deleted
17:36:04 2. it's a "RAM" disk
17:36:14 ah, got it
17:36:27 Someone was going to follow up with the BP author on that...
17:36:49 Okay...
17:36:56 #topic Bugs!
17:37:04 Or "ants in your pants"
17:37:11 Tell me, first up....
17:37:25 Are there any newly identified blockers we have not previously discussed?
17:38:05 Things look good?
17:38:14 https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:38:35 No *new* news is good news I suppose...
17:38:36 hartsocks: we're still having issues when more than one datacenter exists, right?
17:38:51 #link https://bugs.launchpad.net/nova/+bug/1180044
17:38:52 Launchpad bug 1180044 in nova "nova boot fails when vCenter has multiple managed hosts and no clear default host" [High,In progress]
17:38:53 and I haven't seen anyone looking at https://bugs.launchpad.net/nova/+bug/1184807
17:38:54 So I'll go...
17:38:55 Launchpad bug 1184807 in nova "Snapshot failure with VMware driver" [Low,New]
17:39:06 This is my status update on that.
17:39:23 Chiefly, the bug's root cause is...
17:39:30 once the driver picks a host...
17:39:43 it ignores the inventory-tree semantics of vCenter.
17:39:52 This is the root cause for *a lot* of other bugs.
17:40:10 For example: pick HostA but accidentally pick a datastore on HostB
17:40:18 Or … in the case I first observed...
17:40:20 Yes, I agree with hartsocks
17:40:44 Pick HostA and then you end up picking a datastore on HostB which is in a totally different datacenter
17:41:04 This also indirectly applies to clustered hosts not getting used...
17:41:23 and is related to "local storage" problems in clusters...
17:41:39 (but only because it's the same basic problem of inventory trees being ignored.)
17:41:57 I'm currently writing a series of Traversal Specs to solve these kinds of problems.
17:42:09 I am working on the bug related to resource pools and I figured out the root cause: the placement of a VM within a VC is still unclear to the driver
17:42:12 I hope to post shortly.
17:42:45 @Sabari_ post your link
17:43:11 https://bugs.launchpad.net/nova/+bug/1105032
17:43:12 Launchpad bug 1105032 in nova "VMwareAPI driver can only use the 0th resource pool" [Low,Confirmed]
17:43:40 ok, let's make sure this gets listed as "critical"
17:43:53 whichever bug we decide to use to track it.
17:44:02 #action list 1105032 as critical ...
17:44:20 #action list 1180044 as critical
17:44:23 Okay.
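[Editor's note: as a rough illustration of the traversal-spec approach hartsocks describes above (scoping datastore selection to the host that was actually picked, rather than to everything under the vCenter root folder), here is a sketch in the style of the suds-based helpers the vmwareapi driver already uses. Function names and property paths are illustrative; the eventual patches may differ.]

```python
# Sketch only: build a PropertyFilterSpec rooted at the chosen host so the
# property collector returns just the datastores visible to that host
# (and therefore within that host's datacenter).  Assumes a suds-style
# client_factory like the one the vmwareapi session exposes.

def build_host_datastore_traversal(client_factory):
    """Traverse HostSystem -> datastore and stop there."""
    traversal_spec = client_factory.create('ns0:TraversalSpec')
    traversal_spec.name = 'host_to_datastore'
    traversal_spec.type = 'HostSystem'
    traversal_spec.path = 'datastore'   # datastores mounted on this host
    traversal_spec.skip = False
    traversal_spec.selectSet = []       # no further recursion needed
    return traversal_spec


def build_datastore_filter_spec(client_factory, host_ref):
    """Filter spec rooted at host_ref instead of the vCenter root folder."""
    object_spec = client_factory.create('ns0:ObjectSpec')
    object_spec.obj = host_ref
    object_spec.skip = True             # collect the datastores, not the host
    object_spec.selectSet = [build_host_datastore_traversal(client_factory)]

    property_spec = client_factory.create('ns0:PropertySpec')
    property_spec.type = 'Datastore'
    property_spec.pathSet = ['summary.name', 'summary.freeSpace']

    filter_spec = client_factory.create('ns0:PropertyFilterSpec')
    filter_spec.objectSet = [object_spec]
    filter_spec.propSet = [property_spec]
    return filter_spec
```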
17:44:41 BTW….
17:44:46 #link https://bugs.launchpad.net/nova/+bug/1183192
17:44:47 Launchpad bug 1183192 in nova "VMware VC Driver does not honor hw_vif_model from glance" [Critical,In progress]
17:44:47 @Sabari: how are we deciding which resource pool to pick?
17:44:53 We can obviously allow the driver to be placed in a resource pool specified by the user, but still we need to figure out a way to make a default decision.
17:45:21 Currently, we don't. VC places the VM in the root resource pool of the cluster
17:46:05 This is one of those behaviors which might work out fine in production if you just know that this is how it works.
17:46:25 aren't we moving scheduling logic into the driver by having to make such decisions?
17:46:28 Of course, it completely destroys the concept of Resource Pools.
17:47:27 @kirankv yes… we have several blueprints in flight right now that are essentially doing that.
17:47:45 Yes, it depends on the admin and the way he has configured VC. If one chooses not to use Resource Pools, he stays fine with the existing setup.
17:48:30 well, the blueprints leave the decision to the scheduler; the driver only makes resource pools available as compute nodes as well
17:48:31 in a way, managing a resource pool as a compute node resolves this
17:48:43 We have two time-lines to think about.
17:48:54 1. near-term fixes
17:49:03 2. long-term design
17:49:32 danwent, sorry. That would be yaguang as core nova
17:49:41 I don't think we need to worry about the default resource pool in a cluster
17:50:12 let the cluster decide where it wants to place the vm
17:50:14 med_: ah, thanks, didn't realize he was a core. great to hear, now we just need more review cycles from him :)
17:50:24 :)
17:50:51 in case the option of placing it in a resource pool is required, then let's address that by representing the resource pool as a compute node
17:50:55 I will take a look at the blueprint and the patch sets
17:51:40 @Sabari: would like to see your patch set too since it addresses the bug
17:51:40 Is this about ResourcePools or ResourcePools in clusters?
17:52:03 @kirankv Sure, I am working on it.
17:52:06 if we start putting the scheduler logic inside the driver we will break other logical constructs
17:52:53 @hartsocks I was talking about resource pools within the cluster.
17:53:32 @Sabari_ then I have to agree with the assessment about not bothering with a fix. However, stand-alone hosts can have resource pools.
17:53:48 Is this a valid use case:
17:53:57 An administrator takes a stand-alone host...
17:54:08 … creates a Resource Pool "OpenStack"
17:54:23 and configures the Nova driver to only use the "OpenStack" pool?
17:54:32 @hartsocks: agree that a fix is required for stand-alone hosts
17:54:38 Should we allow that?
17:54:43 Yes, that's valid too, but that cannot be done at this moment
17:55:09 @Sabari_ so that's a *slightly* different problem. Is that worth your time?
17:55:39 but I'm not sure how ESX is mostly used - 1. stand-alone 2. using vCenter? I'm thinking it's #2, using vCenter
17:55:43 I think I wholly agree that we don't need to change the Cluster logic though...
17:55:45 the solution could be similar to allowing a regex, as we did for datastore selection
17:56:23 @kirankv good point.
17:57:20 You could have an ESXi driver change and a slightly different change in the VCDriver too
17:57:29 I'll leave that up to the implementer.
17:57:31 we still support the ESXDriver so that a nova-compute service can talk to a standalone host, right? In that case, shouldn't we support resource pools?
17:58:06 @Sabari_ I think you have a valid point.
17:58:52 Anything else on this topic before I post some links needing code reviews (by core devs)?
17:59:21 In the cloud semantics, do we really want to subdivide a host further into resource pools? I agree we will need this in a Cluster though
17:59:50 I think I need to look at the blueprint and the related patch on how it addresses the issue in a cluster. In the meantime, I don't have anything more
18:00:12 @Divakar I'm allowing for a specific use case where we have an admin "playing" with a small OpenStack install. I think we will see that more and more.
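[Editor's note: to make the regex idea kirankv raised concrete, here is a small sketch modeled on the existing datastore_regex behaviour: filter the resource pools a standalone host exposes by a pattern and fall back to the root pool otherwise. The resource_pool_regex option and the helper below are hypothetical, purely to show the shape of such a change.]

```python
# Hypothetical helper: pick a resource pool by regex, falling back to the
# root pool (named "Resources" on a standalone ESX host) so today's
# behaviour is preserved when no pattern is configured.
import re


def select_resource_pool(pools, pool_regex=None):
    """Return the managed object ref of the pool to use.

    :param pools: dict of pool name -> managed object reference
    :param pool_regex: optional pattern from a hypothetical
                       resource_pool_regex config option, e.g. r'^OpenStack$'
    """
    if pool_regex:
        pattern = re.compile(pool_regex)
        for name, ref in pools.items():
            if pattern.match(name):
                return ref
    # No pattern or no match: keep the current default of the root pool.
    return pools.get('Resources')
```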
18:00:30 We're out of time...
18:00:40 I'll post some reviews...
18:00:50 #topic in need of reviews
18:00:52 • https://review.openstack.org/#/c/29396/
18:00:52 • https://review.openstack.org/#/c/29552/
18:00:52 • https://review.openstack.org/#/c/30036/
18:00:52 • https://review.openstack.org/#/c/30822/
18:01:08 hartsocks: just those 4?
18:01:12 Thanks Shawn
18:01:15 These are some patches that looked like they were ready to get some +2
18:01:18 Is there a better way we can track in-flight reviews?
18:01:25 Also...
18:01:40 #link http://imgur.com/XLINkt3
18:01:53 If you add me to your review it will end up in this list.
18:02:25 If I look (just before the meeting) and see a "bunch" of +1 votes I'll consider it ready to get some "+2" love.
18:02:56 Talk to you next week!
18:03:03 #endmeeting