16:59:39 #startmeeting VMwareAPI
16:59:41 Meeting started Wed Aug 7 16:59:39 2013 UTC and is due to finish in 60 minutes. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:42 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:59:44 The meeting name has been set to 'vmwareapi'
16:59:49 Hello stackers!
16:59:57 Who's around to talk vmwareapi stuff and things?
16:59:58 Hi Shawn
17:00:22 hi
17:01:34 anyone else around?
17:01:40 is Kiran around?
17:01:42 garyk was just in the other channel, right?
17:02:16 I saw garyk join
17:02:19 i'm here - just had to make sure that i was not locked in the office :)
17:02:35 okay :-)
17:02:38 Lol
17:03:28 well… VMware people in the house!
17:03:43 from earlier, i think kiran is occupied, though eustace may be available.
17:03:55 I'm here
17:04:01 hey!
17:04:11 hi All...
17:04:19 Hi
17:04:53 Let's cover bugs quickly and focus most of the time on blueprints this week.
17:05:04 #topic bugs
17:05:24 Are there any blocker do-or-die bugs we aren't tracking right now?
17:05:54 I've got 3 new bugs waiting on triage...
17:05:56 hartsocks: do we have someone doing a weekly triage of all bugs tagged with 'vmware'?
17:05:58 :)
17:06:01 good timing
17:06:16 Should we bump the prio of do-or-die bugs for Havana?
17:06:18 danwent: I'm on the Nova Bug Team and it's my official duty.
17:06:23 i'll also try and take a look when i have a few cycles
17:06:36 Any help is appreciated.
17:06:49 hartsocks: k, great
17:06:55 Let's move the target away from Havana-3 if there's a bug that's lower priority.
17:07:12 The bug priority is set by a Nova policy.
17:07:23 … let me see ...
17:07:32 i haven't seen any action on this 'high' bug https://bugs.launchpad.net/nova/+bug/1187853
17:07:34 Launchpad bug 1187853 in nova "VMWAREAPI: Problem with starting Windows instances on ESXi 5.1" [High,Confirmed]
17:07:44 The official guide FYI: https://wiki.openstack.org/wiki/BugTriage
17:08:09 does that bug actually require a code change, or just different metadata with the image?
17:08:22 (i.e., is it a code change or a doc change?)
17:08:28 I think both. I can take it if u want
17:08:46 It has no milestone on it.
17:09:11 booting windows vms seems valuable :)
17:09:15 If you want something "to be done" the "next" milestone communicates that.
17:09:21 danwent: the bug will require a code change as it is hard coded
17:09:26 danwent: that's why I put it as high.
17:09:34 cool
17:09:53 i think that it is a good bug for low hanging fruit - that is, if someone wants to dive in and learn the flow
17:09:55 danwent: anyone who has bandwidth should see it and pick it up.
17:10:14 Let's use the target to communicate to the team whether it's needed or not for Havana. I'll take the bug
17:10:29 thanks.
17:11:17 Anyone else see something that needs addressing?
17:12:21 Dan (or someone) can u make sure the must-haves are marked Havana-3 and the others are not?
17:13:02 tjones: i will take a pass, thanks
17:13:13 Thx
17:13:23 tjones: you're seeing milestone, right?
17:13:28 Yes
17:13:36 and someone is already switching the windows bug
17:14:05 Here's a quick link to filter for open bugs: http://goo.gl/xW4XMu -> just passed through a url shortener 'cuz it's a monster query.
17:14:34 Using that link you can see there are only 14 open bugs to keep track of and prioritize.
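
A minimal sketch of the kind of change discussed for bug 1187853 ("it is hard coded"): let image metadata drive the guest OS identifier instead of a hard-coded value. This is illustrative only, not the actual patch; the Glance property name vmware_ostype and the fallback value otherGuest are assumptions.

    # Sketch only -- not the actual fix for bug 1187853.
    # Idea from the discussion: stop hard-coding the vSphere guest OS
    # identifier and let a Glance image property drive it instead.

    DEFAULT_OS_TYPE = "otherGuest"  # assumed hard-coded default in the driver

    def get_guest_os_type(image_properties):
        """Return the vSphere guest OS identifier for an image.

        image_properties is the dict of Glance image properties; the
        property name 'vmware_ostype' is an assumption for illustration.
        """
        return image_properties.get("vmware_ostype", DEFAULT_OS_TYPE)

With something like this, a Windows image could carry the appropriate guest OS property and boot correctly, while images without the property would keep the current default behavior.
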
17:14:42 hartsocks: https://review.openstack.org/#/c/35374/
17:15:02 are we still planning on trying to merge this patch, or just relying on the larger blueprint from HP to handle this
17:15:40 AFAIK we are still trying to merge this…
17:15:59 No activity in a while.
17:16:26 Sabari is working full out on another project, so I don't think we can count on getting this soon.
17:16:31 ok, i wasn't sure if it was "compatible" with the approach being taken in the HP patch, as they also allow you to add resource pools as hosts, right?
17:16:36 i have a few minor nits with that patch. otherwise it looks good
17:17:06 If it comes to it I can run a merge on the patches to see what comes out.
17:17:37 I try to put these things through their paces with an integration environment before giving a +1
17:17:43 https://bugs.launchpad.net/nova/+bug/1184807 needs h-3 milestone
17:17:45 Launchpad bug 1184807 in nova "Snapshot failure with VMware driver" [High,In progress]
17:18:53 what are our plans for https://bugs.launchpad.net/nova/+bug/1192192 ?
17:18:54 Launchpad bug 1192192 in nova "Nova initiated Live Migration regression for vmware VCDriver" [Medium,Confirmed]
17:19:05 it seems like we should at least give the user a clean error message
17:19:17 as we know this should not work with the VC-driver, correct?
17:19:37 (or does that change once we have resource pools as hosts, since they could be in the same cluster?)
17:19:40 I'm bumping that one out past Havana.
17:20:09 The problem is subtle.
17:20:26 It's only a problem when you use the clusters. If you use one host name per ESXi host then it works.
17:21:02 The issue is that nova can't send a command that could work, since you need a source/destination pair of hosts that are vMotion compatible, and that only happens within clusters.
17:21:27 yeah, i get why it doesn't work.
17:21:35 i'm just wondering if we can have a cleaner failure
17:21:43 for example, seems like there is a method check_can_live_migrate_destination
17:21:45 So it's more interesting in that it potentially highlights an architectural impedance mismatch.
17:22:05 could we implement that method, and just always return false, so the user gets a reasonable error message, rather than a traceback?
17:22:19 That's reasonable.
17:22:30 We can document that live migrations should be via vCenter for now
17:22:46 Ah. More documentation to write.
17:22:52 you know me :)
17:23:13 IIRC other supported hypervisors will try a migration any which way you ask for and then fail.
17:23:20 I'm an expert now. My doc change got merged.
17:23:26 hartsocks: maybe create a public wiki page of things we need to document, so others can pitch in as well
17:23:29 tjones: awesome!
17:23:58 good idea.
17:24:16 #action document the needed documents in a document to speed documentation
17:24:32 doc^4
17:25:06 But we do need to start collecting these. There are starting to be too many for me to keep in my head.
17:25:23 Okay...
17:25:48 so I usually do triage after this meeting based on feedback I get here.
17:26:06 So do we have anything else we need to highlight or triage ASAP?
17:27:05 hartsocks: multi-datacenter
17:27:10 where did we leave that?
17:27:20 Let's move to blueprints then.
17:27:24 #topic blueprints
17:27:30 did we say that we were going to try and get the "simple" patch fixed and backported to grizzly?
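
A minimal sketch of the "cleaner failure" idea discussed above for bug 1192192: implement check_can_live_migrate_destination in the VC driver so the pre-check fails with a readable message instead of a traceback. The method name comes from the discussion; the signature is approximated from the Havana-era driver interface, and raising MigrationPreCheckError (rather than literally returning False) is an assumption about how Nova surfaces pre-check failures.

    # Sketch only -- illustrates the idea from the discussion, not a merged patch.
    from nova import exception


    class VMwareVCDriver(object):  # stand-in for the real driver class
        def check_can_live_migrate_destination(self, ctxt, instance_ref,
                                               src_compute_info,
                                               dst_compute_info,
                                               block_migration=False,
                                               disk_over_commit=False):
            # Nova-initiated live migration is not supported when the driver
            # manages whole clusters, so fail the pre-check with a clear
            # reason instead of letting the request die later in a traceback.
            raise exception.MigrationPreCheckError(
                reason="Live migration is not supported by the VMware VC "
                       "driver; use vCenter-initiated vMotion instead.")
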
17:27:37 So we're in the last push for Havana
17:27:53 #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:28:22 I think there's general consensus that this is *the* most important patch to get into Havana!
17:28:36 This is Kiran's patch. And he's been on vacation a while now.
17:28:43 It looks nearly ready to go.
17:29:22 But there are a few nits that I would want to have cleaned up before a core reviewer saw it and gave it a −2 … which would waste their time and ours.
17:29:51 garyk: you're our backport guru… how bad do you think this will be to backport?
17:30:20 the aforementioned bp?
17:30:25 yeah
17:30:34 multi-something-something.
17:30:41 multi-clusters.
17:30:46 can't backport features to public stable branches
17:30:50 only fixes
17:30:58 or am i misunderstanding something?
17:31:02 yup, danwent is correct.
17:31:09 I'm sorry.
17:31:17 we'll probably want to backport it on our "own" stable branch, for customers to use, and post it publicly though
17:31:18 I just scrolled back and re-read what I wrote.
17:31:34 #undo
17:31:35 Removing item from minutes:
17:31:46 #undo
17:31:47 Removing item from minutes:
17:32:00 I'm obviously itching to get to blueprints.
17:32:26 LOL
17:32:53 hartsocks: it's fine, we can talk about the datacenters thing later if you prefer
17:33:11 I'm trying to find the patch.
17:33:20 I have a patch that's in abandoned state.
17:33:47 I found out my account was screwed up and it looks like the patch doesn't show as under my id even though it has my name on it. Weird.
17:34:11 Short answer: I think I can fix multi-datacenter … and make it easy to backport.
17:35:17 #topic blueprints
17:35:29 #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:35:32 blah blah blah
17:35:45 last update 7 days old.
17:36:21 Next most important...
17:36:27 #link https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
17:36:43 that one is you garyk
17:36:59 yeah - the first draft of the code is ready for review
17:37:08 awesome
17:37:10 review can be found at https://review.openstack.org/#/c/40245/
17:37:24 There is one lacking development - need to boot from volume.
17:37:30 This will be addressed tomorrow.
17:37:42 nice
17:37:54 great work gary!
17:38:09 gracias. can you guys please take a look and provide comments
17:38:52 okay...
17:38:54 the patch is based on https://review.openstack.org/#/c/40105/6 - there are exceptions when volume attachment/detachment takes place
17:39:41 If it's a requirement then I'll bump it to H3 and call it high-priority
17:40:35 Looks like you already cleaned up my nit-picks from earlier.
17:40:38 outstanding!
17:40:44 yeah
17:41:06 Okay… so next on my list is...
17:41:09 #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:41:19 Which surprised us as needed by a customer.
17:41:31 that sentence was awkward.
17:41:50 We were surprised when a customer asked for this behavior specifically.
17:41:58 How's that?
17:42:11 the customer is always right
17:42:26 So this actually has a really solid use case I didn't know about.
17:42:45 It is used in conjunction with the config-drive feature.
17:43:28 I'm trying to classify the failure of config-drive with vSphere as a bug… since the feature is there, it's just dead. But this could be forced to be labeled as a blueprint.
17:43:44 In which case we'll be tracking one more blueprint.
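
For reference on the config-drive feature discussed above: a minimal sketch of how a tenant requests a config drive at boot, using the Havana-era python-novaclient v1_1 API. The credentials, image name, and flavor name below are placeholders; per the meeting, the config_drive path is what is currently broken with the vSphere driver.

    # Sketch only: requesting a config drive at boot with python-novaclient.
    # Credentials, image and flavor names below are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client("demo", "secret", "demo-project",
                         "http://keystone.example.com:5000/v2.0")

    image = nova.images.find(name="ubuntu-12.04")   # hypothetical image name
    flavor = nova.flavors.find(name="m1.small")     # hypothetical flavor name

    # config_drive=True exercises the code path the blueprint/bug is about.
    server = nova.servers.create(name="config-drive-test",
                                 image=image,
                                 flavor=flavor,
                                 config_drive=True)
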
17:44:12 #action query Tang on progress
17:44:30 Last (and least) up is: https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:44:31 hartsocks: small, well-understood features as bugs is likely reasonable, but i would expect that someone might shoot us down if that didn't merge by the havana-3 deadline.
17:45:55 This is my little feature to expose use_linked_clone to the CLI
17:46:29 It's there to support the aforementioned use case where the ephemeral disk, linked_clone, and config-drive work in concert to quickly spin up a VM
17:47:08 i love how you guys are using "aforementioned" in a sentence so much today ;-)
17:47:20 *lol*
17:48:16 So the multi-cluster patch (see multi-cluster, multi-datastore, multi-datacenter … so much multi it confuses me)
17:48:32 … supports a basic deploy.
17:48:46 The 2 cinder blueprints support volume operations!
17:49:46 The ephemeral blueprint + linked_clone blueprint + config-drive support this common do-or-die use case:
17:49:46 Here's where i have the BPs for Havana listed (fyi): https://wiki.eng.vmware.com/OpenStack/2013#Critical_BP_for_Havana_3
17:50:32 1. spin up an instance with an OS drive + ephemeral drive
17:50:38 2. first boot with config drive
17:51:03 3. pull in dependencies using apt-get, yum, puppet, whatever
17:51:31 4. config network (er… i guess 3 and 4 swap places)
17:51:45 5. pull in custom application/data
17:51:57 … on the ephemeral drive
17:52:21 So it turns out that all flavors except the smallest usually include 2 drives for the instance.
17:52:37 The first for the OS… the second for application-specific stuff.
17:52:42 And now we all know!
17:53:12 And that's why these are our most important fixes/BPs to get in.
17:53:19 #topic open discussion
17:53:24 Any thoughts?
17:53:26 guys, i am sorry but i need to leave now. i'll be online a little later
17:53:44 garyk: thanks for hanging around :-)
17:54:48 Okay.
17:54:54 If that's it we're done early!
17:55:09 gr8
17:55:24 I'll be pinging all the key players in email so we all stay in step as we drive toward the Havana-3 deadline.
17:55:30 Happy stacking!
17:55:32 #end
17:55:34 hartsocks: thanks, that is great
17:55:48 #endmeeting
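
A minimal sketch of the use_linked_clone exposure discussed in the meeting (vmware-image-clone-strategy): a per-image override that falls back to the driver-wide setting. This is not the merged blueprint; the Glance property name vmware_linked_clone and the idea of passing the driver-wide flag in as a parameter are assumptions for illustration.

    # Sketch only -- not the merged vmware-image-clone-strategy change.
    # Idea: let an image property override the driver-wide linked-clone setting.
    # The property name 'vmware_linked_clone' is an assumption.

    def decide_linked_clone(image_properties, global_default):
        """Return True if this image should be deployed as a linked clone.

        image_properties is the Glance image property dict;
        global_default is the driver-wide use_linked_clone setting.
        """
        override = image_properties.get("vmware_linked_clone")
        if override is None:
            return global_default
        # Image properties arrive as strings, so normalize boolean spellings.
        return str(override).lower() in ("true", "1", "yes")

The driver's spawn path would consult this when deciding whether to copy the base image or link against it, which is what lets the quick spin-up workflow above (linked clone + ephemeral drive + config drive) stay fast.
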