17:00:09 #startmeeting VMwareAPI
17:00:10 Meeting started Wed May 15 17:00:09 2013 UTC. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:14 The meeting name has been set to 'vmwareapi'
17:00:36 Hello everyone, welcome to the first official meeting of the VMwareAPI dedicated subteam.
17:00:42 Who is with us today?
17:00:51 hello!
17:00:56 Hi
17:01:03 hi
17:01:10 Hi !
17:01:20 Hi
17:01:25 Hello
17:01:49 ok, that seems like a quorum :)
17:01:58 Okay then.
17:02:03 For the record...
17:02:05 #info https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:02:44 Let's open first with any blocking or high-priority bugs that people feel may be open but not being addressed...
17:02:56 could we have people quickly intro first?
17:03:03 Ah.
17:03:06 i'm not sure everyone knows everyone
17:03:19 Okay… good point. It is the first meeting.
17:03:22 (especially with IRC nicks)
17:03:32 so i'm dan wendlandt, from vmware
17:03:56 I'm Shawn Hartsock and I'm one of a cadre of developers from VMware working on Nova compute drivers for the VMware APIs
17:04:19 i'm tracy jones from vmware. new to the team - don't officially start until june 1 ;-)
17:04:32 tjones: getting a head start, nice :)
17:04:39 well, i'm Shanshi Shi, from ctrip.com, a Chinese booking website. we started to look into openstack this March.
17:04:54 nice to meet you ssshi
17:05:04 Hi guys - thanks for getting this rolling!
17:05:18 I'm Patrick Woods from IBM and new to this group but quite interested in its success.
17:05:53 ok, anyone else want to intro themselves?
17:05:56 Hi, I'm Kirankumar Vaddi from HP
17:06:11 Hi, I'm Eustace from HP
17:06:16 this is yatin kumbhare from HP
17:06:47 Nice.
17:06:51 sweet, a good number of people. very excited about what this will mean for vmware support in openstack
17:07:09 In honor of the folks in India and China I'm playing "Up All Night" by Daft Punk.
17:07:39 Any other folks want to introduce themselves, or shall we move to our agenda?
17:08:19 I'm Ananth from HP
17:08:36 I work with Kiran and Eustace
17:09:41 BTW: if the last time you used IRC was the 1990s ...
17:09:54 We're using MeetBot in this channel.
17:10:14 #info http://wiki.debian.org/MeetBot
17:10:29 #topic high priority bugs
17:10:54 #info we are tagging all bugs with 'vmware', so you can see all vmware bugs with a search like: https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:11:10 That tag is really important.
17:11:39 Does anyone have any "pet" bugs that might be blockers or otherwise?
17:11:57 https://bugs.launchpad.net/nova/+bug/1178369
17:11:58 Launchpad bug 1178369 in nova "VNC console does not work with VCDriver" [Medium,Confirmed]
17:12:18 https://bugs.launchpad.net/nova/+bug/1171930
17:12:20 Launchpad bug 1171930 in nova "vsphere driver hardcoded to only use first datastore in cluster" [Wishlist,Confirmed]
17:12:23 these are big issues with our internal deployment
17:12:46 it looks like that second bug is duplicated by: https://bugs.launchpad.net/nova/+bug/1104994
17:12:48 Launchpad bug 1104994 in nova "Multi datastore support for provisioning of instances on ESX" [Undecided,In progress]
17:13:09 or maybe the second one has a bit broader scope?
17:13:18 anyway, there seems to be overlap at the least
17:13:53 and the bug you just filed is also quite important: https://bugs.launchpad.net/nova/+bug/1180471
17:13:56 Launchpad bug 1180471 in nova "vmware: vmdk fail to attach to new VMs when using "thin" vmdk" [Undecided,New]
17:14:12 I've assigned the first two to myself initially. I don't actually have the bandwidth to close all these rapidly.
17:14:18 We are addressing 1171930 "vsphere driver hardcoded to only use first datastore in cluster"
17:14:51 Eustace: could you take that over or discuss it with whoever is assigned?
17:14:55 Eustace: it would be good to see a proposed design, as I have talked with a few customers about their needs there, and it would be good to compare notes
17:15:10 we can have the discussion in the bug thread
17:15:22 we should be able to provide the necessary details
17:15:27 but we should likely converge on a single bug, and get this fixed quickly
17:16:07 kirankv: were you also mentioning an issue with how resources are reported by the VCDriver?
17:16:13 danwent: Would you put your requirements in the bug thread for 1171930?
17:16:27 sure
17:17:11 Divakar: sure. in fact, there's a proposal in there already for one possible model, see comments about "datastore_regex"
17:17:20 there are other ways to solve it as well.
17:17:27 danwent: currently vcdriver is reporting the capacity of the first host in the cluster
17:17:58 Hmm… interesting. Is that in the bug description?
17:17:59 Divakar: ok, is there a bug filed for this and tagged with vmware? we'll likely want to backport that change, so we should track it separately.
17:18:23 hartsocks: which item are you talking about… the datastore issue, or the resource issue?
17:18:42 sorry… I was talking about the vcdriver capacity reporting.
17:18:43 I am not sure we filed that as a bug or as a blueprint for an enhancement
17:19:12 We have a blueprint
17:19:23 Divakar: ok, can you file a bug? it should be separate from a blueprint, I suspect, as we will want to backport it to grizzly as a bug fix. If it is just included in a larger blueprint, it will likely get lost.
17:19:29 and not backported.
17:19:37 the datastore_regex is exactly what we used to do; the regex is configured in the database. should we move the datastore calculation to an upper level, and have the vcdriver just report all the datastores to the scheduler/conductor?
17:19:53 ssshi: nice
17:20:16 ssshi: can you post a patch or something? That might at least inform/inspire the bugfixer.
17:20:23 thanks eustace..
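[Editor's note: the "datastore_regex" proposal discussed above amounts to filtering a cluster's datastores by a configured pattern instead of always taking the first one. The sketch below is illustrative only — the function, data shape, and selection policy are assumptions, not the actual nova VCDriver code.]

```python
import re

def select_datastore(datastores, datastore_regex=None):
    """Pick a datastore from a cluster.

    datastores: list of (name, capacity_bytes, free_bytes) tuples.
    datastore_regex: optional pattern; only matching names are eligible.

    Instead of blindly returning the first datastore (the behavior
    reported in bug 1171930), filter by the regex and prefer the
    eligible datastore with the most free space.
    """
    if datastore_regex:
        pattern = re.compile(datastore_regex)
        datastores = [ds for ds in datastores if pattern.match(ds[0])]
    if not datastores:
        raise ValueError("no datastore matched the configured regex")
    return max(datastores, key=lambda ds: ds[2])

ds_list = [
    ("datastore1", 100, 10),
    ("nfs-prod-1", 500, 200),
    ("nfs-prod-2", 500, 350),
]
print(select_datastore(ds_list, r"nfs-prod-.*")[0])  # nfs-prod-2
```

The alternative raised at 17:19:37 — have the driver report all datastores and push the selection up to the scheduler/conductor — would move this filtering logic out of the driver entirely.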
yeah we have it filed as a blueprint https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler
17:20:49 well, I'm still new to the community, might take a while
17:21:28 Divakar: is it a large change?
17:21:29 ssshi: that's part of why we need this kind of forum, to help people start contributing. If you can only post a writeup of what you did to the bug, that is better than nothing.
17:21:59 things tracked as blueprints are usually large and therefore cannot be backported.
17:22:39 danwent: btw, blueprints are up next
17:22:44 hartsocks: ok, could you please tell me where I should post it?
17:23:10 hartsocks: sure, just trying to figure out if this should actually be a bug. but we can talk about it later as well.
17:23:56 that blueprint seems to also include new stuff around resource pools. so yeah, let's talk about it in the next section. it may need to be split.
17:24:31 ssshi: https://bugs.launchpad.net/nova/+bug/1171930 <- just write a comment right here in the bug, be as detailed as you like/can. If you have a patch, post it. Thanks!
17:24:32 Launchpad bug 1171930 in nova "vsphere driver hardcoded to only use first datastore in cluster" [Wishlist,Confirmed]
17:25:06 the only other bug to mention is: https://bugs.launchpad.net/nova/+bug/1178791
17:25:07 Launchpad bug 1178791 in nova "Nova compute backtraces with nova KeyError: 1 when using VMwareVCDriver" [Undecided,New]
17:25:18 hartsocks: thx. i'll do it tomorrow (it's midnight here)
17:25:58 this bug may be quantum-specific. we're going to ping the dev who made the recent change that seems to have caused the issue, as he has already been doing some work with the vsphere driver.
17:26:40 Is this a blocker or is there a work-around?
17:27:18 hartsocks: i believe nova-compute crashes whenever it is restarted if there are instances provisioned and you are using quantum.
17:27:23 if that is the case, seems like a blocker :)
17:27:49 seems related to this commit: https://github.com/openstack/nova/commit/45e65d8c0301da689de1afcbc9f45756e71097ab
17:28:03 Okay. I'll add it to my personal list. (but I need to start delegating here soon)
17:28:23 hartsocks: i think Yaguang Tang will be able to take a look
17:28:37 i was expecting him here at this meeting, but since he's not, i'll ping him offline
17:29:01 Let's get Yaguang Tang to assign himself this bug then.
17:29:20 Okay, any other critical bugs not discussed?
17:30:22 #topic Blueprints
17:30:43 #info https://wiki.openstack.org/wiki/NovaVMware#Proposed_Blueprints
17:30:51 Thanks to Dan for compiling this list.
17:31:11 When I went searching for other blueprints I couldn't find anything not listed.
17:31:33 That did cause me to spot that not all blueprints have the keyword "vmware" in their subjects.
17:33:19 https://blueprints.launchpad.net/nova?searchtext=vmware
17:34:09 about capacity reporting, we want to use nova-compute to manage existing VMs as well, so any ideas how to import them?
17:34:18 one thing that russellb mentioned was that we need to get those blueprints assigned to the people who will actually be pushing the code
17:34:34 otherwise it looks like more work than one person could actually do
17:34:36 https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service - working on this one and
17:34:56 we also need to get the copyright statement removed from the description
17:35:36 and there is an active mail thread; it would help if you could review and leave comments on the whiteboard or send out a mail on the openstack-dev list
17:35:44 HP Legal is looking into this and will give an update on the copyright issue
17:35:57 Good, that's a bit of a faux pas.
17:36:00 We are waiting for HP Legal's feedback
17:36:34 Eustace: yeah, i think it sends a very bad message to the community
17:36:54 content on launchpad is about sharing our ideas and contributing to the community.
17:37:09 copyright notices are, well, kind of the opposite :)
17:37:25 what about in terms of who these BPs should be assigned to?
17:37:39 is anybody following the "virt driver thread" conversation?
17:37:40 #action Eustace will follow up with HP Legal about copyright
17:37:51 kirankv: are you actually going to be the one pushing the code here, and working through reviews for each of them?
17:38:43 yes, for the clusters, resource pools, and templates related blueprints
17:38:51 yes please follow up, i'm not going to approve anything until it's resolved, because i don't think it's appropriate
17:39:08 kirankv: you mentioned in the summit video that you'd post your code in 2 weeks, any updates?
17:39:15 at least as far as the roadmap goes ... doesn't stop you from submitting code
17:39:18 The FC driver needs further discussion
17:39:59 kirankv: yes, i didn't see that blueprint in the list shawn posted. we should add it. hadn't been on my radar.
17:40:45 Can we also set a convention of putting "vmware" in the title or subject of blueprints so they are easy to find?
17:40:59 russellb: with the virt driver thread, i see that the proxy model defined doesn't violate the compute constructs that have been followed with virt drivers
17:41:12 ah, i guess it's on the wiki, just didn't have vmware in the title?
17:41:22 danwent: yep.
17:41:32 hartsocks: agreed. makes searching much easier
17:41:45 rather than maintaining a list manually, you can just have a link that stays up to date.
17:41:47 in fact the proposed proxy model is leveraging all the existing openstack constructs and putting all the things to work together
17:42:34 so it brings the hypervisor manager into use seamlessly
17:43:07 russellb: any comments on that?
17:43:09 #action update all vmware related blueprints to have the keyword vmware somewhere in their subject
17:43:53 I also just modified a few lines to get it to recognize hosts in a cluster
17:44:40 ssshi: you are a hidden treasure! Seriously, let's get some of your work out there for people to see.
17:45:15 ssshi: it would be good to identify what other issues you had to work around, as they are likely to be stumbling blocks for other people as well.
17:45:37 Divakar: as for the specific blueprint about nova-compute managing N clusters instead of 1, that doesn't really change what's there today too much, so it's probably ok ... for now. What I'm leaning toward is a model for the future where vCenter support may not be in this layer at all, and it would be implemented a different way.
17:45:54 I'm still new to the overall architecture. Currently the blocker is how to import existing instances on the hypervisor into the nova database.
17:46:19 ssshi: I would really rather that not be done
17:46:45 ssshi: yeah, that seems to be jumping off a cliff of complexity :)
17:46:54 I think either nova manages something completely, or it doesn't manage it
17:47:10 i.e., nova started it in the first place
17:48:29 importing existing instances is a special use case which needs to be addressed if we are looking to bring a cloud context to the existing datacenter
17:48:34 perhaps one could imagine an approach where you snapshot a traditional vm, create an image, then have nova "boot" the image, but my focus right now is just getting the basic "all nova" use cases working reliably.
17:48:57 so here's our story: our ops team wants to use our platform to manage the existing vsphere production environment
17:49:17 would a one-time import tool solve your problem?
17:49:41 btw, we only have 10 minutes left
17:49:43 this is basically anti-cloud talk :-)
17:49:49 Okay.
17:50:03 This is a "deep future" problem anyway.
17:50:07 Back on task...
17:50:13 so perhaps we should move this to a "futures" discussion, as I think it's a pretty big item, and one that we won't get around to even looking at until the basics are more solid
17:50:14 #topic permanent meeting time
17:50:30 hartsocks: actually, i had one more BP
17:50:31 https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler
17:50:41 Okay...
17:51:02 hartsocks: you can do #undo
17:51:08 hartsocks: for the meeting minutes
17:51:11 #undo
17:51:12 Removing item from minutes:
17:51:20 russellb: thanks.
17:51:33 hmmm ... seems that removed his link instead of the topic action ... might need to do it one more time
17:51:42 #undo
17:51:43 Removing item from minutes:
17:51:44 for this issue, as i mentioned in the bugs section, is there a "bugfix" portion of this that we should split out, so we can potentially backport the change to grizzly? it seems like the blueprint as a whole has a broader scope with resource pools, etc., which would likely make a backport not make sense.
17:51:52 hartsocks: good to go
17:52:05 #link https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler
17:52:36 Divakar: is it correct to say that the existing cluster model needs a bug fix in terms of reporting capacity?
17:52:57 but that the blueprint is proposing this fix, plus additional logic around resource pools?
17:53:07 i think to handle that it would be more than a bug fix
17:53:09 that is what i'm gathering from the blueprint text, but i'm not sure.
17:53:37 the intent of the blueprint is to ensure that an instance gets created successfully based on the capacity reported by the compute node
17:54:01 we can have a followup discussion thread on this
17:54:06 ok, given that we are low on time, perhaps we should take this offline
17:54:07 yeah
17:54:39 does that blueprint mean we could query the vmware api directly for overall capacity, instead of adding up all the instance information from the database?
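[Editor's note: the capacity-reporting issue raised at 17:17:27 and revisited above is that the VCDriver reports only the first host's resources rather than the cluster's total. A minimal sketch of the aggregation idea follows; the function and field names are assumptions for illustration, not the actual driver code, and a real fix would also account for resource pool limits and usage.]

```python
def cluster_capacity(hosts):
    """Aggregate capacity across all hosts in a cluster.

    hosts: list of dicts with 'vcpus', 'memory_mb', 'local_gb' keys.
    Reporting this sum (rather than hosts[0] alone) gives the scheduler
    an accurate picture of the whole cluster.
    """
    totals = {"vcpus": 0, "memory_mb": 0, "local_gb": 0}
    for host in hosts:
        for key in totals:
            totals[key] += host[key]
    return totals

hosts = [
    {"vcpus": 16, "memory_mb": 65536, "local_gb": 1024},
    {"vcpus": 16, "memory_mb": 65536, "local_gb": 1024},
]
print(cluster_capacity(hosts))
```

The question at 17:54:39 — querying the vSphere API directly for overall capacity — would replace this per-host summation with a single call for the cluster's aggregate stats.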
17:54:44 hartsocks: ok, let's talk about meeting time, then in open discussion i can talk about unit tests / ci tests
17:55:02 The last topic is meeting times for next week. Since we're low on time I'll just take a vote on keeping this time or not. If the vote is to change the time… then
17:55:30 I will have a discussion in the next meeting.
17:55:35 #topic next meeting time
17:55:41 #vote keep the same time?
17:55:49 Respond Yes to keep the same time.
17:55:56 yes
17:55:58 Yes
17:56:06 Yes
17:56:10 it's good for me… i worry that the people it is bad for may not be here :)
17:56:32 ok, let's at least keep it the same for the time being
17:56:41 *lol* good point.
17:56:50 it's 1 a.m. in china, might be a little late for people from japan
17:57:13 yes
17:57:29 yes
17:57:48 ok, i need to run, but next week we need to talk about unit tests + ci testing as well :)
17:57:51 maybe 2 or 3 hours earlier would be better
17:58:31 looking forward to seeing all you guys from VMware and HP :-)
17:58:33 Okay. So I'll hold the meeting at the same time next week, but I'll take a vote on the mailing list as well so that people who aren't here can vote too.
17:58:51 ok, thanks folks. talk to you all next week!
17:58:58 #endmeeting