17:01:41 #startmeeting VMwareAPI
17:01:45 Meeting started Wed May 22 17:01:41 2013 UTC. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:46 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:48 The meeting name has been set to 'vmwareapi'
17:01:59 #topic salutations
17:02:06 Greetings programmers!
17:02:13 :)
17:02:20 hi
17:02:25 #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:02:30 Our agenda today.
17:02:50 Since this is only our second meeting I don't think it would be amiss to go around and do a quick introduction.
17:03:13 I'm Shawn Hartsock from VMware and I'm working on VMwareAPI in Nova full time. Who else is here?
17:03:24 hi, it's dan wendlandt, also from vmware
17:03:43 I'm Patrick Woods from IBM.
17:03:49 Shanshi Shi here from ctrip.com
17:04:12 is eustace not here?
17:04:13 Hi, kiran from HP
17:04:22 Are the HP guys in the house? Ah hi!
17:04:24 Eustace from HP
17:04:40 Srinivas from HP
17:04:59 i know a few others are lurking, but perhaps they are shy :)
17:05:13 o/ hola
17:05:15 They can just say hey if they like… or don't
17:05:25 ok, let's dive in :)
17:05:26 I'm loud and proud to be here. o/
17:05:31 *lol*
17:05:53 yes Daviey, no one would confuse you as being shy
17:06:05 In case it gets dropped off due to time again… I've opened #openstack-vmware and I hang out there a bit.
17:06:17 You can have general discussion there.
17:06:18 all the cool kids are doing it :P
17:07:08 If that's everyone, we have some unfinished discussion from last time...
17:07:23 #topic continued Havana Blueprint discussion
17:07:42 #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:07:52 #link https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler
17:08:39 Eustace, did you get back with legal on those copyright notices? Who can help with that?
17:08:43 we need to get those blueprints assigned to the "havana" series
17:08:49 and targeted to a specific milestone
17:08:54 either h-1 or h-2
17:09:00 it's getting quite late for h-1
17:09:11 Discussions still in progress with Legal
17:09:24 if we want to shoot for h-1, i'd suggest we get the code posted for a work-in-progress review ASAP
17:09:33 or actually, it may be too late already
17:09:59 I'm thinking it's already too close a deadline unless the code can be posted *right now*
17:10:00 https://wiki.openstack.org/wiki/Havana_Release_Schedule
17:10:24 hartsocks: yeah, i agree. if HP is still dealing with legal stuff.
17:10:25 I don't think the copyright text is preventing them from getting tagged to a specific milestone
17:10:51 kirankv: ah, i misunderstood what Eustace was talking about
17:10:58 do you have permission to post the code for review publicly?
17:11:09 or do you need legal approval for that as well?
17:11:27 ok, the code in work-in-progress state will be sent for review before the cutoff
17:11:47 kirankv: what cutoff are you referring to?
17:11:48 We can post the code publicly
17:12:02 h-1
17:12:31 kirankv: i expect a lot of review comments, so we likely need it posted for feedback more than a week before it would need to merge.
17:13:07 and the merge cutoff for h-1 is likely next tuesday
17:13:08 from a merge perspective h-2 seems realistic
17:13:16 kirankv: i agree.
17:13:27 I would still encourage us to start getting feedback as soon as possible though.
17:13:37 agree that there will be a lot of comments and posting it earlier would help
17:13:52 hartsocks: can you work with russellb to make sure these blueprints get assigned to the 'havana' series and the 'h-2' milestone?
17:14:05 Okee dokee…
17:14:13 goes into my review queue when it's assigned, the milestone is set, and then it's proposed to the havana release series
17:14:14 if kirankv thinks they can get code posted for WIP in a week, h-2 should be a good target.
17:14:26 #action hartsocks to work with russellb to tag blueprints to H-2
17:14:27 we had done our dev using G3 rc1 and it's taking some time to get it on h
17:15:28 russellb: there's already an 'assignee' in lp, if that is what you mean, or are you talking about something else?
17:15:50 just need to set havana-2, and then set the series goal to havana
17:15:56 got it.
17:16:00 then it goes into my list of stuff proposed for havana that i need to review/approve
17:16:08 no one on our side has the permissions, so hopefully you can do that.
17:16:12 ok, got it.
17:16:25 once you're the assignee, you should be able to set the milestone and series
17:16:38 russellb: are you sure?
17:16:48 ok, so kirankv, can you set the milestone on https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler ?
17:16:50 99.8% sure of that :-)
17:16:51 to h-2?
17:16:59 we can test it live :)
17:17:10 oh sorry, thought we were talking about a bug.
17:18:06 milestone and series have been set
17:18:11 #undo
17:18:12 Removing item from minutes:
17:18:27 awesome.
17:18:37 will do the same for the other blueprint as well
17:18:52 ok, great
17:18:58 #action kirankv will set milestones on blueprints assigned
17:19:28 ok, in terms of other blueprints, I think we'll be putting one in for being able to do volumes based on vSphere datastores
17:19:42 this is something we talked about at the summit, but I don't think there is a BP for it yet
17:20:20 #link https://blueprints.launchpad.net/nova?searchtext=vmware
17:21:02 hartsocks: yeah, i haven't seen one for it.
17:21:22 but this would likely be a blueprint first for cinder
17:21:38 and then a bug or blueprint to add 'attach' support for this type of datastore in nova.
17:21:43 or rather, this type of volume
17:21:52 danwent: Is there planned to be a Quantum integration one? I understood that is a current area of weakness?
17:22:12 Daviey: there is the existing NVP integration that will work.
17:22:37 Daviey: we're also looking at doing a quantum integration with the non-NVP vmware networking, for existing deployments.
17:22:50 danwent: Ah cool, there is interest in that as well.
17:22:53 if there's anyone interested in working on that, please let me know, as I'm trying to find resources for it.
17:23:17 i have some people within vmware that can likely help, but may not be the full owner.
17:23:57 ok, shall we move on to bugs?
17:24:00 Anything else on blueprints?
17:24:02 there's also a glance bp https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images
17:24:31 danwent: I think we can offer some help, but probably not own it
17:24:44 I can't speak on behalf of who would be driving it.
17:24:46 We are working on a solution for that BP
17:24:52 nearly done
17:25:02 Daviey: ok, let's sync on that offline.
17:25:25 Eustace: ok, looks like it's already targeted for h-2
17:25:57 we'll be able to get that code up for review within a few days
17:26:04 great.
17:26:05 ok, great.
17:26:09 that's great
17:26:28 anything else on blueprints?
17:27:00 #topic high priority bugs
17:27:19 #link all bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:27:29 https://bugs.launchpad.net/nova/+bug/1180897
17:27:31 Launchpad bug 1180897 in nova "nova compute fails when vmware cluster has more than one ESXi Host" [Critical,Confirmed]
17:27:49 seems like this should be a very high priority…
17:27:56 I'm working on https://bugs.launchpad.net/nova/+bug/1178369 which was identified as a blocker… I am planning to move to 1180897
17:27:58 Launchpad bug 1178369 in nova "VNC console does not work with VCDriver" [Medium,In progress]
17:28:09 I was on 1180897 but the VNC thing was raised as a blocker.
17:28:40 I have a work-in-progress patch on the VNC issue… still diagnosing the cluster issue.
17:28:55 yup, i think that makes sense. if there's someone else with available cycles though, perhaps they could jump on the cluster issues as well.
17:29:00 Personally, i'm not sure how significant lack of VNC is as a blocker.
17:29:23 Daviey: IKR? That's why I was ignoring it. Turns out it's a huge issue.
17:29:42 I mean, if that is users' primary interface into instances.. then they are doing it wrong™
17:29:56 Daviey: it's certainly a blocker in our internal cloud deployments, which use Horizon
17:30:13 and there you go…
17:30:22 So let's not beat a dead horse.
17:30:23 it's just an ease-of-use thing.
17:30:29 ok
17:30:36 We're working on these two big issues.
17:30:39 especially if you are using private networks, and don't want everything to have a floating ip
17:30:48 What else is out there?
17:30:54 https://bugs.launchpad.net/nova/+bug/1180779
17:30:56 Launchpad bug 1180779 in nova "VMwareESXDriver and VMwareVCDriver report incorrect stat" [Undecided,In progress]
17:31:04 two things here
17:31:20 there is a patch referenced in this bug that actually tries to address two things. I think it should be split, one patch per bug
17:31:26 https://review.openstack.org/29552
17:31:59 kirankv: this is your patch, correct?
17:32:22 the second point on this bug is about how the "Multiple datastores" issue is resolved.
17:32:29 yes, it's Kirankv's patch
17:32:52 yes, but it's a little effort to get them separated since it needs changes in the unit tests as well
17:32:54 i think different people have different thoughts on how the datastore issue should be addressed.
17:33:22 kirankv: i'm pretty sure once a nova core dev looks at it, their advice would be to split it
17:33:29 well since it's a cluster driver, IMHO only shared datastores must be used
17:33:31 We had another developer at vmware who has a patch for https://bugs.launchpad.net/nova/+bug/1171930 but is holding on to it because it's obvious HP is working here.
17:33:32 Launchpad bug 1171930 in nova "vsphere driver hardcoded to only use first datastore in cluster" [Wishlist,In progress]
17:33:48 hartsocks: yeah, that is why i think we need a discussion on the right way to solve the problem.
17:34:17 it seems like the currently proposed patch will not allow the use of local disks ever?
17:34:27 or am i misreading that?
17:34:45 yes, shared datastores for the cluster driver, and all datastores for the ESX driver
17:35:07 is the VCDriver to be interpreted as "the cluster driver"?
17:35:52 i think cluster will be common, but I wasn't seeing it as required
17:36:11 i know of customers talking about using it in a non-clustered (i.e., one cluster per ESX) model.
17:36:19 not sure if there are other issues with that model though.
17:36:20 yes, if it's acceptable to add another driver called the cluster driver and move all cluster-specific changes to this driver, i'd like to go with that approach
17:36:35 Hmm...
17:36:39 i'm also working on that. when the scheduler receives a dict of capabilities, it simply makes a copy. so i replaced the copy function with a customizable function to process the capabilities.
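[Editor's note: the scheduler change described in the last message above could be sketched roughly as below. This is a minimal illustration under stated assumptions, not the actual nova scheduler code; the names `HostState`, `process_capabilities`, and `shared_datastore_free_gb` are all hypothetical.]

```python
import copy

def default_process(capabilities):
    # Default behaviour: the scheduler just deep-copies the capabilities
    # dict reported by each compute node.
    return copy.deepcopy(capabilities)

class HostState:
    """Minimal stand-in for a scheduler host-state record (sketch only)."""

    def __init__(self, process_capabilities=default_process):
        # Pluggable hook replacing the hard-coded copy, so a driver can
        # reshape its reported capabilities before the scheduler stores them.
        self.process_capabilities = process_capabilities
        self.capabilities = {}

    def update_capabilities(self, capabilities):
        self.capabilities = self.process_capabilities(capabilities)

def vmware_process(capabilities):
    # Hypothetical VMware-specific hook: report only shared-datastore
    # capacity so the scheduler sees accurate free disk for a cluster.
    caps = copy.deepcopy(capabilities)
    caps["free_disk_gb"] = caps.pop("shared_datastore_free_gb", 0)
    return caps
```

The point of the hook is that the scheduler core stays generic: a cluster-aware driver swaps in its own function instead of the plain copy.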
17:37:12 anyway, we don't need to have the design discussion in this meeting, but my point is that i think there are still design discussions to be had here, and so I think we need to talk about that before this portion of the patch can merge.
17:37:35 Shall we set up a separate discussion for this design talk?
17:37:53 i'd suggest splitting the two bug fixes, merging the first one, and discussing how to address the datastore issue.
17:38:08 hartsocks: we can do it right after this meeting in #openstack-vmware if people like.
17:38:43 agree with danwent
17:38:47 #action separate discussion in #openstack-vmware on datastore design issue
17:39:09 Any other high-priority bugs?
17:39:42 I need to file a bug on this, but the new vif-plugging changes broke quantum integration in nova when the vcdriver is used.
17:39:54 fix should be relatively simple though.
17:40:10 we REALLY need to be moving toward an automated CI infrastructure to get this working
17:40:14 and keep it working :)
17:40:17 danwent: great! I need a simple bug to onboard some new developers we just hired
17:40:41 hartsocks: ok, sounds good. will send you a link.
17:41:02 #topic bug etiquette
17:41:24 Not on the official agenda… but… I'd like to just stress that if someone's marked that they are working on a bug…
17:41:52 please don't just take the bug away from them. Perhaps send them an email or make a comment on the bug as to why you are taking it over.
17:41:55 (danwent: We should talk about CI soon.)
17:42:06 Daviey: happily!
17:42:27 we have some time now.
17:43:24 Daviey: i know some HP folks are looking into CI as well, and hopefully mtaylor can help us on that front.
17:43:35 #topic CI discussion
17:43:53 danwent: How happy have openstack-ci been to introduce real vmware testing?
17:43:56 anyone from the HP team able to talk about the status there? I think the ball was in your court.
17:44:31 We are not in the loop on that topic
17:44:33 I, probably incorrectly, assumed they'd not want to touch non-free components.
17:45:10 Daviey: well, openstack on vSphere is a priority for HP, and HP contributes a lot of resources to openstack-ci, including developers, so I think there's plenty of opportunity here.
17:45:49 especially given that the majority of existing nova devs don't dev-test on vSphere, automated CI tests are all that much more important :)
17:45:55 I think that is the best direction :)
17:46:11 #action danwent to contact the HP contact for CI work again.
17:46:39 okay...
17:46:51 Last topic on my agenda...
17:46:58 #topic group meeting time
17:47:12 I held a vote on the meeting time...
17:47:22 #link http://i.imgur.com/u4XyLLL.png
17:47:33 here's a screenshot of the votes.
17:48:03 * Daviey has to leave now. Thanks o/
17:48:06 It's pretty clear that this is a good meeting time for most people willing to vote on a survey monkey survey about meeting times.
17:48:42 I'll hold the voting open for one more week
17:48:45 #link http://www.surveymonkey.com/s/DN8YFSL
17:49:06 Barring any sudden rush of votes, I think this is our regular meeting time.
17:49:26 agreed
17:49:59 Okay then
17:50:07 #topic open discussion
17:50:22 Anything else folks need to chat about?
17:51:24 alright, see you next week
17:51:35 #endmeeting