17:01:41 <hartsocks> #startmeeting VMwareAPI
17:01:45 <openstack> Meeting started Wed May 22 17:01:41 2013 UTC. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:46 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:48 <openstack> The meeting name has been set to 'vmwareapi'
17:01:59 <hartsocks> #topic salutations
17:02:06 <hartsocks> Greetings programmers!
17:02:13 <danwent> :)
17:02:20 <ssshi> hi
17:02:25 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:02:30 <hartsocks> Our agenda today.
17:02:50 <hartsocks> Since this is only our second meeting I don't think it would be amiss to go around and do a quick introduction.
17:03:13 <hartsocks> I'm Shawn Hartsock from VMware and I'm working on VMwareAPI in Nova full time. Who else is here?
17:03:24 <danwent> hi, it's dan wendlandt, also from vmware
17:03:43 <woodspa> I'm Patrick Woods from IBM.
17:03:49 <ssshi> Shanshi Shi here from ctrip.com
17:04:12 <danwent> is eustace not here?
17:04:13 <kirankv> Hi, kiran from HP
17:04:22 <hartsocks> Are the HP guys in the house? Ah hi!
17:04:24 <Eustace> Eustace from HP
17:04:40 <rsacharya> Srinivas from HP
17:04:59 <danwent> i know a few others are lurking, but perhaps they are shy :)
17:05:13 <Daviey> o/ hola
17:05:15 <hartsocks> They can just say hey if they like… or don't
17:05:25 <danwent> ok, let's dive in :)
17:05:26 <Daviey> I'm loud and proud to be here. o/
17:05:31 <hartsocks> *lol*
17:05:53 <danwent> yes Daviey, no one would mistake you for being shy
17:06:05 <hartsocks> In case it gets dropped off due to time again… I've opened #openstack-vmware and I hang out there a bit.
17:06:17 <hartsocks> You can have general discussion there.
17:06:18 <danwent> all the cool kids are doing it :P
17:07:08 <hartsocks> If that's everyone, we have some unfinished discussion from last time...
17:07:23 <hartsocks> #topic continued Havana Blueprint discussion
17:07:42 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:07:52 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler
17:08:39 <hartsocks> Eustace, did you get back with legal on those copyright notices? Who can help with that?
17:08:43 <danwent> we need to get those blueprints assigned to the "havana" series
17:08:49 <danwent> and targeted to a specific milestone
17:08:54 <danwent> either h-1 or h-2
17:09:00 <danwent> it's getting quite late for h-1
17:09:11 <Eustace> Discussions still in progress with Legal
17:09:24 <danwent> if we want to shoot for h-1, i'd suggest we get the code posted for a work-in-progress review ASAP
17:09:33 <danwent> or actually, it may be too late already
17:09:59 <hartsocks> I'm thinking it's already too close to the deadline unless the code can be posted *right now*
17:10:00 <danwent> https://wiki.openstack.org/wiki/Havana_Release_Schedule
17:10:24 <danwent> hartsocks: yeah, i agree, if HP is still dealing with legal stuff.
17:10:25 <kirankv> I don't think the copyright text is preventing them from getting tagged to a specific milestone
17:10:51 <danwent> kirankv: ah, i misunderstood what Eustace was talking about
17:10:58 <danwent> do you have permission to post the code for review publicly?
17:11:09 <danwent> or do you need legal approval for that as well?
17:11:27 <kirankv> ok, the code in its work-in-progress state will be sent for review before the cutoff
17:11:47 <danwent> kirankv: what cutoff are you referring to?
17:11:48 <Eustace> We can post the code publicly
17:12:02 <kirankv> h-1
17:12:31 <danwent> kirankv: i expect a lot of review comments, so we likely need it posted for feedback more than a week before it would need to merge.
17:13:07 <danwent> and the merge cutoff for H-1 is likely next tuesday
17:13:08 <kirankv> from a merge perspective h-2 seems realistic
17:13:16 <danwent> kirankv: i agree.
17:13:27 <danwent> I would still encourage us to start getting feedback as soon as possible though.
17:13:37 <kirankv> agree that there will be a lot of comments and posting it earlier would help
17:13:52 <danwent> hartsocks: can you work with russellb to make sure these blueprints get assigned to the 'havana' series and the 'h-2' milestone?
17:14:05 <hartsocks> Okee dokee…
17:14:13 <russellb> it goes into my review queue when it's assigned, the milestone is set, and it's proposed to the havana release series
17:14:14 <danwent> if kirankv thinks they can get code posted for WIP in a week, h-2 should be a good target.
17:14:26 <hartsocks> #action hartsocks to work with russellb to tag blueprints to H-2
17:14:27 <kirankv> we had done our dev using G3 rc1 and it's taking some time to get it onto havana
17:15:28 <danwent> russellb: there's already an 'assignee' in lp, if that is what you mean, or are you talking about something else?
17:15:50 <russellb> just need to set havana-2, and then set the series goal to havana
17:15:56 <danwent> got it.
17:16:00 <russellb> then it goes into my list of stuff proposed for havana that i need to review/approve
17:16:08 <danwent> no one on our side has the permissions, so hopefully you can do that.
17:16:12 <danwent> ok, got it.
17:16:25 <russellb> once you're the assignee, you should be able to set the milestone and series
17:16:38 <Daviey> russellb: are you sure?
17:16:48 <danwent> ok, so kirankv, can you set the milestone on https://blueprints.launchpad.net/nova/+spec/accurate-capacity-of-clusters-for-scheduler ?
17:16:50 <russellb> 99.8% sure of that :-)
17:16:51 <danwent> to h-2?
17:16:59 <danwent> we can test it live :)
17:17:10 <Daviey> oh sorry, thought we were talking about a bug.
17:18:06 <kirankv> milestone and series have been set
17:18:11 <hartsocks> #undo
17:18:12 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0x3370650>
17:18:27 <hartsocks> awesome.
17:18:37 <kirankv> will do the same for the other blueprint as well
17:18:52 <danwent> ok, great
17:18:58 <hartsocks> #action kirankv will set milestones on assigned blueprints
17:19:28 <danwent> ok, in terms of other blueprints, I think we'll be putting one in for being able to do volumes based on vSphere datastores
17:19:42 <danwent> this is something we talked about at the summit, but I don't think there is a BP for it yet
17:20:20 <hartsocks> #link https://blueprints.launchpad.net/nova?searchtext=vmware
17:21:02 <danwent> hartsocks: yeah, i haven't seen one for it.
17:21:22 <danwent> but this would likely be a blueprint first for cinder
17:21:38 <danwent> and then a bug or blueprint to add 'attach' support for this type of datastore in nova.
17:21:43 <danwent> or rather, this type of volume
17:21:52 <Daviey> danwent: Is a Quantum integration one planned? I understood that is a current area of weakness?
17:22:12 <danwent> Daviey: there is the existing NVP integration that will work.
17:22:37 <danwent> Daviey: we're also looking at doing a quantum integration with the non-NVP vmware networking, for existing deployments.
17:22:50 <Daviey> danwent: Ah cool, there is interest in that as well.
17:22:53 <danwent> if there's anyone interested in working on that, please let me know, as I'm trying to find resources for it.
17:23:17 <danwent> i have some people within vmware that can likely help, but they may not be the full owner.
17:23:57 <danwent> ok, shall we move on to bugs?
17:24:00 <hartsocks> Anything else on blueprints?
17:24:02 <ssshi> there's also a glance bp https://blueprints.launchpad.net/glance/+spec/hypervisor-templates-as-glance-images
17:24:31 <Daviey> danwent: I think we can offer some help, but probably not own it
17:24:44 <Daviey> I can't speak on behalf of whoever would be driving it.
17:24:46 <Eustace> We are working on a solution for that BP
17:24:52 <Eustace> nearly done
17:25:02 <danwent> Daviey: ok, let's sync on that offline.
17:25:25 <danwent> Eustace: ok, looks like it's already targeted for h-2
17:25:57 <Eustace> we'll be able to get that code up for review within a few days
17:26:04 <hartsocks> great.
17:26:05 <danwent> ok, great.
17:26:09 <ssshi> that's great
17:26:28 <hartsocks> anything else on blueprints?
17:27:00 <hartsocks> #topic high priority bugs
17:27:19 <danwent> #link all bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:27:29 <danwent> https://bugs.launchpad.net/nova/+bug/1180897
17:27:31 <uvirtbot> Launchpad bug 1180897 in nova "nova compute fails when vmware cluster has more than one ESXi Host" [Critical,Confirmed]
17:27:49 <danwent> seems like this should be a very high priority…
17:27:56 <hartsocks> I'm working on https://bugs.launchpad.net/nova/+bug/1178369 which was identified as a blocker… I am planning to move to 1180897
17:27:58 <uvirtbot> Launchpad bug 1178369 in nova "VNC console does not work with VCDriver" [Medium,In progress]
17:28:09 <hartsocks> I was on 1180897 but the VNC thing was raised as a blocker.
17:28:40 <hartsocks> I have a work-in-progress patch on the VNC issue … still diagnosing the cluster issue.
17:28:55 <danwent> yup, i think that makes sense. if there's someone else with available cycles though, perhaps they could jump on the cluster issue as well.
17:29:00 <Daviey> Personally, i'm not sure how significant lack of VNC is as a blocker.
17:29:23 <hartsocks> Daviey: IKR? That's why I was ignoring it. Turns out it's a huge issue.
17:29:42 <Daviey> I mean, if that is a user's primary interface into instances.. then they are doing it wrong™
17:29:56 <danwent> Daviey: it's certainly a blocker in our internal cloud deployments, which use Horizon
17:30:13 <hartsocks> and there you go…
17:30:22 <hartsocks> So let's not beat a dead horse.
17:30:23 <danwent> it's just an ease-of-use thing.
17:30:29 <Daviey> ok
17:30:36 <hartsocks> We're working on these two big issues.
17:30:39 <danwent> especially if you are using private networks, and don't want everything to have a floating ip
17:30:48 <hartsocks> What else is out there?
17:30:54 <danwent> https://bugs.launchpad.net/nova/+bug/1180779
17:30:56 <uvirtbot> Launchpad bug 1180779 in nova "VMwareESXDriver and VMwareVCDriver report incorrect stat" [Undecided,In progress]
17:31:04 <danwent> two things here
17:31:20 <danwent> there is a patch referenced in this bug that actually tries to address two things. I think it should be split, one patch per bug
17:31:26 <danwent> https://review.openstack.org/29552
17:31:59 <danwent> kirankv: this is your patch, correct?
17:32:22 <danwent> the second point on this bug is about how the "multiple datastores" issue is resolved.
17:32:29 <Eustace> yes, it's kirankv's patch
17:32:52 <kirankv> yes, but it's a bit of effort to get them separated since it needs changes in the unit tests as well
17:32:54 <danwent> i think different people have different thoughts on how the datastore issue should be addressed.
17:33:22 <danwent> kirankv: i'm pretty sure once a nova core dev looks at it, their advice would be to split it
17:33:29 <kirankv> well since it's a cluster driver, IMHO only shared datastores must be used
17:33:31 <hartsocks> We had another developer at vmware who has a patch for https://bugs.launchpad.net/nova/+bug/1171930 but is holding on to it because it's obvious HP is working in this area.
17:33:32 <uvirtbot> Launchpad bug 1171930 in nova "vsphere driver hardcoded to only use first datastore in cluster" [Wishlist,In progress]
17:33:48 <danwent> hartsocks: yeah, that is why i think we need a discussion on the right way to solve the problem.
17:34:17 <danwent> it seems like the currently proposed patch will not allow the use of local disks ever?
17:34:27 <danwent> or am i misreading that?
17:34:45 <kirankv> yes, shared datastores for the cluster driver, and all datastores for the ESX driver
17:35:07 <hartsocks> is the VCDriver to be interpreted as "the cluster driver"?
17:35:52 <danwent> i think cluster will be common, but I wasn't seeing it as required
17:36:11 <danwent> i know of customers talking about using it in a non-clustered (i.e., one cluster per ESX) model.
17:36:19 <danwent> not sure if there are other issues with that model though.
17:36:20 <kirankv> yes, if it's acceptable to add another driver called the cluster driver and move all cluster-specific changes to this driver, i'd like to go with that approach
17:36:35 <hartsocks> Hmm...
17:36:39 <ssshi> i'm also working on that. when the scheduler receives a capabilities dict, it simply makes a copy. so i replaced the copy function with a customizable function to process the capabilities.
17:37:12 <danwent> anyway, we don't need to have the design discussion in this meeting, but my point is that i think there are still design discussions to be had here, and so I think we need to talk about that before this portion of the patch can merge.
17:37:35 <hartsocks> Shall we set up a separate discussion for this design talk?
17:37:53 <danwent> i'd suggest splitting the two bug fixes, merging the first one, and discussing how to address the datastore issue.
17:38:08 <danwent> hartsocks: we can do it right after this meeting in #openstack-vmware if people like.
17:38:43 <ssshi> agree with danwent
17:38:47 <hartsocks> #action separate discussion in #openstack-vmware on datastore design issue
17:39:09 <hartsocks> Any other high-priority bugs?
17:39:42 <danwent> I need to file a bug on this, but the new vif-plugging changes broke quantum integration in nova when the vcdriver is used.
17:39:54 <danwent> fix should be relatively simple though.
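[Editor's note: for context on ssshi's 17:36:39 comment, the following is a minimal sketch of the idea described there: replacing the scheduler's plain copy of a driver's reported capabilities dict with a pluggable processing function. All class, function, and key names below are hypothetical illustrations, not the actual Nova scheduler API.]

    # Sketch only: a customizable hook in place of a plain dict copy.
    # Names are invented for illustration, not real Nova code.
    import copy

    def default_process_capabilities(capabilities):
        # Original behavior: take the reported dict as-is.
        return copy.deepcopy(capabilities)

    def cluster_process_capabilities(capabilities):
        # Example customization: fold per-host stats reported by a
        # cluster-backed driver into totals the scheduler can use.
        # 'per_host_stats', 'vcpus', and 'local_gb' are assumed keys.
        processed = copy.deepcopy(capabilities)
        hosts = processed.pop('per_host_stats', [])
        if hosts:
            processed['vcpus'] = sum(h.get('vcpus', 0) for h in hosts)
            processed['local_gb'] = sum(h.get('local_gb', 0) for h in hosts)
        return processed

    class HostState(object):
        """Tracks one host's capabilities; process_fn replaces the plain copy."""

        def __init__(self, process_fn=default_process_capabilities):
            self._process_fn = process_fn
            self.capabilities = {}

        def update_capabilities(self, reported):
            # Route the reported dict through the pluggable function.
            self.capabilities = self._process_fn(reported)

    # Usage: a cluster driver's per-host stats get aggregated.
    state = HostState(process_fn=cluster_process_capabilities)
    state.update_capabilities({'per_host_stats': [{'vcpus': 8, 'local_gb': 500},
                                                  {'vcpus': 16, 'local_gb': 1000}]})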
17:40:10 <danwent> we REALLY need to be moving toward an automated CI infrastructure to get this working
17:40:14 <danwent> and keep it working :)
17:40:17 <hartsocks> danwent: great! I need a simple bug to onboard some new developers we just hired
17:40:41 <danwent> hartsocks: ok, sounds good. will send you a link.
17:41:02 <hartsocks> #topic bug etiquette
17:41:24 <hartsocks> Not on the official agenda… but… I'd like to just stress that if someone's marked that they are working on a bug…
17:41:52 <hartsocks> please don't just take the bug away from them. Perhaps send them an email or make a comment on the bug explaining why you are taking it over.
17:41:55 <Daviey> (danwent: We should talk about CI soonly.)
17:42:06 <danwent> Daviey: happily!
17:42:27 <hartsocks> we have some time now.
17:43:24 <danwent> Daviey: i know some HP folks are looking into CI as well, and hopefully mtaylor can help us on that front.
17:43:35 <hartsocks> #topic CI discussion
17:43:53 <Daviey> danwent: How happy have openstack-ci been to introduce real vmware testing?
17:43:56 <danwent> anyone from the HP team able to talk about the status there? I think the ball was in your court.
17:44:31 <Eustace> We are not in the loop on that topic
17:44:33 <Daviey> I, probably incorrectly, assumed they'd not want to touch non-free components.
17:45:10 <danwent> Daviey: well, openstack on vSphere is a priority for HP, and HP contributes a lot of resources to openstack-ci, including developers, so I think there's plenty of opportunity here.
17:45:49 <danwent> especially given that the majority of existing nova devs don't dev-test on vSphere, automated CI tests are that much more important :)
17:45:55 <Daviey> I think that is the best direction :)
17:46:11 <danwent> #action danwent to contact the HP contact about CI work again.
17:46:39 <hartsocks> okay...
17:46:51 <hartsocks> Last topic on my agenda...
17:46:58 <hartsocks> #topic group meeting time
17:47:12 <hartsocks> I held a vote on the meeting time...
17:47:22 <hartsocks> #link http://i.imgur.com/u4XyLLL.png
17:47:33 <hartsocks> here's a screenshot of the votes.
17:48:03 * Daviey has to leave now. Thanks o/
17:48:06 <hartsocks> It's pretty clear that this is a good meeting time for most people willing to vote on a SurveyMonkey survey about meeting times.
17:48:42 <hartsocks> I'll hold the voting open one more week
17:48:45 <hartsocks> #link http://www.surveymonkey.com/s/DN8YFSL
17:49:06 <hartsocks> Barring any sudden rush of votes I think this is our regular meeting time.
17:49:26 <danwent> agreed
17:49:59 <hartsocks> Okay then
17:50:07 <hartsocks> #topic open discussion
17:50:22 <hartsocks> Anything else folks need to chat about?
17:51:24 <hartsocks> alright, see you next week
17:51:35 <hartsocks> #endmeeting