17:00:51 <hartsocks> #startmeeting VMwareAPI
17:00:52 <openstack> Meeting started Wed Aug 14 17:00:51 2013 UTC and is due to finish in 60 minutes. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:55 <openstack> The meeting name has been set to 'vmwareapi'
17:00:58 <hartsocks> greetings stackers!
17:01:08 <hartsocks> Who's around to talk VMwareAPI stuffs?
17:01:08 <garyk> hi guys
17:01:23 <tjones> hi
17:02:07 <hartsocks> anyone else?
17:02:09 <danwent> hi
17:02:42 <hartsocks> kirankv: ping
17:02:48 <danwent> was anyone from vmware at the cinder meeting?
17:03:00 <tjones> yes, kartik gary and i were there
17:03:12 <hartsocks> yaguang: ping
17:04:01 <hartsocks> danwent: not on my calendar.
17:04:04 <danwent> tjones: awesome, thanks
17:04:20 <danwent> hartsocks: we don't need everyone there, i was just making sure that we had a representative
17:04:29 <hartsocks> danwent: cool
17:04:38 <tjones> hartsocks: don't think we all need to attend ;-) - yeah what dan said
17:05:17 <hartsocks> okay… so do we want to get started?
17:06:09 <hartsocks> #topic bugs
17:06:44 <hartsocks> #link http://goo.gl/pTcDG
17:06:57 <garyk> hartsocks: it's not bugs, just features
17:06:58 <hartsocks> (I shortened the link, it's a long thing)
17:07:13 <hartsocks> So I've got only 3 things left to triage.
17:07:36 <hartsocks> Most of these I'm waiting for confirmation or more info to help reproduce.
17:07:49 <hartsocks> Anyone have a pet bug they need to trot out?
17:08:07 <hartsocks> #link https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:09:05 <hartsocks> We have one "Critical" bug open: https://bugs.launchpad.net/nova/+bug/1207064
17:09:06 <uvirtbot> Launchpad bug 1207064 in nova "VMWare : Disabling linked clone does not cache images on the datastore" [Critical,In progress]
17:09:24 <tjones> im working on that
17:09:33 <hartsocks> I'm not 100% sure this is "Critical" since the "work-around" is "use_linked_clone=True"
17:09:47 <danwent> yeah, i had the same reaction
17:09:52 <tjones> well - and it does work - it just copies again
17:10:07 <tjones> it's more of a performance improvement IMO
17:10:11 <hartsocks> General consensus is bump this down to "High"?
17:10:56 <hartsocks> no objection?
17:11:12 * hartsocks clicks button.
17:11:16 <garyk> hartsocks: your patch set should solve this issue. correct?
17:11:56 <hartsocks> garyk: actually, it exposed the issue in test. The patch is about allowing override behaviors. A customer asked for the ability to control when linked_clone behavior was used.
17:12:06 <hartsocks> garyk: we discussed the feature a while ago.
17:12:21 <hartsocks> garyk: in general, I admit the idea is a bit confusing.
17:12:50 <hartsocks> garyk: I've tried to do a better job of explaining what needs to happen in the code itself.
17:12:56 <tjones> the issue is when use_linked_clone is false the image is always streamed to the instance directory instead of checking vmware_base and copying it there
17:13:20 <tjones> copying it from vmware_base to the instance directory and saving the network bandwidth
17:14:20 <garyk> hartsocks: understood.
17:15:08 <hartsocks> yep. So the *bug* is that use_linked_clone = False … does not mean ...
17:15:14 <hartsocks> … do not use cache = True
17:15:26 <hartsocks> The idea of caching is a separate concern.
17:15:43 <hartsocks> We might want to allow an optional config to tweak caching behavior somehow...
17:15:57 <hartsocks> but that's not what use_linked_clone is about.
17:16:33 <hartsocks> And...
17:16:46 <hartsocks> That's not something to tackle in Havana. The deadlines are too close.
17:17:29 <hartsocks> Any other questions on this topic?
17:18:00 <hartsocks> … okay ...
17:18:11 <hartsocks> I think most people are heads-down right now anyhow.
17:18:55 <hartsocks> I'll point out we have 9 "High" priority bugs in "In Progress" state right now. And one in "Confirmed" state. That implies we have a lot of people cranking away on the driver.
17:19:46 <hartsocks> I've identified these bugs as "fix soon" (but not necessarily higher priority than "High")
17:20:07 <hartsocks> #link https://review.openstack.org/#/c/30822/ - merged!
17:20:33 <hartsocks> #link https://review.openstack.org/#/c/33100/ - needs +2s
17:20:33 <tjones> hurrah!
17:21:03 <hartsocks> #link https://review.openstack.org/#/c/40029/ - needs help and discussion
17:21:28 <tjones> i was hoping dims would be here. i've got some questions on it
17:21:41 <tjones> "it" being https://review.openstack.org/#/c/40029/
17:21:41 <hartsocks> I've picked these out based on which ones support customer-critical use cases. (Thanks to danwent BTW for helping highlight these.)
17:22:14 <hartsocks> tjones: let me try and ping him… 1 sec
17:22:56 <tjones> doesn't look like he's online - at least not in the other rooms
17:23:06 <hartsocks> yep
17:23:10 <hartsocks> oh well.
17:23:50 <hartsocks> #action someone follow up with dims on https://bugs.launchpad.net/nova/+bug/1206584
17:23:52 <uvirtbot> Launchpad bug 1206584 in nova "VMware ESX and vSphere drivers do not support config drive" [High,In progress]
17:24:14 <garyk> hartsocks: i am on that one
17:24:20 <hartsocks> Thanks.
17:25:21 <hartsocks> I have a backlog of 22 reviews / VMwareAPI help to give. But these three are the higher priority ones.
17:25:51 <hartsocks> I'll start adding your names to reviews I need help on.
17:25:59 <tjones> gr8
17:26:04 <hartsocks> With that I think bugs are put to bed.
17:26:12 <garyk> hartsocks: i think that it would be great if everyone jumps in on the reviews
17:26:16 <garyk> there are quite a few open
17:26:29 <garyk> https://review.openstack.org/#/q/message:vmware+OR+message:vcenter+OR+message:vsphere+OR+message:esx+OR+message:vcdriver+OR+message:datastore,n,z
17:26:57 <hartsocks> I've opened a number of documentation bugs and I'm trying to close some of the most pressing this week. I'll be back on reviews by week's end.
17:27:19 <tjones> im spending much of my time on reviews this week
17:27:24 <hartsocks> garyk: nice query
17:27:57 <hartsocks> okay.
17:27:57 <garyk> hartsocks: i thought you wrote it :). guess danwent maybe
17:28:17 <hartsocks> garyk: I don't remember half of what I write these days :-/
17:28:45 <garyk> :)
17:28:57 <hartsocks> garyk: Probably danwent. I've been using a python script that deep links and reads the tags off of the connected blueprint or bug report.
17:29:15 <hartsocks> garyk: it's too shoddily written to share...
17:29:16 <garyk> please note that we have 2 in the queue for the stable grizzly and i guess that there may be more when some of the aforementioned reviews get through
17:30:01 <hartsocks> garyk: can you mention them here quickly for the record before we move to blueprints?
17:30:17 <garyk> hartsocks: sure
17:30:31 <garyk> https://review.openstack.org/#/c/41657/
17:31:20 <garyk> https://review.openstack.org/#/c/40359/ (this is -2 at the moment as the patch in the master has yet to be approved - it was posted so that we can test it with the stable branch)
17:31:32 <garyk> that is the stable branch stuff. we can move on now.
17:31:41 <hartsocks> #action reviews for https://review.openstack.org/#/c/41657/
17:31:52 <hartsocks> #action reviews for https://review.openstack.org/#/c/40359/
17:32:21 <hartsocks> #action reviews for https://review.openstack.org/#/c/33100/ (fixes host stats from earlier)
17:32:26 <hartsocks> #topic blueprints
17:32:47 <hartsocks> Okay. So we have 5 blueprints that are our priority for Havana-3
17:33:08 <hartsocks> The rule is, you *must* have code posted and reviewed by August 22nd.
17:33:46 <hartsocks> So, I wanted to be sure that all 5 of these got into the state where people had a lot of +1s on these and no −1s.
17:34:04 <hartsocks> We had a few blueprints whose code was in −2 abandoned state.
17:34:17 <hartsocks> But, as of this AM everything is in need of reviews.
17:34:30 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:34:48 <hartsocks> #action reviews for https://review.openstack.org/#/c/30282/
17:35:06 <hartsocks> This is Kiran's change. He's a very busy fellow.
17:35:27 <hartsocks> It is also our most important blueprint as it makes the driver *much* easier to use.
17:35:56 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
17:36:05 <hartsocks> This has two reviews:
17:36:22 <hartsocks> #action reviews for https://review.openstack.org/#/c/40105/
17:36:35 <hartsocks> #action reviews for https://review.openstack.org/#/c/40245/
17:37:27 <hartsocks> … this might be the fastest blueprint-to-code-to-merge I've seen on this driver since April. Most things take at least 2 months to get through review.
17:37:37 <tjones> go gary!
17:37:41 <danwent> :)
17:37:54 <garyk> hartsocks: there is also https://review.openstack.org/#/c/41387/ (this is the boot from volume - this is pending cinder support at the moment) we are working on that in parallel
17:38:16 <garyk> i need to make up for the fact that i run very slow
17:38:30 <hartsocks> *lol*
17:39:00 <hartsocks> I've added that review to my list of things to keep a close eye on.
17:39:10 <hartsocks> #action reviews for https://review.openstack.org/#/c/41387/
17:39:28 <hartsocks> I have some other good news ...
17:39:42 <hartsocks> #link https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
17:39:53 <hartsocks> We have a cinder driver out there...
17:40:12 <hartsocks> #action reviews for https://review.openstack.org/#/c/41600/
17:40:22 <hartsocks> AND it has code posted for review!
17:40:25 <hartsocks> Woo!
17:41:25 <hartsocks> But, this is in the Cinder project, not in Nova in case anyone reading the notes later is confused. (I think everyone here knows this already).
17:42:17 <hartsocks> I think if the next two blueprints fall off, we're okay but they are still targeted for Havana-3 so I've not left them out.
17:42:29 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
17:42:38 <garyk> hartsocks: i need to head on home. i'll be online later. have a good one guys and sorry for bailing early
17:42:38 <hartsocks> Which has review:
17:42:52 <hartsocks> garyk: thanks for hanging out gary!
17:43:16 <hartsocks> #action reviews for https://review.openstack.org/#/c/37659/
17:44:25 <hartsocks> … this blueprint allows for dynamic resizing of VMDK based on the flavor. Ephemeral disk issues came up and as part of making this work a change to the ephemeral disk code was made that should hopefully fix the issue. This was recently revised.
17:45:02 <hartsocks> Work on this feature exposed some bugs in that "use_linked_clone" flag's behavior.
17:46:00 <hartsocks> It also exposed the fact that we didn't offer the ability to tune the linked_clone driver behaviors. This was mentioned by some customers on a phone call … way back in June IIRC… and brought up in this meeting.
17:46:02 <hartsocks> So...
17:46:19 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
17:46:29 <hartsocks> #action reviews for https://review.openstack.org/#/c/37819/
17:47:00 <hartsocks> I wrote that as a very simple patch to allow people to override. It should have been very very simple, but just allowing for the control exposed other issues.
17:47:12 <hartsocks> So that covers it.
17:47:41 <tjones> i hope that gets in - it's a pain to have to restart nova when you want a different behavior
17:48:00 <hartsocks> Thanks. :-) That's the idea.
17:48:51 <hartsocks> #topic open discussion
17:49:27 <hartsocks> I assume most folks are heads-down. I'm seeing lots of review activity and email activity outside IRC right now.
17:50:58 <tjones> so you asked me to write a BP on "detecting" the storage adaptor based on the vmdk - this came up in the context of ide vs lsilogic with thin disks. It turns out it's not possible to programmatically detect this from the vmdk file. I would have to try lsilogic and if it fails to boot, try ide. This sounds really hacky to me. Wouldn't the customer normally know what kind of storage adaptor they have with the image?
17:51:56 <hartsocks> tjones: yeah. I agree. So I guess this turns out to be a documentation/education issue.
17:52:43 <tjones> actually if this image is a monolithicSparse type you can read the metadata at the top of the file - but that's only 1 of the many types we support.
I did add to the documentation more info on image types to cover this confusing issue
17:53:25 <hartsocks> #link https://bugs.launchpad.net/openstack-manuals/+bugs?field.searchtext=vmware&search=Search+Bug+Reports&field.scope=project&field.scope.target=openstack-manuals
17:53:36 <hartsocks> Lookit all the docs whot need wrote!
17:54:22 <hartsocks> #link https://bugs.launchpad.net/openstack-manuals/+bug/1212310
17:54:23 <tjones> ugh
17:54:24 <uvirtbot> Launchpad bug 1212310 in openstack-manuals "vmware documentation needed: use of VMDK working types with OpenStack" [Undecided,New]
17:54:45 <tjones> yeah what i wrote is for thick vs thin but i didn't get into sparse
17:54:59 <hartsocks> worms. can of.
17:55:03 <tjones> ja
17:55:59 <hartsocks> So I've started tagging the documentation bugs. Hopefully we can find some folks to work on documentation. It's so bad right now it's sort of a crisis.
17:56:03 <tjones> i don't fully understand how sparse would work with glance. Isn't sparse multiple files?
17:56:50 <tjones> after vmworld maybe we can each take a couple and knock them out. Won't be as nice as a doc person since none of us are doc people
17:57:00 <hartsocks> tjones: that's part of it… we might have to invent a zip archive type to handle it… or just use OVA. What's puzzling is there's code in the driver to handle "sparse" but I have no idea how it would ever be executed.
17:57:18 <tjones> you can say "sparse" but it really just means "thin"
17:57:39 <tjones> it would not work. i think we will need a zip format for ovf too
17:58:00 <hartsocks> … and there's the confusion. "sparse" is not "thin"
17:58:10 <tjones> i know - but the code does not
17:58:17 <hartsocks> ah, a new bug.
17:58:17 <tjones> the code does not know that
17:58:28 <tjones> yes - basically sparse would not work as written
17:58:34 <hartsocks> tada
17:58:45 <hartsocks> *failure*
17:59:02 <tjones> i guess none of our customers are wanting sparse
17:59:03 <hartsocks> Well.
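The monolithicSparse case tjones mentions, reading the metadata at the top of the file, could be sketched roughly as follows. This assumes the embedded text descriptor (with its createType line) is visible near the start of the file; many VMDK variants do not expose it that way, which is the limitation discussed above. The helper name is hypothetical.

```python
import re


def vmdk_create_type(header_bytes):
    """Return the VMDK createType string if an embedded text descriptor is
    visible in the first bytes of the file, else None (illustrative only)."""
    match = re.search(rb'createType\s*=\s*"([^"]+)"', header_bytes)
    return match.group(1).decode("ascii") if match else None
```

For other variants the descriptor may be elsewhere (or absent), so this check alone cannot classify every image the driver accepts.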
17:59:34 <hartsocks> I don't see how we could even support it without OVA the OVF support is only half way there. 17:59:53 <hartsocks> We're out of time. 17:59:59 <tjones> i thnk we should focus on ova personally 18:00:29 <hartsocks> I'll be over on #openstack-vmware in about 5 minutes. 18:00:56 <hartsocks> #endmeeting
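For the record, the per-image override discussed under the vmware-image-clone-strategy blueprint (so that changing clone behavior no longer requires restarting nova) could look roughly like this. The property name "vmware_linked_clone" and the precedence (image property wins over the global flag) are assumptions for illustration, not necessarily what the posted patch implements.

```python
def effective_linked_clone(image_properties, conf_use_linked_clone):
    """Decide linked-clone behavior per image, falling back to nova.conf.

    Illustrative sketch only: property name and precedence are assumed.
    """
    value = image_properties.get("vmware_linked_clone")
    if value is None:
        # no per-image setting: use the global use_linked_clone flag
        return conf_use_linked_clone
    # interpret common truthy spellings of the image property
    return str(value).strip().lower() in ("true", "1", "yes")
```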