17:01:10 #startmeeting vmwareapi
17:01:10 Meeting started Wed Jul 2 17:01:10 2014 UTC and is due to finish in 60 minutes. The chair is tjones. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:14 The meeting name has been set to 'vmwareapi'
17:01:34 hi, who's here today?
17:01:46 I am in :-)
17:01:57 hi
17:01:58 hi
17:01:58 hi
17:01:58 o/
17:02:06 hi
17:02:34 hi guys - let's get started with approved BPs
17:02:36 #link https://blueprints.launchpad.net/openstack?searchtext=vmware
17:02:49 mdbooth: vuil: let's start with the refactor as usual
17:03:10 I have two blueprints to discuss: 1) multi-backend: https://review.openstack.org/103054 and 2) NFS glance datastore: https://review.openstack.org/104211
17:03:19 o/ just got in
17:03:42 KanagarajM: when we get to unapproved BPs we can discuss those
17:04:01 So, Vui's phase 2 patches got in :)
17:04:07 I added a few more patches before to clean up ds_util; those are fairly small ones
17:04:15 hurrah!
17:04:21 tjones: thanks
17:04:27 87002 is being broken up further by mdbooth.
17:04:36 Next on the list was some more from vui and a big one by hartsocks
17:04:39 I am working on the same for the phase 3 stuff that follows.
17:04:48 I've split up the big one, as you've probably seen
17:05:03 Initial comments from garyk and rgerganov
17:05:14 the BP has a TON of patches. mdbooth, can you share the link to the bottom patch?
17:05:20 I am going to look them over today.
17:05:38 bottom == 1st to review in the series
17:05:38 I've only looked at comments from the first 2 in the series, but it's predictably going to require updates
17:05:39 are we going to abandon https://review.openstack.org/#/c/87002/ ?
17:05:49 and just update https://etherpad.openstack.org/p/vmware-subteam-juno
17:05:57 rgerganov: I think we should, yes
17:06:00 that seems to be the idea
17:06:18 vuil: yes - i like having the updated list there
17:06:28 I think garyk said some of my patches he reviewed conflict with vuil's refactor patches
17:06:39 I haven't got to that bit yet, but it wouldn't surprise me at all
17:06:43 ugh
17:07:00 My patch series has to begin with https://review.openstack.org/#/c/102224/
17:07:01 mdbooth: yes, they do. i think it was the later ones with the datastore changes
17:07:18 I have reviewed Vui's patches for ds_util and they are all good stuff
17:07:20 Because that conflicts with the refactor significantly
17:07:38 vuil: I'm happy to rebase on top of your patches
17:07:48 But could you please rebase on top of ^^^?
17:07:53 mdbooth: I understand that was the idea too :-)
17:08:06 i'll leave it to you two to sort out offline
17:08:14 That means my series can avoid conflict with both
17:08:20 mdbooth: yeah, let's do that
17:08:51 vuil: next we have oslo, which i think you are also working on but blocked by phase 2, right?
17:09:22 I think we are proceeding in parallel; whichever lands first, the other effort can pick it up.
17:09:45 ok garyk - you have ephemeral and hot plug. any update?
17:10:46 we held off oslo because there was no work that needed it that did not also need the refactoring, but there is some now
17:11:12 tjones: hot plug - reviewed by rgerganov and mdbooth. i need to address some comments from matt
17:11:39 the ephemeral support will be placed on top of the image refactoring (that will wait a few weeks :()
17:11:47 ah ok
17:12:01 the first 2 patches of the hot plug series are good to go and it would be a pity to wait another 6 months
17:12:49 i'll address the comments tomorrow morning
17:12:54 * mdbooth found that catching johnthetubaguy in a good mood was very useful for getting reviews :)
17:12:57 ok, i'm done on this topic :)
17:13:02 yes - let's reiterate that we need people to review those. thanks rgerganov and mdbooth for starting. then finally we have spbm, v3 diags, ova, and vsan. all important to get in and all blocked by the refactor.
17:13:36 tjones: sadly spbm is still not approved
17:14:09 rgerganov: oops - i read the title and thought spbm. this one is for glance
17:14:14 rgerganov: Is there anything blocking it from a design pov?
17:14:32 anything else on approved BPs before we move on to unapproved? (like spbm)
17:14:41 rgerganov: i need to address some comments. they are real nits …
17:14:42 mdbooth: the spec that garyk wrote looks good to me
17:15:01 #topic BP under review
17:15:01 johnthetubaguy has a few comments that need addressing
17:15:12 (since we have already moved on)
17:15:23 #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:15:38 kind of like LA Law: "moving on, Douglas"
17:16:14 i see spbm and storage opt (kirankv's BP) still moving. kirankv, anything needed for your BP?
17:16:53 yes, reviews would help; had jay pipes +1 it
17:17:03 looks like you have addressed john's comments
17:17:27 KanagarajM: i don't see your BP in the list of specs....
17:18:03 tjones: oh, not sure, could you please use these: 1) multi-backend: https://review.openstack.org/103054 and 2) NFS glance
17:18:49 i think you need the word vmware in the title if it is vmware driver specific
17:18:59 yes
17:19:13 otherwise no one on this team will see it :-)
17:19:15 #link https://review.openstack.org/#/c/103054/
17:19:30 #link https://review.openstack.org/104211
17:19:45 sure, i will add it after the meeting, thanks.
17:19:53 as the 1st word, i think
17:20:02 the first one will mitigate the implications of dropping the support for multiple compute nodes
17:20:47 yes,
17:20:53 yes, that in my opinion is very important
17:20:53 rgerganov: yes
17:20:58 KanagarajM, I am interested in the second one
17:21:06 still the memory concerns remain
17:21:44 kirankv: is the 100+ MB footprint on the VC or on nova-compute?
17:22:11 arnaud: i have done the PoC for that and we have seen good improvement in the total VM creation time; it's reduced from 16 mins to 6 mins for an 800 MB disk
17:22:21 rgerganov: on the nova-compute
17:22:48 kirankv: yeah, I guess this is mainly because of the suds client
17:22:49 how long does it take to upload the image to glance, KanagarajM?
17:23:38 rgerganov: yes, it's the suds client
17:23:44 arnaud: uploading to glance i am not sure, i will get back to you with the figure
17:24:42 KanagarajM, ok let's continue this offline. I am looking at the same kind of optimizations with https://review.openstack.org/#/c/84281/
17:25:22 regarding the memory, yes, suds is consuming about 500-600 MB per vc connection, and usually the nova-compute node will have a high-end configuration; the admin should be able to properly design the deployment model based on the number of clusters and the node configuration
17:25:41 arnaud: sure,
17:26:02 ok, let's give folks time to review these BPs. KanagarajM, please update the title so we can find them :-)
17:26:10 Any other BP needing discussion?
17:26:27 tjones: sure, thanks.
17:27:11 tjones: it's not vmware specific; it applies to any nova driver.
17:27:39 ok, prob a good topic for -nova unless we have extra time later
17:28:03 anyone have bugs that are of concern?
17:28:10 * mdbooth reads back...
17:28:16 500-600MB!
17:28:37 KanagarajM: That's bad
17:28:43 #topic open discussion
17:29:01 mdbooth: ok, let's spend the rest of the time talking about this and whatever other topics
17:29:27 * mdbooth doesn't have any specific topics we haven't covered
17:29:35 * mdbooth has been buried in refactor work this week
17:30:58 anyone else? i'm happy to end early if we are done
17:31:11 Has the iscsi thing moved?
17:31:24 * mdbooth has been distracted and hasn't followed it at all
17:31:41 arnaud: want to update us?
17:31:57 tbh, last week was a glance issues week
17:32:05 lol
17:32:11 didn't have the time to update iscsi
17:32:21 will get back to it soon, hopefully
17:32:55 we can safely assume that every week is a nova issues week :)
17:33:08 every week is a refactor review week ;-)
17:33:17 tjones: that too :)
17:33:44 and don't forget to give garyk's hot plug patches some love
17:34:20 :)
17:34:46 maybe if i learn to dive like robben i'll get a few extra reviews
17:35:10 garyk: at least he doesn't bite
17:35:46 rgerganov: agree :)
17:36:24 ok, i think we are done. thanks and have a nice week
17:36:33 Good night, all
17:36:38 bye
17:36:45 #endmeeting