17:01:33 #startmeeting vmwareapi
17:01:34 Meeting started Wed Jun 11 17:01:33 2014 UTC and is due to finish in 60 minutes. The chair is tjones. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:35 can we provide a pointer to both WIPs on the meeting schedule?
17:01:36 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:39 The meeting name has been set to 'vmwareapi'
17:01:45 ejgriffith: np
17:01:48 hi who is here today?
17:01:52 o/
17:01:53 * mdbooth is here
17:02:14 hi
17:02:16 hello
17:02:43 vuil: you here?
17:02:45 hi!
17:03:05 hey kirankv - haven't seen you in a while. welcome back
17:03:09 yep
17:03:24 ok great - lets get started with our fav topic
17:03:25 tjones: :) thx
17:03:35 #topic approved BP
17:03:48 vuil: want to let us know how phase 2 refactor is going?
17:04:15 phase 2+3 are happening in parallel.
17:04:29 o/
17:04:31 I just updated the phase 2 patches, they are ready for review
17:04:39 do you have a link to share with all of the patches in order?
17:04:46 vuil: can you please post the first patch
17:04:53 phase 3 needs to be updated on top of them, and still needs some decomposition
17:04:55 even i am getting lost
17:04:56 tjones: you beat me :)
17:05:13 a sec
17:05:30 Not first, but https://review.openstack.org/#/c/87002/
17:05:46 and all its dependencies technically are phase 2 stuff
17:05:54 following the links - this one looks 1st https://review.openstack.org/#/c/99238/2
17:06:08 there was some reordering of the patches
17:06:59 there is an 'image handling' patch that I am trying to rebase to the right parent after all this motion. if you start at 99238 you may see it hanging out in an odd place. Will fix after the meeting
17:07:03 vuil, commit message is missing blueprint info - https://review.openstack.org/#/c/99238/
17:07:53 vuil: it's pretty confusing with the multiple dependencies. if you could add a section to https://etherpad.openstack.org/p/vmware-subteam-juno with all of the patches in review order i think it would help. then i can send a message to the ML with this and other info on what we are doing
17:07:54 well that one technically is just code cleanup, encountered while I refactor the code for sure...
17:08:38 ah there is a fork as well in the dependency chain
17:08:44 sure. Gimme 1/2 hour after the meeting. Once the rebase is done we should have a main chain of commits which should make things clearer
17:08:57 as a team our absolute highest prio is the review of this patchset.
17:09:06 s/this/these
17:09:10 +1 tjones. sounds awesome vuil
17:09:21 Unfortunately I'm not at my desk, but when I get in to the office tomorrow I have a little tool which generates dependency trees from outstanding gerrit patches
17:09:31 ohh - nice! mdbooth
17:09:48 that would be very helpful (not just for this one)
17:09:50 Where should I post the output?
17:10:03 in that etherpad would be great
17:10:04 Or the tool, for that matter
17:10:08 Ok
17:10:33 anything else about the refactor?
17:10:46 #action vuil to update https://etherpad.openstack.org/p/vmware-subteam-juno on the patchsets
17:10:56 #action tracy to send out email to the ML asking for reviews
17:11:05 #action all of us review these patches
17:11:16 #action mdbooth post the tool and output to https://etherpad.openstack.org/p/vmware-subteam-juno
17:11:31 tjones: garyk and I also have outstanding refactor patches, although not directly related
17:11:41 @dims re: fork, yes and no (98322 looks like one, but after rebase will be part of the chain). That said, I set 98529 aside as a branch as I think of it as somewhat orthogonal to the refactoring work.
17:12:35 vuil, thanks for the clarification
17:12:50 yeah but i am worried about this. we need the big ones done so we can add features for juno. i know there always will be refactor work - but i don't want to block new features on the piecemeal ones. how do you guys think we should handle this?
17:13:04 with the core team?
17:14:03 one could argue that power_off should have been part of phase 1
17:14:47 tjones, from discussions with mriedem, he is expecting disk and image handling in spawn to get broken down, once that is done the refactor is done IMHO
17:14:50 as long as the changes are orthogonal to the main refactor work, I am somewhat okay. Then it is just a review priority thing.
17:15:18 dims__: good - as long as once that is done we can continue feature work, i am happy
17:15:31 ok anything else on approved BP?
17:15:35 things that interfere with the main commit refactor chain can cause rebase headaches, so please be cognizant of it
17:15:49 lets keep the patch series in refactor to just that and not expand the scope
17:15:57 right vuil
17:16:03 vuil: garyk just had a patch +2d which will touch almost everything
17:16:05 garyk: mdbooth could you pause on those patches until phase 2/3 is done?
17:16:06 although trivially
17:16:30 tjones, i don't think we should stop pushing
17:16:47 which one is it?
17:16:57 https://review.openstack.org/#/c/91352/
17:17:15 oh yeah that one
17:17:23 It's been approved, in fact
17:17:24 it's merging. we are ok
17:17:27 :)
17:17:32 :-D
17:17:35 tjones, mdbooth rebase is the cost we have to bear :)
17:18:11 just don't want to step on each other's toes as *much* as we were in icehouse. that was unbearable. and in fact that was one of the purposes of the refactor
17:18:12 will be fun, but with this we should be okay.
17:18:24 ok lets move on
17:18:38 garyk: hot plug??
17:18:57 How many toes does https://review.openstack.org/#/c/97170/ step on?
17:19:26 mdbooth: it does not step on any :)
17:19:47 even any of vuil's?
17:19:47 i think that the only contentious code at the moment is the actual spawn and vmops related stuff
17:20:00 the power off does not even clash with those
17:20:09 great
17:20:19 i think that we should be pragmatic about things (but that is me)
17:20:21 good news all around
17:20:25 Ok, in that case I will continue trying to get reviews on that
17:20:26 +1 garyk
17:20:37 then we are ok. garyk are you pausing on hotplug until refactor is done?
17:20:55 tjones: at the moment it is in review. has been for over 2 months now :(
17:21:04 i know….
17:21:13 i guess it will be a matter of rebasing after the spawn code is in
17:21:23 yeah ok lets leave that then for now
17:21:32 even though it is orthogonal (most probably spelt wrong)
17:21:37 +1 gary, mdbooth on the general approach to this
17:21:39 there are 2 new vmops methods :)
17:22:24 it sucks to not work on features but i think satellite things to vmops can and should be done
17:22:38 did anyone pick up the work to "ensure one nova compute process == one cluster"
17:23:15 do we need a blueprint for it?
17:23:22 listens….
17:23:24 not that I am aware of
17:23:30 on both qns
17:23:44 me either. just a note in my head that it needs doing
17:23:45 on first qn :-)
17:23:49 ok, let me poke on it and let u all know
17:23:52 dims__: tjones: vuil: i was planning on doing that
17:23:55 dims__: thanks
17:24:06 ok lets talk about unapproved BP now
17:24:13 my major concern with it is backward and forwards compatibility.
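For context on the "one nova compute process == one cluster" item above: with the VMwareVCDriver the usual way to get that today is to run one nova-compute service per vSphere cluster, each with its own config file. A minimal sketch, using made-up values for the vCenter address, credentials, and cluster name:

    # nova-compute-cluster1.conf (hypothetical file name); a second
    # nova-compute process would use the same settings but a different
    # cluster_name.
    [DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    # vCenter endpoint and credentials (example values only)
    host_ip = 192.0.2.10
    host_username = administrator@vsphere.local
    host_password = example-password
    # the single vSphere cluster this nova-compute process manages
    cluster_name = Cluster1

The backward/forward compatibility concern garyk raises is presumably about existing deployments that list several cluster_name entries under one nova-compute; whatever enforces the one-to-one mapping has to cope with those configurations.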
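The dependency-tree tool mdbooth mentions at 17:09:21 was not shared during the meeting (see the 17:11:16 action to post it to the etherpad). Purely as an illustration of the idea, and not his actual script, something equivalent can be built on Gerrit's SSH query interface, whose --dependencies flag includes the dependsOn/neededBy relations for each open change:

    #!/usr/bin/env python
    # Illustrative sketch only (not mdbooth's actual tool): list the
    # dependencies of open openstack/nova changes via Gerrit's SSH query
    # interface. Assumes your Gerrit SSH username/key for
    # review.openstack.org are configured in ~/.ssh/config.
    import json
    import subprocess

    QUERY = [
        "ssh", "-p", "29418", "review.openstack.org",
        "gerrit", "query", "--format=JSON", "--dependencies",
        "status:open", "project:openstack/nova",
    ]

    def open_changes():
        """Yield one decoded JSON record per open change."""
        for line in subprocess.check_output(QUERY).decode("utf-8").splitlines():
            record = json.loads(line)
            if record.get("type") != "stats":  # the final line is query stats
                yield record

    for change in open_changes():
        deps = [dep["number"] for dep in change.get("dependsOn", [])]
        print("%s (%s) depends on: %s"
              % (change["number"], change["subject"], ", ".join(deps) or "nothing"))

Turning this flat listing into the tree vuil's patch chain needs is then just a matter of grouping changes by their neededBy entries.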
17:24:16 #topic BP under review
17:24:37 #undo
17:24:37 Removing item from minutes:
17:24:41 garyk, ok, i'll help, you have the pen :)
17:24:55 dims__: ok.
17:25:03 i'll sync up with you tomorrow
17:25:12 ok good
17:25:20 #topic BP in review
17:25:23 #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:25:43 who wants to discuss their BP?
17:26:13 * mdbooth *still* hasn't written up the oslo.vmware api BP
17:26:15 VMware nova-Storage optimization for clusters with multiple datastores
17:26:30 garyk: ephemeral disks
17:26:35 #link https://review.openstack.org/98704
17:26:40 https://review.openstack.org/#/c/95907/5/specs/convergence.rst
17:26:44 idea is to wait till after the spawn to add the code
17:27:00 yes but we do need to get the specs approved
17:27:07 oops wrong link
17:27:09 https://review.openstack.org/#/c/98704/4
17:28:01 dims__: has a -1
17:28:18 Looks like trivia, though
17:28:22 but not technical issues
17:28:33 should we vote 0 on non-technical issues?
17:29:00 kirankv: I read that earlier. I wondered what the impact was of over-using a single datastore
17:29:23 Do we balance datastores for performance?
17:29:34 What would the impact be of a popular image ending up on a datastore?
17:29:49 Would it be worth dedicating a simple datastore to the image cache?
17:30:05 s/simple/single/
17:30:07 mdbooth: As long as we stay in the max limits mentioned in the vsphere doc, we should be fine
17:30:46 kirankv, waiting for a response to my questions :)
17:31:25 dims wrote: If for some reason some datastores need to be off line then it would be very difficult to figure out which vm(s) will be affected. Right?
17:31:38 So not all trivia
17:31:44 right :)
17:32:00 didn't scroll down enough - sorry
17:32:10 mdbooth: how much to size the single datastore that caches images would be a challenge
17:32:23 we don't need to do this here on IRC do we?
17:32:35 dims__: Guess not
17:32:42 it really depends on the workloads (flavors) getting deployed
17:33:19 we do have time now and it's faster for you guys to discuss it when you are all here. we have 1 more to discuss after that
17:33:21 dims__: will respond and address your comments by tomorrow
17:33:51 The performance balancing thing is a question for platform experts (i.e. not me)
17:34:07 Just strikes me as a potential issue
17:34:41 vuil: ^^^
17:35:07 :-)
17:35:32 I will take a look at the bp and give you my 2c
17:35:55 vuil: mdbooth was bringing up some performance issues above
17:36:05 can you address those as well?
17:36:15 vuil: My question was: if we're going to create linked clones across datastores rather than making copies, how much of an issue would it be in practise for a popular image to end up on a particular datastore
17:36:29 Especially when that datastore was picked effectively at random
17:38:18 the blueprint tries to address that by first trying to put the VM on the datastore where the cache is present, another improvement is to distribute the cache across the datastores so that cross referencing across datastores is avoided
17:39:01 kirankv: do you mean https://review.openstack.org/84662
17:39:09 Latter would emasculate the patch though, right?
17:39:46 Although we could copy directly from another cache
17:40:19 mdbooth, right that may be easier for maintenance
17:40:35 tjones: thats a different bp, thats only to optimize performance by avoiding a network transfer
17:40:43 That would be a different patch, though
17:40:45 kirankv: right
17:40:54 I hope folks are not waiting for me. Still reading...
17:41:37 ok lets take an action for kirankv to address dims concerns and vui to address the performance issues and move on? ok with that?
17:41:48 +1
17:41:51 s/issues/concerns
17:41:58 tjones: ok
17:42:26 #action kirankv to address dims__ comments on https://review.openstack.org/98704 and vui to address performance concerns
17:42:34 #link https://review.openstack.org/86074
17:42:38 garyk: you are up
17:42:47 tjones: not much to update
17:42:54 ephemeral is up for review
17:43:03 i'll post a link in a sec
17:43:14 #link https://review.openstack.org/#/c/86074/
17:43:40 :)
17:43:44 johnthetubaguy: was interested in this before
17:43:44 spbm is also in review
17:44:03 https://review.openstack.org/84652
17:44:37 funny that is not on my list
17:44:42 so basically both BP's are waiting for cores
17:44:55 the SPBM code has been posted in review since December last year
17:45:16 lets make sure that we have 2 guys from our side +1 on the specs before we start pushing cores to review it
17:45:40 ok np
17:46:05 rado reviewed spbm and arnaud reviewed ephemeral. can someone else please take a look at these this week?
17:46:21 i'll ping them for the reviews. thanks
17:46:23 #action review https://review.openstack.org/84652 and https://review.openstack.org/#/c/98704/4
17:46:42 no i mean they already did - we should get 2 fresh people to look ;-)
17:47:13 any other BP discussions?
17:48:10 #topic bugs
17:48:11 https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
17:48:33 2 new ones
17:48:40 tjones: i have been over the bugs and have posted fixes for 2 networking issues. don't recall the bug #'s off hand
17:48:49 ok thanks gary
17:49:07 #link https://bugs.launchpad.net/nova/+bug/1325866
17:49:08 Launchpad bug 1325866 in nova "Datastore in a Cluster-Datastore fails to enter maintenance mode if instance has a CD-Rom attached to it" [Undecided,New]
17:49:37 #link https://bugs.launchpad.net/nova/+bug/1328455
17:49:39 Launchpad bug 1328455 in nova "vmware: Instance going to error state while resizing" [Undecided,New]
17:49:55 these 2 look like things that customers would get really annoyed by and they have no owner.
17:50:29 tjones: i'll try and look at that one tomorrow
17:51:05 thanks - other people can volunteer too ;-)
17:51:33 #topic opendiscussion
17:51:40 ok what else do you want to talk about?
17:52:21 garyk: if you need help in testing them, let me know
17:52:34 browne1: the reporter is asking for an update on https://bugs.launchpad.net/nova/+bug/1317393
17:52:35 Launchpad bug 1317393 in nova "VMware: operating system not found after upgrade to 5.5" [Medium,New]
17:53:17 kirankv: thanks
17:53:18 tjones: ok, yes, i saw that. i'll reply
17:53:25 keep going back to bugs - mdbooth how's your iscsi fun going?
17:53:29 #link https://bugs.launchpad.net/nova/+bug/1324036
17:53:30 Launchpad bug 1324036 in nova "Can't add authenticated iscsi volume to a vmware instance" [Undecided,Confirmed]
17:53:46 tjones: Discussing it with arnaud right now, actually :)
17:54:08 should we assign that one to you since you are working on it?
17:54:38 tjones: Sure
17:55:19 tjones: I spent yet more time thinking about locking
17:55:21 btw
17:55:25 Way too much time, in fact
17:55:41 lol - we have 5 minutes if you want to discuss
17:55:44 Which then led me to wondering why I was bothering
17:56:05 Specifically: under what circumstances might 2 nova instances be accessing the same datastore
17:56:17 as I understand it, a datastore can only be a member of 1 cluster, right?
17:56:36 So we'd need to have multiple novas managing a single cluster
17:56:41 that would be a clean way to go, but not necessarily
17:56:54 vuil: So clusters can share a datastore?
17:57:10 yeah.
17:57:44 vuil: Sight
17:57:47 s/t//
17:57:47 but it's recommended not to share datastores except for datastores that are used to host templates etc
17:58:09 vuil: I thought I'd talked myself out of it :(
17:58:28 vuil: I came up with a horrible solution involving abusing Events in vsphere
17:58:46 Also zookeeper, but that requires fencing
17:58:57 All solutions require session transactions
17:59:40 yikes.
17:59:58 * mdbooth has a solid proposal for session transactions
18:00:15 I seem to recall there was some other use case where we need the lock, not just multi cluster -> single DS
18:00:34 it's escaping me right now
18:00:38 If we can assure ourselves that there's only a single nova involved, we can use nova locks
18:00:45 Which are going to be much faster anyway
18:01:20 vuil: There's also the vsphere 'solution' thing
18:01:34 Although I read all the docs and I still don't really understand what it is
18:01:56 Looks like we need to take it over to -vmware
18:01:58 But it sounds like a mechanism to implement custom services
18:02:01 vuil: Indeed
18:02:31 oops - sorry gotta end now
18:02:40 we can move over to openstack-vmware
18:02:47 #endmeeting
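A footnote on the "nova locks" mdbooth refers to at 18:00:38: nova's standard mechanism is oslo's lockutils (nova.openstack.common.lockutils in the Juno-era tree; the oslo.concurrency spelling is used below). A minimal sketch, assuming the single-nova-per-datastore condition discussed above holds, with illustrative names (ensure_image_in_cache, cache_has, download_fn, the lock_path) that are not the actual vmwareapi driver code:

    # Illustrative only: serialise image-cache population for a datastore
    # with a lockutils lock. This is sufficient only when exactly one
    # nova-compute touches the datastore; external=True makes the lock
    # file-based so it also covers other processes on the same host that
    # share the same lock_path.
    from oslo_concurrency import lockutils

    def ensure_image_in_cache(image_id, datastore_name, cache_has, download_fn):
        """Populate the datastore image cache at most once per image."""
        lock_name = "vmware-image-cache-%s-%s" % (datastore_name, image_id)
        with lockutils.lock(lock_name, external=True, lock_path="/var/lock/nova"):
            # check-then-act is race-free while the lock is held
            if not cache_has(image_id, datastore_name):
                download_fn(image_id, datastore_name)

For the shared-datastore case (multiple clusters, and therefore multiple nova-computes, pointing at one datastore) a host-local lock like this does not help, which is why the discussion above turns to vSphere-side mechanisms or an external coordinator such as zookeeper.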