17:00:05 #startmeeting vmwareapi
17:00:06 Meeting started Wed Aug 21 17:00:05 2013 UTC and is due to finish in 60 minutes. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:09 The meeting name has been set to 'vmwareapi'
17:00:16 Greetings stackers!
17:00:20 Who's around to meet?
17:00:22 Hi!
17:00:32 hi
17:00:43 garyk: around?
17:00:57 hi.
17:01:02 sorry i was day dreamin
17:01:08 night dreaming?
17:01:17 *lol*
17:01:29 this time i am at home and will not be locked in the office....
17:01:38 nice
17:01:57 Anybody else around?
17:02:17 I know a number of folks are traveling for VMworld next week.
17:02:28 * hartsocks listens
17:03:02 yeah, HP people are all traveling i know
17:03:15 well… okay…
17:03:19 #topic bugs
17:03:23 kiran was responding to email just now though
17:03:49 He might be in an airport … or in the sky?
17:04:22 Any pet bugs we need to discuss?
17:04:56 #link http://goo.gl/pTcDG
17:05:12 We have two bugs I'm not sure how to classify.
17:05:23 I think they aren't important.
17:05:28 Am I wrong?
17:05:47 i was unable to reproduce 1194076 and asked for clarifications
17:06:07 i think https://bugs.launchpad.net/nova/+bug/1193980
17:06:08 Launchpad bug 1193980 in nova "Cinder Volumes "unable to find iscsi target" for VMware instances" [Undecided,Incomplete]
17:06:11 is relatively important
17:06:18 this one should be closed (sabari's fix) https://bugs.launchpad.net/nova/+bug/1104994
17:06:20 Launchpad bug 1104994 in openstack-vmwareapi-team "Multi datastore support for provisioning of instances on ESX" [Critical,Fix committed]
17:06:30 as it affects anyone trying to use a direct iscsi device for cinder volumes
17:06:45 please note that 1194076 was on grizzly and a lot of code was fixed in havana. i'll check this with the stable grizzly version
17:06:49 this may be related to the issue I discussed before.
17:07:16 with respect to iscsi
17:07:22 on 1193980 … looks like I asked "Can you verify you can make the network traversal?"
17:07:50 192.168.125.27 cannot reach iqn.2010-10.org.openstack:volume-d0433ba1-d2c1-478c-96a1-b9074c57c62e
17:07:56 has anyone internally ever successfully used VCDriver with an external iscsi array?
17:07:57 is the error reported on that bug.
17:08:11 Honestly, I've not vetted that myself.
17:08:29 I can see the trace and it *looks* unrelated to the code.
17:08:57 let me ask Ryan if he has tried this as well
17:09:02 if we had tested it and it worked for us, then I'd probably just let the bug sit until we get a response from the reporter, but I think we should either test iscsi before the havana release, or document it as not supported
17:09:18 okay.
17:09:32 with our havana cinder driver, this becomes less important
17:09:40 #action Tracy to follow up with Ryan in CI team to test
17:09:42 but is still something we should be clear about whether it should work or not
17:10:53 okay… we'll have the CI guys take a moment and tell us if they can verify the iSCSI stuff works.
17:11:08 just dropped him an email
17:11:17 any other pet bugs?
17:11:39 * hartsocks listens
17:12:16 my pet bug is config drive support - I think Dims has a BP for it
17:12:31 hi.
17:12:36 yeah, dims has taken care of this (and very well)
17:12:40 Yeah… let's pull that up...
17:12:45 gr8
17:12:55 #link https://review.openstack.org/#/c/40029/
17:13:02 Looks like it needs reviews.
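On the iSCSI question in the bug 1193980 discussion above ("Can you verify you can make the network traversal?"), a quick way to sanity-check reachability from the machine that is supposed to act as the initiator, before suspecting the driver — a rough sketch; the portal address is a placeholder:

    # Basic reachability of the cinder iSCSI portal and the iSCSI port.
    ping -c 3 <portal-ip>
    nc -zv <portal-ip> 3260

    # Ask the portal to enumerate its targets; if the volume's IQN is not
    # listed (or the request times out), the problem is on the network or
    # target side rather than in the nova driver.
    iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260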
17:13:02 hartsocks: are there any critical of high bugs that need treatment or focus\
17:13:40 sorry critical or high bugs
17:13:51 keyboard went wonky.
17:14:03 #action reviews for https://review.openstack.org/#/c/40029/
17:14:17 As for high/critical on #bugs
17:14:23 hartsocks: i have tested that and it looks good.
17:14:40 your +1 is appreciated :-)
17:14:47 #link https://review.openstack.org/#/c/33100/
17:15:11 waiting on core people to give +2 but it's aging out.
17:15:24 I just need 1 more +2 on this one https://review.openstack.org/#/c/33504/
17:15:35 hartsocks: maybe we can ping russellb and let him know that there are reviews that have been in the queue for weeks
17:16:28 #action russellb we need your help https://review.openstack.org/#/c/33100/ https://review.openstack.org/#/c/33504/
17:16:44 garyk: these are in the bi-weekly emails on reviews needing attention.
17:16:45 i know that the average stats are a week for review, but we seem to be lagging very far behind. there is https://review.openstack.org/#/c/39046/ which has been around for a very long time too
17:17:30 garyk: to be fair, that revision is only a day old and you already have a +2
17:17:53 hartsocks: no it has not - i just asked on the review for a review
17:18:09 last commit was Gary Kotton, Aug 8, 2013 9:31 AM
17:18:37 oh I see.
17:18:44 Yeah. That's pretty bad.
17:18:57 August 8th you got a +2 then *nuthin*
17:19:19 I think that might be the longest lag I've seen.
17:19:38 my concern is that we have some patches which will help us get a clean tempest run which can help us with the review process - if we can post tempest runs on the code then we can show that it works and does not cause degrdaations
17:19:51 sorry for the bad spelling
17:20:14 yep. That's what the CI guys are working on.
17:20:21 oh, well. sorry for the interruptions. please continue
17:20:22 i'm working with our CI team on at least being able to publicly post the tempest run output so we can reference it in our patches
17:20:34 tjones: that will be brilliant
17:20:52 yeah, a big win
17:21:02 just a matter of where to put them - should be easy to get it done
17:22:01 Not being able to show that a patch doesn't cause a regression is the biggest reason core-reviewers give for not reviewing our patches. There's a general fear of breaking the driver by approving a patch. So I figure we get that going and a lot of these other problems will go away.
17:22:24 Anything else on bugs?
17:23:49 I could paste my *reviewer* report in here but I'll spare you. I'll post to the mailing list shortly.
17:24:25 I've got 7 reviews that look good from a +1 perspective and just need some +2 love.
17:24:45 I'm not collecting the age of these directly. I may start doing that.
17:25:14 that might be good if it is easy to add to your script
17:25:20 In other news, the Jenkins and review sites got DDoSed earlier in the week by traffic.
17:25:23 hartsocks: if you could add in the age it will be nice
17:25:59 The sheer number of reviews and revisions going through the servers is bogging things down.
17:26:38 I'll talk blueprints next.
17:26:48 #topic blueprints
17:27:02 hartsocks: i think that there is a 'xmas' rush at the moment. gating is taking a few hours. hopefully in a day or 2 things may calm down a little after everyone meets the 22nd deadline for features
17:27:03 #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
17:27:40 garyk: fortunately all our blueprints targeted at Havana have *technically* met the 22nd deadline.
17:28:03 hartsocks: yeah the guys have done great work with that. kudos
17:28:12 the review:
17:28:24 #link https://review.openstack.org/#/c/30282/
17:29:17 have we gotten a tempest run on this one? anyone know?
17:29:23 I am trying to adjust how picky I am in these reviews. I'm hoping to go back and forth on things ahead of a core-reviewer so when one finally does pay attention to us they don't have much to complain about.
17:29:35 hartsocks: 1 thing here is not clear to me and i need to dig in the code a little more - how a cluster is selected.
17:29:51 garyk: it's a configuration parameter.
17:30:01 garyk: they explicitly name each cluster to be used.
17:30:12 garyk: the config is a "multi-config"
17:30:19 i was curious about that too - you say use this or that - who decides which one? VC or the scheduler?
17:30:24 hartsocks: the clusters are listed in the config file. but when an instance is spawned, which cluster will be selected
17:30:27 garyk: which means you list each new cluster on its own line.
17:30:47 hartsocks: yeah, i am aware of that. i am more concerned about the real-time decisions
17:30:54 right but if you specify 2 of them - who decides which?
17:30:55 … I knew this at one point ...
17:31:09 … 1 sec
17:31:15 it seemed rather random when i was testing it
17:31:51 Yah. This patch has gone round-and-round
17:32:00 at one point it was round-robin
17:32:13 there are lines which have - nodename = instance['node'].partition('(')[0]. I need to understand this more. i'll spend time on this review again tomorrow
17:32:49 ideally this would be something documented in the BP. but i may have missed that.
17:32:58 ah I see it here...
17:33:05 #link https://review.openstack.org/#/c/30282/25/nova/virt/vmwareapi/vmops.py
17:33:19 instance['node'] comes off the scheduler
17:33:35 then the resource pool is selected based on that...
17:33:36 so the scheduler decides then
17:33:41 but there's a bug...
17:34:09 res_pool_ref = vm_util.get_res_pool_ref <-
17:34:15 assumes one self._cluster
17:34:36 So it depends on
17:34:47 1. how is 'node' set on the instance (is it right?)
17:35:00 2. how is res_pool_ref selected (is it the right cluster?)
17:35:14 That's two areas to dig at offline.
17:35:28 Sounds like garyk has time for this.
17:36:03 hartsocks: garyk will make time for it :) i am on this one (highest priority)
17:36:04 #action follow up with Kiran on line 178 to 180 of vmops.py in https://review.openstack.org/#/c/30282
17:36:57 That feels like something the CI guys may have a hard time spotting right now unless we guide them.
17:37:12 (multiple clusters with multiple resource pools)
17:37:38 Next up?
17:37:40 hartsocks: good point. it would be nice if we could maybe add tempest test cases for these features
17:38:26 #action message vmware Tempest CI team on multi-cluster and multi-resource pool testing
17:38:34 anything else?
17:39:08 #link https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
17:39:14 This is Gary's work.
17:39:43 heh.
17:39:48 Status: needs reviews.
17:39:50 I have rebased and addressed your comments. i am still testing the cinder code. no problems seen at the moment
17:40:03 While we're on the topic.
17:40:14 Is this something the Tempest tests can cover?
17:41:07 hartsocks: of course. as soon as i get cycles i'll see the test coverage and check if we can add
17:41:40 i think there are cinder test cases and they pass.
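For context on the node-to-cluster question raised above (vmops.py lines 178 to 180 in the multi-cluster review): the clusters come from a multi-valued config option (one cluster line per entry in nova.conf), the scheduler stamps one of those names onto instance['node'], and the driver has to resolve that name back to the matching cluster before it picks a resource pool. The following is an illustrative sketch of that flow, not the patch under review; the node-name format and helper names are assumptions made for the example.

    # Illustrative sketch only -- not the code in review 30282.
    # Assumption: the scheduler-reported node looks like "Cluster1(domain-c7)",
    # which is why the review splits on '(' and keeps the part before it.

    def cluster_name_from_node(instance):
        # Quoted from the review: nodename = instance['node'].partition('(')[0]
        return instance['node'].partition('(')[0]

    def pick_cluster(instance, clusters):
        """Map the scheduler's choice back to a configured cluster.

        `clusters` is assumed to be a dict of configured cluster names (one
        per cluster entry in nova.conf) to their managed object references.
        The bug flagged in the meeting is that the resource pool lookup used
        a single self._cluster instead of the cluster the scheduler chose.
        """
        name = cluster_name_from_node(instance)
        if name not in clusters:
            raise ValueError("node %r is not a configured cluster" % name)
        return clusters[name]

    # Example: with clusters = {'Cluster1': ref1, 'Cluster2': ref2} and
    # instance = {'node': 'Cluster2(domain-c9)'}, pick_cluster returns ref2,
    # and the resource pool should then be looked up against ref2 rather
    # than against a single global self._cluster.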
17:41:47 #action reviews for https://review.openstack.org/#/c/41387/ https://review.openstack.org/#/c/40105/ https://review.openstack.org/#/c/40245/
17:42:02 Okay.
17:42:08 Any special attention here?
17:42:13 Any discussion topics?
17:42:33 "Nova boot from cinder volume" seems pretty important to me.
17:43:03 hartsocks: this support has been added. it was pending cinder support (which has been added)
17:43:04 What's up with the dependency on "https://review.openstack.org/#/c/40245/7"
17:43:14 i am currently testing the code and no issues at the moment
17:43:27 it literally says "[OUTDATED]" on the subject line.
17:43:42 Is that something a core-reviewer would notice/care about?
17:44:09 :) now i am happy that you asked that (cause i know the answer)
17:44:31 ?
17:44:33 prior to implementing the cinder support i tried doing attachment and detachment - everything blew up.
17:44:54 hence the base patch - which is also pending review...
17:45:12 i think that each time i mention review you guys need to dock me 10 reviews
17:45:22 yeah… but… why does it say "[OUTDATED]"
17:45:31 * hartsocks clicks around
17:45:36 let me check
17:46:01 Okay … it's revision 7 of the patch set and the review is up to revision 8.
17:46:23 So basically the depended-on patch has moved but you've not rebased the "upstream" patches.
17:46:58 I would -1 you on that… but I just figured out what it meant myself.
17:47:14 Could you just fix that? Seems like a few git commands.
17:47:49 on the 20th i rebased both. i think that the fact that we are all writing to the same files means we may require rebasing when patches are approved. we should try and stay on top of that
17:48:24 huh.
17:48:37 yeah… everything moved together on the 20th.
17:49:17 are we on open discussion yet - i have a really silly question
17:49:23 How weird. Could you just try rebasing one of the "upstream" patches today just to see if it's some weird laggy thing in the server?
17:49:41 The infrastructure has been acting buggy the last week.
17:49:47 hartsocks: sure. i'll do that tomorrow morning
17:50:15 anything else blueprint related before we kick the can around?
17:50:26 * hartsocks listens
17:50:40 #topic open_discussion
17:50:45 go for it.
17:50:51 There's no such thing as a silly question.
17:50:54 as long as you are not kicking the can at me
17:51:06 *lol*
17:51:12 the vnc_password - if this is '' will the user be required to enter it?
17:51:24 maybe that question warrants a can to be thrown at me
17:51:57 Okay. You have the ball. Are you going to try for a goal?
17:52:43 So the VNC password stuff...
17:53:00 There's a bug in the Horizon client that forces you to set one.
17:53:15 The bug isn't at the nova-compute layer.
17:53:42 So presumably someone could make a fix to the Horizon project and make it so you don't *have* to set a password.
17:53:58 ok, thanks for the clarifications
17:54:37 it would be nice if the VC could NAT the session to the ESX
17:54:41 i added it to lib/nova in devstack so i didn't have to remember to do it
17:54:48 every time i re-stack
17:55:09 tjones: did you push that patch to devstack or is it just local?
17:55:51 it is local cause i don't know how to push to devstack. i was asking hartsocks about that earlier. there are a few changes i'd like to make
17:56:16 adding our stuff to the localrc sample
17:56:37 devstack's code review process is the same as ours.
17:56:46 But… specifically...
17:56:48 sssllllllloooooowwwww ?
17:56:53 *lol*
17:57:11 *rofl*
17:57:26 * hartsocks dusts self off
17:57:30 okay.
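Picking up the "[OUTDATED]" dependency thread from earlier in this exchange: one way to redo that rebase with git-review is sketched below, assuming the standard Gerrit workflow. The change numbers are the ones mentioned in the meeting, with 41387 used as the dependent change purely for illustration.

    # Download the latest patch set of the base change (the one that moved
    # from patch set 7 to 8).
    git review -d 40245

    # Cherry-pick the dependent change on top of it, resolve any conflicts,
    # then push the rebased patch set back to Gerrit for review.
    git review -x 41387
    git review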
17:57:54 So I was going to say. I don't know how that magic localrc to CONF thing happens.
17:58:14 I tried to trace it myself but the state of the Bash code there made me sad.
17:58:21 Did you know bash has functions?
17:58:26 Oh yes it does.
17:58:41 No need for copy-pasta in bash either.
17:58:45 it's in the lib directory (i think) gary is telling me what to do off-thread
17:59:02 awesome.
17:59:22 we're out of time
17:59:26 same process commit/review
17:59:37 yep
18:00:21 we're in #openstack-vmware all the time if you need us
18:00:28 #endmeeting
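A rough sketch of the localrc-to-CONF plumbing discussed above, for reference: stack.sh sources localrc, so anything set there becomes a shell variable, and the functions under devstack's lib/ directory copy those variables into nova.conf with the iniset helper. The variable and option names below are illustrative assumptions, not necessarily the ones devstack actually uses.

    # Illustrative sketch, not the actual devstack source.
    #
    # In localrc (example variable names):
    #   VMWAREAPI_IP=10.0.0.5
    #   VMWAREAPI_USER=administrator
    #   VMWAREAPI_PASSWORD=secret

    function configure_nova_hypervisor {
        # ${VAR:-default} supplies a fallback when localrc did not set a value.
        local host_ip=${VMWAREAPI_IP:-127.0.0.1}

        # iniset is a devstack helper: iniset <file> <section> <option> <value>.
        # The actual option names/sections in nova.conf may differ by release.
        iniset "$NOVA_CONF" DEFAULT compute_driver "vmwareapi.VMwareVCDriver"
        iniset "$NOVA_CONF" DEFAULT vmwareapi_host_ip "$host_ip"
        iniset "$NOVA_CONF" DEFAULT vmwareapi_host_username "$VMWAREAPI_USER"
        iniset "$NOVA_CONF" DEFAULT vmwareapi_host_password "$VMWAREAPI_PASSWORD"
    }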