17:01:18 #startmeeting vmwareapi
17:01:19 Meeting started Wed Jun 18 17:01:18 2014 UTC and is due to finish in 60 minutes. The chair is tjones. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:23 The meeting name has been set to 'vmwareapi'
17:01:31 hi folks
17:01:35 o/
17:01:42 Hi!
17:02:24 hi
17:02:31 hi
17:02:54 ok so from our AI (action item) last time we were all going to review 2 patches to unblock some other work. no one (including me) has reviewed them
17:03:06 #link https://review.openstack.org/#/c/59365/
17:03:17 #link https://review.openstack.org/#/c/91005/
17:03:29 garyk: you around?
17:04:23 hmmm.. without garyk my plan is not going to work. I was going to review them in this meeting
17:04:31 hi
17:04:36 ah there you are
17:04:36 o/
17:04:56 Other people can review too :)
17:04:59 so i was saying no one (including me) has reviewed the 2 patches we wanted to get reviewed from last week
17:05:07 hi. I will be around for 15-20min
17:05:11 mdbooth: they are garyk's patches so i wanted him here
17:05:11 Whoa, patch set 35
17:05:43 tjones: Is it worth discussing rgerganov's stuff first?
17:05:45 i am here
17:06:19 ok let's pause this since rgerganov needs to drop off
17:06:25 and get back to it
17:06:36 rgerganov: you have stuff?
17:06:57 so I had updated the SPBM patch and tried to address some of the concerns that mdbooth brings up in his api proposal
17:07:07 link please?
17:07:16 https://review.openstack.org/#/c/66666/
17:08:10 I am thinking to implement the same changes in oslo.vmware, that is, separate Vim and Pbm
17:08:38 we need to get john to remove -2.
17:08:49 tjones: the bp has not been approved
17:09:05 I like the patch
17:09:08 However...
17:09:17 i think that rado has a very nice direction
17:09:21 It's problematic
17:09:25 why?
17:09:38 Because it changes Vim in Nova in a way which is incompatible with oslo.vmware
17:09:57 not really, it is in line (i think)
17:09:57 While we already have an outstanding patch to migrate to oslo.vmware
17:10:08 Which I am simultaneously proposing we substantially rewrite
17:10:22 So, in isolation I like it
17:10:30 In context, I think it's adding to a mess
17:10:33 i am not in favor of a rewrite at the moment.
17:10:42 i added my comments to the spec that you posted
17:10:46 Well, rgerganov is proposing a rewrite
17:10:53 i have yet to see if you addressed them
17:10:53 mdbooth, that is not true
17:10:55 That is, an incompatible api change
17:11:12 who is using oslo.vmware right now?
17:11:13 I don't think it's worth making multiple incompatible api changes
17:11:26 rgerganov: glance, I believe
17:11:33 rgerganov: cinder and glance both
17:11:40 * mdbooth is at home and doesn't have a source tree to hand
17:12:19 I believe that we can still make API changes in oslo.vmware and increase the version
17:12:40 I think ceilometer has already started to use oslo.vmware
17:12:47 rgerganov: I'm in favour of api changes in oslo.vmware :)
17:12:55 We just need to coordinate
17:13:06 I'm against piecemeal changes
17:13:25 mdbooth, if you have suggestions for oslo.vmware, we have a GSoC intern who is looking at it
17:13:45 arnaud: I put out a bp last week for discussion
17:13:56 link?
17:14:00 https://review.openstack.org/99952
17:14:28 ty
17:15:38 lets finish up the spbm discussion before moving on to mdbooth's bp. plus we need to get the spec approved.
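A minimal sketch of the Vim/Pbm separation rgerganov describes above, for readers of the log. All class and module names here are invented for illustration; the real split lives in the patches under review, not in this outline:

    # Illustrative sketch only -- none of these names are from
    # oslo.vmware or the patch under review.

    class SoapService(object):
        """Stand-in for shared SOAP plumbing (auth, retries, faults)."""

        def __init__(self, wsdl_url, endpoint_url):
            self.wsdl_url = wsdl_url
            self.endpoint_url = endpoint_url


    class Vim(SoapService):
        """Client for the core vSphere API only (the /sdk endpoint)."""


    class Pbm(SoapService):
        """Client for the storage policy (SPBM) API only (/pbm)."""


    class VMwareAPISession(object):
        """A session owns one client per service, created independently,
        so PBM support stays optional and Vim is untouched by it."""

        def __init__(self, host, use_pbm=False):
            self.vim = Vim('https://%s/sdk/vimService.wsdl' % host,
                           'https://%s/sdk' % host)
            self.pbm = Pbm('https://%s/pbm/pbmService.wsdl' % host,
                           'https://%s/pbm' % host) if use_pbm else None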
17:16:01 If we're going to refactor Vim and PBM, I want session transactions, sane error handling and the death of invoke_api/_call_method() :)
17:16:20 tjones: Ok, but this was rgerganov's concern
17:16:33 ah ok
17:16:37 I suggest doing incremental changes in oslo.vmware
17:16:50 +1 rgerganov
17:16:55 I definitely want to separate Vim and Pbm
17:17:07 Consider what that means, though
17:17:28 Every incompatible incremental change now requires coordination with multiple projects
17:17:38 Every one requires a flag day
17:17:49 You might as well just fix it and have a single flag day
17:17:54 It's not a large amount of work
17:18:06 you can fix it without modifying the APIs directly
17:18:11 The only reason we don't just Get It Done(TM) is that it'll take donkey's years in review
17:18:17 and have the flag day later
17:18:37 that way you don't have an early flag day and then realize you need another one
17:20:14 i am not following regarding the flag day. the integration should just be a bump on the oslo.version
17:20:18 what am i missing
17:20:30 garyk, that is my understanding as well
17:20:51 Bumping oslo.version and updating all the code which uses it
17:21:23 but that is in one patch
17:21:26 'atomic'
17:21:38 hmm no it's a patch per project using it
17:21:49 it's a patch per project, per update
17:22:35 but that is atomic per project.
17:22:44 per project, per update
17:22:48 mdbooth: i think your concern is with the number of *incompatible* changes. as rgerganov said, incremental changes
17:22:52 that is how it is done with oslo changes
17:22:57 say for example oslo messaging
17:23:17 the oslo code always needs to be backward compatible
17:23:41 tjones: Right. The current implementation is quite a mess, though. There is very little you can fix in a backwards compatible way.
17:23:57 e.g. rgerganov's simple refactoring breaks it
17:24:01 lets try to get to a plan in the next 7 minutes as we have other stuff needing discussion.
17:24:24 mdbooth, I am not sure that my change is API breaking for oslo.vmware
17:24:25 How about we agree to actively discuss it on the list during the week?
17:24:34 mdbooth, the way I see it, is you write a totally new library and for all of the existing functions you change the implementation to call the new stuff
17:24:48 temporarily
17:25:06 arnaud: Right. That's in my proposal, too.
17:25:10 ok
17:28:09 i'm not seeing how rgerganov breaks stuff either but we are running out of time for this topic
17:28:32 how about we set a time tomorrow to discuss this on the vmware channel?
17:28:35 shall we agree to continue on the ML this week on this?
17:28:50 garyk, +1
17:29:20 good idea garyk. can you propose a time?
17:29:31 tjones: See the 2-line changes in volumeops and vmops
17:29:40 +1
17:29:40 we can try this time tomorrow unless you guys can do a little earlier
17:29:56 I can't do this time tomorrow
17:30:05 How early can we go?
17:30:10 earlier is fine
17:30:20 2 hours earlier would be fantastic
17:30:28 9am PST
17:30:53 5pm UTC
17:31:23 tjones: Works for me
17:31:33 cool
17:31:33 #action discuss https://review.openstack.org/#/c/66666/ and https://review.openstack.org/#/c/99952/1 tomorrow in openstack-vmware at 4PM UTC (that is 8am PST)
17:31:41 ok lets go to approved BP
17:31:47 vui - how's it going?
17:32:10 fine. Got some good comments re: earlier patches in the refactor review chain.
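A minimal sketch of the transition arnaud describes above: write the new code, then temporarily re-point the existing entry points at it so each consuming project migrates in its own patch rather than on a single flag day. Function names and signatures here are hypothetical, not from oslo.vmware:

    import warnings

    def fetch_properties_v2(client, moref, props):
        """New-style implementation with an explicit client argument
        (hypothetical name and signature)."""
        return {}  # real logic would live here

    def fetch_properties(session, moref, props):
        """Old public entry point, kept temporarily as a thin shim.

        Existing consumers keep working unchanged; the deprecation
        warning nudges them toward the new call before the old name
        is finally removed.
        """
        warnings.warn('fetch_properties(session, ...) is deprecated; '
                      'use fetch_properties_v2(client, ...) instead',
                      DeprecationWarning, stacklevel=2)
        return fetch_properties_v2(session.vim, moref, props)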
17:32:22 I am updating/rebasing the work
17:32:32 rinse and repeat from there
17:32:49 great - so moving along
17:33:20 the 2 reviews we needed to get to (from last time) still need to be reviewed. I don't want to do it here due to time, but here they are:
17:33:24 as long as you are saving water. that is what is important
17:33:54 #action review https://review.openstack.org/#/c/59365/ and https://review.openstack.org/#/c/91005/
17:34:03 that is blocking hotplug
17:34:20 #topic BP in review
17:34:21 i tried to break them into little ones ....
17:34:31 one needs to be rebased - will do tomorrow morning
17:34:33 #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:34:48 anyone have a BP to discuss?
17:35:37 looks like we lost kiran, but his spec is getting closer
17:35:40 i got reviews last week after this meeting and have addressed them, so if reviewers can check them it would help
17:35:46 ah there you are
17:36:08 #action review https://review.openstack.org/84662
17:36:18 kirankv: what about https://review.openstack.org/98704
17:37:28 tjones: https://review.openstack.org/98704 is the one to go first
17:37:34 ok good
17:37:35 Fun! That has inter-nova locking implications.
17:37:46 ok if no more BP discussion?? we can move on
17:37:59 I have the following bug patches https://review.openstack.org/#/c/99623 https://review.openstack.org/#/c/92782/
17:38:03 #topic bugs
17:38:06 for the other one, nova core has concerns about a compute looking into the datastores of another compute
17:38:15 #undo
17:38:16 Removing item from minutes:
17:38:25 lets wait a sec before bugs
17:38:56 ok you mean the 2nd one has concerns, not the 1st right?
17:39:08 tjones: Copy from another nova
17:39:10 so if https://review.openstack.org/98704 can get reviewed that would be good
17:39:29 Haven't looked at the patch, btw
17:39:34 * mdbooth is making assumptions
17:39:47 ok good we can focus on https://review.openstack.org/98704
17:39:54 #topic bugs
17:39:55 KanagarajM: thanks for addressing the comments. I have some minor ones to add.
17:39:57 ok thanks
17:40:51 sure Gary. thanks.
17:41:40 any other bugs?
17:41:45 there is a concern on the timeout value used
17:42:05 I have a chain of 3 patches for the iSCSI stuff
17:42:13 The iSCSI one is still open. arnaud I started reviewing your patches, btw, but I haven't finished yet
17:42:18 so I would like to finalize the timeout here
17:42:37 I was going to ask, though, that you rebase them on top of https://review.openstack.org/#/c/99370/
17:42:38 https://review.openstack.org/#/c/97612/ https://review.openstack.org/#/c/100379/ https://review.openstack.org/#/c/100778/
17:42:45 Again, no point in stomping on each other
17:43:26 mdbooth, this is making some tradeoff but I think this is fine
17:44:14 mdbooth, we discussed with Vui the fact
17:44:20 KanagarajM: so you want to timeout after 7200 seconds to be consistent with cinder?
17:44:25 that if the compute node dies when we remove the targets
17:44:35 the target will never be removed
17:45:07 on a steady system this should not happen, and if it happens, I don't think this is the end of the world
17:45:15 yes that was the reasoning I followed
17:45:19 KanagarajM: It should be left to the earlier default and when the VMFS cinder driver is used it should be set to 7200 in the conf
17:45:48 kirankv: KanagarajM: yes, 180 sounds more reasonable
17:46:48 arnaud: I've only looked at the first patch so far, and I haven't finished with that yet
17:46:56 I think the meat is in the second patch, right?
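The compromise reached above — keep a conservative 180-second default, and let deployments using the VMFS cinder driver raise it to 7200 in the conf — would take roughly this shape as an oslo.config option. The option name and group below are illustrative, not necessarily what the patch uses:

    from oslo.config import cfg  # renamed oslo_config in later releases

    vmware_opts = [
        cfg.IntOpt('task_timeout',  # hypothetical option name
                   default=180,
                   help='Seconds to wait for a storage operation to '
                        'complete. Deployments using the VMFS cinder '
                        'driver may need to raise this, e.g. to 7200 '
                        'to match cinder.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(vmware_opts, group='vmware')

An operator would then override it per deployment, e.g. task_timeout = 7200 under the [vmware] section of nova.conf, leaving the default alone for everyone else.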
17:47:02 right
17:47:33 I think that as long as we don't have the respawning target problem it should be acceptable
17:48:01 anyway it will take some time. I asked Vui to look at it too
17:48:04 i.e. that if a target exists anywhere it will exist everywhere, and the only way to rid yourself of it is to delete it everywhere before the refresh job runs
17:48:28 haven't gotten around to it yet, but will, promise :-)
17:48:31 I take your point that it may not be worth spending a lot of time on an edge case
17:48:42 arnaud: I will review them all tomorrow
17:49:07 ok awesome! thanks a lot mdbooth
17:49:18 KanagarajM: you ok with overriding the default if the VMFS driver is used?
17:49:48 I will set 180 seconds as the default in the code
17:50:16 ok any other bugs?
17:50:29 arnaud: How about the rebase? Principal change is a refactor to consolidate volumeops and volume_util
17:50:30 I have another one
17:50:41 KanagarajM: ok
17:50:46 arnaud: However, I don't expect that to affect the review in any substantive way
17:50:48 mdbooth, I will look at that after the meeting
17:50:49 It's just code motion
17:51:26 i say ship it :)
17:51:29 lol
17:51:38 KanagarajM: what was your other one?
17:52:46 vc driver breaks instances.hypervisor_hostname value https://review.openstack.org/99623
17:53:20 Nasty
17:53:43 Surprised they have the same morefs
17:54:10 that is why the uuid was suggested
17:54:14 morefs typically just grow in monotonically increasing numeric values
17:54:24 KanagarajM: I'd rather put the uuid at the end than in between, makes for better reading
17:54:50 * thinking about icehouse->juno upgrade implications
17:55:20 Ah, different datacenters
17:55:25 somewhat related - at the summit we were asked to drop the nodename support (multi cluster support). i am looking into that
17:55:27 vuil: it is handled by updating the hostname
17:55:29 mdbooth yep
17:55:46 vuil: Why not an upgrade job?
17:55:46 I tried the icehouse to Juno upgrade for an instance
17:55:59 and it worked properly
17:56:10 garyk: do you have a spec for that?
17:56:24 kirankv: i am in the process of drafting a mail.
17:56:33 i'll shoot it past you first as you did this work
17:56:52 the normalize_nodename method takes care of it
17:57:13 garyk: thanks
17:57:31 mdbooth: sounds like it is handled in the patch re: upgrade, will look at it further
17:57:50 vuil: Sounds like removing multi cluster makes this go away
17:57:51 mdbooth: this is an edge case, the user has to create the same named clusters in the same order to hit the bug
17:57:54 i.e. what garyk said
17:58:17 ok 3 minutes - apart from the ordering of displayname/uuid - any other issues?
17:58:17 i think that the normalize will help, but need to look at it in more detail
17:58:21 yes but I see this bug in multiple test envs
17:58:32 I can see it happening a lot in CI type envs
17:58:56 yeah, the times for the football games are ridiculous. tjones can you please escalate to management
17:59:00 clearly this needs to be fixed - but need to make sure it doesn't break other things
17:59:01 lol
17:59:02 lol
17:59:21 garyk: you mean soccer? :-D
17:59:30 yeah.
17:59:33 agree gary
18:00:01 tjones: One to throw over the wall and run away: given that we have no core reviewers, could we do with nominating our own tech lead?
18:00:07 ok 1 minute for #open discussion
18:00:28 lets move over to openstack-vmware for that discussion
18:00:28 Not enough time to discuss, but one to mull over for next time, maybe?
18:00:37 * mdbooth has to shoot this evening
18:00:43 #endmeeting
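A closing note on the hypervisor_hostname bug discussed above (https://review.openstack.org/99623): the shape of the fix being debated is to make the reported nodename unique by combining the cluster's display name with its uuid, with the uuid at the end per kirankv's preference. A hypothetical sketch, not the actual patch:

    def build_nodename(display_name, cluster_uuid):
        # uuid last, per the review comment: the human-readable name
        # leads, and same-named clusters still get distinct nodenames
        return '%s-%s' % (display_name, cluster_uuid)

    # Two clusters named 'cluster1' in different datacenters no longer
    # collide in instances.hypervisor_hostname (uuids invented here):
    assert (build_nodename('cluster1', 'aaaa-1111')
            != build_nodename('cluster1', 'bbbb-2222'))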