17:00:22 <hartsocks> #startmeeting vmwareapi
17:00:22 <openstack> Meeting started Wed Mar 12 17:00:22 2014 UTC and is due to finish in 60 minutes.  The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:26 <openstack> The meeting name has been set to 'vmwareapi'
17:00:32 <hartsocks> Hi folks. Who's around?
17:00:41 <arnaud> o/
17:00:48 <browne> Eric from VMware
17:00:49 <chaochin> hi
17:01:29 * mdbooth is here
17:01:42 <hartsocks> #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:02:46 <hartsocks> Let's touch blueprints today, should be short… move to bugs… then open discussion. Sound good?
17:03:03 <arnaud> ok
17:03:21 <hartsocks> #topic blueprints
17:03:26 <hartsocks> #link https://etherpad.openstack.org/p/vmware-subteam-icehouse
17:03:55 <hartsocks> Top half of the etherpad, we're tracking the Blueprints (new features) for Icehouse.
17:04:19 <hartsocks> Only two BPs for Nova made it in during the whole Icehouse development cycle.
17:04:20 <mdbooth> hartsocks: Did I miss something, or didn't image cache management make it?
17:04:35 <hartsocks> :-)
17:04:38 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management
17:04:59 <hartsocks> mdbooth: Currently approved but waiting on passing tests for merge last I checked a few minutes ago.
17:05:03 <mdbooth> Ok, interesting
17:05:13 <mdbooth> The changes got in, but the blueprint isn't approved
17:05:25 <hartsocks> #link https://review.openstack.org/#/c/56416/
17:05:37 <hartsocks> mdbooth: that would be a "bug" in the process then.
17:06:11 <hartsocks> heh. Didn't spot that the BP is marked "next" still.
17:06:28 <mdbooth> You commented on the mailing list thread about changing that process, so I guess I don't need to point it out :)
17:06:51 <hartsocks> slightly off topic: https://review.openstack.org/#/c/79363/
17:07:04 <mdbooth> Indeed
17:07:07 <tjones> im split-brained right now.  ping me if you need me
17:07:09 <hartsocks> mdbooth: that's where the start of changing the process begins...
17:07:28 <hartsocks> tjones: cool. Just fyi on that blueprint process change…
17:07:51 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-iso-boot <- our other FFE winner
17:08:18 <hartsocks> #link https://review.openstack.org/#/c/77965/ <- approved for merge but blocked by failing tests right now
17:09:16 <hartsocks> So that's our two features to merge.
17:09:50 <hartsocks> Anything else feature-related has to be moved to Juno now. So I'm told we should have a very active Juno-1 milestone for this group.
17:10:18 <hartsocks> Questions, comments?
17:10:52 <mdbooth> About Icehouse or Juno?
17:11:14 <hartsocks> Let's focus on icehouse for now.
17:12:08 <chaochin> hartsocks: when will we start to evaluate Juno blueprint?
17:12:21 <mdbooth> What's the priority for getting those Icehouse blueprints approved?
17:12:30 <mdbooth> The process has already been subverted
17:12:37 <mdbooth> So I guess it would just be a cleanup exercise
17:12:45 <hartsocks> chaochin: this is a part of the process that has been a bit screwy for me the last 2 cycles...
17:12:51 <hartsocks> that is to say…
17:12:55 <browne> question: so will some blueprints move from nova to oslo.vmware?
17:13:29 <hartsocks> chaochin: you can have a perfectly good blueprint for Juno but it will be -2 "hard" between FF for one release and open season for the next.
17:13:36 <arnaud> browne: at this point the priority is to refactor the code
17:13:48 <hartsocks> chaochin: so blueprints are usually in limbo between now and the RC
17:13:52 <arnaud> and integrate with oslo.vmware as it is today
17:14:28 <hartsocks> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
17:15:08 <hartsocks> browne: on the etherpad https://etherpad.openstack.org/p/vmware-subteam-icehouse I've marked most of the BP that need to move away from Nova to oslo.vmware …
17:15:20 <browne> ok
17:16:13 <hartsocks> arnaud: we have an interesting problem, we have a mandate from Nova core to do a refactor of our driver, but that refactor might conflict with our own efforts for the oslo.vmware work. We will have to work that out with the Nova core over the next few weeks.
17:16:18 <arnaud> I think we need to slow down blueprints that introduce more complexity in vmops
17:16:39 <mdbooth> hartsocks: What's the conflict?
17:16:50 <hartsocks> mdbooth: I've been told that the deferred BP that "just missed" icehouse-3 will be first to be considered for Juno-1
17:17:26 <hartsocks> mdbooth: however, I feel this patch https://review.openstack.org/#/c/56416/ should be a cautionary tale for all of us.
17:17:49 <hartsocks> mdbooth: a fool learns from experience, a wise man learns from history.
17:18:05 <mdbooth> And everybody else uses a bit of both
17:18:23 <vuil> the vsan bp has among its series of patches a significant one that integrates nova with oslo.vmware
17:18:24 <hartsocks> Well, we should make use of both since we have it now. :-)
17:18:37 <vuil> I think that is an important one to get going first.
17:19:08 <vuil> it starts the process, and eliminates more code that we could otherwise be concurrently stepping on.
17:19:18 <arnaud> the vsan patch is the most important to get at this point: I agree with vuil
17:19:21 <mdbooth> I feel like the oslo integration is going to be the most disruptive
17:19:24 <mdbooth> So it should go first
17:19:40 <vuil> yeah. It informs the rest of the refactoring work too.
17:19:46 <hartsocks> Actually, we have been told by Nova core to make this FOB: https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor … should we link the two?
17:20:07 <arnaud> hartsocks: I think this is part 2
17:20:34 <hartsocks> arnaud: I can make the two dependent on each other if folks feel that's best for the sanity of the driver.
17:20:40 <mdbooth> Yeah, I think oslo comes first
17:20:45 <vuil> btw I have started work on that, mostly coz the branch complexity in spawn has gotten to me as I rebase the vsan patches.
17:20:46 <arnaud> mdbooth: +1
17:21:02 <hartsocks> vuil: link to your BP?
17:21:48 <vuil> you mean vsan?
17:21:51 <mdbooth> vuil: The spawn refactor is complex, though, because there are so many things which can hang on it
17:21:58 <vuil> https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support
17:21:58 <mdbooth> Even from an internal nova pov
17:22:11 <mdbooth> Like fixes to BDM
17:22:15 <hartsocks> vuil: the part where you import oslo.vmware not vsan.
17:22:19 <mdbooth> Moving to nova objects
17:22:38 <vuil> well it was offered as part of the vsan bp.
17:22:46 <mdbooth> Majorly refactoring image cache handling now it has its own class
17:22:55 <vuil> kinda was a prerequisite coz vsan needs the functionality in oslo.vmware
17:23:01 <hartsocks> okay, I need you to break those apart, or place the oslo.vmware work under the BP I just listed or we'll be blocked.
17:23:10 <browne> agree
17:23:18 <vuil> which, the spawn-refactor one?
17:23:26 <hartsocks> yes.
17:23:39 <hartsocks> I think you have two equally sane courses of action...
17:23:51 <hartsocks> 1. break open the import of oslo.vmware and make it its own BP
17:24:11 <hartsocks> 2. break open the import of oslo.vmware and make it a partial implementation of https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor
17:24:13 <vuil> I don't see why that would be the case, it does not really deal with refactoring spawn. It's just a good first thing to do before we even attempt to refactor.
17:24:27 <vuil> Pretty sure the cores would understand once they see it.
17:24:35 <hartsocks> Okay, then I will push https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor first and you can rebase vsan on top of that.
17:24:51 <hartsocks> The word is: https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor is going first.
17:24:59 <hartsocks> I'm not dictating that.
17:25:00 <mdbooth> We can work on the spawn refactor concurrently, as they probably won't step on each other architecturally
17:25:16 <mdbooth> However, we shouldn't try to get it reviewed until it's rebased on top of the oslo move
17:25:40 <vuil> mdbooth: +1
17:25:45 <hartsocks> Okay. So these aren't dependent then?
17:26:15 <mdbooth> They will be loosely dependent
17:26:24 <hartsocks> My next question is should we bother to do https://blueprints.launchpad.net/nova/+spec/vmware-spawn-refactor in Nova or should we do it in oslo.vmware first?
17:26:31 <vuil> the oslo.vmware move touches lots of things, but mostly in a fairly mechanical way.
17:26:36 <hartsocks> (or is that a nonsense question?)
17:27:01 <vuil> it's like global renames, you want to do it in one shot, then have the rest come in later.
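The "global renames, in one shot" character of the oslo.vmware move can be sketched as a mechanical rewrite of import sites. This is purely illustrative: the module paths and the `rewrite_imports` helper are assumptions for the sake of example, not taken from the actual patch series.

```python
import re

# Hypothetical illustration of the one-shot, rename-style change:
# call sites move mechanically from a driver-local module to the
# shared oslo.vmware library. Module paths are assumed, not quoted
# from the real patches.
def rewrite_imports(source):
    return re.sub(
        r'from nova\.virt\.vmwareapi import (\w+)',
        r'from oslo.vmware import \1',
        source,
    )

print(rewrite_imports('from nova.virt.vmwareapi import vim_util'))
```

The point of doing it in one shot is exactly that every call site changes the same way, so the rewrite is safe to apply across the tree at once.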
17:27:10 <hartsocks> okay.
17:27:26 <vuil> the refactor work can start concurrently, but what mdbooth said.
17:27:47 <hartsocks> So I planned on doing the spawn refactor quickly since I've been told it will block our other BP unless core get to see it happen.
17:28:03 <hartsocks> This will be a very low-ambition refactor.
17:28:14 <hartsocks> Just sub-methods and tests. Nothing fancy.
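The "sub-methods and tests, nothing fancy" shape of the low-ambition spawn refactor might look like the sketch below. The class and method names are hypothetical, not the actual Nova vmwareapi driver API.

```python
# Hypothetical sketch: one long spawn() split into small,
# individually testable sub-methods. All names are illustrative.

class VMwareVMOps(object):
    def spawn(self, instance_name):
        datastore = self._select_datastore(instance_name)
        config = self._build_vm_config(instance_name, datastore)
        return self._create_vm(config)

    def _select_datastore(self, instance_name):
        # Placeholder: choose a datastore for the instance's disks.
        return 'datastore1'

    def _build_vm_config(self, instance_name, datastore):
        # Placeholder: assemble the VM creation spec.
        return {'name': instance_name, 'datastore': datastore}

    def _create_vm(self, config):
        # Placeholder: call out to vSphere to create the VM.
        return config

vm = VMwareVMOps().spawn('test-instance')
```

Each `_`-prefixed step can then get its own unit test without driving the whole spawn path.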
17:28:42 <mdbooth> hartsocks: Is that worth it?
17:28:48 <mdbooth> No point doing it twice.
17:28:53 <arnaud> hartsocks: this will delay the actual refactor
17:29:11 <browne> i think at this point we're discussing Juno planning, no?  shouldn't we concentrate on icehouse?
17:29:41 <hartsocks> *lol*
17:29:44 <hartsocks> good point.
17:30:02 <hartsocks> tabling blueprint talk for now unless there's a strong objection.
17:30:10 * hartsocks waits
17:30:40 <hartsocks> #topic bugs
17:31:15 <hartsocks> back on https://etherpad.openstack.org/p/vmware-subteam-icehouse I've added a critical bugs section
17:31:48 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1290807
17:32:04 <tjones> ok i am back
17:32:05 <tjones> sorry
17:32:26 <hartsocks> per our rules for Importance we can't list https://bugs.launchpad.net/nova/+bug/1290807 as "critical" so I've mapped it appropriately.
17:32:36 <hartsocks> tjones: hey. Just talking bugs.
17:33:08 <tjones> so those 2 are not on russells list for RC - so they won't block but we can get them in as long as we do it fast
17:33:26 <hartsocks> This appears to be a critical regression introduced by performance "improvements"
17:33:49 <mdbooth> hartsocks: cf datastore browser cache
17:34:05 <hartsocks> ?
17:34:26 <hartsocks> I don't understand
17:34:28 <tjones> which one ?  resize or delete?
17:34:29 <mdbooth> hartsocks: Another object cache, which could potentially be broken by a change of object name?
17:34:52 <hartsocks> mdbooth: yes, one of my favorite (and very commonly seen) problems.
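The failure mode mdbooth describes, an object cache keyed by name going stale when the underlying object is renamed, reduces to a minimal sketch (all names hypothetical, not the driver's actual code):

```python
# Minimal sketch of a name-keyed cache that is broken by a rename.
cache = {}

def lookup(name, fetch):
    # Return the cached object for `name`, fetching on a miss.
    if name not in cache:
        cache[name] = fetch(name)
    return cache[name]

store = {'vm-old': 'object-1'}
first = lookup('vm-old', store.get)   # populates the cache

# The object is renamed out from under the cache...
store['vm-new'] = store.pop('vm-old')

# ...and the stale entry still answers for the old name,
# even though a fresh fetch would now return None.
stale = lookup('vm-old', store.get)
```

Any performance cache keyed on a mutable name needs invalidation on rename, which is why such "improvements" are a common source of regressions.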
17:35:02 <hartsocks> tjones: first one on the list is a regression
17:35:09 <hartsocks> second on the list is
17:35:17 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1289397
17:35:26 <hartsocks> which is either just nutty or really bad.
17:35:46 <hartsocks> That is to say, it's either something silly in the user's environment or we have another big regression.
17:35:59 <hartsocks> I haven't had time to validate it.
17:36:07 <tjones> that one is not assigned - so we need to get someone on it
17:36:10 <mdbooth> Have you read the comments?
17:36:20 <mdbooth> issue is not related to vmdk. issue related to nova only.
17:36:30 <browne> 1289397 is currently incomplete.
17:36:38 * mdbooth doesn't know how much weight that carries
17:36:52 <browne> probably need more details on what image was used
17:36:58 <hartsocks> Yeah. If it's Nova proper it's critical. If it's us then it's critical/high…
17:37:18 <browne> oh debian-2.6.32-i686
17:37:20 <hartsocks> er… high/critical (we don't get to do "critical" when it's only one driver)
17:37:54 <hartsocks> Yeah. If someone can just run down quickly if that's just FUD or if it's real I'd appreciate it. I'm running a couple on-fire things right now.
17:38:05 <tjones> the issue is deleting a shutoff image?  should be easy to repro
17:38:12 <tjones> logs would have been nice
17:38:38 <hartsocks> yep. it's simple to reproduce or it's an environment specific bug.
17:39:05 <tjones> i'll take it
17:39:09 <hartsocks> thanks.
17:39:52 <hartsocks> Does anyone else have pet bugs they think will be *really* bad to see unfixed in icehouse?
17:41:36 <hartsocks> #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
17:41:57 <hartsocks> okay so from now to the RC we're in bug-fix only mode on the master.
17:42:27 <tjones> hartsocks: i think this one should be closed with  image cache in https://bugs.launchpad.net/nova/+bug/1212790
17:42:44 <tjones> hartsocks: never mind
17:42:47 <tjones> different issue
17:43:18 <tjones> i mean image cache will fix the cleaning but not the "cannot delete file" thing
17:44:04 <hartsocks> tjones: that "cannot delete file" thing turned out to be a disk performance issue that crops up if the user is simulating disks or using very non-performant hardware.
17:44:09 <tjones> hartsocks: this one https://bugs.launchpad.net/nova/+bug/1240373
17:44:22 <mdbooth> tjones: Surely that wouldn't be an OpenStack thing if it's truly no longer in use
17:44:50 <tjones> mdbooth: yes that is what i meant about the "cannot delete" thing
17:45:27 <hartsocks> tjones: so 1212790 should just be closed it sounds like.
17:46:13 <tjones> hartsocks: not an openstack bug
17:46:20 <hartsocks> yeah.
17:46:50 <hartsocks> "won't fix"
17:47:06 <tjones> so i think we would like https://bugs.launchpad.net/nova/+bug/1240373 in icehouse.  i did not see that on your list
17:47:21 <hartsocks> I've added that one.
17:47:22 <tjones> also https://bugs.launchpad.net/nova/+bug/1235112 if we can
17:48:16 <hartsocks> hmm… that's set at "medium"
17:48:24 <hartsocks> should that be higher?
17:48:32 <tjones> that's why i said "if we can"
17:48:51 <tjones> it only affects people using iscsi - which has not been an issue so far
17:49:17 <hartsocks> it *seems* like it would be much more critical but if few people use the feature I guess it isn't.
17:49:34 <tjones> exactly - but as soon as we have a customer who wants iscsi they are screwed
17:49:36 <tjones> so ....
17:49:45 <tjones> maybe it should be higher
17:50:34 <hartsocks> I just made it Medium/High because I can.
17:50:39 <tjones> woo hoo
17:50:43 <hartsocks> :-)
17:51:07 <hartsocks> vuil: are you going to be able to deliver on https://bugs.launchpad.net/nova/+bug/1240373 ?
17:51:49 <hartsocks> vuil: does that conflict/coincide with your other vsan changes at all?
17:52:04 <vuil> no not really.
17:52:25 <vuil> for this there was no recourse for a while
17:52:49 <vuil> but there was a recent BP that allows setting a virtual disk capacity size field in the image that we can leverage.
17:52:55 <vuil> icehouse+ only though
17:53:14 <hartsocks> so… you can't fix the bug in icehouse then?
17:53:14 <vuil> (digging… it's a glance bp)
17:53:27 <vuil> for icehouse yes.
17:53:43 <hartsocks> you can fix the bug in icehouse?
17:53:56 <tjones> hartsocks: if time - lets go through this bug list for our bugs.  we have a bunch https://blueprints.launchpad.net/nova/+milestone/icehouse-rc1
17:54:03 <vuil> I need to look into the bp further.
17:54:45 <hartsocks> vuil: okay. update us next week. I will ping each bug we just listed until RC1 ships.
17:55:15 <vuil> will do. fwiw bp I was referring to is (https://blueprints.launchpad.net/glance/+spec/split-image-size)
17:55:32 <hartsocks> hm 5 minutes.
17:55:41 <hartsocks> #link https://blueprints.launchpad.net/nova/+milestone/icehouse-rc1 as tjones mentioned …
17:57:05 <hartsocks> So basically, we need to be sure anything critical to RC1 gets listed there.
17:57:58 <hartsocks> Not sure what we can cover in 3 minutes so I'll just leave it at that.
17:58:11 <browne> can i bring up something real quick
17:58:19 <hartsocks> #topic open discussion
17:58:21 <hartsocks> go for it
17:58:59 <browne> i'd like to get the openstack-vmware channel registered for gerritbot.  but in order to do so, people must be sure to leave the channel so we get ops on it
17:59:28 <browne> so i'd ask that people please leave the channel prior to their weekend break
17:59:55 <hartsocks> #action everyone log off of #openstack-vmware over the weekend so browne can do channel maintenance!
18:00:01 <hartsocks> :-)
18:00:05 <browne> thx
18:00:15 <mdbooth> browne: Bring that up continuously on the channel throughout the week!
18:00:37 <browne> mdbooth: will do
18:00:39 <hartsocks> Okay. That's time. We're on #openstack-vmware where I'm sure much discussion will continue.
18:00:43 <hartsocks> #end
18:00:54 <hartsocks> #endmeeting