13:01:36 <esberglu_> #startmeeting powervm_driver_meeting
13:01:37 <openstack> Meeting started Tue May 30 13:01:36 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:01:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:01:40 <openstack> The meeting name has been set to 'powervm_driver_meeting'
13:02:05 <edmondsw> o/
13:03:51 <esberglu_> #topic In Tree Driver
13:03:59 <esberglu_> #link https://etherpad.openstack.org/p/powervm-in-tree-todos
13:04:35 <esberglu_> List of TODOs is linked above
13:05:17 <esberglu_> With the merge of the full flavor spawn/delete and console last week I think we are moving forward again here
13:06:03 <esberglu_> efried: You around?
13:08:00 <esberglu_> #topic Out Of Tree Driver
13:08:47 <esberglu_> I'm still slowly working backports from IT to OOT as able; there are a few other work items in the link above that are for OOT as well
13:08:54 <thorst> let me ping efried on slack...see if he's around
13:10:11 <esberglu_> Any other topics OOT that need discussion?
13:10:21 <thorst> efried: is joining
13:11:28 <thorst> I think with OOT, we'll be getting a new iSCSI patch from chhavi_ soon
13:11:34 <thorst> which will depend on a new pypowervm release
13:11:39 <thorst> so we'll have to merge that in slowly
13:13:13 <efried> o/  Sorry I'm late, guys.
13:13:20 <esberglu_> efried: np
13:13:37 <esberglu_> Talking OOT now, we can loop back to IT after if you have stuff
13:13:50 <efried> The pypowervm release is branched, but as of last week, I didn't see it in pypi yet.
13:14:16 <efried> Does it have what you need in it, thorst ?
13:14:21 <thorst> efried: nope
13:14:28 <thorst> chhavi_ posted the review
13:14:34 <thorst> I +2'd it for pypowervm, but not in
13:14:37 <thorst> I'd like you to review
13:14:39 <efried> ack
13:15:00 <efried> 5267?
13:15:12 <thorst> yessir
13:16:33 <esberglu_> Cool. That it for OOT?
13:16:47 <thorst> yeah, the iSCSI is for OOT
13:16:52 <thorst> but that's ongoing work
13:18:29 <thorst> I have nothing else
13:19:00 <efried> Was "merges" all we talked about for in-tree?
13:19:06 <efried> Were we planning to come back to that?
13:19:27 <esberglu_> efried: Yeah I was gonna wait for you
13:19:39 <efried> The wait is over!
13:19:45 <chhavi_> efried can u quickly review 5267?
13:19:51 <efried> Not quickly, no.
13:19:51 <chhavi_> its long cid
13:19:58 <efried> I'll do it first thing, though.
13:20:21 <efried> We're going to need to cut 1.1.6 with that, I guess.
13:20:37 <efried> What's the desired time frame on that?
13:21:02 <chhavi_> it's iSCSI, which is for OOT
13:21:42 <edmondsw> I thought they cut 1.1.6 last week? So we might need 1.1.7 for the iSCSI change
13:22:10 <chhavi_> thorst: can you clarify? I am not aware of this plan
13:22:40 <thorst> chhavi_: we have release branches for pypowervm that are independent of NL
13:22:57 <thorst> we can't accept the nova_powervm change that will consume your iSCSI change until it is in a RELEASE of pypowervm
13:23:06 <thorst> fortunately, we can turn around pypowervm releases kind of quickly
13:23:17 <edmondsw> oh, nm my comment above... I think 1.1.5 is still the latest
13:23:28 <edmondsw> I think it was NL that went to 1.0.0.6
13:23:32 <thorst> so once your pypowervm change is approved and merged, we then will need to make a release of pypowervm before we can consider merging the nova_powervm one
13:24:07 <efried> Glad we changed that release numbering scheme to be less confusing.
13:24:13 <edmondsw> chhavi_ that's what I was trying to say when I -1'd your nova_powervm change
13:24:44 <edmondsw> (what thorst was saying)
13:24:49 <chhavi_> ok, yeah that change is dependent on pypowervm
13:25:55 <edmondsw> and we normally bump the requirements version in a separate patch, right? Rather than in the nova_powervm change we want to make that will depend on it
13:25:59 <edmondsw> efried ^
13:26:20 <edmondsw> since we want to do that in global_requirements, not just in nova_powervm, right?
13:26:33 <efried> edmondsw Yes, we do that in g-r and let the bot propose the nova-powervm bump.
13:26:45 <edmondsw> that was going to be my next question, whether we had the bot set up... cool
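(For illustration, a hypothetical version of that bump in openstack/requirements' global-requirements.txt; the version number here is made up, pending the actual pypowervm release. The proposal bot then mirrors the new minimum into nova-powervm's requirements.txt:)

    pypowervm>=1.1.6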
13:27:28 <edmondsw> done with OOT then?
13:27:37 <edmondsw> back to IT?
13:28:14 <esberglu_> #topic In Tree Driver
13:28:15 <efried> let's do it
13:28:22 <efried> So I'm working on get_inventory().
13:28:24 <esberglu_> I wonder if that will merge the two topics
13:28:35 <efried> https://review.openstack.org/#/c/468560/
13:29:39 <efried> There are a lot of open questions here.  As it stands, the change set can probably be considered "good enough", algorithmically.  But we will eventually want to work through it and answer all the pending stuff.
13:30:12 <efried> Y'all want to discuss it in any more detail here?
13:30:14 <thorst> so give that a thorough review?
13:30:21 <thorst> I haven't looked at it yet.
13:30:42 <efried> Yeah, I'm afraid a "thorough review" could entail quite a bit of background reading...
13:31:02 <efried> That's why I'm wondering whether it would be useful to do an overview discussion here.
13:31:10 <edmondsw> I haven't looked either yet, but I will try to do that today once I get through a couple PowerVC defects that have to get fixed today
13:31:12 <thorst> This is just the thing where...
13:31:17 <thorst> don't duplicate the storage space
13:31:21 <efried> Not yet.
13:31:29 <thorst> two nodes using the same storage should report a single amount of storage
13:31:47 <thorst> and you'll be pulling that to OOT as well I assume  ;-)
13:32:04 <efried> So far, this is just the thing where get_available_resource is being superseded by get_inventory, which (internally) uses the placement API to register how much CPU, memory, and disk the host has.
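(A minimal sketch of the get_inventory() return shape, assuming plain-string resource classes and made-up host numbers; the real code is in the review linked above:)

    def get_inventory(nodename):
        # Totals only -- nova derives "used" from the claims attached
        # to the instances it has deployed (discussed below).
        return {
            'VCPU': {'total': 16, 'reserved': 0, 'min_unit': 1,
                     'max_unit': 16, 'step_size': 1,
                     'allocation_ratio': 16.0},
            'MEMORY_MB': {'total': 262144, 'reserved': 4096,
                          'min_unit': 1, 'max_unit': 262144,
                          'step_size': 1, 'allocation_ratio': 1.0},
            'DISK_GB': {'total': 2048, 'reserved': 0, 'min_unit': 1,
                        'max_unit': 2048, 'step_size': 1,
                        'allocation_ratio': 1.0},
        }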
13:32:38 <efried> Yes, I'll pull it OOT once we have it settled.  Not going to even try that until it's approved and mergeable.
13:33:08 <efried> SSP-ness is one of the open questions, which I figured would probably be a future change set.
13:33:32 <efried> The other thing is figuring out "reserved" properly for all the resources.
13:33:51 <efried> And "allocation_ratio".
13:34:15 <efried> The other drivers aren't doing any of that yet, so I feel comfortable doing it incrementally.
13:34:33 <edmondsw> sounds good to me
13:35:14 <efried> Point is, when you're reviewing this, at least take a look at the other drivers' get_inventory() methods (I think only libvirt and ironic currently implement it)
13:35:36 <efried> yeah
13:35:41 <efried> unless proposed-but-not-merged.
13:35:53 <thorst> neat
13:36:00 <thorst> cool to see we're relatively bleeding edge here
13:36:01 <thorst> :-)
13:36:35 <efried> I'll add blueprint references to the etherpad, in case you want to go more in-depth.
13:38:37 <efried> Other in-tree news: only the SSP change set remains before the nova team will consider the pike blueprint for the powervm driver "complete".
13:38:57 <efried> It has sdague +2, so I'll pester mriedem at some point this week to do his pass.
13:39:21 <efried> The get_inventory change depends on it; otherwise I would have to cut one version of that without disk.
13:39:35 <efried> ...and then another version on top of the SSP change to add disk.
13:39:50 <efried> So I decided to do it this way - anyone have objections to that?
13:40:16 <thorst> efried: so...that means we can deploy without networks
13:40:16 <esberglu_> efried: Makes sense to me
13:40:19 <thorst> come pike?
13:40:26 <edmondsw> do you think the get_inventory change is going to make pike this way?
13:40:33 <edmondsw> and do we care if it slips to queens?
13:40:43 <edmondsw> sounds like it would
13:40:50 <efried> edmondsw Yes, I think it'll make pike, but no, it doesn't break the world if it slips to queens.
13:40:59 <efried> They're not getting rid of get_available_resource yet.
13:41:08 <thorst> yeah, queens is really where we expect to have something viable in-tree
13:41:31 <efried> The consuming code tries get_inventory, and if it's NotImplemented, falls back to get_available_resource.
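(A rough sketch of that fallback as described, assuming a driver object and node name; this mirrors the behavior rather than quoting nova's resource tracker:)

    def gather_resources(driver, nodename):
        try:
            # Preferred path: host totals only, fed to placement.
            return driver.get_inventory(nodename)
        except NotImplementedError:
            # Legacy path: totals plus used amounts.
            return driver.get_available_resource(nodename)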
13:41:54 <efried> I think they're going to deprecate in queens, remove in rocky.
13:42:15 <efried> assuming they can make the whole thing work in the first place, which I'm not totally convinced of yet.
13:42:48 <efried> One of the major changes is that get_inventory, unlike get_available_resource, is not responsible for telling you how much resource is used - just how much total the host has.
13:43:05 <efried> The theory being that nova is keeping track of how much is used based on the claims attached to the instances it has deployed.
13:43:26 <efried> Which makes the 'reserved' business much more important - and more difficult - for us to account for.
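(A toy illustration with made-up numbers: if the VIOS and firmware hold memory that never shows up as a nova claim, it has to be reported as 'reserved' or the scheduler will hand it out:)

    total_mb = 262144        # what the managed system reports
    overhead_mb = 16384      # VIOS/firmware; never a nova claim
    reserved_mb = overhead_mb
    schedulable_mb = total_mb - reserved_mb  # 245760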
13:45:00 <efried> Perhaps we should set up a separate slot (or stick around after this meeting) to go into that stuff in more depth.
13:46:16 <efried> <crickets>
13:46:39 <thorst> <reading....>
13:46:57 <edmondsw> efried I would suggest a separate slot
13:46:58 <thorst> yeah.
13:47:04 <thorst> but I would like to read the review first
13:47:07 <thorst> bang my head on it
13:47:20 <edmondsw> yeah
13:47:31 <efried> Okay.  I tried to leave appropriate TODOs for the stuff I've mentioned above, but that's not a lot of context.
13:47:47 <efried> So just grab me on IRC when y'all are ready.
13:48:15 <efried> Otherwise in-tree: There are a couple of major pieces of work we should be looking at in the near future.
13:48:35 <efried> First priority, getting powervm into the support matrix.
13:48:59 <efried> Anyone volunteering to take that on?
13:49:16 <efried> This should be fairly straightforward.
13:49:55 <esberglu_> efried: I can do that. Is that just a docs change?
13:50:10 <efried> esberglu_ Yes, more or less.  (Code-as-doc)
13:50:29 <efried> esberglu_ The links in the etherpad should get you pointed in the right direction.
13:50:34 <efried> I'll put your name on it.  Thank you.
13:50:35 <esberglu_> efried: Perfect
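(A loose sketch of the INI entries involved, assuming the support matrix keeps its current format; the driver key and operation names here are guesses:)

    [driver.powervm]
    title=PowerVM

    [operation.attach-volume]
    driver.powervm=missing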
13:51:49 <efried> Related, but more involved, and possibly not destined to make pike, is the item called "Document flavor extra_specs" in the etherpad.
13:52:00 <efried> Just documenting our extra_specs wouldn't be a huge deal...
13:52:16 <efried> ...but mriedem is looking at a bigger picture here, I think.
13:52:33 <efried> In terms of getting at least the framework for broader per-driver documentation into the tree.
13:53:27 <efried> If nobody feels like starting to look into that right away, we can put it off for a bit.
13:54:11 <efried> The last major item, more fun, actual code, is "Deactivate compute service if pvm-rest is dead."
13:55:21 <efried> Anyone feel like digging into that in the near future?
13:55:53 <edmondsw> efried what does deactivate entail?
13:56:12 <efried> Mark the compute host as down so nothing gets scheduled to it.
13:56:29 <edmondsw> and then keep listening and bring it back up when pvm-rest comes back?
13:56:36 <efried> Yeah.  The links in the etherpad point to code we can crib from vcenter to do the deactivate/reactivate part.
13:56:41 <edmondsw> cool
13:56:59 <edmondsw> are we trying to do that for pike?
13:57:06 <efried> So it would just be figuring out how (easy) and where (??) to listen for pvm-rest being dead.
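(A rough sketch of the deactivate half, assuming it runs inside the compute service with nova objects registered; _handle_rest_down and the reason string are made up, and the vcenter code linked in the etherpad is the real model:)

    from nova import context as nova_context
    from nova import objects

    def _handle_rest_down(host):
        # Disable the service so the scheduler skips this host; the
        # reactivate path would undo this when pvm-rest comes back.
        admin = nova_context.get_admin_context()
        svc = objects.Service.get_by_compute_host(admin, host)
        svc.disabled = True
        svc.disabled_reason = 'pvm-rest unreachable'
        svc.save()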
13:57:22 <efried> edmondsw That would be nice, but I don't think it's critical.
13:57:40 <edmondsw> this or the extra_specs doc higher priority?
13:58:07 <efried> nova took some pressure off of that recently by putting in a change that deactivates the compute service if it fails some number of deploys in a row (configurable, 10 by default, I think).
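(If memory serves, that knob looks roughly like this in nova.conf; double-check the option name before relying on it:)

    [compute]
    consecutive_build_service_disable_threshold = 10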
13:58:21 <efried> edmondsw That's kind of an apples/oranges question.
13:58:52 <efried> If you have the space for it and it sounds interesting to you, grab it.
13:59:24 <edmondsw> efried I can't start on it today, but you can put my name there and I will start on it later this week
13:59:52 <efried> edmondsw Cool man.  Links in the etherpad should get things rolling; let me know if you want more pointers/background.
13:59:58 <edmondsw> tx
14:00:43 <edmondsw> time check... 1 minute
14:00:45 <efried> Only other in-tree stuff is the small stuff, low-hanging fruit.  I think esberglu_, you have your eye on picking that stuff off?
14:01:04 <esberglu_> efried: Yep, slowly picking it off when I'm not fighting CI
14:01:21 <esberglu_> #topic CI
14:01:25 <efried> Cool.  Just put your name on stuff and add review links as you go so nobody ends up duplicating.
14:01:42 <esberglu_> So master CI is back up and passing runs are going through
14:02:08 <esberglu_> But not a good percentage
14:02:25 <esberglu_> The stable branches were broken for everything except nova
14:02:34 <esberglu_> The patch application logic would try to apply the patches regardless of branch
14:02:45 <esberglu_> Which would fail to merge on the stable branches
14:02:52 <esberglu_> Since all the patches we applied are now merged, I updated the jenkins jobs
14:03:04 <esberglu_> So they should be working now; I have newton and ocata CI checks running
14:03:13 <esberglu_> Long-term I can just add a branch check there
14:03:13 <esberglu_> But I'm going to be reworking that in other ways as well
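(Something like this minimal guard, assuming the jobs still see Zuul's branch environment variable; the variable name may differ in our setup:)

    import os

    def should_apply_patches():
        # Only master ever needed the unmerged patches; stable branches
        # take vanilla upstream now that everything has landed.
        return os.environ.get('ZUUL_BRANCH', '') == 'master'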
14:03:35 <thorst> esberglu_: so nova_powervm is still borked?
14:03:41 <thorst> for stable/ocata?
14:03:51 <thorst> I know that the PVC team is...very eager...for that to be working
14:03:52 <esberglu_> thorst: Nope, it should be good now. Let me confirm once the CI checks finish
14:04:12 <esberglu_> But I posted a recheck on larese's patch
14:04:22 <thorst> good good
14:04:31 <thorst> I think he had a heart attack on Friday when I said it was down
14:05:04 <edmondsw> tx for the quick fix esberglu_
14:05:43 <esberglu_> edmondsw: np
14:05:56 <edmondsw> esberglu_ anything else on the CI?
14:05:57 <thorst> +2
14:05:59 <esberglu_> If anyone has extra time this week and feels like looking into the failing CI logs, let me know
14:06:06 <esberglu_> That's all I had for CI
14:06:16 <efried> I'll be looking at my changes, anyway.
14:06:16 <esberglu_> #topic Driver Testing
14:06:32 <efried> I don't see Jay or Nilesh.
14:06:51 <esberglu_> Any new status?
14:06:53 <edmondsw> chhavi_ do you know anything there?
14:08:34 <esberglu_> Let's move on since we are running over time
14:08:43 <esberglu_> #topic Open Discussion
14:08:50 <esberglu_> Any final thoughts before I call it?
14:09:41 <thorst> everyone's doing real good.
14:09:44 <thorst> :-p
14:09:52 <edmondsw> +1
14:09:55 <efried> :)
14:10:26 <esberglu_> :)
14:10:43 <esberglu_> Thanks for joining
14:10:48 <esberglu_> #endmeeting