14:01:47 <esberglu> #startmeeting powervm_driver_meeting
14:01:48 <openstack> Meeting started Tue Dec 13 14:01:47 2016 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:51 <openstack> The meeting name has been set to 'powervm_driver_meeting'
14:02:08 <thorst> o/
14:02:11 <esberglu> No I didn't. I tried last night and couldn't get in. Now I can log in but everything I click on errors out
14:02:27 <thorst> You need to request access to a project
14:04:17 <adreznec> That's a bizarre system...
14:04:27 <thorst> Step 1: Sign in
14:04:33 <thorst> Step 2: Request project access
14:04:38 <thorst> Step 3: Wait indefinitely
14:04:53 <esberglu> I can't even request anything
14:05:01 <thorst> hmm...pm him to see
14:05:03 <esberglu> When I click requests it says an error occurred
14:05:08 <adreznec> #agile
14:05:11 <thorst> we can just get started with the driver meeting for now
14:05:27 <esberglu> #topic openstack ci
14:05:43 <adreznec> 404 more zuul mergers not found
14:06:01 <esberglu> Did you guys have a chance to look at the patch for that?
14:06:09 <thorst> I did not.
14:06:33 <adreznec> Not yet, was busy in meetings
14:07:04 <thorst> #action thorst and adreznec to do their reviews
14:07:14 <esberglu> Okay. I might be missing some stuff on it but I mostly just took the stuff we needed from that ansible zuul role
14:08:12 <esberglu> adreznec: Do you know how zuul knows that additional zuul mergers exist?
14:08:15 <adreznec> Ah there it is, I wasn't actually on the review. Looking now
14:08:26 <adreznec> I think they report into gearman
14:08:41 <adreznec> and register internally with zuul from there
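[Editor's note: as discussed above, zuul mergers advertise themselves by registering functions with the gearman server. One quick way to see how many mergers are registered is to parse the output of gearman's admin `status` command (e.g. `echo status | nc <gearman-host> 4730`). The sketch below is illustrative; the sample output and host names are made up for the example.]

```python
# Parse gearman "status" output: each line is
#   FUNCTION \t TOTAL \t RUNNING \t AVAILABLE_WORKERS
# and a line containing only "." terminates the listing.
def parse_gearman_status(raw):
    """Return {function: (total, running, workers)} from 'status' output."""
    functions = {}
    for line in raw.splitlines():
        if line.strip() == ".":  # end of listing
            break
        name, total, running, workers = line.split("\t")
        functions[name] = (int(total), int(running), int(workers))
    return functions

# Hypothetical sample resembling a backed-up queue with a single merger
sample = (
    "build:gate-nova-python27\t3\t1\t5\n"
    "merger:merge\t77\t1\t1\n"
    ".\n"
)

status = parse_gearman_status(sample)
total, running, workers = status["merger:merge"]
print(f"merger workers registered: {workers}, merge jobs pending: {total}")
```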
14:11:13 <esberglu> Alright. The CI runs are still looking fine once they make it through the queue
14:11:22 <esberglu> But our queue is now backed up 77 jobs
14:11:32 <thorst> lol
14:11:37 <thorst> so that's top CI priority
14:11:37 <adreznec> Awesome
14:11:39 <adreznec> Yeah
14:11:46 <adreznec> we need to get those extra nodes up
14:12:02 <esberglu> Jupiter froze my browser again
14:12:18 <adreznec> Man
14:12:20 <adreznec> What are you using
14:12:28 <adreznec> It worked fine for me in FF
14:12:32 <esberglu> IBM's Firefox
14:12:36 <adreznec> Lol
14:12:43 <adreznec> Yeah, I dumped the IBM FF a long time ago
14:12:53 <adreznec> Too many versions behind
14:13:28 <adreznec> Try chrome or safari or something
14:13:36 <adreznec> At least that'll tell you if it's the browser
14:13:56 <thorst> #action esberglu Get into Jupiter cloud, make zuul mergers
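[Editor's note: standing up an additional zuul merger on a new node mostly comes down to pointing it at the shared gearman server in zuul.conf. A minimal sketch for a zuul v2-era merger follows; the host names and paths are placeholders, not the team's actual values.]

```ini
[gearman]
; shared gearman server that the main zuul scheduler runs
server=zuul.example.org
port=4730

[merger]
; local git cache for this merger node
git_dir=/var/lib/zuul/git
; URL prefix jobs use to fetch merged refs from this merger
zuul_url=http://merger01.example.org/p
```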
14:14:17 <esberglu> I think that's pretty much it for openstack CI
14:14:30 <adreznec> Yeah, hard to do much until we solve that
14:15:12 <esberglu> #topic OSA CI
14:15:30 <esberglu> wangqwsh: Did you say we could use neo4 for CI?
14:15:57 <wangqwsh> no, I did not
14:16:13 <esberglu> Okay.
14:16:33 <esberglu> thorst: Did you find out if neo50 could be used for OSA CI or if that was for SDN?
14:16:51 <thorst> esberglu: I think that is for CI
14:16:58 <thorst> we have two others waiting for Neutron SDN stuff
14:17:03 <wangqwsh> but I do not use neo4. If we need it for CI, we can use neo4
14:17:34 <thorst> alright, so we have neo4 and 50 for OSA CI stuff :-)
14:17:39 <esberglu> Okay. I think it would be beneficial to have 2 systems for OSA CI since it is more resource heavy than openstack
14:17:53 <esberglu> And then keep neo14 for the openstack CI staging env.
14:18:11 <esberglu> #action: esberglu: Get neo4 and neo50 configured for CI
14:18:46 <esberglu> wangqwsh: Anything you need to do/get on neo4 before I start using it?
14:19:09 <wangqwsh> no, you can use it directly
14:19:13 <esberglu> Cool
14:19:20 <thorst> I think a system rebuild would be good
14:19:27 <thorst> it has traditional VIOSes and what not
14:19:41 <esberglu> Yeah. I think neo50 got installed with 2 VIOSes as well
14:19:55 <esberglu> So I will rebuild both
14:20:16 <thorst> wait...maybe we need it that way
14:20:20 <thorst> cause SSPs...
14:20:33 <esberglu> All of the other CI systems have 1 VIOS
14:20:35 <thorst> I think we want single VIOS (for SSP) and then the SDN config on the NL
14:20:44 <thorst> right...ok
14:20:48 <thorst> I think we're saying the same thing
14:21:06 <esberglu> Yep
14:21:26 <esberglu> #topic Drivers
14:21:50 <thorst> My status there is that I pushed up four WIP reviews after getting asked by mriedem where we were at
14:22:08 <thorst> this is for the powervm integration into nova proper
14:22:18 <thorst> we are behind, and need to make up ground there...which is a challenge with the holidays
14:22:33 <thorst> my reviews did not have UT on them.  wangqwsh has been doing an excellent job adding it
14:22:36 <thorst> and we now need to review
14:22:46 <thorst> https://review.openstack.org/#/c/391288/
14:23:38 <esberglu> #action: all: review the WIP nova patches
14:24:45 <esberglu> The ocata 2 milestone is this week so I will be tagging the out of tree drivers for that
14:25:03 <esberglu> #action esberglu: Tag out of tree drives for ocata 2 milestone
14:28:03 <esberglu> Anything else for the drivers?
14:28:04 <thorst> wangqwsh: for the nova driver review...as comments come in from people generally new to the powervm driver (ex. Ed Leafe)...they'll have great comments that we need to update the patch for
14:28:14 <thorst> but we also want to make sure you then propose a corresponding patch back to nova-powervm's driver
14:28:27 <thorst> to keep the out-of-tree driver in line with the in-tree driver
14:28:30 <thorst> does that make sense?
14:29:22 <wangqwsh> ok... I do not sync comments to nova-powervm
14:29:29 <wangqwsh> I will do it later
14:29:46 <thorst> sure
14:29:48 <thorst> thx!
14:29:54 <wangqwsh> :)
14:31:00 <thorst> #action wangqwsh to push updates back into nova-powervm based on review comments on nova WIP patches
14:31:05 <thorst> that's all I had for the driver bits
14:31:14 <esberglu> #topic General Discussion
14:31:34 <esberglu> So these additional zuul mergers should work around the problem for now
14:31:54 <esberglu> But it still comes down to the poor network performance interacting with git.o.o
14:32:05 <thorst> yeah, but that'll take a while
14:32:07 <adreznec> That kind of sounds like a combination issue
14:32:09 <esberglu> At one point, 1 zuul merger was handling WAY more volume than we are seeing now
14:32:13 <adreznec> Between git.o.o and the POK lab network
14:32:18 <thorst> we need to get a ticket open to the network team
14:32:26 <thorst> and give them endpoints they can work with
14:32:35 <thorst> they may need to escalate to the back bone provider...
14:34:55 <esberglu> Can one of you guys chase that at some point? I don't think I know enough about the network
14:35:30 <thorst> pm me a source IP and we'll get a ticket open...as well as recreate steps
14:35:34 <thorst> I'll open it to the network team
14:35:52 <thorst> but, if they end up having to work with the backbone provider...expect months
14:35:58 <thorst> also, I think this is just a solid idea anyway
14:37:06 <thorst> anything else?
14:37:15 <esberglu> Yeah
14:37:32 <esberglu> wangqwsh: Did you see my note about CI deployments?
14:37:38 <wangqwsh> yes
14:38:01 <esberglu> I said in there to try deploying the staging environment. Hold off on that for now while we work through this zuul merger thing
14:38:09 <wangqwsh> but i am working on WIP patch sets. so I will check it later
14:38:35 <wangqwsh> ok
14:38:54 <esberglu> Okay. I will let you know when it is not being used and you can try to deploy it then
14:39:05 <wangqwsh> sure, thanks
14:39:56 <esberglu> The last thing I had. If you guys want a quick look at how the CI is performing on nova you can look here
14:40:01 <esberglu> http://ci-watch.tintri.com/project?project=nova
14:40:09 <esberglu> And scroll down to IBM PowerVM CI
14:40:26 <thorst> hah!  we're so behind!
14:40:30 <thorst> but excellent to be in there
14:41:07 <esberglu> I need to figure out why sometimes the CI reports as IBM PowerVM CI and sometimes as IBM PowerVM CI check
14:41:41 <esberglu> That covers everything from my end
14:41:54 <esberglu> Anyone else have final topics?
14:42:07 <wangqwsh> thorst: what is the deadline of WIP-x patch sets?
14:42:13 <wangqwsh> this week?
14:42:54 <thorst> wangqwsh: Let's stabilize the first one this week. Get a good head start on the second and try to have that stable by mid next week.
14:43:29 <thorst> I also need to make a WIP 5 for the VIF bits.
14:43:56 <wangqwsh> ok, got it
14:44:25 <thorst> many reviewers will be out in the next couple of weeks...starting on Friday, at least in the US, many people will be off till Jan 2nd
14:44:40 <thorst> but we want as much of this lined up for review as possible on Jan 2.
14:45:48 <wangqwsh> ok
14:49:06 <esberglu> #endmeeting