14:03:12 #startmeeting powervm_driver_meeting
14:03:13 Meeting started Tue Dec 6 14:03:12 2016 UTC and is due to finish in 60 minutes. The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:03:16 The meeting name has been set to 'powervm_driver_meeting'
14:03:25 \o
14:04:09 I know efried is out today, so we won't get him
14:04:17 o/
14:05:18 #topic status
14:05:23 qing wu is out today too
14:05:42 So the CI is looking really good
14:06:47 Out of the last 25ish runs, there have been only 2 patches that have failed tempest and a few more that failed bc they need a rebase
14:07:30 Were those 2 legitimate failures? Anything we need to fix from them?
14:07:44 Well one of them is a WIP
14:07:46 https://review.openstack.org/#/c/319379/41
14:08:05 esberglu: does that also include nova runs?
14:08:11 Yeah
14:08:21 That one was a nova run
14:08:31 sweet.
14:08:38 so, start publishing logs?
14:09:03 I think we are ready
14:09:10 I reckon so
14:09:24 Milestone!
14:09:41 I just need to create a new zuul pipeline for non-voting
14:09:54 #action esberglu: Create a non-voting zuul pipeline
14:10:17 Yeah, and disable the current silent one so we're not running things twice?
14:10:23 #action esberglu: Turn log publishing on
14:10:28 adreznec: Yep
14:10:55 what about e-mails when nova fails?
14:11:10 I'd like to make sure we can react within 12-24 hours to 'things going to hell'
14:11:16 and turn off Nova runs when that happens
14:11:20 cause things will go to hell
14:11:30 I can just have it send it to the same mailing list that nova-powervm etc. do on failures
14:11:35 Yeah
14:11:38 Should be easy enough
14:11:49 No emails on success though, don't want to spam the inbox
14:11:58 Yep
14:12:02 esberglu: can you check how things look daily, when you come in and a few hours before you head out?
14:12:12 and maybe get that into qingwu's routine as well
14:12:31 Yep I pretty much do that already
14:12:44 cool, then we just need to make it part of qingwu's routine
14:13:21 So that pretty much covers it from the devstack CI side of things
14:13:22 Honestly, at some point here we're going to want an alert system
14:13:30 +2
14:13:42 Something that tracks the run status and reports if failures > some threshold
14:13:45 we should ask in openstack-infra on that.
14:13:52 cause we're not the only ones that would want that.
14:13:59 +1
14:14:01 and maybe not the first to think about doing it
14:14:39 Some people use CI Watch here. Not sure what it takes to get added to that
14:14:41 http://ci-watch.tintri.com/
14:14:46 Agreed. Not that hard to build, but even nicer if we don't have to build our own
14:15:04 I don't know if that has an alert system built in or not
14:15:58 Yeah we'll have to investigate that
14:16:04 Anything that helps us respond to failures more quickly is good
14:16:22 esberglu: the notes/etc you've been putting together for qing wu - are those going on a wiki or something?
14:16:49 Yeah. I was gonna finish them last night, but then I saw he was gonna be out so I'm gonna make sure they are done by Thursday when he is back
14:16:55 Just thinking it's always good to have as many references as possible
14:16:56 Ok
14:17:13 Once I put it up I will email you guys so you can take a look
14:17:18 +1
14:17:52 #topic OSA CI
14:18:07 So I would like to get working on the OSA CI again
14:18:35 But at this point I would have to sacrifice the staging CI system for OSA CI development
14:18:54 thorst: did you ever double check the state of neo4/neo14?
14:18:54 I emailed Anil Kumar about helping to get that other system set up for a 2nd staging env.
14:19:10 adreznec: nope. Got caught up in other bits...seemed like we had enough to go for now
14:19:11 You were thinking Qing wu might have an extra system?
14:19:18 Ok
14:19:27 qingwu definitely has ownership of neo4.
I don't think it's actively being used
14:19:36 so we could either take that for dev or add it to the CI pool
14:20:17 Right ok
14:21:20 esberglu: would that work as an OSA CI system?
14:21:27 I think so
14:21:30 Yeah I think so
14:21:53 Awesome
14:22:11 Should give you room to experiment there then
14:22:45 Awesome. Gonna be super nice to be able to do that and have another staging env. to fall back on
14:22:45 I think we left off with the OSA CI at the multiple OVS issue
14:23:07 #action: esberglu Set up OSA CI dev env.
14:23:10 qingwu sent me a note on that which I haven't responded to
14:24:09 #action thorst to review the OVS OSA setup
14:24:55 I'm also gonna need 3 x86 systems for the new OSA CI env
14:25:04 hmm
14:25:12 * adreznec record scratch
14:25:35 OK - we'll have to just build you some
14:25:43 I made space on a VM host the other day...
14:26:02 I might just ask you to do the installs if I walk you through how to connect in
14:27:26 OK
14:27:40 #action thorst: esberglu: Get x86 systems set up for OSA CI dev
14:28:35 I know thorst has to run. And that's all I had. So I will close the meeting unless there are objections
14:28:51 one more thing
14:28:58 driver status...
14:29:05 #topic Driver Status
14:29:05 so we have the live migration object in nova
14:29:12 we are taking it out of nova-powervm
14:29:28 next up is the skeleton that efried proposed, but I'd recommend we do that after we push through some CI log runs
14:29:30 * adreznec party
14:29:43 I would simply ask that adreznec, esberglu and myself review efried's change if we haven't already
14:29:58 Will do
14:30:00 Yeah I'd like probably 25 solid runs there with logging before we start down that path
14:30:15 Just so we have history
14:30:25 agree...and even 25 runs isn't 'history'
14:30:33 but let's get our ducks in a row before we ask for real reviews.
14:30:44 but let's also get our personal reviews in before break
14:30:59 that's all I had
14:31:21 Right
14:31:26 But it's better than 0
14:31:27 :P
14:31:53 +1
14:32:19 I think the only other piece of driver status is that tjakobs is starting to look at image cache work
14:32:47 It'd be nice to try and land that in Ocata still if we can make it work
14:33:31 That's all I have as well
14:36:24 #endmeeting
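The alert system discussed above (track run status, alert only when failures exceed some threshold, email the existing failure mailing list, never mail on success) could be sketched roughly as below. This is a minimal illustration, not anything the team agreed to implement; the threshold value, window size, status strings, and the mailing-list address are all made-up placeholders, and actually sending the message (e.g. via smtplib) is left out.

```python
from email.message import EmailMessage

# Illustrative values only -- tune to the real CI's volume and list address.
FAILURE_THRESHOLD = 5
MAILING_LIST = "powervm-ci-alerts@example.com"  # placeholder address


def failures_in_window(recent_runs):
    """Count failed runs in the window; statuses assumed to be strings
    like "SUCCESS" / "FAILURE" (a made-up convention for this sketch)."""
    return sum(1 for status in recent_runs if status == "FAILURE")


def threshold_exceeded(recent_runs, threshold=FAILURE_THRESHOLD):
    """True only when failures in the window exceed the threshold,
    so routine one-off failures never trigger an alert."""
    return failures_in_window(recent_runs) > threshold


def build_alert(recent_runs, list_addr=MAILING_LIST):
    """Build the alert email. Called only when threshold_exceeded() is
    True, so nothing is ever sent on success -- no inbox spam."""
    failures = failures_in_window(recent_runs)
    msg = EmailMessage()
    msg["Subject"] = (
        f"[powervm CI] {failures} failures in last {len(recent_runs)} runs"
    )
    msg["To"] = list_addr
    msg.set_content(
        "CI failure threshold exceeded; check recent runs and consider "
        "disabling nova runs until the cause is found."
    )
    return msg
```

A periodic job would feed the last N run statuses into `threshold_exceeded()` and hand the result of `build_alert()` to an SMTP client only when it returns True.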