07:01:48 <tchaypo> #startmeeting tripleo
07:01:50 <openstack> Meeting started Wed Sep  3 07:01:48 2014 UTC and is due to finish in 60 minutes.  The chair is tchaypo. Information about MeetBot at http://wiki.debian.org/MeetBot.
07:01:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
07:01:53 <openstack> The meeting name has been set to 'tripleo'
07:01:59 <tchaypo> \o
07:02:00 <marios> o/
07:02:03 <GheRivero> o/
07:02:04 <tchaypo> Have we the people?
07:02:07 <StevenK> o/
07:02:14 <tchaypo> We have the people!
07:02:28 <tchaypo> Pushing on
07:02:30 <marios> start it and they will come
07:02:31 <tchaypo> #topic agenda
07:02:33 <tchaypo> * bugs
07:02:35 <tchaypo> * reviews
07:02:37 <tchaypo> * Projects needing releases
07:02:39 <tchaypo> * CD Cloud status
07:02:41 <tchaypo> * CI
07:02:43 <tchaypo> * Tuskar
07:02:45 <tchaypo> * Specs
07:02:47 <tchaypo> * open discussion
07:02:49 <tchaypo> Remember that anyone can use the link and info commands, not just the moderator - if you have something worth noting in the meeting minutes feel free to tag it
07:02:51 <tchaypo> #topic bugs
07:03:16 <tchaypo> So I'll start with an update on two bugs I was chasing as an action from last week
07:03:40 <tchaypo> Michael Kerrin is on leave for another few days - so no progress on https://bugs.launchpad.net/tripleo/+bug/1263294
07:03:43 <uvirtbot> Launchpad bug 1263294 in tripleo "ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'" [Critical,In progress]
07:04:06 <tchaypo> unless someone wants to grab that..
07:04:45 <tchaypo> and https://bugs.launchpad.net/tripleo/+bug/1317056 - there doesn't seem to be anything for us to do, so I've downgraded it from critical
07:04:46 <uvirtbot> Launchpad bug 1317056 in tripleo "Guest VM FS corruption after compute host reboot" [High,Incomplete]
07:04:58 <tchaypo> Unless someone is able to reproduce it, I plan to just close it
07:05:31 <tchaypo> urr, I should note some of that
07:06:43 <tchaypo> #note As followup from last week's meeting, https://bugs.launchpad.net/tripleo/+bug/1317056 has been downgraded as it doesn't seem to be reproducible, and https://bugs.launchpad.net/tripleo/+bug/1263294 is waiting for mkerrin to come back from leave and update
07:06:44 <uvirtbot> Launchpad bug 1317056 in tripleo "Guest VM FS corruption after compute host reboot" [High,Incomplete]
07:06:55 <tchaypo> #link https://bugs.launchpad.net/tripleo/
07:06:56 <tchaypo> #link https://bugs.launchpad.net/diskimage-builder/
07:06:58 <tchaypo> #link https://bugs.launchpad.net/os-refresh-config
07:07:00 <tchaypo> #link https://bugs.launchpad.net/os-apply-config
07:07:02 <tchaypo> #link https://bugs.launchpad.net/os-collect-config
07:07:04 <tchaypo> #link https://bugs.launchpad.net/os-cloud-config
07:07:06 <tchaypo> #link https://bugs.launchpad.net/tuskar
07:07:08 <tchaypo> #link https://bugs.launchpad.net/python-tuskarclient
07:07:34 <tchaypo> I've had a quick look at the TripleO bugs
07:07:41 <tchaypo> https://bugs.launchpad.net/tripleo/+bug/1364345 needs an assignee
07:07:43 <uvirtbot> Launchpad bug 1364345 in tripleo "CI tests are pulling in the wrong version of git repo's" [Critical,Triaged]
07:08:23 <tchaypo> and aside from the ones I've already mentioned, there's just one other critical that's in progress
07:11:07 <tchaypo> Does anyone want to claim 1364345?
07:11:53 <tchaypo> Tuskar has one critical - https://bugs.launchpad.net/tuskar/+bug/1357525
07:11:54 <uvirtbot> Launchpad bug 1357525 in tuskar "tuskar-api fails with "no such option: config_file"" [Critical,Fix committed]
07:12:08 <tchaypo> it's assigned to dougala
07:13:39 <marios> seems like he has ideas about what is needed there anyway
07:14:01 <shadower> fyi dougal's on a vacation until the end of this week
07:15:16 <tchaypo> Any other comments before we move on?
07:15:43 <tchaypo> #note https://bugs.launchpad.net/tripleo/+bug/1364345 is looking for an owner
07:15:44 <uvirtbot> Launchpad bug 1364345 in tripleo "CI tests are pulling in the wrong version of git repo's" [Critical,Triaged]
07:16:01 <tchaypo> #topic reviews
07:16:14 <tchaypo> #info There's a new dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
07:16:16 <tchaypo> #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
07:16:18 <tchaypo> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
07:16:20 <tchaypo> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
07:16:39 <tchaypo> #link http://lists.openstack.org/pipermail/openstack-dev/2014-September/044691.html
07:17:12 <tchaypo> #info I posted ^^^ to the list today, pointing out that ~50% of our core reviewers are under their 3 reviews/day commitment on both 30 and 90 day reports
07:17:38 <tchaypo> I think this is part of what's hurting our velocity here
07:17:57 <tchaypo> Stats since the last revision without -1 or -2 :
07:17:59 <tchaypo> Average wait time: 12 days, 3 hours, 22 minutes
07:18:01 <tchaypo> 1st quartile wait time: 4 days, 19 hours, 3 minutes
07:18:03 <tchaypo> Median wait time: 7 days, 22 hours, 55 minutes
07:18:05 <tchaypo> 3rd quartile wait time: 19 days, 15 hours, 36 minutes
07:18:09 <marios> ouch
07:18:46 <jprovazn> I think this is at least partially caused by vacation time
07:18:48 <GheRivero> summer is usually a period where things slow down
07:18:55 <shadower> yea
07:19:03 <tchaypo> you crazy northerners :)
07:19:07 <StevenK> But it's Spring :-P
07:19:16 <jprovazn> hehe
07:19:45 <GheRivero> let's see how you do in 6 months :)
07:19:48 <tchaypo> yeah, and the last month has also seen a wedding, a baby, the ops meetup
07:20:01 <tchaypo> GheRivero: in 6 months I'll be dropping in to visit you
07:20:06 <StevenK> And other stuff that people have been focused on
07:20:12 <GheRivero> tchaypo: cool!
07:21:12 <tchaypo> anyway - there's no point ranting about it here; we don't have enough people present to make a difference.
07:21:27 <tchaypo> it'd be good if you could respond on the mailing list though
07:21:44 <tchaypo> There's also been some more activity on the thread about whether we're even measuring a useful metric
07:22:01 <tchaypo> Shall we move on?
07:22:23 <StevenK> Since :22, let's
07:23:32 <tchaypo> On the off chance that you can see this
07:23:38 <tchaypo> my internet suddenly got sad
07:23:41 <tchaypo> I shall attempt to reconnect
07:23:52 <marios> tchaypo: see it fine
07:25:48 <tchaypo> and I'm back
07:26:00 <tchaypo> #topic Projects needing releases
07:26:09 <tchaypo> Volunteers?
07:26:10 <marios> o/ (unless anyone else would like to, I did it last week too)
07:26:30 <tchaypo> #action marios volunteers to do the releases again
07:26:32 <tchaypo> Thanks
07:26:54 <tchaypo> #topic CD Cloud status
07:27:45 <tchaypo> #note I'm still learning while I poke at HP2. We have an undercloud with 95 machines - if the networking is working as we expect, we should be able to get a ci-overcloud in the next few days
07:28:36 <StevenK> Why does the undercloud have 95 machines?
07:28:50 <StevenK> Shouldn't it have 3, leaving 92 for the overcloud to be built on?
07:29:00 <marios> (maybe he means for deployment)
07:29:08 <marios> (i.e. what you said)
07:29:54 <tchaypo> Actually I think the undercloud I built today is occupying 1 machine; I think it didn't HA properly
07:30:10 <tchaypo> but yes, I expect it should have 92 for the overcloud to be deployed on
07:31:12 <tchaypo> If we have no other updates..
07:31:24 <tchaypo> #topic ci
07:32:06 <tchaypo> I'm not sure if anyone here will have any updates on this either..
07:33:34 <tchaypo> #topic Tuskar
07:34:28 <shadower> not sure any of the tuskar devs are here
07:34:57 <shadower> I know there's been some work recently on integrating with the merge.py-less templates, building the plan and testing it
07:35:01 <shadower> with some success
07:35:07 <shadower> 's all I know though
07:35:23 <tchaypo> Could you note that for the minutes?
07:35:31 <shadower> sure
07:36:16 <shadower> #note tuskar is testing building the plan with the merge.py-less templates and testing the deployment
07:37:00 <tchaypo> #topic  specs
07:37:26 <tchaypo> thanks shadower
07:40:45 <tchaypo> moving along..
07:40:49 <tchaypo> #topic open discussion
07:41:14 <tchaypo> A few weeks ago we talked about maybe moving this meeting a little later to be more convenient for people in Europe.
07:41:17 <greghaynes> tchaypo: sorry for being late, your ML thread got mentioned already?
07:41:47 <tchaypo> greghaynes: yep; but there aren't many people here so we didn't dwell.
07:42:03 <tchaypo> Except for pointing out that it *is* northern summer and that's traditionally slow due to lots of people taking breaks
07:42:09 <greghaynes> yep
07:42:19 <greghaynes> as long as we get the word out :)
07:43:12 <tchaypo> to be honest, I can understand people's 30 day stats falling below benchmark if they take a few weeks off
07:43:13 <tchaypo> but 50% of our cores are below benchmark on the 90 day stats as well
07:43:35 <greghaynes> Yes, I really think there is some culling needed at the low end
07:44:17 <tchaypo> Well.
07:44:37 <tchaypo> I think that would certainly give us a bit more clarity about how many core reviewers we actually have, which tells us if we need to be trying to get more
07:44:46 <tchaypo> I don't think culling will improve velocity though
07:45:20 <tchaypo> does anyone want to volunteer to mail the list again and propose to run the meeting 14 days from now at the new time?
07:45:25 <greghaynes> It won't, I just don't believe there is any way that some of the low end can be up to date with the state of the project
07:45:33 <tchaypo> And do we have anything else to bring up?
07:45:40 <tchaypo> greghaynes: completely agree
07:46:03 <shadower> tchaypo: I'll email the list re new meeting time
07:46:18 <tchaypo> shadower: thanks :)
07:46:33 <tchaypo> I probably won't be able to make that meeting due to being in German class, but I should be able to join the one after that
07:46:41 <shadower> right
07:46:46 <greghaynes> I guess one more topic - I am trying to work out all the kinks with the HA job
07:47:02 <greghaynes> I notice everyone is essentially ignoring it (understandably)
07:47:30 <greghaynes> but it would be awesome if people started to try and ensure it passes even though it is non-voting
07:47:49 <tchaypo> could you note that for the minutes?
07:48:21 <tchaypo> Personally I've been ignoring it until I've got a setup using it and I know how it works
07:48:23 <greghaynes> #note Please try and check the HA CI job on reviews even though it is non-voting
07:48:40 <tchaypo> coincidentally the thing I need to poke at tomorrow morning is making hp2 be HA
07:48:53 <tchaypo> ... I think. Assuming it's in a state where we want HA.
07:49:15 <greghaynes> HA all the things
07:50:25 <tchaypo> If there's nothing else, I'm going to wrap up early
07:50:48 <jprovazn> greghaynes: did you have any luck looking at the CI HA fail when one of the instances ends in error/deleting state?
07:51:24 <greghaynes> jprovazn: I started; that resulted in getting the debug patch merged and a patch to break out app-specific log files
07:51:33 <greghaynes> jprovazn: the plan is to make the HA job always run with debug logs on
07:51:46 <greghaynes> jprovazn: hopefully tomorrow I'll get that done
07:51:54 <jprovazn> greghaynes: ah, nice
07:52:27 <greghaynes> If you want to +A something to help :) https://review.openstack.org/#/c/118464/
07:53:05 <jprovazn> greghaynes: sure, will look
07:57:30 <tchaypo> going, going...
07:58:29 <tchaypo> #endmeeting