07:01:48 #startmeeting tripleo
07:01:50 Meeting started Wed Sep 3 07:01:48 2014 UTC and is due to finish in 60 minutes. The chair is tchaypo. Information about MeetBot at http://wiki.debian.org/MeetBot.
07:01:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
07:01:53 The meeting name has been set to 'tripleo'
07:01:59 \o
07:02:00 o/
07:02:03 o/
07:02:04 Have we the people?
07:02:07 o/
07:02:14 We have the people!
07:02:28 Pushing on
07:02:30 start it and they will come
07:02:31 #topic agenda
07:02:33 * bugs
07:02:35 * reviews
07:02:37 * Projects needing releases
07:02:39 * CD Cloud status
07:02:41 * CI
07:02:43 * Tuskar
07:02:45 * Specs
07:02:47 * open discussion
07:02:49 Remember that anyone can use the link and info commands, not just the moderator - if you have something worth noting in the meeting minutes feel free to tag it
07:02:51 #topic bugs
07:03:16 So I'll start with an update on two bugs I was chasing as an action from last week
07:03:40 Michael Kerrin is on leave for another few days - so no progress on https://bugs.launchpad.net/tripleo/+bug/1263294
07:03:43 Launchpad bug 1263294 in tripleo "ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'" [Critical,In progress]
07:04:06 unless someone wants to grab that..
07:04:45 and https://bugs.launchpad.net/tripleo/+bug/1317056 - there doesn't seem to be anything for us to do, so I've downgraded it from critical
07:04:46 Launchpad bug 1317056 in tripleo "Guest VM FS corruption after compute host reboot" [High,Incomplete]
07:04:58 Unless someone is able to reproduce it, I plan to just close it
07:05:31 urr, I should note some of that
07:06:43 #note As followup from last week's meeting, https://bugs.launchpad.net/tripleo/+bug/1317056 has been downgraded as it doesn't seem to be reproducible, and https://bugs.launchpad.net/tripleo/+bug/1263294 is waiting for mkerrin to come back from leave and update
07:06:44 Launchpad bug 1317056 in tripleo "Guest VM FS corruption after compute host reboot" [High,Incomplete]
07:06:55 #link https://bugs.launchpad.net/tripleo/
07:06:56 #link https://bugs.launchpad.net/diskimage-builder/
07:06:58 #link https://bugs.launchpad.net/os-refresh-config
07:07:00 #link https://bugs.launchpad.net/os-apply-config
07:07:02 #link https://bugs.launchpad.net/os-collect-config
07:07:04 #link https://bugs.launchpad.net/os-cloud-config
07:07:06 #link https://bugs.launchpad.net/tuskar
07:07:08 #link https://bugs.launchpad.net/python-tuskarclient
07:07:34 I've had a quick look at the TripleO bugs
07:07:41 https://bugs.launchpad.net/tripleo/+bug/1364345 needs an assignee
07:07:43 Launchpad bug 1364345 in tripleo "CI tests are pulling in the wrong version of git repo's" [Critical,Triaged]
07:08:23 and aside from the ones I've already mentioned, there's just one other critical that's in progress
07:11:07 Does anyone want to claim 1364345?
07:11:53 Tuskar has one critical - https://bugs.launchpad.net/tuskar/+bug/1357525
07:11:54 Launchpad bug 1357525 in tuskar "tuskar-api fails with "no such option: config_file"" [Critical,Fix committed]
07:12:08 it's assigned to dougala
07:13:39 seems like he has ideas about what is needed there anyway
07:14:01 fyi dougal's on vacation until the end of this week
07:15:16 Any other comments before we move on?
07:15:43 #note https://bugs.launchpad.net/tripleo/+bug/1364345 is looking for an owner
07:15:44 Launchpad bug 1364345 in tripleo "CI tests are pulling in the wrong version of git repo's" [Critical,Triaged]
07:16:01 #topic reviews
07:16:14 #info There's a new dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
07:16:16 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
07:16:18 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
07:16:20 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
07:16:39 #link http://lists.openstack.org/pipermail/openstack-dev/2014-September/044691.html
07:17:12 #info I posted ^^^ to the list today, pointing out that ~50% of our core reviewers are under their 3 reviews/day commitment on both the 30 and 90 day reports
07:17:38 I think this is part of what's hurting our velocity here
07:17:57 Stats since the last revision without -1 or -2:
07:17:59 Average wait time: 12 days, 3 hours, 22 minutes
07:18:01 1st quartile wait time: 4 days, 19 hours, 3 minutes
07:18:03 Median wait time: 7 days, 22 hours, 55 minutes
07:18:05 3rd quartile wait time: 19 days, 15 hours, 36 minutes
07:18:09 ouch
07:18:46 I think this is at least partially caused by vacation time
07:18:48 summer is usually a period where things slow down
07:18:55 yea
07:19:03 you crazy northerners :)
07:19:07 But it's Spring :-P
07:19:16 hehe
07:19:45 let's see how you do in 6 months :)
07:19:48 yeah, and the last month has also seen a wedding, a baby, the ops meetup
07:20:01 GheRivero: in 6 months I'll be dropping in to visit you
07:20:06 And other stuff that people have been focused on
07:20:12 tchaypo: cool!
07:21:12 anyway - there's no point ranting about it here, we don't have enough people here to make a difference.
07:21:27 it'd be good if you could respond on the mailing list though
07:21:44 There's also been some more activity on the thread about whether we're even measuring a useful metric
07:22:01 Shall we move on?
07:22:23 Since :22, let's
07:23:32 On the offchance that you can see this
07:23:38 my internet suddenly got sad
07:23:41 I shall attempt to reconnect
07:23:52 tchaypo: I see it fine
07:25:48 and I'm back
07:26:00 #topic Projects needing releases
07:26:09 Volunteers?
07:26:10 o/ (unless anyone else would like to, I did it last week too)
07:26:30 #action marios volunteers to do the releases again
07:26:32 Thanks
07:26:54 #topic CD Cloud status
07:27:45 #note I'm still learning while I poke at HP2. We have an undercloud with 95 machines - if the networking is working as we expect we should be able to get a ci-overcloud in the next few days
07:28:36 Why does the undercloud have 95 machines?
07:28:50 Shouldn't it have 3, leaving 92 for the overcloud to be built on?
07:29:00 (maybe he means for deployment)
07:29:08 (i.e. what you said)
07:29:54 Actually I think the undercloud I built today is occupying 1 machine; I think it didn't HA properly
07:30:10 but yes, I expect it should have 92 for the overcloud to be deployed on
07:31:12 If we have no other updates..
07:31:24 #topic ci
07:32:06 I'm not sure if anyone here will have any updates on this either..
07:33:34 #topic Tuskar
07:34:28 not sure any of the tuskar devs are here
07:34:57 I know there's been some work recently on integrating with the merge.py-less templates, building the plan and testing it
07:35:01 with some success
07:35:07 's all I know though
07:35:23 Could you note that for the minutes?
07:35:31 sure
07:36:16 #note tuskar is testing building the plan with the merge.py-less templates and testing the deployment
07:37:00 #topic specs
07:37:26 thanks shadower
07:40:45 moving along..
07:40:49 #topic open discussion
07:41:14 A few weeks ago we talked about maybe moving this meeting a little later to be more convenient for people in Europe.
07:41:17 tchaypo: sorry for being late, has your ML thread been mentioned already?
07:41:47 greghaynes: yep; but there aren't many people here so we didn't dwell on it.
07:42:03 Except for pointing out that it *is* northern summer and that's traditionally slow due to lots of people taking breaks
07:42:09 yep
07:42:19 as long as we get the word out :)
07:43:12 to be honest, I can understand people's 30 day stats falling below the benchmark if they take a few weeks off
07:43:13 but 50% of our cores are below the benchmark on the 90 day stats as well
07:43:35 Yes, I really think there is some culling needed at the low end
07:44:17 Well.
07:44:37 I think that would certainly give us a bit more clarity about how many core reviewers we actually have, which tells us if we need to be trying to get more
07:44:46 I don't think culling will improve velocity though
07:45:20 does anyone want to volunteer to mail the list again and propose to run the meeting 14 days from now at the new time?
07:45:25 It won't, I just don't believe there is any way that some of the low end can be up to date with the state of the project
07:45:33 And do we have anything else to bring up?
07:45:40 greghaynes: completely agree
07:46:03 tchaypo: I'll email the list re the new meeting time
07:46:18 shadower: thanks :)
07:46:33 I probably won't be able to make that meeting due to being in German class, but I should be able to join the one after that
07:46:41 right
07:46:46 I guess one more topic - I am trying to work out all the kinks with the HA job
07:47:02 I notice everyone is essentially ignoring it (understandably)
07:47:30 but it would be awesome if people started to try and ensure it passes even though it is non-voting
07:47:49 could you note that for the minutes?
07:48:21 Personally I've been ignoring it until I've got a setup using it and I know how it works
07:48:23 #note Please try and check the HA CI job on reviews even though it is non-voting
07:48:40 coincidentally the thing I need to poke at tomorrow morning is making hp2 be HA
07:48:53 ... I think. Assuming it's in a state where we want HA.
07:49:15 HA all the things
07:50:25 If there's nothing else, I'm going to wrap up early
07:50:48 greghaynes: did you have any luck looking at the CI HA fail when one of the instances ends in error/deleting state?
07:51:24 jprovazn: I started; that resulted in getting the debug patch merged and a patch to break out app-specific log files
07:51:33 jprovazn: the plan is to make the HA job always run with debug logs on
07:51:46 jprovazn: hopefully tomorrow I'll get that done
07:51:54 greghaynes: ah, nice
07:52:27 If you want to +A something to help :) https://review.openstack.org/#/c/118464/
07:53:05 greghaynes: sure, will look
07:57:30 going, going...
07:58:29 #endmeeting