19:05:28 #startmeeting tripleo
19:05:28 Meeting started Tue Feb 4 19:05:28 2014 UTC and is due to finish in 60 minutes. The chair is SpamapS. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:05:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:05:32 The meeting name has been set to 'tripleo'
19:05:34 Take 2.. action
19:05:36 #link https://wiki.openstack.org/wiki/Meetings/TripleO
19:05:49 hello again
19:05:54 perfectionist... :)
19:05:58 #topic bugs
19:06:15 #link https://bugs.launchpad.net/tripleo/
19:06:15 #link https://bugs.launchpad.net/diskimage-builder/
19:06:15 #link https://bugs.launchpad.net/os-refresh-config
19:06:15 #link https://bugs.launchpad.net/os-apply-config
19:06:15 #link https://bugs.launchpad.net/os-collect-config
19:06:17 #link https://bugs.launchpad.net/tuskar
19:06:20 #link https://bugs.launchpad.net/tuskar-ui
19:06:22 #link https://bugs.launchpad.net/python-tuskarclient
19:06:45 hi
19:06:52 hello
19:06:53 _5_ criticals.. ugh
19:07:08 o/
19:07:34 well at least 1 is fix committed
19:08:18 We may have somehow broken our integration with the gerrit bot on Launchpad
19:08:34 because bug 1274846 is merged already
19:09:29 #link https://bugs.launchpad.net/tripleo/+bugs?search=Search&field.status=New
19:09:36 two untriaged bugs in tripleo
19:10:27 greghaynes: for https://bugs.launchpad.net/tripleo/+bug/1273882 you can self-triage bugs if you can at least assign an importance to them. FYI. :)
19:10:44 aha
19:11:01 overall though we seem to be staying ahead of triage.
19:11:05 Anybody have any other bugs they want to bring up?
19:11:57 Onward
19:12:01 #topic reviews
19:12:16 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:12:39 we're just barely keeping wait times at 1 day
19:13:21 http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt shows that people are definitely keeping up with "a review a day"
19:13:35 well, many people are :)
19:13:52 Does anybody have anything else to bring up regarding reviews?
19:14:39 i would like more folks to look at the devtest doc changes in review, but other than that, no.
19:14:51 we need more eyes/opinions
19:15:00 slagle: Do you have a link handy?
19:15:52 matty_dubs: go to https://review.openstack.org/#/q/status:open+project:openstack/tripleo-incubator,n,z
19:15:57 look at the open reviews :)
19:16:04 that are related to devtest
19:16:44 alright, good stuff
19:16:59 #topic Projects needing releases
19:17:41 I think we have unreleased commits in most of the projects.. so we should probably make a pass to pick them up.
19:17:50 anybody want to volunteer
19:17:54 ?
19:18:10 actually.. I'll do it
19:18:21 #topic CD Cloud status
19:18:25 how long does it usually take
19:18:26 ?
19:18:45 shadower_: 10 min or so. You just have to make a signed tag for each repo.
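[Editor's note: the release pass described here (a signed tag per repo, about 10 minutes) might look roughly like the sketch below. The repo name and version are placeholders, and an annotated tag (`-a`) stands in for the signed tag (`-s`) only so the sketch runs without a GPG key configured.]

```shell
# Rough sketch of tagging one project for release, per the discussion above.
# "myproject" and "0.1.1" are illustrative placeholders, not real values.
set -e
repo=myproject
version=0.1.1

# Stand-in repo so the sketch is self-contained; in practice you would
# work in a clone of the real project.
git init -q "$repo"
cd "$repo"
git config user.email "release@example.com"   # placeholder identity
git config user.name  "Release Sketch"
git commit -q --allow-empty -m "placeholder commit"

# Before picking a version, review what landed since the last tag and
# choose a semver-compatible bump, e.g.:
#   git log "$(git describe --tags --abbrev=0)"..HEAD --oneline
# For a real release use `git tag -s` (GPG-signed); -a is used here only
# so the sketch runs without a signing key.
git tag -a "$version" -m "Release $version"
git tag -l
```

The real workflow would finish with pushing the tag to Gerrit/the canonical remote; that step is omitted since this sketch has no remote.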
19:19:06 shadower: good to evaluate the commits and make sure you're being "semver compatible" :)
19:19:15 SpamapS: cool, I'll volunteer next week, then
19:19:22 shadower: sweet
19:19:45 AFAIK with the CD cloud we're getting some timeouts and lifeless was investigating yesterday
19:19:54 SpamapS: hi yes
19:20:06 heat isn't spawning instances w/nova
19:20:09 its a bit of an issue
19:20:19 also I've been rebuilding the ci-overcloud (with automation) so that consumed me
19:20:25 that and discussing docs :P
19:22:02 I have a hell day of meetings today - need another tripleo-cd-admin to log in and poke
19:22:10 I'll get the ci-overcloud back up after my meetings
19:22:16 so just look to cd-overcloud
19:22:31 Ok I'll poke at tripleo-cd
19:22:37 lifeless: does it by any chance look anything like this http://paste.openstack.org/show/62469/ (the timeouts from heat re nova)... wondering if its same issue
19:23:39 marios: broadly yes
19:24:15 lifeless: ack, thanks. i put the scaling param into my tuskar template generation today and was trying to deploy with your new scale stuff
19:24:52 At this point we just need to dig into what took longer or didn't happen. :)
19:25:12 #topic CI virtualized testing progress
19:25:31 pleia2: dprince lifeless anything?
19:25:38 in the last week
19:25:44 we've scaled out the te farm to 10 servers
19:25:58 the redeployed ci-overcloud will also have 10 hypervisors
19:26:23 this should let us keep up w/out queueing for current job definitions
19:26:28 and probably add an overcloud job
19:26:32 woot
19:26:35 I'm making progress on getting fedora into nodepool (something rh has wanted when they bring up their cloud), being documented here: https://etherpad.openstack.org/p/fedora-on-gate
19:26:45 and maybe even a stack-update job
19:26:57 pleia2: its independent of the RH cloud
19:27:05 pleia2: we'll run ubuntu and fedora jobs in both clouds to get redundancy
19:27:06 lifeless: right, we'll run it on both
19:27:14 ack ack, tell you stuff you know :)
19:27:18 just mentioning rationale :)
19:27:38 there's not been discussion yet about how we manage jobs
19:27:42 e.g. do we partition across OS's
19:27:46 or duplicate across OS's
19:28:04 seems like duplicate would find OS-related problems
19:28:06 until we have a good feel on resource usage i'm inclined to suggest partitioning
19:28:25 but trying to get coverage of all the OS bits across the partitions
19:28:43 if we partition then won't we have a chance of commit 1 breaking commit 2 because 1 passes on Fedora but not Ubuntu?
19:28:44 e.g. seed on Ubuntu, undercloud on fedora, we'll run most tools on both. Or something like that.
19:29:00 SpamapS: no, because what runs on what would be constant
19:29:18 SpamapS: we would have the chance for (using the example above) seed on Fedora to break or undercloud on Ubuntu to break
19:29:35 lifeless: right thats the worry.. but I agree, not that big.
19:29:43 SpamapS: remembering that seed and undercloud in-instance are identical, so it would be client-side only for those scenarios
19:29:57 overcloud is more tricky, but if we have two configs we can get broad coverage similarly
19:30:06 Alright, shall we move on?
19:30:17 yeah
19:31:03 #topic open discussion
19:31:15 We have a meetup!
19:31:24 It is planned. You should be RSVP'ing now. :)
19:31:27 yay!
19:31:28 Woo
19:31:29 * matty_dubs RSVPed; is eager to meet many of you folks f2f!
19:31:34 \O/
19:31:40 (And to enjoy the not-New-England weather)
19:31:43 In luxurious Sunnyvale, CA
19:31:50 y
19:31:56 cody-somerville is organising bulk rates
19:32:01 I assume lifeless has me down as an implicit RSVP
19:32:08 cody-somerville: you'll let us know when we can book, right?
19:32:11 http://www.weather.com/weather/tenday/Sunnyvale+CA+USCA1116:1:US
19:32:14 jamezpolley_: yes, you and our other new starter
19:32:23 cody-somerville: / and how to :P
19:32:28 lifeless: Aye. Will try to get details on that ASAP.
19:32:38 RH and mirantis folk may be able to use the same rate
19:32:49 if you want to hold off a couple days
19:32:56 ouch.. compared to Boston..
19:32:57 http://www.weather.com/weather/tenday/Boston+MA+USMA0046:1:US
19:33:41 I'm chatting with harlowja about using a Yahoo! training rooms - he kindly offered and they are perhaps better equipped for 'crowd of people connecting to internet'
19:34:01 this is also in sunnyvale :)
19:34:01 lifeless correct, we have free-internet :-P
19:34:10 lifeless: oh, yes, we do always have to deal with HP Corporate Network fun.. ;)
19:34:15 harlowja: near VTA?
19:34:18 ya
19:34:22 cool
19:34:22 bonus
19:34:31 harlowja: I hope you've got broadband :)
19:34:36 lol
19:34:37 701 1st Ave, Sunnyvale, California 94089
19:34:40 Training rooms are often setup in a way that isn't really meant for collaboration.
19:34:42 lifeless: I bet they have at least 20Mbit ;)
19:34:45 Can the room be reconfigured?
19:34:49 I am care-free, so transiting it down from SF each day (goodie!)
19:34:54 s/care/CAR
19:34:57 care too
19:34:58 SpamapS: I have 70Mbps here
19:35:07 bragger. ;)
19:35:12 I've never been inside the yahoo offices, but I've seen the outside. I imagine they have fun views over Lockheed Martin and the nasa base
19:35:16 i hope yahoo has bigger pipes than that, lol
19:35:17 SpamapS: Hey, move to the first world already!
19:35:31 We have better buffalo wings here.
19:35:40 tchaypo: HP is next door to lockheed :)
19:35:51 HP cloud that is
19:35:52 anyhow
19:36:01 Hah. I've probably ridden past HP then and not noticed
19:36:03 harlowja: I was mainly referring to policy limits, not physical glue.
19:36:13 tchaypo: its up a dead end street
19:36:17 tchaypo: right next to Amazon, ironically. ;)
19:36:24 http://media.glassdoor.com/m/2c/78/7f/36/sunnyvale-campus.jpg
19:36:30 *better pictures online somewhere, lol
19:36:35 Heh, I Google Mapsed the HP office and was kind of confused to see a bunch of military aircraft.
19:36:39 cloudy... so its legit
19:37:10 Ok seems like that is enough for the meeting.
19:37:14 Anything else?
19:37:32 Alright, hasta la vista baby.
19:37:34 #endmeeting