19:03:18 #startmeeting tripleo
19:03:18 \o
19:03:19 Meeting started Tue Feb 18 19:03:18 2014 UTC and is due to finish in 60 minutes. The chair is lifeless. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:22 The meeting name has been set to 'tripleo'
19:03:55 bugs
19:03:56 reviews
19:03:56 Projects needing releases
19:03:56 CD Cloud status
19:03:56 CI virtualized testing progress
19:03:58 Insert one-off agenda items here
19:04:27 ahoy
19:04:42 #topic bugs
19:04:48 #link https://bugs.launchpad.net/tripleo/
19:04:49 #link https://bugs.launchpad.net/diskimage-builder/
19:04:49 #link https://bugs.launchpad.net/os-refresh-config
19:04:49 #link https://bugs.launchpad.net/os-apply-config
19:04:49 #link https://bugs.launchpad.net/os-collect-config
19:04:51 #link https://bugs.launchpad.net/tuskar
19:04:53 #link https://bugs.launchpad.net/tuskar-ui
19:04:56 #link https://bugs.launchpad.net/python-tuskarclient
19:05:07 now, I'm being pulled to #openstack-meeting to talk about the ci-overcloud situation there
19:05:13 so - can someone else proceed for a bit?
19:06:04 i can take a shot at it :)
19:06:33 there are untriaged bugs in tripleo...
19:06:44 4 of them
19:07:19 3 came in in the last hour, so maybe someone is trying to make us look bad :)
19:07:44 if someone wants to triage those...
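The triage round-up above amounts to filtering each project's bug list for anything still untriaged. A minimal sketch of that check, using plain dicts rather than the launchpadlib API (field names here are illustrative, not Launchpad's; a real run would pull the records from the trackers linked above):

```python
def untriaged(bugs):
    """Return bugs that still need triage, oldest first.

    In Launchpad terms a bug is untriaged while its status is "New"
    or its importance is still "Undecided".
    """
    needs_triage = [
        b for b in bugs
        if b["status"] == "New" or b.get("importance") in (None, "Undecided")
    ]
    return sorted(needs_triage, key=lambda b: b["date_reported"])


bugs = [
    {"id": 1281051, "status": "New", "importance": "Undecided",
     "date_reported": "2014-02-17"},
    {"id": 1275432, "status": "Triaged", "importance": "High",
     "date_reported": "2014-02-01"},
]
print([b["id"] for b in untriaged(bugs)])  # only the "New" bug remains
```
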
19:08:27 tuskar has a bug as well
19:08:45 https://bugs.launchpad.net/tuskar/+bug/1281051
19:08:50 someone triage plz
19:09:01 slagle: i'll do the tuskar one
19:09:09 thx
19:09:30 i do believe the thing to say now is: everyone, try harder at triage next week
19:09:31 jdob: oops, already done that, didn't look at irc for a minute
19:09:35 :)
19:09:47 i know i forgot this week
19:09:56 There was one I was going to self-triage, but it's pending joining the tripleo group on lp
19:09:57 oh man, even better, I volunteered for work and didn't have to do it
19:10:08 greghaynes: ok, plz ping lifeless about that
19:10:17 ok
19:10:20 i think he is the one who can do it
19:10:46 any other discussion around bugs?
19:11:15 ok, reviews then
19:11:18 #topic reviews
19:11:35 i suspect i don't have permission to do that, so oh well
19:12:00 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:12:07 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:12:15 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:12:51 we appear to have several reviews that are quite old
19:13:14 #topic reviews
19:13:21 (not back yet)
19:14:28 i actually think if folks could look at http://russellbryant.net/openstack-stats/tripleo-openreviews.html and try to hit the ones listed there that have the longest wait times, that would be a good plan of attack
19:15:11 i know that lifeless -2'd some of the package related reviews pending discussion on the ML
19:15:35 if we can get an update on those, that would be good
19:15:55 whether we want to keep pushing forward with the approach in the reviews, or use environment variables for user names, etc.
19:16:05 ok so that's the mailing list thread
19:16:12 right
19:16:38 I think I'll probably lift the -2 on that basis, though I saw *another* packaging-provoked failure come through w.r.t.
paste-api.ini or whatever files
19:16:48 which are unhelpfully in /usr
19:17:09 yes, there are some like that
19:17:13 that's a bug in the packaging
19:17:21 they are configuration files
19:17:33 lifeless: no it's not, paste should *not* be used as a config
19:17:34 we need to file those in the distro and fix
19:17:53 this was the glance rdo package
19:18:03 it relies on configs under /usr/share/glance and /etc/glance
19:19:24 anyway, it's noted we need to revisit that and determine if a bug should be filed or not
19:19:48 #action slagle to look at glance configs under /usr/share
19:19:56 (let's hope that worked)
19:20:00 any other review business?
19:20:25 #topic releases
19:20:43 do we have a volunteer to do releases?
19:20:52 I can take it, if we don't have volunteers
19:20:53 i think t-i-e got missed in the last release round
19:21:03 the latest released tarball was from end of January
19:21:05 #topic releases
19:21:17 slagle: I'm pretty sure I tagged t-i-.
19:21:20 tie
19:21:22 there's 1 new commit to os-refresh-config as well
19:21:33 maybe an error in the release pipeline?
19:21:42 shadower: possibly
19:21:50 i didn't see anything new end up at http://tarballs.openstack.org/tripleo-image-elements/
19:22:27 rpodolyaka: if you're willing to do it, that would be awesome
19:22:46 slagle: sure, I'll take it
19:22:51 thanks!
19:22:55 np
19:23:05 #action rpodolyaka to release all the things (with new commits)
19:23:24 #topic CD Cloud Status
19:24:14 not sure i have much detail here, barring any input from lifeless
19:24:18 slagle: I must not have pushed the tag by mistake. Should be out now
19:24:24 it was working after the wknd, but then went down
19:24:25 sorry
19:24:25 hi
19:24:42 it's up but not in nodepool atm - that's what the infra meeting discussion is on
19:24:55 shall we hold for that?
19:24:59 or keep churning through?
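The missed tripleo-image-elements release above came down to a tag that never got pushed, so no tarball appeared on tarballs.openstack.org. A quick cross-check of local tags against published tarball names would catch that; the lists here are hard-coded stand-ins (in practice the tarball names would be scraped from http://tarballs.openstack.org/tripleo-image-elements/):

```python
def missing_releases(tags, tarball_names):
    """Return tags that have no matching published tarball.

    Assumes tarballs are named <project>-<version>.tar.gz, so the
    version is the piece after the last hyphen.
    """
    published = {n.rsplit("-", 1)[-1].replace(".tar.gz", "")
                 for n in tarball_names}
    return [t for t in tags if t not in published]


tags = ["0.0.1", "0.0.2", "0.0.3"]
tarballs = ["tripleo-image-elements-0.0.1.tar.gz",
            "tripleo-image-elements-0.0.2.tar.gz"]
print(missing_releases(tags, tarballs))  # 0.0.3 was tagged but never published
```
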
19:25:09 well one thing worth touching on here
19:25:25 is that this list - http://git.openstack.org/cgit/openstack/tripleo-incubator/tree/tripleo-cloud/tripleo-cd-admins - really is an on-call list
19:25:32 (you get admin access, and admin responsibility)
19:25:40 infra has 2 weeks of zomg coming up
19:26:05 so if you're in the admins list but not willing to respond to ci-overcloud outages as 'omg I have to fix' - please let me know
19:26:21 Does that mean we're going to be setting up Pagerduty or something similar, or just relying on people to be active in irc?
19:26:50 that will depend on the discussion in infra
19:27:09 I suspect it might help if we all volunteered our cell phone numbers (to be held privately) as a just-in-case
19:27:28 but, being an admin is voluntary, so I can't (and won't) try to compel that
19:28:25 ok, good stuff
19:28:32 If we're doing it properly I think we should at least share schedules of flights to/from Sunnyvale so that we know in advance if there are going to be times of patchy coverage
19:28:51 so if you're an admin (myself included), try to be proactive in responding to issues
19:29:18 tchaypo: good point
19:29:36 i'm not sure there's an etherpad for sunnyvale yet
19:30:03 we should make one
19:30:06 etherpad meaning agenda and hotel details?
19:30:07 dprince: i think this is where we usually get an update on the RH cloud, if you're available to give one
19:30:14 jdob: yes
19:30:25 kk, I was gonna ask about that
19:30:33 #action etherpad for sunnyvale travel
19:30:36 okay. Here is where it stands today.
19:30:37 crap
19:30:43 #action slagle etherpad for sunnyvale travel
19:30:45 Physical cabling has been done to the rack.
19:31:12 #action etherpad for sunnyvale travel
19:31:16 #action slagle etherpad for sunnyvale travel
19:31:23 They are configuring some firewall stuff, and then Kambiz needs to make some changes on the bastion too.
19:31:28 lifeless: you forgot "crap"
19:31:35 wasn't an action :P
19:31:45 i blame slagle for dropping the ball there
19:31:53 oh the puns
19:31:57 dprince: any eta?
19:32:39 Then there is a discussion that needs to happen about mapping the /24 into the private network addresses.
19:33:53 ok, so still a WIP it sounds like, and we're a little ways off
19:34:07 slagle: I hope this week. I'm not directly involved... all I can do is put in requests to escalate this, so please don't become a hater if it doesn't happen
19:34:53 i don't hate
19:35:13 thx for the update :)
19:35:19 any other news here before moving on?
19:35:26 dprince: what's the mapping discussion?
19:35:28 slagle: that wasn't for you directly. Just saying... it has been awhile.
19:35:44 dprince: won't the /24 be 95% floating ips - and thus not mapped?
19:36:06 lifeless: I don't know exactly what they want to know. Probably just something about how they get routed into the network.
19:36:48 dprince: ok; let's not disrupt the meeting, but yeah, they can't arp-limit it or anything
19:36:52 or port limit
19:37:39 ok, moving on
19:37:40 dprince: do you know the /24 range that we're getting?
19:37:48 lifeless: not yet
19:37:54 dprince: we can add it to the admin spreadsheet and block out some ranges once we do
19:38:31 #topic CI virtualized testing progress
19:39:05 anyone around to give an update on this? pleia2 perhaps?
19:39:06 we should probably drop this topic or make it more generally CI; since it's done :)
19:39:27 #topic CI virtualized testing progress
19:39:37 ok, that's a good update :)
19:40:14 I do have status from the infra meeting
19:40:18 which is that subject to:
19:40:27 - moving our check jobs to a check-tripleo queue
19:40:39 - fixing the nodepool 'won't start with a provider offline' bug
19:40:51 we can be in the system during the FF weeks of panic
19:41:23 I'm going to submit a check-tripleo pipeline patch today and derekh was working on the nodepool bug
19:41:29 does anyone know where he got to?
19:41:55 i do not; he mentioned he was working on it, but i'm not sure he got a resolution
19:42:40 BTW just checking: CI won't be *done* until every project is gated on a tempest run on top of an overcloud, right?
19:42:54 CI won't ever be *done*
19:43:01 but that's certainly a key goal :)
19:43:24 after that it's just a matter of fixing issues discovered by CI
19:43:30 ok cool
19:43:37 well
19:43:42 increasing performance
19:43:50 adding more styles of test
19:43:53 testing upgrades
19:44:00 the more we test the more we'll find we can test
19:44:03 upgrades, that's a good one
19:44:08 yep
19:44:52 #topic open discussion
19:45:00 #topic open discussion
19:45:02 feel free to keep discussing CI if you wish :)
19:45:25 anyone have anything to bring up?
19:45:29 Any update on hotel arrangements?
19:45:45 cody-somerville: any update on hotels for sunnyvale?
19:46:03 (sorry to keep asking...)
19:46:42 I'll be starting at HP Monday, so expect to start seeing newbie questions from me next week
19:47:01 i had a question around the impact of the upcoming Icehouse feature freeze on tripleo related projects
19:47:01 i.e. does it have any direct impact, e.g. are we going to be unable to land new tripleo-image-elements after M3 or anything like that?
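The nodepool bug blocking the CI plan above ("won't start with a provider offline") amounts to a startup loop that aborts on the first unreachable provider. A tolerant variant notes the offline provider and keeps going; this is a stand-alone sketch of that idea, not nodepool's actual code (all names here are invented for illustration):

```python
def load_providers(configs, connect):
    """Build clients for each provider config.

    connect(cfg) returns a client or raises ConnectionError; an offline
    provider is recorded and skipped instead of aborting startup.
    """
    clients, offline = {}, []
    for cfg in configs:
        try:
            clients[cfg["name"]] = connect(cfg)
        except ConnectionError:
            offline.append(cfg["name"])  # remember it, carry on starting up
    return clients, offline


def fake_connect(cfg):
    """Stand-in for a real cloud connection, driven by test data."""
    if cfg.get("reachable", True):
        return object()
    raise ConnectionError(cfg["name"])


clients, offline = load_providers(
    [{"name": "tripleo-ci", "reachable": False}, {"name": "hpcloud"}],
    fake_connect)
print(sorted(clients), offline)  # the reachable provider loads, the other is flagged
```
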
19:47:01 slagle: I believe cody-somerville has quotes and is waiting for the contract to be signed
19:47:12 (Although at this point I expect a chunk of time to be taken up chasing travel etc arrangements)
19:47:31 ccrouch: so I think the answer for that should be tied to when slagle takes his point-in-time 'stable' branch
19:47:43 ccrouch: which will likely happen with everything else
19:47:52 tchaypo: I've been working on TripleO for a while now; I continue to ask newbie questions. ;)
19:48:01 ccrouch: generally though we aim for continually releaseable, so no - I don't plan on enforcing a freeze
19:48:03 ccrouch: *but*
19:48:12 FF is March 6th btw
19:48:15 ccrouch: we should be ultra careful about backwards compat during that time
19:48:40 which, many of us will be in Sunnyvale for
19:48:47 lifeless: agreed, but no moratorium on new things?
19:49:09 ccrouch: not IMO
19:49:16 tuskar-api should
19:49:28 as it's an API server and aiming to align with the server release protocols
19:49:46 we'll probably end up running forks of heat etc for a few weeks :(
19:49:59 since slagle et al will be managing the stable branches, i guess it's up to them what will get merged down from the master branch :-)
19:50:45 jdob: ^ re: tuskar-api stability
19:50:57 lifeless: will we have backwards compat?
19:51:39 ccrouch: they shouldn't be moving too much, if at all, between now and icehouse
19:51:53 the last bits for end to end that I'm doing now are all behind the scenes
19:51:58 lsmola2: depending on what you mean, yes :)
19:52:23 lsmola2: if you mean 'will we require that all changes to trunk work with a config/rc etc that works on the stable branch' - /no/
19:52:49 lsmola2: we're simply not ready for that, and that was one of the conditions that emerged in the HK discussion about stable branches
19:53:26 right, just keep in mind if you make a change that is not backwards compat, we won't be able to pull it down to the stable branch
19:53:27 lsmola2: however, we should guarantee that at any point in time all the trunk commits work together, which implies some backwards compat code during transitions
19:53:48 lifeless, so we are starting backwards compat when we are stable, right?
19:53:48 slagle: I wouldn't expect you to pull anything down to stable
19:54:03 well, bugfixes
19:54:24 lsmola2: no, I think we start backwards compat when we cut a 1.0 of tripleo, whatever that means
19:54:28 i'm talking the i-3 to rc timeframe
19:54:39 lifeless, ok :-)
19:56:08 slagle: ah, ll
19:56:10 kk
19:56:18 5 mins left if there are any other topics
19:56:39 does anyone need help on anything?
19:56:42 http://lists.openstack.org/pipermail/openstack-dev/2014-February/027444.html
19:57:21 oh right
19:57:39 so, hardware prep should be in the deploy ramdisk (e.g. raid layout application)
19:57:47 nova-bm will deploy to sda
19:58:17 for swift, I'd say - if there are multiple disks, the flavor should specify a raid-1 root and jbod for the rest
19:58:28 and we should put the swift store on the jbod
19:58:49 for cinder, whatever best practice is for glusterfs/ceph
19:59:07 a small patch may be needed to ensure we pass the full flavor metadata to the ramdisk
19:59:24 in-instance scripts can do last-stage tuning
19:59:50 but the top level q - how much should tripleo do?
admins shouldn't do anything by hand.
20:00:05 lifeless, let's continue in #tripleo
20:00:12 since we timed out
20:00:13 slagle: thank you!
20:00:19 #endmeeting
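The disk-layout proposal at the end of the meeting - raid-1 the first disks for the root filesystem if the flavor asks for it, and leave the rest as JBOD for the swift object store - can be sketched as a small planning function. The flavor metadata key ("raid") and the dict shape are hypothetical; the real mechanism would be whatever flavor metadata the deploy ramdisk ends up receiving:

```python
def plan_disks(disks, flavor_meta):
    """Map physical disks to a root device plan plus JBOD members.

    With a 'raid1' flavor hint and at least two disks, mirror the first
    two for root; everything left over is JBOD (e.g. for the swift store).
    """
    if flavor_meta.get("raid") == "raid1" and len(disks) >= 2:
        return {"root": ("raid1", disks[:2]), "jbod": disks[2:]}
    # single disk or no raid hint: plain root on the first disk
    return {"root": ("plain", disks[:1]), "jbod": disks[1:]}


layout = plan_disks(["sda", "sdb", "sdc", "sdd"], {"raid": "raid1"})
print(layout)  # root mirrored on sda+sdb, swift gets sdc and sdd
```

In-instance scripts would then do last-stage tuning from the same plan, keeping admins out of the by-hand loop as the meeting concluded.
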