19:03:18 <lifeless> #startmeeting tripleo
19:03:18 <bnemec> \o
19:03:19 <openstack> Meeting started Tue Feb 18 19:03:18 2014 UTC and is due to finish in 60 minutes.  The chair is lifeless. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:22 <openstack> The meeting name has been set to 'tripleo'
19:03:55 <lifeless> bugs
19:03:56 <lifeless> reviews
19:03:56 <lifeless> Projects needing releases
19:03:56 <lifeless> CD Cloud status
19:03:56 <lifeless> CI virtualized testing progress
19:03:58 <lifeless> Insert one-off agenda items here
19:04:27 <jomara> ahoy
19:04:42 <lifeless> #topic bugs
19:04:48 <lifeless> #link https://bugs.launchpad.net/tripleo/
19:04:49 <lifeless> #link https://bugs.launchpad.net/diskimage-builder/
19:04:49 <lifeless> #link https://bugs.launchpad.net/os-refresh-config
19:04:49 <lifeless> #link https://bugs.launchpad.net/os-apply-config
19:04:49 <lifeless> #link https://bugs.launchpad.net/os-collect-config
19:04:51 <lifeless> #link https://bugs.launchpad.net/tuskar
19:04:53 <lifeless> #link https://bugs.launchpad.net/tuskar-ui
19:04:56 <lifeless> #link https://bugs.launchpad.net/python-tuskarclient
19:05:07 <lifeless> now, I'm being pulled to #openstack-meeting to talk about the ci-overcloud situation there
19:05:13 <lifeless> so - can someone else proceed for a bit ?
19:06:04 <slagle> i can take a shot at it :)
19:06:33 <slagle> there are untriaged bugs in tripleo...
19:06:44 <slagle> 4 of them
19:07:19 <slagle> 3 came in in the last hour, so maybe someone is trying to make us look bad :)
19:07:44 <slagle> if someone wants to triage those...
19:08:27 <slagle> tuskar has a bug as well
19:08:45 <slagle> https://bugs.launchpad.net/tuskar/+bug/1281051
19:08:50 <slagle> someone triage plz
19:09:01 <jdob> slagle: i'll do the tuskar one
19:09:09 <slagle> thx
19:09:30 <slagle> i do believe the thing to say now is: everyone, try harder at triage next week
19:09:31 <jistr> jdob: oops already done that, didn't look at irc for a minute
19:09:35 <slagle> :)
19:09:47 <slagle> i know i forgot this week
19:09:56 <greghaynes> There was one I was going to self-triage, but that's pending me joining the tripleo group on lp
19:09:57 <jdob> oh man, even better, I volunteered for work and didn't have to do it
19:10:08 <slagle> greghaynes: ok, plz ping lifeless about that
19:10:17 <greghaynes> ok
19:10:20 <slagle> i think he is the one who can do it
19:10:46 <slagle> any other discussion around bugs?
19:11:15 <slagle> ok, reviews then
19:11:18 <slagle> #topic reviews
19:11:35 <slagle> i suspect i don't have permission to do that, so oh well
19:12:00 <slagle> #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:12:07 <slagle> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:12:15 <slagle> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:12:51 <slagle> we appear to have several reviews that are quite old
19:13:14 <lifeless> #topic reviews
19:13:21 <lifeless> (not back yet)
19:14:28 <slagle> i actually think if folks could look at http://russellbryant.net/openstack-stats/tripleo-openreviews.html and try to hit the ones listed there that have the longest wait times, that would be a good plan of attack
19:15:11 <slagle> i know that lifeless -2'd some of the package-related reviews pending discussion on the ML
19:15:35 <slagle> if we can get an update on those, that would be good
19:15:55 <slagle> whether we want to keep pushing forward with the approach in the reviews, or use environment variables for user names, etc.
19:16:05 <lifeless> ok so thats the mailing list thread
19:16:12 <slagle> right
19:16:38 <lifeless> I think I'll probably lift the -2 on that basis, though I saw *another* packaging-provoked failure come through w.r.t. paste-api.ini or whatever files
19:16:48 <lifeless> which are unhelpfully in /usr
19:17:09 <slagle> yes, there are some like that
19:17:13 <lifeless> that's a bug in the packaging
19:17:21 <lifeless> they are configuration files
19:17:33 <dprince> lifeless: no it's not, paste should *not* be used as a config
19:17:34 <lifeless> we need to file those in the distro and fix
19:17:53 <slagle> this was the glance rdo package
19:18:03 <slagle> it relies on configs under /usr/share/glance and /etc/glance
19:19:24 <slagle> anyway, it's noted we need to revisit that and determine if a bug should be filed or not
19:19:48 <slagle> #action slagle to look at glance configs under /usr/share
19:19:56 <slagle> (let's hope that worked)
19:20:00 <slagle> any other review business?
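As an aside on the review-latency point raised at 19:14: a minimal sketch, assuming the Gerrit REST API on review.openstack.org and the python requests library, of one way to list the longest-untouched open reviews for a project; the project name is only an example.

    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def oldest_open_reviews(project, limit=10):
        # Ask Gerrit for open changes in the project; Gerrit prefixes its
        # JSON responses with a ")]}'" XSSI guard line that must be stripped.
        resp = requests.get(GERRIT + "/changes/",
                            params={"q": "project:%s status:open" % project,
                                    "n": "100"})
        resp.raise_for_status()
        changes = json.loads(resp.text.split("\n", 1)[1])
        # Sort by last-update time so the most neglected reviews come first.
        changes.sort(key=lambda c: c["updated"])
        return changes[:limit]

    for change in oldest_open_reviews("openstack/tripleo-incubator"):
        print(change["updated"], change["_number"], change["subject"])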
19:20:25 <slagle> #topic releases
19:20:43 <slagle> do we have a volunteer to do releases?
19:20:52 <rpodolyaka> I can take it, if we don't have other volunteers
19:20:53 <slagle> i think t-i-e got missed in the last release round
19:21:03 <slagle> the latest released tarball was from end of January
19:21:05 <lifeless> #topic releases
19:21:17 <shadower> slagle: I'm pretty sure I tagged t-i-e
19:21:22 <slagle> there's 1 new commit to os-refresh-config as well
19:21:33 <shadower> maybe an error in the release pipeline?
19:21:42 <slagle> shadower: possibly
19:21:50 <slagle> i didn't see anything new end up at http://tarballs.openstack.org/tripleo-image-elements/
19:22:27 <slagle> rpodolyaka: if you're willing to do it, that would be awesome
19:22:46 <rpodolyaka> slagle: sure, I'll take it
19:22:51 <slagle> thanks!
19:22:55 <rpodolyaka> np
19:23:05 <slagle> #action rpodolyaka to release all the things (with new commits)
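On the missing tripleo-image-elements tarball mentioned above: a minimal sketch, assuming the python requests library and the usual <project>-<version>.tar.gz naming on tarballs.openstack.org, of how one might check whether a tagged release actually got published; the version number is just a placeholder.

    import requests

    def tarball_published(project, version):
        # The post-release jobs publish tarballs to tarballs.openstack.org;
        # this just checks the expected file name appears in the index page.
        index_url = "http://tarballs.openstack.org/%s/" % project
        resp = requests.get(index_url)
        resp.raise_for_status()
        return ("%s-%s.tar.gz" % (project, version)) in resp.text

    # Example (placeholder version): did the latest t-i-e tag get published?
    print(tarball_published("tripleo-image-elements", "0.5.0"))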
19:23:24 <slagle> #topic CD Cloud Status
19:24:14 <slagle> not sure i have much detail here, barring any input from lifeless
19:24:18 <shadower> slagle: I must have forgotten to push the tag. It should be out now
19:24:24 <slagle> it was working after the weekend, but then went down
19:24:25 <shadower> sorry
19:24:25 <lifeless> hi
19:24:42 <lifeless> it's up but not in nodepool atm - that's what the infra meeting discussion is about
19:24:55 <slagle> shall we hold for that?
19:24:59 <slagle> or keep churning through?
19:25:09 <lifeless> well, one thing worth touching on here
19:25:25 <lifeless> is that this list - http://git.openstack.org/cgit/openstack/tripleo-incubator/tree/tripleo-cloud/tripleo-cd-admins - really is an on-call list
19:25:32 <lifeless> (you get admin access, and admin responsibility)
19:25:40 <lifeless> infra has 2 weeks of zomg coming up
19:26:05 <lifeless> so if you're in the admins list but not willing to respond to ci-overcloud outages as 'omg I have to fix' - please let me know
19:26:21 <tchaypo> Does that mean we're going to be setting up Pagerduty or something similar, or just relying on people to be active in irc?
19:26:50 <lifeless> that will depend on the discussion in infra
19:27:09 <lifeless> I suspect it might help if we all volunteered our cell phone numbers (to be held privately) as a just-in-case
19:27:28 <lifeless> but, being an admin is voluntary, so I can't (and won't) try to compel that
19:28:25 <slagle> ok, good stuff
19:28:32 <tchaypo> If we're doing it properly I think we should at least share schedules of flights to/from Sunnyvale so that we know in advance if there are going to be times of patchy coverage
19:28:51 <slagle> so if you're an admin (myself included), try to be proactive in responding to issues
19:29:18 <slagle> tchaypo: good point
19:29:36 <slagle> i'm not sure there's an etherpad for sunnyvale yet
19:30:03 <lifeless> we should make one
19:30:06 <jdob> etherpad meaning agenda and hotel details?
19:30:07 <slagle> dprince: i think this is where we usually get an update on the RH cloud, if you're available to give one
19:30:14 <slagle> jdob: yes
19:30:25 <jdob> kk, I was gonna ask about that
19:30:33 <slagle> #action etherpad for sunnyvale travel
19:30:36 <dprince> okay. Here is where it stands today.
19:30:37 <slagle> crap
19:30:43 <slagle> #action slagle etherpad for sunnyvale travel
19:30:45 <dprince> Physical cabling has been done to the rack.
19:31:12 <lifeless> #action etherpad for sunnyvale travel
19:31:16 <lifeless> #action slagle etherpad for sunnyvale travel
19:31:23 <dprince> They are configuring some firewall stuff, and then Kambiz needs to make some changes on the bastion too.
19:31:28 <jdob> lifeless: you forgot "crap"
19:31:35 <lifeless> wasn't an action :P
19:31:45 <jdob> i blame slagle for dropping the ball there
19:31:53 <lifeless> oh the puns
19:31:57 <slagle> dprince: any eta?
19:32:39 <dprince> Then there is a discussion that needs to happen about mapping the /24 into the private network addresses.
19:33:53 <slagle> ok, so it sounds like it's still a WIP, and we're a little ways off
19:34:07 <dprince> slagle: I hope this week. I'm not directly involved... all I can do is put in requests to escalate this so please don't become a hater if it doesn't happen
19:34:53 <slagle> i don't hate
19:35:13 <slagle> thx for the update :)
19:35:19 <slagle> any other news here before moving on?
19:35:26 <lifeless> dprince: what's the mapping discussion?
19:35:28 <dprince> slagle: that wasn't for you directly. Just saying... it has been a while.
19:35:44 <lifeless> dprince: won't the /24 be 95% floating ips - and thus not mapped?
19:36:06 <dprince> lifeless: I don't know exactly what they want to know. Probably just something about how they get routed into the network.
19:36:48 <lifeless> dprince: ok; let's not disrupt the meeting, but yeah, they can't arp-limit it or anything
19:36:52 <lifeless> or port limit
19:37:39 <slagle> ok, moving on
19:37:40 <lifeless> dprince: do you know the /24 range that we're getting ?
19:37:48 <dprince> lifeless: not yet
19:37:54 <lifeless> dprince: we can add it to the admin spreadsheet and block out some ranges once we do
19:38:31 <slagle> #topic CI virtualized testing progress
19:39:05 <slagle> anyone around to give an update on this? pleia2 perhaps?
19:39:06 <lifeless> we should probably drop this topic or just make it 'CI' more generally, since it's done :)
19:39:27 <lifeless> #topic CI virtualized testing progress
19:39:37 <slagle> ok, that's a good update :)
19:40:14 <lifeless> I do have status from the infra meeting
19:40:18 <lifeless> which is that subject to:
19:40:27 <lifeless> - moving out check jobs to a check-tripleo queue
19:40:39 <lifeless> - fixing the nodepool 'wont start with a provider offline' bug
19:40:51 <lifeless> we can be in the system during the FF weeks of panic
19:41:23 <lifeless> I'm going to submit a check-tripleo pipeline patch today and derekh was working on the nodepool bug
19:41:29 <lifeless> does anyone know where he got to ?
19:41:55 <slagle> i do not; he mentioned he was working on it, but i'm not sure he got a resolution
19:42:40 <ccrouch> BTW just checking: CI won't be *done* until every project is gated on a tempest run on top of an overcloud, right?
19:42:54 <lifeless> CI won't ever be *done*
19:43:01 <lifeless> but that's certainly a key goal :)
19:43:24 <ccrouch> after that it's just a matter of fixing issues discovered by CI
19:43:30 <ccrouch> ok cool
19:43:37 <lifeless> well
19:43:42 <lifeless> increasing performance
19:43:50 <lifeless> adding more styles of test
19:43:53 <lifeless> testing upgrades
19:44:00 <lifeless> the more we test the more we'll find we can test
19:44:03 <ccrouch> upgrades, thats a good one
19:44:08 <ccrouch> yep
19:44:52 <slagle> #topic open discussion
19:45:00 <lifeless> #topic open discussion
19:45:02 <slagle> feel free to keep discussing CI if you wish :)
19:45:25 <slagle> anyone have anything to bring up?
19:45:29 <matty_dubs> Any update on hotel arrangements?
19:45:45 <slagle> cody-somerville: any update on hotels for sunnyvale?
19:46:03 <slagle> (sorry to keep asking...)
19:46:42 <tchaypo> I'll be starting at HP Monday, so expect to start seeing newbie questions from me next week
19:47:01 <ccrouch> i had a question around the impact of the upcoming Icehouse feature freeze on tripleo related projects
19:47:01 <ccrouch> i.e. does it have any direct impact, e.g. are we going to be unable to land new tripleo-image-elements after M3 or anything like that?
19:47:01 <lifeless> slagle: I believe cody-somerville has quotes and is waiting for the contract to be signed
19:47:12 <tchaypo> (Although at this point I expect a chunk of time to be taken up chasing travel arrangements etc.)
19:47:31 <lifeless> ccrouch: so I think the answer for that should be tied to when slagle takes his point-in-time 'stable' branch
19:47:43 <lifeless> ccrouch: which will likely happen with everything else
19:47:52 <matty_dubs> tchaypo: I've been working on TripleO for a while now; I continue to ask newbie questions. ;)
19:48:01 <lifeless> ccrouch: generally though we aim to be continually releasable, so no - I don't plan on enforcing a freeze
19:48:03 <lifeless> ccrouch: *but*
19:48:12 <slagle> FF is March 6th btw
19:48:15 <lifeless> ccrouch: we should be ultra careful for backwards compat during that time
19:48:40 <slagle> which, many of us will be in Sunnyvale for
19:48:47 <ccrouch> lifeless: agreed, but no moratorium on new things?
19:49:09 <lifeless> ccrouch: not IMO
19:49:16 <lifeless> tuskar-api should
19:49:28 <lifeless> as it's an API server and aiming to align with the server release protocols
19:49:46 <lifeless> we'll probably end up running forks of heat etc for a few weeks :(
19:49:59 <ccrouch> since slagle et al will be managing the stable branches, i guess it's up to them what will get merged down from the master branch :-)
19:50:45 <ccrouch> jdob: ^ re: tuskar-api stability
19:50:57 <lsmola2> lifeless: will we have backwards compat?
19:51:39 <jdob> ccrouch: they shouldn't be moving too much, if at all, between now and icehouse
19:51:53 <jdob> the last bits for end to end that I'm doing now are all behind the scenes
19:51:58 <lifeless> lsmola2: depending on what you mean, yes :)
19:52:23 <lifeless> lsmola2: if you mean 'will we require that all changes to trunk will work with a config/rc etc that works on the stable branch' - /no/
19:52:49 <lifeless> lsmola2: we're simply not ready for that, and that was one of the conditions that emerged in the HK discussion about stable branches
19:53:26 <slagle> right, just keep in mind if you make a change that is not backwards compat, we won't be able to pull it down to the stable branch
19:53:27 <lifeless> lsmola2: however, we should guarantee that at any point in time all the trunk commits work together, which implies some backwards compat code during transitions
19:53:48 <lsmola2> lifeless, so we are starting backwards compat when we are stable, right?
19:53:48 <lifeless> slagle: I wouldn't expect you to pull anything down to stable
19:54:03 <slagle> well, bugfixes
19:54:24 <lifeless> lsmola2: no, I think we start backwards compat when we cut a 1.0 of tripleo, whatever that means
19:54:28 <slagle> i'm talking the i-3 to rc timeframe
19:54:39 <lsmola2> lifeless, ok :-)
19:56:08 <lifeless> slagle: ah, ll
19:56:10 <lifeless> kk
19:56:18 <slagle> 5 mins left if there are any other topics
19:56:39 <lifeless> does anyone need help on anything ?
19:56:42 <ccrouch> http://lists.openstack.org/pipermail/openstack-dev/2014-February/027444.html
19:57:21 <lifeless> oh right
19:57:39 <lifeless> so, hardware prep should be in the deploy ramdisk (e.g. raid layout application)
19:57:47 <lifeless> nova-bm will deploy to sda
19:58:17 <lifeless> for swift, I'd say - if there are multiple disks, the flavor should specify a raid-1 root and jbod for the rest
19:58:28 <lifeless> and we should put the swift store on the jbod
19:58:49 <lifeless> for cinder, whatever best practice is for glusterfs/ceph
19:59:07 <lifeless> a small patch may be needed to ensure we pass the full flavor metadata to the ramdisk
19:59:24 <lifeless> in-instance scripts can do last-stage tuning
19:59:50 <lifeless> but the top level q - how much should tripleo do? admins shouldn't do anything by hand.
20:00:05 <ccrouch> lifeless lets continue in #tripleo
20:00:12 <ccrouch> since we timed out
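A minimal sketch of the flavor-driven disk layout idea lifeless describes above (raid-1 root, JBOD for the rest), assuming python-novaclient of that era; the 'raid:*' extra_spec keys, the flavor name, and the endpoint are purely illustrative placeholders, not an existing convention.

    from novaclient.v1_1 import client

    # Credentials and endpoint are placeholders for the undercloud's nova.
    nova = client.Client("admin", "unset", "admin",
                         "http://undercloud:5000/v2.0")

    # Attach hypothetical disk-layout hints to the swift storage flavor so
    # that a deploy ramdisk which received the full flavor metadata could
    # apply them: mirror the root disks, leave the rest as JBOD for swift.
    flavor = nova.flavors.find(name="swift-storage")
    flavor.set_keys({"raid:root": "raid1",
                     "raid:remaining": "jbod"})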
20:00:13 <lifeless> slagle: thank you!
20:00:19 <lifeless> #endmeeting