19:10:48 #startmeeting tripleo
19:10:49 Meeting started Tue Oct 21 19:10:48 2014 UTC and is due to finish in 60 minutes. The chair is cinerama. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:10:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:10:53 The meeting name has been set to 'tripleo'
19:11:02 Good start
19:11:06 ok so hi
19:11:17 #topic agenda
19:11:17 * bugs
19:11:17 * reviews
19:11:17 * Projects needing releases
19:11:17 * CD Cloud status
19:11:17 * CI
19:11:19 * Tuskar
19:11:24 * Specs
19:11:26 * open discussion
19:11:27 Remember that anyone can use the link and info commands, not just the moderator - if you have something worth noting in the meeting minutes feel free to tag it
19:11:30 #topic bugs
19:11:36 #link https://bugs.launchpad.net/tripleo/
19:11:36 #link https://bugs.launchpad.net/diskimage-builder/
19:11:36 #link https://bugs.launchpad.net/os-refresh-config
19:11:38 #link https://bugs.launchpad.net/os-apply-config
19:11:40 #link https://bugs.launchpad.net/os-collect-config
19:11:42 #link https://bugs.launchpad.net/os-cloud-config
19:11:44 #link https://bugs.launchpad.net/tuskar
19:11:46 #link https://bugs.launchpad.net/python-tuskarclient
19:12:03 (so do we usually talk about the bugs now?)
19:12:13 Usually criticals
19:12:24 I'm seeing two in tripleo
19:12:43 #link https://bugs.launchpad.net/tripleo/+bug/1188067
19:12:44 Launchpad bug 1188067 in tripleo "* listening services available on all addresses" [Critical,Triaged]
19:12:50 #link https://bugs.launchpad.net/tripleo/+bug/1374626
19:12:51 Launchpad bug 1374626 in tripleo "UIDs of data-owning users might change between deployed images" [Critical,Triaged]
19:13:03 The first one is assigned to me, I think
19:13:10 I flagged it as critical
19:13:11 Yep, looks like
19:13:34 we also have a critical in os-cloud-config
19:13:37 It came up because hp security noticed our undercloud node listening on *:53
19:13:38 #link https://bugs.launchpad.net/os-cloud-config/+bug/1382275
19:13:39 Launchpad bug 1382275 in os-cloud-config "new register-nodes does not accept ints for numeric input" [Critical,In progress]
19:13:57 and more importantly the Internet had noticed and it was being used to ddos people
19:14:22 yikes
19:14:30 1374626 is assigned to SpamapS, so no update unless he arrives.
19:14:41 What is on port 53?
19:14:48 We've fixed this for most services by putting them behind haproxy and controlling where it listens
19:14:50 * bnemec hopes it isn't something really obvious
19:15:01 Dnsmasq, run by neutron
19:15:41 bnemec: dns
19:16:09 I took ownership of the critical one in os-cloud-config
19:16:11 Ah, interesting that they firewall everything else, but that's allowed to listen on public interfaces.
19:16:28 And now that I think about it, perhaps this is something I should be raising with neutron
19:16:30 So do we have a plan for addressing that?
19:16:47 yup, looks like GheRivero has a proposed fix for 1382275 (which could use some review love)
19:16:53 The quick fix for hp2 has been some manual iptables rules to block it
19:16:57 yea, dns is an annoying one because it's udp, so you have to either try and do stateful udp, which is annoying, or leave it open
19:17:02 Oh.
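[Editor's sketch of the haproxy pattern described above — binding each proxied service only on intended addresses rather than 0.0.0.0. The service name, addresses, and ports are invented for illustration and are not taken from the actual tripleo templates.]

```
listen keystone_api
    # bind only on the control-plane VIP, not on *:5000
    bind 192.0.2.10:5000
    server undercloud 192.0.2.11:5000 check
```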
19:17:02 I love ghe
19:17:02 anywho, sidetracked
19:17:33 I'll review GheRivero's patch. Moving on
19:17:45 k, shall we move on to reviews, on that note?
19:17:57 #action Review https://review.openstack.org/#/c/129950/
19:17:59 it needs some unit tests but it's ok
19:18:13 With the uid mapping one, we had an email thread, I'm not sure it reached a consensus
19:18:19 it might be worth resurrecting that thread
19:18:23 Summit topic?
19:18:28 oh
19:18:45 yes, I'll resurrect it and ask if we want to discuss more at summit or if we have a consensus
19:18:59 #action greghaynes to resurrect uid mapping and ask if we want to discuss more at summit or if we have a consensus
19:19:16 Is there a spec up for review?
19:19:19 summit is close enough that that sounds like an excellent opportunity to iron it out more quickly than email back-and-forth
19:19:20 Or a change?
19:19:30 tchaypo: pretty sure no
19:19:33 iirc there was a proposed change that attracted discussion
19:19:41 tchaypo: I think the fix might live entirely internally ATM, actually :(
19:20:12 anywho, resurrecting the thread will hopefully answer these questions :)
19:20:13 i *thought* i saw it come up for review recently
19:20:24 greghaynes: +1
19:20:36 Whoa, I just zoomed my whole screen somehow
19:21:00 haha
19:21:01 Haha
19:21:06 so is that it for the bugs?
19:21:16 So, last question I have on this topic is do we have a path forward for 1188067?
19:21:57 Oh, I misread an earlier comment
19:22:01 And thought ghe had provided a patch
19:22:21 Yeah, multiple bug discussions happening at once.
:-)
19:22:24 I think we can get a list of interfaces to listen on from heat
19:22:39 We already use that list to control where haproxy listens for things
19:23:11 As long as neutron has a hook that we can use to set the interfaces, that should be sufficient
19:23:15 I'm pretty sure we decided that services should not bind on all interfaces by default (for this reason) and then we should explicitly list services that should listen on public interfaces
19:23:19 so this sounds like just a missed case of that?
19:23:25 If neutron doesn't, we can raise it with them
19:23:44 And as a fallback, it'd be easy to manually block with up tables
19:23:49 *iptables
19:23:58 greghaynes: That's how I interpret it
19:24:55 Moving on?
19:24:59 +1
19:25:06 #topic reviews
19:25:17 #info There's a new dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
19:25:17 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:25:17 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:25:17 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:26:00 ...
19:26:02 Queue growth in last 30 is now at 0.9/day
19:26:31 (Bottom of 30 day report)
19:27:18 But
19:27:44 3rd quartile wait time: 6 days, 5 hours, 51 minutes
19:27:59 That's down from almost 2 weeks
19:28:09 \o/
19:28:20 wtg reviewing people
19:28:55 Yeah, it seems like the flow of patches coming in must have slowed down. Not sure whether that's good or not.
19:29:07 We talked about trying to be proactive wrt first timers
19:29:19 But I don't know if that has been actioned at all
19:29:27 also it's the end of a release, it makes sense that things would quiet down a bit before summit
19:29:36 True
19:29:59 we could look into doing a more formal "patch pilot" program
19:30:06 to encourage folks to contribute
19:30:59 Maybe we can discuss on list/summit?
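[Editor's sketch of the iptables fallback mentioned above. The interface name is a placeholder, not a value from the actual hp1/hp2 clouds; treat this as illustrative firewall configuration, not the deployed fix.]

```shell
# Block DNS queries to dnsmasq arriving on the public interface,
# leaving the control-plane network able to resolve as before.
iptables -A INPUT -i eth0 -p udp --dport 53 -j DROP
iptables -A INPUT -i eth0 -p tcp --dport 53 -j DROP
# A stateful alternative would match conntrack state (e.g.
# -m conntrack --ctstate NEW), but as noted above, tracking
# "state" for UDP is awkward.
```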
19:31:47 that sounds good, though i won't be at summit
19:31:59 anyway, is that enough on reviews for now?
19:32:14 wfm
19:32:29 #topic Projects needing releases
19:32:44 so... are there any? :)
19:33:04 we should, it's more a question of who ;)
19:33:09 I really want to learn how, actually
19:33:20 i just did them last thursday, do we really need them weekly?
19:33:30 there weren't a ton of changes when i last did them
19:33:33 good qn
19:33:34 jdob: can I do it and try and poke you for support?
19:33:36 it's not a hard or really time-consuming process
19:33:43 greghaynes: ya, absolutely
19:33:56 awesome
19:34:21 you'll need perms before you can. before i got them, lifeless asked me to quickly talk with someone about the process
19:34:36 #action greghaynes to learn how to release all the things
19:34:37 so if you want to read up about it and ping me just to double check you're clear, i'll vouch for you
19:35:00 jdob: ok. good time to do that when there's nothing pressing to go out then :)
19:35:07 very true :D
19:35:17 https://review.openstack.org/#/c/105275/ is a nice change in DIB yesterday i'd like to see released
19:35:27 maybe it would be a good week for greghaynes to do a release for practice if there are not many things going out
19:35:34 cinerama: agreed
19:35:48 i'd have argued more to skip it if it wasn't going to be an opportunity to train another
19:35:57 "train"... it's really not that hard :)
19:36:54 mmkay, anything else on reviews?
19:37:08 s/reviews/releases, nerp
19:37:10 i like the weekly cadence.
then there is no doubt whether your change is going out or not, and each release delta is small in case of regressions
19:37:19 greghaynes: indeed
19:37:22 so +1 greghaynes :-)
19:37:42 ccrouch: that sounds like you volunteering to help out :D
19:37:53 #topic CD Cloud status
19:38:30 I discovered this week that we had the wrong mac addresses for most of the machines
19:38:57 We now have ~70 that can be (and have been) used to bring up an overcloud
19:39:04 rh1 - OK , hp1
19:39:21 hp1 - patch to start using it still waiting to be merged
19:39:58 Do you have a link?
19:40:18 jdob: ha
19:40:25 if there's a nice status dashboard somewhere i don't know about we should add it to the agenda wiki thing
19:40:25 tchaypo: this should be the link but gerrit is throwing a 404 https://review.openstack.org/#/c/126513/
19:40:53 derekh: wfm, weird
19:41:03 Thanks
19:41:22 tchaypo: I mean that's the link from my gerrit dashboard
19:41:47 weird
19:42:17 the link works for me
19:42:23 hmm, probably something to do with the project being renamed to system-config
19:42:53 So basically we just need an infra person to approve that so we can be multi-region again?
19:43:23 bnemec: yup, and hope it's ok ;-) , it's been a few weeks since I last looked at hp1
19:43:24 Is jerryz around?
19:44:02 derekh: Only one way to find out :-)
19:44:04 I can poke clarkb because he's sitting across from me
19:44:17 greghaynes: He already +2'd, so don't poke him. :-)
19:44:22 oh
19:44:24 damn
19:44:44 I mean unless you want to. ;-)
19:45:21 * derekh won't be here to help with problems if it's merged now
19:45:51 #action infra person should approve adding hp1 back
19:45:52 Anything else on CI? I almost hate to say it, but based on derekh's weekly updates it's been pretty quiet lately.
19:45:59 ok, I get the 404 if I'm logged in, and can see the link if I'm logged out
19:46:01 derekh: So should we hold off?
19:46:32 bnemec: not necessarily, ideally somebody should be available if there are problems
19:46:58 derekh: Do we have someone else who will be able to look into issues?
19:47:29 Pretty sure I don't know enough to be of use.
19:48:24 bnemec: any of the cd admins have access to the cloud, but may be lacking in familiarity; if there are problems, logging into the hp1 bastion and joining the screen session is a good start
19:49:16 re hp1 - it's already down, right?
19:49:22 bnemec: and I wouldn't let that stop it merging, worst case scenario CI might be failing for a few hours
19:49:31 i don't think we can make it much worse, unless it starts accepting jobs and then spuriously failing them
19:50:09 Hopefully it's not down, but it's not used by nodepool atm.
19:50:25 bnemec: yup, that's the correct assessment
19:50:32 mmkay
19:51:11 shall we move on? we're getting close to the end of our slot for today i think
19:51:14 we're running low on time
19:51:17 +1
19:51:22 so, to sum up, we should just get it merged regardless of who is around; somebody will pick up the pieces
19:51:27 #topic Tuskar
19:51:37 nothing spectacular to report
19:51:52 mostly just prepping for summit
19:52:10 if no one's got anything else here, shall we move on to specs?
19:52:27 gogogo
19:52:35 #topic Specs
19:53:20 *crickets*
19:53:23 I think that's a good sign
19:53:38 I think the big thing is summit topics.
19:53:41 anyone want to talk about any of the open specs, or shall i open the floor to general discussion?
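[Editor's sketch of the bastion/screen debugging steps suggested above. The bastion hostname and session name are placeholders — the actual hp1 addresses are not given in the log.]

```shell
# Log into the (hypothetical) hp1 bastion host
ssh admin@hp1-bastion.example.org
# List the screen sessions running there
screen -ls
# Attach to the shared admin session without detaching other admins
screen -x admin
```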
19:54:02 #topic open discussion
19:54:06 right, go nuts
19:54:12 bnemec: and the need to get *something* posted on a topic before summit
19:54:31 rather than just coming in with all the details at summit
19:54:55 #link http://lists.openstack.org/pipermail/openstack-dev/2014-October/048652.html
19:55:12 ^ ML discussion of the topics SpamapS proposed for our scheduled sessions
19:55:48 Since I think we're supposed to be using a more collaborative process for scheduling this cycle, it would be nice to have more input. :-)
19:56:12 INPUT
19:56:16 sorry, had to
19:56:24 :-)
19:56:30 did we have an etherpad on that as well?
19:56:34 hi lifeless
19:56:45 cinerama: Yes, it's linked from the first ML post.
19:56:52 bnemec: OIC
19:56:53 #link https://etherpad.openstack.org/p/kilo-tripleo-summit-topics
19:57:31 I didn't post anything up about CI this time round cause I thought it was much the same as the last time we discussed it
19:57:54 So, I don't expect us to resolve the question in the next three minutes, but I think schedules are due later this week so ASAP.
19:58:28 I guess that's all I had to say.
19:58:30 hi cinerama
19:59:29 good meeting everyone!
19:59:41 we done here?
20:00:07 * bnemec has nothing else
20:00:33 i have coffee now
20:00:39 but that's not hugely relevant to the meeting
20:00:45 cool, i'll close up then. thanks for participating folks
20:00:52 #endmeeting tripleo