19:02:58 #startmeeting tripleo
19:02:59 Meeting started Tue Oct 7 19:02:58 2014 UTC and is due to finish in 60 minutes. The chair is SpamapS. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:03 The meeting name has been set to 'tripleo'
19:03:08 Didn't realize it would fix itself
19:03:37 #topic agenda
19:03:45 All hail the new overlord!
19:04:00 * bugs
19:04:00 * reviews
19:04:00 * Projects needing releases
19:04:00 * CD Cloud status
19:04:00 * CI
19:04:03 * Tuskar
19:04:05 * Specs
19:04:08 * open discussion
19:04:10 does anyone want to add anything to that?
19:05:23 #bugs
19:05:33 #topic bugs
19:06:00 #link https://bugs.launchpad.net/tripleo/
19:06:01 #link https://bugs.launchpad.net/diskimage-builder/
19:06:01 #link https://bugs.launchpad.net/os-refresh-config
19:06:01 #link https://bugs.launchpad.net/os-apply-config
19:06:01 #link https://bugs.launchpad.net/os-collect-config
19:06:03 #link https://bugs.launchpad.net/os-cloud-config
19:06:05 #link https://bugs.launchpad.net/tuskar
19:06:08 #link https://bugs.launchpad.net/python-tuskarclient
19:06:37 three criticals with no assignee in tripleo
19:06:53 actually I'll take https://bugs.launchpad.net/tripleo/+bug/1374626
19:06:55 Launchpad bug 1374626 in tripleo "UIDs of data-owning users might change between deployed images" [Critical,Triaged]
19:07:11 Need people for https://bugs.launchpad.net/tripleo/+bug/1375641
19:07:12 Launchpad bug 1375641 in ironic "Error contacting Ironic server for 'node.update'" [Critical,Fix released]
19:07:37 and https://bugs.launchpad.net/tripleo/+bug/1263294
19:07:39 Launchpad bug 1263294 in tripleo "ephemeral0 of /dev/sda1 triggers 'did not find entry for sda1 in /sys/block'" [Critical,In progress]
19:08:06 https://bugs.launchpad.net/tripleo/+bug/1263294 has been critical since May
19:08:16 with no progress, but it doesn't seem to be getting in our way
19:08:29 https://bugs.launchpad.net/tripleo/+bug/1375641 is fixed, isn't it?
19:08:30 Launchpad bug 1375641 in ironic "Error contacting Ironic server for 'node.update'" [Critical,Fix released]
19:08:31 Is it possible that it's not actually critical?
19:09:38 i closed https://bugs.launchpad.net/tripleo/+bug/1373430, it was fixed in horizon
19:09:39 tchaypo: how are we working around it?
19:09:41 Launchpad bug 1373430 in tripleo "Error while compressing files" [Critical,Fix released]
19:10:17 oh right, we work around it by forcibly mounting ephemeral in o-r-c
19:10:30 tchaypo: we have a workaround while waiting for the fix in cloud-init; I will decrease its importance
19:10:52 I think since it has a workaround, it is by definition 'High' not 'Critical'
19:10:55 * SpamapS drops it
19:12:13 GheRivero: well one of us dropped it anyway. :)
19:12:45 bnemec: indeed it is fixed in Ironic
19:12:49 can we revert https://review.openstack.org/#/c/124994/ ?
19:13:06 err wait n/m
19:13:08 misunderstood
19:13:41 closed bug 1375641 .. it was indeed fixed in Ironic
19:13:47 Launchpad bug 1375641 in ironic "Error contacting Ironic server for 'node.update'" [Critical,Fix released] https://launchpad.net/bugs/1375641
19:14:17 Ok that just puts the one critical on me. :) Anybody want to bring up any specific bugs before we move on?
19:15:34 BTW, great job triaging everybody!
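[Editor's note: the workaround for bug 1263294 is only described in passing above ("forcibly mounting ephemeral in o-r-c"). Purely to illustrate the shape of such a script, here is a minimal sketch of an os-refresh-config-style executable; the device path, mountpoint, and environment variable names are assumptions for illustration, not the actual TripleO element.]

```python
#!/usr/bin/env python
# Hypothetical o-r-c-style script: mount the ephemeral partition ourselves so
# cloud-init's failing "ephemeral0: /dev/sda1" handling (bug 1263294) is not
# relied on. EPHEMERAL_DEV / EPHEMERAL_MOUNT defaults are assumed, not taken
# from the real TripleO element.

import os
import subprocess

EPHEMERAL_DEV = os.environ.get("EPHEMERAL_DEV", "/dev/sda1")  # assumed default
MOUNT_POINT = os.environ.get("EPHEMERAL_MOUNT", "/mnt")       # assumed default


def is_mounted(device, mountpoint):
    """Return True if device is already mounted at mountpoint."""
    with open("/proc/mounts") as mounts:
        return any(line.split()[:2] == [device, mountpoint] for line in mounts)


def main():
    if not os.path.exists(EPHEMERAL_DEV):
        print("%s not present; nothing to do" % EPHEMERAL_DEV)
        return
    if is_mounted(EPHEMERAL_DEV, MOUNT_POINT):
        print("%s already mounted at %s" % (EPHEMERAL_DEV, MOUNT_POINT))
        return
    if not os.path.isdir(MOUNT_POINT):
        os.makedirs(MOUNT_POINT)
    # Let mount(8) detect the filesystem type; fail loudly if it cannot mount.
    subprocess.check_call(["mount", EPHEMERAL_DEV, MOUNT_POINT])
    print("mounted %s at %s" % (EPHEMERAL_DEV, MOUNT_POINT))


if __name__ == "__main__":
    main()
```

[os-refresh-config would simply execute a script like this from one of its phase directories; the real element may differ in how it discovers the device and where it mounts it.]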
19:15:46 #topic reviews
19:16:05 #info There's a new dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
19:16:08 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:16:12 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:16:15 #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:16:48 9 people at 60 or higher on http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt right now
19:17:03 Queue growth in the last 30 days: 0 (0.0/day)
19:17:19 New changes in the last 30 days: 212 (7.1/day)
19:17:21 Changes merged in the last 30 days: 199 (6.6/day)
19:17:43 do we still feel like 3 reviews a work day is a reasonable commitment?
19:17:55 way to go people. I am somewhat ashamed of my 1/day rate the last 90 days, but you will see that climb hopefully soon.
19:18:09 to revert to the traditional statistic
19:18:11 3rd quartile wait time: 19 days, 7 hours, 13 minutes
19:18:41 slagle: seems like it is right in the sweet spot really.
19:18:52 Quoting monty's TC email - "OpenStack is optimized for high-throughput at the expense of individual
19:18:54 patch latency.
19:18:56 "
19:19:27 We could drive wait time down more of course, but I see more patches waiting on submitter than reviewers.. so I think there's not much point in that.
19:19:44 SpamapS: yea, i don't have a problem really with people not meeting it
19:19:51 we seem to be fitting that pattern - individual patches take literal weeks to land, but we seem to be keeping up with what's coming in fairly well
19:20:05 just more curious if maybe we ask too much, if lots of people are finding it hard to achieve
19:20:20 tchaypo: there's also the dispair effect to consider.. as it takes longer to land them, less people even bother submitting.
19:20:29 The only problem I have with the high throughput is the effect on new committers
19:20:32 but i don't really think there's anything actionable either way
19:20:34 despair? :-P
19:21:03 dispair, datpear, it's all the same
19:21:09 tchaypo: I've always felt that we should have a little priority mark on somebody's first 2 or 3 patches.
19:21:34 SpamapS: yeah, good point. Probably especially bad for new committers if we turn them off after their first attempt to help us, but it's also bad if it means people just don't bother providing updates because it takes too much time or effort to get them through
19:21:41 But we don't want to make them too special.. or they'll get a false sense of how things work.
19:22:02 Anyway, I feel like reviews are working o-k right now and we have bigger fish to fry.
19:22:15 next topic in 10.. 9...
19:22:19 I wonder if the "Thanks for your first patch" message could be adapted to look up an irc channel and suggest they go hang out there
19:22:42 So this is the first meeting I remember where the queue hasn't grown. Go us :)
19:22:43 I feel like we tend to do okay at pushing through urgent patches, but that's largely because if it's urgent people jump in IRC and hassle cores
19:23:10 alexisli: one reason for that might be that OpenStack has stopped changing under us for the last 2 weeks.. ;)
19:23:48 Yeah, we also had relatively few CI issues this week because of that.
19:23:58 Shall I follow up on the list with a suggestion that we start paying more attention to throughput than to latency?
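[Editor's note on the figures quoted above: 212 new changes against 199 merges would naively imply the queue grew by 13, yet growth is reported as 0; the gap is presumably changes abandoned or otherwise closed during the period, which the log does not quote. A tiny sketch of that arithmetic follows, with the closed-without-merge count treated as an assumption.]

```python
# Hypothetical reconstruction of the review-queue arithmetic quoted in the log.
# Only new_changes and merged are given; closed_without_merge is an assumption
# chosen so that growth comes out at the reported 0.
DAYS = 30
new_changes = 212          # quoted: 7.1/day
merged = 199               # quoted: 6.6/day
closed_without_merge = 13  # assumed: abandoned/otherwise closed, not quoted

growth = new_changes - merged - closed_without_merge
print("Queue growth over %d days: %d (%.1f/day)" % (DAYS, growth, growth / float(DAYS)))
```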
19:24:18 tchaypo: it would be good if the metrics included that
19:24:29 would be a rather nice graph really
19:24:39 or at least, paying equal attention to both, with an unresolved question about special welcome for new contributors
19:26:00 #topic Projects needing releases
19:26:47 I feel like we automated this, but I know that there is just at least one excellent human doing a good job of fooling me into thinking they're a robot here..
19:27:09 i'll do it
19:27:45 jdob: ^5
19:27:55 #topic CI
19:28:10 SpamapS: actually it's relatively dispersed, there's probably 4 or so people who do it regularly
19:31:20 I guess CI is quiet? :-)
19:31:44 I don't believe it. :)
19:31:48 tchaypo: HP region status?
19:31:53 regions I should say
19:33:12 I don't know, was on leave last week and public holiday monday
19:33:33 I assume that means no progress on hp2 since last week...
19:33:59 thanks to help from morganfainberg we have keystone working a bit better; so now it's clear that the bug preventing >4 nodes coming up lies elsewhere
19:34:17 but I don't think it's the nova scheduler race either...
19:34:18 tchaypo, aww :( was hoping that'd be easy.
19:34:25 I'll be grovelling in logs today
19:34:38 PS: our default log levels are useless for figuring out what's happening
19:34:40 ok
19:34:54 Sounds good
19:35:20 #topic Tuskar
19:37:24 hellleewww?
19:37:35 jdob: ?
19:37:43 sorry, nothing major to report
19:37:56 slackers
19:38:14 it's just that things are so perfect there's no need to waste time in the meeting reiterating that :)
19:39:11 lol
19:39:42 hehe
19:41:05 #topic Specs
19:41:14 It's _OCTOBER_
19:41:21 the summit is in _1 month_
19:41:55 I would like to impress upon everyone that the summit is where we resolve things we couldn't resolve via normal communication methods.
19:42:22 We should have no surprises in what is brought up at the summit. We should already have been talking about these things in IRC, here at the meeting, and on openstack-dev.
19:42:56 Spoilers!
19:43:02 SpamapS, so one thing I wanted to ask actually is
19:43:06 Submit your specs early if you like. Let us get to a point where we just get into the room to hash out details and resolve hard-to-communicate points at the summit.
19:43:21 bnemec: the image gets rebuilt in the end.
19:43:23 I read about ansible for cloud updates
19:43:36 Heh
19:43:50 gfidente: yes I am working on a spec for an ansible-based TripleO cloud right now actually. :-D
19:43:55 SpamapS: I kinda think the static UID/GID thing needs a spec and a "omg what?!" discussion at the summit ;)
19:44:05 oh yeah I wanted to ask who was going to discuss that, seems interesting
19:44:08 (note, I'm not volunteering you to write a spec)
19:44:42 Ng: I hope we will have patches landed by then.
19:44:53 But a spec might be in order nonetheless.
19:45:07 4 bullet points a spec make.
19:45:42 I think a spec is certainly in order.
19:45:51 so one more topic I wanted to bring up, just to ping people, not sure if this is right place to discuss
19:45:56 I'd also like some fedora folk to weigh in on at least the mailing list thread about it, in case we ubuntunites are missing some crucial twist in the land of rouge headwear
19:46:59 I'd like to see details of how any defaults can be overridden, or UIDs could be pre-assigned by an operator
19:47:41 gfidente: if it is not about specs in general, we'll do open discussion next.
:)
19:48:02 * bnemec will try to catch up on the uid discussion today
19:48:07 SpamapS, yeah open discussion probably more appropriate
19:48:56 #info Specs are forthcoming for UID+GID preservation and Ansible integration
19:49:00 #topic Open Discussion
19:49:55 QuintupleO update coming soon.
19:49:57 i know there are some folks ramping up on tripleo-puppet-elements as well
19:50:06 I have a blog post in the works.
19:50:06 but i'm unaware of the details
19:54:29 bnemec: nice
19:54:50 slagle: we may need to reach out and make sure they have time allotted here at the meeting if they want it. :)
19:55:04 I believe that tripleo-puppet-elements is now somewhat under the tripleo umbrella
19:55:08 Also I wouldn't mind having a quick Kolla update each time, even though they have their own meeting.
19:55:18 Oh
19:55:19 in the sense that we'll see reviews in the IRC room at least
19:55:24 So the alt meeting time is _REALLY_ bad for me.
19:55:42 does that mean that contributors to tripleo-puppet-elements get to vote in the next PTL election?
19:55:51 Can we just agree to have a rotating meeting moderator for that time slot?
19:56:08 tchaypo: it should. If it does not, we should change that.
19:56:22 tchaypo: i think so, since it's under openstack/ now
19:56:27 I thought tchaypo and shadower were generally running the alt meetings.
19:56:29 That's been happening in practice - I've been doing most of them until it switched to Europe
19:56:36 #action SpamapS to make sure tripleo-puppet-elements and kolla contributors get TripleO ATC status.
19:56:45 by coincidence I haven't been available to run either meeting since then, so shadower and/or marios have run it
19:56:47 mostly tchaypo but I'm happy to help out
19:56:48 tchaypo: cool
19:57:08 I'm happy to keep running with that
19:57:08 * shadower has actually run it only once
19:57:20 but I'll do more if need be
19:57:25 shadower: when DST changes for you guys, perhaps we can/should re-evaluate the time?
19:57:43 ok just making sure that is handled.
19:58:08 this meeting is now at 6am for me, which is manageable; the alternate meeting is now 7pm, which is getting into food time
19:58:51 Regarding Ansible for updating the cloud: stackforge/tripleo-ansible is there.. but it is pretty out of date as we've been forced to move forward downstream. We do intend to reverse flow and start merging patches in the stackforge repo first as soon as we actually use it for real things.
19:59:40 with 60s left, I think I will bid you all a broad "Thank you for your votes" and also thank you to slagle for providing everyone with a choice.
20:00:10 #endmeeting