21:02:48 #startmeeting project
21:02:49 Meeting started Tue Jan 28 21:02:48 2014 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:53 The meeting name has been set to 'project'
21:02:55 #link http://wiki.openstack.org/Meetings/ProjectMeeting
21:03:00 \o
21:03:02 #topic icehouse-2 / 1.12.0 postmortem
21:03:19 So... last week's gate issues created delays for Swift 1.12.0 and complicated the delivery of a sane icehouse-2 milestone
21:03:32 In particular we shipped heat with a milestone-critical issue, because (1) that issue wasn't really tested in the gate yet, so it ended up in master
21:03:37 and (2) milestone-critical issues do not get fast-tracked at the gate
21:03:38 * russellb dreams of synchronized releases
21:03:50 stevebaker: do we have a bug to track the absence of integration tests in that specific area ?
21:04:36 ttx: the tests exist, I need to raise a bug to enable vm -> heat on port 8000 in the gate infra
21:04:57 stevebaker: ok, when you have a bug number, let me know
21:05:03 would like to tie up loose ends
21:05:03 something like this, but which works: https://review.openstack.org/#/c/69276/ <-- advice welcome
21:05:19 I think that we'll have the same issues again if we can't keep the entry at the top of the gate under 12 hours of age
21:05:27 So we'll see how good we are at keeping it below that
21:05:53 but otherwise we may need to relax gate-jumping rules to include release-critical issues
21:06:18 I think I can deal with 12 hours
21:06:39 but 27 or 34... difficult to stick to release dates then
21:06:50 icehouse-3 should be a nice test :)
21:07:05 heh
21:07:19 still a lot of work to do to make icehouse-3 not blow up
21:07:22 any other post-mortem thoughts on i2 / 1.12.0 ?
21:07:28 russellb: yes, next topic
21:07:29 but i think the steps to get there are clear (enough)
21:07:49 holidays really hurt i2 velocity, too, i think
21:07:55 but i guess we can't cancel those
21:08:02 russellb: yeah, not sure that would fly
21:08:09 #topic master gate status
21:08:19 Things definitely improved over the last 7 days
21:08:25 yeah, lots has changed
21:08:29 lots and lots of good bug fixes
21:08:32 As far as I can tell, the recent improvement is not due to implementation of Sean's suggestions yet
21:08:35 from a bunch of people
21:08:37 (from http://lists.openstack.org/pipermail/openstack-dev/2014-January/025140.html )
21:08:40 ttx: right, that isn't done yet
21:08:44 So we still have room for improvement
21:08:45 ttx: correct, we're not there yet
21:08:49 which is good news
21:08:51 it's mainly bug fixes, improvements to zuul, and increased node capacity
21:09:08 yep, lots of good bug fixes from people, increased capacity, and the sliding window on the gate queue all helped
21:09:15 sdague: notmyname wanted to ask about progress being made on reducing the overlapping tests -- i suspect that effort hasn't started yet ?
21:09:38 ttx: right, the first step is the zuul logic to handle requiring a recent good check
21:09:46 ack
21:09:48 how did tempest day go yesterday?
21:09:50 which jeblair has mostly ready, but it tickled a gerrit bug last night
21:10:02 stevebaker: pretty good I think. sdague may have more data
21:10:11 gate bug day was productive i think
21:10:18 stevebaker: pretty well I think, we're now at about 95% on categorization of issues
21:10:28 link: http://status.openstack.org/elastic-recheck/data/uncategorized.html
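[Editor's note: "categorization" here means writing an elastic-recheck fingerprint query for each known gate bug, so that matching failures get attributed to that bug automatically. A minimal sketch of such a fingerprint, assuming the queries/<launchpad-bug-number>.yaml layout of the elastic-recheck repo; the bug number and failure message below are made up for illustration:]

```yaml
# queries/1234567.yaml -- hypothetical bug number and failure signature.
# The query is a Lucene-style search string that elastic-recheck runs
# against the logstash-indexed gate logs; a hit on a failed job
# categorizes that failure against this bug.
query: >
  message:"Timed out waiting for thing to become ACTIVE"
  AND tags:"console"
```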
21:10:30 at the very least it familiarized me with some of the e-r tooling
21:10:38 and we made some good progress on some of the top ones around cinder and neutron
21:10:39 i think that's the biggest concrete achievement, better categorization
21:10:54 ttx: yeh, I think we got a lot more people familiar with that, which is really good
21:11:01 patch in progress related to top neutron failure: https://review.openstack.org/#/c/69445/
21:11:10 a patch merged that squashed the top cinder bug (thanks jgriffith!)
21:11:32 hah
21:11:43 that was my next point
21:11:48 russellb: well, we might be declaring victory too fast on that one :)
21:11:58 sdague: not victory, just progress :)
21:12:01 heh
21:12:02 sounds like a promising workaround at least
21:12:24 Good to see a number of people working on it, including on Ubuntu's side
21:12:37 yeh, our throughput on friday & sat was over 100 patches merged / 24 hrs
21:12:52 just to give a sense that we're kind of back to business in the master gate
21:13:26 but let's not get comfortable
21:13:32 agreed
21:13:32 jamespage told me they might have narrowed it down on their side too
21:13:34 i still feel like the list of active bugs could use more eyes
21:13:41 yep, definitely
21:13:43 ttx: very good to hear.
21:14:13 62 bugs being tracked by elastic-recheck now - http://status.openstack.org/elastic-recheck/
21:14:26 the gate queue is currently climbing, but not to stratospheric heights
21:14:57 so it seems we are back at pre-crisis levels
21:15:11 (but then, neutron is not really gating these days)
21:15:15 we have a huge nova patch series we're trying to merge for nova-network performance
21:15:24 once that's all in, i'd like to try to increase tempest concurrency again
21:15:29 which should be a big speedup on test runtime
21:15:37 ack
21:15:46 anything else on that topic ?
21:15:47 most of that is in the gate now
21:15:57 oh, we also need to try to get the better image on rax perf nodes
21:16:09 sdague: what's the better image?
21:16:11 the current image is part of the slowdown
21:16:24 russellb: one with paravirt drivers configured
21:16:28 ah
21:16:31 jnoller is helping on that
21:16:41 cool, just a tweak to nodepool configs, right?
21:16:44 yes
21:16:50 need to make sure it works reliably first
21:16:54 pfft
21:17:19 ttx: done on this topic I think
21:17:24 #topic Code proposal deadline (russellb)
21:17:35 yeah, so we had one of these deadlines last cycle
21:17:35 russellb: how is that proposal doing so far ?
21:17:42 5 projects had a deadline, across 3 dates
21:17:55 i'm proposing it again for nova, but wanted to see if others wanted to coordinate on a single date
21:17:58 to make our schedule less confusing
21:18:06 i'm proposing 2 weeks ahead of feature freeze, so feb 18
21:18:23 We're planning to use Feb 18th too
21:18:24 we should probably just build this into the schedule when we plan it at the next summit
21:18:24 proposal on ML has seen some feedback... acked by markmcclain and hub_cap
21:18:25 yes, that can be opt-in, but a single date would be less confusing
21:18:41 jd__ told me he would not follow it for ceilometer
21:18:50 he has a good grip on incoming proposals
21:18:51 opt-in seems fine
21:19:06 it's a bigger deal when you're overwhelmed by the incoming wave
21:19:07 yeh, honestly spreading out the freezes also probably helps on gate load
21:19:23 so a few projects going later is good and fine
21:19:23 sdague: true dat
21:19:24 sdague: well, i thought about that, but remember, this isn't a *merge* deadline
21:19:26 so it's not that big of a deal
21:19:27 (note that I wouldn't be against it if we wanted to do that for _all_ projects though)
21:19:43 russellb: true
21:19:46 russellb: ahhh... excellent point
21:19:55 I think the key thing is to avoid having a different date for each project
21:19:57 it'll be a big rush on check
21:20:06 which may be a nice warmup for feature freeze :)
21:20:19 ttx: right, that's what i was hoping
21:20:27 So it's opt-in, on Feb 18. I'll document it on the release schedule
21:20:28 i like having a nice coordinated schedule
21:20:32 perfect
21:20:50 #action ttx to document FPF on Feb 18 on icehouse schedule
21:21:04 other comments on that ?
21:21:58 #topic Logging standards (sdague)
21:22:05 sdague: floor is yours
21:22:09 great
21:22:26 #link http://lists.openstack.org/pipermail/openstack-dev/2014-January/025542.html
21:22:59 this is actually an idea I floated early in the cycle, but I'd like to revisit it to see if we could get a few things sorted for Icehouse
21:23:28 staring at logs while reading test failures, we've definitely got a bunch of inconsistency challenges, within single projects and across them
21:23:51 so my current thinking is, for icehouse, if we can get some basic guidelines for the INFO log level
21:23:55 * ttx wonders how sdague manages to have so many fishing lines in the ocean at the same time
21:24:23 and see which projects want to buy in, we could do a bunch in terms of overall debuggability of OpenStack, for ourselves and for operators
21:24:56 logging standards ++
21:24:59 so mostly this is socialization, to figure out which projects want to take part, and whether anyone objects to the current list of guidelines I put up there
21:25:14 i think it's a sane idea
21:25:15 I expect this is going to be a multi-cycle effort
21:25:19 I'd really like to move to something like kafka and make all our logs machine data rather than human
21:25:26 but *that's* a different idea :)
21:25:33 sdague: I forwarded the effort intro to reed, as he was looking for areas where beginners could help. Those commits do not require that much deep knowledge of stuff
21:25:34 but I think INFO sanity is doable in icehouse
21:25:35 question, like many things, is how it stacks up against all the other things we need to be doing
21:25:51 sdague: as long as the base principles are well documented
21:26:00 if there are volunteers, sure, i'm fine with it ...
21:26:13 ttx: thanks, yeh, this is actually a good place to bring in volunteers
21:26:54 russellb: yeh, I think we'll mostly be pulling in newer folks on this; it will impact review load on core teams, which is part of trying to get some buy-in on "standards" up front, to hopefully make the reviews easy
21:27:52 from my interactions with some of the larger operators, this is pretty high impact for them, because dealing with our logs today is .... *interesting*
21:27:55 do you have a link to the proposed guidelines ?
21:28:08 #link https://wiki.openstack.org/wiki/LoggingStandards
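[Editor's note: the proposed guidelines live on the wiki page linked above and are not reproduced in this log. As a rough illustration of the kind of INFO-level convention discussed below (each wsgi request logged exactly once at INFO, with timestamps usable across servers), here is a minimal Python sketch; the format string and the "myservice" names are assumptions for illustration, not the actual proposed standard:]

```python
import logging
import time

# Emit UTC timestamps so events can be ordered across servers
# (see the timestamp question near the end of this topic).
logging.Formatter.converter = time.gmtime

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S"))

LOG = logging.getLogger("myservice.wsgi")  # hypothetical logger name
LOG.addHandler(handler)
LOG.setLevel(logging.INFO)

def log_request(method, path, status, length):
    # The "log each wsgi request exactly once at INFO" idea sdague
    # mentions below: one line per request, no duplicates.
    LOG.info('"%s %s" status: %s len: %s', method, path, status, length)

log_request("GET", "/v2/servers/detail", 200, 1842)
```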
21:28:14 sdague: review load at this point in the cycle is quite painful
21:28:14 is there any way we can realistically write gate tests against the guidelines?
21:28:17 cool
21:28:19 we're already incredibly overloaded
21:28:33 russellb: agreed.
21:28:46 I just cringe at the current state of things
21:29:17 i cringe at a lot of things
21:29:20 :)
21:29:22 I haven't looked over the logging standards, but wonder if they would enable the possibility of doing some programmatic analysis of logs when trying to do post-mortems ?
21:29:34 do i cringe at our log formatting more than at the number of other blueprints already ready for review? probably not
21:29:36 "Lifecycle event 1 on VM b1b8e5c7-12f0-4092-84f6-297fe7642070"
21:29:38 I think we're a long way away from those things
21:29:39 nice
21:29:49 IanGovett1: i think lifeless mentioned that (but likely for another conversation)
21:29:57 we're going to take this one step at a time
21:30:05 ++
21:30:18 sdague: gotta start somewhere
21:30:23 * russellb nods
21:30:37 Ironic doesn't have, IMHO, enough INFO logged today. So I'm ++ to having a common standard to point volunteers to
21:30:44 and that's a good place to start
21:30:45 sdague: not sure we can fully nail INFO by the icehouse release, but having convergence on the standards and a model project would be nice
21:30:54 ttx: sure
21:31:09 honestly, if every project logged wsgi requests exactly once at INFO
21:31:12 sdague: then we can use it as a model to encourage people to converge on INFO in J
21:31:16 we'd be a huge step forward
21:31:22 because... we very much don't today
21:31:53 ttx: agreed, I'm hoping to tackle ERROR in J as well
21:31:59 yep. I used to complain about Eucalyptus logs, but ours don't really look better these days.
21:32:28 anyway, we can do most of this on the list I think, but it made sense to open it up here as well
21:32:40 especially if anyone has comments on the wiki page so far
21:32:58 so in summary... good idea, can't do that much with icehouse-3 review load, but at least define standards and push what we can ?
21:33:21 ttx: I think that's fair
21:33:24 +1
21:33:36 fwiw +1, it'll be a good start
21:34:05 would hate to see complex blueprint patches go into conflict over log message shuffling :-/
21:34:11 fwiw, the current standards sound sane to me
21:34:13 so yeah, would rather the mass patches show up in early juno
21:34:31 sdague: we could also point -operators to that wiki page for feedback
21:34:58 ttx: sure, or get them on the dev thread. We discourage cross-posting.
21:35:03 sounds like an area where they could be interested in participating
21:35:10 a few have already
21:35:32 yeah, just a pointer on -operators to make them aware of the -dev thread, no cross-posting
21:35:41 russellb: realistically, I think the # of patches to handle INFO will be relatively low
21:36:15 sdague: anything else on that topic ?
21:36:27 nope
21:36:33 Sorry for what may seem a dumb question (I'm a newbie), but do the log messages contain a timestamp in UTC so that log events across multiple servers can construct a sequence of events?
21:37:37 IanGovett1: they have timestamps, yes, and if you use synced clocks you should be fine enough
21:37:49 #topic Red Flag District / Blocked blueprints
21:37:49 ok. thanks
21:37:55 if you don't have synced clocks, more is broken than logs
21:37:58 Any inter-project blocked work that this meeting could help unblock ?
21:38:22 in particular, I'm interested in critical work that depends on some other project completing their stuff
21:38:40 Like Horizon waiting for a feature to land to make a panel about it available
21:38:57 we would need more eyes on oslo.messaging reviews, because that's blocking several ceilometer bps FWIW
21:39:14 jd__: link to review ?
21:39:18 I feel like I'm stating the obvious, but well. :)
21:39:34 did you get the oslo.messaging peeps on it already ?
21:39:35 https://review.openstack.org/#/q/status:open+project:openstack/oslo.messaging+branch:master+topic:bp/notification-subscriber-server,n,z mainly
21:39:49 jd__: today's my review day, so I'll try to get to them this afternoon
21:39:54 cool
21:40:06 just sayin' really, I know markmc is also trying his best to take a look
21:40:29 jd__: there are a few in that series with -1 comments already
21:40:49 jd__: that BP is high priority, so I pester dhellmann about it regularly
21:40:52 sileht: ^
21:41:00 hehe :)
21:41:20 jd__: thx for the pointer though, that's what I want to hear in this section of the meeting
21:41:40 prefer to hear about those early, rather than too late
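[Editor's note: the review series linked above implements the notification-subscriber-server blueprint for oslo.messaging. A sketch of roughly the shape that API took once it landed; since the series was still under review at this point, treat the exact names and signatures as assumptions:]

```python
from oslo.config import cfg
from oslo import messaging  # 2014-era namespace-package import path


class NotificationEndpoint(object):
    # One method per notification priority; called for each
    # notification the listener receives at that priority.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, payload)


transport = messaging.get_transport(cfg.CONF)
targets = [messaging.Target(topic='notifications')]
listener = messaging.get_notification_listener(
    transport, targets, [NotificationEndpoint()])
listener.start()
listener.wait()  # block, processing notifications until stopped
```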
21:42:13 anything else with inter-project dependencies that you'd like to make sure is prioritized correctly ?
21:42:58 I guess not
21:43:05 #topic Incubated projects
21:43:12 o/
21:43:12 o/
21:43:15 hi guys
21:43:22 https://launchpad.net/savanna/+milestone/icehouse-3
21:43:48 doesn't look too bad -- you might want to have assignees on all of those
21:44:06 ttx, yup, working on it
21:44:06 https://launchpad.net/ironic/+milestone/icehouse-3
21:44:24 devananda: I suspect those "Unknown" are actually "Not started", right ?
21:44:49 ttx, additionally, the good news is that we're ready to set up the async gate - https://review.openstack.org/#/c/68066/
21:44:54 actually, one of those needs to be updated to Ready for Review
21:44:57 SergeyLukjanov: awesome
21:45:07 the other 3 are either Not Started or ~vendor hasn't shared the code yet~ :)
21:45:10 dendrobates: romcheg's ?
21:45:15 oops
21:45:20 devananda: romcheg's ?
21:45:56 devananda: I see migration-from-nova is not started ?
21:46:09 he's been working on it, but i need to follow up and see where the code is at
21:46:22 ok, will mark it started
21:46:39 k
21:46:43 kgriffs: around?
21:47:56 kgriffs: if you read this, you may have too much, and you should also get people assigned to the unassigned BPs
21:48:05 too many essential BPs, I mean
21:48:31 SergeyLukjanov, devananda: questions ?
21:48:47 no questions here
21:48:48 ttx, nope, thx
21:49:11 #topic Open discussion
21:49:16 anything else anyone ?
21:49:53 just hugs
21:50:04 ok then
21:50:10 #endmeeting