21:03:32 #startmeeting project
21:03:32 o/
21:03:33 o/
21:03:33 Meeting started Tue Dec 17 21:03:32 2013 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:36 The meeting name has been set to 'project'
21:03:44 o/
21:03:44 o/
21:03:46 Agenda for today:
21:03:48 #link http://wiki.openstack.org/Meetings/ProjectMeeting
21:03:51 o/
21:03:51 \o
21:03:58 o/
21:04:05 #topic Icehouse-2 roadmap
21:04:05 o/
21:04:23 All looks good from our 1:1s
21:04:43 we'll skip the next two meetings
21:04:56 and check back on progress at the Jan 7th meeting
21:05:02 bam
21:05:26 #topic Gate checks (notmyname)
21:05:33 hello
21:05:36 hello!
21:05:40 notmyname: hi! care to introduce the topic?
21:05:51 here's where we start:
21:05:53 I've been hearing (and experiencing) some major frustration at the amount of effort it takes to get stuff through the gate queue
21:06:14 in some cases, it takes days of rechecks. other times, it's merely a dozen hours or so
21:06:25 so I started using the stats to graph out what's happening
21:06:29 http://not.mn/gate_status.html
21:07:05 and the end result, as shown on the graph above, is that we've got about a 60-70% chance of failure for gate jobs, just based on nondeterministic bugs
21:07:20 notmyname: we also wedged the gate twice in less than 3 months
21:07:25 this means that any patch that tries to land has a pretty poor chance of actually passing
21:07:52 note that over the last 14 days, there are 9 days where a coin flip would have given you better odds of the top job in the gate passing
21:08:04 I feel like folks like jog0 and sdague have done a nice job watching this status and raising extra awareness for important issues
21:08:16 there's plenty of room for more attention to some of the bugs, though, for sure
21:08:25 notmyname: but do you have anything in particular you'd like to propose?
21:08:28 so I want to do 2 things
21:08:42 (1) raise awareness of the issue (now with real data!)
21:08:53 (2) propose some ideas to fix it
21:09:02 i feel like everyone has been very aware already :-) ... but your graph is neat
21:09:03 which leads to other ideas, I hope
21:09:30 so for (1), I claim that a 60% pass chance for gate jobs is unacceptable
21:09:39 ++
21:09:43 +1
21:09:48 +1
21:10:01 i don't think anyone is going to argue with failures being bad
21:10:04 and I have 3 proposals for how we can potentially still move forward with day-to-day dev work
21:10:04 can we gate on pass chance? :P
21:10:12 russellb: I would disagree with me doing a good job of raising awareness and watching status. we haven't been able to get the baseline low enough and get enough bugs fixed. we have been able to track how bad it is and prioritize, but that isn't enough
21:10:32 jog0: OK, well just trying to give props where it's due for those working extra hard on things
21:10:34 dolphm: only if we can take people's +2 away from them for a week when they push a 100% guaranteed-to-fail change to the gate :)
21:10:37 your reports help me
21:10:49 sdague: where do we sign people up
21:11:04 jog0: yeah, +1 to what russellb said, don't knock yourself for not having super powers
21:11:07 russellb: yes, I agree that the -infra team has done a great job triaging things when they get critical. but let's not stay there (as we have been)
21:11:09 which was actually a huge part of the issue the last 4 days with all the grizzly changes
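For readers who want to see how the "60-70% chance of failure" and "worse than a coin flip" figures quoted above arise, here is a minimal back-of-the-envelope sketch. The job names and per-job failure rates below are made-up illustrative numbers (the real data lived on the linked graph), not measurements.

```python
# A back-of-the-envelope sketch of how individually modest transient failure
# rates compound across a run of gate jobs. Job names and rates are
# illustrative assumptions, not measured values.
job_failure_rates = {
    "tempest-dsvm-full": 0.15,
    "tempest-dsvm-postgres-full": 0.15,
    "tempest-dsvm-neutron": 0.18,
    "tempest-dsvm-large-ops": 0.10,
    "tempest-dsvm-neutron-large-ops": 0.10,
    "grenade-dsvm": 0.12,
}

# A change only merges if every job passes, so the per-job pass chances multiply.
pass_chance = 1.0
for rate in job_failure_rates.values():
    pass_chance *= (1.0 - rate)

print("chance one attempt passes every job: %.0f%%" % (100 * pass_chance))
# ~42% with these assumed rates, i.e. a ~58% failure chance -- in the same
# ballpark as the figures quoted in the discussion above.

# Treating attempts as independent, merging is a geometric process: a change
# needs on average 1/p attempts (the original run plus rechecks) to get through.
print("expected attempts before a clean pass: %.1f" % (1.0 / pass_chance))
```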
21:11:11 first idea: multi-gate-queue
21:11:25 in this case, instead of having one gate queue, have 3
21:11:34 Have N gate queues (for this example, let's use 3). In gate A, run all the patches like today. In gate B, run all but the top patch. In gate C, run all but the top 2. This way, if gate A fails, you already have a head start on the rechecks (and same for B->C). If gate A passes, then throw away the results of B and C.
21:11:46 this is a pessimistic version of what we have today
21:12:25 sdague: I would love to drill down on that past your warranted frustrations
21:12:34 idea two: cut down on what's tested
21:12:51 notmyname: i would be happy to have zuul start exploring alternate scenarios sooner, even ones heuristically based on observed conditions like job failure rates
21:13:10 notmyname: that's not a simple change, so it'd be great if someone wants to sign up to dev that.
21:13:16 proposal 1 doesn't help get things to merge, it just gets them to merge faster
21:13:20 in this case, there is no need to test the same code for both postgres and mysql functionality (or normal and large ops) if the patch doesn't affect those at all
21:13:21 or fail faster
21:13:40 jog0: correct. things eventually merge today
21:13:40 jog0: i agree with that.
21:13:50 where eventually is really long
21:14:03 jog0: faster dev cycle is always appreciated, at least
21:14:09 and seems too long
21:14:10 for idea two, I'm proposing that the set of things that are tested is winnowed down
21:14:30 i'm -1 on testing things less in general ... if things fail, they're broken, and should just be fixed
21:14:41 i don't think the answer to failures is to do less testing
21:14:42 notmyname: I am much more concerned about false gate failures than gate delay. if you fix false gate failures you fix gate delay too
21:14:50 eg why test postgres and mysql functionality in neutron for a glance client test?
21:14:55 notmyname: one of the benefits of running extra jobs -- even ones that don't seem to be needed (testing mysql/pg) -- is that we do hit nondeterministic failures more often
21:15:05 I think testing fewer items is a bad idea too
21:15:18 in all cases, the nondeterministic bugs need to be squashed
21:15:18 the gate issues are actual openstack bugs
21:15:27 notmyname: neutron was in a bad state for a while because it only ran 1 test whereas everyone else ran 6; it was way more apt to fail changes
21:15:34 I would rather make it harder to get the gate to pass than have these nondeterministic failures leak out into the releases for users to experience
21:15:37 notmyname: yeh, we invented more jobs for neutron for exactly that case
21:16:05 to notmyname's point, though... we just recheck through those failures of actual nondeterministic bugs mostly, do we not?
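To make the multi-gate-queue proposal (idea one above) a little more concrete, here is a toy model of the head start it would buy. All of the numbers and the modelling assumptions are made up for illustration: a full gate run costs about one job wall time, each queue head fails independently with the same probability, and the spare queues were started at the same moment as queue A so their results are ready the instant a head failure is detected.

```python
# A toy model of the head start the multi-gate-queue idea would buy, under
# the simplifying assumptions described above. Not an actual zuul design.

def expected_hours_saved(head_failure_rate, job_hours, spare_queues):
    """Expected wall-clock hours saved per gate window by pre-running the
    'head removed' scenarios instead of restarting them after each failure."""
    saved = 0.0
    for depth in range(1, spare_queues + 1):
        # The depth-th spare queue only pays off when the first `depth`
        # queue heads all fail, which happens with probability f**depth.
        saved += (head_failure_rate ** depth) * job_hours
    return saved

if __name__ == "__main__":
    f, hours = 0.6, 1.0  # assumed 60% head-failure chance, ~1 hour per full run
    for spares in (1, 2, 3):
        print("%d spare queue(s): ~%.2f hours saved per failure window"
              % (spares, expected_hours_saved(f, hours, spares)))
```

Consistent with the exchange in the log, this only makes changes merge (or fail) faster; it does nothing about the false-failure rate itself, and it costs roughly N times the test node usage.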
21:16:08 jog0: so you're opposed to option 2?
21:16:14 and I agree that race conditions need to be stomped out
21:16:19 jog0: err, idea 2
21:16:23 rechecking is just *slow* ignoring
21:16:23 dolphm: very much so, we need more tests
21:16:25 but the point is, if neutron jobs are still failing a lot, then they don't need to be run for every code repo
21:16:33 s/neutron/whatever/
21:16:37 all projects are gated on those failures, related or not
21:16:40 uhh
21:16:43 markwash: that is a problem
21:16:44 I don't follow your logic
21:16:45 markwash: you need to stop thinking about those as nondeterministic, they are race conditions
21:17:00 it's not a matter of "run less things because they fail", it's a matter of "run less things because they're not needed"
21:17:10 sdague: agreed, both to me carry the same level of badness (high)
21:17:11 notmyname: neutron even got so bad that we pulled it out of the integrated gate -- it pretty much _instantly_ fully broke
21:17:17 ++
21:17:29 what's the realistic maximum number of changes openstack has ever seen merge cleanly in succession?
21:17:34 torgomatic phrased it better than I was doing
21:17:36 4? 5?
21:17:40 dolphm: 20+
21:17:41 torgomatic: but they *are* needed, because the failures don't occur all of the time, so we need as many examples of failures as possible to debug
21:17:42 dolphm: I saw 10 recently
21:17:42 like, does keystone really need the gate job with neutron-large-ops? I don't think you can break Keystone in such a way as to only hose the large ops jobs
21:17:43 sdague: wow
21:17:48 dolphm: I witnessed 25 myself
21:17:51 notmyname: if we don't run it, and there is any dependency in that thing on the other projects we let change, we have asymmetric gating.
21:17:52 it's not been a good couple of weeks
21:17:53 notmyname: so we've learned that with no testing, real solid bugs (as opposed to transient ones) land almost immediately in the repo.
21:17:59 torgomatic: yes it does
21:18:03 we also had a lot of external events in these 2 weeks
21:18:06 notmyname: asymmetric gating is a great way to wedge another project entirely, instantly.
21:18:06 dolphm: granted, it was a full moon outside.
21:18:18 sphinx, puppetlabs repo, jenkins splode
21:18:19 both nova and neutron use keystone so it can break neutron-large-ops
21:18:24 yup. we've seen that almost every time we've had asymmetric gating
21:18:29 jog0: maybe a bad example, then, but there are other cases where the difference between two gate jobs has 0 effect on the patch being tested
21:18:34 I would rather spend the effort needed to figure out which subset of all our tests needs to be run for any given change on fixing these race conditions themselves
21:18:48 dhellmann: +1
21:18:48 dhellmann: +100
21:18:53 dhellmann: +1
21:19:07 dhellmann: +1
21:19:16 dhellmann: amen!
21:19:16 * jd__ nods
21:19:22 dhellmann: ++
21:19:26 ok, so option 3: enforce strong SAO-style interaction between projects
21:19:32 hey, look we even have a reasonable curated list - http://status.openstack.org/elastic-recheck/ - (will continue to try to make it better)
21:19:33 dhellmann: that's obviously better but I hope we *do* it
21:19:35 Embrace API contracts between projects. If one project uses another openstack project, treat it as any other dependency with version constraints and a defined API. Use pip or packages to install it. And when a project does gate checks, only check based on that project's tests.
21:19:45 This is consistent with what we do today for other dependencies. If there are changes, then we can talk cross-project. That's the good stuff we have, so let's not throw that out.
21:19:45 notmyname: SOA?
21:20:08 dhellmann: service orientated archivetture
21:20:10 dhellmann: what we have
21:20:11 service oriented architecture. IOW, just have well defined APIs with the commitment to not break them and only use them
21:20:14 bah, architecture
21:20:19 lifeless: I know SOA, I didn't know SAO
21:20:26 you typed SAO :)
21:20:32 oh lol, my brain refused to notice that
21:20:40 so, version pinning between openstack projects?
21:20:42 sdague: the problem with elastic recheck (which is good) is that it's hand-curated
21:20:49 seems like we'd just be kicking the "find the breakage" can down the road
21:20:55 russellb: ++
21:20:56 in fact
21:21:05 notmyname: it's 54% of all the fails, and super easy to add another one
21:21:07 when you wanted to update the requirement, you would not have been testing the two together
21:21:14 well, what happens now for other dependencies? eg we don't run eventlet tests for every openstack patch and vice versa
21:21:16 so then why are we not pulling sphinx builds into our jobs?
21:21:16 we approve them super fast
21:21:30 I don't think we can use pip packages; otherwise, for projects with strong integration, we run into issues landing coordinated patches in the master branches
21:21:30 or sphinx, as portante stated
21:21:37 notmyname: those are libraries, not things that do SDN
21:21:40 the whole point of gating on trunk is to ensure that trunk continues to work so we can prepare the integrated release, right?
21:21:43 because sphinx isn't openstack
21:21:51 markmcclain: that's exactly my point. it needs strong API contracts
21:21:56 for other dependencies, we should be doing the same gate checks on the requirements project (if we're not already)
21:21:57 it's more than an API
21:21:59 dhellmann: it still would
21:22:00 notmyname: we want to run eventlet tests on upstream pull requests actually.
21:22:08 notmyname: those contracts evolve
21:22:13 notmyname: that's a test-the-world concept that infra have been kicking around
21:22:21 notmyname: so that we're not broken by things like sphinx 1.2
21:22:23 markmcclain: of course, that's where dependency versions come from
21:22:28 the longer we diverge between these projects, the harder re-aligning is going to be
21:22:55 it also makes it REALLY painful for folks running CD from master
21:23:03 we do integrated releases so the tests should be integrated
21:23:09 yes, it's not as if dependencies did not break us badly in the past
21:23:14 mordred: painful as in ... we stop testing that use case completely :(
21:23:18 mordred: yes. integration is hard, so it needs to be continually done. if something breaks, fix it. what I'm suggesting is treating the interdependencies as more decoupled things
21:23:21 russellb: yup
21:23:26 so one of the problems we have seen is that the gate has so many false positives that it's very easy for more to sneak in
21:23:30 notmyname: but they're not
21:23:31 mmm, from a CD perspective, I don't object to carefully versioned API transitions upstream
21:23:32 we have a horrible baseline to compare against
21:23:33 they're quite interrelated
21:23:34 but
21:23:43 I strongly object to big step integrations
21:23:47 lifeless: ++
21:23:48 mordred: how are they not?
21:23:57 mordred: again, that's why I'm here talking about this today. we've got a problem, and I'm throwing out ideas to help resolve it
21:24:02 because these are things with side effects
21:24:03 if we bump the API a few times a day, that would be fine with me
21:24:16 but more than that and we'll start to see nasty surprises I expect
21:24:20 things with side effects sounds kinda general, no?
21:24:40 there is a reason that side effects are a bad idea in well constructed code - they aren't accounted for in the API
21:24:48 but
21:24:52 would notmyname's idea really make things worse than what we have today?
21:24:52 sometimes they're necessary
21:24:57 which is why Scheme isn't actually used
21:25:01 yes
21:25:04 it would make it worse
21:25:06 unless
21:25:09 portante: yes
21:25:10 you happen to not care about integration
21:25:21 how will it make it worse than what we have today?
21:25:25 if you don't care about integration, it would make your experience as a developer better
21:25:34 portante: define "worse"
21:25:36 mordred: I didn't see portante say anything about not caring about integration
21:25:44 (ever in fact)
21:25:58 point is, that's the case where it's not worse
21:26:17 russellb: ?
21:26:24 notmyname: I'm saying that delaying integration until we have larger sets of things to integrate is going to make it more likely to introduce issues, and harder to track them down when they happen
21:26:30 I believe that will be worse
21:26:31 heh, mordred is saying it's worse, unless you don't care about integration
21:26:34 notmyname: because the proposal would mean we would perform integration testing less, essentially only once and on ABI bumps.
21:26:41 however, doing such a delay
21:26:56 we rarely change APIs
21:27:01 integration tests would still be run at the same rate
21:27:01 will increase the pleasurability of folks doing development if those people are not concerned about the problems encountered in integration
21:27:13 portante: how so?
21:27:17 not against combinations that would show you that a patch introduced an issue
21:27:36 which means that your patch against glance has no way of knowing that it breaks when combined with a recent patch to keystone
21:27:44 when neither patch has landed yet
21:27:53 we would still run the same job sets as we do today, that would not change; it's just that we would be working with sets of changes from projects instead of individual commits
21:27:55 which means you have to BUNDLE all of the possible new patches until there is a new release
21:28:10 which means _hundreds_ of patches
21:28:20 and then bisect those out when you have a problem
21:28:28 so I think this whole discussion is looking at things the wrong way. The gate is effectively broken, we don't trust it, and it's slowing down development. The solution is to fix the bugs, not find ways of running fewer tests
21:28:30 considering that it's hard enough to get it right when we're doing exact patch-for-patch matching
21:28:40 jog0: +1
21:28:40 jog0: +1
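The bundle-and-bisect cost raised above can be made concrete with a small sketch. The assumptions are illustrative: one full integration run costs roughly an hour, and the culprit is a single deterministic breakage so a clean binary search works, which is the best case; the transient failures this meeting is about would multiply every one of these runs by a recheck factor.

```python
# Illustrative cost of finding one bad change in a batch that was only
# integration-tested at release/API-bump time, under the assumptions above.
import math

def bisect_runs(batch_size):
    """Integration runs needed to isolate one bad commit in a batch."""
    if batch_size <= 1:
        return 0
    return int(math.ceil(math.log(batch_size, 2)))

for batch in (1, 10, 100, 500):
    runs = bisect_runs(batch)
    print("batch of %3d changes -> ~%d extra integration runs (~%d hours)"
          % (batch, runs, runs))
```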
21:28:40 but why would my patch break something else without also breaking the API contract?
21:28:42 jog0: ++
21:28:48 think about how much worse it will be when you only test every few hundred patches
21:28:50 jog0: ++
21:29:02 jog0: +1
21:29:03 portante: because it can and will
21:29:07 one thing that would help is making sure we are collecting good data against master all the time
21:29:08 jog0: I think notmyname's point is that it cannot ever be fixed so you need new ideas
21:29:10 because that's the actual reality
21:29:29 so if we have free resources, run the gate against it so we get more data to analyze and debug with
21:29:32 mordred: ++; it's happened plenty in our history
21:29:36 jog0: +1 ... so basically the ask back is what do we do (me & jog0 ... as I'm signing him up for this) to get better data in elastic recheck to help bring focus to the stuff that needs fixing
21:29:41 ttx: I am not ready to accept that answer yet
21:29:42 jog0: do you think we can get to the bottom of those issues?
21:29:50 ttx: no, not that it can't be fixed, per se. but that openstack has grown to a scale where perhaps existing methods aren't as valuable
21:29:53 ttx: yes, it may take a lot of effort but yes
21:29:57 I think the methods are fine
21:30:04 the main problem is getting people to participate
21:30:12 yeh, agree with mordred
21:30:12 I think we probably need some sort of painful freeze to draw attention to fixing these bugs
21:30:13 introducing more slack into the system will not help that
21:30:33 it does not seem to be about adding more slack
21:30:33 markwash: if only developers were feeling some pain.... ;)
21:30:34 markwash: more pain as the answer to gate pain?
21:30:36 the fact that we all know that jog0 and sdague have been killing themselves on this
21:30:38 is very sad
21:30:46 and many people should feel shame
21:30:51 torgomatic: yeah, in one big dose, to reduce future gate pain
21:30:51 but targeting a finite set of resources on the point of integration
21:30:53 because everyone should be
21:31:08 portante: it's batching integration
21:31:16 portante: which is the opposite of continuous integration
21:31:29 was the idea of prioritizing the gate queue ever shot down? (landing [transient] bug fixes before bp's, for example) or was that just an implementation challenge
21:31:32 and which will be a step backwards and will be a nightmare
21:31:51 mordred: if the current system causes developers to assemble large patches unbeknownst to you, isn't that the same thing?
21:31:54 dolphm: we just added the ability to do that
21:32:00 so we are tracking 27 different bugs in http://status.openstack.org/elastic-recheck/ and that doesn't cover all the failures. Fixing these bugs takes a lot of effort
21:32:02 dolphm: we have manual ways to promote now. We've used it recently
21:32:05 jeblair: oh cool - where can i find details?
21:32:34 it seems like we're saying that we can leave the gate as-is if we would just stop writing intermittent bugs
21:32:35 dolphm: we've done it ~twice now; it's a manual process that we can use for patches that are expected to fix gate-blocking bugs, and are limiting it to that for now.
21:32:37 portante: that's actually my biggest fear. that current gate issues encourage people to go into corners to contribute to forks. which is bad for everyone
21:32:37 this is the in-progress data to narrow things down further - http://paste.openstack.org/show/55185/
21:32:46 and if we can stop doing that, let's just stop writing bugs at all and throw the gate out
21:32:47 notmyname: what forks?
21:32:51 notmyname: what forks of openstack are there?
21:33:03 jeblair: is the process to ping -infra when we need to land a community priority change then?
21:33:11 notmyname: and which developers are hacking on them?
21:33:15 dolphm: yes
21:33:25 jeblair: sdague: easy enough, thanks!
21:33:26 mordred: maybe internal "forks" cuz patches take a while to land?
21:33:33 * hub_cap guesses
21:33:37 mordred: I guess many companies run private forks
21:33:37 mordred: no names, don't want the nsa to take them out. ;)
21:33:46 portante: ;)
21:33:48 hub_cap: yes. but to portante's point, it happens privately
21:33:52 portante: the nsa knows already
21:33:53 guesses the nsa runs a fork :-)
21:33:58 well, those companies usually learn pretty quickly
21:33:58 it does!?
21:33:59 what company doesn't have a fork of every openstack component as they try to get features in?
21:33:59 private forks seem natural
21:34:04 alternately, we can accept that bugs happen, including intermittent bugs, and restructure things to be less annoying when they do
21:34:11 that getting out of sync significantly is super painful
21:34:12 * portante smashes laptop on the ground
21:34:15 and honestly just seems like FUD
21:34:18 torgomatic: yes!
21:34:18 * jd__ smells FUD
21:34:21 many of the bugs we see in the gate are really bad ones
21:34:22 jd__: jinx
21:34:26 raaah
21:34:28 portante: lol
21:34:44 yeh, a lot of these races are pretty fundamental things
21:34:51 * mordred hands portante a new laptop that he promises has no malware on it
21:34:57 where compute should go to a state... and it doesn't
21:35:17 * portante thankful for kind folks with hardware
21:35:19 the tension is because some developers are slowed down by issues happening in other corners of the project and over which they have limited influence
21:35:25 to that end, I think notmyname's first two suggestions are both good ones
21:35:42 ttx: and the dangerous response is to continue not to care what's happening in the other corners
21:35:53 we're all in this together :)
21:35:54 ttx: they don't have limited influence though
21:35:56 lifeless: yes!
21:35:59 can we at least run experiments with the suggestions to play them out?
21:36:03 honestly, in the past we keep going in cycles where the gate gets bad, pitchforks come out, people work on bugs, it gets better
21:36:05 but if you take the viewpoint of openstack as a whole, some parts may be slowed down, but the result is better in the end
21:36:15 this time... the number of folks working these bugs isn't showing up
21:36:25 portante: which ones? for #2 and #3 there were fundamental disagreements from many people
21:36:27 which is really the crux of the problem
21:36:31 one policy that might help: as we triage a race-condition based failure in the gate, we need to require unit / lower level / faster tests that reproduce those failures to land in the projects themselves and fail every time
21:36:33 i hit a transient bug on a devstack-gate change, and with some help from sdague we tracked it down to a real bug in keystone, i filed the bug, wrote an er query and moved on
21:36:33 for #1, jeblair invited some help with zuul dev to add it
21:36:48 i think that was beneficial to the project
21:36:53 so I proposed that gate-affecting bugs be critical by default
21:37:04 markwash: that won't work; many times we don't know why something is breaking
21:37:07 I think the stats we have here suggest that perhaps that isn't as bad an idea as folk thought :)
21:37:07 it is okay to disagree, can't hurt to try a few things to see if they pan out
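The "er query" mentioned above is essentially a saved Logstash search that fingerprints one transient failure. A rough, hedged illustration of what that boils down to follows; the endpoint URL, field names, and the failure signature text are assumptions made up for the example, not taken from the real elastic-recheck queries or code.

```python
# A hand-rolled version of what an "er query" amounts to: a query_string
# search against the Elasticsearch cluster that indexes gate job logs.
# Endpoint, fields, and signature are illustrative assumptions.
import json
import urllib2  # Python 2, matching the tooling of the era

signature = ('message:"Timed out waiting for server to become ACTIVE" '
             'AND filename:"console.html" AND build_status:"FAILURE"')

query = {
    "query": {"query_string": {"query": signature}},
    "size": 0,  # only the hit count is wanted, not the documents
}

req = urllib2.Request(
    "http://logstash.openstack.org/elasticsearch/_search",  # assumed endpoint
    json.dumps(query),
    {"Content-Type": "application/json"},
)
hits = json.load(urllib2.urlopen(req))["hits"]["total"]
print("gate failures matching this signature in the indexed window: %d" % hits)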
21:37:09 can someone ban d0ugal? the join/parts are really annoying
21:37:10 and i was glad i could help even though i knew that my shell script change to devstack-gate didn't cause it.
21:37:19 russellb: I use them as a clock
21:37:21 take the http2 lib file descriptor bug
21:37:25 what if we just turn off the gate for a specific project until they fix the bugs that are clogging it?
21:37:30 jog0: ah, okay... yeah it's only for bugs where we understand the race but it's hard to fix
21:37:39 ttx: rofl. russellb: can your client hide join/parts?
21:37:46 creiht: +1
21:37:56 dolphm: probably, but i don't want to hide the non-broken ones
21:38:02 well, prevent the project from landing any further patches until they fix gate-critical bugs
21:38:02 creiht: that is a bad idea… we've done this before and it caused more problems than it solved
21:38:09 lifeless: ++critical
21:38:25 markmcclain: my first explanation wasn't as clear, sorry
21:38:28 heh, and now we have a pile of critical bugs that the same small number of people are looking at
21:38:31 creiht: not sure i follow - block that project from being tested or block that project from landing irrelevant changes?
21:38:39 just saying, that alone doesn't get people to work on them :)
21:38:50 block from landing any changes until the critical bugs are fixed
21:38:58 russellb: sure, but can't we also say 'when there are critical bugs, we won't be reviewing or landing anything else'?
21:39:10 russellb: like, make it really crystal clear that these things are /what matters/
21:39:16 lifeless: sure, something, just saying that labeling things critical doesn't do anything by itself
21:39:18 creiht: I think we have that option, yes
21:39:20 russellb: ack, agreed.
21:39:27 markmcclain: you've done that once or twice, right? prioritized critical fixes to the exclusion of other patches?
21:39:35 idea: can http://status.openstack.org/rechecks/ be redesigned so that you can see the most impactful bugs per project that the associated bugs are tracked against?
21:39:42 creiht: if we can really identify a project that doesn't play ball
21:39:54 dolphm: have you seen http://status.openstack.org/elastic-recheck/ ?
21:39:54 it's impossible for me to glance at that page and see where i can help
21:40:04 dolphm: yes, moving towards eliminating it with the elastic recheck dashboard
21:40:04 jeblair: yes.. we blocked approvals until fixes landed
21:40:13 dolphm: keystone doesn't have any gate issues as far as I know
21:40:18 russellb: yeah, that's not what i want either
21:40:18 it just... takes time
21:40:22 jog0: understood, but still
21:40:22 sdague, dolphm: ++
21:40:28 jog0: it does
21:40:31 the port issue
21:40:35 jog0: that's not true
21:40:37 ttx: it isn't about playing ball... if there are critical bugs blocking the gate, then your project gets no new patches in until that bug is fixed
21:40:38 clarkb: link
21:40:39 it bounced stuff this morning
21:40:50 dolphm, jog0: and the keystoneclient issue we found yesterday
21:41:00 jog0: actually we do have a couple issues ;)
21:41:02 creiht: if there are critical bugs blocking the gate from your project, then your project ....
21:41:13 yes
21:41:15 in that case I think most integrated projects have critical bugs
21:41:17 if not all
21:41:34 great, so let's do that creiht thingy then
21:41:40 lol
21:41:40 I mean, maybe they all need to stop and fix those
21:41:51 creiht: in some cases it's not as binary as that. Some bugs take time to investigate/reproduce, and blocking the project that makes progress on them is probably not very useful
21:42:02 ttx: so, I disagree
21:42:08 that approach acknowledges that bugs happen, so it's got that going for it
21:42:11 ttx: it seems more useful than just letting the status quo go on
21:42:21 ttx: when you make changes there is a chance you introduce new bugs, right?
21:42:26 ttx: or make the current ones worse!
21:42:29 race condition bugs are a good situation for tough love
21:42:29 nothing changes if nothing changes
21:42:31 well, that brings up another point. elastic-recheck doesn't do any alerting to a project. maybe that should be added
21:42:46 notmyname: agreed
21:42:46 ttx: so if you have critical issues, changing things that aren't fixing that issue is just fundamentally a bad idea.
21:42:57 notmyname: sounds like a good idea
21:43:18 or perhaps an openstack-dev email for each bug that gets added? or would that be too much?
21:43:29 public flogging?
21:43:29 might be too little
21:43:32 heh
21:43:33 notmyname: so one issue is that many times we don't know which project the bug is in
21:43:34 we were talking about that, if we can determine the project, or set of projects where the bug is, it should alert those channels whenever it fails a patch
21:43:52 ok, I think we are not maling anymore progress now
21:43:55 or making
21:43:58 so people are shamed by how often they are breaking things
21:44:05 so what's next, then?
21:44:07 ttx: ^
21:44:09 I don't think shame really helps
21:44:11 status quo!
21:44:12 :)
21:44:21 no one wanted to introduce these bugs
21:44:22 the downside of public shaming is that sometimes the initial point of fault could be incorrect
21:44:28 what's next? how to get more people helping fix bus?
21:44:30 bugs*
21:44:33 right!
21:44:41 practical actions
21:44:43 continued work to raise awareness of the most important things is part of it
21:44:43 russellb: agreed
21:44:45 yeah, not about shame, just about how do we progress when there are critical bugs
21:44:51 and i think some ideas are being tossed around for that right now
21:45:14 is everyone raising their gate critical bugs in each weekly meeting?
21:45:15 and then what hammers are available when not enough progress is made, and when do we use them
21:45:16 notmyname: I think everyone agreed your suggestion 1 was interesting, just missing dev manpower to make it happen
21:45:21 and i'm not sure we have good answers for that part yet
21:45:26 Like as a dedicated section? And getting volunteers to work on them?
21:45:29 (the multigate thing)
21:45:34 lifeless: it's the 1st real item in our meeting each week
21:45:35 some of us are giant fans of suggestion 2 as well
21:46:00 (suggestion 2 is removing redundant gate jobs)
21:46:24 torgomatic: no, the extra data points are very helpful for diagnosing some of the race conditions
21:46:26 torgomatic: what redundant jobs?
21:46:27 I think that one was far from consensual
21:46:36 it also helps us to prioritize based on frequency
21:46:42 I think we should just have a post-gate master integration job that is wired up to a thermonuclear device... when the failure rate hits 50% it blows
21:46:51 markwash: sweet
21:46:54 ttx: if anything, more consensus on "no" for 2 and 3 IMO
21:47:03 lifeless: like running devstack 5 times against every project, when there's not always a way for that project's patches to break stuff
21:47:11 well, not only for one
21:47:14 I meant to say
21:47:22 torgomatic: yes, your analysis is missing something
21:47:27 torgomatic: which we discussed
21:47:34 https://bugs.launchpad.net/openstack/+bugs?search=Search&field.importance=Critical&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
21:47:34 don't want to rehash it
21:47:38 torgomatic: which is that the break relationship is often bidirectional, and transitive.
21:47:40 as in, I'm sure I can write a Swift patch that breaks devstack for everything, but I cannot write one that only breaks devstack-neutron-large-ops
21:47:40 117 critical bugs
21:47:58 torgomatic: yes you can
21:48:13 jog0: great, please provide an existence proof in the form of a patch
21:48:21 let's get out of the rabbit hole
21:48:27 put some timeouts in swift to make things super slow for glance
21:48:30 back to how do we get more people working on critical bugs
21:48:45 btw, some projects have started tagging bugs with 'gate-failure' which can help folks searching for these bugs
21:48:47 jog0: you probably want to remove git committed
21:49:02 s/git/fix/
21:49:20 which brings it to 44
21:49:27 lifeless: suggestions?
21:49:31 jog0: that includes non-integrated projects
21:49:43 We shall soon move on to the rest of the meeting content
21:49:52 lifeless: yeah, do you have a better link?
21:50:07 jog0: not in time for the meeting
21:50:11 jog0: LP limitation
21:50:14 I see no reason why we can't continue to discuss this on the ML, btw
21:50:35 Everyone agrees it's an issue
21:50:53 Just absence of convergence on solutions
21:50:57 let's fix it, and not by doing less testing of the continuous or the integrated varieties.
21:51:12 except suggestion 1 which was pretty consensual
21:51:20 just missing resources to make it happen
21:51:35 yeh, that's going to require dev resources on zuul
21:52:00 but jeblair said he'd be happy to entertain those adaptive algorithms
21:52:09 and it's worth remembering, that's just speeding up the failures.
21:52:09 so I am not too keen on the first idea
21:52:10 actually
21:52:24 I think we can use the compute and human resources much better
21:52:27 jog0: I don't think it hurts, while the others arguably do hurt
21:52:29 if we fix the gate, issue one goes away
21:52:29 well honestly, it also requires effort
21:52:32 russellb: ++
21:52:51 so if someone is signing up for it, cool. If people are just "someone else should do it" then it won't happen
21:53:13 it seems like idea #1 is just tuning the existing optimizations we have in place, not sure why it would be bad if someone showed up with a patch?
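The Launchpad search URL pasted above can also be pulled programmatically. Here is a hedged sketch with launchpadlib that uses the same open statuses as that URL (dropping Fix Committed, as suggested in the log) plus the 'gate-failure' tag mentioned a few lines up; the project list and consumer name are arbitrary, and the exact calls should be treated as illustrative rather than authoritative.

```python
# A sketch of pulling roughly the same list as the Launchpad URL above, but
# per project and filtered by the 'gate-failure' tag from the discussion.
from launchpadlib.launchpad import Launchpad

OPEN_STATUSES = ["New", "Incomplete", "Confirmed", "Triaged", "In Progress"]

lp = Launchpad.login_anonymously("gate-bug-report", "production", version="devel")

for name in ("nova", "neutron", "keystone", "glance", "swift"):
    project = lp.projects[name]
    tasks = project.searchTasks(status=OPEN_STATUSES,
                                importance=["Critical"],
                                tags=["gate-failure"])
    print("%-10s %d open critical gate bugs" % (name, len(list(tasks))))
```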
21:53:14 like most things :)
21:53:26 ok, 7 minutes left, let's move on
21:53:35 #topic Red Flag District / Blocked blueprints
21:53:39 i like this new cross-project meeting style :)
21:53:48 we never had time for stuff like this before
21:53:52 exciting
21:54:00 No blocked blueprints afaict
21:54:11 russellb: yes, we used to put that dust under carpets
21:54:23 at least we now voice the anger
21:54:28 "put the dead fish on the table"
21:54:41 * markwash googles
21:54:47 ttx: I don't think "anger" is the right word
21:55:07 * jeblair thinks a failed patch in the queue should be called a dead fish
21:55:30 jeblair: so the red circle on the zuul status page should be a dead fish instead?
21:55:31 I think there is frustration, but there is quite a bit of grace given to the current state of things by those who are frustrated
21:55:32 we still have a conflict between heat and keystone around service-scoped-role-definition
21:55:44 russellb: with little stink lines
21:55:44 notmyname: yes, frustration is a better term, sorry
21:56:05 heat/management-api still needs keystone/service-scoped-role-definition
21:56:15 stevebaker, dolphm: did you solve it?
21:56:16 ttx: that dep should be removed
21:56:20 i followed up on that last week - heat really shouldn't be blocked on that
21:56:31 stevebaker: ah, great
21:56:32 although heat *could* take advantage of it - and i understand the desire to
21:56:36 i thought I did that
21:56:47 notmyname: well said
21:57:17 stevebaker: yep it's removed now, thx
21:57:27 Any other blocked work that this meeting could try to help unblock?
21:58:06 I'll take that as a "no"
21:58:09 #topic Incubated projects
21:58:48 devananda, kgriffs, SergeyLukjanov: around? any questions?
21:59:01 ttx, I'm here
21:59:08 ttx, no questions atm
21:59:19 no questions here
21:59:20 aside from wondering how much slower development on ironic will be when we get integration testing .... nope :)
21:59:33 +1 for raising the bar on code quality
21:59:34 ttx, first working code of heat integration already landed, waiting for reviews on tempest patches
21:59:39 kgriffs: had a question for you about when you wanted to switch to release management handling your milestones
22:00:02 ah, great question
22:00:11 tbh, I don't have a good feel for what that entails
22:00:12 I see your i1 is still open
22:00:32 kgriffs: we should talk. Will ping you tomorrow?
22:00:33 hmm. Thought I closed it.
22:00:34 * kgriffs hides
22:00:42 ttx: sounds good
22:00:49 kgriffs: it's inactive but it looks in progress :)
22:00:55 I've been trying to move closer to tracking the i milestones, so this is timely
22:01:07 ttx: oic
22:01:07 kgriffs: awesome, talk to you tomorrow
22:01:10 kk
22:01:14 and.. time is up
22:01:16 #endmeeting