Tuesday, 2015-10-13

*** mestery has joined #openstack-relmgr-office00:14
*** mestery has quit IRC00:19
openstackgerritDavanum Srinivas (dims) proposed openstack/releases: Fix Bug in oslo.messaging 2.6.0 and release as 2.6.1  https://review.openstack.org/23387000:22
*** stevemar_ has quit IRC00:25
*** stevemar_ has joined #openstack-relmgr-office00:29
dims_lifeless: around?01:05
lifelessdims_: yah01:43
openstackgerritMerged openstack/releases: Fix Bug in oslo.messaging 2.6.0 and release as 2.6.1  https://review.openstack.org/23387001:47
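For context, merging a change like the one above in openstack/releases is what triggers the actual library tag and upload. A minimal sketch of such a deliverable file, with the schema approximated from that era's repo and a placeholder hash:

    # deliverables/liberty/oslo.messaging.yaml
    launchpad: oslo.messaging
    team: oslo
    releases:
      - version: 2.6.1
        projects:
          - repo: openstack/oslo.messaging
            hash: 0000000000000000000000000000000000000000  # placeholder sha
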
dims_lifeless: thanks01:47
*** stevemar_ has quit IRC01:53
*** dims_ has quit IRC02:28
*** mriedem has quit IRC02:39
*** dims__ has joined #openstack-relmgr-office02:41
*** stevemar_ has joined #openstack-relmgr-office02:42
*** dims__ has quit IRC02:59
*** spzala has quit IRC03:03
*** jhesketh has quit IRC03:08
*** jhesketh has joined #openstack-relmgr-office03:13
*** dims__ has joined #openstack-relmgr-office03:59
*** dims__ has quit IRC04:06
*** nikhil_k has joined #openstack-relmgr-office04:39
*** nikhil has quit IRC04:42
*** mestery has joined #openstack-relmgr-office04:51
*** mestery has quit IRC05:02
*** nikhil has joined #openstack-relmgr-office05:41
*** nikhil_k has quit IRC05:43
stevemar_dims and dhellmann if we could get https://review.openstack.org/#/c/233761/ and https://review.openstack.org/#/c/233763/ going, that would be great, as we're currently blocked in keystone, they are breaking the gate :(05:48
openstackgerritSteve Martinelli proposed openstack/releases: keystonemiddleware 2.4.0  https://review.openstack.org/23376305:56
openstackgerritSteve Martinelli proposed openstack/releases: keystoneclient 1.8.0  https://review.openstack.org/23376105:59
*** stevemar_ has quit IRC06:03
*** dims__ has joined #openstack-relmgr-office06:03
*** dims__ has quit IRC06:09
*** openstack has joined #openstack-relmgr-office06:29
*** dims__ has joined #openstack-relmgr-office07:05
*** dims__ has quit IRC07:10
*** armax has quit IRC07:15
*** Travis__ has joined #openstack-relmgr-office07:21
*** jhesketh has quit IRC07:22
*** jhesketh has joined #openstack-relmgr-office07:23
*** Travis__ has left #openstack-relmgr-office07:32
*** dims__ has joined #openstack-relmgr-office08:06
*** dims__ has quit IRC08:11
*** openstack has joined #openstack-relmgr-office08:48
johnthetubaguyttx: not seen anything else major pop up at least08:49
*** flaper87 has quit IRC08:51
*** flaper87 has joined #openstack-relmgr-office08:51
ttxjohnthetubaguy: no, glance was being respun on another issue, but gate being busy is killing the respin08:51
ttxto the point where I'm wondering if we should not abandon the idea of respinning08:51
johnthetubaguyoh, gotcha08:51
ttxdoing those oslo releases wasn't the best idea after all08:53
johnthetubaguyttx: it's always good to have something for the post-mortem08:54
*** dims__ has joined #openstack-relmgr-office09:08
*** dims__ has quit IRC09:13
*** openstackstatus has joined #openstack-relmgr-office09:38
*** ChanServ sets mode: +v openstackstatus09:38
-openstackstatus- NOTICE: gerrit is undergoing an emergency restart to investigate load issues09:43
*** ChanServ changes topic to "gerrit is undergoing an emergency restart to investigate load issues"09:43
SergeyLukjanovttx, re sahara - everything is still okay :)09:45
SergeyLukjanovno need for RC09:45
SergeyLukjanov(I mean no need for RC2)09:45
ttxSergeyLukjanov: heh, cool09:46
*** dims__ has joined #openstack-relmgr-office09:49
ttxwoo, faster.09:54
*** openstackgerrit has quit IRC10:01
*** openstackgerrit has joined #openstack-relmgr-office10:02
*** dims__ is now known as dims10:18
ttxNotes: considering a RC3 for Nova over https://bugs.launchpad.net/cinder/+bug/150515310:31
openstackLaunchpad bug 1505153 in Manila "gates broken by WebOb 1.5 release" [Critical,In progress] - Assigned to Valeriy Ponomaryov (vponomaryov)10:31
ttxthe trick is that would trigger a respin for Manila and Cinder at least, if we follow the same rationale there10:31
sdaguedo these projects have their fixes up? It's the same bug from nova that they've inherited10:32
ttxsdague: checking10:32
ttxmanila master fix is not merged yet10:32
sdaguecinder is merged10:33
sdaguein master10:33
sdaguehttps://review.openstack.org/#/c/233528/10:33
ttxcinder backport proposed10:33
ttxtests pass, blocked by my -210:33
ttxsuspecting the tests would not pass anymore due to bug 1503501 though10:34
openstackbug 1503501 in neutron "oslo.db no longer requires testresources and testscenarios packages" [High,Fix committed] https://launchpad.net/bugs/1503501 - Assigned to Davanum Srinivas (DIMS) (dims-v)10:34
* ttx rechecks to see10:34
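Bug 1503501 means oslo.db 3.0.0 stopped pulling in two test-only dependencies, so consuming projects have to list them themselves. A sketch of the shape of the fix being backported, with version floors assumed from that era's global-requirements:

    # test-requirements.txt (addition)
    testresources>=0.2.4   # previously installed transitively via oslo.db
    testscenarios>=0.4
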
ttxdims: can you propose a backport for https://review.openstack.org/#/c/231789/ to stable/liberty so that we can get check results (and rebase other things on top of it if needed) ?10:36
dimsttx: ack10:37
ttxdims: thx10:37
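A stable-branch backport like the one ttx just requested is normally proposed as a cherry-pick; a sketch of the standard workflow (branch name and sha are placeholders):

    git fetch origin
    git checkout -b fix-backport origin/stable/liberty
    git cherry-pick -x <master-sha>   # -x records "(cherry picked from commit ...)"
    git review stable/liberty         # pushes to refs/for/stable/liberty
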
* ttx breaks for lunch10:37
*** ChanServ changes topic to "OpenStack Release Managers office - Where weekly 1:1 sync points between release manager and PTLs happen - Logged at http://eavesdrop.openstack.org/irclogs/%23openstack-relmgr-office/"11:18
-openstackstatus- NOTICE: Gerrit has been restarted and is responding to normal load again.11:18
*** doug-fish has joined #openstack-relmgr-office11:22
*** gordc has joined #openstack-relmgr-office11:36
*** dims has quit IRC11:47
*** dims has joined #openstack-relmgr-office11:48
Kiallttx: re https://bugs.launchpad.net/designate/+bug/1505295 - I'm seeing  liberty-rc-potential on this, should we be considering a new RC for it? or just a backport?12:40
openstackLaunchpad bug 1505295 in openstack-ansible "Tox tests failing with AttributeError" [High,In progress] - Assigned to Jesse Pretorius (jesse-pretorius)12:41
ttxKiall: looking12:42
ttxKiall: i think we can fix that postrelease. I already have a few RC respins in the pipe and they are struggling a lot with busy gate12:43
KiallCool, Sounds good to me. Good luck :)12:43
ttxKiall: i'll let you know if we need to respin over this12:44
KiallYea, it seems to only affect tests.. But.. we'll see as the other projects realize all their gates are failing ;)12:44
Kiall(At least, that's the impression I get from the bug :P)12:45
ttxKiall: we have 2.6.112:45
ttxthat should gradually unfail as a result12:45
KiallAh, cool..12:46
KiallI saw it wasn't tagged as in progress for o.m so assumed it hadn't been looked at there12:46
ttxI'll add a note to the bug12:47
openstackgerritMerged openstack/releases: Ignore backup files.  https://review.openstack.org/23220712:58
openstackgerritDavanum Srinivas (dims) proposed openstack/releases: oslo.log bug fix release  https://review.openstack.org/23420213:18
*** stevemar_ has joined #openstack-relmgr-office13:20
*** spzala has joined #openstack-relmgr-office13:25
*** mestery has joined #openstack-relmgr-office13:38
ttxsmcginnis: ping13:51
smcginnisttx: Hey13:59
*** sigmavirus24_awa is now known as sigmavirus2414:01
stevemar_dhellmann: dims ttx can we get https://review.openstack.org/#/c/233763/ and https://review.openstack.org/#/c/233761/ cut? they are currently blocking keystone gate for passing jenkins (we're pulling in clients into our tests that are pulling in requests 2.8.014:02
stevemar_)14:02
ttxsmcginnis: see -infra14:02
dimsstevemar_: ack. am on it14:02
smcginnisttx: ;)14:02
stevemar_dims: ty!14:04
*** AJaeger has joined #openstack-relmgr-office14:17
AJaegerrelease manager, we need a final release of the openstackdocstheme for Liberty so that we include links to it: https://review.openstack.org/233511 . Could this be done today, please?14:18
dimsstevemar_: check on pypi please14:21
stevemar_dims: thanks, it was totally blocking keystone development :)14:22
openstackgerritMerged openstack/releases: keystonemiddleware 2.4.0  https://review.openstack.org/23376314:22
openstackgerritMerged openstack/releases: keystoneclient 1.8.0  https://review.openstack.org/23376114:22
*** bdemers has quit IRC14:23
dimsvery welcome stevemar_14:23
*** bdemers has joined #openstack-relmgr-office14:24
stevemar_dims: oh shoot, i think i have to bump g-r14:34
AJaegerdims: Could you release openstackdocstheme as well, please?14:38
*** bdemers has quit IRC14:38
stevemar_dims: i'm not sure if you have power here:  https://review.openstack.org/23425514:39
dimsstevemar_: +214:39
dimsAJaeger: looking14:40
AJaegerthanks, dims14:40
stevemar_dims:  what about https://github.com/openstack/requirements#for-upper-constraintstxt-changes14:40
*** bdemers has joined #openstack-relmgr-office14:40
dimsstevemar_: let's wait for the CI job, it will tell us14:41
stevemar_dims: alrighty14:41
dimsAJaeger: liberty release?14:44
AJaegerdims: yes! Our theme contains links to releases - and we couldn't release earlier...14:44
AJaegersee the left bar at http://docs.openstack.org/networking-guide/ - points to kilo14:45
AJaegerNow is the time to point it to Liberty...14:45
dimsack.14:45
AJaegerand I prefer to have it out today with a bit of slack instead of pushing it through on Thursday and not being ready in time...14:46
AJaegerso, whenever you release today is fine ;) we don't need it *now*. But I prefer today, please14:46
dimsAJaeger: working it now14:47
AJaegerthanks, dims!14:47
AJaegerdims, I have to leave now, will read backscroll later if there's anything for me...14:49
*** armax has joined #openstack-relmgr-office14:49
*** AJaeger has quit IRC14:49
dimsAJaeger: ttyl good night14:49
armaxttx: ping14:58
*** mriedem has joined #openstack-relmgr-office14:59
ttxarmax: o/15:01
armaxttx: I saw you went for an RC3 for Glance15:02
ttxarmax: yes, and probably nova and cinder as well if the gate behaves15:02
ttxarmax: anything on your map ?15:02
armaxttx: at this point I wonder if we should have one too, we had 4 issues that are affecting liberty unit tests15:03
ttxarmax: do you have a list of them ? So far we've been respinning on the Webob one, not on the oslo ones (which should be fixed by later versions anyway, so not permanently broken)15:04
armaxttx: https://review.openstack.org/#/c/232270/15:04
armaxttx: ok, they are the oslo ones15:04
ttxarmax: the trick being, the gate is very busy so it's very likely we open a door we don't know how to close15:04
armaxttx: fair enough15:04
ttxarmax: but yes, the stable/liberty branch will need some care to get fixed15:05
openstackgerritMerged openstack/releases: Liberty release - openstackdocstheme 1.2.4  https://review.openstack.org/23351115:06
ttxarmax: we can push them as -2 on the stable/liberty branch and see in what order they need to be fixed to pass15:06
ttxand maybe we can make a quick yes/no decision based on that15:07
armaxttx: let me check with ihar15:07
ttxarmax: also check the neutron-*aas15:08
ttxarmax: you may need to fix https://bugs.launchpad.net/nova/+bug/1503501 at the same time15:09
openstackLaunchpad bug 1503501 in neutron "oslo.db no longer requires testresources and testscenarios packages" [High,Fix committed] - Assigned to Davanum Srinivas (DIMS) (dims-v)15:09
armaxttx: we have that in master15:09
armaxttx: and it’s been cherry picked15:10
ttxarmax: hmm, might need to get squashed as a single fix to get in, or the tests will block each other15:10
ttxif https://review.openstack.org/#/c/232270/ really blocks tests15:11
armaxttx: yes, they are all squashed here: https://review.openstack.org/#/c/232270/15:11
ttxoh, ok15:11
armaxttx: at this point, I think the issues are only UT's15:11
ttxso we could do a neutron RC3 with just 232270 in ?15:12
ttxthat would be reasonable (and ready) enough15:12
ttxhow about the neutron-*aas, do they need patches in as well ?15:12
armaxttx: in theory yes, I actually need to squash another fix in it to make sure that the UT work with oslo.service >0.715:13
ttxah, hm. That adds a minimum of 5 hours to the mix before it can clear the check queue then15:13
ttxthat makes it a lot more dangerous15:13
ttxarmax: ok, update it and when the tests pass we'll make the call15:14
armaxttx: agreed.15:14
armaxttx: at this point we can make an opportunistic decision15:14
armaxttx: but if we don’t manage to, it’s not the end of the world15:14
ttxright15:14
ttxthat can land post-release ok I think15:15
dhellmannttx, dims : belated good morning (sorry, I forgot I had to go to the dentist this morning)15:15
armaxttx: indeed15:15
ttxdhellmann: hell broke loose with the oslo releases15:15
dimsdhellmann: hey, hope you are not groggy :)15:15
ttx+ the WebOb thing15:15
dhellmanndims: no, just a cleaning15:16
dhellmannttx: oh?15:16
dimsdhellmann: the oslo.db testresources thing15:16
ttxso in retrospect it was a pretty bad idea to do oslo releases on this week15:16
dhellmann:-/15:16
ttxdims: and the oslo.messaging and the oslo.versionedobjects15:16
ttxand the oslo.log15:16
dhellmannis there an email thread or something I can catch up on?15:16
ttxdhellmann: no, I'm in firefighting mode, the gate is overloaded and I can't get anything I need in15:17
dhellmannok15:17
dhellmannhow can I help?15:17
ttxa dozen projects want a respin15:17
dhellmannttx: I'll add a lib release freeze period to the schedule in the project guide15:17
dimsdhellmann: oslo.db had the testresources issue (bumping to version 3.0.0 did not help as we had not capped stable/liberty). oslo.messaging had a fixture that failed because zmq was missing (zmq is optional). For o.vo, master was ok but stable/liberty had issues. oslo.log broke hyperv.15:20
dimsdhellmann: oslo.db - patches are in progress to add testresources to stable/liberty, master of all projects is ok15:20
dimsdhellmann: oslo.messaging - 2.6.1 unblocked everything15:20
ttxdhellmann: it's admittedly not a real solution. We are just deferring the problem15:20
dimsdhellmann: o.vo - i think sdague added an exclude in stable/liberty15:21
ttxit's the other face of the "don't cap" coin15:21
dhellmannttx: yeah, but if we move it out of the critical period at least it doesn't block the release15:21
dimsdhellmann: oslo.log - new point release unblocked them15:21
dhellmanndims: did we get that oslosphinx release out, too?15:22
dimsdhellmann: yes, done15:22
ttxdhellmann: well yeah. I tried to sell the argument that it shouldn't trigger respins since we would have had the same problem next week15:22
dimsright, if we had released next week we would have broken installs in whoever was trying things out15:23
sdaguealso, for things we aren't going to cap, we can test15:23
sdaguedims: I did not exclude, we're backporting a fix15:23
dimssdague: sorry15:23
dimsmis-remembered it15:23
sdaguewe have the ability to add the compat jobs15:23
ttxdhellmann: At this stage I'm leaning towards respinning the things affected by the webob issue15:23
ttxbut not the things where the branch is borked due to oslo things15:23
sdaguethat's the big hole we have15:24
sdaguebecause that was a safety net we used to enforce15:24
ttxin the webob case it's apparently the consuming project fault15:24
dhellmannsdague: compat jobs?15:24
dhellmannttx: that approach makes sense15:25
sdaguewhere we tested the library against stable/foo of the rest of the stack15:25
dhellmannsdague: I think that's the thing lifeless wants and that I've been objecting to15:25
dimsdhellmann: one note here, if nova or cinder tests break, compat jobs won't catch them15:26
ttxdhellmann: so we are looking at projects affected by https://bugs.launchpad.net/cinder/+bug/150515315:26
openstackLaunchpad bug 1505153 in openstack-ansible "gates broken by WebOb 1.5 release" [High,In progress] - Assigned to Jesse Pretorius (jesse-pretorius)15:26
sdaguedhellmann: that's what we used to do, because otherwise you have to cap everything all the time15:26
mriedemis it too late for me to ask why we can't just cap oslo.db<3.0 in g-r on stable/liberty?15:26
ttxmriedem: because we said we wouldn't cap for the last 6 months15:26
sdaguedims: yes, that's fine, however the compat job would have caught this15:26
mriedembut upper-constraints doesn't apply to unit test jobs15:26
dhellmannsdague: my impression of his approach is that we would not ever be able to advance anything in the libraries, and development would become so difficult that everyone would stop trying to do it15:26
ttxand two days before release sounds like the wrong moment to reverse 6 months' worth of thinking15:27
dhellmannsdague: we're in this state where requirements think we are saying we're compatible, but we're not actually trying to be compatible15:27
sdaguedhellmann: some libraries are15:27
mriedemi personally think the upper-constraints strategy is flawed, but it's hard to get into that right now15:27
ttxmriedem: last cycle we did that and the white walkers attacked us15:27
sdagueo.vo explicitly wants to be15:27
sdaguefor instance15:27
mriedemttx: last cycle we didn't have the centralized releases repo either15:27
dhellmannsdague: that's fine, but lifeless was trying for a blanket policy15:27
*** mestery has quit IRC15:28
dhellmannI don't have a problem if a team wants to take on that challenge, but I don't want everyone forced to do it15:28
sdagueok, so realistically every library must either be capped in gr or have a compat job15:28
ttxmriedem: personally I would rather try to make sure that new oslo releases work with their consuming projects /before/ they are released15:28
dhellmannthat makes sense to me15:28
sdaguehonestly, I'm fine with it being a library by library decision15:28
mriedemin my experience, we've had capping issues in the past because (1) we capped with <= so we left 0 wiggle room for patch releases and (2) projects released on their own and didn't follow semver - both of which should be resolved with a centralized release team using the releases repo15:28
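To make the two mechanisms mriedem is contrasting concrete, a hypothetical pair of entries (illustrative versions, not the actual liberty files):

    # global-requirements.txt -- a cap excludes versions for everyone,
    # and propagates into each project's requirements.txt via the bot:
    oslo.db>=2.4.1,<3.0.0
    # upper-constraints.txt -- an exact pin of what the gate tested,
    # applied at install time with `pip install -c upper-constraints.txt`:
    oslo.db===2.6.0
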
sdaguebut right now we've got a ton which have neither15:28
dhellmannsdague: agreed15:28
dansmithttx: we had that for o.vo and master, but missed liberty, FWIW, but I agree we should do much better for oslo libs in that area15:29
sdaguemriedem: the problem ends up being propagation of the caps into libraries which makes the N-cycle uncap craziness15:29
mriedemmaybe we shouldn't sync caps to libraries15:30
sdaguemriedem: and then you run into the pip resolver question15:30
mriedemyeah15:31
sdaguebecause if they aren't there, you can get stuff that you don't expect15:31
dansmithmaybe we should rewrite in go before friday?15:31
mriedemb/c the lib has uncapped deps and the server has capped deps15:31
* dansmith runs15:31
sdagueand libraries depend on other libraries15:31
dimslol15:31
sdaguewhich is what constraints solved, narrowly15:31
ttxdansmith: release is tomorrow fwiw, not friday15:32
sdagueit only solves it for us, the ansible fails here demonstrate how it fails in the field15:32
dansmithttx: oh, then I guess my plan won't work because of *that* :P15:32
sdagueanyway15:32
mriedemi guess i'm lost on how upper-constraints is a solution when it doesn't apply to the requirements files in a repo used during unit test jobs15:33
mriedemwhich is why we're breaking this week15:33
mriedemand i don't like having to tell packagers that they have to not only be aware of the requirements in the repo but also the global upper-constraints in another repo15:33
dhellmannmriedem: the constraints stuff just isn't done15:33
dhellmann(finished)15:33
mriedembut is the goal with constraints to just change the unit test jobs in our CI system to use those also, rather than just pip install requirements.txt test-requirements.txt?15:34
sdaguemriedem: yeh, well, upper constraints is a published list of "this works"15:34
dansmithso, another direction change aside,15:34
dansmithare we good on what we need to do in the short term?15:34
dansmithto avoid ttx's head from exploding15:34
sdaguemriedem: yes, it's coming to unit tests15:34
sdagueit's not there yet15:35
ttxdhellmann: apparently that wouldn't have prevented breaking the release, if we ship a requirements file that's actually only not broken if you take upper-constraints into account15:35
sdaguemriedem: https://github.com/openstack/nova/blob/e6d3b8592cab5dcf35069bffd39fb3d558c16ea1/tox.ini#L1315:35
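The tox.ini line sdague links to is the mechanism that brings constraints into unit test jobs; sketched below (the exact install_command at that commit may differ):

    # tox.ini
    [testenv]
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

With that in place, everything pip installs into the test venv is bounded by the published upper-constraints list, test dependencies included.
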
ttxat least that's the rationale under which we are considering a Nova rc315:35
mriedemsdague: yeah that grosses me out15:35
sdaguemriedem: you have a better idea? :)15:36
sdaguettx: in the o.vo case, I think we would have caught this with a liberty job15:36
dhellmannttx: yeah. maybe we should go back to capping requirements in servers, though there was some discussion of why that is not sufficient in the scrollback15:36
ttxdansmith: if the omnibus fix from sdague passes tests, we'll likely respin a rc3 over it15:36
sdaguebecause libs_from_git punches through requirements15:36
sdagueand it would have exposed that incompatibility15:36
ttxgood day to have a busy gate15:36
dansmithttx: okay, I need to check the logs even if it passes before you pull that trigger15:36
sdaguedansmith: we've got all the logs for the dsvm jobs15:37
ttxdansmith: mmmkay, now I need to remember to ping you15:37
dhellmannttx: there are ways to kick unimportant changes out of the gate15:37
ttxdhellmann: ah?15:37
dhellmannttx: edit the commit message in gerrit15:37
sdaguedansmith: so you can look at them now15:37
ttxways I haven't discovered in the last 5 years of making openstack releases then15:37
dansmithsdague: I don't see the partial job results on the omnibus patch15:37
ttxdhellmann: it's the check queue that is overloaded15:37
sdaguedhellmann: http://logs.openstack.org/66/234166/1/check/gate-grenade-dsvm-partial-ncpu/9522191//logs/15:37
ttxdhellmann: not the gate queue15:37
dhellmannttx: oh, well, there's not much you can do about that15:38
ttxright15:38
dhellmannsdague: what am I looking for?15:38
sdaguedhellmann: right, we've just got so much in check, though failing gate queue patches keep resetting and sucking up nodes15:38
sdaguedhellmann: sorry, that was meant for dansmith15:38
dhellmannah15:38
sdaguetab complete fail15:38
dansmithsdague: ttx: logs look fine +115:38
dhellmannttx: I just made the problem worse: https://review.openstack.org/23429315:38
ttxwe did send "stop the presses, only push fixes that matter" heads-up in the past. But it seems to have only resulted in people retrying their patches to make sure those would pass15:39
dhellmannttx: maybe we need an "emergency mode" switch for gerrit15:39
ttxdhellmann: yeah , I floated the idea in the past without a lot of success15:40
sdaguemriedem: https://jenkins03.openstack.org/job/gate-cinder-python27/1782/console15:40
sdaguecinder's use of 400 instead of 500 is apparently an issue?15:41
sdagueI thought that passed at some point15:41
mriedemsdague: did they change it in the last 12 hours?15:41
ttxdansmith: so we are all clear from your side ?15:41
sdaguemriedem: https://review.openstack.org/#/c/233923/15:42
dansmithttx: yeah15:42
mriedemsdague: well, he has to fix the unit test first15:42
ttxdhellmann: in other news, I intercepted the syncs from the requests blacklist this morning15:43
dhellmannttx: that's one bit of good news15:43
sdaguemriedem: how did it pass unit tests in check?15:43
mriedemidk but it's wrong15:44
mriedemhttps://review.openstack.org/#/c/231438/5/cinder/tests/unit/test_exception.py15:44
mriedemtest_default_args checks for 40015:44
sdaguemriedem: ok, so let's fix that15:44
sdaguethat keeps reseting the gate15:44
sdagueI'll get it15:44
sdagueok, this is a legit rebase error by gerrit I think15:46
sdaguemanually rebasing15:47
sdagueok, so here's what I'm seeing15:50
sdagueceilometer - unit tests break on a timeout sometimes (no idea why)15:50
sdaguekeystone - unit tests blow up on dep conflict with webob 1.5 - https://jenkins07.openstack.org/job/gate-keystone-python27/3849/consoleFull15:51
sdaguesahara - unit tests blow up on oslo.db issue15:51
sdaguein the current gate queue15:51
sdaguethe sahara job is master15:52
openstackgerritDoug Hellmann proposed openstack/releases: Revert "Fix docs build by blocking bad oslosphinx version"  https://review.openstack.org/23430715:52
mriedemhmm, keystonemiddleware problems with webob deps15:52
sdagueright, because we capped it and it didn't?15:53
dhellmannso keystone needs the webob cap, it looks like15:53
dhellmannyeah15:53
sdaguethat's the whole caps bite you problem15:53
sdaguebecause you have to have the entire world synchronized15:53
dhellmannwell I wonder why the cap wasn't enforced when the test env was built15:54
sdagueI blame pip :)15:54
sdaguehttps://review.openstack.org/#/c/233923/ - mriedem that's the cinder thing15:54
dhellmannyeah15:54
dhellmannso what's the deal with webob 1.5? do they have a broken release, or are we not handling some aspect of a change in behavior?15:55
mriedemthe latter15:56
dhellmannk15:56
mriedemlooks like keystone hasn't merged a global reqs update change15:56
*** mestery has joined #openstack-relmgr-office15:56
mriedembtw, the webob cap was temporary, goes away here https://review.openstack.org/#/c/233857/15:56
dhellmannmriedem: did we get a keystone client released with the cap, though?15:57
dhellmannor middleware, I guess15:57
*** mestery has quit IRC15:57
mriedemthis is the keystone change https://review.openstack.org/#/c/233820/15:57
mriedemkeystonemiddleware has the cap and a release at 2.4.015:58
mriedemwhich must have all happened in the last 12 hours or so15:58
mriedemhttps://github.com/openstack/keystonemiddleware/commit/2074ca89b9b19c953c3fbde8915bba30a6e1e0bb15:58
sdagueoh, so that was the badness, because it wasn't supposed to get out there15:58
sdaguegrrr15:58
dimsstevemar_: ^^15:58
sdaguethe webob 1.5 fix is a one line patch15:58
sdaguewhich is why we're just fixing it15:59
mriedemthe keystonemiddleware release was for something else https://review.openstack.org/#/c/233763/15:59
mriedemrequests, but it inadvertently pulled in the webob thing15:59
dhellmannmriedem: right, so we need to undo that to fix keystone I think15:59
sdaguedhellmann: here is what that patch looks like, fyi - https://review.openstack.org/#/c/233845/15:59
dhellmannsdague: ack, I saw the other one in cinder, too, lgtm16:00
ttxsdague: we have a +1 on the omnibus fix16:00
*** mestery has joined #openstack-relmgr-office16:00
sdaguettx: I say do it16:01
ttxsdague: so i can open a RC3 with the two bugs referenced and approve it to stable/liberty16:01
sdagueprobably get johnthetubaguy to ack it16:01
ttxlet me prepare that16:01
mriedemthe keystone g-r sync is failing unit tests, but if we land the revert of the webob cap in g-r then the keystone sync is void16:01
mriedemand would require a keystonemiddleware 2.4.1 release16:01
* johnthetubaguy ready to comment on those when needed16:01
dhellmannmriedem: yep, that sounds like the plan. shall we wait until the nova situation is resolved before doing that so we're only making one change at a time? it sounds like sdague, johnthetubaguy, and ttx are close to done16:02
mriedemseems we can still push forward with https://review.openstack.org/#/c/233857/ in paralel16:03
mriedem*parallel16:03
mriedemsince that's only master16:03
sdaguemriedem: yeh, it can go in parallel16:03
ttxjohnthetubaguy: https://launchpad.net/nova/+milestone/liberty-rc316:04
mriedembtw the merge conflict there is false16:04
ttxabout to approve https://review.openstack.org/#/c/234166/ for gate16:04
ttxwouldn't mind your +2 on that one16:04
ttxjohnthetubaguy: ^16:04
sdaguemriedem: no, it's real16:04
sdaguebecause you have an abandoned depends on16:05
sdaguethat patch is stuck with the depends on16:05
mriedemgah16:05
mriedemok16:05
mriedemupdating16:06
sdagueI actually sent a big long email about it this morning to the list to explain that16:06
sdaguebecause I realized it's not always obvious16:06
dhellmannyeah, the error message doesn't include a reference to what is actually blocking things16:06
mriedemi didn't realize depends-on ignored abandoned changes either16:07
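The mechanics being described: Zuul resolves cross-repo dependencies through a Depends-On footer in the commit message, and an abandoned target leaves the dependent patch reported as a merge conflict. A sketch with hypothetical change IDs:

    Fix unit tests for WebOb 1.5

    Depends-On: Iaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    Change-Id: Ibbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb

If the Iaaa... change is abandoned, this patch stays blocked even though its own diff rebases cleanly.
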
dhellmannmriedem: +2a16:07
ttxyou get a rc3. And YOU get a RC316:07
mriedemthanks oprah16:07
* dims checks under his chair :)16:08
*** AJaeger has joined #openstack-relmgr-office16:08
ttxdhellmann: OK, so things need time in the zuul washing machine now. We have glance and nova RC3 windows open with fixes in the pipe. We have tentative RC3s for Cinder and Manila over basically the same fix, pending the check results16:10
ttxif the check result returns positive in the coming hours we'll likely respin them as well16:10
AJaegerdims: thanks for the openstackdocstheme release.16:10
dhellmannttx: ack. I'll work on the keystonemiddleware release to unblock keystone16:11
AJaegerdims, dhellmann: Could you approve the constraints changes: https://review.openstack.org/233512 and https://review.openstack.org/233514 , please?16:11
ttxdhellmann: hopefully we can hold the RC respin line there.16:11
dhellmannttx: yep, just the lib16:11
dhellmannAJaeger: we're fighting a bunch of gate issues right now, I don't think we want to start changing other requirements16:11
AJaegerdhellmann: ok, understood.16:13
*** mestery has quit IRC16:15
ttxmriedem: any chance you can chase down e0ne or smcginnis to get https://review.openstack.org/#/c/233668/ updated to your liking ? The cinder rc3 hinges on a fast check result for this change16:18
ttxwe have a 6-hour tunnel on the check gate at this stage16:18
mriedemttx: yeah, can do16:19
ttxmriedem: thx16:19
* ttx goes for dinner break16:20
cp16netdhellmann: i'd like to make a new release of the python-troveclient with a minor update from 1.3.0 to 1.3.1. Could you help with this, or are you busy atm?16:27
dhellmanncp16net: we're fighting several requirements-related fires right now, so we're not doing releases today16:28
*** mestery has joined #openstack-relmgr-office16:30
mriedemcinder stable/liberty squash is updated https://review.openstack.org/#/c/234329/16:32
cp16netdhellmann: ok thanks, i'll ping you again about that in a day or so16:33
*** mestery has quit IRC16:33
*** mestery has joined #openstack-relmgr-office16:37
sdaguemriedem: is there no test fix needed for - https://review.openstack.org/#/c/234329/1 ?16:38
mriedemsdague: no, the test_default_args or whatever it was that failed on master was an independent change16:39
dhellmanncp16net: sounds good16:39
sdaguemriedem: ok, cool16:39
mriedemhttps://review.openstack.org/#/c/231438/ which i don't want to backport16:39
mriedemsince it plays with webob internals16:39
sdaguemriedem: yep, that sounds fine16:39
dhellmannAJaeger: we should spend some time thinking about whether the doc theme actually needs to be constrained or not -- it's not used by dsvm jobs, or most of the regular unit test jobs16:41
dhellmannsdague, mriedem, dims : are we waiting on jobs at this point or is there something for me to poke?16:42
jgriffithsdague: yeah, so that test isn't there in liberty so he won't hit the merge issue that I did16:42
AJaegerdhellmann: indeed. But that's today's situation - don't we want to change the framework?16:42
sdaguethe cinder change is new and waiting on nodes. Unless we wanted to jump the queue and get some stuff promoted for the release16:42
dimsdhellmann: sileht/cdent and i are working the oslo.messaging problem16:42
AJaegerdhellmann: Or should we add a comment to constraints file and say that the entry is ignored?16:42
dhellmannAJaeger: that package may be a candidate for the set of unconstrained things in blacklist.txt16:42
sdaguedims: does that hit liberty, or just master?16:43
AJaegerdhellmann: let me check that...16:43
sdagueI think that's got to be the game plan right now16:43
sdaguea) what's impacting liberty, how fast can we address it16:43
dimssdague: no cap means liberty + master are both picking up latest oslo.messaging right?16:43
sdagueb) what's impacting master making the gate reset, how do we fix those16:43
sdagueeverything else16:43
openstackgerritMerged openstack/releases: oslo.log bug fix release  https://review.openstack.org/23420216:43
sdaguedims: yep, I had only seen the fail on master16:44
dhellmannsdague: I'm comfortable with us jumping the queue16:44
dimssdague: will check with cdent16:44
sdaguedhellmann: ok, we should collect all the changes we want to get promoted in one go then16:44
dhellmannI'll start an etherpad to track all the things we're doing, sec16:44
mriedemthere were keystone things impacting master gate i thought, bknudson was looking at those16:44
dhellmannhttps://etherpad.openstack.org/p/liberty-release-gate-race16:44
sdaguemriedem: right, let's put master fixes in tier 216:45
sdagueand make sure we've got all the liberty things we think we need in flight16:45
*** mestery has quit IRC16:46
sdagueok, what other liberty things are in flight16:47
dhellmanndims: please add what you're working on to https://etherpad.openstack.org/p/liberty-release-gate-race16:47
*** mestery has joined #openstack-relmgr-office16:50
sdaguedhellmann: any idea if there is a manila backport?16:53
dhellmannsdague: I'm checking to see now16:53
dhellmannactually manila is failing unit tests because of the oslo.db dependency change16:53
*** mestery has quit IRC16:54
sdaguedhellmann: ok, right, then they need the omnibus fix like cinder and nova16:54
sdaguewhich is the 2 merged together16:54
dhellmannthis manila change does have a webob related fix16:54
dhellmannhttps://review.openstack.org/#/c/234288/116:54
dhellmannbswartz: ^^16:55
dhellmannugh16:55
stevemar_dhellmann: back from lunch, what's going on with the gate?! :) was releasing ksm/ksc with the new webob req a bad move?16:56
dhellmannstevemar_: it made things a little worse, so we'll need to spin new releases without the cap16:56
dhellmannstevemar_: see https://etherpad.openstack.org/p/liberty-release-gate-race16:56
stevemar_dhellmann: okay, i can push patches for version bumps (2.3.1 and 1.8.1) once the proposal bot updates ksc and ksm16:59
dhellmannstevemar_: good, thanks16:59
mriedemstevemar_: 2.4.1 for ksvm16:59
mriedem*ksm16:59
stevemar_mriedem: oops, yeah16:59
stevemar_was going off memory17:00
sdaguedhellmann: so the only other project that I might imagine has the webob issue would be ironic17:00
sdaguebecause this is nova heritage code, and I'm thinking about the projects that would have forked off and gotten it17:00
sdaguecinder got it from nova, manila from cinder17:01
dhellmannsdague: makes sense17:01
dhellmanndevananda, jroll : is ironic seeing gate issues?17:01
sdagueI'm going to go do some digging around17:01
mriedemwhen i was looking yesterday at the webob thing it was only nova cinder and manila17:02
sdaguedhellmann: ironic master at least doesn't look like it subclasses webob for exceptions17:02
sdaguemriedem: that was via logstash?17:02
mriedemyeah17:03
mriedembefore kibana died17:03
*** doug-fis_ has joined #openstack-relmgr-office17:04
jrolldhellmann: not that I'm aware of17:04
*** doug-fis_ is now known as doug-fish_17:05
jrolldhellmann: as of 20 minutes ago we're good17:05
dhellmannjroll: ok, thanks for confirming17:06
jrollnp17:06
*** doug-fish has quit IRC17:07
sdaguedhellmann: so manila, what's our story there? are we going to wait for them to lead on that, or is someone else going to fix it for them17:11
sdagueI was waiting on doing the ask to infra until we felt pretty good we had a complete fix list17:11
mriedemsdague: i think https://review.openstack.org/#/c/234288/ is complete17:11
mriedemit has the oslo.db change in test-requirements.txt17:11
sdagueoh, but is has ttx -217:11
mriedemi think that's just ttx bot17:12
mriedemauto -217:12
sdaguehey, ttx, come back from lunch!17:12
sdagueok, I guess we should get the cinder and nova ones promoted17:12
mriedemdinner, so it'll be like 3 hours before he's back17:12
mriedemright?17:12
jgriffithLOL17:13
mriedemthe manila change is in the check queue17:13
mriedemi have to go get my quarterly haircut now so be back in an hour17:13
*** gordc has quit IRC17:14
stevemar_mriedem: not your luxurious locks!17:15
sdaguemriedem: and you aren't even coming to summit, slacker17:18
sdaguedhellmann: fungi is doing promotes now of the cinder and nova fixes17:18
fungiyep17:18
dhellmannfungi, sdague : thanks17:18
fungithey're enqueued and promoted to the front of the gate now17:20
sdaguefungi: thanks much17:20
*** mestery has joined #openstack-relmgr-office17:20
fungiany time!17:20
ttxsdague: o/17:23
ttxwhat when which where17:23
stevemar_ttx: who?17:23
sdaguettx: you've got a -2 on the manila fix, just curious what our manila story will be17:23
stevemar_:)17:23
ttxsdague: I'll +2 it if it passes tests in the next hours17:24
ttxi.e. RC3 and all17:25
ttxafaict the tests are still running?17:25
sdaguettx: yeh17:25
* ttx needs to put the kids in bed, back in 45min17:26
sdaguewe got fungi to promote the nova and cinder ones17:26
sdaguedhellmann / ttx - stable kilo question17:26
ttxsdague: well if he can promote that one as well that would be great17:26
sdaguewho is supposed to take care of the 2015.1.2 bump on cinder - https://github.com/openstack/cinder/blob/1ec74b88ac5438c5eba09d64759445574aa97e72/setup.cfg#L317:26
AJaegerdhellmann: https://review.openstack.org/234357 and https://review.openstack.org/234358 are the move of openstackdocstheme to the blacklist17:26
ttxglance nova cinder manila is my current target RC3 list17:26
sdaguettx: ok, I'd say remove your -2 so it can be done17:26
ttxsdague: done17:27
*** bknudson has joined #openstack-relmgr-office17:27
sdaguettx: thanks, we'll have to let the current changes run their course, but will keep an eye on a good time for this one17:27
dhellmannsdague: not sure what you mean? do we need to raise the version # for cinder in stable/kilo?17:28
ttxsdague: cinder-stable-maint folks17:28
sdaguedhellmann: yes, because it's failing all the tests17:28
* ttx runs17:28
dhellmannsdague: as ttx said, the stable team for cinder should do that17:28
jgriffiththat must've been the other item that infra pointed out, that I *thought* was the ec17:29
jgriffithcool17:29
sdaguejgriffith: ok, you on it?17:29
jgriffithyeah17:29
jgriffithsdague: ttx dhellmann https://review.openstack.org/#/c/234360/17:30
sdaguejgriffith: cool, thanks17:30
* dhellmann can't wait to get everyone onto post-version numbering17:31
mtreinishdhellmann: there should be no reason to do post-version numbering17:31
mtreinishall it does is cause these headaches when someone pushes a release and thinks their job is done17:31
mtreinish(it's not the first time this has happened)17:31
dhellmannmtreinish: I'm not sure what that means.17:35
*** mestery has quit IRC17:35
mtreinishdhellmann: when the stable release person (forgot the real name) pushes a point release of a stable branch there are follow-up tasks (like bumping the setup.cfg version)17:37
mtreinishbut that never seems to happen17:37
mtreinishwhich causes that issue on stable branches17:38
*** nikhil has quit IRC17:39
dhellmannmtreinish: so you're saying this isn't the only task we forget to do? why does eliminating that task entirely not help to solve that problem?17:39
sdagueok, as I think we're a bit in a lull for test results / patch merge, I'm going to pop off and assemble my table saw and mentally regroup17:39
mtreinishdhellmann: it does, I'm saying we should remove the version lines17:40
*** nikhil has joined #openstack-relmgr-office17:40
dhellmannmtreinish: ok, you said "there should be no reason to do post-version numbering" but I think you meant pre-version17:40
dhellmannmtreinish: so I think we're saying the same thing17:41
mtreinishdhellmann: oh sry, yeah my bad17:41
*** gordc has joined #openstack-relmgr-office17:41
dhellmannand for now the only reason to keep with pre-versioning is that immature projects need help with the process; we tend to use pre-versioning for the milestones, but we could also just restrict tag access17:42
dhellmannmtreinish: yeah, fwiw, I'm trying to find all the by-hand things and cut them from the process where possible17:42
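To illustrate the two models under discussion (a sketch; the cinder version is the one from the setup.cfg linked earlier):

    # pre-versioned setup.cfg: the version line must be bumped by hand
    # after every stable tag, the step that keeps being forgotten
    [metadata]
    name = cinder
    version = 2015.1.2

With post-versioning the version line is simply omitted and pbr derives the version from the most recent git tag, removing the by-hand step entirely.
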
*** harlowja has joined #openstack-relmgr-office17:48
bknudsonso for keystone I think for now we just wait for the webob cap to merge and then we'll put out a new ksc + ksm release with the reqs update.17:52
dhellmannbknudson: ack, thanks, I think stevemar_ is on that so we're just waiting17:54
bknudsonand then we'll figure out how to deal with whatever probs are left over17:54
* dhellmann nods17:56
*** armax has quit IRC17:59
*** bswartz has joined #openstack-relmgr-office18:05
bswartzdhellmann: I'm here now -- sorry I used to have this channel on auto-join until freenode started kicking me for autojoining too many channels18:06
dhellmannbswartz: np, I know the feeling. We're using https://etherpad.openstack.org/p/liberty-release-gate-race to track what needs to happen to finish the release today18:07
bknudsondhellmann: liberty release? I haven't been looking at keystone issues in L.18:07
bknudsonI assume they're the same as master18:07
dhellmannbswartz: it looks like https://review.openstack.org/#/c/234288/ failed its tests18:08
dhellmannbknudson: the keystone libs from master are causing some issues in the liberty jobs, too, I think18:08
bswartzdhellmann: yeah, I'm taking a look now18:08
dhellmannbswartz: great, thanks18:08
lifelesssdague: dhellmann: we have a room for discussing backwards compat in tokyo right?18:10
lifelessmriedem: realistically issues for packagers haven't changed in any fundamental way18:11
lifelessmriedem: we've *never* been able to say with precision which versions of the billions of permutations do and don't work18:11
lifelessmriedem: its always been an approximation18:11
stevemar_dhellmann: i'll be a little delayed, keystone meeting right now18:14
stevemar_if there's something i need to do =\18:14
dhellmannstevemar_: I think we're still waiting for patches to land18:16
*** mriedem has quit IRC18:18
stevemar_coolio18:19
sdaguedhellmann / ttx the nova and cinder patches merged it appears18:20
sdagueso those should be rc3 able18:20
dhellmannsdague: woot, I'll let ttx handle that when he gets back because he knows which scripts need to run18:20
sdaguethe manila one failed unit tests, I'm going to reprod locally and see if I can fix it18:21
*** mriedem has joined #openstack-relmgr-office18:21
mriedemsdague: i had to get cleaned up for mickey and the gang18:21
mriedemwalt has high standards18:22
sdaguemriedem: yeh, wouldn't want to be black listed out of there18:22
* sdague still thinks mriedem should fly from orlando to tokyo 18:22
mriedemlooks like the manila backport is hitting other unit test failures18:25
mriedemmissing this for one https://github.com/openstack/manila/commit/f38b8d4efd1f68f4ea29747f7377e0936f61d89c18:25
dhellmannsdague: bswartz was looking at the manila patch, too18:26
mriedemit needs f38b8d4efd1f68f4ea29747f7377e0936f61d89c18:26
lifelessmriedem: perhaps we could do a voice call to get some high bandwidth handle on the issues you're hearing from your packagers?18:26
mriedemlifeless: maybe at some point, not today though18:27
sdaguedhellmann: ok18:27
mriedembswartz: your manila backport needs f38b8d4efd1f68f4ea29747f7377e0936f61d89c squashed in also18:27
sdaguemriedem: yeh, that looks like the ticket18:28
mriedemi could cram that in quick if no one else is doing it18:28
sdaguemriedem: go for it, you found it18:28
bswartzmriedem: thanks, that would be appreciated18:28
lifelessmriedem: definitely we should talk in tokyo18:28
mriedemnp, up in a sec18:28
mriedemlifeless: i won't be there18:28
* bswartz is struggling to get dependent libs installed on fedora after recent upgrade18:29
mriedemsdague: bswartz: https://review.openstack.org/#/c/232535/18:32
sdaguemriedem: wait, wasn't there a version that wasn't ttx -2?18:33
mriedemyeah, https://review.openstack.org/#/c/234288/18:33
mriedemdueling18:33
mriedemi can update that one instead18:33
sdagueyeh, lets update the not -2ed patch18:33
sdaguebecause we would like to merge18:33
bswartzis ttx not here to remove his -2?18:34
mriedemweird, 234288 is what i used for git review -d18:34
mriedemstarting over18:34
bswartzmriedem: you have 2 change IDs in your commit message18:35
bswartzonly the second one was considered by gerrit18:35
mriedemyeah, it's usually not a problem18:35
mriedemthat's how the cinder one was18:35
mriedembut yeah, that's why it didn't update the other18:36
sdagueyeh, it ignores everything but the last one18:36
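Schematically, what happened to the commit message (hypothetical IDs); Gerrit matches a pushed commit to an existing review using only the last Change-Id footer:

    Squash stable/liberty test fixes

    Change-Id: Iaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa   <- silently ignored
    Change-Id: Ibbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb   <- determines which review is updated
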
lifelessmriedem: :/18:36
ttxI'm here18:36
ttxsdague, bswartz: am I needed ?18:37
bswartzttx: just 2 different changes in gerrit both trying to fix manila18:37
bswartzmriedem updated the wrong one18:37
dansmithttx: in general? yes! :)18:37
sdaguenot if mriedem sorts the ids on the changes18:37
sdaguewe need a ttx bat signal18:37
mriedemthere https://review.openstack.org/#/c/234288/18:37
dansmithsdague: nice18:38
bswartzttx: you could remove your -2 from https://review.openstack.org/#/c/232535/ and we could use that instead18:38
ttxsure18:38
mriedemwe're good with https://review.openstack.org/#/c/234288/ now18:38
mriedemi fixed it18:38
bswartzor we could not and use the one mriedem just updated18:38
ttxwell, whichever is first in the test run18:38
mriedemtoo late18:38
mriedemheh, yeah18:38
lifelesstoo many fixes :)18:38
ttxwhich one should we kill18:38
sdaguemriedem: I like your new topic18:39
mriedemit's more appropriate18:39
lifelessttx: / dhellmann: we do have time in tokyo to talk about removing version= from setup.cfg right ?18:39
dhellmannlifeless: that's one of the topics for the fishbowl session18:40
mriedembknudson: ^ you should be around for that one on the version= in setup.cfg thing since you had questions about it18:40
lifelessdhellmann: great18:41
ttxsdague: the omnibus is in. Looks like I can tag nova rc318:41
dhellmannlifeless: though given how some of the projects went this cycle, I'm mostly convinced to only encourage that for mature projects18:41
sdaguettx: yes, for cinder as well18:41
ttxfor cinder ?18:42
ttxthe patch I have was -1ed18:42
sdaguettx: https://review.openstack.org/#/c/234329/18:42
lifelessdhellmann: (I was reminded of this because the helion dev-tools-mgr was asking me about dealing with the situation where they've merged a project right after a tag but before the version bump to setup.cfg)18:42
ttxETOOMANYPATCHES18:42
lifelessdhellmann: how so? what problems does it avoid with them ?18:42
* ttx abandons the others18:42
lifelessdhellmann: (Or lets wait for fishbowl?)18:42
dhellmannlifeless: let's wait18:42
mriedemlifeless: internally when we branch we comment out version= in setup.cfg18:43
ttxsmcginnis, johnthetubaguy: about to tag cinder and nova rc318:44
smcginnisttx: Thanks!18:44
sdagueok, if I can get someone to +A the manila patch - https://review.openstack.org/#/c/234288/2 (stable/liberty)18:44
sdagueI think we can ask fungi to promote it18:44
ttxsdague: on it18:44
sdagueI've manually confirmed the unit tests pass18:44
sdaguewhich are all that failed before18:45
lifelessmriedem: yeah, that's a decent approach IMO18:45
fungiyeah18:45
*** armax has joined #openstack-relmgr-office18:45
fungidid the nova and cinder fixes merge yet?18:45
sdaguefungi: yes18:46
sdaguefungi: 234288,218:46
sdagueI manually confirmed the tests that failed last time pass18:46
ttxfungi: if you're in a good day i'd like to have https://review.openstack.org/#/c/233661/ as well18:46
ttxblocking glance rc318:46
sdagueso I think it's low risk to jump straight to gate18:46
ttxbah it's gating already18:46
smcginnisfungi: Last one for Cinder - https://review.openstack.org/#/c/234329/18:46
sdaguettx: it can be promoted18:46
ttxfungi: it's gating already, not sure it needs promotion18:47
ttxsdague: ok then18:47
* ttx glances at the queue18:47
ttxholy mary18:47
sdague234288,2 and 233661,2  it seems18:47
fungik18:47
sdaguettx: so is that it for rc3s?18:47
ttxthe ones I know about yes18:48
sdagueok18:48
ttx(the one we decided on Monday and the ones we added due to the webob)18:48
sdagueok, great18:48
ttxsdague: well, thx for the help unlocking the day18:48
* ttx likes to share the pain18:48
sdagueyeh, no prob18:48
ttxalright. fire in the hole, nova rc3 coming18:49
ttxhmm, not the best hour to use the internetz for me18:50
fungiheh, looks like manila's not in the integrated gate queue, so it's off gating on its own anyway18:51
ttxyay18:52
fungiand the glance change is at the front of the gate now18:53
ttxhmm, looks like everyone is using netflix tonight18:53
ttxcan't get a proper nova repo clone18:53
sdaguettx: it's only 600 MB18:54
bswartzfungi: how did you get the manila change into the gate without passing through the check queue first?18:54
ttxsdague: mind you, the release tools use the zuul cloner, supposedly to be able to cache, except they don't.18:54
ttxbswartz: black magic I suspect18:54
fungibswartz: it involves dead chickens. better not to ask18:55
bswartzI always suspected it was possible to pull the strings with zuul but I didn't know if anyone actually did it18:55
fungifor urgent fixes blocking most of openstack or holding up the semi-annual release, yes18:56
fungisometimes also for critical fixes corresponding to security advisories18:56
bswartzwe've periodically had gate stability issues (jobs timing out, usually) that required endless rechecks -- it's something we're fixing in mitaka -- but I wondered if there were shortcuts to merge things18:56
sdaguebswartz: not unless it is affecting things at a large project level18:58
lifelessbswartz: the technical capacity exists, policy is to only use it in really exceptional circumstances18:58
ttxi'll be back in a few -- I don't have enough bandwidth for the tags right now18:59
bswartzwell we're fixing the problem by making the first party manila driver faster so tests take less time18:59
bswartzbut it can be maddening to have a change get thrown out of the gate and go back through the check queue multiple times before merging19:00
ttxsdague: the cinder fix went in before https://review.openstack.org/#/c/233923/ merged in master19:28
ttxsdague: now we have to keep tabs on that one and make sure it merges19:28
jgriffithttx: I think we squashed that19:28
ttxjgriffith: squashed in master ?19:29
jgriffithttx: in liberty19:29
ttxright, and the fix never made it to master19:29
ttxcreating a regression opportunity19:29
jgriffithttx: yeah, I see now19:30
ttxjgriffith: so we need to make sure that merges now19:30
jgriffithttx: yeah, I'll keep babysitting19:30
ttxI can't even describe the state we are in, in the bug19:30
jgriffithttx: indeed... for some reason I thought the cherry pick would autofail on the merge if it hadn't landed in master19:31
jgriffithttx: well.. not the cherry pick, but the merge of the cherry pick19:31
ttxwell, that would remove a lot of process if that happened19:31
jgriffithttx: you say that like it's a bad thing :)19:32
bswartzttx: one of the gate tests for 234288,2 took a very long time to install devstack -- there's a good chance it will hit a timeout19:33
ttxnot sure if it's reasonable to cut the RC3 in the meantime19:33
jgriffithttx: so the problem is I think that'll be around 3+ hours at least before it merges19:34
bswartzI will recheck if it ultimately fails, but it may have to travel through the check queue again19:34
ttxfungi: we might need 233923,2 promoted in the gate queue since the stable/liberty backport was approved before the master fix landed19:34
ttxbswartz: let's keep an eye on it19:34
fungilet me know if it becomes necessary. not paying close attention since i'm running the infra meeting right now19:35
ttxLooks like I don't have enough bandwidth for the tagging right now anyway19:35
ttxfungi: 233923,2 is necessary19:35
jgriffithnet neutrality :)19:35
bswartzttx: i've got my eye on it19:35
* bswartz has jenkins open with auto-refresh19:36
jgriffithttx: should have his own fast-lane19:36
ttxfungi: i don't feel comfortable cutting cinder rc3 while we have that potential regression in master19:36
bknudson+3 it19:36
fungittx: necessary to promote? can do then19:37
ttxdamn, just as my cinder repo clone finally completed. Bad luck19:37
fungittx: okay, i promoted it19:38
ttxfungi: thx19:38
fungihopefully the other stuff from earlier already merged? i'm not really watching19:38
fungibut i can take a look again after we wrap the infra meeting19:39
ttxfungi: well, no :)19:39
*** mestery has joined #openstack-relmgr-office19:45
bswartzttx: it failed19:53
bswartzhttps://jenkins06.openstack.org/job/gate-manila-tempest-dsvm-neutron-multibackend/958/console19:53
ttxdammit19:53
ttxfungi: we had a fail on the manila fix https://review.openstack.org/#/c/234288/ and need it to jump to gate again (it has its own queue)19:54
bswartzit seems to be bad luck -- the slowness of devstack makes it take too long19:54
ttxfungi: shall I recheck it, or reapprove it or...19:55
*** AJaeger has left #openstack-relmgr-office19:55
sdaguettx: https://review.openstack.org/#/c/233923/ is not a critical fix19:58
sdaguettx: it's almost a noop in how it exposes19:58
ttxah. I wonder why we included it in the RC then19:59
ttxbecause the state of master changed ?19:59
* ttx is confused20:00
sdaguettx: we needed *some* change20:01
sdaguebut 923 is shifting it from 400 -> 50020:01
sdaguethe issue was it previously was 020:01
sdaguewhich explodes20:01
sdagueso there is a little bit of eventual consistency here, but any valid http value is fine. It always gets overwritten20:02
fungittx: i've pushed 234288,2 back into the gate for manila now20:02
sdagueit's just an invalid base value that makes things explody20:02
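A minimal sketch of the failure mode being described, assuming the nova-heritage ConvertedException named later in the log (the real class carries more state, and the title default here is an assumption):

    import webob.exc

    class ConvertedException(webob.exc.WSGIHTTPException):
        def __init__(self, code=500, title='Unknown Error', explanation=''):
            # WebOb < 1.5 tolerated the old invalid default of code=0;
            # WebOb 1.5 validates the "<code> <title>" status line at
            # construction time, so the default had to become a real HTTP
            # code. Callers always overwrite it, which is why shifting
            # 0 -> 400 -> 500 is "almost a noop" as sdague says.
            self.code = code
            self.title = title
            self.explanation = explanation
            super(ConvertedException, self).__init__()
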
sdagueanyway, we can resync post tc meeting if you want20:03
bswartzfungi: thx20:03
smcginnismriedem: Saw your note on the version bump.20:27
smcginnismriedem: Makes sense.20:28
smcginnismriedem: Context of why I did it: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-10-13.log.html#t2015-10-13T17:25:2820:28
mriedemi will forever grind that axe now20:30
ttxsdague: the evil is done now, it's top of gate anyway20:35
mriedemsdague: manila is also setting the webob thing to 400 but as you said it doesn't really matter20:41
mriedemlet's just pull ConvertedException out to oslo.exceptions20:41
* bswartz will BRB need to drive home20:42
*** bswartz has quit IRC20:42
*** TravT_ has joined #openstack-relmgr-office20:46
*** TravT__ has joined #openstack-relmgr-office20:48
*** TravT__ has joined #openstack-relmgr-office20:48
*** TravT has quit IRC20:49
*** TravT__ is now known as TravT20:49
*** TravT_ has quit IRC20:51
jgriffithttx: smcginnis https://review.openstack.org/#/c/233923/ merged21:00
jgriffithGTG21:00
ttxalrighty21:00
ttxall clear21:00
smcginnisjgriffith: Just noticed that.21:00
smcginnisOpen the floodgates again. :)21:01
jgriffithand on that note... I've got some homemade tomato sauce to make :)21:01
ttxlet's see if I can clone a repo now21:01
ttxsdague, fungi: dammit, glance failed https://review.openstack.org/#/c/233661/21:02
ttxAt this stage I'll pick it up tomorrow morning21:02
ttxfeel free to promote it toward the end of the day if that feels necessary to get it21:02
ttxcinder rc3 on its way21:04
jgriffithttx: cool.. at this point things can/should just wait until first update :)21:05
* jgriffith bounces back out to the kitchen21:06
*** bswartz has joined #openstack-relmgr-office21:07
bswartzttx: merged https://review.openstack.org/#/c/234288/21:07
jrollttx: re liberty we still have at least one patch, maybe two, inbound for ironic, jfyi21:08
ttxbswartz: yep, will tag in a few21:08
bswartzttx: ty21:08
ttxjroll: planning to make another release pre-release-day ?21:08
ttxjroll: or is that stable point release material ?21:09
jrollttx: yes, there's a patch that will need backporting that is having some contention :( it's upgrade-related so I wanted to get it in before release day21:09
ttxjroll: time is running out but meh21:09
jrollttx: I know :(21:10
ttxintermediary-released things are not so much of an issue21:10
ttxjroll: I won't have time to help you on Thursday, so better do that release tomorrow21:10
jrollright; I just don't want to release the current version of ironic as "liberty" when there's an upgrade bug21:10
jrollack21:10
ttxagreed21:10
jrollI wanted to do it last week fwiw :P21:12
jrollbut yeah, will push people21:12
*** mestery has quit IRC21:12
ttxalright cinder and manila rc3 are in21:17
ttxnova is next21:18
fungiokay, glance 233661,2 has been shunted back to the gate again21:18
fungior will be as soon as zuul finishes catching up with its event queue backlog21:20
fungithere it is21:21
ttxalright, nova rc3 done21:25
ttxand closing shop for the night21:27
*** mestery has joined #openstack-relmgr-office21:46
*** gordc has quit IRC21:47
*** mriedem has quit IRC21:48
*** sigmavirus24 is now known as sigmavirus24_awa22:08
*** david-lyle has quit IRC22:10
*** david-lyle has joined #openstack-relmgr-office22:12
anteayattx: yay22:17
*** mestery has quit IRC22:21
*** stevemar_ has quit IRC23:23
*** stevemar_ has joined #openstack-relmgr-office23:24
*** dims has quit IRC23:41
*** dims has joined #openstack-relmgr-office23:42
*** mestery has joined #openstack-relmgr-office23:45
