*** achanda has quit IRC | 00:01 | |
openstackgerrit | Joshua Harlow proposed openstack/tooz: Avoid using a thread local token storage https://review.openstack.org/176133 | 00:04 |
openstackgerrit | Joshua Harlow proposed openstack/tooz: Avoid using a thread local token storage https://review.openstack.org/176133 | 00:05 |
openstackgerrit | Joshua Harlow proposed openstack/tooz: Avoid using a thread local token storage https://review.openstack.org/176133 | 00:06 |
*** tsekiyam_ has joined #openstack-oslo | 00:14 | |
*** tsekiyama has quit IRC | 00:18 | |
*** tsekiyam_ has quit IRC | 00:19 | |
*** mtanino has quit IRC | 00:19 | |
openstackgerrit | Joshua Harlow proposed openstack/tooz: Heartbeat on acquired locks copy https://review.openstack.org/176142 | 00:23 |
*** achanda has joined #openstack-oslo | 00:23 | |
*** david-lyle has quit IRC | 00:24 | |
*** david-lyle has joined #openstack-oslo | 00:27 | |
*** sputnik13 has quit IRC | 00:27 | |
*** achanda has quit IRC | 00:28 | |
*** jaypipes has quit IRC | 00:37 | |
*** AAzza_afk has joined #openstack-oslo | 00:43 | |
*** AAzza has quit IRC | 00:43 | |
*** AAzza_afk is now known as AAzza | 00:43 | |
*** prad has quit IRC | 00:53 | |
bknudson | openstack/common/service.py seems to be doing a lot of work in the child just to catch SIGTERM so that it can exit with 1 rather than a signal indicator... | 00:55 |
bknudson | anyone have a problem with having the child just not catch SIGTERM and go away on the signal? | 00:56 |
*** browne has quit IRC | 01:00 | |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Retain chain of missing dependencies (WIP) https://review.openstack.org/176148 | 01:02 |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Retain chain of missing dependencies (WIP) https://review.openstack.org/176148 | 01:03 |
*** prad has joined #openstack-oslo | 01:08 | |
openstackgerrit | Brant Knudson proposed openstack/oslo-incubator: service child process normal SIGTERM exit https://review.openstack.org/176151 | 01:14 |
Kennan | hi :bknudson | 01:32 |
Kennan | there ? | 01:32 |
Kennan | :krotscheck ? | 01:35 |
*** flwang has quit IRC | 01:38 | |
*** zzzeek has quit IRC | 01:48 | |
*** flwang has joined #openstack-oslo | 01:50 | |
*** flwang has left #openstack-oslo | 01:51 | |
*** jamesllondon has quit IRC | 01:53 | |
*** zzzeek has joined #openstack-oslo | 01:54 | |
*** zzzeek has quit IRC | 01:54 | |
*** jamesllondon has joined #openstack-oslo | 01:54 | |
*** stevemar has joined #openstack-oslo | 01:56 | |
*** gtt116__ has quit IRC | 01:59 | |
*** gtt116 has joined #openstack-oslo | 01:59 | |
*** browne has joined #openstack-oslo | 02:01 | |
*** harlowja is now known as harlowja_away | 02:03 | |
*** _amrith_ is now known as amrith | 02:05 | |
*** alexpilotti has quit IRC | 02:27 | |
*** ChuckC has joined #openstack-oslo | 02:29 | |
*** jamielennox is now known as jamielennox|away | 02:39 | |
*** sputnik13 has joined #openstack-oslo | 02:39 | |
*** sputnik13 has quit IRC | 02:44 | |
*** sputnik13 has joined #openstack-oslo | 02:44 | |
*** sputnik13 has quit IRC | 02:47 | |
*** amrith is now known as _amrith_ | 03:06 | |
*** sputnik13 has joined #openstack-oslo | 03:14 | |
*** sputnik13 has quit IRC | 03:14 | |
*** arnaud___ has joined #openstack-oslo | 03:50 | |
*** joesavak has joined #openstack-oslo | 04:16 | |
*** enikanorov__ has quit IRC | 04:21 | |
*** joesavak has quit IRC | 04:29 | |
*** sdake_ has joined #openstack-oslo | 04:32 | |
*** prad has quit IRC | 04:45 | |
*** prad has joined #openstack-oslo | 05:04 | |
*** inc0 has joined #openstack-oslo | 05:06 | |
*** nkrinner has joined #openstack-oslo | 05:20 | |
*** yamahata has joined #openstack-oslo | 05:29 | |
*** sdake_ has quit IRC | 05:42 | |
*** haigang has joined #openstack-oslo | 05:44 | |
*** achanda has joined #openstack-oslo | 05:49 | |
*** arnaud___ has quit IRC | 06:05 | |
*** jamesllondon has quit IRC | 06:10 | |
*** achanda has quit IRC | 06:20 | |
*** achanda has joined #openstack-oslo | 06:21 | |
*** stevemar has quit IRC | 06:31 | |
*** sdake has joined #openstack-oslo | 06:35 | |
*** jaosorior has joined #openstack-oslo | 06:54 | |
*** browne has quit IRC | 06:58 | |
*** gtt116_ has joined #openstack-oslo | 07:02 | |
*** shardy has joined #openstack-oslo | 07:04 | |
*** gtt116 has quit IRC | 07:05 | |
*** achanda has quit IRC | 07:29 | |
*** liusheng has joined #openstack-oslo | 07:30 | |
*** achanda_ has joined #openstack-oslo | 07:32 | |
*** sdake has quit IRC | 07:35 | |
*** andreykurilin__ has joined #openstack-oslo | 07:44 | |
*** achanda_ has quit IRC | 07:45 | |
*** sdake has joined #openstack-oslo | 07:48 | |
*** sdake has quit IRC | 08:00 | |
*** yamahata has quit IRC | 08:04 | |
*** alexpilotti has joined #openstack-oslo | 08:04 | |
*** ozamiatin has joined #openstack-oslo | 08:11 | |
*** rushiagr_away is now known as rushiagr | 08:13 | |
*** ajo has joined #openstack-oslo | 08:18 | |
*** jamespage has quit IRC | 08:24 | |
*** jamespage has joined #openstack-oslo | 08:24 | |
*** jamespage has quit IRC | 08:24 | |
*** jamespage has joined #openstack-oslo | 08:24 | |
*** ccrouch has quit IRC | 08:31 | |
*** prad has quit IRC | 08:32 | |
*** haigang has quit IRC | 08:32 | |
*** prad has joined #openstack-oslo | 08:34 | |
*** haigang has joined #openstack-oslo | 08:41 | |
*** gtt116 has joined #openstack-oslo | 08:45 | |
*** gtt116_ has quit IRC | 08:45 | |
*** haigang has quit IRC | 08:46 | |
*** haigang has joined #openstack-oslo | 08:47 | |
*** andreykurilin__ has quit IRC | 08:49 | |
*** andreykurilin__ has joined #openstack-oslo | 08:50 | |
*** haigang has quit IRC | 08:52 | |
*** prad has quit IRC | 08:53 | |
*** sheeprine has quit IRC | 08:56 | |
*** sputnik13 has joined #openstack-oslo | 08:58 | |
*** haigang has joined #openstack-oslo | 08:58 | |
*** prad has joined #openstack-oslo | 08:59 | |
*** sheeprine has joined #openstack-oslo | 09:00 | |
*** ihrachyshka has joined #openstack-oslo | 09:01 | |
*** sputnik13 has quit IRC | 09:02 | |
*** arnaud___ has joined #openstack-oslo | 09:06 | |
*** nkrinner has quit IRC | 09:07 | |
*** nkrinner has joined #openstack-oslo | 09:09 | |
*** arnaud___ has quit IRC | 09:10 | |
*** rushiagr is now known as rushiagr_away | 09:17 | |
*** Kennan2 has joined #openstack-oslo | 09:19 | |
*** Kennan has quit IRC | 09:19 | |
*** i159 has joined #openstack-oslo | 09:21 | |
*** e0ne has joined #openstack-oslo | 09:30 | |
*** rushiagr_away is now known as rushiagr | 09:37 | |
*** ajo has quit IRC | 09:41 | |
*** dguitarbite has joined #openstack-oslo | 09:45 | |
*** ajo has joined #openstack-oslo | 09:46 | |
*** shardy_ has joined #openstack-oslo | 09:46 | |
*** shardy has quit IRC | 09:48 | |
*** shardy_ has quit IRC | 09:52 | |
*** shardy has joined #openstack-oslo | 09:52 | |
*** ajo has quit IRC | 09:58 | |
*** inc0 has quit IRC | 10:00 | |
*** ozamiatin has quit IRC | 10:01 | |
*** andreykurilin__ has quit IRC | 10:04 | |
*** ajo has joined #openstack-oslo | 10:15 | |
*** inc0 has joined #openstack-oslo | 10:22 | |
*** inc0 has quit IRC | 10:25 | |
*** inc0_ has joined #openstack-oslo | 10:25 | |
*** _amrith_ is now known as amrith | 10:27 | |
*** rushiagr is now known as rushiagr_away | 10:48 | |
*** e0ne is now known as e0ne_ | 10:51 | |
*** inc0_ has quit IRC | 10:55 | |
*** e0ne_ is now known as e0ne | 10:55 | |
*** haigang has quit IRC | 10:58 | |
*** Kennan2 has quit IRC | 11:08 | |
*** haigang has joined #openstack-oslo | 11:08 | |
*** Kennan has joined #openstack-oslo | 11:23 | |
*** e0ne is now known as e0ne_ | 11:27 | |
*** shardy has quit IRC | 11:28 | |
*** haypo has joined #openstack-oslo | 11:32 | |
*** amrith is now known as _amrith_ | 11:34 | |
*** e0ne_ has quit IRC | 11:38 | |
*** bknudson has quit IRC | 11:40 | |
*** inc0_ has joined #openstack-oslo | 11:50 | |
*** inc0__ has joined #openstack-oslo | 11:52 | |
*** inc0_ has quit IRC | 11:56 | |
*** e0ne has joined #openstack-oslo | 11:58 | |
*** haigang has quit IRC | 12:02 | |
*** haigang has joined #openstack-oslo | 12:03 | |
*** inc0__ has quit IRC | 12:03 | |
*** inc0__ has joined #openstack-oslo | 12:04 | |
*** inc0__ has quit IRC | 12:11 | |
*** inc0 has joined #openstack-oslo | 12:12 | |
*** haigang has quit IRC | 12:12 | |
*** jaypipes has joined #openstack-oslo | 12:15 | |
*** cdent has joined #openstack-oslo | 12:18 | |
*** ozamiatin has joined #openstack-oslo | 12:19 | |
*** rushiagr_away is now known as rushiagr | 12:39 | |
*** e0ne is now known as e0ne_ | 12:41 | |
*** gordc has joined #openstack-oslo | 12:41 | |
*** inc0 has quit IRC | 12:42 | |
*** bknudson has joined #openstack-oslo | 12:43 | |
*** kgiusti has joined #openstack-oslo | 12:46 | |
*** e0ne_ is now known as e0ne | 12:46 | |
ihrachyshka | flaper87, who's managing oslo.log? I don't see any oslo-log-core team in gerrit. | 12:52 |
dhellmann | jd__: can you join #openstack-relmgr-office to discuss https://review.openstack.org/#/c/175851, please? | 12:53 |
*** zzzeek has joined #openstack-oslo | 13:09 | |
*** rushiagr is now known as rushiagr_away | 13:12 | |
openstackgerrit | Merged openstack/oslo-incubator: Revert "Revert "Revert "Optimization of waiting subprocesses in ProcessLauncher""" https://review.openstack.org/175851 | 13:13 |
*** yamahata has joined #openstack-oslo | 13:19 | |
*** mriedem_away is now known as mriedem | 13:22 | |
dhellmann | jd__: nevermind, sdague filled me in | 13:22 |
jd__ | dhellmann: ok sorry I was AFK | 13:24 |
dhellmann | jd__: that's what I figured when I looked at the clock; np | 13:24 |
dhellmann | I was confused about the number of reverts and what the original 'optimization' had been | 13:24 |
jd__ | dhellmann: are we sure that's the culprit finally? | 13:25 |
dhellmann | jd__: I think it is one of a couple. We want this for keystone, apparently | 13:25 |
jd__ | dhellmann: sigh :| ok! | 13:25 |
*** jungleboyj has quit IRC | 13:29 | |
*** jecarey has joined #openstack-oslo | 13:29 | |
*** stevemar has joined #openstack-oslo | 13:29 | |
*** andreykurilin__ has joined #openstack-oslo | 13:33 | |
*** stevemar has quit IRC | 13:37 | |
openstackgerrit | Ihar Hrachyshka proposed openstack-dev/hacking: Add support for flake8 off_by_default for optional checks https://review.openstack.org/134052 | 13:41 |
openstackgerrit | Ihar Hrachyshka proposed openstack-dev/hacking: Add support for flake8 off_by_default for optional checks https://review.openstack.org/134052 | 13:42 |
openstackgerrit | Ihar Hrachyshka proposed openstack-dev/hacking: Add support for flake8 off_by_default for optional checks https://review.openstack.org/134052 | 13:43 |
*** _amrith_ is now known as amrith | 13:44 | |
*** e0ne is now known as e0ne_ | 13:46 | |
*** amotoki has joined #openstack-oslo | 13:49 | |
haypo | " Merged openstack/oslo-incubator: Revert "Revert "Revert ..." hmm, someone fell into an infinite loop | 13:50 |
*** e0ne_ has quit IRC | 13:56 | |
*** e0ne has joined #openstack-oslo | 13:59 | |
*** nkrinner has quit IRC | 14:01 | |
*** tsekiyama has joined #openstack-oslo | 14:01 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 14:05 | |
*** stevemar has joined #openstack-oslo | 14:07 | |
*** tsekiyama has quit IRC | 14:17 | |
*** salv-orlando has quit IRC | 14:18 | |
*** jungleboyj has joined #openstack-oslo | 14:19 | |
*** ChuckC has quit IRC | 14:19 | |
*** tsekiyama has joined #openstack-oslo | 14:20 | |
*** tsekiyama has quit IRC | 14:20 | |
*** tsekiyama has joined #openstack-oslo | 14:21 | |
*** mtanino has joined #openstack-oslo | 14:21 | |
*** stpierre has joined #openstack-oslo | 14:25 | |
*** stevemar has quit IRC | 14:26 | |
ozamiatin | dhellmann, hi, could you please give me a hint where CALL+fanout used? | 14:28 |
dhellmann | ozamiatin: have you looked in nova? I think it used to be used there. sileht may know better | 14:28 |
Kiall | zzzeek: about? Had a SQLA and/or olso.db question and I'm striking out with Google.... DB Connections as checked in/out of the connection pool, I seem to remember there being a delay between reuse of connections.. i.e. a connection can't be used for X ms after it's checked into the pool.. am I imagining that? | 14:29 |
zzzeek | moment | 14:29 |
Kiall | thanks :) | 14:29 |
ozamiatin | dhellmann, i saw a direct call fanout was not discovered yet ) | 14:30 |
sileht | ozamiatin, https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/rpc/client.py#L140 | 14:30 |
sileht | ozamiatin, this is not possible with the public API | 14:31 |
sileht | ozamiatin, rabbit driver doesn't implement that too | 14:31 |
ozamiatin | sileht, thanks! that's what i assumed it should be :) | 14:31 |
haypo | dhellmann, hi. do you know when patches on requirements will be accepted again (i would like to upgrade eventlet)? | 14:32 |
dhellmann | haypo: not until the release is done | 14:32 |
haypo | dhellmann, ok. so it should be short ;) | 14:33 |
dhellmann | haypo: no idea, at the rate things are going | 14:33 |
openstackgerrit | Ihar Hrachyshka proposed openstack/oslo.context: Add user_name and project_name to context object https://review.openstack.org/176333 | 14:36 |
ihrachyshka | dhellmann, weren't you supposed to be on vacation? :) | 14:37 |
ihrachyshka | dhellmann, so who's managing oslo repos that do not have separate oslo-$project-core teams? | 14:38 |
ihrachyshka | is it oslo-core? | 14:38 |
dhellmann | ihrachyshka: dims is out this week, I was just out Monday | 14:38 |
ihrachyshka | I see | 14:39 |
dhellmann | ihrachyshka: yes, if there's not a separate core team yet, it's just oslo-core. We have a few repos where we should probably create separate core teams | 14:39 |
*** sdake has joined #openstack-oslo | 14:43 | |
*** yamahata has quit IRC | 14:43 | |
*** yamahata has joined #openstack-oslo | 14:44 | |
*** sdake_ has joined #openstack-oslo | 14:44 | |
zzzeek | Kiall: oh you're still waiting, sorry :) | 14:44 |
*** jungleboyj has quit IRC | 14:44 | |
zzzeek | Kiall: um, that is not true. A connection that is checked in is ready to go again immediately | 14:44 |
Kiall | zzzeek: Humm, Ok.. I'm imagining it so.. | 14:45 |
zzzeek | Kiall: there’s a timeout that causes a connection that is X seconds old to be closed and reopened before use, that’s it | 14:45 |
zzzeek | openstack sets that timeout to 3600 | 14:45 |
*** jungleboyj has joined #openstack-oslo | 14:45 | |
Kiall | zzzeek: Yea, we're hitting an issue where, no matter how large we set the pool size (and mysql connection limits), we're consuming them ALL and ending up with a QueuePool timeout :( Trying to get a handle on what's happening, and noticed we have a very SQL-heavy method which is grabbing a new connection for each and every query, but serially, so one at a time. Clearly a bug.. But, without a reuse delay, it likely isn't the cause of the issue | 14:47 |
Kiall | I'm looking into! | 14:47 |
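The per-query checkout pattern Kiall describes, and the obvious repair, can be sketched like this (the table and engine are made up for illustration; this is not the project's actual code):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (x INTEGER)"))


# Anti-pattern: every statement checks a connection out of the pool,
# runs, and checks it back in -- pool churn once per query.
def churny_inserts(n):
    for i in range(n):
        with engine.begin() as conn:      # new checkout every iteration
            conn.execute(text("INSERT INTO t (x) VALUES (:x)"), {"x": i})


# Fix: hold one connection for the whole unit of work.
def batched_inserts(n):
    with engine.begin() as conn:          # single checkout
        for i in range(n):
            conn.execute(text("INSERT INTO t (x) VALUES (:x)"), {"x": i})


churny_inserts(5)
batched_inserts(5)
with engine.connect() as conn:
    total = conn.execute(text("SELECT COUNT(*) FROM t")).scalar()
print(total)  # 10
```

Serial per-query checkouts alone shouldn't exhaust a pool, since each one is returned before the next is taken, but combined with many concurrent requests the churn makes QueuePool timeouts much easier to hit.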
*** sdake has quit IRC | 14:48 | |
*** browne has joined #openstack-oslo | 14:50 | |
*** jungleboyj has quit IRC | 14:52 | |
*** jungleboyj has joined #openstack-oslo | 14:53 | |
*** russellb has quit IRC | 15:00 | |
*** amrith is now known as _amrith_ | 15:02 | |
*** russellb has joined #openstack-oslo | 15:05 | |
*** ozamiatin has quit IRC | 15:12 | |
*** yassine_ has joined #openstack-oslo | 15:12 | |
openstackgerrit | Victor Stinner proposed openstack/oslo.db: Add Python 3 classifiers to setup.cfg https://review.openstack.org/176355 | 15:13 |
openstackgerrit | Victor Stinner proposed openstack/oslo.db: Add Python 3 classifiers to setup.cfg https://review.openstack.org/176355 | 15:15 |
*** andreykurilin__ has quit IRC | 15:18 | |
*** yassine_ has quit IRC | 15:22 | |
*** yassine_ has joined #openstack-oslo | 15:22 | |
*** i159 has quit IRC | 15:22 | |
*** yassine_ has quit IRC | 15:24 | |
*** lefais has joined #openstack-oslo | 15:24 | |
*** lefais has quit IRC | 15:25 | |
*** yassine_ has joined #openstack-oslo | 15:25 | |
*** ihrachyshka has quit IRC | 15:35 | |
*** e0ne has quit IRC | 15:36 | |
*** e0ne has joined #openstack-oslo | 15:37 | |
*** yamahata has quit IRC | 15:43 | |
ttx | jd__, dhellmann: would like to clear some confusion around https://bugs.launchpad.net/oslo-incubator/+bug/1446583 | 15:46 |
openstack | Launchpad bug 1446583 in Keystone "services no longer reliably stop in stable/kilo" [Critical,In progress] - Assigned to Julien Danjou (jdanjou) | 15:46 |
ttx | if you happen to be around | 15:46 |
jd__ | ttx: yep | 15:46 |
ttx | so, fix is in oslo-incubator | 15:47 |
ttx | jd__: you proposed it for keystone/master and keystone/kilo | 15:47 |
ttx | dhellmann proposed it for keystone/kilo | 15:47 |
ttx | comments on the bug say it's useless | 15:47 |
ttx | unclear if anything else is affected | 15:48 |
*** sdake_ has quit IRC | 15:48 | |
jd__ | so far that's also what I know | 15:48 |
ttx | maybe sdague has extra info | 15:48 |
jd__ | bknudson seems to have debugged it further | 15:48 |
ttx | just wondering what is affected and what fixes it | 15:49 |
ttx | in the meantime I'm holding off on the keystone patches | 15:49 |
bknudson | I don't know why https://review.openstack.org/#/c/175851/ was proposed as a solution? Maybe there was some discussion that I missed. | 15:50 |
bknudson | was it because of the timing of when it was committed? | 15:51 |
ttx | that was jd__'s solution to the bug | 15:51 |
bknudson | how was it verified? | 15:51 |
bknudson | I was able to recreate a shutdown hang locally by just connecting nc to keystone and then it wouldn't shut down. | 15:52 |
bknudson | I assume that's the problem that grenade was seeing. | 15:52 |
bknudson | although maybe it was something else. | 15:53 |
jd__ | I just pushed the patches because dhellmann asked me to do so and based on a git log and changes done to service.py that _seemed_ like the culprit :) | 15:54 |
jd__ | but it might also be a totally different change | 15:54 |
jd__ | someone should do a git bisect? | 15:54 |
ttx | bknudson: and https://review.openstack.org/#/c/176151/ is the patch for it ? | 16:01 |
bknudson | ttx: https://review.openstack.org/#/c/176151/ worked for me locally... keystone would shut down if there was a connection open. | 16:02 |
bknudson | also, I think it's the right thing to do. (KISS principle) | 16:02 |
*** browne has quit IRC | 16:08 | |
*** arnaud___ has joined #openstack-oslo | 16:10 | |
*** yassine_ has quit IRC | 16:12 | |
dhellmann | bknudson: did you talk to asalkeld about why he had the signal handler in there like that in the first place? I agree it's curious, but I wonder if there is a difference in behavior in some circumstances | 16:12 |
bknudson | dhellmann: I didn't | 16:12 |
bknudson | I'll add him to the review. | 16:13 |
bknudson | I don't know how good the coverage is... obviously the change I made passes so the child exit status isn't tested. | 16:13 |
*** viktors is now known as viktors|afk | 16:14 | |
dhellmann | yeah | 16:14 |
dhellmann | bknudson: are we not running keystone under apache? why is this even an issue? | 16:14 |
bknudson | dhellmann: we deprecated running eventlet keystone in K. | 16:15 |
dhellmann | bknudson: I guess we're still testing that way, though? | 16:15 |
bknudson | so before that devstack was always running keystone with eventlet. | 15:15 |
bknudson | I think the postgres test runs with eventlet | 16:16 |
dhellmann | ah, so some jobs | 16:16 |
dhellmann | ok | 16:16 |
bknudson | and we have a work item to provide a script for grenade to "migrate" config from eventlet to httpd. | 16:16 |
bknudson | morganfainberg: ^ was talking about this yesterday. | 16:16 |
dhellmann | k | 16:16 |
bknudson | I thought we didn't need one, but whatever. | 16:17 |
dhellmann | at this point I'm just trying to understand how to fix the issue we have in the gate | 16:17 |
* morganfainberg reads scroll back. | 16:17 | |
bknudson | dhellmann: I think https://review.openstack.org/#/c/176151/ , applied to keystone, will do it. | 16:17 |
bknudson | it is curious that this seems to have only become a real problem recently... I've been seeing it on my local system for a long time. | 16:18 |
morganfainberg | bknudson: interaction with a lib somewhere is my guess. | 16:18 |
bknudson | most shutdown scripts are more aggressive about getting rid of unwanted processes. | 16:18 |
morganfainberg | Or a change in eventlet. | 16:18 |
morganfainberg | yeah, but not shutting down cleanly is suboptimal. Your app should shut down if it can. If it almost never does, that is a valid issue. Kill -9, while valid, is ugly | 16:19 |
dhellmann | bknudson: have you proposed a version of 176151 to keystone, too? | 16:20 |
bknudson | dhellmann: I haven't. I can do that easy enough. | 16:20 |
dhellmann | bknudson: yeah, if you could do that and mark it as depending on the version in the incubator (to keep us honest) that would be good. We should also test it in the stable/kilo branch of keystone, I suppose | 16:21 |
dhellmann | it seems innocuous, and the test coverage isn't checking exit codes but there are tests for killing subprocesses | 16:21 |
dhellmann | still, I'd like to be able to see if it fixes the issues sdague had merging https://review.openstack.org/#/c/175391/ | 16:22 |
dhellmann | ttx: how does this sound? ^^ | 16:22 |
bknudson | do you only want to see it in keystone stable/juno? | 16:22 |
dhellmann | bknudson: that's the one I care about, but it should go into keystone master, too | 16:22 |
bknudson | or do I have to go through the whole rigamarole of backporting to oslo-incubator stable/juno , keystone master, etc. | 16:22 |
dhellmann | oh, no, not oslo-incubator stable | 16:23 |
dhellmann | oslo-incubator master -> keystone master -> keystone stable/kilo | 16:23 |
bknudson | oh, right, it's kilo. | 16:23 |
dhellmann | yeah, we don't have a stable/kilo for the incubator yet | 16:23 |
bknudson | I didn't think this was just keystone , cinder is also implicated. | 16:24 |
bknudson | did somebody want to try the fix on cinder? | 16:24 |
dhellmann | I don't even know if we're seeing the issue with cinder any more, or if it was the same issue | 16:25 |
ttx | dhellmann: sounds good | 16:25 |
dhellmann | bknudson: I believe, from what sdague has said, that testing https://review.openstack.org/#/c/175391 with your backport will tell us if keystone is working ok -- although that's why we thought the other patch fixed the problem :-/ | 16:25 |
ttx | oslo-incubator master -> keystone master -> keystone stable/kilo | 16:25 |
bknudson | I need to use a different change id in keystone since otherwise I don't see how depends-on would work. | 16:28 |
dhellmann | bknudson: yes, but isn't that what we normally do with syncs? | 16:29 |
bknudson | I'm doing a "cherry-pick" this time... I could do a full sync. | 16:30 |
dhellmann | bknudson: you can sync just that module, hang on | 16:30 |
dhellmann | bknudson: ./update.sh --module service --base keystone ../keystone | 16:31 |
dhellmann | run that from your incubator checkout and make the last argument the path to the keystone directory | 16:31 |
bknudson | actually, that's the only module that's changed, so a full sync is the same result. | 16:31 |
dhellmann | k | 16:31 |
bknudson | I'll just do a full sync then. | 16:31 |
dhellmann | bknudson, ttx: I'm keeping some notes about all of these patches in https://etherpad.openstack.org/p/april-21-gate-wedge | 16:31 |
ttx | ok, I'll disappear for a few hours, beer/dinner | 16:32 |
bknudson | dhellmann: here's the sync to master: https://review.openstack.org/#/c/176391/ | 16:35 |
bknudson | and here's stable/kilo: https://review.openstack.org/#/c/176392/ | 16:36 |
*** haypo has quit IRC | 16:36 | |
bknudson | cherry-pick | 16:36 |
dhellmann | thanks, bknudson | 16:37 |
*** andreykurilin__ has joined #openstack-oslo | 16:37 | |
bknudson | no problem. | 16:37 |
*** arnaud___ has quit IRC | 16:39 | |
dhellmann | bknudson, ttx: I've set up https://review.openstack.org/#/c/175391/ to run with the new service change. I'm going to grab lunch while zuul does its thing | 16:47 |
*** subscope_ has joined #openstack-oslo | 16:48 | |
*** browne has joined #openstack-oslo | 16:50 | |
*** e0ne has quit IRC | 16:50 | |
*** jungleboyj has quit IRC | 16:50 | |
* morganfainberg watches zuul crunch away. | 16:56 | |
*** harlowja_away is now known as harlowja | 16:59 | |
*** jaosorior has quit IRC | 17:02 | |
*** andreykurilin__ has quit IRC | 17:03 | |
*** jungleboyj has joined #openstack-oslo | 17:05 | |
*** subscope_ has quit IRC | 17:07 | |
*** subscope_ has joined #openstack-oslo | 17:07 | |
*** salv-orlando has joined #openstack-oslo | 17:07 | |
*** russellb has quit IRC | 17:17 | |
*** russellb has joined #openstack-oslo | 17:21 | |
*** sdake has joined #openstack-oslo | 17:24 | |
*** ChuckC has joined #openstack-oslo | 17:26 | |
*** sdake has quit IRC | 17:29 | |
*** sdake has joined #openstack-oslo | 17:30 | |
-openstackstatus- NOTICE: gerrit is restarting to clear hung stream-events tasks. any review events between 16:48 and 17:32 utc will need to be rechecked or have their approval votes reapplied to trigger testing in zuul | 17:32 | |
*** russellb has quit IRC | 17:35 | |
*** russellb has joined #openstack-oslo | 17:40 | |
*** david-lyle has quit IRC | 17:42 | |
*** yamahata has joined #openstack-oslo | 17:50 | |
*** achanda has joined #openstack-oslo | 17:51 | |
*** e0ne has joined #openstack-oslo | 17:59 | |
*** yamahata has quit IRC | 18:03 | |
openstackgerrit | Joshua Harlow proposed openstack/tooz: Have run_watchers take a timeout and respect it https://review.openstack.org/176417 | 18:09 |
*** alexpilotti has quit IRC | 18:25 | |
*** yamahata has joined #openstack-oslo | 18:33 | |
*** yamahata has quit IRC | 18:40 | |
*** _amrith_ is now known as amrith | 18:43 | |
*** bkc has joined #openstack-oslo | 18:47 | |
*** stevemar has joined #openstack-oslo | 18:50 | |
*** pblaho_ has joined #openstack-oslo | 19:02 | |
*** pblaho has quit IRC | 19:05 | |
*** pblaho__ has joined #openstack-oslo | 19:10 | |
openstackgerrit | Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions https://review.openstack.org/176439 | 19:10 |
*** pblaho_ has quit IRC | 19:10 | |
openstackgerrit | Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions https://review.openstack.org/176439 | 19:11 |
openstackgerrit | Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions https://review.openstack.org/176439 | 19:11 |
*** stevemar has quit IRC | 19:15 | |
openstackgerrit | Joshua Harlow proposed openstack/oslo.utils: Add the start of a futures model for async work/additions https://review.openstack.org/176439 | 19:17 |
sdague | ok, back now for a bit | 19:17 |
sdague | cinder didn't ever take the patch in question, and it fails a lot less than keystone | 19:17 |
*** achanda has quit IRC | 19:22 | |
*** jaypipes has quit IRC | 19:24 | |
*** amrith is now known as _amrith_ | 19:25 | |
*** ozamiatin has joined #openstack-oslo | 19:25 | |
bknudson | give me a few minutes and I'll try my nc test on cinder. | 19:28 |
*** achanda has joined #openstack-oslo | 19:31 | |
sdague | bknudson: also, do you have a narrower test case for this that could be put into the oslo tree for testing? | 19:31 |
sdague | as you have apparently gotten closer to the heart of it | 19:32 |
bknudson | sdague: I'm sure I could come up with a test case, but I also expect it would take a while... I'd probably have to fork&exec an eventlet server and send it a signal. | 19:33 |
bknudson | and open a tcp connection to it. | 19:33 |
sdague | bknudson: take a while to run, or to write? | 19:34 |
bknudson | take a while to write. | 19:34 |
sdague | yeh, so given that this code in oslo was revert / revert / revert it feels like actually writing the correct test case would be really useful | 19:35 |
sdague | because otherwise, someone is going to tweak something again in the future in the name of optimization, and break it | 19:35 |
bknudson | I agree. tests are always good. | 19:35 |
sdague | bknudson: don't get me wrong, I don't want the test to hold up the fix | 19:36 |
sdague | I just want a promise of the test coming as soon as possible | 19:36 |
bknudson | I can work on it. | 19:36 |
*** nkrinner has joined #openstack-oslo | 19:37 | |
sdague | thanks | 19:40 |
ttx | sounds worthwhile. I don't really like critical issues detected at D-8 | 19:40 |
* dhellmann catches up with sdague, ttx, bknudson | 19:41 | |
sdague | morganfainberg: so... who's going to approve - https://review.openstack.org/#/c/176392/ ? | 19:41 |
sdague | oh, that's blocked on oslo change landing? | 19:42 |
dhellmann | sorry, I've been fighting with the expense report system for the last 45 min or so, and am just catching up with our status | 19:42 |
morganfainberg | sdague, yes. | 19:42 |
morganfainberg | sdague, i'll approve it as soon as oslo change lands. | 19:42 |
dhellmann | if we're confident of this change fixing things, I can approve the oslo side | 19:42 |
morganfainberg | and we are happy with the change that is. | 19:42 |
dhellmann | sdague: it looks like your patch works with bknudson's patch, if I'm looking at https://review.openstack.org/#/c/175391/ correctly | 19:43 |
sdague | bknudson seems to have the narrower manual reproduce, and likes this, so given the time crunch, I'd say lets do it, and get some tests together for it later | 19:43 |
dhellmann | ++ | 19:43 |
dhellmann | +2a on https://review.openstack.org/#/c/176151/ | 19:44 |
sdague | dhellmann: yeh, I hit recheck again to give us another round of test results | 19:44 |
*** stevemar has joined #openstack-oslo | 19:44 | |
dhellmann | sdague: sounds good | 19:44 |
morganfainberg | getting keystone master one +2a | 19:44 |
morganfainberg | right now. | 19:44 |
dhellmann | fwiw, your patch had passed before with the *other* fix, so I don't know if either is actually related :-/ | 19:44 |
bknudson | btw - I was able to recreate this with cinder... "nc localhost 8776" and then c-api fails to stop. | 19:45 |
dhellmann | that is, your patch may not have been as good a test case as we thought, sdague | 19:45 |
sdague | dhellmann: well the other patch puts it back to stable/juno code, which may still have had a race | 19:45 |
sdague | but narrower | 19:45 |
dhellmann | yeah | 19:45 |
sdague | dhellmann: well, my patch was the thing that found this the first time, and was what was blocked | 19:45 |
bknudson | you should be able to check the log to see if there were any connections open to keystone when it was trying to stop. | 19:45 |
morganfainberg | sdague, dhellmann, ttx, https://review.openstack.org/#/c/176392/ i don't mind +A this, but i don't have a second stable to +A atm. | 19:45 |
morganfainberg | s/+A/+2 | 19:45 |
bknudson | since we added that bit. | 19:45 |
ttx | on it | 19:46 |
morganfainberg | ttx, thnx | 19:46 |
dhellmann | sdague: yeah, that's what I thought, but it seems odd that 2 different fixes cleared it up so I'm less confident of either as the "right" fix | 19:46 |
ttx | hmm, do we need to revert the revert of the revert of the revert on oslo-incub ? | 19:46 |
morganfainberg | ttx, i .. how many revert of reverts is that? | 19:47 |
dhellmann | ttx: as things stand we do not have the optimization in place, and I think the optimization is a little suspicious, so I'm comfortable leaving it out | 19:47 |
ttx | hm, ok, but we would keep it out of stable/kilo, or in ? | 19:48 |
ttx | i.e. what do we do of https://review.openstack.org/#/c/175859/ ? | 19:48 |
morganfainberg | interesting... stable/kilo ACLs aren't as locked down as stable/juno | 19:49 |
openstackgerrit | Merged openstack/oslo-incubator: service child process normal SIGTERM exit https://review.openstack.org/176151 | 19:49 |
dhellmann | ttx: we should make the service module in keystone stable/kilo match the one in oslo-incubator master | 19:49 |
ttx | morganfainberg: that's because it's pre-release stable/kilo | 19:49 |
bknudson | I don't know how up-to-date cinder is with oslo-incubator... seems to not be up to date. | 19:49 |

dhellmann | I don't know what that means about the specific patches we have open, or if bknudson's sync took care of it | 19:49 |
morganfainberg | ah | 19:49 |
stevemar | dhellmann, there's a patch already? https://review.openstack.org/#/c/176391/ | 19:49 |
* ttx is already tired of explaining why stable/kilo is different from stable/* | 19:50 | |
dhellmann | bknudson: someone from the cinder team is going to need to get involved in submitting and approving patches | 19:50 |
ttx | I already regret proposed/* | 19:50 |
stevemar | err https://review.openstack.org/#/c/176392/1 | 19:50 |
morganfainberg | ttx, you could have just ignored it :P | 19:50 |
stevemar | thanks ttx :P | 19:50 |
dhellmann | stevemar: that's master, we want stable/kilo to match too | 19:50 |
ttx | at least nobody was asking why it was special. And nobody was pushing random patches to it | 19:50 |
morganfainberg | ttx, it didn't matter to me [i assumed it was the same] | 19:50 |
stevemar | dhellmann, bad copy pasta, see other msg | 19:50 |
stevemar | dhellmann, it's merging now :) | 19:50 |
dhellmann | stevemar: cool | 19:51 |
morganfainberg | ttx: ok so that is the last outstanding bug for kilo rc2 doing a 2nd check over new bugs | 19:52 |
morganfainberg | making sure nothing else came in | 19:52 |
ttx | morganfainberg: we might want https://review.openstack.org/#/c/175859 if bknudson's patch doesn't include it | 19:52 |
morganfainberg | bknudson, does yours include that? ^ | 19:53 |
morganfainberg | i think it did. | 19:53 |
bknudson | my patch to keystone included everything in oslo-incubator | 19:53 |
ttx | looks like it does | 19:53 |
morganfainberg | ok we're good then | 19:53 |
bknudson | so if it's merged into oslo-incubator master then it's in keystone | 19:53 |
ttx | ok, so we can abandon the other one | 19:53 |
morganfainberg | ttx: no new bugs that look like RC blockers. i'll slate getting request-ids into the context for logging as a backport but it's a much larger change for us than other projects. | 19:54 |
*** openstackgerrit has quit IRC | 19:54 | |
ttx | morganfainberg: RC2 will have to wait for requirements syncs to be reenabled so we merge the last ones | 19:54 |
*** openstackgerrit has joined #openstack-oslo | 19:55 | |
morganfainberg | ttx: once that merges to stable/kilo i'm happy with RC2, so when reqs lands we're set | 19:55 |
bknudson | I tried my oslo-incubator patch in cinder and it has the same effect -- exits on SIGINT even when there's a client connected. | 19:55 |
ttx | bknudson: so we might want the same patch there , | 19:56 |
ttx | ? | 19:56 |
bknudson | y, if you're willing to take it. | 19:57 |
bknudson | I'm not going to try to do a sync of the whole part. | 19:57 |
bknudson | my patch is only like 5 lines so it's easy to make the change. | 19:57 |
ttx | bah we'll handle the sync if thingee is fine with it | 19:58 |
*** subscope_ has quit IRC | 19:59 | |
bknudson | https://review.openstack.org/#/c/176455/ -- cinder change in master | 19:59 |
dhellmann | ttx: I'm going to abandon https://review.openstack.org/#/c/176082/ as not needed | 20:01 |
bknudson | cherry-picked to stable/kilo -- https://review.openstack.org/#/c/176457/ | 20:01 |
ttx | dhellmann: you fine with cherrypicking the change in cinder rather than wholesale sync ? | 20:02 |
dhellmann | ttx: yeah, at this point let's fix the bug | 20:03 |
ttx | ok | 20:03 |
dhellmann | this code is up for graduation next cycle | 20:03 |
dhellmann | assuming we can find someone to own it | 20:03 |
dhellmann | we're also talking about deprecating it in favor of supervisord | 20:03 |
*** sputnik13 has joined #openstack-oslo | 20:11 | |
*** e0ne has quit IRC | 20:12 | |
*** achanda has quit IRC | 20:12 | |
*** sputnik13 has quit IRC | 20:32 | |
*** sputnik13 has joined #openstack-oslo | 20:34 | |
*** stevemar has quit IRC | 20:41 | |
*** stevemar has joined #openstack-oslo | 20:45 | |
*** achanda has joined #openstack-oslo | 20:53 | |
*** ozamiatin has quit IRC | 20:57 | |
*** nkrinner has quit IRC | 21:00 | |
*** kgiusti has left #openstack-oslo | 21:07 | |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Retain chain of missing dependencies https://review.openstack.org/176148 | 21:10 |
*** stevemar has quit IRC | 21:13 | |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Add + use diagram explaining retry controller area of influence https://review.openstack.org/176496 | 21:22 |
*** openstackgerrit has quit IRC | 21:29 | |
*** openstackgerrit has joined #openstack-oslo | 21:30 | |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Add + use diagram explaining retry controller area of influence https://review.openstack.org/176496 | 21:32 |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Add + use diagram explaining retry controller area of influence https://review.openstack.org/176496 | 21:33 |
*** yamahata has joined #openstack-oslo | 21:34 | |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Test more engine types in argument passing unit test https://review.openstack.org/176502 | 21:42 |
*** sdake has quit IRC | 21:44 | |
*** yamahata has quit IRC | 21:46 | |
*** bknudson has quit IRC | 21:46 | |
openstackgerrit | Joshua Harlow proposed openstack/taskflow: Expose fake filesystem 'join' and 'normpath' https://review.openstack.org/176515 | 21:56 |
*** cdent has quit IRC | 21:56 | |
*** ndipanov has quit IRC | 21:56 | |
*** andreykurilin__ has joined #openstack-oslo | 22:05 | |
*** bknudson has joined #openstack-oslo | 22:05 | |
*** bkc__ has joined #openstack-oslo | 22:08 | |
*** bkc has quit IRC | 22:08 | |
*** bkc__ has quit IRC | 22:11 | |
*** openstackgerrit has quit IRC | 22:11 | |
*** bkc__ has joined #openstack-oslo | 22:11 | |
*** openstackgerrit has joined #openstack-oslo | 22:11 | |
*** jaypipes has joined #openstack-oslo | 22:20 | |
*** hogepodge has quit IRC | 22:27 | |
*** sdake has joined #openstack-oslo | 22:32 | |
*** hogepodge has joined #openstack-oslo | 22:32 | |
lifeless | zigo: https://bugs.launchpad.net/pbr/+bug/1419860 <- could you update please | 22:36 |
openstack | Launchpad bug 1419860 in PBR "Folders inside packages aren't installed" [Undecided,Incomplete] | 22:36 |
lifeless | zigo: and https://bugs.launchpad.net/pbr/+bug/1405101 | 22:37 |
openstack | Launchpad bug 1405101 in PBR "oslo.concurrency doesn't get openstack/common installed correctly" [Undecided,Incomplete] | 22:37 |
zigo | lifeless: As much as I can remember, no, it wasn't fixed. | 22:37 |
zigo | (I mean #1419860) | 22:37 |
zigo | Updating them both. | 22:37 |
lifeless | zigo: it would help to have non-debbuild reproduction instructions | 22:38 |
lifeless | zigo: if thats possible | 22:38 |
zigo | lifeless: I have no idea how to do that ! | 22:38 |
zigo | However, I wrote instructions for upstreams on how to recreate the Debian build env which I use. | 22:38 |
zigo | http://openstack.alioth.debian.org/ | 22:39 |
lifeless | zigo: if that link isn't in the bugs, please add it. :) | 22:39 |
zigo | Will do. | 22:39 |
lifeless | zigo: (I've closed the tabs for now, will be coming back to it in a bit) | 22:39 |
*** pradk has joined #openstack-oslo | 22:41 | |
*** gordc has quit IRC | 22:47 | |
*** andreykurilin__ has quit IRC | 22:59 | |
*** ChuckC has quit IRC | 23:00 | |
*** stevemar has joined #openstack-oslo | 23:01 | |
*** jd__ has quit IRC | 23:08 | |
*** jd__ has joined #openstack-oslo | 23:08 | |
*** ajo has quit IRC | 23:10 | |
*** ChuckC has joined #openstack-oslo | 23:21 | |
*** david-lyle has joined #openstack-oslo | 23:24 | |
*** ChuckC has quit IRC | 23:25 | |
*** ChuckC has joined #openstack-oslo | 23:25 | |
*** mriedem is now known as mriedem_away | 23:26 | |
*** amotoki has quit IRC | 23:27 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 23:27 | |
*** sputnik13 has quit IRC | 23:29 | |
*** sdake has quit IRC | 23:39 | |
*** jecarey has quit IRC | 23:47 | |
* morganfainberg looks at the gate | 23:53 | |
*** tsekiyam_ has joined #openstack-oslo | 23:57 | |
*** tsekiyama has quit IRC | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!