*** rcernin has joined #openstack-keystone | 00:00 | |
*** hamalq has joined #openstack-keystone | 00:56 | |
*** hamalq has quit IRC | 01:01 | |
*** zzzeek has quit IRC | 02:04 | |
*** zzzeek has joined #openstack-keystone | 02:06 | |
*** hamalq has joined #openstack-keystone | 02:57 | |
*** hamalq has quit IRC | 03:01 | |
*** rcernin has quit IRC | 03:06 | |
*** rcernin has joined #openstack-keystone | 03:11 | |
*** whoami-rajat has joined #openstack-keystone | 03:51 | |
*** ricolin has quit IRC | 04:35 | |
*** ricolin has joined #openstack-keystone | 04:48 | |
*** hamalq has joined #openstack-keystone | 04:58 | |
*** hamalq has quit IRC | 05:03 | |
*** Luzi has joined #openstack-keystone | 05:59 | |
*** viks____ has joined #openstack-keystone | 06:11 | |
*** hamalq has joined #openstack-keystone | 06:59 | |
*** hamalq has quit IRC | 07:03 | |
*** bengates has joined #openstack-keystone | 07:14 | |
*** bengates has quit IRC | 07:17 | |
*** bengates has joined #openstack-keystone | 07:18 | |
*** rcernin has quit IRC | 07:29 | |
*** xek_ has joined #openstack-keystone | 07:47 | |
*** xek_ has quit IRC | 07:48 | |
*** bengates_ has joined #openstack-keystone | 08:05 | |
*** bengates has quit IRC | 08:08 | |
*** shyamb has joined #openstack-keystone | 08:09 | |
*** tosky has joined #openstack-keystone | 08:28 | |
*** rcernin has joined #openstack-keystone | 08:48 | |
*** hamalq has joined #openstack-keystone | 09:00 | |
*** rcernin has quit IRC | 09:00 | |
*** hamalq has quit IRC | 09:04 | |
*** xek has joined #openstack-keystone | 09:10 | |
*** shyamb has quit IRC | 09:31 | |
*** shyamb has joined #openstack-keystone | 09:33 | |
*** rcernin has joined #openstack-keystone | 09:52 | |
*** shyamb has quit IRC | 10:03 | |
*** shyamb has joined #openstack-keystone | 10:04 | |
*** rcernin has quit IRC | 10:07 | |
*** rcernin has joined #openstack-keystone | 10:07 | |
*** rcernin has quit IRC | 10:20 | |
*** rcernin has joined #openstack-keystone | 10:27 | |
*** rcernin has quit IRC | 10:38 | |
*** rcernin has joined #openstack-keystone | 10:51 | |
*** hamalq has joined #openstack-keystone | 11:00 | |
*** hamalq has quit IRC | 11:05 | |
*** rcernin has quit IRC | 11:05 | |
*** shyamb has quit IRC | 12:20 | |
*** hamalq has joined #openstack-keystone | 13:01 | |
*** hamalq has quit IRC | 13:06 | |
*** raildo has joined #openstack-keystone | 13:06 | |
*** Luzi has quit IRC | 13:43 | |
*** rcernin has joined #openstack-keystone | 14:01 | |
*** rcernin has quit IRC | 14:06 | |
*** whoami-rajat has quit IRC | 14:09 | |
*** raildo has quit IRC | 14:36 | |
*** whoami-rajat_ has joined #openstack-keystone | 14:44 | |
*** hamalq has joined #openstack-keystone | 15:02 | |
*** xek has quit IRC | 15:05 | |
*** hamalq has quit IRC | 15:07 | |
lbragstad | knikolla ping | 15:36 |
*** timburke has joined #openstack-keystone | 15:41 | |
knikolla | lbragstad: o/ | 15:49 |
lbragstad | knikolla hey - i'm curious if you have an opinion on https://review.opendev.org/c/openstack/keystone/+/738677 | 15:50 |
*** Guest76033 is now known as jroll | 15:50 | |
lbragstad | the problem is that we have a race condition when we hammer the identity API with password updates | 15:50 |
lbragstad | but - a simple fix is to retry the request at the db layer | 15:50 |
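(For reference, a minimal sketch of the kind of DB-layer retry being discussed here, assuming SQLAlchemy's StaleDataError is the exception the race produces; the helper name and shape are illustrative, not the actual change under review:)

```python
# Hypothetical sketch only -- not the actual keystone patch.
# Retry a DB-layer update a few times when SQLAlchemy reports that the row
# changed underneath us (StaleDataError), which is what the race produces.
from sqlalchemy.orm.exc import StaleDataError


def update_with_retry(do_update, attempts=3):
    """Call do_update(), retrying if a concurrent writer made the row stale."""
    for attempt in range(attempts):
        try:
            return do_update()
        except StaleDataError:
            if attempt == attempts - 1:
                raise
            # otherwise loop and re-run the update against fresh row state
```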
lbragstad | but - my initial approach to a unit test isn't going to work because the keystone sql module isn't isolated to specific tests - i think it's shared | 15:51 |
lbragstad | so - when you mock it in one test, like what i'm doing, it has the ability to bleed into other tests causing them to fail | 15:51 |
lbragstad | do you have any thoughts or recommendations on how to handle that? | 15:53 |
knikolla | give me a sec to read through the discussion on the review | 15:55 |
lbragstad | ack | 15:55 |
knikolla | interesting, i thought the sqlite db was in-memory and unique to each thread running a test. | 15:58 |
lbragstad | sqlite might be, but the actual keystone.common.sql.core bits are not - from what i can tell | 15:59 |
lbragstad | so i think the sql.core.session_for_reader()/session_for_writer() bits are shared across test cases | 15:59 |
lbragstad | (when the tests are run in parallel) | 15:59 |
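(An illustrative sketch of the suspected bleed, using a hypothetical stand-in object in place of the shared module-level state in keystone.common.sql.core; this is not keystone's actual test code:)

```python
# Hypothetical stand-in for a module-level global shared by every test
# that runs on the same worker (analogous to keystone.common.sql.core).
from unittest import mock


class _SharedSqlCore:
    def session_for_writer(self):
        return "real session"


shared = _SharedSqlCore()  # one instance seen by every test on this worker


def test_that_starts_a_patch_without_cleanup():
    # Starting a patch on the shared object and never stopping it leaves
    # the mock installed for every later test on this worker.
    patcher = mock.patch.object(shared, "session_for_writer")
    patcher.start()
    # ... exercise the code under test ...
    # patcher.stop() is never called, so the mock bleeds out of this test


def test_that_runs_later_on_the_same_worker():
    # Fails: session_for_writer is still the mock from the previous test.
    assert shared.session_for_writer() == "real session"
```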
knikolla | so you think the global context referenced in that module is what's causing the bleeding? | 16:02 |
lbragstad | i think so | 16:02 |
lbragstad | i'd like to backport this fix, so i'd like to keep it relatively self-contained | 16:03 |
lbragstad | (it affects all supported branches) | 16:03 |
knikolla | pragmatically, i think i'd be fine not adding the unit test | 16:06 |
lbragstad | it's a simple change, but it would be nice to have something to protect against regressions | 16:06 |
lbragstad | at the same time - i'm not sure how invasive a fix to make that module work across tests would be | 16:07 |
lbragstad | (or if it would be sane to backport) | 16:07 |
knikolla | i'm not sure. perhaps it's as simple as wrapping line 1008 in a with statement or something similar. | 16:08 |
knikolla | hmmm, no, that wouldn't change much i think. | 16:09 |
lbragstad | oh - using the context manager for the mock lifespan? | 16:09 |
lbragstad | i can try it | 16:09 |
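(A minimal sketch of the suggestion, again with a hypothetical stand-in for the shared module: scoping the patch with a `with` block guarantees the original attribute is restored when the block exits, even if the test fails:)

```python
from unittest import mock


class _SharedSqlCore:
    def session_for_writer(self):
        return "real session"


shared = _SharedSqlCore()


def test_with_a_scoped_mock():
    # The context manager limits the mock's lifespan to this block, so
    # nothing leaks into later tests on the same worker.
    with mock.patch.object(shared, "session_for_writer") as fake_session:
        fake_session.return_value = "mocked session"
        # ... exercise the retry path against the mocked session ...
        assert shared.session_for_writer() == "mocked session"
    # On exit the original attribute is restored automatically.
    assert shared.session_for_writer() == "real session"
```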
knikolla | yeah, i don't think 240 tests would fail from a race condition bleeding out to other tests being run at the same time | 16:12 |
knikolla | it's more likely something messing up the environment for future unit tests to be run | 16:12 |
*** hamalq has joined #openstack-keystone | 16:15 | |
lbragstad | i wonder if using a context manager just makes the problem less likely to occur? | 16:15 |
knikolla | i think it may solve it | 16:17 |
knikolla | because 240 failed tests seems to imply that this messed up only the tests running on that core | 16:17 |
knikolla | so my hunch is that future tests on that core fail | 16:18 |
knikolla | and that resetting the mock would return things to normal for successive tests | 16:18 |
knikolla | aka, less about tests in parallel, and more about future tests on that thread | 16:18 |
openstackgerrit | Lance Bragstad proposed openstack/keystone master: Retry update_user when sqlalchemy raises StaleDataErrors https://review.opendev.org/c/openstack/keystone/+/738677 | 16:23 |
lbragstad | ^ seems to work for me locally? | 16:23 |
lbragstad | we'll see what zuul says though | 16:23 |
lbragstad | i can run all the sql tests in parallel | 16:23 |
lbragstad | so - nice suggestion knikolla :) | 16:23 |
knikolla | logically it makes sense to me what i described above | 16:23 |
lbragstad | i wonder if the race still exists though and we just pushed it into a much smaller window | 16:25 |
knikolla | i don't think so. a thread can only run one test at a time | 16:25 |
knikolla | and it's returning to normal before the end of the test | 16:26 |
knikolla | the other threads are not messing with their sql sessions | 16:26 |
lbragstad | isn't that what the results here were showing though? https://zuul.opendev.org/t/openstack/build/affa4fa71e44413391a9ac1f153e01d9 | 16:30 |
lbragstad | because the mock was being set on a global object shared across tests via threads? | 16:31 |
*** hamalq has quit IRC | 16:31 | |
*** hamalq has joined #openstack-keystone | 16:31 | |
knikolla | the failures say {7} | 16:32 |
knikolla | so they're on the same thread | 16:33 |
*** hamalq has quit IRC | 16:33 | |
*** hamalq has joined #openstack-keystone | 16:33 | |
lbragstad | aha - good call | 16:34 |
lbragstad | interesting - so yeah, i guess it depends on when that test was run and the fact that the mock wasn't properly cleaned up | 16:35 |
lbragstad | nice catch | 16:35 |
lbragstad | we'll see if https://review.opendev.org/c/openstack/keystone/+/738677 comes back green | 16:35 |
knikolla | yeah. 240 tests seems consistent with 5000/8, somewhere mid run. | 16:36 |
* lbragstad nods | 16:36 | |
knikolla | a parallel issue would most likely only cause 8-16 failures depending on how long that test runs. | 16:36 |
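(Rough arithmetic behind that estimate: roughly 5000 tests spread over 8 workers is about 625 tests per worker, so ~240 failures fits one worker breaking partway through its share, whereas a cross-worker race would only touch the handful of tests running concurrently with the mocked one.)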
lbragstad | yeah - that makes sense | 16:37 |
lbragstad | assuming the issue is localized to that specific test | 16:37 |
lbragstad | or testing window | 16:37 |
lbragstad | if that patch comes back green i'll propose all the backports again | 16:39 |
*** gyee has joined #openstack-keystone | 16:57 | |
*** zzzeek has quit IRC | 17:08 | |
*** bengates_ has quit IRC | 17:10 | |
*** zzzeek has joined #openstack-keystone | 17:10 | |
*** bengates has joined #openstack-keystone | 17:11 | |
*** gyee has quit IRC | 17:14 | |
*** gyee has joined #openstack-keystone | 17:14 | |
*** bengates has quit IRC | 17:15 | |
*** timburke_ has joined #openstack-keystone | 17:35 | |
*** timburke has quit IRC | 17:38 | |
*** gyee has quit IRC | 17:38 | |
*** gyee has joined #openstack-keystone | 17:49 | |
*** ayoung has quit IRC | 18:10 | |
*** ayoung has joined #openstack-keystone | 18:13 | |
*** whoami-rajat_ is now known as whoami-rajat | 18:31 | |
*** viks____ has quit IRC | 19:04 | |
*** zzzeek has quit IRC | 19:05 | |
*** zzzeek has joined #openstack-keystone | 19:06 | |
lbragstad | knikolla sweet - https://review.opendev.org/c/openstack/keystone/+/738677 | 19:19 |
lbragstad | proposing backports now | 19:19 |
lbragstad | knikolla proposed to stable/wallaby, too | 19:30 |
lbragstad | https://review.opendev.org/q/I75590c20e90170ed862f46f0de7d61c7810b5c90 | 19:30 |
*** timburke_ has quit IRC | 20:35 | |
*** timburke_ has joined #openstack-keystone | 20:36 | |
*** whoami-rajat has quit IRC | 20:39 | |
*** dasp has quit IRC | 22:13 | |
*** dasp has joined #openstack-keystone | 22:19 | |
*** rcernin has joined #openstack-keystone | 22:24 | |
*** rcernin has quit IRC | 22:24 | |
*** rcernin has joined #openstack-keystone | 22:24 |