opendevreview | Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53 https://review.opendev.org/c/openstack/tempest/+/737209 | 02:07 |
*** pojadhav|afk is now known as pojadhav | 04:30 | |
*** jpena|off is now known as jpena | 08:01 | |
*** amoralej|off is now known as amoralej | 08:16 | |
opendevreview | wangxiyuan proposed openstack/devstack master: openEuler 20.03 LTS SP2 support https://review.opendev.org/c/openstack/devstack/+/760790 | 09:14 |
opendevreview | Andre Aranha proposed openstack/tempest master: WIP - Refactor ssh.Client to allow other clients https://review.opendev.org/c/openstack/tempest/+/820860 | 11:12 |
opendevreview | Balazs Gibizer proposed openstack/tempest master: Introduce @serial test execution decorator https://review.opendev.org/c/openstack/tempest/+/821732 | 11:47 |
sean-k-mooney | gibi: so ya https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L167-L168 the interprocess lock in fasteners does not allow locking in one process and releasing in another | 11:55 |
gibi | sean-k-mooney: I'm not sure but maybe the interprocess rw lock from the same lib does | 11:57 |
gibi | hence my recent try | 11:57 |
sean-k-mooney | you might need to use fcntl.lockf | 11:57 |
sean-k-mooney | to actually implement a file-based lock directly | 11:57 |
gibi | the fasteners rw lock depends on fcntl.lockf | 11:59 |
sean-k-mooney | https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L477-L497 | 11:59 |
sean-k-mooney | yep just found that | 11:59 |
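A minimal sketch, assuming a shared lock file path, of the kind of fcntl.lockf-based file lock being discussed here; the real fasteners implementation linked above layers non-blocking acquisition and retry loops on top of this:

```python
# Sketch of a file-based reader/writer lock built on fcntl.lockf: readers
# take a shared lock, writers take an exclusive lock, and any process that
# can open the same path can participate.
import fcntl

class FileRWLock:
    def __init__(self, path):
        self.path = path
        self.handle = None

    def acquire_read(self):
        self.handle = open(self.path, 'a+')
        fcntl.lockf(self.handle, fcntl.LOCK_SH)  # shared: many readers at once

    def acquire_write(self):
        self.handle = open(self.path, 'a+')
        fcntl.lockf(self.handle, fcntl.LOCK_EX)  # exclusive: blocks readers and writers

    def release(self):
        fcntl.lockf(self.handle, fcntl.LOCK_UN)
        self.handle.close()
        self.handle = None
```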
*** amoralej is now known as amoralej|lunch | 11:59 | |
sean-k-mooney | gibi: only for the interprocess rw lock, the normal interprocess lock just uses a bool in memory | 11:59 |
gibi | I meant that yeah | 12:00 |
sean-k-mooney | https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L72 | 12:00 |
sean-k-mooney | so yes the new one from fasteners should work, i think im going to quickly read it | 12:00 |
sean-k-mooney | ok ya so the reader lock is a shared nonblocking lock on the file using fcntl.lockf | 12:02 |
sean-k-mooney | and the writer lock is an exclusive nonblocking lock | 12:03 |
sean-k-mooney | im not sure if you can release an exclusive lock from a different process | 12:03 |
gibi | we will see | 12:03 |
gibi | it is running on the gate | 12:04 |
sean-k-mooney | https://man7.org/linux/man-pages/man2/flock.2.html | 12:08 |
sean-k-mooney | reading ^ i dont think it will work | 12:08 |
sean-k-mooney | we would need to share the file descriptor between the processes | 12:08 |
sean-k-mooney | there is an old locking trick i used to use in bash we could use | 12:10 |
sean-k-mooney | which is a directory lock | 12:10 |
sean-k-mooney | https://opendev.org/x/networking-ovs-dpdk/src/branch/master/devstack/ovs-dpdk/ovs-dpdk-init#L566-L606 | 12:12 |
sean-k-mooney | mkdir when called to make a directory will only return 0 for one process if multiple try | 12:12 |
sean-k-mooney | when the directory exists the lock is held by the process that created it | 12:13 |
sean-k-mooney | you can release the lock by deleting the directory | 12:14 |
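The mkdir trick works because mkdir is an atomic create-or-fail operation and, unlike an fcntl lock tied to a process, the lock directory can be removed by any process. A rough Python equivalent of the bash pattern linked above (the lock path is illustrative):

```python
# mkdir-based directory lock: exactly one caller's mkdir succeeds, and any
# process may release the lock by removing the directory.
import os
import time

LOCK_DIR = '/tmp/my-serial-lock'  # hypothetical lock path

def acquire(timeout=60):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.mkdir(LOCK_DIR)  # atomic: succeeds for exactly one process
            return True
        except FileExistsError:
            time.sleep(0.1)  # someone else holds the lock; retry
    return False

def release():
    os.rmdir(LOCK_DIR)  # unlike flock/lockf, any process can do this
```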
*** pojadhav is now known as pojadhav|brb | 12:30 | |
*** pojadhav|brb is now known as pojadhav | 12:52 | |
gibi | sean-k-mooney: I think with the fasteners impl, we don't need to release lock from another process | 13:06 |
gibi | as with the fasteners lock we don't need two locks, just one with exclusive and non-exclusive locking | 13:06 |
*** amoralej|lunch is now known as amoralej | 13:12 | |
sean-k-mooney | gibi: that is a good point. is the ci run still running? | 14:00 |
gibi | it just finished with green results | 14:01 |
gibi | I still want to look at the logs | 14:01 |
sean-k-mooney | looking at the job times they are more or less in line with what i would expect | 14:14 |
sean-k-mooney | they are a little longer in some cases but not by too much | 14:14 |
gibi | it is not superb https://paste.opendev.org/show/811724/ we are basically blocking executors (waiting for the write lock) for a long time | 14:17 |
gibi | it would be a lot better if we could reorder the tests :/ | 14:18 |
opendevreview | Andre Aranha proposed openstack/tempest master: WIP - Refactor ssh.Client to allow other clients https://review.opendev.org/c/openstack/tempest/+/820860 | 14:19 |
sean-k-mooney | yes it would | 14:19 |
sean-k-mooney | maybe we can | 14:19 |
sean-k-mooney | but i think that is mostly controlled by the test runner | 14:19 |
gibi | yeah stestr collects the tests and splits them to executors | 14:20 |
sean-k-mooney | i know pytest supports marking test cases, i dont know if there is something more generic we can use that could influence the order | 14:20 |
sean-k-mooney | there is basic test grouping in stestr | 14:21 |
sean-k-mooney | https://stestr.readthedocs.io/en/latest/MANUAL.html#grouping-tests | 14:21 |
gibi | yep but whatever we do with stestr is not transferable to other users of tempest | 14:22 |
sean-k-mooney | but i dont see a way to do this from a test definition point of view | 14:22 |
gibi | I might be able to tune the write lock taking to be more aggressive | 14:22 |
sean-k-mooney | right we would need something that would just work for all clients of the tests | 14:22 |
gibi | there are timers and sleeps in the lock | 14:22 |
sean-k-mooney | well for the multinode job and the nova-live-migration job the test run time delta was in the noise | 14:23 |
gibi | https://github.com/harlowja/fasteners/blob/4b7fe854ab49e863b69e2aa5bcaf2bb50f395b8b/fasteners/process_lock.py#L280-L300 | 14:23 |
sean-k-mooney | but for tempest-full-py3 it was about an additional 25 mins | 14:23 |
gibi | yeah I looked at ^^ | 14:23 |
gibi | but in theory, in a concurrency 2 case, in the worst case we double the runtime | 14:24 |
gibi | by blocking one of the executors until the last minute | 14:24 |
sean-k-mooney | not entirely | 14:24 |
sean-k-mooney | we do for the small number of tests that actually take the write lock | 14:24 |
sean-k-mooney | but they are a small proportion of the total tests | 14:25 |
sean-k-mooney | if we started having a lot of serial tests it would become a problem | 14:25 |
gibi | if executor A starts with the serial test and it gets blocked then it might be blocked until executor B runs the whole set of non-serial tests | 14:26 |
gibi | hence my wish to reorder tests | 14:27 |
sean-k-mooney | well i think we have a per test timeout | 14:27 |
gibi | but it does not apply | 14:27 |
sean-k-mooney | but yes in principle it could | 14:27 |
gibi | at this stage | 14:27 |
gibi | the executor that executed AggregatesAdminTestV241 in the full run was blocked for 45 minutes | 14:28 |
sean-k-mooney | ah ok | 14:28 |
gibi | that I don't like | 14:28 |
gibi | as that is waste | 14:28 |
sean-k-mooney | ya ok | 14:29 |
sean-k-mooney | but if we give the write lock preference it would help | 14:29 |
sean-k-mooney | reordering is still the best approach | 14:30 |
sean-k-mooney | if we could actually do the grouping dynamically in tempest somehow | 14:30 |
gibi | we would need 'a reader cannot take the lock if there are writers waiting' logic | 14:30 |
sean-k-mooney | yes | 14:30 |
gibi | but fasteners only have timers :/ | 14:30 |
sean-k-mooney | that is a writer-preferred rw lock | 14:30 |
sean-k-mooney | ah ok | 14:30 |
gibi | timer tuning might work with two executors | 14:31 |
gibi | but if we have 3 | 14:31 |
gibi | then two executors can pass the reader lock between each other even if I tune the write lock timer | 14:31 |
gibi | I'm wondering if stestr splits the test case list by some stable order... | 14:32 |
sean-k-mooney | """Write-preferring RW locks avoid the problem of writer starvation by preventing any new readers from acquiring the lock if there is a writer queued and waiting for the lock;""" | 14:32 |
sean-k-mooney | directly from the wiki | 14:32 |
gibi | yeah ^^ that would be nice | 14:32 |
gibi | but we don't have such impl | 14:36 |
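One hedged way to approximate writer preference without support from fasteners would be a marker file: a writer announces itself before blocking, and readers back off while the marker exists. A sketch (the path and function names are illustrative, not anything fasteners provides):

```python
# Writer preference bolted onto a plain rw lock: readers decline to take
# the shared lock while a writer-pending marker exists.
import os

PENDING_MARKER = '/tmp/rwlock.writer-pending'  # hypothetical marker path

def writer_announce():
    open(PENDING_MARKER, 'w').close()  # create marker before waiting on LOCK_EX

def writer_done():
    os.remove(PENDING_MARKER)

def reader_may_acquire():
    return not os.path.exists(PENDING_MARKER)  # back off if a writer is queued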
sean-k-mooney | gibi: maybe we have to do both for now | 14:41 |
sean-k-mooney | use the rwlock to make it generally correct | 14:41 |
sean-k-mooney | and then use grouping in stestr/tox to make it fast in our gate jobs | 14:41 |
gibi | yeah that is a viable compromise | 14:42 |
sean-k-mooney | https://docs.python.org/3/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing | 14:43 |
sean-k-mooney | not sure how that works but maybe we can provide an implementation for that | 14:43 |
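For reference, sortTestMethodsUsing takes a cmp-style callable and only orders test methods within a single class, which limits how much it can help here. A sketch of providing one (the '_serial' suffix convention is hypothetical):

```python
# Push methods matching a serial naming convention to the end of the
# per-class test order; everything else sorts alphabetically.
import unittest

def serial_last(a, b):
    a_serial, b_serial = a.endswith('_serial'), b.endswith('_serial')
    if a_serial != b_serial:
        return 1 if a_serial else -1  # serial methods sort after the rest
    return (a > b) - (a < b)  # cmp-style three-way comparison

loader = unittest.TestLoader()
loader.sortTestMethodsUsing = serial_last
```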
opendevreview | Stephen Finucane proposed openstack/devstack stable/xena: Further fixup for Ubuntu cloud images https://review.opendev.org/c/openstack/devstack/+/821908 | 14:44 |
gibi | sean-k-mooney: cool, I will hack on that a bit | 14:45 |
gibi | but first I need an espresso | 14:45 |
sean-k-mooney | :) coffee is life | 14:45 |
sean-k-mooney | just looking at tempest it looks like the entry point is tempest run | 14:46 |
sean-k-mooney | we are not using stestr or anything else to run them | 14:46 |
sean-k-mooney | so we might be able to alter the test order in https://github.com/openstack/tempest/blob/master/tempest/cmd/run.py | 14:47 |
sean-k-mooney | although it is using stestr internally | 14:47 |
gibi | yeah it is calling stestr with load_list https://paste.opendev.org/show/811726/ | 14:48 |
gibi | load_list seems to still allow for later ordering during execution if I understand correctly | 14:48 |
sean-k-mooney | ya so https://stestr.readthedocs.io/en/latest/MANUAL.html#cmdoption-stestr-run-load-list just says it limits the tests to those in the file | 14:54 |
sean-k-mooney | it does not say if it affects order | 14:54 |
sean-k-mooney | but it might just run them in that order | 14:54 |
sean-k-mooney | https://github.com/mtreinish/stestr/blob/bbc839fabb28f57d34c9a195b75158fa203988de/stestr/commands/run.py#L494-L505 | 14:58 |
sean-k-mooney | i think it would run them in the order in the file | 15:00 |
sean-k-mooney | if you dont pass randomise | 15:00 |
sean-k-mooney | so maybe we can just reorder the test ids in the file | 15:00 |
gibi | hm I looked at https://github.com/mtreinish/stestr/blob/28892333822c93d5af41a372f79cf16dcbcc9cdc/stestr/subunit_runner/program.py#L182 and that suggests simple filtering | 15:01 |
gibi | anyhow I have to try | 15:01 |
sean-k-mooney | it depends on if you have a list of ids or not | 15:01 |
sean-k-mooney | that seems to be testing the else branch | 15:02 |
sean-k-mooney | which is just an intersection | 15:02 |
sean-k-mooney | but when ids is none it just uses the list of ids from the file | 15:02 |
sean-k-mooney | with the args we pass and looking at the code i think ids will be None | 15:04 |
sean-k-mooney | we are not using --failing or --analyze_isolation and are not using --no-discover or --pdb | 15:04 |
sean-k-mooney | so we will take the else on 493 | 15:05 |
sean-k-mooney | and set ids=None | 15:05 |
sean-k-mooney | and then take the if on line 499 | 15:05 |
sean-k-mooney | https://github.com/mtreinish/stestr/blob/bbc839fabb28f57d34c9a195b75158fa203988de/stestr/commands/run.py#L490-L501 | 15:05 |
clarkb | frickler: any idea if the new qemu 6.2 release fixes the memory thing you discovered? | 15:28 |
frickler | clarkb: well, qemu won't fix it, since it is not considered a bug. there should be a patch to libvirt which gives nova the option to override things, but I haven't checked the release status yet | 15:37 |
clarkb | oh I see | 15:37 |
clarkb | libvirt needs to instruct qemu to change the sizing right? | 15:38 |
gibi | sean-k-mooney: the order of the tests in the list passed to load_list does not match the execution order of the tests | 15:38 |
gibi | based on my trials | 15:38 |
sean-k-mooney | too bad | 15:39 |
sean-k-mooney | clarkb: yes | 15:39 |
frickler | clarkb: yes | 15:39 |
sean-k-mooney | i have a partial workaround proposed to devstack | 15:39 |
sean-k-mooney | just have not worked on it in a while | 15:39 |
sean-k-mooney | https://review.opendev.org/c/openstack/devstack/+/817075 | 15:40 |
sean-k-mooney | basically i would need to uninstall apparmor | 15:40 |
frickler | https://gitlab.com/libvirt/libvirt/-/issues/229 | 15:40 |
sean-k-mooney | frickler: yep so we plan to allow you to set that via the nova.conf | 15:41 |
sean-k-mooney | in the future | 15:41 |
frickler | sean-k-mooney: well we now have the workaround of adding swap, that seems to work well enough for now, too | 15:41 |
frickler | sean-k-mooney: I think kashyap already started a patch for that | 15:41 |
sean-k-mooney | ya swap works, ya i think they have | 15:42 |
sean-k-mooney | clarkb: frickler i know vexxhost wanted to provide 16GB vms by default | 15:42 |
sean-k-mooney | could we also address this by updating the testing vm size | 15:42 |
clarkb | sean-k-mooney: we are already doing the larger VMs in vexxhost. But we cannot do this globally because they are basically the only cloud doing that. | 15:43 |
frickler | https://review.opendev.org/c/openstack/nova/+/816823 | 15:43 |
sean-k-mooney | clarkb: ah ok | 15:43 |
clarkb | Basically you can opt into the larger flavors but if you do so your risk goes up because only ~2 clouds offer those iirc | 15:43 |
sean-k-mooney | clarkb: i remember the thread about it | 15:44 |
sean-k-mooney | i would have preferred an opt-out | 15:44 |
clarkb | again we cannot do that because we don't have the redundancy | 15:44 |
clarkb | nor the capacity | 15:44 |
clarkb | if vexxhost was 80% of our capacity then maybe we could do it. But they are like 8% | 15:44 |
sean-k-mooney | well what i meant was allow the normal labels to give you either the big or small vms | 15:44 |
clarkb | so we'd have everyone fighting for a tiny portion of our resources with an opt out | 15:45 |
clarkb | sean-k-mooney: you would have to completely rewrite nodepool to do that | 15:45 |
clarkb | I mean I guess that is theoretically possible, but I'm not sure anyone has the time for that right now | 15:45 |
sean-k-mooney | not really just have them use the same label | 15:45 |
clarkb | that will mess up the scheduling for jobs that explicitly need the resources | 15:45 |
sean-k-mooney | it will yes | 15:46 |
clarkb | and it can allow jobs to only pass on the larger instances | 15:46 |
sean-k-mooney | but it was how it worked before the mail thread | 15:46 |
clarkb | then when they schedule the other 80% of their workload on the smaller flavors you have a broken gate | 15:46 |
clarkb | its a really bad situation to get yourself into | 15:46 |
sean-k-mooney | it can but i think if your job cares its not written properly | 15:46 |
clarkb | that isn't true with an opt-out because most jobs won't know | 15:47 |
clarkb | this is why we make it opt in. Then you can do that | 15:47 |
sean-k-mooney | and most jobs wont care | 15:47 |
sean-k-mooney | the concurrency would change but the rest should not | 15:47 |
sean-k-mooney | anyway there is no point reopening this | 15:47 |
clarkb | If we made that change my prediction is the openstack gate would be completely wedged within 2 weeks | 15:47 |
sean-k-mooney | i was reading the mail thread on this discussion when it happened | 15:48 |
clarkb | everyone else would be fine because they aren't very sensitive to node sizes, but openstack is, significantly | 15:48 |
clarkb | everyone else being zuul starlingx airship etc | 15:48 |
sean-k-mooney | clarkb: i really dont think it would have any effect on the nova jobs | 15:48 |
clarkb | it would on all of your devstack jobs | 15:48 |
clarkb | (for this reason) | 15:49 |
sean-k-mooney | they would just work | 15:49 |
clarkb | until you landed something that only passed with 16GB of memory | 15:49 |
clarkb | then they would stop working 92% of the time | 15:49 |
sean-k-mooney | that would be a change in tempest | 15:49 |
sean-k-mooney | most likely not a nova change | 15:49 |
clarkb | its all the same if nova relies on it | 15:49 |
clarkb | its silly to think of openstack components in that way | 15:50 |
sean-k-mooney | but thats my point, it would not be nova relying on it | 15:50 |
sean-k-mooney | it would be tempest relying on it | 15:50 |
clarkb | I think that distinction doesn't matter | 15:50 |
clarkb | the end result is the same regardless of who you want to blame | 15:50 |
sean-k-mooney | well i disagree since its different people writing and reviewing each repo | 15:50 |
clarkb | but its the same project gated together and releasing together. Sure you can work in isolation and pretend tempest doesn't exist but you won't get very far | 15:51 |
clarkb | When it comes to OpenStack's gate and CI that distinction as things are designed today isn't meaningful | 15:51 |
clarkb | (and I think trying to assert that distinction is unhealthy to the project) | 15:51 |
sean-k-mooney | well we do more or less. yes we fix tempest issues if it breaks but we mainly use unit and functional tests today for our testing | 15:51 |
sean-k-mooney | we try to use tempest only for additional test coverage and upgrades | 15:52 |
sean-k-mooney | we try to test the basic functionality with functional tests, including things like live migration | 15:52 |
clarkb | yes, but if the tempest job fails you aren't merging anything | 15:53 |
sean-k-mooney | dont get me wrong tempest adds value, especially for ceph or interservice interactions | 15:53 |
sean-k-mooney | true | 15:53 |
sean-k-mooney | i just think you are overestimating how many jobs actually would be affected by the 16G vs 8G ram | 15:54 |
sean-k-mooney | i might also be underestimating it | 15:54 |
clarkb | I think you underestimate it :) we ran that way for a few weeks and we had breakage | 15:54 |
clarkb | when vexxhost made the initial change we basically said we'd try what you suggest and ya we broke | 15:54 |
sean-k-mooney | thats the thing, i dont recall that breakage affecting nova | 15:55 |
clarkb | it was zuul that noticed it | 15:55 |
sean-k-mooney | right i remember that and the mail thread that arose | 15:55 |
clarkb | but this same qemu thing would've been papered over similarly | 15:55 |
sean-k-mooney | it would but as the qemu devs have said they do not consider it a bug | 15:56 |
clarkb | but it is a bug in our ci system | 15:56 |
clarkb | because it would cause jobs on 8GB instances to fail but pass on 16gb | 15:56 |
sean-k-mooney | clarkb: i also am not particularly happy with having nova expose this by the way | 15:56 |
clarkb | the exact issue I'm saying we want to avoid | 15:56 |
clarkb | I think we've given a reasonable outlet. If you know you need more memory it is there with the caveat that it may not always be there if a cloud has an outage and that resources are limited | 15:57 |
clarkb | Otherwise we take a conservative approach to avoid wedging projects gates that might suddenly rely on more memory | 15:58 |
sean-k-mooney | in the long run i dont think nova should really be exposing a config option to allow configuring the tb-cache size | 15:58 |
sean-k-mooney | if we keep this long term it really should be a flavor or image property and live with the vm | 15:58 |
sean-k-mooney | by using a host-level config it may break or change with live migration | 15:59 |
clarkb | I personally don't care how it is configured but it seems reasonable to configure it if the alternative is running out of memory | 15:59 |
sean-k-mooney | well the other alternative is to use kvm | 16:00 |
clarkb | separately if people want to find more cloud resources so that we can offer more larger memory instances that would be great too. It just seems that right now we don't really have that ability | 16:00 |
sean-k-mooney | but unfortunately we cant really do that in the ci | 16:00 |
clarkb | we cannot use kvm either for similar reasons | 16:00 |
clarkb | because some clouds don't enable it and those that do have crashes | 16:00 |
clarkb | :/ | 16:00 |
sean-k-mooney | yep | 16:00 |
clarkb | and then it becomes my problem which is particularly frustrating because we've said over and over and over it isn't a supported configuration due to these problems | 16:01 |
sean-k-mooney | we likely will add a workaround config option with the caveat that you should configure it the same on all hosts | 16:01 |
clarkb | I'd love nothing more than for kvm nested virt to be stable but unfortunately... | 16:01 |
clarkb | related: there is no nested virt on aarch64 aiui | 16:01 |
sean-k-mooney | we use it almost exclusively for our downstream ci | 16:01 |
sean-k-mooney | e.g. if its not on baremetal its nested virt | 16:02 |
clarkb | sean-k-mooney: I'm sure with specific clouds and specific kernels you can get away with it. But our experience is that we consistently have people bugging us over why it breaks and i find that frustrating because I shouldn't have to debug when users go off piste | 16:02 |
clarkb | We let the adventurous do it, but if you end up in a tree well... that should be on you not me | 16:03 |
sean-k-mooney | yep thats fair | 16:03 |
sean-k-mooney | what im hoping to avoid is having to make the nova scheduler aware of this or changing the objects sent over the rpc bus, specifically the migration_data objects | 16:04 |
clarkb | that said the vast majority of the testing that tempest needs to do seems to work pretty well on small images like cirros | 16:04 |
clarkb | even with qemu I mean | 16:04 |
sean-k-mooney | for the most part yes | 16:04 |
sean-k-mooney | we occasionally have some guest boot timing issues | 16:04 |
sean-k-mooney | but cirros and qemu is generally enough | 16:05 |
clarkb | right and it is far more reliable so makes sense to use in that 95% case | 16:05 |
sean-k-mooney | yep i just mention kvm because the issue we hit does not happen when using it | 16:05 |
sean-k-mooney | not for any other reason | 16:06 |
clarkb | Also if runtime is an issue there are massive savings to be made if openstackclient can be sped up or devstack converted to dedicated scripts to do bootstrapping instead of openstackclient | 16:14 |
clarkb | something like half of devstack's runtime can be made to go away | 16:15 |
opendevreview | Andre Aranha proposed openstack/tempest master: WIP - Refactor ssh.Client to allow other clients https://review.opendev.org/c/openstack/tempest/+/820860 | 16:21 |
sean-k-mooney | clarkb: runtime has never been a reason to use kvm for me | 16:21 |
sean-k-mooney | the occasional timing issues with guest boot are with tests that dont wait for the guest OS to be ready before doing things like attaching cinder volumes | 16:22 |
sean-k-mooney | that can cause the guest kernel to panic in some cases | 16:22 |
clarkb | I see, its performance but not wall time | 16:22 |
clarkb | (the distinction is a little blurry but I think I get it) | 16:22 |
sean-k-mooney | well really its an incorrect test | 16:22 |
sean-k-mooney | lee yarwood is trying to fix the tests | 16:23 |
sean-k-mooney | by making it wait for the guest to be pingable or sshable | 16:23 |
clarkb | gotcha | 16:23 |
sean-k-mooney | he has patches up for review | 16:23 |
clarkb | still probably broken with kvm but much harder to win the race and fail | 16:23 |
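The general wait-before-attach idea is straightforward; a sketch (illustrative only, not lee's actual tempest patches) of polling the guest's SSH port before doing anything that assumes a booted OS:

```python
# Poll until the guest accepts TCP connections on the SSH port before
# e.g. attaching a cinder volume.
import socket
import time

def wait_for_ssh(host, port=22, timeout=300):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return  # guest OS is up enough to accept connections
        except OSError:
            time.sleep(2)
    raise TimeoutError('%s:%s not reachable after %ss' % (host, port, timeout))
```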
opendevreview | James Parker proposed openstack/tempest master: Add flavor extra spec validation tests https://review.opendev.org/c/openstack/tempest/+/819920 | 16:23 |
sean-k-mooney | clarkb: ya kvm would just mask it by allowing the guest to boot faster | 16:24 |
sean-k-mooney | even with qemu its often fast enough to not trigger it | 16:24 |
sean-k-mooney | clarkb: this is the series https://review.opendev.org/q/topic:%22wait_until_sshable_pingable%22+(status:open%20OR%20status:merged) | 16:26 |
sean-k-mooney | clarkb: he is implementing an old tempest spec | 16:27 |
sean-k-mooney | this one https://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/ssh-auth-strategy.html | 16:27 |
sean-k-mooney | it was never actually fully completed | 16:28 |
opendevreview | Balazs Gibizer proposed openstack/tempest master: Introduce @serial test execution decorator https://review.opendev.org/c/openstack/tempest/+/821732 | 16:45 |
gibi | sean-k-mooney, gmann: please don't judge me based on this pile of hack ^^ | 16:46 |
sean-k-mooney | gibi: we wont judge you just your code :P | 16:47 |
gibi | that is fair, that code deserves judgement | 16:47 |
sean-k-mooney | i see you moved them into a serial subdirectory | 16:47 |
sean-k-mooney | oh and did an import of the lock | 16:48 |
gibi | the only way to affect test order is the naming of the tests /o\ | 16:49 |
sean-k-mooney | hence the zzz prefix | 16:49 |
gibi | yes | 16:49 |
gibi | I need to copy the lock code from fasteners as it is from 0.16 and we are still on 0.14 and the 0.15 bump is blocked by an eventlet issue | 16:50 |
sean-k-mooney | you know im not even mad at the code. it is a gloriously blatant hack around a real problem | 16:50 |
sean-k-mooney | so the lock is for safety and the renaming is for speed basically | 16:51 |
gibi | yes, the lock makes the execution correct, but potentially slow, the ordering can speed things up | 16:52 |
sean-k-mooney | ya since the serial tests that get the write lock wont start running until the end, we will be parallel for most of the run | 16:53 |
gibi | yes, if the executors are well balanced then there will be a lot less waiting | 16:53 |
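A hedged sketch of the overall scheme being discussed, using the fasteners 0.16 InterProcessReaderWriterLock; the class and decorator names are illustrative, not gibi's actual patch:

```python
# Every test class takes the shared (read) side of an interprocess rw lock
# in setUpClass; a class marked @serial takes the exclusive (write) side,
# so it waits until no parallel class is running and then runs alone.
import fasteners

LOCK_PATH = '/tmp/tempest-serial.lock'  # hypothetical lock file

def serial(cls):
    cls._serial = True  # marker consumed by setUpClass
    return cls

class BaseTestCase:
    _serial = False

    @classmethod
    def setUpClass(cls):
        cls._rw_lock = fasteners.InterProcessReaderWriterLock(LOCK_PATH)
        if cls._serial:
            cls._rw_lock.acquire_write_lock()  # block until all readers drain
        else:
            cls._rw_lock.acquire_read_lock()   # shared with other executors

    @classmethod
    def tearDownClass(cls):
        if cls._serial:
            cls._rw_lock.release_write_lock()
        else:
            cls._rw_lock.release_read_lock()
```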
sean-k-mooney | i mean other than doing this in tox via multiple runs i dont really have an alternative suggestion | 16:53 |
gibi | yes, a tox sequential run would work, but only for those of us using tox | 16:54 |
gibi | one possible direction is to look into stestr and extend it with some way to express test ordering | 16:54 |
sean-k-mooney | so fasteners is also apache2 so the licensing should be fine to vendor the lock | 16:55 |
gibi | yes, we are lucky with the licensing | 16:55 |
sean-k-mooney | i think fasteners was born out of oslo.concurrency? or perhaps we just adopted it | 16:56 |
*** jpena is now known as jpena|off | 16:58 | |
sean-k-mooney | gibi: i think this is the code equivalent of "ugly beautiful" | 16:58 |
sean-k-mooney | gibi: its less of a hack than the "code" im currently reviewing https://review.opendev.org/c/openstack/tripleo-heat-templates/+/818639/1/environments/enable-secure-rbac.yaml#172 | 17:00 |
gibi | OK, you win :D | 17:01 |
sean-k-mooney | for context for the rest, we are implementing the global admin, project reader and project member secure rbac personas in all services via ooo by overriding every policy | 17:02 |
sean-k-mooney | as a workaround until secure rbac actually works upstream | 17:02 |
sean-k-mooney | im currently reviewing the backport to make sure its correct for wallaby | 17:02 |
sean-k-mooney | so my tolerance for hacks might currently be higher than others' | 17:03 |
gibi | I see | 17:06 |
gmann | gibi: I have not looked at the latest patch set but timeout and execution wait are the concerns i am hoping will be handled with the lock approach | 17:09 |
gibi | gmann: as far as I see tests will not time out waiting for the lock as the locking happens in setUpClass | 17:10 |
gibi | the execution wait is a concern we are trying to mitigate with test ordering | 17:10 |
gmann | yeah | 17:11 |
*** amoralej is now known as amoralej|off | 17:15 | |
opendevreview | Merged openstack/devstack stable/xena: Further fixup for Ubuntu cloud images https://review.opendev.org/c/openstack/devstack/+/821908 | 17:42 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53 https://review.opendev.org/c/openstack/tempest/+/737209 | 17:43 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64 https://review.opendev.org/c/openstack/tempest/+/733769 | 17:48 |
opendevreview | Andre Aranha proposed openstack/tempest master: WIP - Add libssh Client to use with fips https://review.opendev.org/c/openstack/tempest/+/821019 | 17:53 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64 https://review.opendev.org/c/openstack/tempest/+/733769 | 19:27 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix compute quota classes schema for v2.50 and v2.57 https://review.opendev.org/c/openstack/tempest/+/733740 | 21:43 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53 https://review.opendev.org/c/openstack/tempest/+/737209 | 21:56 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64 https://review.opendev.org/c/openstack/tempest/+/733769 | 21:58 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix compute quota classes schema for v2.50 and v2.57 https://review.opendev.org/c/openstack/tempest/+/733740 | 21:58 |
gmann | kopecmartin: this is ready, all tests are good https://review.opendev.org/c/openstack/devstack/+/816549 | 22:52 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Add schema for compute 2.45 microversion https://review.opendev.org/c/openstack/tempest/+/508393 | 22:58 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53 https://review.opendev.org/c/openstack/tempest/+/737209 | 23:01 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64 https://review.opendev.org/c/openstack/tempest/+/733769 | 23:01 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Fix compute quota classes schema for v2.50 and v2.57 https://review.opendev.org/c/openstack/tempest/+/733740 | 23:02 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Add schema for compute 2.45 microversion https://review.opendev.org/c/openstack/tempest/+/508393 | 23:02 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Remove compute api_extensions config option https://review.opendev.org/c/openstack/tempest/+/822054 | 23:43 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Remove stable/train jobs from master gate https://review.opendev.org/c/openstack/tempest/+/822055 | 23:54 |
opendevreview | Ghanshyam Mann proposed openstack/stackviz master: Define stable/train job in stackviz https://review.opendev.org/c/openstack/stackviz/+/822056 | 23:58 |
opendevreview | Ghanshyam Mann proposed openstack/tempest master: Remove stable/train jobs from master gate https://review.opendev.org/c/openstack/tempest/+/822055 | 23:59 |