Thursday, 2021-12-16

<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53  https://review.opendev.org/c/openstack/tempest/+/737209  [02:07]
*** pojadhav|afk is now known as pojadhav  [04:30]
*** jpena|off is now known as jpena  [08:01]
*** amoralej|off is now known as amoralej  [08:16]
<opendevreview> wangxiyuan proposed openstack/devstack master: openEuler 20.03 LTS SP2 support  https://review.opendev.org/c/openstack/devstack/+/760790  [09:14]
<opendevreview> Andre Aranha proposed openstack/tempest master: WIP - Refactor ssh.Client to allow other clients  https://review.opendev.org/c/openstack/tempest/+/820860  [11:12]
<opendevreview> Balazs Gibizer proposed openstack/tempest master: Introduce @serial test execution decorator  https://review.opendev.org/c/openstack/tempest/+/821732  [11:47]
<sean-k-mooney> gibi: so yeah, https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L167-L168 : the interprocess lock in fasteners does not allow locking in one process and releasing in another  [11:55]
<gibi> sean-k-mooney: I'm not sure, but maybe the interprocess rw lock from the same lib does  [11:57]
<gibi> hence my recent try  [11:57]
<sean-k-mooney> you might need to use fcntl.lockf  [11:57]
<sean-k-mooney> to actually implement a file-based lock directly  [11:57]
<gibi> the fasteners rw lock depends on fcntl.lockf  [11:59]
<sean-k-mooney> https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L477-L497  [11:59]
<sean-k-mooney> yep, just found that  [11:59]
*** amoralej is now known as amoralej|lunch  [11:59]
<sean-k-mooney> gibi: only for the interprocess rw lock; the normal interprocess lock just uses a bool in memory  [11:59]
<gibi> I meant that, yeah  [12:00]
<sean-k-mooney> https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L72  [12:00]
<sean-k-mooney> so yes, the new one from fasteners should work; I think I'm going to quickly read it  [12:00]
<sean-k-mooney> ok, yeah, so the reader lock is a shared non-blocking lock on the file using fcntl.lockf  [12:02]
<sean-k-mooney> and the writer lock is an exclusive non-blocking lock  [12:03]
<sean-k-mooney> I'm not sure if you can release an exclusive lock from a different process  [12:03]
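(For reference, a minimal sketch of the fcntl.lockf shared/exclusive locking discussed above; the lock path and helper names are illustrative assumptions, not fasteners code:)

    import fcntl

    LOCK_PATH = "/tmp/demo.lock"  # assumed path, for illustration only

    def acquire_read(path=LOCK_PATH):
        # shared lock: any number of processes can hold this at once
        f = open(path, "a+")
        fcntl.lockf(f, fcntl.LOCK_SH | fcntl.LOCK_NB)
        return f  # keep the file object around; the lock dies with it

    def acquire_write(path=LOCK_PATH):
        # exclusive lock: conflicts with shared locks held by OTHER processes.
        # lockf locks are tied to the owning process, which is exactly why
        # they cannot be released from a different process.
        f = open(path, "a+")
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises OSError if already held
        return f

    def release(f):
        fcntl.lockf(f, fcntl.LOCK_UN)
        f.close()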
<gibi> we will see  [12:03]
<gibi> it is running on the gate  [12:04]
<sean-k-mooney> https://man7.org/linux/man-pages/man2/flock.2.html  [12:08]
<sean-k-mooney> reading ^ I don't think it will work  [12:08]
<sean-k-mooney> we would need to share the file descriptor between the processes  [12:08]
<sean-k-mooney> there is an old locking trick I used to use in bash we could use  [12:10]
<sean-k-mooney> which is a directory lock  [12:10]
<sean-k-mooney> https://opendev.org/x/networking-ovs-dpdk/src/branch/master/devstack/ovs-dpdk/ovs-dpdk-init#L566-L606  [12:12]
<sean-k-mooney> mkdir, when called to make a directory, will only return 0 for one process if multiple try  [12:12]
<sean-k-mooney> when the directory exists, the lock is held by the process that created it  [12:13]
<sean-k-mooney> you can release the lock by deleting the directory  [12:14]
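(The same mkdir trick translates directly to Python, since os.mkdir is atomic; a minimal sketch with an assumed path. Note the property relevant here: unlike fcntl locks, any process can release this one by removing the directory:)

    import os
    import time

    LOCK_DIR = "/tmp/demo.lock.d"  # assumed path, for illustration only

    def acquire(timeout=60):
        # os.mkdir is atomic: exactly one process succeeds, the rest retry
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                os.mkdir(LOCK_DIR)
                return
            except FileExistsError:
                time.sleep(0.1)
        raise TimeoutError("could not acquire directory lock")

    def release():
        # any process, not just the creator, can release the lock
        os.rmdir(LOCK_DIR)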
*** pojadhav is now known as pojadhav|brb  [12:30]
*** pojadhav|brb is now known as pojadhav  [12:52]
<gibi> sean-k-mooney: I think with the fasteners impl we don't need to release the lock from another process  [13:06]
<gibi> as with the fasteners lock we don't need two locks, just one with exclusive and shared locking  [13:06]
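(A minimal sketch of the fasteners usage being described, assuming fasteners >= 0.16, which provides InterProcessReaderWriterLock; the path and the two helper functions are assumptions for illustration:)

    import fasteners

    # one lock object offers both the shared and the exclusive side
    lock = fasteners.InterProcessReaderWriterLock("/tmp/serial.lock")  # assumed path

    # ordinary (parallel) tests hold the shared side
    with lock.read_lock():
        run_parallel_test()  # hypothetical helper

    # serial tests hold the exclusive side, so they run alone
    with lock.write_lock():
        run_serial_test()  # hypothetical helper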
*** amoralej|lunch is now known as amoralej  [13:12]
<sean-k-mooney> gibi: that is a good point; is the CI run still running?  [14:00]
<gibi> it just finished, with green results  [14:01]
<gibi> I still want to look at the logs  [14:01]
<sean-k-mooney> looking at the job times, they are more or less in line with what I would expect  [14:14]
<sean-k-mooney> they are a little longer in some cases, but not by too much  [14:14]
<gibi> it is not superb: https://paste.opendev.org/show/811724/ ; we are basically blocking executors (waiting for the write lock) for a long time  [14:17]
<gibi> it would be a lot better if we could reorder the tests :/  [14:18]
<opendevreview> Andre Aranha proposed openstack/tempest master: WIP - Refactor ssh.Client to allow other clients  https://review.opendev.org/c/openstack/tempest/+/820860  [14:19]
<sean-k-mooney> yes it would  [14:19]
<sean-k-mooney> maybe we can  [14:19]
<sean-k-mooney> but I think that is mostly controlled by the test runner  [14:19]
<gibi> yeah, stestr collects the tests and splits them to executors  [14:20]
<sean-k-mooney> I know pytest supports marking test cases; I don't know if there is something more generic we can use that could influence the order  [14:20]
<sean-k-mooney> there is basic test grouping in stestr  [14:21]
<sean-k-mooney> https://stestr.readthedocs.io/en/latest/MANUAL.html#grouping-tests  [14:21]
<gibi> yepp, but whatever we do with stestr is not transferable to other users of tempest  [14:22]
<sean-k-mooney> but I don't see a way to do this from a test definition point of view  [14:22]
<gibi> I might be able to tune the write lock taking to be more aggressive  [14:22]
<sean-k-mooney> right, we would need something that would just work for all clients of the tests  [14:22]
<gibi> there are timers and sleeps in the lock  [14:22]
<sean-k-mooney> well, for the multinode job and the nova-live-migration job the test runtime delta was in the noise  [14:23]
<gibi> https://github.com/harlowja/fasteners/blob/4b7fe854ab49e863b69e2aa5bcaf2bb50f395b8b/fasteners/process_lock.py#L280-L300  [14:23]
<sean-k-mooney> but for tempest-full-py3 it was about an additional 25 mins  [14:23]
<gibi> yeah, I looked at ^^  [14:23]
<gibi> but in theory, in a concurrency 2 case, in the worst case we double the runtime  [14:24]
<gibi> by blocking one of the executors till the last minute  [14:24]
<sean-k-mooney> not entirely  [14:24]
<sean-k-mooney> we do for the small number of tests that actually take the write lock  [14:24]
<sean-k-mooney> but they are a small proportion of the total tests  [14:25]
<sean-k-mooney> if we started having a lot of serial tests it would become a problem  [14:25]
<gibi> if executor A starts with the serial test and it gets blocked, then it might be blocked until executor B runs the whole set of non-serial tests  [14:26]
<gibi> hence my wish to reorder tests  [14:27]
<sean-k-mooney> well, I think we have a per-test timeout  [14:27]
<gibi> but it does not apply  [14:27]
<sean-k-mooney> but yes, in principle it could  [14:27]
<gibi> at this stage  [14:27]
<gibi> the executor that executed AggregatesAdminTestV241 in the full run was blocked for 45 minutes  [14:28]
<sean-k-mooney> ah ok  [14:28]
<gibi> that I don't like  [14:28]
<gibi> as that is waste  [14:28]
<sean-k-mooney> yeah, ok  [14:29]
<sean-k-mooney> but if we give the write lock preference it would help  [14:29]
<sean-k-mooney> reordering is still the best approach  [14:30]
<sean-k-mooney> if we could actually do the grouping dynamically in tempest somehow  [14:30]
<gibi> we would need 'a reader cannot take the lock if there are writers waiting' logic  [14:30]
<sean-k-mooney> yes  [14:30]
<gibi> but fasteners only has timers :/  [14:30]
<sean-k-mooney> that is a writer-preferring rw lock  [14:30]
<sean-k-mooney> ah ok  [14:30]
<gibi> timer tuning might work with two executors  [14:31]
<gibi> but if we have 3  [14:31]
<gibi> then two executors can pass the reader lock between each other, even if I tune the write lock timer  [14:31]
<gibi> I'm wondering if stestr splits the test case list by some stable order...  [14:32]
<sean-k-mooney> """Write-preferring RW locks avoid the problem of writer starvation by preventing any new readers from acquiring the lock if there is a writer queued and waiting for the lock;"""  [14:32]
<sean-k-mooney> directly from the wiki  [14:32]
<gibi> yeah, ^^ that would be nice  [14:32]
<gibi> but we don't have such an impl  [14:36]
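(For illustration, the write-preferring semantics quoted above, sketched as an in-process lock using threading primitives; a real fix here would need an interprocess variant, so this only demonstrates the "no new readers while a writer waits" rule:)

    import threading

    class WritePreferringRWLock:
        def __init__(self):
            self._cond = threading.Condition()
            self._readers = 0
            self._writer = False
            self._writers_waiting = 0

        def acquire_read(self):
            with self._cond:
                # new readers are blocked while a writer holds OR waits
                while self._writer or self._writers_waiting:
                    self._cond.wait()
                self._readers += 1

        def release_read(self):
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()

        def acquire_write(self):
            with self._cond:
                self._writers_waiting += 1
                while self._writer or self._readers:
                    self._cond.wait()
                self._writers_waiting -= 1
                self._writer = True

        def release_write(self):
            with self._cond:
                self._writer = False
                self._cond.notify_all()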
<sean-k-mooney> gibi: maybe we have to do both for now  [14:41]
<sean-k-mooney> use the rwlock to make it generally correct  [14:41]
<sean-k-mooney> and then use grouping in stestr/tox to make it fast in our gate jobs  [14:41]
<gibi> yeah, that is a viable compromise  [14:42]
<sean-k-mooney> https://docs.python.org/3/library/unittest.html#unittest.TestLoader.sortTestMethodsUsing  [14:43]
<sean-k-mooney> not sure how that works, but maybe we can provide an implementation for that  [14:43]
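(For what it's worth, sortTestMethodsUsing takes an old-style comparison function, and it only orders test method names within a single TestCase class, so it would not reorder classes across a run. A sketch; the "_serial" naming convention is an assumption for illustration:)

    import unittest

    def serial_last(a, b):
        # hypothetical convention: method names containing "_serial" sort last
        a_serial, b_serial = "_serial" in a, "_serial" in b
        if a_serial != b_serial:
            return 1 if a_serial else -1
        return (a > b) - (a < b)  # fall back to alphabetical order

    loader = unittest.TestLoader()
    loader.sortTestMethodsUsing = serial_last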
<opendevreview> Stephen Finucane proposed openstack/devstack stable/xena: Further fixup for Ubuntu cloud images  https://review.opendev.org/c/openstack/devstack/+/821908  [14:44]
<gibi> sean-k-mooney: cool, I will hack on that a bit  [14:45]
<gibi> but first I need an espresso  [14:45]
<sean-k-mooney> :) coffee is life  [14:45]
<sean-k-mooney> just looking at tempest, it looks like the entry point is tempest run  [14:46]
<sean-k-mooney> we are not using stestr or anything else to run them  [14:46]
<sean-k-mooney> so we might be able to alter the test order in https://github.com/openstack/tempest/blob/master/tempest/cmd/run.py  [14:47]
<sean-k-mooney> although it is using stestr internally  [14:47]
<gibi> yeah, it is calling stestr with load_list https://paste.opendev.org/show/811726/  [14:48]
<gibi> load_list seems to still allow for later ordering during execution, if I understand correctly  [14:48]
<sean-k-mooney> yeah, so https://stestr.readthedocs.io/en/latest/MANUAL.html#cmdoption-stestr-run-load-list just says it limits the tests to those in the file  [14:54]
<sean-k-mooney> it does not say if it affects order  [14:54]
<sean-k-mooney> but it might just run them in that order  [14:54]
<sean-k-mooney> https://github.com/mtreinish/stestr/blob/bbc839fabb28f57d34c9a195b75158fa203988de/stestr/commands/run.py#L494-L505  [14:58]
<sean-k-mooney> I think it would run them in the order in the file  [15:00]
<sean-k-mooney> if you don't pass randomize  [15:00]
<sean-k-mooney> so maybe we can just reorder the test ids in the file  [15:00]
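(A sketch of the kind of reordering being floated here, as a hypothetical helper; the serial-test predicate is an assumption, and as noted below this turned out not to control execution order:)

    # write a stestr --load-list file with serial tests pushed to the end
    def write_load_list(test_ids, path, is_serial=lambda tid: ".serial." in tid):
        ordered = sorted(test_ids, key=is_serial)  # False sorts before True
        with open(path, "w") as f:
            f.write("\n".join(ordered) + "\n")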
<gibi> hm, I looked at https://github.com/mtreinish/stestr/blob/28892333822c93d5af41a372f79cf16dcbcc9cdc/stestr/subunit_runner/program.py#L182 and that suggests simple filtering  [15:01]
<gibi> anyhow, I have to try  [15:01]
<sean-k-mooney> it depends on if you have a list of ids or not  [15:01]
<sean-k-mooney> that seems to be testing the else branch  [15:02]
<sean-k-mooney> which is just an intersection  [15:02]
<sean-k-mooney> but when ids is None it just uses the list of ids from the file  [15:02]
<sean-k-mooney> with the args we pass and looking at the code, I think ids will be None  [15:04]
<sean-k-mooney> we are not using --failing or --analyze_isolation, and are not using --no-discover or --pdb  [15:04]
<sean-k-mooney> so we will take the else on 493  [15:05]
<sean-k-mooney> and set ids=None  [15:05]
<sean-k-mooney> and then take the if on line 499  [15:05]
<sean-k-mooney> https://github.com/mtreinish/stestr/blob/bbc839fabb28f57d34c9a195b75158fa203988de/stestr/commands/run.py#L490-L501  [15:05]
<clarkb> frickler: any idea if the new qemu 6.2 release fixes the memory thing you discovered?  [15:28]
<frickler> clarkb: well, qemu won't fix it, since it is not considered a bug. there should be a patch to libvirt which gives nova the option to override things, but I haven't checked the release status yet  [15:37]
<clarkb> oh I see  [15:37]
<clarkb> libvirt needs to instruct qemu to change the sizing, right?  [15:38]
<gibi> sean-k-mooney: the order of the tests in the list passed to load_list does not match the execution order of the tests  [15:38]
<gibi> based on my trials  [15:38]
<sean-k-mooney> too bad  [15:39]
<sean-k-mooney> clarkb: yes  [15:39]
<frickler> clarkb: yes  [15:39]
<sean-k-mooney> I have a partial workaround proposed to devstack  [15:39]
<sean-k-mooney> just have not worked on it in a while  [15:39]
<sean-k-mooney> https://review.opendev.org/c/openstack/devstack/+/817075  [15:40]
<sean-k-mooney> basically I would need to uninstall apparmor  [15:40]
<frickler> https://gitlab.com/libvirt/libvirt/-/issues/229  [15:40]
<sean-k-mooney> frickler: yep, so we plan to allow you to set that via the nova.conf  [15:41]
<sean-k-mooney> in the future  [15:41]
<frickler> sean-k-mooney: well, we now have the workaround of adding swap; that seems to work well enough for now, too  [15:41]
<frickler> sean-k-mooney: I think kashyap already started a patch for that  [15:41]
<sean-k-mooney> yeah, swap works; yeah, I think they have  [15:42]
<sean-k-mooney> clarkb: frickler: I know vexxhost wanted to provide 16GB VMs by default  [15:42]
<sean-k-mooney> could we also address this by updating the testing VM size?  [15:42]
<clarkb> sean-k-mooney: we are already doing the larger VMs in vexxhost. But we cannot do this globally because they are basically the only cloud doing that.  [15:43]
<frickler> https://review.opendev.org/c/openstack/nova/+/816823  [15:43]
<sean-k-mooney> clarkb: ah ok  [15:43]
<clarkb> Basically you can opt into the larger flavors, but if you do so your risk goes up, because only ~2 clouds offer those, iirc  [15:43]
<sean-k-mooney> clarkb: I remember the thread about it  [15:44]
<sean-k-mooney> I would have preferred an opt-out  [15:44]
<clarkb> again, we cannot do that because we don't have the redundancy  [15:44]
<clarkb> nor the capacity  [15:44]
<clarkb> if vexxhost was 80% of our capacity then maybe we could do it. But they are like 8%  [15:44]
<sean-k-mooney> well, what I meant was allow the normal labels to give you either the big or small VMs  [15:44]
<clarkb> so we'd have everyone fighting for a tiny portion of our resources with an opt-out  [15:45]
<clarkb> sean-k-mooney: you would have to completely rewrite nodepool to do that  [15:45]
<clarkb> I mean, I guess that is theoretically possible, but I'm not sure anyone has the time for that right now  [15:45]
<sean-k-mooney> not really, just have them use the same label  [15:45]
<clarkb> that will mess up the scheduling for jobs that explicitly need the resources  [15:45]
<sean-k-mooney> it will, yes  [15:46]
<clarkb> and it can allow jobs to only pass on the larger instances  [15:46]
<sean-k-mooney> but it was how it worked before the mail thread  [15:46]
<clarkb> then when they schedule the other 80% of their workload on the smaller flavors, you have a broken gate  [15:46]
<clarkb> it's a really bad situation to get yourself into  [15:46]
<sean-k-mooney> it can, but I think if your job cares, it's not written properly  [15:46]
<clarkb> that isn't true with an opt-out, because most jobs won't know  [15:47]
<clarkb> this is why we make it opt-in. Then you can do that  [15:47]
<sean-k-mooney> and most jobs won't care  [15:47]
<sean-k-mooney> the concurrency would change, but the rest should not  [15:47]
<sean-k-mooney> anyway, there is no point reopening this  [15:47]
<clarkb> If we made that change, my prediction is the openstack gate would be completely wedged within 2 weeks  [15:47]
<sean-k-mooney> I was reading the mail thread on this discussion when it happened  [15:48]
<clarkb> everyone else would be fine because they aren't very sensitive to node sizes, but openstack is, significantly  [15:48]
<clarkb> everyone else being zuul, starlingx, airship, etc  [15:48]
<sean-k-mooney> clarkb: I really don't think it would have any effect on the nova jobs  [15:48]
<clarkb> it would on all of your devstack jobs  [15:48]
<clarkb> (for this reason)  [15:49]
<sean-k-mooney> they would just work  [15:49]
<clarkb> until you landed something that only passed with 16GB of memory  [15:49]
<clarkb> then they would stop working 92% of the time  [15:49]
<sean-k-mooney> that would be a change in tempest  [15:49]
<sean-k-mooney> most likely not a nova change  [15:49]
<clarkb> it's all the same if nova relies on it  [15:49]
<clarkb> it's silly to think of openstack components in that way  [15:50]
<sean-k-mooney> but that's my point: it would not be nova relying on it  [15:50]
<sean-k-mooney> it would be tempest relying on it  [15:50]
<clarkb> I think that distinction doesn't matter  [15:50]
<clarkb> the end result is the same regardless of who you want to blame  [15:50]
<sean-k-mooney> well, I disagree, since it's different people writing and reviewing each repo  [15:50]
<clarkb> but it's the same project, gated together and releasing together. Sure, you can work in isolation and pretend tempest doesn't exist, but you won't get very far  [15:51]
<clarkb> When it comes to OpenStack's gate and CI, that distinction, as things are designed today, isn't meaningful  [15:51]
<clarkb> (and I think trying to assert that distinction is unhealthy for the project)  [15:51]
<sean-k-mooney> well, we do, more or less. yes, we fix tempest issues if it breaks, but we mainly use unit and functional tests today for our testing  [15:51]
<sean-k-mooney> we try to use tempest only for additional test coverage and upgrades  [15:52]
<sean-k-mooney> we try to test the basic functionality with functional tests, including things like live migration  [15:52]
<clarkb> yes, but if the tempest job fails you aren't merging anything  [15:53]
<sean-k-mooney> don't get me wrong, tempest adds value, especially for ceph or inter-service interactions  [15:53]
<sean-k-mooney> true  [15:53]
<sean-k-mooney> I just think you are overestimating how many jobs actually would be affected by 16G vs 8G RAM  [15:54]
<sean-k-mooney> I might also be underestimating it  [15:54]
<clarkb> I think you underestimate it :) we ran that way for a few weeks and we had breakage  [15:54]
<clarkb> when vexxhost made the initial change, we basically said we'd try what you suggest, and yeah, we broke  [15:54]
<sean-k-mooney> that's the thing: I don't recall that breakage affecting nova  [15:55]
<clarkb> it was zuul that noticed it  [15:55]
<sean-k-mooney> right, I remember that and the mail thread that arose  [15:55]
<clarkb> but this same qemu thing would've been papered over similarly  [15:55]
<sean-k-mooney> it would, but as the qemu devs have said, they do not consider it a bug  [15:56]
<clarkb> but it is a bug in our ci system  [15:56]
<clarkb> because it would cause jobs on 8GB instances to fail but pass on 16GB  [15:56]
<sean-k-mooney> clarkb: I am also not particularly happy with having nova expose this, by the way  [15:56]
<clarkb> the exact issue I'm saying we want to avoid  [15:56]
<clarkb> I think we've given a reasonable outlet. If you know you need more memory, it is there, with the caveat that it may not always be there if a cloud has an outage, and that resources are limited  [15:57]
<clarkb> Otherwise we take a conservative approach to avoid wedging the gates of projects that might suddenly rely on more memory  [15:58]
<sean-k-mooney> in the long run I don't think nova should really be exposing a config option to allow configuring the tb-cache size  [15:58]
<sean-k-mooney> if we keep this long term, it really should be a flavor or image property and live with the VM  [15:58]
<sean-k-mooney> by using a host-level config it may break or change with live migration  [15:59]
<clarkb> I personally don't care how it is configured, but it seems reasonable to configure it if the alternative is running out of memory  [15:59]
<sean-k-mooney> well, the other alternative is to use kvm  [16:00]
<clarkb> separately, if people want to find more cloud resources so that we can offer more larger-memory instances, that would be great too. It just seems that right now we don't really have that ability  [16:00]
<sean-k-mooney> but unfortunately we can't really do that in the ci  [16:00]
<clarkb> we cannot use kvm either, for similar reasons  [16:00]
<clarkb> because some clouds don't enable it, and those that do have crashes  [16:00]
<clarkb> :/  [16:00]
<sean-k-mooney> yep  [16:00]
<clarkb> and then it becomes my problem, which is particularly frustrating because we've said over and over and over it isn't a supported configuration due to these problems  [16:01]
<sean-k-mooney> we likely will add a workaround config option with the caveat that you should configure it the same on all hosts  [16:01]
<clarkb> I'd love nothing more than for kvm nested virt to be stable, but unfortunately...  [16:01]
<clarkb> related: there is no nested virt on aarch64, aiui  [16:01]
<sean-k-mooney> we use it almost exclusively for our downstream ci  [16:01]
<sean-k-mooney> e.g. if it's not on baremetal, it's nested virt  [16:02]
<clarkb> sean-k-mooney: I'm sure with specific clouds and specific kernels you can get away with it. But our experience is that we consistently have people bugging us over why it breaks, and I find that frustrating because I shouldn't have to debug when users go off piste  [16:02]
<clarkb> We let the adventurous do it, but if you end up in a tree well... that should be on you, not me  [16:03]
<sean-k-mooney> yep, that's fair  [16:03]
<sean-k-mooney> what I'm hoping to avoid is having to make the nova scheduler aware of this, or changing the objects sent over the rpc bus, specifically the migration_data objects  [16:04]
<clarkb> that said, the vast majority of the testing that tempest needs to do seems to work pretty well on small images like cirros  [16:04]
<clarkb> even with qemu, I mean  [16:04]
<sean-k-mooney> for the most part, yes  [16:04]
<sean-k-mooney> we occasionally have some guest boot timing issues  [16:04]
<sean-k-mooney> but cirros and qemu are generally enough  [16:05]
<clarkb> right, and it is far more reliable, so it makes sense to use in that 95% case  [16:05]
<opendevreview> Andre Aranha proposed openstack/tempest master: WIP - Refactor ssh.Client to allow other clients  https://review.opendev.org/c/openstack/tempest/+/820860  [16:21]
<sean-k-mooney> clarkb: runtime has never been a reason to use kvm for me  [16:21]
<sean-k-mooney> the occasional guest boot timing issues are with tests that don't wait for the guest OS to be ready before doing things like attaching cinder volumes  [16:22]
<sean-k-mooney> that can cause the guest kernel to panic in some cases  [16:22]
<clarkb> I see, it's performance but not wall time  [16:22]
<clarkb> (the distinction is a little blurry, but I think I get it)  [16:22]
<sean-k-mooney> well, really it's an incorrect test  [16:22]
<sean-k-mooney> lee yarwood is trying to fix the test  [16:23]
<sean-k-mooney> by making it wait for the guest to be pingable or sshable  [16:23]
<clarkb> gotcha  [16:23]
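(The fix being described amounts to something like the following hedged sketch; this is not the actual patch under review, and the helper name, port, and timeout are assumptions:)

    import socket
    import time

    def wait_for_guest_reachable(host, port=22, timeout=300):
        # poll until the guest accepts TCP connections on the SSH port, so the
        # test only attaches volumes once the guest OS has actually booted
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=5):
                    return
            except OSError:
                time.sleep(5)
        raise TimeoutError("guest %s:%s never became reachable" % (host, port))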
<sean-k-mooney> he has patches up for review  [16:23]
<clarkb> still probably broken with kvm, but much harder to win the race and fail  [16:23]
<opendevreview> James Parker proposed openstack/tempest master: Add flavor extra spec validation tests  https://review.opendev.org/c/openstack/tempest/+/819920  [16:23]
<sean-k-mooney> clarkb: yeah, kvm would just mask it by allowing the guest to boot faster  [16:24]
<sean-k-mooney> even with qemu it's often fast enough to not trigger it  [16:24]
<sean-k-mooney> clarkb: this is the series https://review.opendev.org/q/topic:%22wait_until_sshable_pingable%22+(status:open%20OR%20status:merged)  [16:26]
<sean-k-mooney> clarkb: he is implementing an old tempest spec  [16:27]
<sean-k-mooney> this one https://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/ssh-auth-strategy.html  [16:27]
<sean-k-mooney> it was never actually fully completed  [16:28]
<opendevreview> Balazs Gibizer proposed openstack/tempest master: Introduce @serial test execution decorator  https://review.opendev.org/c/openstack/tempest/+/821732  [16:45]
<gibi> sean-k-mooney, gmann: please don't judge me based on this pile of hack ^^  [16:46]
<sean-k-mooney> gibi: we won't judge you, just your code :P  [16:47]
<gibi> that is fair, that code deserves judgement  [16:47]
<sean-k-mooney> I see you moved them into a serial subdirectory  [16:47]
<sean-k-mooney> oh, and did an import of the lock  [16:48]
<gibi> the only way to affect test order is the naming of the tests /o\  [16:49]
<sean-k-mooney> hence the zzz prefix  [16:49]
<gibi> yes  [16:49]
<gibi> I need to copy the lock code from fasteners, as it is from 0.16 and we are still on 0.14, and the 0.15 bump is blocked by an eventlet issue  [16:50]
<sean-k-mooney> you know, I'm not even mad at the code. it is a gloriously blatant hack around a real problem  [16:50]
<sean-k-mooney> so the lock is for safety and the renaming is for speed, basically  [16:51]
<gibi> yes, the lock makes the execution correct, but potentially slow; the ordering can speed things up  [16:52]
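(Putting the two pieces together, the approach under review looks roughly like this illustrative sketch; it is not the actual tempest patch, and the class name, lock path, and marker attribute are assumptions:)

    import fasteners

    LOCK_PATH = "/tmp/tempest-serial.lock"  # assumed location

    def serial(cls):
        # decorator: mark a test class as needing exclusive (serial) execution
        cls._serial = True
        return cls

    class BaseTestCase:  # stand-in for tempest's real base test class
        @classmethod
        def setUpClass(cls):
            cls._rw_lock = fasteners.InterProcessReaderWriterLock(LOCK_PATH)
            if getattr(cls, "_serial", False):
                cls._rw_lock.acquire_write_lock()   # run alone
            else:
                cls._rw_lock.acquire_read_lock()    # run in parallel

        @classmethod
        def tearDownClass(cls):
            if getattr(cls, "_serial", False):
                cls._rw_lock.release_write_lock()
            else:
                cls._rw_lock.release_read_lock()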
<sean-k-mooney> yeah, since the serial tests that get the write lock won't start running until the end, we will be parallel for most of the run  [16:53]
<gibi> yes, if the executors are well balanced, then there will be a lot less waiting  [16:53]
<sean-k-mooney> I mean, other than doing this in tox via multiple runs, I don't really have an alternative suggestion  [16:53]
<gibi> yes, a tox sequential run would work, but only for us using tox  [16:54]
<gibi> one possible direction is to look into stestr and extend it with some way to express test ordering  [16:54]
<sean-k-mooney> so fasteners is also apache2, so the licensing should be fine to vendor the lock  [16:55]
<gibi> yes, we are lucky with the licensing  [16:55]
<sean-k-mooney> I think fasteners was born out of oslo.concurrency? or perhaps we just adopted it  [16:56]
*** jpena is now known as jpena|off  [16:58]
<sean-k-mooney> gibi: I think this is the code equivalent of "ugly beautiful"  [16:58]
<sean-k-mooney> gibi: it's less of a hack than the "code" I'm currently reviewing https://review.opendev.org/c/openstack/tripleo-heat-templates/+/818639/1/environments/enable-secure-rbac.yaml#172  [17:00]
<gibi> OK, you win :D  [17:01]
<sean-k-mooney> for context for the rest: we are implementing the global admin, project reader, and project member secure rbac personas in all services via ooo by overriding every policy  [17:02]
<sean-k-mooney> as a workaround until secure rbac actually works upstream  [17:02]
<sean-k-mooney> I'm currently reviewing the backport to make sure it's correct for wallaby  [17:02]
<sean-k-mooney> so my tolerance for hacks might currently be higher than others'.  [17:03]
<gibi> I see  [17:06]
<gmann> gibi: I have not looked at the latest patch set, but timeout and execution wait are the concerns I am expecting with the lock approach  [17:09]
<gibi> gmann: as far as I see, tests will not time out waiting for the lock, as the locking happens in setUpClass  [17:10]
<gibi> the execution wait is a concern we are trying to mitigate with test ordering  [17:10]
<gmann> yeah  [17:11]
*** amoralej is now known as amoralej|off  [17:15]
<opendevreview> Merged openstack/devstack stable/xena: Further fixup for Ubuntu cloud images  https://review.opendev.org/c/openstack/devstack/+/821908  [17:42]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53  https://review.opendev.org/c/openstack/tempest/+/737209  [17:43]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64  https://review.opendev.org/c/openstack/tempest/+/733769  [17:48]
<opendevreview> Andre Aranha proposed openstack/tempest master: WIP - Add libssh Client to use with fips  https://review.opendev.org/c/openstack/tempest/+/821019  [17:53]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64  https://review.opendev.org/c/openstack/tempest/+/733769  [19:27]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix compute quota classes schema for v2.50 and v2.57  https://review.opendev.org/c/openstack/tempest/+/733740  [21:43]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53  https://review.opendev.org/c/openstack/tempest/+/737209  [21:56]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64  https://review.opendev.org/c/openstack/tempest/+/733769  [21:58]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix compute quota classes schema for v2.50 and v2.57  https://review.opendev.org/c/openstack/tempest/+/733740  [21:58]
<gmann> kopecmartin: this is ready, all testing is good https://review.opendev.org/c/openstack/devstack/+/816549  [22:52]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Add schema for compute 2.45 microversion  https://review.opendev.org/c/openstack/tempest/+/508393  [22:58]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Add fields in hypervisor schema for 2.33 and 2.53  https://review.opendev.org/c/openstack/tempest/+/737209  [23:01]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix server group schema for compute microversion 2.64  https://review.opendev.org/c/openstack/tempest/+/733769  [23:01]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Fix compute quota classes schema for v2.50 and v2.57  https://review.opendev.org/c/openstack/tempest/+/733740  [23:02]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Add schema for compute 2.45 microversion  https://review.opendev.org/c/openstack/tempest/+/508393  [23:02]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Remove compute api_extensions config option  https://review.opendev.org/c/openstack/tempest/+/822054  [23:43]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Remove stable/train jobs from master gate  https://review.opendev.org/c/openstack/tempest/+/822055  [23:54]
<opendevreview> Ghanshyam Mann proposed openstack/stackviz master: Define stable/train job in stackviz  https://review.opendev.org/c/openstack/stackviz/+/822056  [23:58]
<opendevreview> Ghanshyam Mann proposed openstack/tempest master: Remove stable/train jobs from master gate  https://review.opendev.org/c/openstack/tempest/+/822055  [23:59]
