opendevreview | Dan Smith proposed openstack/devstack master: Change DB counting mechanism https://review.opendev.org/c/openstack/devstack/+/839820 | 01:58 |
opendevreview | Dan Smith proposed openstack/devstack master: Improve API log parsing https://review.opendev.org/c/openstack/devstack/+/839067 | 01:58 |
opendevreview | Dan Smith proposed openstack/devstack master: WIP: Test static perfdata comparisons https://review.opendev.org/c/openstack/devstack/+/838947 | 01:58 |
*** bhagyashris|ruck is now known as bhagyashris|sick | 04:37 | |
*** jpena|off is now known as jpena | 07:07 | |
bkopilov | Hi experts, a question about tempest cleanup: I see in base classes (for example, the volume base class) that we register cleanup at class level | 11:12 |
bkopilov | meaning we clean up the volumes at the end of the class (resource_cleanup) and not at test-case level | 11:13 |
bkopilov | Why do we delete the resource at class level, while elsewhere I see it at test-case level? | 11:13 |
*** iurygregory__ is now known as iurygregory | 11:14 | |
frickler | bkopilov: I wouldn't call myself expert on tempest, but I think it depends on whether the resource can be reused for multiple tests within a class or whether each individual test needs a new one | 11:16 |
bkopilov | frickler, but the resources are usually per test case and they should be deleted there. | 11:19 |
bkopilov | why do we keep "all" created volumes until the class ends? assume we have a class with 10 test cases that create volumes | 11:19 |
bkopilov | in that case, if deleting a volume fails, you will not know for sure whether it is because of test x or some other issue | 11:20 |
frickler | bkopilov: IMO if the assumption is that a deletion could fail in relation with a test, the deletion should explicitly be part of the test and not be delayed to the cleanup phase | 11:36 |
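The per-test alternative frickler describes can be sketched with stdlib unittest's addCleanup, which ties each resource's deletion to the test that created it. This is only an illustration with a made-up client class, not tempest's real base classes or clients:

```python
import unittest


class FakeVolumeClient:
    """Hypothetical stand-in for a volume API client."""

    def __init__(self):
        self.volumes = set()

    def create_volume(self, name):
        self.volumes.add(name)
        return name

    def delete_volume(self, name):
        self.volumes.remove(name)


class VolumeTest(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self.client = FakeVolumeClient()

    def test_create_volume(self):
        vol = self.client.create_volume("vol-1")
        # The cleanup runs right after this test finishes, so a delete
        # failure is attributed to this test case rather than surfacing
        # later as a class-level teardown error.
        self.addCleanup(self.client.delete_volume, vol)
        self.assertIn(vol, self.client.volumes)
```

With this pattern, a failing delete fails the test that created the volume, which is exactly the attribution bkopilov is asking for.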
*** pojadhav is now known as pojadhav|afk | 13:50 | |
bkopilov | frickler, ack , agree. | 14:05 |
bkopilov | frickler, if you have 10 test cases for volume_create and the failure point is deleting a volume in cleanup, it means all 10 will pass but there will be an error in tear_down | 14:06 |
bkopilov | frickler, i think i am going to raise this issue in a bug | 14:07 |
gmann | bkopilov: the reason is the same as what frickler mentioned: if a resource is shared among the tests in a class then we create/delete the resource at class level, otherwise at test level | 14:10 |
bkopilov | gm | 14:11 |
gmann | one good example is GET resource tests, where we create the resource at class level and check all GET APIs | 14:11 |
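gmann's GET example can be sketched the same way: a read-only resource is built once in the class-level setup and removed once in the matching class-level teardown. Again, the client here is a hypothetical stand-in, and stdlib setUpClass/tearDownClass stand in for tempest's resource_setup/resource_cleanup:

```python
import unittest


class FakeServerClient:
    """Hypothetical stand-in for a compute API client."""

    def __init__(self):
        self.servers = {}

    def create_server(self, name):
        self.servers[name] = {"name": name, "status": "ACTIVE"}
        return self.servers[name]

    def show_server(self, name):
        return self.servers[name]

    def delete_server(self, name):
        self.servers.pop(name)


class ServerShowTest(unittest.TestCase):
    """Read-only GET tests can safely share one class-level resource."""

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.client = FakeServerClient()
        cls.server = cls.client.create_server("shared-server")

    @classmethod
    def tearDownClass(cls):
        # One deletion for the whole class, mirroring resource_cleanup().
        cls.client.delete_server(cls.server["name"])
        super().tearDownClass()

    def test_show_server_name(self):
        self.assertEqual(
            "shared-server", self.client.show_server("shared-server")["name"])

    def test_show_server_status(self):
        self.assertEqual(
            "ACTIVE", self.client.show_server("shared-server")["status"])
```

Since the GET tests never mutate the server, sharing it costs nothing and saves one create/delete round trip per test.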
bkopilov | gmann, but it creates a bug if the failure is on deletion | 14:11 |
gmann | bkopilov: but if any resource is only used at test level and we wait to clean it at class level, feel free to propose a change, that will be a valid thing to do | 14:12 |
bkopilov | gmann, need to think about , thanks | 14:12 |
gmann | bkopilov: on that, we delete resources with an ignore-not-found try-block, meaning if a resource is deleted before class/test cleanup starts then we do not fail the test | 14:13 |
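The ignore-not-found guard gmann mentions can be sketched generically: wrap the delete call and swallow only the not-found error, so a resource already removed by the test itself doesn't fail cleanup. The NotFound class and helper name below are illustrative, not tempest's actual ones:

```python
class NotFound(Exception):
    """Illustrative not-found error, standing in for a client's 404."""


def call_and_ignore_not_found(func, *args, **kwargs):
    """Run a cleanup call, treating 'already gone' as success."""
    try:
        return func(*args, **kwargs)
    except NotFound:
        # The resource was deleted earlier (e.g. by the test itself),
        # which is exactly the outcome cleanup wants anyway.
        return None


store = {"vol-1"}


def delete_volume(name):
    if name not in store:
        raise NotFound(name)
    store.remove(name)


call_and_ignore_not_found(delete_volume, "vol-1")
call_and_ignore_not_found(delete_volume, "vol-1")  # already gone: ignored
```

Any other exception from the delete still propagates, so real API failures during cleanup are not hidden.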
gmann | bkopilov: but again, if there is any issue in some test with this ^^ feel free to propose a fix and we can check the issue | 14:13 |
gmann | bkopilov: remember, most cleanups in tests are best effort to "not leak resources in the testing cloud"; they are not perfect and we keep fixing them regularly. | 14:14 |
gmann | bkopilov: or you can just raise bug in tempest and ping me here, so that we can check before you do the changes. | 14:14 |
gmann | bkopilov: or I am here if you have anything failing and want to discuss | 14:15 |
opendevreview | yatin proposed openstack/devstack master: Collect status of all services https://review.opendev.org/c/openstack/devstack/+/839752 | 14:21 |
slaweq | gmann: regarding secure rbac, I just tried locally running all neutron_tempest_plugin.api tests with enforce_new_defaults=True in Neutron | 14:22 |
slaweq | I also sent DNM patch https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/839954 to check that in Zuul | 14:22 |
slaweq | but the main issue there is that in tempest (and neutron-tempest-plugin) the admin_client is always in the same tenant, which is a different one than the tenant of os.client | 14:23 |
gmann | slaweq: and enforce_scope=True also? | 14:23 |
slaweq | so this admin_client doesn't have any access to the client's resources | 14:23 |
slaweq | gmann: locally I tried with enforce_scope=False first but then also with enforce_scope=True | 14:24 |
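For reference, the two flags being toggled here are standard oslo.policy options, set per service. A minimal sketch as a config-file fragment (the section and option names are oslo.policy's; whether you set this in neutron.conf directly or via devstack's NEUTRON_ENFORCE_SCOPE is deployment-specific):

```ini
[oslo_policy]
# Enforce the new secure-RBAC default policy rules instead of the legacy ones.
enforce_new_defaults = True
# Additionally enforce token scope (slaweq tried both False and True).
enforce_scope = True
```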
gmann | ohk, NEUTRON_ENFORCE_SCOPE sets both in devstack | 14:24 |
slaweq | so with a small "hack" https://review.opendev.org/c/openstack/neutron/+/839952 most of the tests were passing for me | 14:24 |
slaweq | there are some issues with sharing resources but I will check that | 14:25 |
gmann | slaweq: ohk, that is actually what new RBAC was securing: not allowing projectA-admin to do anything on projectB-admin's resources | 14:25 |
slaweq | main question is: should tempest be changed to always create admin_client in the same tenant as the client? | 14:25 |
slaweq | or should we change somehow all of our tests? | 14:25 |
gmann | slaweq: yeah, that is the target ^^ and we will be able to see the same admin accessing resources in their own project | 14:26 |
gmann | slaweq: I will say let's wait for the RBAC discussion in the policy popup meeting, in case we decide something on the scope things. | 14:26 |
slaweq | ok | 14:26 |
slaweq | when will this discussion be? next Tuesday? | 14:26 |
gmann | slaweq: but one thing we can do is to set enforce_new_defaults=True only and see how the tests behave | 14:27 |
gmann | slaweq: yeah https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting | 14:27 |
slaweq | yes, that's what I did locally | 14:27 |
gmann | slaweq: I think that conflicts with your neutron meeting? | 14:27 |
slaweq | and that's what didn't work properly for me, due to this limitation of admin to its own project | 14:27 |
slaweq | yes, it is | 14:27 |
slaweq | and next Tuesday is a public holiday in Poland so I will not be available | 14:28 |
gmann | slaweq: ok, I think we need to change the admin client. let me check on the tempest side because I also need to fix a bug for member/reader having the same project_id as admin | 14:28 |
slaweq | sure | 14:28 |
slaweq | I can continue that next week :) | 14:28 |
gmann | slaweq: sure, no hurry. meanwhile let's see if we will be ready with that project_id thing, which is needed for new default testing | 14:29 |
gmann | slaweq: just to know, in case I look for a time for the rbac discussion: what time can work for you or neutron folks? | 14:29 |
slaweq | if Tuesday then e.g. 1300 UTC would work for me | 14:30 |
gmann | I can try to check in the 3rd May call and see if we can change it for a future call | 14:31 |
slaweq | at least every second week | 14:31 |
slaweq | thx | 14:31 |
gmann | slaweq: starting with 10th May or 17th ? | 14:31 |
slaweq | 10th is good for me | 14:32 |
gmann | k | 14:32 |
opendevreview | Dan Smith proposed openstack/devstack master: Change DB counting mechanism https://review.opendev.org/c/openstack/devstack/+/839820 | 14:46 |
*** jpena is now known as jpena|off | 16:30 | |
opendevreview | yatin proposed openstack/devstack master: Collect status of all services https://review.opendev.org/c/openstack/devstack/+/839752 | 16:40 |
opendevreview | yatin proposed openstack/devstack master: DNM: Testing only jammy job https://review.opendev.org/c/openstack/devstack/+/839389 | 16:40 |
dansmith | slaweq: still around? | 18:18 |
opendevreview | Dan Smith proposed openstack/devstack master: Change DB counting mechanism https://review.opendev.org/c/openstack/devstack/+/839820 | 18:24 |
opendevreview | Dan Smith proposed openstack/devstack master: Improve API log parsing https://review.opendev.org/c/openstack/devstack/+/839067 | 18:24 |
bkopilov | gmann, thanks | 18:43 |
*** spotz_ is now known as spotz | 19:53 | |
opendevreview | Dan Smith proposed openstack/devstack master: Change DB counting mechanism https://review.opendev.org/c/openstack/devstack/+/839820 | 20:18 |
opendevreview | Dan Smith proposed openstack/devstack master: Improve API log parsing https://review.opendev.org/c/openstack/devstack/+/839067 | 20:18 |
opendevreview | Dan Smith proposed openstack/devstack master: WIP: Test static perfdata comparisons https://review.opendev.org/c/openstack/devstack/+/838947 | 20:18 |
opendevreview | Dan Smith proposed openstack/devstack master: Change DB counting mechanism https://review.opendev.org/c/openstack/devstack/+/839820 | 22:09 |
opendevreview | Dan Smith proposed openstack/devstack master: Improve API log parsing https://review.opendev.org/c/openstack/devstack/+/839067 | 22:09 |
opendevreview | Dan Smith proposed openstack/devstack master: WIP: Test static perfdata comparisons https://review.opendev.org/c/openstack/devstack/+/838947 | 22:09 |
dansmith | gmann: with these two changes ^ I get at least a couple of stable runs that show 0% change for API and DB loads | 22:10 |
dansmith | the "test static perfdata comparisons" now reports no change (locally, hopefully in gate on the next run) | 22:10 |
dansmith | but just FYI that's where I'm at | 22:11 |
dansmith | the DB one ended up being a lot harder than I was hoping; just turning on mysql logging wasn't enough | 22:11 |
dansmith | it defaults to recording 10k queries, which I had bumped to 100k | 22:11 |
dansmith | turns out, keystone issues almost 100k queries before devstack has even finished stacking, and we're waaaay over 100k after a full tempest run | 22:12 |
dansmith | turning mysql up to 1m rows causes it to OOM in our workers | 22:12 |
dansmith | so I took a different approach, which is more complicated but hopefully tenable: way less performance impact, and it gives us what we want | 22:13 |
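One low-overhead way to count queries without recording each statement is to sample the server's cumulative counters (e.g. MySQL's `Questions` status variable via `SHOW GLOBAL STATUS`) before and after a run and diff them. The sketch below only shows the diffing step over hard-coded snapshots; it is not necessarily the approach dansmith's devstack patch took:

```python
def diff_status(before, after, keys=("Questions",)):
    """Return per-counter deltas between two SHOW GLOBAL STATUS samples.

    Each sample is a mapping of status variable name to its (string)
    value, as the mysql client would report it.
    """
    return {k: int(after[k]) - int(before[k]) for k in keys}


# Illustrative snapshots, as if sampled before and after a tempest run.
before = {"Questions": "120000"}
after = {"Questions": "245000"}

print(diff_status(before, after))  # → {'Questions': 125000}
```

Counters like this cost essentially nothing to read, unlike logging a million statements, which is the OOM problem described above.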
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!