*** jph7 is now known as jph | 01:40 | |
opendevreview | Takashi Kajinami proposed openstack/hacking master: Drop tox target for python 3.5 https://review.opendev.org/c/openstack/hacking/+/907282 | 04:04 |
opendevreview | Nobuhiro MIKI proposed openstack/devstack master: glance: Set the port used by uwsgi as a constant https://review.opendev.org/c/openstack/devstack/+/902241 | 06:55 |
opendevreview | Martin Kopec proposed openstack/tempest master: General doc updates https://review.opendev.org/c/openstack/tempest/+/905790 | 08:00 |
opendevreview | Merged openstack/hacking master: Drop tox target for python 3.5 https://review.opendev.org/c/openstack/hacking/+/907282 | 08:37 |
*** zigo_ is now known as zigo | 09:43 | |
gmann | kopecmartin: was it meeting time today, or did I miss the alternate meeting day? | 16:53 |
kopecmartin | #startmeeting qa | 17:00 |
opendevmeet | Meeting started Wed Jan 31 17:00:33 2024 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot. | 17:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 17:00 |
opendevmeet | The meeting name has been set to 'qa' | 17:00 |
kopecmartin | should be right now, or my calendar is mistaken | 17:00 |
gmann | kopecmartin: I think 1 hr before? let me check | 17:01 |
kopecmartin | 17:00 utc | 17:01 |
gmann | kopecmartin: oh you are right, it is now. i had it wrong in my calendar | 17:01 |
gmann | sorry | 17:01 |
kopecmartin | np, i mess up the time zones like 2 out of 10 times too | 17:02 |
kopecmartin | #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours | 17:03 |
kopecmartin | let's go quickly through the agenda | 17:03 |
kopecmartin | #topic Announcement and Action Item (Optional) | 17:03 |
kopecmartin | no updates from my side | 17:03 |
kopecmartin | i have planning of the next PTG on my list, haven't got to that yet | 17:03 |
gmann | ++ | 17:04 |
kopecmartin | #topic Caracal Priority Items progress | 17:04 |
kopecmartin | #link https://etherpad.opendev.org/p/qa-caracal-priority | 17:04 |
kopecmartin | i don't think that there are any updates there | 17:04 |
kopecmartin | everyone is busy with other things | 17:05 |
gmann | there is one related update for RBAC | 17:05 |
gmann | the keystone update to allow project-scoped tokens is merged; I was testing whether it has any impact on Tempest tests | 17:05 |
gmann | with old defaults everything passes, but with new defaults there are some failures; I expect test adjustments are needed. I will work on that after mid-Feb or so | 17:06 |
gmann | #link https://review.opendev.org/q/topic:%22keystone-srbac-test%22 | 17:06 |
gmann | these are ^^ testing changes | 17:06 |
gmann | once we modify the tempest tests and are green on new defaults, we should enable keystone's new defaults by default in devstack and see how it works with all the other services' new defaults | 17:07 |
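For context, flipping keystone to the new RBAC defaults in devstack comes down to the standard oslo.policy switches in keystone.conf. A minimal sketch using devstack's `iniset` helper (whether devstack grows a dedicated knob for this is an assumption at this point, not a merged change):

```bash
# Sketch: enable keystone's new (secure RBAC) policy defaults in a devstack run.
# enforce_new_defaults / enforce_scope are the standard oslo.policy options;
# turning them on by default is the proposal being discussed here.
iniset $KEYSTONE_CONF oslo_policy enforce_new_defaults True
iniset $KEYSTONE_CONF oslo_policy enforce_scope True
```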
kopecmartin | ack, sounds like a plan | 17:07 |
gmann | that will complete our integrated testing of the new RBAC for at least 4-5 services | 17:07 |
gmann | currently all new defaults jobs run with keystone old defaults | 17:08 |
gmann | that is all on this from my side | 17:08 |
kopecmartin | some of the SRBAC issues we can see in the LPs under "Make pre-provisioned credentials stable again" topic, right? | 17:09 |
kopecmartin | e.g. | 17:09 |
kopecmartin | #link https://bugs.launchpad.net/tempest/+bug/2020860 | 17:09 |
gmann | yeah | 17:09 |
gmann | pre-provisioned credentials are also one thing to do as a next step | 17:09 |
gmann | i did not have time to look into those yet | 17:10 |
kopecmartin | okey, yeah me neither tbh | 17:10 |
kopecmartin | the cycle is gonna end soon and we haven't figured out a plan for this "stack.sh fails on Rocky 9 and Centos 9 because of global virtualenv" | 17:11 |
kopecmartin | at some point we wanted to go with this | 17:11 |
kopecmartin | #link https://review.opendev.org/c/openstack/keystone/+/899519 | 17:11 |
kopecmartin | but it doesn't look like that's the plan anymore | 17:11 |
kopecmartin | anyway, i see it like this | 17:13 |
kopecmartin | no one is much affected by this, otherwise someone would have picked it up; clearly it's not a priority .. i guess we're fine then | 17:13 |
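For anyone who does hit the issue, the usual job-side workaround is to opt out of the global virtualenv via devstack's existing `GLOBAL_VENV` toggle; a minimal local.conf sketch (the toggle itself exists in devstack, applying it here is just a suggestion):

```bash
# Workaround sketch for stack.sh failures tied to the global virtualenv on
# Rocky/CentOS 9: fall back to non-venv (system) installs.
GLOBAL_VENV=False
```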
kopecmartin | #topic Gate Status Checks | 17:15 |
kopecmartin | #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade) | 17:15 |
kopecmartin | no reviews | 17:15 |
kopecmartin | #topic Sub Teams highlights | 17:15 |
kopecmartin | Changes with Review-Priority == +1 | 17:15 |
kopecmartin | #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade) | 17:15 |
kopecmartin | no reviews | 17:15 |
kopecmartin | #topic Periodic jobs Status Checks | 17:15 |
kopecmartin | periodic stable full | 17:15 |
kopecmartin | #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2 | 17:15 |
kopecmartin | periodic stable slow | 17:15 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena | 17:15 |
kopecmartin | periodic extra tests | 17:15 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests | 17:16 |
kopecmartin | periodic master | 17:16 |
kopecmartin | #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic | 17:16 |
kopecmartin | all look good, except the tempest-all job | 17:16 |
kopecmartin | it's been timing out for a while now .. i wanted to take a look at that last week but .. ah | 17:16 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-all&project=openstack/tempest | 17:16 |
kopecmartin | we're already at the maximum timeout limit, right? | 17:16 |
kopecmartin | oh, maybe not | 17:17 |
kopecmartin | #link https://opendev.org/openstack/tempest/src/branch/master/zuul.d/integrated-gate.yaml | 17:17 |
gmann | I think in tempest-all we run everything, including scenario tests, in parallel, right? | 17:17 |
kopecmartin | some jobs, e.g. tempest-ipv6-only, have the timeout set to 3 hours; tempest-all timed out slightly over 2 | 17:17 |
gmann | yeah https://opendev.org/openstack/tempest/src/branch/master/tox.ini#L78 | 17:18 |
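For reference, the tox `all` target amounts to running the entire suite, slow tests included, across parallel workers; roughly this (a sketch of the idea, not the exact tox.ini invocation — the regex and concurrency values are assumptions):

```bash
# Approximately what the tempest-all job exercises: every test (slow ones
# included) in parallel, which is why it pushes past a 2-hour timeout.
tempest run --regex '.*' --concurrency 4
```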
kopecmartin | easy fix then | 17:18 |
gmann | the slow test job is also 3 hrs, right? | 17:19 |
gmann | and there's no stackviz report in that, did you see? | 17:20 |
gmann | #link https://zuul.openstack.org/build/d90f94dbf12946eeaaab91c489d37832/logs | 17:20 |
kopecmartin | i think so, checking .. yeah, no report, the playbook got killed - 2024-01-31 06:00:33.447887 | RUN END RESULT_TIMED_OUT: [untrusted : opendev.org/openstack/tempest/playbooks/devstack-tempest.yaml@master] | 17:21 |
kopecmartin | after 2 hours exactly | 17:21 |
gmann | kopecmartin: anyways, as it runs slow tests too, I think making the timeout 3 hrs makes sense | 17:21 |
gmann | yeah, maybe that is why | 17:21 |
gmann | and because it was periodic we might have missed doing that when we increased it for the slow test job | 17:22 |
opendevreview | Martin Kopec proposed openstack/tempest master: Increase timeout for tempest-all job https://review.opendev.org/c/openstack/tempest/+/907342 | 17:22 |
gmann | perfect, thanks | 17:22 |
kopecmartin | moving on | 17:23 |
kopecmartin | #topic Distros check | 17:23 |
kopecmartin | cs-9 | 17:23 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0 | 17:23 |
kopecmartin | debian | 17:23 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0 | 17:23 |
kopecmartin | rocky | 17:23 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx | 17:23 |
kopecmartin | openEuler | 17:23 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0 | 17:23 |
kopecmartin | jammy | 17:23 |
kopecmartin | #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0 | 17:23 |
kopecmartin | these look good \o/ | 17:23 |
kopecmartin | #topic Open Discussion | 17:24 |
kopecmartin | anything for the open discussion? | 17:24 |
kopecmartin | the doc patch should be ready now | 17:26 |
kopecmartin | #link https://review.opendev.org/c/openstack/tempest/+/905790 | 17:26 |
gmann | +W, thanks | 17:26 |
kopecmartin | thanks | 17:27 |
kopecmartin | i'm a bit lost with this one | 17:27 |
kopecmartin | #link https://review.opendev.org/c/openstack/tempest/+/891123 | 17:27 |
kopecmartin | I understand that nova has an API that can list the nodes for me | 17:27 |
kopecmartin | do we want to set that new config (src/dest host) in devstack all the time, for all the jobs? | 17:27 |
kopecmartin | i mean, i can't make any job set that config from tempest's side, it has to be done in devstack; the question is whether to do it for all (multinode) jobs or not | 17:28 |
kopecmartin | if not, we could put the setting of that option under an if and trigger that if only from the jobs we want, that could work | 17:28 |
gmann | let me check the comment there | 17:28 |
kopecmartin | i'd like to see that test executed in the CI too, but setting the configs is a bit complex in the CI | 17:29 |
gmann | kopecmartin: i remember now, so in devstack we can set it easily right? | 17:30 |
kopecmartin | i think so, shouldn't be too difficult | 17:30 |
kopecmartin | it's the only place where we know the names of the hosts | 17:31 |
kopecmartin | so it's either there or nowhere | 17:31 |
gmann | for example, we do it for flavors #link https://github.com/openstack/devstack/blob/5c1736b78256f5da86a91c4489f43f8ba1bce224/lib/tempest#L298 | 17:31 |
gmann | yeah | 17:31 |
gmann | we can call the nova GET services API, filter for enabled compute services, and get the hostnames | 17:32 |
gmann | something like this #link https://github.com/openstack/tempest/blob/master/tempest/api/compute/base.py#L688 | 17:33 |
gmann | kopecmartin: and we can set it only for the multinode jobs, based on a requested var, so that we do not call the nova service APIs every time in devstack | 17:34 |
kopecmartin | yeah, that works | 17:34 |
gmann | TEMPEST_SET_SRC_DEST_HOST=False by default in devstack, and set it to true from the multinode job | 17:34 |
gmann | something like this | 17:34 |
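A rough sketch of how that could look in devstack's lib/tempest, mirroring the flavor discovery linked above (the variable name, the tempest option names, and the exact filtering are all assumptions pending the actual change):

```bash
# Hypothetical devstack sketch: discover two enabled compute hosts and feed
# them to tempest as migration source/destination hosts.
TEMPEST_SET_SRC_DEST_HOST=$(trueorfalse False TEMPEST_SET_SRC_DEST_HOST)
if [[ "$TEMPEST_SET_SRC_DEST_HOST" == "True" ]]; then
    # list enabled nova-compute services and take the first two hostnames
    hosts=$(openstack compute service list --service nova-compute \
            -f value -c Host | head -n 2)
    iniset $TEMPEST_CONFIG compute src_host "$(echo "$hosts" | sed -n 1p)"
    iniset $TEMPEST_CONFIG compute dest_host "$(echo "$hosts" | sed -n 2p)"
fi
```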
kopecmartin | great | 17:34 |
kopecmartin | cool, what about this one | 17:35 |
kopecmartin | #link https://review.opendev.org/c/openstack/tempest/+/898732 | 17:35 |
gmann | because we should have those tests run somewhere in our CI, otherwise it is easy to introduce errors in them without knowing | 17:35 |
kopecmartin | agree, yes | 17:35 |
kopecmartin | to me it seems like a good case for reusing the min_compute_nodes option | 17:36 |
kopecmartin | in general, i prefer solutions that aren't hardcoded .. although at this point i would agree with just len(nodes)/2 just to get it over with :D | 17:37 |
gmann | kopecmartin: but we are not using min_compute_nodes to boot servers, that is only used to decide whether we want to run the test or not | 17:37 |
gmann | the problem is here, where it can boot 100 servers if we have an env of 100 compute nodes https://review.opendev.org/c/openstack/tempest/+/898732/15/tempest/scenario/test_instances_with_cinder_volumes.py#90 | 17:38 |
kopecmartin | the loop is under control using that variable on line 108 | 17:39 |
kopecmartin | even if there are 100 hosts, if min_compute_nodes is 1 then it will create just one server | 17:39 |
gmann | so if min_compute_nodes is 2, which is the case for multinode jobs, the number of hosts selected will be 2? | 17:40 |
kopecmartin | yes, per hosts[:min_compute_nodes] | 17:41 |
kopecmartin | or we can create a new opt, like max_compute_nodes | 17:41 |
kopecmartin | and let it iterate only until we reach the limit, not over all the hosts found | 17:41 |
gmann | so basically the user needs to increase min_compute_nodes to the number of nodes they want to run this test on. for example, if they have 100 nodes and want to test 20, they can set min_compute_nodes to 20 and this test will create servers/volumes on those 20 nodes | 17:43 |
gmann | right? | 17:43 |
kopecmartin | yes | 17:44 |
gmann | ok, i think that should work. let me review it after the meeting; I think i missed that point | 17:45 |
gmann | thanks for clarification | 17:45 |
kopecmartin | i'm judging based on the for loop there, e.g.: | 17:45 |
kopecmartin | #link https://paste.openstack.org/show/bfAyQIkKDdr8lWwm4pGJ/ | 17:45 |
kopecmartin | although now i'm not sure where the logic is that creates the servers on all (i.e. different) nodes .. i don't see how openstack knows which node to create the resource on | 17:46 |
gmann | I think specifying the node is not needed, as it can boot servers on any of the available nodes, which is ok | 17:48 |
kopecmartin | yeah, i guess ... i need to revisit the patch, i forgot some details .. it needs a recheck anyway | 17:49 |
gmann | anyways I will review it today and add a comment if any update is needed | 17:49 |
kopecmartin | good, at least we agree on the approach, i'll check the one thing and comment there | 17:49 |
kopecmartin | perfect, thank you | 17:49 |
gmann | ++ | 17:49 |
kopecmartin | then we have this beauty :D (the last one, I promise) | 17:50 |
kopecmartin | #link https://review.opendev.org/c/openstack/tempest/+/879923 | 17:50 |
kopecmartin | i think that the name of the --no-prefix opt is the problem | 17:50 |
kopecmartin | it can conflict with the --prefix (not technically, but based on the name) | 17:50 |
gmann | I think this needs more discussion on the design; is it too urgent to delay till the PTG? | 17:50 |
kopecmartin | well, it's 2 months .. i hope i'll be doing different things then | 17:51 |
gmann | if yes, then how about we have a call to finalize it, as doing it via gerrit is becoming a little difficult | 17:51 |
kopecmartin | sure, that would work | 17:51 |
gmann | sure, maybe we can have a call after mid-Feb (I need to finish a few things before that)? | 17:51 |
kopecmartin | ok, sounds good | 17:52 |
kopecmartin | in the meantime i'll try to polish the code a bit more | 17:52 |
gmann | cool, I will send an invite for that. does the same time as our meeting work, Wed 17 UTC? | 17:52 |
kopecmartin | yes | 17:52 |
gmann | Feb 21, 17 UTC | 17:52 |
kopecmartin | ack | 17:53 |
gmann | kopecmartin: i would say wait for our discussion before updating anything, so that we save your time in case we agree on a different approach | 17:53 |
opendevreview | Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver https://review.opendev.org/c/openstack/devstack/+/907110 | 17:53 |
kopecmartin | okey then | 17:54 |
gmann | cool | 17:54 |
kopecmartin | #topic Bug Triage | 17:54 |
kopecmartin | #link https://etherpad.openstack.org/p/qa-bug-triage-caracal | 17:54 |
kopecmartin | there's been a small increase in the bugs in tempest .. but i haven't checked the details yet | 17:54 |
gmann | kopecmartin: just one thing on the call: should I send the invite to your redhat email or gmail? | 17:55 |
kopecmartin | anyway, that's all from me | 17:55 |
kopecmartin | mkopec@redhat.com | 17:55 |
gmann | sent | 17:56 |
kopecmartin | thank you | 17:56 |
kopecmartin | although the time is a bit weird, 2am on Thursday for me | 17:56 |
gmann | updated the time in UTC, i selected the wrong one initially :) | 17:56 |
kopecmartin | .. yeah :D | 17:56 |
gmann | :) | 17:56 |
kopecmartin | yep, much better now | 17:57 |
kopecmartin | i'm not that young anymore, i'm usually asleep at 2am :D | 17:57 |
kopecmartin | anyway, that's a wrap | 17:58 |
kopecmartin | thank you for all the feedback | 17:58 |
gmann | hehe, I can still do that when I need to finish things in the quiet time while my son sleeps :P | 17:58 |
kopecmartin | yeah, the quietness is great at night | 17:58 |
gmann | otherwise he wants to code with me and won't let me work :) | 17:58 |
gmann | nothing else from me, thanks kopecmartin. | 17:59 |
kopecmartin | awesome, he should really start with programming as soon as possible | 17:59 |
gmann | :) | 17:59 |
kopecmartin | perfect mind exercise | 17:59 |
kopecmartin | .. | 17:59 |
kopecmartin | #endmeeting | 17:59 |
opendevmeet | Meeting ended Wed Jan 31 17:59:57 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 17:59 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/qa/2024/qa.2024-01-31-17.00.html | 17:59 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/qa/2024/qa.2024-01-31-17.00.txt | 17:59 |
opendevmeet | Log: https://meetings.opendev.org/meetings/qa/2024/qa.2024-01-31-17.00.log.html | 17:59 |
abhishekk | Does anybody know whether post-run is effective on a grenade job for executing a script after the setup completes? | 18:25 |
abhishekk | If yes, can we restrict it to run only after the old setup and not after the new setup? | 18:25 |
clarkb | abhishekk: are you asking about zuul post-run? | 18:26 |
abhishekk | yes | 18:26 |
clarkb | zuul's idea for post-run is that you use it to perform actions after your testing is completed, like collecting logs, copying artifacts, etc. Basically not stuff that is being tested, but stuff necessary to record and report | 18:27 |
abhishekk | I want to create some entries in the old db and, after the upgrade, verify they are there in the new db | 18:28 |
clarkb | you should do that as part of the actual run | 18:28 |
clarkb | as that is part of your testing | 18:28 |
abhishekk | the thing is I am changing the db in between | 18:28 |
abhishekk | pre-upgrade I am using sqlite and after the upgrade I am using mysql | 18:29 |
abhishekk | so I need to verify the data from the sqlite db is transferred to mysql | 18:29 |
clarkb | and I'm guessing you don't want to override the parent job's run: playbook? | 18:29 |
abhishekk | https://review.opendev.org/c/openstack/glance/+/901649 | 18:30 |
clarkb | my suggestion would still be to add your checks to the run playbook, because validating database content is generally useful | 18:30 |
abhishekk | I want to write something as a TODO in the commit | 18:30 |
clarkb | right, I think that will involve changes within grenade's test runs, not the post-run collection. I believe there are hooks in grenade to do validation like that | 18:31 |
abhishekk | could you point to one, if possible? | 18:32 |
clarkb | well the grenade readme says this: allow projects to create resources that should survive upgrade. verify resources are still working during shutdown. verify resources are still working after upgrade | 18:33 |
abhishekk | are you talking about this? | 18:33 |
abhishekk | https://github.com/openstack/grenade/tree/master/projects/40_glance | 18:33 |
clarkb | no https://opendev.org/openstack/grenade/src/branch/master/README.rst is where the text comes from | 18:34 |
clarkb | looks like you should create a resources.sh script under 40_glance to drive this | 18:34 |
clarkb | nova does so | 18:34 |
clarkb | there is a create and verify hook | 18:34 |
clarkb | so you would create whatever you need in order to leave a side effect in the database. Then verify would probably use the APIs to read it back and confirm the database still contains that info | 18:35 |
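To make the suggestion concrete, here is a minimal sketch of what a projects/40_glance/resources.sh could look like, modeled on the create/verify/destroy convention nova's script follows (the glance-specific resource and all names are assumptions, not the actual change):

```bash
#!/bin/bash
# Hypothetical projects/40_glance/resources.sh sketch (names assumed).

function create {
    # create an image on the old side so it lands in the pre-upgrade (sqlite) db
    openstack image create --disk-format qcow2 --container-format bare \
        --file /etc/hosts grenade_glance_image
}

function verify {
    # after the upgrade, read the image back through the API to confirm the
    # sqlite data made it into mysql
    openstack image show grenade_glance_image
}

function destroy {
    openstack image delete grenade_glance_image
}

# grenade invokes the script with the phase as the first argument
case $1 in
    "create") create ;;
    "verify") verify ;;
    "destroy") destroy ;;
esac
```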
abhishekk | ack, thanks | 18:35 |
abhishekk | will have a look | 18:36 |
clarkb | in general zuul expects you to run the setup necessary to perform testing in pre-run (so git repo setup happens here, as well as network configuration etc), then run does your actual testing, then post-run does reporting and collecting and shouldn't be used for testing | 18:37 |
clarkb | abhishekk: https://opendev.org/openstack/grenade/src/branch/master/PLUGINS.rst this seems to be the documentation | 18:42 |
clarkb | and nova has examples | 18:42 |
abhishekk | ack, looking at it; got some idea from resources.sh, need to understand more about when the destroy phase executes (in the old or new setup) | 18:44 |
clarkb | that PLUGINS.rst doc has a flow documented | 18:44 |
clarkb | looks like destroy is at the very end | 18:44 |
abhishekk | ++ | 18:44 |
abhishekk | makes sense | 18:45 |
abhishekk | Thanks again, looks easy now | 18:46 |
opendevreview | Merged openstack/tempest master: General doc updates https://review.opendev.org/c/openstack/tempest/+/905790 | 19:45 |
gmann | kopecmartin: left some comments on this; the test is not running in any jobs - https://review.opendev.org/c/openstack/tempest/+/898732 | 20:15 |
opendevreview | Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver https://review.opendev.org/c/openstack/devstack/+/907110 | 20:26 |
opendevreview | Ashley Rodriguez proposed openstack/devstack-plugin-ceph master: Add ingress deamon https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/900348 | 20:45 |
opendevreview | Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver https://review.opendev.org/c/openstack/devstack/+/907110 | 21:37 |