17:00:33 #startmeeting qa
17:00:33 Meeting started Wed Jan 31 17:00:33 2024 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:33 The meeting name has been set to 'qa'
17:00:43 should be right now, or my calendar is mistaken
17:01:00 kopecmartin: I think 1 hr before? let me check
17:01:41 17:00 utc
17:01:53 kopecmartin: oh you are right, it is now. i had it wrong in my calendar
17:01:55 sorry
17:02:23 np, i mess up the time zones like 2 out of 10 times too
17:03:05 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
17:03:19 let's go quickly through the agenda
17:03:28 #topic Announcement and Action Item (Optional)
17:03:36 no updates from my side
17:03:54 i have planning of the next PTG on my list, haven't got to that yet
17:04:24 ++
17:04:27 #topic Caracal Priority Items progress
17:04:27 #link https://etherpad.opendev.org/p/qa-caracal-priority
17:04:55 i don't think that there are any updates there
17:05:05 everyone is busy with other things
17:05:06 there is one related update for RBAC
17:05:34 the keystone update to allow project scope tokens is merged. I was testing whether it has any impact on Tempest tests or not
17:06:11 with old defaults everything passes, but with new defaults there are some failures which I hope just need test adjustments. I will work on that after mid Feb or so
17:06:34 #link https://review.opendev.org/q/topic:%22keystone-srbac-test%22
17:06:39 these are ^^ testing changes
17:07:23 once we modify the tempest tests and are green on new defaults, we should enable the keystone new defaults by default in devstack and see how it works with all the other services' new defaults
17:07:48 ack, sounds like a plan
17:07:53 that will complete our integrated testing of the new RBAC, at least for 4-5 services
17:08:06 currently all new-defaults jobs run with keystone old defaults
17:08:36 that is all on this from my side
17:09:19 some of the SRBAC issues we can see in the LPs under the "Make pre-provisioned credentials stable again" topic, right?
17:09:29 e.g.
17:09:30 #link https://bugs.launchpad.net/tempest/+bug/2020860
17:09:34 yeah
17:09:45 pre-provisioned is also one thing to do as a next step
17:10:01 i did not have time to look into those yet
17:10:27 okey, yeah, me neither tbh
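To make the "project scope token" point above concrete, here is a minimal sketch of project- vs system-scoped authentication with keystoneauth1. The endpoint and credentials are placeholders; this only illustrates the scope distinction behind the new keystone defaults, not the keystone change or the tempest adjustments under review.

```python
# Minimal sketch of token scoping with keystoneauth1. All credential values
# and the auth_url are placeholders, not taken from the meeting.
from keystoneauth1 import session
from keystoneauth1.identity import v3

# Project-scoped: the token is bound to a single project.
project_auth = v3.Password(
    auth_url='http://keystone.example/identity/v3',   # placeholder endpoint
    username='demo', password='secret',
    user_domain_id='default',
    project_name='demo', project_domain_id='default',
)

# System-scoped: the token targets system-level APIs, the scope the
# secure-RBAC ("new defaults") policies distinguish from project scope.
system_auth = v3.Password(
    auth_url='http://keystone.example/identity/v3',   # placeholder endpoint
    username='admin', password='secret',
    user_domain_id='default',
    system_scope='all',
)

for auth in (project_auth, system_auth):
    sess = session.Session(auth=auth)
    print(sess.get_token())  # issue a token with the given scope
```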
17:11:00 the cycle is gonna end soon and we haven't figured out a plan for "stack.sh fails on Rocky 9 and Centos 9 because of global virtualenv"
17:11:15 at some point we wanted to go with this
17:11:16 #link https://review.opendev.org/c/openstack/keystone/+/899519
17:11:25 but it doesn't look like that's the plan anymore
17:13:26 anyway, i see it like this
17:13:47 no one is very affected by this, otherwise someone would have picked it up; clearly it's not a priority ..
i guess we're fine then
17:15:19 #topic Gate Status Checks
17:15:19 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
17:15:25 no reviews
17:15:33 #topic Sub Teams highlights
17:15:33 Changes with Review-Priority == +1
17:15:33 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
17:15:38 no reviews
17:15:53 #topic Periodic jobs Status Checks
17:15:53 periodic stable full
17:15:53 #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2
17:15:55 periodic stable slow
17:15:57 #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
17:15:59 periodic extra tests
17:16:01 #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
17:16:03 periodic master
17:16:05 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
17:16:07 all look good, except the tempest-all job
17:16:22 it's been timing out for a while now .. i wanted to take a look at that last week but .. ah
17:16:29 #link https://zuul.openstack.org/builds?job_name=tempest-all&project=openstack/tempest
17:16:46 we're already at the maximum timeout limit, right?
17:17:23 oh, maybe not
17:17:30 #link https://opendev.org/openstack/tempest/src/branch/master/zuul.d/integrated-gate.yaml
17:17:35 I think in tempest-all we run everything, including the scenario tests, in parallel, right
17:17:50 some jobs, e.g. tempest-ipv6-only, have the timeout set to 3 hours; tempest-all timed out slightly over 2
17:18:13 yeah https://opendev.org/openstack/tempest/src/branch/master/tox.ini#L78
17:18:14 easy fix then
17:19:52 the slow test job is also 3 hr, right?
17:20:04 and no stackviz report in that, did you see
17:20:15 #link https://zuul.openstack.org/build/d90f94dbf12946eeaaab91c489d37832/logs
17:21:13 i think so, checking ..
yeah, no report, the playbook got killed - 2024-01-31 06:00:33.447887 | RUN END RESULT_TIMED_OUT: [untrusted : opendev.org/openstack/tempest/playbooks/devstack-tempest.yaml@master]
17:21:24 after 2 hours exactly
17:21:26 kopecmartin: anyways, as it runs the slow tests too, I think making the timeout 3 hrs makes sense
17:21:33 yeah, maybe that is why
17:22:01 and because it was periodic we might have missed doing that when we increased it for the slow test job
17:22:22 Martin Kopec proposed openstack/tempest master: Increase timeout for tempest-all job https://review.opendev.org/c/openstack/tempest/+/907342
17:22:30 perfect, thanks
17:23:12 moving on
17:23:14 #topic Distros check
17:23:14 cs-9
17:23:14 #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
17:23:16 debian
17:23:18 #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0
17:23:20 rocky
17:23:22 #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
17:23:24 openEuler
17:23:26 #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
17:23:28 jammy
17:23:30 #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
17:23:55 these look good \o/
17:24:38 #topic Open Discussion
17:24:38 anything for the open discussion?
17:26:21 the doc patch should be ready now
17:26:23 #link https://review.opendev.org/c/openstack/tempest/+/905790
17:26:56 +W, thanks
17:27:27 thanks
17:27:31 i'm a bit lost with this one
17:27:32 #link https://review.opendev.org/c/openstack/tempest/+/891123
17:27:42 I understand that nova has an API that can list the nodes for me
17:27:51 do we want to set that new config (src/dest host) in devstack all the time, for all the jobs?
17:28:08 i mean, i can't make any job set that config from tempest's side, it has to be done in devstack; the question is whether for all (multinode) jobs or not
17:28:26 if not, we could put the setting of that option under an if and trigger it only from the jobs we want, that could work
17:28:27 let me check the comment there
17:29:10 i'd like to see that test be executed in the CI too, but setting the configs is a bit complex in the CI
17:30:23 kopecmartin: i remember now, so in devstack we can set it easily, right?
17:30:59 i think so, shouldn't be too difficult
17:31:12 it's the only place where we know the names of the hosts
17:31:20 so it's either there or nowhere
17:31:32 for example, we do it for the flavor #link https://github.com/openstack/devstack/blob/5c1736b78256f5da86a91c4489f43f8ba1bce224/lib/tempest#L298
17:31:40 yeah
17:32:13 we can call the nova GET service API, filter for enabled compute services and get the hostname
17:33:05 something like this #link https://github.com/openstack/tempest/blob/master/tempest/api/compute/base.py#L688
17:34:06 kopecmartin: and we can set it only for the multinode job based on a request var so that we do not call the nova service APIs every time in devstack
17:34:36 yeah, that works
17:34:45 TEMPEST_SET_SRC_DEST_HOST=False by default in devstack and set it to true from the multinode job
17:34:51 something like this
17:34:59 great
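For reference, a minimal sketch of the lookup discussed above, assuming openstacksdk and a hypothetical "devstack-admin" clouds.yaml entry: query nova's service list for enabled compute services and pick two hostnames that a job could feed into tempest as the src/dest hosts. This is an illustration, not the actual devstack change.

```python
# Hedged sketch: list enabled, up nova-compute services via openstacksdk and
# pick two hostnames. The cloud name and the "first two entries" selection
# are illustrative assumptions, not the devstack patch itself.
import openstack

conn = openstack.connect(cloud='devstack-admin')  # hypothetical clouds.yaml entry

hosts = [
    svc.host
    for svc in conn.compute.services()            # GET /os-services
    if svc.binary == 'nova-compute'
    and svc.status == 'enabled'
    and svc.state == 'up'
]

if len(hosts) >= 2:
    src_host, dest_host = hosts[:2]
    # values a job could write into tempest.conf as the src/dest host options
    print(f'src={src_host} dest={dest_host}')
```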
17:35:29 cool, what about this one
17:35:30 #link https://review.opendev.org/c/openstack/tempest/+/898732
17:35:30 because we should have those tests run somewhere in our CI, otherwise it is easy to introduce errors in them without knowing
17:35:43 agree, yes
17:36:47 to me it seems like a good case for reusing the min_compute_nodes option
17:37:35 in general, i prefer solutions that aren't hardcoded .. although at this point i would agree with just len(nodes)/2 just to get it over with :D
17:37:46 kopecmartin: but we are not using min_compute_nodes to boot servers, that is only used to decide if we want to run the test or not
17:38:12 the problem is here, where it can boot 100 servers if we have a 100-compute-node env https://review.opendev.org/c/openstack/tempest/+/898732/15/tempest/scenario/test_instances_with_cinder_volumes.py#90
17:39:11 the loop is under control using that variable on line 108
17:39:48 even if there are 100 hosts, if min_compute_nodes is 1 then it will create just one server
17:40:35 so if min_compute_nodes is 2, which is the case for multinode jobs, the number of hosts selected will be 2?
17:41:06 yes, per hosts[:min_compute_nodes]
17:41:08 or we can create a new opt, like max_compute_nodes
17:41:32 and let it iterate only until we reach the limit, not over all the hosts found
17:43:04 so basically users need to increase min_compute_nodes to the number of nodes they want this test to run on. for example, if they have 100 nodes and want to test 20, then they can set min_compute_nodes to 20 and this test will create servers/volumes on those 20 nodes
17:43:06 right?
17:44:33 yes
17:45:04 ok, i think that should work. let me review it after the meeting, I think i missed that point
17:45:07 thanks for the clarification
17:45:38 i'm judging based on the for cycle there, e.g.:
17:45:39 #link https://paste.openstack.org/show/bfAyQIkKDdr8lWwm4pGJ/
17:46:49 although now i'm not sure where the logic is that creates the servers on all, a.k.a. different, nodes .. i don't see how openstack knows on which node to create the resource
17:48:02 I think which node is not needed, as it can boot servers on any of the available nodes, which is ok
17:49:14 yeah, i guess ... i need to revisit the patch, i forgot some details ..
it needs a recheck anyway
17:49:24 anyways I will review it today and add comments if any update is needed
17:49:31 good, at least we agree on the approach; i'll check the one thing and comment there
17:49:34 perfect, thank you
17:49:37 ++
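Since the bounding logic was the crux of the exchange above, here is a minimal sketch of it. The helper names are hypothetical stand-ins for the tempest internals; only the hosts[:min_compute_nodes] slicing mirrors the patch under review.

```python
# Hedged sketch of the hosts[:min_compute_nodes] capping discussed above.
# list_compute_hosts() and create_server() are hypothetical stand-ins for
# the tempest helpers; the point is the slice that bounds the loop.
def boot_one_server_per_selected_host(list_compute_hosts, create_server,
                                      min_compute_nodes):
    hosts = list_compute_hosts()            # all enabled compute hosts
    selected = hosts[:min_compute_nodes]    # cap: even on a 100-node cloud,
                                            # only min_compute_nodes servers
    servers = []
    for _ in selected:
        # the scheduler decides which node each server lands on; the test
        # only controls how many servers it creates
        servers.append(create_server())
    return servers
```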
17:50:01 then we have there this beauty :D (the last one, I promise)
17:50:02 #link https://review.opendev.org/c/openstack/tempest/+/879923
17:50:16 i think that the name of the --no-prefix opt is the problem
17:50:34 it can conflict with --prefix (not technically, but based on the name)
17:50:48 I think this needs more discussion on the design; is it urgent, or can it be delayed till the PTG?
17:51:16 well, that's 2 months .. i hope i'll be doing different things then
17:51:22 if yes, then how about we have a call to finalize it, as doing it the gerrit way is becoming a little difficult
17:51:42 sure, that would work
17:51:42 sure, maybe we can have the call after mid Feb (I need to finish a few things before that)?
17:52:00 ok, sounds good
17:52:16 in the meantime i'll try to polish the code a bit more
17:52:25 cool, I will send an invite for that. and does the same time as our meeting work fine, Wed 17 UTC?
17:52:38 yes
17:52:42 Feb 21, 17 UTC
17:53:20 ack
17:53:30 kopecmartin: i would say wait for our discussion before updating anything, so that we can save your time in case we agree on a different approach
17:53:34 Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver https://review.opendev.org/c/openstack/devstack/+/907110
17:54:14 okey then
17:54:35 cool
17:54:35 #topic Bug Triage
17:54:35 #link https://etherpad.openstack.org/p/qa-bug-triage-caracal
17:54:57 there's been a small increase in the bugs in tempest .. but i haven't checked the details yet
17:55:29 kopecmartin: just one thing on the call, should I send the invite to your redhat email or gmail?
17:55:30 anyway, that's all from me
17:55:41 mkopec@redhat.com
17:56:03 sent
17:56:06 thank you
17:56:47 although the time is a bit weird, 2am on Thursday for me
17:56:50 updated the time in UTC, i selected the wrong one initially :)
17:56:54 .. yeah :D
17:56:57 :)
17:57:09 yep, much better now
17:57:30 i'm not that young anymore, i'm usually asleep at 2am :D
17:58:08 anyway, that's a wrap
17:58:15 thank you for all the feedback
17:58:16 hehe, I can still do it when I need to finish things in the quiet time while my son sleeps :P
17:58:40 yeah, the quietness is great at night
17:58:58 otherwise he wants to code with me and does not let me work :)
17:59:18 nothing else from me, thanks kopecmartin.
17:59:35 awesome, he should really start with programming as soon as possible
17:59:47 :)
17:59:54 perfect mind exercise
17:59:56 ..
17:59:57 #endmeeting