17:00:33 <kopecmartin> #startmeeting qa
17:00:33 <opendevmeet> Meeting started Wed Jan 31 17:00:33 2024 UTC and is due to finish in 60 minutes.  The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:33 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:33 <opendevmeet> The meeting name has been set to 'qa'
17:00:43 <kopecmartin> should be right now, or my calendar is mistaken
17:01:00 <gmann> kopecmartin: I think 1 hr before? let me check
17:01:41 <kopecmartin> 17:00 utc
17:01:53 <gmann> kopecmartin: oh you are right, it is now. i had it wrong in my calendar
17:01:55 <gmann> sorry
17:02:23 <kopecmartin> np, i mess up the time zones like 2 out of 10 times too
17:03:05 <kopecmartin> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
17:03:19 <kopecmartin> let's go quickly through the agenda
17:03:28 <kopecmartin> #topic Announcement and Action Item (Optional)
17:03:36 <kopecmartin> no updates from my side
17:03:54 <kopecmartin> i have planning of the next PTG on my list, haven't got to that yet
17:04:24 <gmann> ++
17:04:27 <kopecmartin> #topic Caracal Priority Items progress
17:04:27 <kopecmartin> #link https://etherpad.opendev.org/p/qa-caracal-priority
17:04:55 <kopecmartin> i don't think that there are any updates there
17:05:05 <kopecmartin> everyone is busy with other things
17:05:06 <gmann> there is one related update for RBAC
17:05:34 <gmann> the keystone update to allow project-scoped tokens is merged, which I was testing for any impact on Tempest tests
17:06:11 <gmann> with the old defaults everything passes, but with the new defaults there are some failures which I hope just need test adjustments. I will work on that after mid Feb or so
17:06:34 <gmann> #link https://review.opendev.org/q/topic:%22keystone-srbac-test%22
17:06:39 <gmann> these are ^^ testing changes
17:07:23 <gmann> once we modify the tempest tests and are green on the new defaults, we should enable keystone's new defaults by default in devstack and see how it works with all the other services' new defaults
17:07:48 <kopecmartin> ack, sounds like a plan
17:07:53 <gmann> that will complete our integrated testing of the new RBAC, at least for 4-5 services
17:08:06 <gmann> currently all new defaults jobs run with keystone old defaults
17:08:36 <gmann> that is all on this from my side
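A minimal sketch of the kind of test adjustment described above, assuming Tempest's [enforce_scope] config group exposes a per-service keystone flag (the test class and option usage here are illustrative, not the actual changes under the linked topic):

    from tempest import config
    from tempest.api.identity import base

    CONF = config.CONF


    class ProjectScopeTokenTest(base.BaseIdentityV3AdminTest):
        """Hypothetical test showing old- vs new-defaults handling."""

        @classmethod
        def skip_checks(cls):
            super(ProjectScopeTokenTest, cls).skip_checks()
            if not CONF.enforce_scope.keystone:
                # under the old policy defaults the behaviour being
                # verified does not apply, so skip instead of failing
                raise cls.skipException(
                    'keystone new RBAC defaults are not enabled')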
17:09:19 <kopecmartin> some of the SRBAC issues can be seen in the LPs under the "Make pre-provisioned credentials stable again" topic, right?
17:09:29 <kopecmartin> e.g.
17:09:30 <kopecmartin> #link https://bugs.launchpad.net/tempest/+bug/2020860
17:09:34 <gmann> yeah
17:09:45 <gmann> pre-provisioned credentials are also one thing to do as a next step
17:10:01 <gmann> i did not have time to look into those yet
17:10:27 <kopecmartin> okey, yeah me neither tbh
17:11:00 <kopecmartin> the cycle is gonna end soon and we haven't figured out a plan for this "stack.sh fails on Rocky 9 and Centos 9 because of global virtualenv"
17:11:15 <kopecmartin> at some point we wanted to go with this
17:11:16 <kopecmartin> #link https://review.opendev.org/c/openstack/keystone/+/899519
17:11:25 <kopecmartin> but it doesn't look like that's the plan anymore
17:13:26 <kopecmartin> anyway, i see it like this
17:13:47 <kopecmartin> no one is badly affected by this, otherwise someone would have picked it up; clearly it's not a priority .. i guess we're fine then
17:15:19 <kopecmartin> #topic Gate Status Checks
17:15:19 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
17:15:25 <kopecmartin> no reviews
17:15:33 <kopecmartin> #topic Sub Teams highlights
17:15:33 <kopecmartin> Changes with Review-Priority == +1
17:15:33 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
17:15:38 <kopecmartin> no reviews
17:15:53 <kopecmartin> #topic Periodic jobs Status Checks
17:15:53 <kopecmartin> periodic stable full
17:15:53 <kopecmartin> #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2
17:15:55 <kopecmartin> periodic stable slow
17:15:57 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
17:15:59 <kopecmartin> periodic extra tests
17:16:01 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
17:16:03 <kopecmartin> periodic master
17:16:05 <kopecmartin> #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
17:16:07 <kopecmartin> all look good, except tempest-all job
17:16:22 <kopecmartin> it's been timing out for a while now .. i wanted to take a look at that last week but .. ah
17:16:29 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-all&project=openstack/tempest
17:16:46 <kopecmartin> we're already at the maximum timeout limit, right?
17:17:23 <kopecmartin> oh, maybe not
17:17:30 <kopecmartin> #link https://opendev.org/openstack/tempest/src/branch/master/zuul.d/integrated-gate.yaml
17:17:35 <gmann> I think in tempest-all we run everything, including scenario tests, in parallel, right
17:17:50 <kopecmartin> some jobs, e.g. tempest-ipv6-only, have the timeout set to 3 hours; tempest-all timed out slightly over 2
17:18:13 <gmann> yeah https://opendev.org/openstack/tempest/src/branch/master/tox.ini#L78
17:18:14 <kopecmartin> easy fix then
17:19:52 <gmann> the slow test job is also 3 hrs, right?
17:20:04 <gmann> and there's no stackviz report in that one, did you see
17:20:15 <gmann> #link https://zuul.openstack.org/build/d90f94dbf12946eeaaab91c489d37832/logs
17:21:13 <kopecmartin> i think so, checking .. yeah, no report, the playbook got killed - 2024-01-31 06:00:33.447887 | RUN END RESULT_TIMED_OUT: [untrusted : opendev.org/openstack/tempest/playbooks/devstack-tempest.yaml@master]
17:21:24 <kopecmartin> after 2 hours exactly
17:21:26 <gmann> kopecmartin: anyways, as it runs slow tests too, I think making the timeout 3 hrs makes sense
17:21:33 <gmann> yeah, maybe that is why
17:22:01 <gmann> and bcz it was periodic we might have missed doing that when we increased it for the slow test job
17:22:22 <opendevreview> Martin Kopec proposed openstack/tempest master: Increase timeout for tempest-all job  https://review.opendev.org/c/openstack/tempest/+/907342
17:22:30 <gmann> perfect, thanks
17:23:12 <kopecmartin> moving on
17:23:14 <kopecmartin> #topic Distros check
17:23:14 <kopecmartin> cs-9
17:23:14 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
17:23:16 <kopecmartin> debian
17:23:18 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0
17:23:20 <kopecmartin> rocky
17:23:22 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
17:23:24 <kopecmartin> openEuler
17:23:26 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
17:23:28 <kopecmartin> jammy
17:23:30 <kopecmartin> #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
17:23:55 <kopecmartin> these look good \o/
17:24:38 <kopecmartin> #topic Open Discussion
17:24:38 <kopecmartin> anything for the open discussion?
17:26:21 <kopecmartin> the doc patch should be ready now
17:26:23 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/905790
17:26:56 <gmann> +W, thanks
17:27:27 <kopecmartin> thanks
17:27:31 <kopecmartin> i'm a bit lost with this one
17:27:32 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/891123
17:27:42 <kopecmartin> I understand that nova has an API that can list the nodes for me
17:27:51 <kopecmartin> do we want to set that new config (src/dest host) in devstack all the time, for all the jobs?
17:28:08 <kopecmartin> i mean, i can't make any job set that config from tempest's side, it has to be done in devstack; the question is whether to do it for all (multinode) jobs or not
17:28:26 <kopecmartin> if not, we could put the setting of that option under an if and trigger it only from the jobs we want, that could work
17:28:27 <gmann> let me check the comment there
17:29:10 <kopecmartin> i'd like to see that test executed in the CI too, but setting the configs is a bit complex in the CI
17:30:23 <gmann> kopecmartin: i remember now, so in devstack we can set it easily right?
17:30:59 <kopecmartin> i think so, shouldn't be too difficult
17:31:12 <kopecmartin> it's the only place where we know the names of the hosts
17:31:20 <kopecmartin> so it's either there or nowhere
17:31:32 <gmann> for example, we do it for the flavor #link https://github.com/openstack/devstack/blob/5c1736b78256f5da86a91c4489f43f8ba1bce224/lib/tempest#L298
17:31:40 <gmann> yeah
17:32:13 <gmann> we can call the nova GET services API, filter for enabled compute services, and get the hostnames
17:33:05 <gmann> something like this #link https://github.com/openstack/tempest/blob/master/tempest/api/compute/base.py#L688
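A minimal sketch of that selection, assuming Tempest's compute services client (list_services with a binary filter is the real client call; the helper function itself is hypothetical):

    def enabled_compute_hosts(services_client):
        """Return hostnames of nova-compute services that are up and enabled."""
        svcs = services_client.list_services(
            binary='nova-compute')['services']
        return [svc['host'] for svc in svcs
                if svc['state'] == 'up' and svc['status'] == 'enabled']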
17:34:06 <gmann> kopecmartin: and we can set it only for the multinode jobs, based on a requested var, so that we do not call the nova services API every time in devstack
17:34:36 <kopecmartin> yeah, that works
17:34:45 <gmann> TEMPEST_SET_SRC_DEST_HOST=False by default in devstack and set it to true from the multinode job
17:34:51 <gmann> something like this
17:34:59 <kopecmartin> great
17:35:29 <kopecmartin> cool, what about this one
17:35:30 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/898732
17:35:30 <gmann> bcz we should have those tests run somewhere in our CI, otherwise it is easy to have errors in them without knowing
17:35:43 <kopecmartin> agree, yes
17:36:47 <kopecmartin> to me it seems like a good case for reusing the min_compute_nodes option
17:37:35 <kopecmartin> in general, i prefer solutions that aren't hardcoded .. although at this point i would agree with just len(nodes)/2 just to get it over with :D
17:37:46 <gmann> kopecmartin: but we are not using min_compute_nodes to boot servers, it is only used to decide if we want to run the test or not
17:38:12 <gmann> the problem is here, where it can boot 100 servers if we have an env of 100 compute nodes https://review.opendev.org/c/openstack/tempest/+/898732/15/tempest/scenario/test_instances_with_cinder_volumes.py#90
17:39:11 <kopecmartin> the loop is kept under control by that variable, on line 108
17:39:48 <kopecmartin> even if there are 100 hosts, if min_compute_nodes is 1 then it will create just one server
17:40:35 <gmann> so if min_compute_nodes is 2, which is the case for multinode jobs, the number of hosts selected will be 2?
17:41:06 <kopecmartin> yes, per hosts[:min_compute_nodes]
17:41:08 <kopecmartin> or we can create a new opt, like max_compute_nodes
17:41:32 <kopecmartin> and let it iterate only until we reach the limit, not over all the hosts found
17:43:04 <gmann> so basically a user needs to increase min_compute_nodes to the number of nodes they want to run this test on. for example, if they have 100 nodes and want to test 20, they can set min_compute_nodes to 20 and this test will create servers/volumes on those 20 nodes
17:43:06 <gmann> right?
17:44:33 <kopecmartin> yes
17:45:04 <gmann> ok, i think that should work. let me review it after the meeting. I think i missed that point
17:45:07 <gmann> thanks for clarification
17:45:38 <kopecmartin> i'm judging based on the for loop there, e.g.:
17:45:39 <kopecmartin> #link https://paste.openstack.org/show/bfAyQIkKDdr8lWwm4pGJ/
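A sketch of the loop pattern in that paste, assuming the real CONF.compute.min_compute_nodes option and a create_server helper in the style of Tempest's scenario manager (the function and names are illustrative):

    from tempest import config

    CONF = config.CONF


    def boot_one_server_per_host(test, hosts):
        """Boot at most min_compute_nodes servers, one per selected host."""
        servers = []
        # cap the hosts considered, so a 100-node cloud with
        # min_compute_nodes=2 still boots only 2 servers
        for _host in hosts[:CONF.compute.min_compute_nodes]:
            servers.append(test.create_server(wait_until='ACTIVE'))
        return servers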
17:46:49 <kopecmartin> although now i'm not sure where the logic is that creates the servers on all (i.e. different) nodes .. i don't see how openstack knows on which node to create the resource
17:48:02 <gmann> I think specifying which node is not needed, as it can boot servers on any of the available nodes, which is ok
17:49:14 <kopecmartin> yeah, i guess ... i need to revisit the patch, i forgot some details .. it needs a recheck anyway
17:49:24 <gmann> anyways I will review it today and add a comment if any update is needed
17:49:31 <kopecmartin> good, at least we agree on the approach, i'll check the one thing and comment there
17:49:34 <kopecmartin> perfect, thank you
17:49:37 <gmann> ++
17:50:01 <kopecmartin> then we have this beauty :D (the last one, I promise)
17:50:02 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/879923
17:50:16 <kopecmartin> i think that the name of the --no-prefix opt is the problem
17:50:34 <kopecmartin> it can conflict with --prefix (not technically, but based on the name)
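For illustration, a hypothetical argparse sketch of that naming hazard (tempest cleanup's real CLI wiring differs; only the two option names come from the discussion):

    import argparse

    parser = argparse.ArgumentParser(prog='tempest cleanup')
    # filter resources whose names start with the given prefix
    parser.add_argument('--prefix')
    # a separate, unrelated flag, but its name reads as the negation
    # of --prefix, which is the conflict being discussed
    parser.add_argument('--no-prefix', action='store_true')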
17:50:48 <gmann> I think this needs more discussion on the design; is it urgent, or can we delay till the PTG?
17:51:16 <kopecmartin> well, it's 2 months .. i hope i'll be doing different things then
17:51:22 <gmann> if yes, then how about we have a call on it to finalize it, as doing it the gerrit way is becoming a little difficult
17:51:42 <kopecmartin> sure, that would work
17:51:42 <gmann> sure, maybe we can have the call after mid Feb (I need to finish a few things before that)?
17:52:00 <kopecmartin> ok , sounds good
17:52:16 <kopecmartin> in the meantime i'll try to polish the code a bit more
17:52:25 <gmann> cool, I will send an invite for that. and does the same time as our meeting work fine? Wed 17 UTC?
17:52:38 <kopecmartin> yes
17:52:42 <gmann> Feb 21, 17 UTC
17:53:20 <kopecmartin> ack
17:53:30 <gmann> kopecmartin: i would say wait for our discussion before updating anything, so that we can save your time in case we agree on a different approach
17:53:34 <opendevreview> Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver  https://review.opendev.org/c/openstack/devstack/+/907110
17:54:14 <kopecmartin> okey then
17:54:35 <gmann> cool
17:54:35 <kopecmartin> #topic Bug Triage
17:54:35 <kopecmartin> #link https://etherpad.openstack.org/p/qa-bug-triage-caracal
17:54:57 <kopecmartin> there's been a small increase in the bugs in tempest .. but i haven't checked the details yet
17:55:29 <gmann> kopecmartin: just one thing on the call, should I send the invite to your redhat email or gmail?
17:55:30 <kopecmartin> anyway, that's all from me
17:55:41 <kopecmartin> mkopec@redhat.com
17:56:03 <gmann> sent
17:56:06 <kopecmartin> thank you
17:56:47 <kopecmartin> although the time is a bit weird, 2am on Thursday for me
17:56:50 <gmann> updated the time in UTC, i selected the wrong one initially :)
17:56:54 <kopecmartin> .. yeah :D
17:56:57 <gmann> :)
17:57:09 <kopecmartin> yep, much better now
17:57:30 <kopecmartin> i'm not that young anymore, i'm usually asleep at 2am :D
17:58:08 <kopecmartin> anyway, that's a wrap
17:58:15 <kopecmartin> thank you for all the feedback
17:58:16 <gmann> hehe, I can still do that when I need to finish things in the quiet time when my son sleeps :P
17:58:40 <kopecmartin> yeah, the quietness is great at night
17:59:18 <gmann> otherwise he wants to code with me and does not let me :)
17:59:18 <gmann> nothing else from me, thanks kopecmartin.
17:59:35 <kopecmartin> awesome, he should really start with programming as soon as possible
17:59:47 <gmann> :)
17:59:54 <kopecmartin> perfect mind exercise
17:59:56 <kopecmartin> ..
17:59:57 <kopecmartin> #endmeeting