15:00:15 <kopecmartin> #startmeeting qa
15:00:15 <opendevmeet> Meeting started Tue May 14 15:00:15 2024 UTC and is due to finish in 60 minutes.  The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:15 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:15 <opendevmeet> The meeting name has been set to 'qa'
15:00:21 <kopecmartin> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:01:36 <kopecmartin> after a while, i'm back, let's go through the agenda
15:01:47 <lpiwowar> o/
15:01:58 <kopecmartin> o/
15:02:26 <kopecmartin> #topic Dalmatian Priority Items progress
15:02:31 <kopecmartin> #link https://etherpad.opendev.org/p/qa-dalmatian-priority
15:02:38 <kopecmartin> any news on this front?
15:02:40 * kopecmartin checking
15:03:55 <lpiwowar> Nothing from my side
15:04:40 <frickler> don't think so
15:05:39 <kopecmartin> yeah
15:05:40 <kopecmartin> moving on
15:05:41 <kopecmartin> #topic Gate Status Checks
15:05:49 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:06:18 <kopecmartin> 2 patches there, one with -1 vote and one -1 zuul, hmm
15:07:35 <frickler> one is for unmaintained, we shouldn't set RP there imo
15:08:27 <frickler> the other is the one that made me notice the reno stuff
15:08:49 <frickler> RP+1 is likely good enough for that
15:10:49 <kopecmartin> right, i commented, let's leave it with the priority for a week, if it's not updated till then, we'll drop the priority label
15:11:09 <kopecmartin> about the unmaintained one, it's blocked by the grenade job, the patch looks quite simple though
15:11:23 <kopecmartin> oh, not only grenade, there is one more job failing
15:11:33 <kopecmartin> anything we should do here?
15:12:08 <kopecmartin> should we at least drop (or make non voting) the failing jobs? i don't think they'll get fixed
15:13:14 <kopecmartin> am talking about this patch
15:13:15 <kopecmartin> #link https://review.opendev.org/c/openstack/devstack/+/916727
15:13:20 <kopecmartin> gmann: wdyt? ^
15:14:12 <kopecmartin> ok, let's leave the RP there for couple more days too, at least until i figure out what should/could we do with these unmaintained branches
15:14:31 <kopecmartin> #topic Sub Teams highlights
15:14:31 <kopecmartin> Changes with Review-Priority == +1
15:14:36 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:14:51 <frickler> IMO ignore unmaintained branches and leave things to the unmaintainers
15:15:31 <kopecmartin> that would make sense, do they have rights to deal with these patches?
15:16:12 <frickler> yes. and normal cores don't
15:16:50 <kopecmartin> ahaa
15:17:17 <kopecmartin> that's right, i missed this fact .. ok, there is nothing i can do then
15:17:33 <frickler> for the RP+1 reviews, there was an update just earlier, will need to have another look
15:18:52 <kopecmartin> yep, great, let's wait for the jobs, i'll review those after the meeting
15:19:20 <kopecmartin> moving on
15:19:53 <kopecmartin> #topic Periodic jobs Status Checks
15:19:53 <kopecmartin> Periodic stable full: https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2&job_name=tempest-full-2024-1
15:19:53 <kopecmartin> Periodic stable slow: https://zuul.openstack.org/builds?job_name=tempest-slow-2024-1&job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1
15:19:55 <kopecmartin> Periodic extra tests: https://zuul.openstack.org/builds?job_name=tempest-full-2024-1-extra-tests&job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests
15:19:57 <kopecmartin> Periodic master: https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:22:10 <kopecmartin> tempest-slow-* fail from time to time but i don't see any pattern
15:22:23 <kopecmartin> they look unstable
15:22:41 <kopecmartin> the rest is as usual
15:23:17 <kopecmartin> let's wait and see if it gets worse
15:23:28 <kopecmartin> #topic Distros check
15:23:28 <kopecmartin> Centos 9: https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:23:29 <kopecmartin> Debian: https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0
15:23:31 <kopecmartin> Rocky: https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:23:33 <kopecmartin> openEuler: https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
15:23:35 <kopecmartin> jammy: https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
15:25:33 <kopecmartin> they seem good \o/
15:26:47 <kopecmartin> #topic Open Discussion
15:26:47 <kopecmartin> anything for the open discussion?
15:27:26 <lpiwowar> Implement purge list patch. Can somebody please take a look at this?:) -> https://review.opendev.org/c/openstack/tempest/+/897847
15:27:32 <lpiwowar> I think it is really close now.
15:27:40 <kopecmartin> gmann: we can see the resource_list.json output in this one
15:27:46 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/918725/
15:28:02 <kopecmartin> so we have practically verified the above patch mentioned by lpiwowar
15:28:07 <kopecmartin> and it's good to go
15:28:30 <kopecmartin> then we can move on with the release to announce end of support for zed
15:28:31 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/918135
15:28:49 <kopecmartin> i promised we'll get the release merged by the end of the week O:)
15:29:22 <kopecmartin> anything else?
15:29:28 <lpiwowar> kopecmartin I've checked two random builds for the slow job. And it is failing often on test_volume_migrate_attached ... Two times on mounting / remounting a volume. My guess is that maybe a wait for the mount to succeed is missing.
15:30:18 <lpiwowar> But this is just a guess. It reminds me of the failures Jakub was facing when writing the new tests.
15:30:28 <kopecmartin> scenario tests?
15:30:37 <kopecmartin> if so, there is a waiter
15:30:39 <kopecmartin> #link https://opendev.org/openstack/tempest/src/branch/master/tempest/scenario/manager.py#L1007
15:30:41 <lpiwowar> Yes
15:31:00 <lpiwowar> I think it is in the create_timestamp phase.
15:31:13 <lpiwowar> I do not know by heart whether there is a waiter or not.
15:31:29 <kopecmartin> if the tests call nova_volume_detach then there is
15:32:17 <lpiwowar> From the traceback it does not look like nova_volume_detach was called -> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b6/periodic-stable/opendev.org/openstack/tempest/master/tempest-slow-2023-2/0b632c0/testr_results.html
15:33:39 <kopecmartin> the test eventually calls nova_volume_attach
15:33:40 <kopecmartin> #link https://opendev.org/openstack/tempest/src/branch/master/tempest/scenario/test_volume_migrate_attached.py#L216
15:33:44 <kopecmartin> and that calls the detach
15:33:49 <kopecmartin> #link https://opendev.org/openstack/tempest/src/branch/master/tempest/scenario/manager.py#L992
15:34:22 <lpiwowar> But that's not where it failed, or is it?
15:34:22 <kopecmartin> the traceback you shared failed on
15:34:23 <kopecmartin> Device or resource busy
15:34:51 <lpiwowar> When executing:  sudo mount /dev/vdb /mnt
15:35:19 <lpiwowar> So I guess there was something mounted to /mnt that was not properly unmounted?
15:35:59 <kopecmartin> hmm, maybe
15:36:10 <kopecmartin> it's a race, it happens only from time to time :/
15:36:42 <lpiwowar> Yes, it just reminded me of jskunda's patch so I wanted to mention it.
15:37:46 <lpiwowar> Because I remember that we were talking about some wait in the create_timestamp function. But I'm not sure.
15:40:14 <kopecmartin> there is concurrency 4 in that job, maybe another test tries to create a timestamp too, at the same time, no clue, feel free to check it out
15:40:20 <kopecmartin> i'll try to give it a few mins
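A minimal sketch of the "wait for the mount to succeed" idea discussed above; mount_with_retry and the exec_command callable are hypothetical illustrations, not existing tempest code, and only show what a retry around the failing `sudo mount /dev/vdb /mnt` step could look like:

```python
# Hypothetical helper sketch (not tempest code): retry mounting while the
# device keeps reporting "Device or resource busy", up to a timeout.
import time


def mount_with_retry(exec_command, dev='/dev/vdb', mountpoint='/mnt',
                     timeout=60, interval=5):
    """Retry the mount until it succeeds or the timeout is reached.

    exec_command is assumed to run a shell command on the guest over SSH
    and raise an exception (with the command output) on a non-zero exit.
    """
    start = time.time()
    while True:
        try:
            exec_command('sudo mount %s %s' % (dev, mountpoint))
            return
        except Exception as exc:  # real code would catch the SSH exec failure
            if 'resource busy' not in str(exc).lower():
                raise
            if time.time() - start > timeout:
                raise
            time.sleep(interval)
```

If the busy device is really left over from an earlier mount that was never cleaned up, a retry like this would only mask the race, so checking the unmount/wait path around create_timestamp first, as suggested above, seems like the right starting point.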
15:41:19 <kopecmartin> if there isn't anything else, let's end the call with the bug triage statistics
15:41:22 <kopecmartin> #topic Bug Triage
15:41:22 <kopecmartin> #link https://etherpad.openstack.org/p/qa-bug-triage-dalmatian
15:41:52 <kopecmartin> that's all from my side
15:42:24 <kopecmartin> thank you and see you all \o
15:42:28 <kopecmartin> #endmeeting