15:00:17 <kopecmartin> #startmeeting qa
15:00:17 <opendevmeet> Meeting started Tue Oct  3 15:00:17 2023 UTC and is due to finish in 60 minutes.  The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:17 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:18 <opendevmeet> The meeting name has been set to 'qa'
15:00:26 <kopecmartin> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:28 <kopecmartin> agenda ^^^^
15:01:38 <lpiwowar> o/
15:01:56 <kopecmartin> #topic Announcement and Action Item (Optional)
15:02:27 <kopecmartin> we're in the release time
15:02:53 <kopecmartin> patches proposed by gmann
15:02:55 <kopecmartin> #link https://review.opendev.org/q/topic:qa-2023-2-release+status:open
15:03:55 <kopecmartin> we released tempest 36.0.0 last week
15:04:37 <kopecmartin> gmann, anything I should do related to release process? thank you for proposing all the patches btw
15:05:23 <kopecmartin> moving on
15:05:24 <kopecmartin> #topic Bobcat Priority Items progress
15:05:29 <kopecmartin> #link https://etherpad.opendev.org/p/qa-bobcat-priority
15:06:30 <kopecmartin> not many updates .. we'll triage that during PTG
15:06:39 <kopecmartin> #topic OpenStack Events Updates and Planning
15:06:43 <kopecmartin> the next PTG will be held virtually, October 23-27, 2023
15:07:27 <kopecmartin> you can propose topics that we will discuss during PTG here
15:07:28 <kopecmartin> #link https://etherpad.opendev.org/p/oct2023-ptg-qa
15:07:42 <kopecmartin> don't forget to register ..
15:07:43 <kopecmartin> #link http://ptg2023.openinfra.dev/
15:08:29 <kopecmartin> you may also influence dates of QA PTG sessions, just fill this:
15:08:30 <kopecmartin> #link https://framadate.org/f26R3EcZ2BOo7r8Q
15:08:59 <kopecmartin> #topic Gate Status Checks
15:09:11 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:09:22 <kopecmartin> nothing there, anything urgent to review?
15:09:38 <lpiwowar> I know we discussed it already here. This change was reverted https://review.opendev.org/c/openstack/tempest/+/894269 because when ceph is used as a backup driver we cannot use the "container" parameter in the API call. I was wondering whether it would be ok to create a new config option that would indicate which backup_driver is used by cinder.
15:09:58 <lpiwowar> I just wanted to mention it here kopecmartin.
15:10:55 <kopecmartin> sure, why not?
15:11:33 <lpiwowar> Because last time we talked about it someone was against the new option. I do not remember who.
15:11:42 <lpiwowar> I can maybe check the logs.
15:11:50 <kopecmartin> me neither, i don't remember the discussion at all :D
15:11:52 <kopecmartin> yeah
15:11:57 <lpiwowar> I understand :D
15:12:09 <kopecmartin> check that and we can discuss that in Open Discussion
15:12:15 <kopecmartin> #topic Bare rechecks
15:12:17 <lpiwowar> kopecmartin: +1
15:12:21 <kopecmartin> #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:13:02 <kopecmartin> all good here .. although interesting number - the QA team has the highest number of rechecks over the last 90 days
15:13:15 <kopecmartin> #topic Periodic jobs Status Checks
15:13:15 <kopecmartin> periodic stable full
15:13:15 <kopecmartin> #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2
15:13:17 <kopecmartin> periodic stable slow
15:13:19 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
15:13:21 <kopecmartin> periodic extra tests
15:13:23 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
15:13:25 <kopecmartin> periodic master
15:13:27 <kopecmartin> #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:15:27 <kopecmartin> all seems as expected
15:16:02 <kopecmartin> #topic Distros check
15:16:02 <kopecmartin> cs-9
15:16:04 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:16:06 <kopecmartin> debian
15:16:08 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0
15:16:10 <kopecmartin> rocky
15:16:12 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:16:14 <kopecmartin> openEuler
15:16:16 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
15:16:18 <kopecmartin> jammy
15:16:20 <kopecmartin> #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
15:19:40 * kopecmartin still checking the results
15:21:20 <kopecmartin> i see a few failures that happened last week but i vaguely remember there were known failures due to all the releases that are happening right now ..
15:21:29 <kopecmartin> seems like now it's all on track
15:21:37 <kopecmartin> #topic Sub Teams highlights
15:21:41 <kopecmartin> Changes with Review-Priority == +1
15:21:45 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:21:53 <kopecmartin> no patches
15:21:56 <kopecmartin> #topic Open Discussion
15:22:00 <kopecmartin> anything for the open discussion?
15:23:02 <lpiwowar> Nothing from my side:)
15:24:00 <kopecmartin> did you find who was objecting to the approach in your patch?
15:25:17 <kopecmartin> #link https://meetings.opendev.org/meetings/qa/2023/
15:25:18 <lpiwowar> Not yet ...
15:25:27 <kopecmartin> no idea when we could discuss that :/
15:25:28 <lpiwowar> I'm not able to find the correct meeting. It was a long time ago
15:25:36 <lpiwowar> yeah :/
15:28:07 <kopecmartin> here
15:28:13 <kopecmartin> #link https://meetings.opendev.org/meetings/qa/2023/qa.2023-08-01-15.00.log.html
15:29:36 <lpiwowar> Thanks! I was searching for "config"
15:30:08 <lpiwowar> It looks like you were against it and dansmith.
15:30:37 <lpiwowar> But I remember that I agreed with you.
15:30:49 <dansmith> which patch was I against?
15:31:56 <lpiwowar> Against a new config option in tempest. The option would tell what backup driver is used by cinder. It would help us to do a proper cleanup for volume backup tests.
15:32:35 <lpiwowar> We are talking about this patch:
15:32:48 <kopecmartin> the original LP:
15:32:50 <kopecmartin> #link https://bugs.launchpad.net/cinder/+bug/2028671
15:32:50 <lpiwowar> #link https://review.opendev.org/c/openstack/tempest/+/890798
15:33:28 <kopecmartin> .. but we had to revert that because of a new LP:
15:33:30 <kopecmartin> #link https://bugs.launchpad.net/tempest/+bug/2034913
15:33:46 <kopecmartin> so we're practically at the beginning
15:33:59 <dansmith> okay I don't see me on any of that and don't recall any such conversation
15:34:03 * kopecmartin still loads the context of the issue
15:34:42 <kopecmartin> lpiwowar: please implement it as you think is right, we'll discuss that during review, as always :)
15:34:42 <lpiwowar> The conversation is here:
15:34:46 <lpiwowar> #link https://meetings.opendev.org/meetings/qa/2023/qa.2023-08-01-15.00.log.html
15:35:02 <lpiwowar> kopecmartin: ok:)
15:35:40 <kopecmartin> it's always easier to discuss a specific solution if it is executed in the CI - we have proof it works, etc.
15:36:44 <kopecmartin> oh, i'm starting to remember, lpiwowar you wanted to create a new opt just because of the cleanup, not a test
15:36:52 <kopecmartin> that's strange
15:37:10 <kopecmartin> and not a good approach
15:37:14 <lpiwowar> This is how it would look like: https://review.opendev.org/c/openstack/tempest/+/896011/8/tempest/api/volume/base.py
15:37:42 <lpiwowar> Yeah, I agree. It is strange. But currently I'm not sure how to do it without it.
15:38:34 <lpiwowar> The issue is line 197 (previous link). I want to add this option only when Swift is used as a backup driver.
15:39:05 <kopecmartin> does the patch only revert the previous patch or are there some modifications on top of that?
15:39:58 <kopecmartin> can't we add 2 addCleanups? .. one for when swift is used the other if it isn't .. one will always fail but we can ignore that failure
15:40:17 <lpiwowar> I'm a little bit lost in all the patches. I do not know which one you mean right now :D.
15:40:33 <kopecmartin> the one you shared
15:40:36 <lpiwowar> If I understand the issue correctly it will not work.
15:41:18 <lpiwowar> But the issue is not in the clean up but in the creation of the backup itself.
15:42:08 <lpiwowar> When Swift is used as a backup driver we want to be able to tell through the API that we want the backup to be stored in a specific container.
15:42:15 <lpiwowar> So that we can clean it up properly later.
15:42:56 <kopecmartin> oh, ok , i see it now
15:42:58 <lpiwowar> It works fine when Swift is enabled. But when Ceph is used as a backup driver we get an error because Ceph does not understand the concept of container.
15:43:08 <lpiwowar> ok:)
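(Editor's note: the conditional creation lpiwowar describes could be sketched roughly like this. This is a hypothetical illustration, not actual Tempest code — the function and option names are assumptions.)

```python
# Hypothetical sketch of the behaviour discussed above: only send the
# "container" parameter in the backup-create API call when the configured
# cinder backup driver understands it (Swift does, Ceph rejects it).

def build_backup_kwargs(backup_driver, container_name):
    """Return API kwargs for a volume backup request.

    backup_driver: value of the proposed (hypothetical) config option,
    e.g. 'swift' or 'ceph'. The 'container' parameter is included only
    for Swift, so that the backup lands in a known container that the
    test can clean up properly later.
    """
    kwargs = {}
    if backup_driver == 'swift':
        kwargs['container'] = container_name
    return kwargs
```

With the default set to ceph, existing jobs would keep the current behaviour (no container parameter), while Swift-backed jobs opt in to the named container.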
15:43:38 <kopecmartin> there is one danger in that, we will have 2 testing paths - when swift is enabled (or whatever) we create the container with different options
15:43:49 <kopecmartin> not saying it's an issue, it's just something that needs to be taken into account
15:44:26 <kopecmartin> ... in this case, it seems like another config opt makes sense
15:44:28 <kopecmartin> however
15:45:11 <kopecmartin> new config opt means new opt that needs to be set by the user as well as our jobs in the CI .. so, if we go that way, how many jobs will we need to edit?
15:46:12 <lpiwowar> Well I was thinking that we can set it by default to ceph. This should not influence any job because we will have a different behaviour only when Swift is used as a backup driver.
15:46:57 <lpiwowar> And for the jobs which use Swift as a backup driver I think we can update the devstack/lib/tempest file so that it updates tempest.conf with the correct option (?)
15:47:14 <kopecmartin> sounds good, that would work
15:47:41 <lpiwowar> Ok, awesome:)
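(Editor's note: the devstack wiring lpiwowar proposes might look roughly like the following config fragment. The variable names and the option name are assumptions; `iniset` is devstack's standard helper for editing ini files.)

```shell
# Hypothetical addition to devstack/lib/tempest: when Swift is the
# cinder backup driver, set the proposed tempest option so the backup
# tests pass the "container" parameter; otherwise leave the default
# (ceph-like behaviour, no container parameter).
if [[ "$CINDER_BACKUP_DRIVER" == "swift" ]]; then
    iniset $TEMPEST_CONFIG volume backup_driver swift
fi
```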
15:48:52 <kopecmartin> just avoid stating something like "adding new option to cleanup container properly" .. it's more like adding a new option so that we can create a resource properly and avoid cleanup issues
15:49:51 <lpiwowar> Ack, I understand
15:50:36 <kopecmartin> also address https://bugs.launchpad.net/tempest/+bug/2034913 in your patch https://review.opendev.org/c/openstack/tempest/+/896011 .. and maybe it would be better to change the title as it's not a pure revert
15:50:48 <kopecmartin> it's more like a second try to resolve the original LP
15:51:01 <kopecmartin> while taking the new LP into account
15:51:29 <kopecmartin> #topic Bug Triage
15:51:31 <lpiwowar> Ack
15:51:37 <kopecmartin> #link https://etherpad.openstack.org/p/qa-bug-triage-bobcat
15:51:52 <kopecmartin> numbers recorded, that's unfortunately all i had time for
15:51:59 <kopecmartin> that's all from my side
15:52:02 <kopecmartin> anything else?
15:52:20 <lpiwowar> Nothing from my side
15:52:51 <kopecmartin> cool, then we're done for today ..
15:52:53 <kopecmartin> thanks
15:52:56 <kopecmartin> #endmeeting