15:00:19 #startmeeting qa
15:00:19 Meeting started Tue Feb 8 15:00:19 2022 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:19 The meeting name has been set to 'qa'
15:00:25 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Weekly_QA_Team_meeting
15:00:28 agenda ^^
15:00:38 o/
15:02:33 hi gmann, let's start
15:02:40 #topic Announcement and Action Item (Optional)
15:02:54 we have PTL elections coming up
15:03:02 #link https://governance.openstack.org/election/
15:03:23 yeah, nomination is open now until 15th Feb
15:03:34 kopecmartin: do you want to run again?
15:03:50 I would support that
15:03:56 I hope so :)
15:04:05 i wanted to ask first whether there is anyone who would like to run for qa PTL
15:04:06 I too support for sure
15:04:20 not me
15:04:24 kopecmartin ++
15:04:54 if not, then yes, i'd like to run again
15:05:17 thanks kopecmartin
15:05:26 +1
15:05:32 thanks :)
15:06:00 #topic Yoga Priority Items progress
15:06:04 #link https://etherpad.opendev.org/p/qa-yoga-priority
15:06:15 any updates on the priority items?
15:06:57 on RBAC, I am trying to finish the new policy change in Nova first and then will go through the required devstack changes based on what new defaults Nova ends up with
15:07:02 regarding FIPS, we've merged some patches, i need to check what else is needed
15:07:17 the two nodesets for centos stream are also merged
15:07:43 there is a patch adding a multinode job but it is failing somewhere.
I commented on that
15:08:27 gmann: thanks
15:08:36 regarding rbac, let me know if i can do anything
15:08:56 i'd like to get to the point where we can see where tempest stands on a s-rbac env
15:09:36 yeah, policy defaults are again changing on the service side so tempest needs to hold off until they are ready
15:09:56 there's also this fix for neutron system scope, which needs reviews https://review.opendev.org/c/openstack/devstack/+/826851
15:10:26 keystone is already ready, nova will be in the Yoga release along with others like neutron
15:10:40 frickler: ack, will check
15:11:03 that one is on my list too
15:11:50 for keystone the changes in devstack are also still pending, I just rechecked two, the next needs rebasing
15:12:33 yeah, i saw those. waiting for the gate result
15:13:00 also we need to move those settings from keystone, if they work, before the test phase. need to check my old patch which was failing earlier
15:14:18 Cleanup of duplicated scenario.manager - one of our interns is working on this, a good task for initial onboarding
15:14:26 +1, nice
15:14:28 and i have no updates for the rest of the items
15:15:27 moving on
15:15:29 #topic OpenStack Events Updates and Planning
15:15:46 we'll start gathering ptg topics once the Z release is named
15:15:53 should be done anytime soon, right?
15:16:25 IIUC it depends on trademark checking
15:16:38 good, thanks
15:17:01 kopecmartin: yeah, 1 more week and we will be ready.
15:17:06 btw, team registration is open, it's in my TODO, i'll fill in the registration form and follow up with a doodle to find the best time slots for our sessions
15:17:17 gmann: perfect
15:17:20 +1
15:17:32 #topic Gate Status Checks
15:17:38 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking)
15:17:57 kopecmartin: I will also schedule TC+PTL sessions early in the PTG week, just a note for scheduling the QA sessions.
15:18:11 F35 I approved before the meeting
15:18:21 and notified neutron+nova
15:18:30 gmann: ack
15:18:32 frickler: thanks
15:19:12 frickler: do you know if centos-stream8 is stable now, so that we can make the jobs voting again?
15:19:40 I saw a couple of failures in the F35, so IMO it is not stable
15:19:55 they don't seem stable: https://zuul.openstack.org/builds?job_name=tempest-integrated-compute-centos-8-stream&job_name=tempest-full-py3-centos-8-stream&skip=0
15:19:57 in the F35 change
15:20:02 frickler: yeah
15:20:47 and centos-9-stream is hard broken with some attachment failures
15:21:27 very likely some libvirt/qemu version issue with nova
15:21:52 someone was looking at that, but I lost track whether there was any progress
15:22:07 gmann: maybe you can pick that up in the nova meeting?
15:22:07 k
15:22:11 sure
15:22:46 btw, it seems we broke the neutron-tempest-with-uwsgi job
15:22:47 #link https://bugs.launchpad.net/neutron/+bug/1960022
15:22:48 with
15:22:54 #link https://review.opendev.org/c/openstack/tempest/+/814085
15:23:16 i'm looking into it, i'm failing to install the env locally
15:24:14 will try again, after that I'll leave it up to zuul and move the experiments on how to fix that to the WIP reviews
15:24:25 kopecmartin: let me know if you want a held node for debugging
15:25:19 frickler: i have an internal openstack where i can spawn one ..
the problem i have is with the installation, it fails for various reasons
15:25:33 at least it gives me more insights into devstack
15:25:37 kopecmartin: so it is failing because we are waiting for it?
15:26:44 it looks like that, although it doesn't make sense to me right now; however, it would be too big a coincidence if that review were not related
15:26:57 the job started failing after the review got merged
15:27:05 humm
15:28:09 #topic Periodic jobs Status Checks
15:28:14 if it is 100% failure, testing against a revert should be pretty easy to confirm that?
15:28:55 frickler: yup, i can propose a dnm patch just to confirm that that's the real culprit
15:29:02 #link https://zuul.openstack.org/builds?job_name=tempest-full-xena-py3&job_name=tempest-full-wallaby-py3&job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-train-py3&pipeline=periodic-stable
15:29:08 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:29:19 for the periodic jobs, I noticed there isn't a xena one
15:29:31 but it seems to be failing only on victoria and ussuri?
15:29:44 and train also no longer runs, so it can be removed from that query
15:29:44 oh
15:29:51 frickler: ohk, did I miss that?
15:30:09 gmann: it seems the job is running only on victoria and ussuri
15:30:30 kopecmartin: ohk
15:31:02 stable jobs are up to date https://github.com/openstack/tempest/blob/master/zuul.d/project.yaml#L166
15:31:41 oh, full-xena, not -py3
15:31:49 then just the query above is wrong
15:32:03 yeah, my bad
15:32:08 #link https://zuul.openstack.org/builds?job_name=tempest-full-xena&job_name=tempest-full-wallaby-py3&job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&pipeline=periodic-stable
15:32:18 ^^ this one we can use
15:32:34 yeah I removed the py3 from xena onwards as everything is py3
15:33:05 i've fixed the link in the agenda
15:33:10 thanks
15:33:13 frickler: good catch!
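[editor's note] The Zuul build links pasted above are backed by a REST API, so the periodic-job check can also be done programmatically. A minimal Python sketch, assuming `zuul.openstack.org` serves `/api/builds` with the same query parameters as the `/builds` UI links (job_name may repeat, plus pipeline); the helper names are illustrative:

```python
# Sketch: summarize recent build results per job via the Zuul REST API.
# Assumption: /api/builds mirrors the /builds UI query parameters.
import json
from collections import Counter
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://zuul.openstack.org/api/builds"


def build_query(job_names, pipeline):
    """Build the API URL; job_name repeats, like in the UI links above."""
    params = [("job_name", j) for j in job_names] + [("pipeline", pipeline)]
    return API + "?" + urlencode(params)


def summarize(builds):
    """Count build results (SUCCESS, FAILURE, ...) per job name."""
    summary = {}
    for b in builds:
        summary.setdefault(b["job_name"], Counter())[b["result"]] += 1
    return summary


def fetch_summary(job_names, pipeline):
    """Network call: fetch the recent builds and summarize them."""
    with urlopen(build_query(job_names, pipeline)) as resp:
        return summarize(json.load(resp))
```

For example, `fetch_summary(["tempest-full-xena", "tempest-full-wallaby-py3"], "periodic-stable")` would have shown at a glance that the `-py3` query name was the problem: a misnamed job simply returns no builds rather than an error.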
15:34:05 anyway, periodic jobs seem fine
15:34:13 #topic Sub Teams highlights
15:34:19 Changes with Review-Priority == +1
15:34:24 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking)
15:34:36 i see 2 reviews, will try to check them soon
15:35:18 #topic Open Discussion
15:35:23 anything for the open discussion?
15:35:49 there was this idea proposed of doing reboot checks for devstack
15:36:24 which might be useful if we get it to work in a stable way
15:37:28 but let me know if you disagree, then we can stop investing work
15:37:34 interesting, what would it be about exactly? restarting the env after deployment and checking whether it is up and running?
15:37:59 yes, my idea would be "run devstack", "reboot", "run tempest"
15:38:40 the reboot part is happening in the fips jobs anyway, but before devstack
15:39:00 but this way rebooting during a CI run isn't a new thing anymore
15:39:01 what is it meant to catch/test?
15:39:04 or maybe: devstack, tempest, reboot, some subset of tempest? the subset can grow once it's stable
15:39:29 currently it fails because the swift and cinder loopback setup isn't reboot-safe
15:39:41 and as a separate job, not by default in all devstack jobs?
15:40:00 see https://review.opendev.org/c/openstack/devstack/+/828280
15:40:11 yes, I would want a dedicated job
15:41:29 jm1, who proposed this, may even still be around
15:42:22 anyway, we don't need a long discussion right now, just wanted to raise awareness for this idea
15:42:32 https://review.opendev.org/c/openstack/devstack/+/828289
15:42:39 that's interesting, i don't see a reason not to try that in a separate job, let's see where it gets us
15:43:03 ok, looks like a good idea. +1
15:43:44 jm1: thanks, added to my watchlist
15:43:51 anything else for the open discussion?
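[editor's note] The reboot check discussed above ("run devstack", "reboot", "run tempest") is said to fail because the swift/cinder loopback devices do not survive a reboot. A hedged sketch of what the post-reboot verification step could look like; the backing-file path is an assumed example, not the exact devstack default:

```python
# Sketch: post-reboot sanity check for the proposed devstack reboot job.
# Verify the loopback devices swift/cinder were built on are still attached.
# ASSUMED_BACKING_FILES is illustrative, not the real devstack path.
import subprocess

ASSUMED_BACKING_FILES = ["/opt/stack/data/swift.img"]


def parse_losetup(output):
    """Map backing file -> loop device from `losetup -l -n -O NAME,BACK-FILE`."""
    mapping = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            mapping[parts[1]] = parts[0]
    return mapping


def missing_loop_devices(output, expected=ASSUMED_BACKING_FILES):
    """Return the expected backing files that have no loop device attached."""
    attached = parse_losetup(output)
    return [f for f in expected if f not in attached]


def check_after_reboot():
    """Run losetup and fail loudly if any expected device is gone."""
    out = subprocess.run(
        ["losetup", "-l", "-n", "-O", "NAME,BACK-FILE"],
        capture_output=True, text=True, check=True).stdout
    missing = missing_loop_devices(out)
    if missing:
        raise RuntimeError(f"loop devices gone after reboot: {missing}")
```

A check like this, run before the post-reboot tempest subset, would turn the "loopback setup isn't reboot-safe" failure into an explicit error instead of a cascade of swift/cinder test failures.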
15:44:33 just one other heads-up
15:44:49 I started integrating cirros CI in opendev
15:45:04 since the old travis setup no longer exists
15:45:21 anyone wanting to help and learn about zuul is welcome to join
15:45:39 https://review.opendev.org/c/cirros/cirros/+/827916
15:46:30 my current pain point is how to collect artifacts and where to store them
15:46:43 and that's it from me
15:47:19 frickler: thanks for the heads-up
15:47:57 one last thing to do
15:47:58 #topic Bug Triage
15:48:03 #link https://etherpad.opendev.org/p/qa-bug-triage-yoga
15:48:07 bug numbers are recorded at ^
15:48:22 #link https://bugs.launchpad.net/tempest/+bug/1911044
15:48:27 gmann: ^
15:48:34 we can proceed with the unittest2 removal, right?
15:48:48 there is already a patch proposed
15:48:49 #link https://review.opendev.org/c/openstack/tempest/+/826191
15:50:44 kopecmartin: I was about to ask for the link. I think yes, as stable/train does not use tempest master, but let me check again just in case
15:50:47 I will review it today
15:51:00 it doesn't, we pinned tempest for train
15:51:11 #link https://opendev.org/openstack/devstack/src/commit/8a22f7380c7029e931fe9103320f24a223b619d1/stackrc#L319
15:51:16 gmann: ok, thanks
15:51:31 that's all from my side
15:51:35 thank you everyone
15:51:40 see you around
15:52:15 #endmeeting