15:00:16 #startmeeting qa
15:00:16 Meeting started Tue Nov 1 15:00:16 2022 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:16 The meeting name has been set to 'qa'
15:00:24 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:26 agenda ^
15:00:26 hello all
15:00:30 o/
15:01:34 it's been 2 weeks since we had this meeting (due to PTG and my PTO), let's go over the whole agenda today
15:02:19 #topic Announcement and Action Item (Optional)
15:02:33 we have a new etherpad with the priority items for the new cycle
15:03:00 oh, that's the next topic :D
15:03:27 anyway, my only announcement is quite obvious, we're at the beginning of another cycle
15:03:40 i'm still processing all the things we discussed during the PTG
15:03:46 i'm almost done though
15:03:57 #topic Zed Priority Items progress
15:04:03 :/
15:04:07 #topic Antelope Priority Items progress
15:04:18 \o
15:04:21 #link https://etherpad.opendev.org/p/qa-antelope-priority
15:04:23 o/
15:04:50 the priority etherpad is almost up to date, i'm planning to do a few more additions later today
15:05:42 hi o/
15:05:54 i also added the "default job variant - ovs?" and "cleanup deprecated lib/neutron code" tasks there
15:06:06 frickler: did we get any feedback on the default ovs job?
15:06:29 yes, Neutron prefers to keep OVN the default
15:06:47 and slaweq wanted to prepare a patch to make grenade test that variant
15:06:54 not sure if that happened yet
15:07:34 slaweq also volunteered to do the lib cleanup
15:07:36 what about ovn not being production ready, isn't that an issue anymore?
15:07:43 frickler: that's great news
15:07:47 thank you slaweq
15:07:51 it's very much appreciated
15:08:44 I think today is a holiday in Poland, but he can update the etherpad when he returns I guess
15:09:07 Most of Europe has a holiday, yes
15:09:31 absolutely, no pressure, the important thing is that we track it and are aware of the next step
15:12:05 the other tasks in the priority etherpad look quite good, i'll add a few more details to the srbac thing later today
15:12:06 let's move on
15:12:07 #topic OpenStack Events Updates and Planning
15:12:29 we can skip this, as we are at the beginning of the cycle
15:12:34 #topic Gate Status Checks
15:12:39 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:12:52 nothing there
15:12:57 anything urgent for review?
15:13:18 nothing from my side
15:13:54 #topic Bare rechecks
15:13:58 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:14:24 we are at <15% .. out of 7 rechecks
15:14:53 #topic Periodic jobs Status Checks
15:15:05 stable
15:15:09 #link https://zuul.openstack.org/builds?job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-wallaby-py3&job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-zed&pipeline=periodic-stable
15:15:15 master
15:15:17 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:15:55 the periodic pipeline looks good
15:16:12 fips looks broken
15:16:34 although it seems that tempest was successful
15:16:37 it failed after that
15:18:17 Yes, agree. I'm checking it too.
15:20:19 i'm staring at the console log but i don't see where exactly it failed
15:20:35 maybe here
15:20:54 [Capture most recent qemu crash dump, if any] but it says it ignores errors
15:22:25 Yeah, I saw this too. The warnings in the "Generate statistics" task look a bit suspicious.
15:22:36 in the latest job, tempest didn't finish, it was interrupted due to a timeout
15:22:55 2022-11-01 04:43:02.590005 | RUN END RESULT_TIMED_OUT: [untrusted : opendev.org/openstack/tempest/playbooks/devstack-tempest.yaml@master]
15:23:05 https://zuul.openstack.org/build/009a202f662e4039801910581e65a704
15:23:27 oh, i see! i overlooked that line
15:23:55 Ok
15:24:01 so maybe increasing the job timeout even more might help
15:24:22 or reducing the scope for fips
15:25:30 what do you mean by reducing scope? running fewer tests?
15:25:51 yes
15:26:18 or find out why it is slower than the normal c9 job
15:26:56 ok, let's increase the timeout just to confirm it would help, let's also compare the tests (how long they take) with the tests in another, similar job
15:27:31 good, thanks
15:27:37 moving on
15:27:38 #topic Distros check
15:27:45 cs-9
15:27:47 #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:27:52 fedora
15:27:57 #link https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&skip=0
15:28:05 debian
15:28:06 #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&skip=0
15:28:14 jammy
15:28:14 #link https://zuul.openstack.org/builds?job_name=devstack-platform-ubuntu-jammy&job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
15:29:04 cs-9 and fedora have some timeouts too, but only sporadic ones
15:29:50 yeah :/
15:30:27 i've just remembered, what's the status of migrating to jammy?
15:30:31 I didn't spot any for debuntu, so it might be a newer version of something, kernel or qemu
15:30:53 the most obvious blocker for jammy is ceph
15:31:02 not sure if anyone wanted to work on that
15:31:13 noonedeadpunk made a lot of test patches
15:31:34 and a couple of projects have fixed issues already, like designate
15:32:01 ack, thanks, just wondering, i bet i have the gerrit topic tracking that open somewhere, lost among other tabs
15:32:26 #link https://review.opendev.org/q/topic:migrate-to-jammy
15:32:42 looks mixed in general
15:32:51 so some work to do I guess
15:33:58 swift still doesn't even run unit tests on py310 :(
15:35:10 they were also late with py3 support, so it will be interesting to see whether they manage to stay integrated
15:35:32 gmann: ^^ maybe something for the tc to keep an eye on
15:35:57 #link https://review.opendev.org/c/openstack/swift/+/850947
15:39:07 efforts affecting everybody are always interesting ..
15:39:12 #topic Sub Teams highlights
15:39:17 Changes with Review-Priority == +1
15:39:23 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:40:22 I think https://review.opendev.org/c/openstack/devstack/+/860976 should go there, too, I just didn't get to it yet after dansmith updated it
15:40:38 \o/
15:42:31 seems interest in openeuler has also dropped off again, no signs of debugging the failures https://review.opendev.org/c/openstack/devstack/+/852237
15:44:00 a quick question in
15:44:02 #link https://review.opendev.org/c/openstack/devstack/+/861915
15:44:06 * kopecmartin checking the others
15:44:55 seeing the merged patches, we should check whether we have periodic jobs for rocky and create them if not, also add them to the agenda
15:46:13 I'll defer to haleyb for the ovn question
15:47:29 frickler: devstack-platform-rocky-blue-onyx is in the check queue, isn't that enough?
15:48:58 kopecmartin: ah, yes, we don't do periodic for the platform jobs. then just add https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx to the agenda to look at regularly
15:49:20 frickler: sounds good, thanks
15:50:19 done
15:50:40 #topic Open Discussion
15:50:45 anything for the open discussion
15:50:45 ?
15:51:56 frickler: what's up?
15:52:20 haleyb: just a quick question (last comment) in https://review.opendev.org/c/openstack/devstack/+/861915
15:53:23 * haleyb looking
15:53:36 thanks
15:53:40 #topic Bug Triage
15:53:49 #link https://etherpad.openstack.org/p/qa-bug-triage-antelope
15:53:56 our new etherpad for bug triaging ^^
15:54:08 this is all from my side
15:54:14 if there isn't anything else
15:54:25 we can close our office hour
15:54:47 thank you everyone
15:54:51 #endmeeting