15:00:24 #startmeeting qa
15:00:24 Meeting started Tue Nov 15 15:00:24 2022 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:24 The meeting name has been set to 'qa'
15:00:34 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:37 agenda ^^^
15:01:12 o/
15:02:03 \o
15:03:28 o/
15:04:23 #topic Announcement and Action Item (Optional)
15:04:34 nothing from my side here
15:04:43 #topic Antelope Priority Items progress
15:04:48 #link https://etherpad.opendev.org/p/qa-antelope-priority
15:05:24 regarding srbac
15:05:39 gmann's patch adding enforce_scope for nova in devstack has been merged
15:05:40 #link https://review.opendev.org/c/openstack/devstack/+/861930
15:07:01 and the tempest's follow up is in check gates right now
15:07:03 #link https://review.opendev.org/c/openstack/tempest/+/614484
15:07:37 that's all i know for now
15:07:43 regarding the priority items
15:08:38 #topic OpenStack Events Updates and Planning
15:08:48 skipping
15:08:49 #topic Gate Status Checks
15:08:54 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:09:17 nothing there, any urgent / gate blocking patch to review?
15:09:30 nope, debian got fixed, thx clarkb
15:10:34 yeah, that's great
15:10:38 thank you clarkb
15:10:47 #topic Bare rechecks
15:10:52 #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:11:01 1 out of 5
15:11:09 could be worse ;)
15:11:18 #topic Periodic jobs Status Checks
15:11:25 stable
15:11:26 #link https://zuul.openstack.org/builds?job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-wallaby-py3&job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-zed&pipeline=periodic-stable
15:11:31 master
15:11:31 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:11:44 there was some zuul issue over the weekend, you can ignore those
15:12:08 oh, yeah, everything is passing now
15:12:23 devstack-no-tls-proxy failed on stable/zed
15:12:35 it hasn't run since
15:12:46 * kopecmartin checking the output
15:13:12 time out, hopefully random
15:13:15 I think yoga is also still blocked by c9s, but I didn't look closer
15:13:36 also as discussed earlier, old stable branches are broken by nova EOL
15:13:47 q-r-s I think
15:14:24 but it seems we are fine with just waiting for devstack to become EOL, too
15:14:43 i think so
15:15:27 #topic Distros check
15:15:34 cs-9
15:15:35 #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:15:40 fedora
15:15:41 #link https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&skip=0
15:15:47 debian
15:15:48 #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&skip=0
15:15:54 jammy
15:15:55 #link https://zuul.openstack.org/builds?job_name=devstack-platform-ubuntu-jammy&job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
15:15:59 rocky
15:16:02 #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:16:52 https://review.opendev.org/c/openstack/devstack/+/863451/ is the yoga patch I was referring to
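For context on the "Distros check" links above: these platform jobs are thin zuul variants of a common devstack/tempest job that mostly just swap the node image. A minimal sketch of such a variant, with placeholder names only (the job, parent, and nodeset below are illustrative, not the actual definitions in devstack's .zuul.yaml):

    # Illustrative sketch only - the real definitions live in devstack's .zuul.yaml.
    - job:
        name: devstack-platform-example-distro    # placeholder job name
        parent: tempest-full-py3                   # assumed parent; the real jobs may differ
        description: Run devstack and tempest on the example distro image.
        nodeset: devstack-single-node-example      # placeholder nodeset for the distro's node label
        voting: false                              # platform jobs are often non-voting ("n-v") while a distro stabilizes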
15:16:59 * kopecmartin checking cs-9 on yoga
15:18:05 it's been a week, i'm gonna recheck and if it fails, we'll make it n-v?
15:18:22 the cs jobs are always so volatile
15:19:21 ack
15:19:32 other distros look quite good though
15:20:00 #topic Sub Teams highlights
15:20:00 Changes with Review-Priority == +1
15:20:08 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:20:27 nothing here
15:20:37 #topic Open Discussion
15:20:43 Hi, I'd like to say more about the openEuler distro job. https://review.opendev.org/c/openstack/devstack/+/852237
15:20:57 sure wxy-xiyuan
15:20:59 Sorry for the late reply. I spent a lot of time debugging the test failure, since it happens randomly. The symptom is that sometimes the VM can't be reached over ssh, even though the VM is Active and the console log looks OK.
15:21:07 frickler: kopecmartin
15:21:27 From what I can see, the case is: neutron tries to delete the old OVN DB record of the floating IP, but there is no response from OVN and no error raised from OVN either. Neutron then treats the delete as successful and reuses the floating IP for the new VM. So the VM can't be reached over ssh via the floating IP, since the OVN record is never refreshed.
15:21:45 The OVN version openEuler uses is v20.06.1 from https://opendev.org/openstack/devstack/src/commit/1f5d6c0abba7c18dc809a68ed893a6e8f77b207a/lib/neutron_plugins/ovn_agent#L31
15:22:00 Then I tried using a newer version, the same one devstack-platform-ubuntu-jammy-ovn-source uses. https://opendev.org/openstack/devstack/src/branch/master/.zuul.yaml#L710
15:22:07 that sounds like you might want to bring it up with the neutron team
15:22:12 The problem seems to be fixed now. Up until today, the daily rechecks of the openEuler job have all passed.
15:22:20 https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-openEuler-22.03&skip=0
15:23:03 using ovn from source doesn't sound like a stable solution to me
15:23:21 neutron only added that option as a workaround
15:23:42 yeah, I've asked openEuler to support an OVN package in the next version.
15:23:59 And I just noticed that the devstack team seems to want to switch to ovs, right?
15:24:25 no, that was discussed, but the neutron team prefers ovn in general
15:24:31 Ok.
15:24:35 but I think we still use ovs for debian
15:24:44 so it might be an option for openEuler, too
15:25:16 At least OVN works now. What's your suggestion for the next step?
15:25:38 keep rechecking to make sure it works as expected?
15:26:15 how long till openEuler supports an OVN package?
15:26:19 maybe you can try to add a second job that uses ovs, then we can see how both options work for a bit
15:26:35 +1
15:26:49 kopecmartin: the next LTS will be released on 2022.12.30
15:26:59 frickler: sure.
15:27:29 cool, so in the meantime let's go with frickler's suggestion
15:28:21 so then there is the topic of the jammy migration
15:28:36 no progress afaict from either the swift or the ceph plugin
15:29:01 I was hoping for someone from manila to pick up the latter, but I didn't see anything yet
15:29:13 frickler: kopecmartin thanks for your replies, I'll add a second job later so we can keep watching it
15:29:33 wxy-xiyuan: thanks
15:29:39 #link https://etherpad.opendev.org/p/migrate-to-jammy
15:29:45 the deadline is Nov 18th
15:29:58 so we'll just switch the jobs then, right?
15:30:41 I would suggest doing that to increase the pressure, yes
15:30:52 makes sense to me
15:30:59 projects can still override to focal if they think they need more time
15:31:19 but I'm also thinking the TC should oversee that progress
15:31:38 since after all they defined the PTI that way
15:32:20 If we have time I'd like to ask for assistance on something
15:32:24 gmann: any other opinion?
15:32:31 eliadcohen: sure
15:32:45 I'll wait on gmann, we'd better wrap up the current topic first
15:33:02 manila seems to be waiting on ceph
15:33:13 so ceph is the blocker it seems
15:33:15 likely he isn't around yet, I just pinged him since he might answer later
15:33:23 eliadcohen: yep, go ahead
15:33:44 we discussed ceph earlier, distro pkgs are there, no need to wait for IBM imo
15:34:28 frickler: ah, ok, i didn't know that, i just went through the etherpad
15:36:14 eliadcohen: still around?
15:38:06 seems not, maybe we go on and they can ask later?
15:38:29 sure
15:38:34 anything else for the open discussion?
15:39:33 nothing from my side
15:40:02 #topic Bug Triage
15:40:07 #link https://etherpad.openstack.org/p/qa-bug-triage-antelope
15:40:18 numbers recorded but i haven't had time for triaging
15:40:20 shame on me
15:40:23 :D
15:40:24 yes sorry
15:40:42 np, how can we help? eliadcohen
15:40:48 Hello, I'd like to ask for a little help figuring out the Zuul errors in https://review.opendev.org/c/openstack/tempest/+/857756
15:40:56 (Precanned message)
15:41:49 I'm just not clear on some of the Zuul failures around glance-multistore-cinder-import, tempest-full-test-account-py3 etc
15:42:00 * kopecmartin checking
15:42:08 Is there an easy way to figure out if mine is the breaking change?
15:43:12 well, if the same jobs pass on other patches and the failed tests exercise the code you changed, then your change probably caused the failure
15:43:20 otherwise the failures are unrelated
15:43:21 eliadcohen you can run the tox jobs locally using `tox -e py38` for example.
15:43:35 Both make sense
15:44:00 so lpiwowar I can run all these locally and then figure out why they fail. Now suppose it doesn't have anything to do with what I changed?
15:44:29 the tox jobs fail on the new unit test job
15:44:30 #link https://cbde7a5ceb90f79b264f-a3c1233a0305e644b60ccc0279f1954b.ssl.cf2.rackcdn.com/857756/8/check/openstack-tox-py310/ec918e2/testr_results.html
15:44:52 the pep8 job is mostly complaining about the length of the lines
15:45:35 i'd start with those first
15:45:47 the failure in glance-multistore-cinder-import seems unrelated (just at first glance though)
15:45:48 eliadcohen not sure I understand. You've already run the tests locally?
15:45:59 lpiwowar, no, only a few
15:46:05 I'll run them all next
15:46:13 Was trying to get a feel for the process
15:46:37 ok, so I'll go through these one by one and we'll take it from there offline as needed?
15:47:06 Oh, ok I see. I agree with kopecmartin, it is best to start with the pep8 errors.
15:47:11 eliadcohen: sure, sounds like a plan
15:47:25 eliadcohen: ok
15:47:56 cool, anything else we can help with?
15:48:24 That was it on my end gang, thanks!
15:48:48 ok then, that's it from my side too
15:48:53 unless there is anything else
15:49:09 seems not
15:49:13 thanks everyone
15:49:15 #endmeeting
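A note on the "projects can still override to focal" option from the jammy migration discussion: for a job defined in the consuming project's own repo, this is a one-attribute change to the job definition. A minimal sketch, assuming a hypothetical job name and that the focal single-node nodeset provided by devstack is still available:

    # Minimal sketch, not taken from any real project: pin one devstack-based
    # job back to Ubuntu Focal while its plugin is being fixed for Jammy.
    - job:
        name: my-plugin-devstack-job               # hypothetical job in the project's own zuul config
        parent: devstack-tempest                   # assumed parent; whatever the job already inherits from
        nodeset: openstack-single-node-focal       # assumed pre-existing focal nodeset; drop this line to follow the new default

Everything else (vars, required-projects, run playbooks) stays inherited from the parent, so removing the nodeset line later is all that is needed to pick up the jammy default again.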