15:00:24 <kopecmartin> #startmeeting qa
15:00:24 <opendevmeet> Meeting started Tue Nov 15 15:00:24 2022 UTC and is due to finish in 60 minutes.  The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:24 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:24 <opendevmeet> The meeting name has been set to 'qa'
15:00:34 <kopecmartin> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
15:00:37 <kopecmartin> agenda ^^^
15:01:12 <wxy-xiyuan> o/
15:02:03 <frickler> \o
15:03:28 <kopecmartin> o/
15:04:23 <kopecmartin> #topic Announcement and Action Item (Optional)
15:04:34 <kopecmartin> nothing from my side here
15:04:43 <kopecmartin> #topic Antelope Priority Items progress
15:04:48 <kopecmartin> #link https://etherpad.opendev.org/p/qa-antelope-priority
15:05:24 <kopecmartin> regarding srbac
15:05:39 <kopecmartin> gmann's patch adding enforce_scope for nova in devstack has been merged
15:05:40 <kopecmartin> #link https://review.opendev.org/c/openstack/devstack/+/861930
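In devstack terms, scope enforcement for a service is toggled through a localrc flag; a minimal sketch, assuming a NOVA_ENFORCE_SCOPE-style variable in the spirit of the linked patch (the exact variable name and usage are assumptions here, the patch itself is authoritative):

    [[local|localrc]]
    # Assumed devstack flag: enable enforce_scope / new policy defaults for nova
    NOVA_ENFORCE_SCOPE=True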
15:07:01 <kopecmartin> and tempest's follow-up is in the check gates right now
15:07:03 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/614484
15:07:37 <kopecmartin> that's all i know for now
15:07:43 <kopecmartin> regarding the priority items
15:08:38 <kopecmartin> #topic OpenStack Events Updates and Planning
15:08:48 <kopecmartin> skipping
15:08:49 <kopecmartin> #topic Gate Status Checks
15:08:54 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:09:17 <kopecmartin> nothing there, any urgent / gate blocking patch to review?
15:09:30 <frickler> nope, debian got fixed, thx clarkb
15:10:34 <kopecmartin> yeah, that's great
15:10:38 <kopecmartin> thank you clarkb
15:10:47 <kopecmartin> #topic Bare rechecks
15:10:52 <kopecmartin> #link https://etherpad.opendev.org/p/recheck-weekly-summary
15:11:01 <kopecmartin> 1 out of 5
15:11:09 <kopecmartin> could be worse ;)
15:11:18 <kopecmartin> #topic Periodic jobs Status Checks
15:11:25 <kopecmartin> stable
15:11:26 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-wallaby-py3&job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-zed&pipeline=periodic-stable
15:11:31 <kopecmartin> master
15:11:31 <kopecmartin> #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
15:11:44 <frickler> there was some zuul issue over the weekend, you can ignore those
15:12:08 <kopecmartin> oh, yeah, everything is passing now
15:12:23 <kopecmartin> devstack-no-tls-proxy failed on stable/zed
15:12:35 <kopecmartin> it hasn't run since
15:12:46 * kopecmartin checking the output
15:13:12 <kopecmartin> timed out, hopefully random
15:13:15 <frickler> I think yoga is also still blocked by c9s, but I didn't look closer
15:13:36 <frickler> also as discussed earlier, old stable branches are broken by nova EOL
15:13:47 <frickler> q-r-s I think
15:14:24 <frickler> but it seems we are fine with just waiting for devstack to become EOL, too
15:14:43 <kopecmartin> i think so
15:15:27 <kopecmartin> #topic Distros check
15:15:34 <kopecmartin> cs-9
15:15:35 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
15:15:40 <kopecmartin> fedora
15:15:41 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&skip=0
15:15:47 <kopecmartin> debian
15:15:48 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&skip=0
15:15:54 <kopecmartin> jammy
15:15:55 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-ubuntu-jammy&job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
15:15:59 <kopecmartin> rocky
15:16:02 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
15:16:52 <frickler> https://review.opendev.org/c/openstack/devstack/+/863451/ is the yoga patch I was referring to
15:16:59 * kopecmartin checking cs-9 on yoga
15:18:05 <kopecmartin> it's been a week, i'm gonna recheck and if it fails, we'll make it n-v?
15:18:22 <kopecmartin> the cs jobs are always so volatile
15:19:21 <frickler> ack
15:19:32 <kopecmartin> other distros look quite good though
15:20:00 <kopecmartin> #topic Sub Teams highlights
15:20:00 <kopecmartin> Changes with Review-Priority == +1
15:20:08 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
15:20:27 <kopecmartin> nothing here
15:20:37 <kopecmartin> #topic Open Discussion
15:20:43 <wxy-xiyuan> Hi, I'd like to say more about the openEuler distro job. https://review.opendev.org/c/openstack/devstack/+/852237
15:20:57 <kopecmartin> sure wxy-xiyuan
15:20:59 <wxy-xiyuan> Sorry for the late reply. I spent a lot of time debugging the test failure, since it's raised randomly. The behavior of the failure is that sometimes the VM can't be reached over SSH, but the state of the VM is Active and the console log looks OK.
15:21:07 <wxy-xiyuan> frickler: kopecmartin
15:21:27 <wxy-xiyuan> From what I can see, the case is: neutron tries to delete the old OVN db record of the floating ip, but there is no response from OVN, and no error is raised by OVN either. Then neutron treats it as successful and reuses the floating ip with the new VM. So the VM can't be reached over SSH via the floating ip since the OVN record is not refreshed.
15:21:45 <wxy-xiyuan> The OVN version openEuler uses is v20.06.1 from https://opendev.org/openstack/devstack/src/commit/1f5d6c0abba7c18dc809a68ed893a6e8f77b207a/lib/neutron_plugins/ovn_agent#L31
15:22:00 <wxy-xiyuan> Then I tried using a newer version, the same one devstack-platform-ubuntu-jammy-ovn-source uses. https://opendev.org/openstack/devstack/src/branch/master/.zuul.yaml#L710
15:22:07 <frickler> that sounds like you might want to bring it up with the neutron team
15:22:12 <wxy-xiyuan> The problem seems fixed now. Up to today, the daily rechecks of the openEuler job have all run well.
15:22:20 <wxy-xiyuan> https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-openEuler-22.03&skip=0
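For context, building OVN from source in devstack (what devstack-platform-ubuntu-jammy-ovn-source does) is driven by localrc settings along these lines; this is only a sketch, and the branch values are illustrative rather than what the openEuler job actually pins:

    [[local|localrc]]
    # Build OVN/OVS from git instead of installing the distro package
    OVN_BUILD_FROM_SOURCE=True
    # Illustrative branches; see the jammy-ovn-source job for the real values
    OVN_BRANCH=v21.06.0
    OVS_BRANCH=branch-2.16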
15:23:03 <frickler> using ovn from source doesn't sound like a stable solution to me
15:23:21 <frickler> neutron only added that option as a workaround
15:23:42 <wxy-xiyuan> yeah, I've asked openEuler to support an OVN package in the next version.
15:23:59 <wxy-xiyuan> And I just noticed that the devstack team seems to want to switch to ovs, right?
15:24:25 <frickler> no, that was discussed, but neutron team prefers ovn in general
15:24:31 <wxy-xiyuan> Ok.
15:24:44 <frickler> but I think we still use ovs for debian
15:24:44 <frickler> so might be an option for openeuler, too
15:25:16 <wxy-xiyuan> At least OVN works now. What's your suggestion for the next step?
15:25:38 <wxy-xiyuan> keep rechecking to make sure it works as expected?
15:26:15 <kopecmartin> how long till openEuler supports OVN package?
15:26:19 <frickler> maybe you can try to add a second job that uses ovs, then we can see how both options work for a bit
15:26:35 <kopecmartin> +1
15:26:49 <wxy-xiyuan> kopecmartin: the next LTS will be released on 2022.12.30
15:26:59 <wxy-xiyuan> frickler: sure.
15:27:29 <kopecmartin> cool, so in the meantime let's go with frickler's suggestion
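For comparison, an OVS-based job would roughly mirror devstack-platform-ubuntu-jammy-ovs; in localrc terms the switch away from OVN looks something like the sketch below (assumed, illustrative settings, not the exact job definition):

    [[local|localrc]]
    # Use the classic ML2/OVS agent instead of OVN
    Q_AGENT=openvswitch
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
    Q_ML2_TENANT_NETWORK_TYPE=vxlan
    # Enable the legacy neutron agents and drop the OVN services
    enable_service q-agt q-dhcp q-l3 q-meta
    disable_service ovn-controller ovn-northd q-ovn-metadata-agent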
15:28:21 <frickler> so then there is the topic of jammy migration
15:28:36 <frickler> no progress afaict from either swift or ceph plugin
15:29:01 <frickler> I was hoping for someone from manila to pick up the latter, but I didn't see anything yet
15:29:13 <wxy-xiyuan> frickler: kopecmartin thanks for your replies, I'll add one later so we can keep watching how it behaves
15:29:33 <kopecmartin> wxy-xiyuan: thanks
15:29:39 <kopecmartin> #link https://etherpad.opendev.org/p/migrate-to-jammy
15:29:45 <kopecmartin> the deadline is Nov 18th
15:29:58 <kopecmartin> so we'll just switch the jobs then, right?
15:30:41 <frickler> I would suggest doing that to increase the pressure, yes
15:30:52 <kopecmartin> makes sense to me
15:30:59 <frickler> projects can still override to focal if they think they need more time
15:31:19 <frickler> but I'm also thinking the TC should oversee that progress
15:31:38 <frickler> since after all they defined the PTI that way
15:32:20 <eliadcohen> If we have time I'd like to ask for assistance on something
15:32:24 <frickler> gmann: any other opinion?
15:32:31 <frickler> eliadcohen: sure
15:32:45 <eliadcohen> I'll wait on gmann, better to wrap up the current topic first
15:33:02 <kopecmartin> manila seems to be waiting on ceph
15:33:13 <kopecmartin> so ceph is the blocker it seems
15:33:15 <frickler> likely he isn't around yet, just pinged since he might answer later
15:33:23 <kopecmartin> eliadcohen: yep, go ahead
15:33:44 <frickler> we discussed ceph earlier, distro pkgs are there, no need to wait for IBM imo
15:34:28 <kopecmartin> frickler: ah, ok,  i didn't know that, i just went through the etherpad
15:36:14 <frickler> eliadcohen: still around?
15:38:06 <frickler> seems not, maybe we go on and they can ask later?
15:38:29 <kopecmartin> sure
15:38:34 <kopecmartin> anything else for the open discussion?
15:39:33 <lpiwowar> nothing from my side
15:40:02 <kopecmartin> #topic Bug Triage
15:40:07 <kopecmartin> #link https://etherpad.openstack.org/p/qa-bug-triage-antelope
15:40:18 <kopecmartin> numbers recorded but i haven't had time to triage
15:40:20 <kopecmartin> shame on me
15:40:23 <kopecmartin> :D
15:40:24 <eliadcohen> yes sorry
15:40:42 <kopecmartin> np, how can we help? eliadcohen
15:40:48 <eliadcohen> Hello, I'd like to ask for a little help figuring out the Zuul errors in https://review.opendev.org/c/openstack/tempest/+/857756
15:40:56 <eliadcohen> (Precanned message)
15:41:49 <eliadcohen> I'm just not clear on some of the Zuul failures around glance-multistore-cinder-import, tempest-full-test-account-py3 etc
15:42:00 * kopecmartin checking
15:42:08 <eliadcohen> Is there an easy way to figure out if mine is the breaking change?
15:43:12 <kopecmartin> well, if the same jobs pass on other patches (i.e. runs triggered by other changes) and the failed tests use the code you changed, then probably your code caused the failure
15:43:20 <kopecmartin> otherwise the failures are unrelated
15:43:21 <lpiwowar> eliadcohen you can run the tox jobs locally using `tox -e py38` for example.
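To reproduce the failing check jobs locally from a tempest checkout, the matching tox environments can be run directly; a quick sketch (the test id in the last line is a hypothetical example, substitute a real failing test):

    # unit tests, matching the openstack-tox-py310 / py38 check jobs
    tox -e py310
    # style checks, matching openstack-tox-pep8
    tox -e pep8
    # re-run a single failing unit test by id (hypothetical name)
    tox -e py310 -- tempest.tests.lib.test_example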
15:43:35 <eliadcohen> Both make sense
15:44:00 <eliadcohen> so lpiwowar I can run all these locally and then figure out why they fail. Now suppose it doesn't have anything to do with what I changed?
15:44:29 <kopecmartin> the tox jobs fail on the new unit tests
15:44:30 <kopecmartin> #link https://cbde7a5ceb90f79b264f-a3c1233a0305e644b60ccc0279f1954b.ssl.cf2.rackcdn.com/857756/8/check/openstack-tox-py310/ec918e2/testr_results.html
15:44:52 <kopecmartin> the pep8 job is mostly complaining about line length
15:45:35 <kopecmartin> i'd start with those first
15:45:47 <kopecmartin> the failure in glance-multistore-cinder-import seems unrelated (just at first glance though)
15:45:48 <lpiwowar> eliadcohen not sure I understand. You've already run the tests locally?
15:45:59 <eliadcohen> lpiwowar, no, only a few
15:46:05 <eliadcohen> I'll run all next
15:46:13 <eliadcohen> Was trying to get a feel for the process
15:46:37 <eliadcohen> ok, so I'll go at these one by one and we will take it from there offline as needed?
15:47:06 <lpiwowar> Oh, ok I see. I agree with kopecmartin it is best to start with the pep8 errors.
15:47:11 <kopecmartin> eliadcohen: sure, sounds like a plan
15:47:25 <lpiwowar> eliadcohen: ok
15:47:56 <kopecmartin> cool, anything else we can help with?
15:48:24 <eliadcohen> That was it on my end gang, thanks!
15:48:48 <kopecmartin> ok then, that's it from my side too
15:48:53 <kopecmartin> unless there is anything else
15:49:09 <kopecmartin> seems not
15:49:13 <kopecmartin> thanks everyone
15:49:15 <kopecmartin> #endmeeting