14:00:11 <kopecmartin> #startmeeting qa
14:00:11 <opendevmeet> Meeting started Tue Oct 12 14:00:11 2021 UTC and is due to finish in 60 minutes.  The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:11 <opendevmeet> The meeting name has been set to 'qa'
14:00:19 <kopecmartin> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
14:00:22 <kopecmartin> the agenda ^^^
14:00:28 <soniya29> hello
14:01:15 <kopecmartin> hi soniya29
14:02:12 <kopecmartin> let's start
14:02:18 <kopecmartin> #topic Announcement and Action Item (Optional)
14:02:23 <yoctozepto> o/
14:02:43 <kopecmartin> PTG is gonna be next week, but we'll talk about that in a minute
14:02:45 <kopecmartin> hi yoctozepto
14:02:51 <kopecmartin> #topic Xena Priority Items progress
14:02:57 <kopecmartin> #link https://etherpad.opendev.org/p/qa-xena-priority
14:03:03 <kopecmartin> any progress on this front?
14:03:12 <kopecmartin> nothing from my side unfortunately
14:03:17 <gmann> o/
14:03:23 <gmann> nothing from me
14:04:33 <kopecmartin> #topic OpenStack Events Updates and Planning
14:04:38 <kopecmartin> #link https://etherpad.opendev.org/p/qa-yoga-ptg
14:04:46 <kopecmartin> PTG is next week
14:05:17 <kopecmartin> if you haven't proposed any topic yet, feel free to do so, rather sooner than later
14:05:39 <kopecmartin> a reminder:
14:05:40 <kopecmartin> we have booked the following slots: 13-15 UTC October 18th and 15-17 UTC October 19th
14:05:48 <soniya29> kopecmartin, sure
14:06:24 <kopecmartin> speaking about the PTG, the QA office hour next week is cancelled due to PTG
14:06:40 <gmann> +1
14:06:40 <kopecmartin> we can talk in qa sessions :)
14:06:55 <kopecmartin> moving on
14:06:57 <kopecmartin> #topic Gate Status Checks
14:07:04 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking)
14:07:08 <kopecmartin> any issues to bring up?
14:07:14 <kopecmartin> I don't see any urgent patches
14:07:28 <gmann> I think it seems green.
14:07:47 <kopecmartin> that's great
14:07:47 <kopecmartin> #topic Periodic jobs Status Checks
14:07:53 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-train-py3&pipeline=periodic-stable
14:07:58 <kopecmartin> #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
14:08:11 <kopecmartin> all green too it seems (one failure, although that seems to be random)
14:08:48 <kopecmartin> #topic Sub Teams highlights
14:08:53 <kopecmartin> any updates or highlights we need to discuss today?
14:09:04 <kopecmartin> Changes with Review-Priority == +1
14:09:05 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking)
14:09:09 <kopecmartin> there are none
14:11:10 <kopecmartin> #topic Open Discussion
14:11:11 <gmann> one issue nitesh brought up before the meeting: an env not having an L3 network, and how to run tests there, as a router cannot be created without L3
14:11:56 <gmann> the answer is to stop the network resource creation and rely on the env's existing network
14:12:19 <gmann> slaweq ^^ is this the right approach from the neutron perspective?
14:12:42 <kopecmartin> this is the option in tempest btw: https://opendev.org/openstack/tempest/src/commit/a7bcabc8976e6e646d5e4379e3289b43586261c1/tempest/config.py#L71
14:13:00 <slaweq> gmann: if You have provider network configured, You can plug vms directly to that network
14:13:11 <slaweq> and then they should be accessible using fixed ips directly
14:13:32 <gmann> yeah, that is Tempest's expectation and tempest does not create any network resources
14:13:42 <frickler> wow, you are fast. I wanted to bring up the issue of pip installs failing with latest setuptools
14:13:47 <frickler> this is hidden in CI by our use of pre-built, cached wheels
14:14:08 <frickler> but an increasing number of people show up failing to install devstack locally because of that
14:14:15 <gmann> slaweq: thanks for confirmation
14:14:19 <slaweq> yw
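A hedged tempest.conf sketch of the approach slaweq and gmann describe above (the option names come from tempest's config.py, as linked earlier; the provider network name is an assumption for this example):

```ini
[auth]
# stop tempest from creating per-tenant network resources (routers, subnets)
create_isolated_networks = False

[compute]
# plug test VMs directly into the pre-existing provider network
fixed_network_name = provider-net

[network]
# VMs are reachable via their fixed IPs, no floating IPs / L3 router needed
project_networks_reachable = True
```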
14:14:36 <frickler> it also affects running unit tests
14:14:43 <gmann> frickler: we are facing that on stable right?
14:14:52 <frickler> no, on master too
14:15:01 <frickler> or especially there
14:15:20 <frickler> I made a requirements job without our ci-built wheels
14:15:28 <kopecmartin> is it this bug?
14:15:29 <kopecmartin> #link https://bugs.launchpad.net/devstack/+bug/1942396
14:15:56 <frickler> https://review.opendev.org/c/openstack/requirements/+/813292
14:16:34 <frickler> yes, that bug
14:16:34 <frickler> ehm, no
14:16:34 <frickler> https://bugs.launchpad.net/cinder/+bug/1946340
14:17:02 <frickler> it is mostly an issue with oslo.vmware and designate
14:18:03 <frickler> but also some further uninstallable libs I listed in https://review.opendev.org/c/openstack/requirements/+/813302
14:18:33 <frickler> for stable branches, capping setuptools and possibly pip might be a solution
14:18:48 <gmann> yeah,
14:18:53 <frickler> for master, we need to replace the outdated libs
14:19:25 <frickler> for suds-jurko, there is a suds-community fork, which will hopefully release 1.0.0 with a fix soon
14:19:44 <frickler> for the other libs, I don't know a solution yet
14:20:35 <frickler> I'm also thinking we should do a periodic devstack job that runs without prebuilt wheels, so that we can detect this situation
14:20:51 <gmann> +1 on job
14:20:53 <kopecmartin> that is a good idea
14:21:12 <kopecmartin> maybe we could detect these kind of things sooner
14:21:29 <gmann> but if it does and there is no replacement (like the current situation), what do we do? ask projects/requirements team to remove the deps?
14:22:01 <kopecmartin> there isn't any other option than waiting until the libs make their code compatible with newer setuptools/pip, is there?
14:22:21 <frickler> those libs mostly seem to be abandoned
14:22:28 <kopecmartin> oh
14:22:34 <frickler> like suds-jurko hasn't been updated in some years
14:22:42 <gmann> yeah
14:22:53 <frickler> which makes them good candidates for replacement anyway, yes
14:24:15 <gmann> seems 8 projects are using it https://codesearch.opendev.org/?q=suds-jurko&i=nope&literal=nope&files=&excludeFiles=&repos=
14:25:27 <gmann> I think adding such a job and then notifying the requirements/projects teams to start thinking about replacement/remove-deps can be a good way forward
14:25:51 <kopecmartin> yup, i agree
14:25:54 <gmann> and rely on prebuilt wheels until then, but it does not solve the local devstack issue
14:26:29 <gmann> in that case we can add a var to pin the setuptools/pip versions for local envs. frickler?
14:27:08 <frickler> hmm, I think the mirrors are accessible from the outside, but not sure if we want to point people to that
14:27:37 <frickler> in the current case, just removing oslo.vmware from nova/cinder/designate reqs is enough
14:27:50 <frickler> because most deployments don't actually use it
14:28:49 <gmann> but we need to disable corresponding tests too right?
14:29:28 <gmann> and it might be hard to find all unit/functional tests corresponding to the vmware driver in nova until we see all the failing ones
14:29:31 <frickler> I didn't check that, just verified that devstack installation finishes
14:30:05 <frickler> I can try to build that job later and maybe also a corresponding tox job
14:30:19 <frickler> then we can see in nova what actually is failing
14:30:29 <gmann> frickler: if we remove it from the requirements repo then we have nova/cinder unit test jobs
14:30:32 <gmann> yah
14:30:44 <gmann> functional too
14:33:17 <kopecmartin> anything else to add?
14:33:56 <kopecmartin> anything else for the open discussion?
14:34:56 <gmann> nothing from me
14:35:06 <kopecmartin> #topic Bug Triage
14:35:11 <kopecmartin> #link https://etherpad.opendev.org/p/qa-bug-triage-xena
14:35:16 <kopecmartin> i recorded the numbers as always ^^
14:35:20 <kopecmartin> i have 2 bugs to bring up
14:35:27 <kopecmartin> #link https://bugs.launchpad.net/tempest/+bug/1946321
14:35:34 <kopecmartin> 'jsonschema' has no attribute 'compat'
14:35:45 <kopecmartin> we have a fix for master
14:35:46 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/812804
14:36:09 <kopecmartin> what will we do with older releases? we can't modify tempest there :/
14:36:34 <gmann> that is fixed by this #link https://review.opendev.org/c/openstack/devstack/+/812092
14:36:58 <gmann> we need to use the latest compatible version on stable branches where we use old tempest
14:37:53 <kopecmartin> that makes sense
14:38:17 <kopecmartin> ok then, gmann please review this when you have a moment
14:38:18 <gmann> we have a set of compatible tempest versions for any stable branch so using any version from that range works
14:38:18 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/812804
14:38:47 <gmann> ah sure, i thought I did :) will re-review
14:38:54 <kopecmartin> thanks
14:39:03 <kopecmartin> another bug is this one:
14:39:04 <kopecmartin> #link https://bugs.launchpad.net/tempest/+bug/1945495
14:39:10 <kopecmartin> test_rebuild_server fails with mismatch - fixed and floating ip type after rebuild
14:39:15 <gmann> done
14:39:19 <kopecmartin> there is a race condition in the test
14:39:29 <kopecmartin> i'm not sure how to approach that
14:39:50 <kopecmartin> we don't have any waiter for a floating ip to wait until it's attached to a vm
14:43:42 <kopecmartin> as mentioned in the comments, there is a possibility that the ip doesn't get assigned between lines 193 and 214 and that will result in the mismatch
14:43:44 <kopecmartin> #link https://opendev.org/openstack/tempest/src/commit/669900f622ce46c62260910d2cf9fa1d32216191/tempest/api/compute/servers/test_server_actions.py#L193
14:43:53 <kopecmartin> #link https://opendev.org/openstack/tempest/src/commit/669900f622ce46c62260910d2cf9fa1d32216191/tempest/api/compute/servers/test_server_actions.py#L214
14:44:36 <gmann> maybe we can use the os-interface nova API to poll that?
14:44:57 <gmann> #link https://docs.openstack.org/api-ref/compute/?expanded=show-port-interface-details-detail
14:46:31 <kopecmartin> so basically to create a waiter - whenever the server has a floating ip attached before the rebuild, wait until the floating ip is attached after the rebuild and then continue
14:47:45 <gmann> yeah, if this is race on FIP attachment. but i have not checked the log
14:48:46 <kopecmartin> that's the only way i can imagine we could get to the mismatch error
14:49:12 <kopecmartin> ok ,i'll try to create that waiter and let's see
14:49:31 <kopecmartin> thanks for the consult
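A minimal Python sketch of the waiter idea discussed above — `wait_for_floating_ip` and its `get_addresses` callable are hypothetical names, not existing tempest helpers; a real waiter would poll the compute servers (or os-interface) API and live in tempest's waiters module:

```python
import time


def wait_for_floating_ip(get_addresses, timeout=60, interval=2):
    """Poll until an address of type 'floating' appears for the server.

    get_addresses is a callable returning a flat list of address dicts in
    the shape the compute API reports per network, e.g.
    [{'OS-EXT-IPS:type': 'fixed', 'addr': '10.0.0.5'}, ...].
    Returns the floating address, or raises TimeoutError if none shows up
    within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        floating = [a for a in get_addresses()
                    if a.get('OS-EXT-IPS:type') == 'floating']
        if floating:
            return floating[0]['addr']
        time.sleep(interval)
    raise TimeoutError('floating IP was not attached within %ss' % timeout)
```

In the test, this would run after the rebuild, and only when the server had a floating ip attached before it, so the fixed/floating type comparison no longer races against the attachment.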
14:49:39 <kopecmartin> that's all from my side
14:50:01 <kopecmartin> if there isn't anything else, i think we can close today's office hour
14:50:37 <gmann> nothing from me, we can close
14:50:45 <kopecmartin> thank you all and see you at the PTG next week o/
14:50:48 <gmann> thanks kopecmartin
14:50:52 <kopecmartin> #endmeeting