14:00:11 #startmeeting qa
14:00:11 Meeting started Tue Oct 12 14:00:11 2021 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:11 The meeting name has been set to 'qa'
14:00:19 #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
14:00:22 the agenda ^^^
14:00:28 hello
14:01:15 hi soniya29
14:02:12 let's start
14:02:18 #topic Announcement and Action Item (Optional)
14:02:23 o/
14:02:43 PTG is gonna be next week, but we'll talk about that in a minute
14:02:45 hi yoctozepto
14:02:51 #topic Xena Priority Items progress
14:02:57 #link https://etherpad.opendev.org/p/qa-xena-priority
14:03:03 any progress on this front?
14:03:12 nothing from my side unfortunately
14:03:17 o/
14:03:23 nothing from me
14:04:33 #topic OpenStack Events Updates and Planning
14:04:38 #link https://etherpad.opendev.org/p/qa-yoga-ptg
14:04:46 PTG is next week
14:05:17 if you haven't proposed any topic yet, feel free to do so, rather sooner than later
14:05:39 a reminder:
14:05:40 we have booked the following slots: 13-15 UTC October 18th and 15-17 UTC October 19th
14:05:48 kopecmartin, sure
14:06:24 speaking about the PTG, the QA office hour next week is cancelled due to the PTG
14:06:40 +1
14:06:40 we can talk in the qa sessions :)
14:06:55 moving on
14:06:57 #topic Gate Status Checks
14:07:04 #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking)
14:07:08 any issues to bring up?
14:07:14 I don't see any urgent patches
14:07:28 I think it seems green.
14:07:47 that's great
14:07:47 #topic Periodic jobs Status Checks
14:07:53 #link https://zuul.openstack.org/builds?job_name=tempest-full-victoria-py3&job_name=tempest-full-ussuri-py3&job_name=tempest-full-train-py3&pipeline=periodic-stable
14:07:58 #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
14:08:11 all green too, it seems (one failure, although that seems to be random)
14:08:48 #topic Sub Teams highlights
14:08:53 any updates or highlights we need to discuss today?
14:09:04 Changes with Review-Priority == +1
14:09:05 #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade+OR+project:openstack/hacking)
14:09:09 there are none
14:11:10 #topic Open Discussion
14:11:11 one issue nitesh brought up before the meeting: an env without an L3 network, and how to run tests there, as a router cannot be created without L3
14:11:56 the answer is to stop the network resource creation and rely on the env's existing network
14:12:19 slaweq ^^ is this the right approach from the neutron perspective?
14:12:42 this is the option in tempest btw: https://opendev.org/openstack/tempest/src/commit/a7bcabc8976e6e646d5e4379e3289b43586261c1/tempest/config.py#L71
14:13:00 gmann: if You have a provider network configured, You can plug vms directly to that network
14:13:11 and then they should be accessible using fixed ips directly
14:13:32 yeah, that is Tempest's expectation and tempest does not create any network resources
14:13:42 wow, you are fast.
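
A minimal tempest.conf sketch of the approach discussed above, i.e. disabling Tempest's isolated network creation and pointing it at the pre-existing provider network; the network name provider-net is a placeholder, and the exact set of options a given deployment needs may differ:

    [auth]
    # stop Tempest from creating networks/subnets/routers for test credentials
    create_isolated_networks = False

    [compute]
    # provider-net is a placeholder for the environment's existing network
    fixed_network_name = provider-net

    [validation]
    # reach guests via their fixed IPs instead of floating IPs
    connect_method = fixed
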
I wanted to bring up the issue of pip installs failing with the latest setuptools
14:13:47 this is hidden in CI by our use of pre-built, cached wheels
14:14:08 but an increasing number of people show up failing to install devstack locally because of that
14:14:15 slaweq: thanks for the confirmation
14:14:19 yw
14:14:36 it also affects running unit tests
14:14:43 frickler: we are facing that on stable, right?
14:14:52 no, on master too
14:15:01 or especially there
14:15:20 I made a requirements job without our ci-built wheels
14:15:28 is it this bug?
14:15:29 #link https://bugs.launchpad.net/devstack/+bug/1942396
14:15:56 https://review.opendev.org/c/openstack/requirements/+/813292
14:16:34 yes, that bug
14:16:34 ehm, no
14:16:34 https://bugs.launchpad.net/cinder/+bug/1946340
14:17:02 it is mostly an issue with oslo.vmware and designate
14:18:03 but also some further uninstallable libs I listed in https://review.opendev.org/c/openstack/requirements/+/813302
14:18:33 for stable branches, capping setuptools and possibly pip might be a solution
14:18:48 yeah,
14:18:53 for master, we need to replace the outdated libs
14:19:25 for suds-jurko, there is a suds-community fork, which will hopefully release 1.0.0 with a fix soon
14:19:44 for the other libs, I don't know a solution yet
14:20:35 I'm also thinking we should do a periodic devstack job that runs without prebuilt wheels, so that we can detect this situation
14:20:51 +1 on the job
14:20:53 that is a good idea
14:21:12 maybe we could detect these kinds of things sooner
14:21:29 but if it does happen and there is no replacement (like the current situation), what do we do? ask the projects/requirements team to remove the deps?
14:22:01 there isn't any other option than waiting until the libs make their code compatible with newer setuptools/pip, is there?
14:22:21 those libs mostly seem to be abandoned
14:22:28 oh
14:22:34 like suds-jurko hasn't been updated in some years
14:22:42 yeah
14:22:53 which makes them good candidates for replacement anyway, yes
14:24:15 seems 8 projects are using it: https://codesearch.opendev.org/?q=suds-jurko&i=nope&literal=nope&files=&excludeFiles=&repos=
14:25:27 I think adding such a job and then notifying the requirements/projects teams to start thinking about replacing/removing the deps can be a good way forward
14:25:51 yup, i agree
14:25:54 and rely on prebuilt wheels until then, but that does not solve the local devstack issue
14:26:29 in that case we can add a var to pin the setuptools/pip for the local env. frickler?
14:27:08 hmm, I think the mirrors are accessible from the outside, but not sure if we want to point people to that
14:27:37 in the current case, just removing oslo.vmware from nova/cinder/designate reqs is enough
14:27:50 because most deployments don't actually use it
14:28:49 but we need to disable the corresponding tests too, right?
14:29:28 and it might be hard to find all the unit/functional tests corresponding to the vmware driver in nova until we see all the failing ones
14:29:31 I didn't check that, just verified that the devstack installation finishes
14:30:05 I can try to build that job later and maybe also a corresponding tox job
14:30:19 then we can see in nova what actually is failing
14:30:29 frickler: if we remove it from the requirements repo then we have the nova/cinder unit test jobs
14:30:32 yah
14:30:44 functional too
14:33:17 anything else to add?
14:33:56 anything else for the open discussion?
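
As a rough illustration of the local workaround mentioned above (capping setuptools, and possibly pip, for a local devstack run), assuming the breakage comes from setuptools 58 dropping use_2to3 support; this is a manual pin sketch, not an official devstack variable:

    # hypothetical manual workaround before running stack.sh; devstack may
    # still upgrade setuptools later unless it is pinned again in its venvs
    sudo python3 -m pip install 'setuptools<58'
    ./stack.sh
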
14:34:56 nothing from me
14:35:06 #topic Bug Triage
14:35:11 #link https://etherpad.opendev.org/p/qa-bug-triage-xena
14:35:16 i recorded the numbers as always ^^
14:35:20 i have 2 bugs to bring up
14:35:27 #link https://bugs.launchpad.net/tempest/+bug/1946321
14:35:34 'jsonschema' has no attribute 'compat'
14:35:45 we have a fix for master
14:35:46 #link https://review.opendev.org/c/openstack/tempest/+/812804
14:36:09 what will we do with older releases? we can't modify tempest there :/
14:36:34 that is fixed by this #link https://review.opendev.org/c/openstack/devstack/+/812092
14:36:58 we need to use the latest compatible version on stable branches where we use old tempest
14:37:53 that makes sense
14:38:17 ok then, gmann please review this when you have a moment
14:38:18 we have a set of compatible tempest versions for each stable branch, so using any version from that range works
14:38:18 #link https://review.opendev.org/c/openstack/tempest/+/812804
14:38:47 ah sure, i thought I did :) will re-review
14:38:54 thanks
14:39:03 another bug is this one:
14:39:04 #link https://bugs.launchpad.net/tempest/+bug/1945495
14:39:10 test_rebuild_server fails with mismatch - fixed and floating ip type after rebuild
14:39:15 done
14:39:19 there is a race condition in the test
14:39:29 i'm not sure how to approach that
14:39:50 we don't have any waiter for a floating ip to wait until it's attached to a vm
14:43:42 as mentioned in the comments, there is a possibility that the ip doesn't get assigned between lines 193 and 214, and that will result in the mismatch
14:43:44 #link https://opendev.org/openstack/tempest/src/commit/669900f622ce46c62260910d2cf9fa1d32216191/tempest/api/compute/servers/test_server_actions.py#L193
14:43:53 #link https://opendev.org/openstack/tempest/src/commit/669900f622ce46c62260910d2cf9fa1d32216191/tempest/api/compute/servers/test_server_actions.py#L214
14:44:36 maybe we can use the os-interface nova API to poll that?
14:44:57 #link https://docs.openstack.org/api-ref/compute/?expanded=show-port-interface-details-detail
14:46:31 so basically to create a waiter - whenever the server has a floating ip attached before the rebuild, wait until the floating ip is attached after the rebuild and then continue
14:47:45 yeah, if this is a race on the FIP attachment. but i have not checked the log
14:48:46 that's the only way i can imagine we could get to the mismatch error
14:49:12 ok, i'll try to create that waiter and let's see
14:49:31 thanks for the consult
14:49:39 that's all from my side
14:50:01 if there isn't anything else, i think we can close today's office hour
14:50:37 nothing from me, we can close
14:50:45 thank you all and see you at the PTG next week o/
14:50:48 thanks kopecmartin
14:50:52 #endmeeting
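
A minimal sketch of the waiter idea from the last bug above, waiting until a given floating IP shows up in the server's addresses; the function name, the polling of the server addresses (rather than the os-interface API), and the timeout values are assumptions, not the final Tempest implementation:

    # sketch only: a real Tempest waiter would live in tempest/common/waiters.py
    # and reuse the configured build timeout/interval
    import time

    from tempest.lib import exceptions as lib_exc


    def wait_for_server_floating_ip(servers_client, server_id, floating_ip,
                                    timeout=300, interval=5):
        """Wait until floating_ip appears among the server's addresses."""
        start = time.time()
        while time.time() - start < timeout:
            server = servers_client.show_server(server_id)['server']
            attached = [
                addr['addr']
                for addrs in server.get('addresses', {}).values()
                for addr in addrs
                if addr.get('OS-EXT-IPS:type') == 'floating'
            ]
            if floating_ip in attached:
                return
            time.sleep(interval)
        raise lib_exc.TimeoutException(
            'Floating IP %s was not attached to server %s within %s seconds'
            % (floating_ip, server_id, timeout))

In the test, such a waiter would be called right after the rebuild and before the address-type assertions, so the check no longer races with the asynchronous floating IP re-association.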