gmann | thanks | 00:24 |
---|---|---|
opendevreview | OpenStack Proposal Bot proposed openstack/devstack master: Updated from generate-devstack-plugins-list https://review.opendev.org/c/openstack/devstack/+/885827 | 02:19 |
opendevreview | Merged openstack/tempest master: Remove setting of DEFAULT_IMAGE_NAME to non exit image https://review.opendev.org/c/openstack/tempest/+/886796 | 02:33 |
opendevreview | Merged openstack/devstack-plugin-ceph master: Skip test_rebuild_volume_backed_server test for ceph job https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/887003 | 03:02 |
opendevreview | Ghanshyam proposed openstack/devstack master: Set two different image in tempest irespective of DEFAULT_IMAGE_NAME https://review.opendev.org/c/openstack/devstack/+/886795 | 04:11 |
opendevreview | Merged openstack/devstack master: Updated from generate-devstack-plugins-list https://review.opendev.org/c/openstack/devstack/+/885827 | 04:42 |
ykarel | frickler, hi can you please revisit https://review.opendev.org/c/openstack/devstack/+/886250 | 06:40 |
frickler | ykarel: done, thx for the reminder | 07:02 |
ykarel | thx frickler | 07:02 |
*** elodilles_pto is now known as elodilles | 07:22 | |
opendevreview | Lukas Piwowarski proposed openstack/tempest master: Fix cleanup of resources for object_storage tests https://review.opendev.org/c/openstack/tempest/+/881575 | 08:21 |
*** sean-k-mooney1 is now known as sean-k-mooney | 09:53 | |
opendevreview | Alfredo Moralejo proposed openstack/stackviz master: Add stestr as runtime dependency https://review.opendev.org/c/openstack/stackviz/+/887029 | 10:00 |
*** tobias-urdin is now known as tobias-urdin-pto | 10:43 | |
slaweq | frickler gmann kopecmartin dansmith hi, I'm not sure if you have +2 power in stable branches in devstack, but if so, please check https://review.opendev.org/q/I83fc9250ae5b7c1686938a0dd25d66b40fc6c6aa - we need that in neutron-tempest-plugin to move on with some patches. Thx in advance | 12:55 |
frickler | slaweq: did you test this with patches in neutron? would be good to have a reference to that | 13:34 |
slaweq | frickler see https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/873380 and patch on which this one is proposed: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/880889/ | 14:56 |
slaweq | that last one depends on this devstack change | 14:56 |
kopecmartin | #startmeeting qa | 15:00 |
opendevmeet | Meeting started Tue Jun 27 15:00:23 2023 UTC and is due to finish in 60 minutes. The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'qa' | 15:00 |
kopecmartin | #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours | 15:00 |
kopecmartin | agenda ^^ | 15:00 |
lpiwowar | o/ | 15:00 |
kopecmartin | o/ | 15:01 |
frickler | \o | 15:01 |
kopecmartin | o/ | 15:03 |
kopecmartin | #topic Announcement and Action Item (Optional) | 15:03 |
kopecmartin | I'm gonna be on holiday starting this Thursday till July 10th .. I won't be able to be on IRC, however, I'll have access to my email | 15:04 |
kopecmartin | I don't see a lot of things happening at the moment, so I'll cancel the office hour next week, however | 15:05 |
kopecmartin | if you feel like it, one of you can host it | 15:05 |
kopecmartin | up to you :) | 15:05 |
kopecmartin | #topic Bobcat Priority Items progress | 15:06 |
kopecmartin | #link https://etherpad.opendev.org/p/qa-bobcat-priority | 15:07 |
kopecmartin | any updates? | 15:07 |
* kopecmartin checking out the etherpad | 15:07 | |
lpiwowar | Nothing from my side on the priority items:) | 15:08 |
kopecmartin | ack, if there's anything for me to review, please let me know | 15:09 |
kopecmartin | #topic Gate Status Checks | 15:09 |
kopecmartin | #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade) | 15:10 |
kopecmartin | one patch | 15:10 |
* kopecmartin taking look | 15:11 | |
lpiwowar | kopecmartin this one can be reviewed now I think. As already discussed the issue with test_list_application_credentials should not be related to the object_storage tests. | 15:12 |
lpiwowar | #link https://review.opendev.org/c/openstack/tempest/+/881575 | 15:13 |
* kopecmartin checking | 15:14 | |
kopecmartin | done | 15:15 |
kopecmartin | #topic Bare rechecks | 15:15 |
kopecmartin | #link https://etherpad.opendev.org/p/recheck-weekly-summary | 15:15 |
kopecmartin | hmm, the numbers look a bit worse :/ | 15:16 |
kopecmartin | QA is at almost 30%, 9 bare rechecks out of 31 during the last 7 days | 15:16 |
frickler | 31 rechecks in 7 days is what worries me more | 15:16 |
kopecmartin | yeah, I've noticed that over the last month we had several patches that we had to recheck more than 10 or 20 times to get them through :/ | 15:17 |
kopecmartin | I think that half of the time the CI (not just QA) faced a bug and the other half the tempest jobs were timing out | 15:18 |
kopecmartin | let's check the CI status | 15:19 |
kopecmartin | #topic Periodic jobs Status Checks | 15:19 |
kopecmartin | periodic stable full | 15:20 |
kopecmartin | #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1 | 15:20 |
kopecmartin | periodic stable slow | 15:20 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena | 15:20 |
kopecmartin | periodic extra tests | 15:20 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests | 15:20 |
kopecmartin | periodic master | 15:20 |
kopecmartin | #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic | 15:20 |
kopecmartin | tempest-slow-2023-1 .. it either fails because a test timed out or the whole job timed out | 15:21 |
kopecmartin | it's similar with tempest-all and tempest-full-py3-ipv6 | 15:21 |
kopecmartin | not sure what to do, ideally we'd need to check the logs to see why the jobs time out - probably hitting the maximum allowed time - and the only thing we can do about that is to run fewer tests | 15:23 |
kopecmartin | can't think of anything else | 15:23 |
kopecmartin | unfortunately i won't get to that before my PTO | 15:24 |
kopecmartin | moving on | 15:25 |
kopecmartin | #topic Distros check | 15:25 |
kopecmartin | cs-9 | 15:26 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0 | 15:26 |
kopecmartin | fedora | 15:26 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-fedora-latest&skip=0 | 15:26 |
kopecmartin | debian | 15:26 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&skip=0 | 15:26 |
kopecmartin | focal | 15:26 |
kopecmartin | #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-focal&skip=0 | 15:26 |
kopecmartin | rocky | 15:26 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx | 15:26 |
kopecmartin | openEuler | 15:26 |
kopecmartin | #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0 | 15:26 |
kopecmartin | fedora is about to be dropped, the rest looks more or less fine | 15:27 |
kopecmartin | #link https://review.opendev.org/c/openstack/devstack/+/885467 | 15:27 |
kopecmartin | I commented there ^^ that we proposed patches against other repos which depend on the stuff we're about to delete from devstack | 15:27 |
kopecmartin | I'd wait a week or so and then proceed with the patch | 15:28 |
frickler | can we link the other patches with the same topic? | 15:29 |
kopecmartin | sure | 15:29 |
kopecmartin | done | 15:31 |
kopecmartin | moving on | 15:32 |
kopecmartin | #topic Sub Teams highlights | 15:32 |
kopecmartin | Changes with Review-Priority == +1 | 15:32 |
kopecmartin | #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade) | 15:32 |
kopecmartin | one of them we've just discussed and the other patch is blocked on the CI | 15:32 |
kopecmartin | #topic Open Discussion | 15:33 |
kopecmartin | anything for the open discussion? | 15:33 |
frickler | maybe just mentioning that we have the bookworm repo mirrored now in opendev | 15:34 |
frickler | so if someone wants to work on devstack for bookworm, the path is free | 15:34 |
kopecmartin | great news \o/ | 15:35 |
kopecmartin | #topic Bug Triage | 15:37 |
kopecmartin | #link https://etherpad.openstack.org/p/qa-bug-triage-bobcat | 15:37 |
kopecmartin | numbers recorded ^^ (that's pretty much all I had time for, unfortunately) | 15:37 |
kopecmartin | if there isn't anything else, this is it :) | 15:38 |
kopecmartin | thanks everyone | 15:39 |
kopecmartin | see you in 2 weeks | 15:39 |
frickler | ack. enjoy your time off | 15:39 |
kopecmartin | thanks | 15:39 |
kopecmartin | #endmeeting | 15:39 |
opendevmeet | Meeting ended Tue Jun 27 15:39:52 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:39 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/qa/2023/qa.2023-06-27-15.00.html | 15:39 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/qa/2023/qa.2023-06-27-15.00.txt | 15:39 |
opendevmeet | Log: https://meetings.opendev.org/meetings/qa/2023/qa.2023-06-27-15.00.log.html | 15:39 |
lpiwowar | thanks kopecmartin! | 15:40 |
dansmith | gmann: have you seen any response from whoami-rajat about the ceph issue? | 15:58 |
frickler | kopecmartin: did you modify the topic yet? https://review.opendev.org/q/topic:drop-fedora-support looks empty to me and https://review.opendev.org/q/topic:drop-old-distros has only the devstack patches. also I'd be fine with keeping the former topic, didn't see your comment earlier | 16:08 |
opendevreview | Merged openstack/devstack stable/yoga: Fix installation of OVS/OVN from sources https://review.opendev.org/c/openstack/devstack/+/882529 | 16:23 |
gmann | frickler: kopecmartin about rechecks: the gate has been very unstable over the last week, mostly timeouts in every change | 17:21 |
gmann | the ironic and octavia gates broke due to the default image name change in the devstack-tempest job, which is reverted now | 17:21 |
gmann | and the ceph failure in the rebuild test is not yet fixed, but the test is skipped in the ceph job | 17:22 |
gmann | dansmith: no, not sure, maybe he is on PTO | 17:23 |
dansmith | okay | 17:23 |
gmann | dansmith: frickler: can you check this, fixing the configuration of two different images in tempest https://review.opendev.org/c/openstack/devstack/+/886795 | 17:51 |
opendevreview | Ghanshyam proposed openstack/devstack master: Set two different image in tempest irespective of DEFAULT_IMAGE_NAME https://review.opendev.org/c/openstack/devstack/+/886795 | 18:43 |
whoami-rajat | dansmith, gmann hey, just saw all the messages, sorry, I was occupied elsewhere for the past few days | 19:05 |
whoami-rajat | dansmith, gmann I think I have some idea of why it's failing for RBD | 19:05 |
whoami-rajat | If we look at the nova logs, the green thread switch hits a timeout, hence it fails | 19:05 |
whoami-rajat | Jun 26 01:03:11.721921 np0034441113 nova-compute[97944]: ERROR oslo_messaging.rpc.server return self.greenlet.switch() | 19:05 |
whoami-rajat | Jun 26 01:03:11.721921 np0034441113 nova-compute[97944]: ERROR oslo_messaging.rpc.server eventlet.timeout.Timeout: 20 seconds | 19:05 |
whoami-rajat | I checked the cinder logs and the reimage operation indeed takes more than 20 seconds | 19:06 |
whoami-rajat | first log time: 01:02:49.782746 | 19:06 |
dansmith | whoami-rajat: where are we waiting there? | 19:06 |
whoami-rajat | last log time: 01:03:23.661182 | 19:06 |
whoami-rajat | ~34 seconds | 19:06 |
dansmith | we've made a call to cinder so we're waiting for the network, not a call to librbd or anything, right? | 19:06 |
dansmith | and why is that any different from a non-ceph volume? | 19:07 |
whoami-rajat | dansmith, yeah right, we did make an API call to cinder and not to any low-level library like librbd ... | 19:08 |
dansmith | ah, we're not waiting long enough for the instance event, I see | 19:08 |
dansmith | so not a deadlock, just our timer for the operation, I see | 19:08 |
dansmith | didn't we scale that by volume.size? | 19:08 |
whoami-rajat | it's the greenlet timeout, I think? not sure how we set that | 19:09 |
whoami-rajat | the config option was for the API call right? | 19:09 |
dansmith | ahh | 19:09 |
dansmith | we have a time-per-GB conf setting | 19:09 |
dansmith | we just probably need to up that in our devstack config | 19:09 |
dansmith | I dunno why it matters only for ceph, but maybe that's why there's a response on the bug about non-ceph sometimes too | 19:10 |
dansmith | https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3607 | 19:10 |
dansmith | 20s per GB is the default, | 19:11 |
dansmith | so that's the timer we're hitting | 19:11 |
whoami-rajat | I think the option is fine, 20s per GB | 19:11 |
whoami-rajat | yeah | 19:11 |
dansmith | just need to set that to 60s for gate jobs | 19:11 |
dansmith | because gate workers have slow-ass IO :) | 19:12 |
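To make the timer being discussed here concrete: the manager.py line dansmith linked and the `eventlet.timeout.Timeout: 20 seconds` traceback whoami-rajat quoted both point at a wait that is scaled by volume size. The following is a minimal sketch of that behaviour, not the actual nova code; the names `REIMAGE_TIMEOUT_PER_GB`, `wait_for_reimage`, and `poll_reimaged` are made up for illustration.

```python
# Minimal sketch, assuming behaviour like the nova timer discussed above:
# the wait for cinder to finish re-imaging the volume is scaled by volume
# size, so a 1 GB volume only gets ~20 seconds by default before eventlet
# raises Timeout (via the greenlet switch seen in the quoted traceback).
import eventlet
from eventlet.timeout import Timeout

REIMAGE_TIMEOUT_PER_GB = 20  # assumed default, in seconds per GB


def wait_for_reimage(volume_size_gb, poll_reimaged):
    """Poll until the volume reports as re-imaged, or raise Timeout."""
    deadline = max(1, volume_size_gb) * REIMAGE_TIMEOUT_PER_GB
    with Timeout(deadline):
        while not poll_reimaged():
            eventlet.sleep(1)
```

Bumping the per-GB value only in the gate job config, as dansmith proposes, gives the slow CI workers more headroom without changing the upstream default.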
whoami-rajat | maybe it's just that the rbd operations perform worse for smaller volumes (because all the RBD operations are executed in new subprocesses) and scale better as we increase the size | 19:13 |
whoami-rajat | hence we don't see this in iSCSI | 19:13 |
whoami-rajat | but not sure | 19:13 |
dansmith | and on a loop | 19:13 |
dansmith | I will fix and we'll see | 19:13 |
whoami-rajat | ack | 19:16 |
whoami-rajat | found the culprit | 19:16 |
whoami-rajat | Jun 26 01:03:16.229232 np0034441113 cinder-volume[99943]: DEBUG cinder.image.image_utils [req-25e4ae38-b629-4698-97ea-d5b712cb4209 req-e4bf7950-b372-4010-9fd0-80f48fdd26c1 tempest-ServerActionsV293TestJSON-85724339 None] Image fetch details: dest /opt/stack/data/cinder/conversion/image_download_e7228a06-ab6d-4ddd-9fa5-d23219175760_xk1iwkrj, size 950.00 MB, duration 25.41 sec {{(pid=99943) fetch /opt/stack/cinder/cinder/image/image_utils.py:628}} | 19:16 |
whoami-rajat | 25 seconds for 950 MB, really slow conversion ... | 19:16 |
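For context, that fetch works out to roughly 950 MB / 25.41 s ≈ 37 MB/s of effective throughput on the worker, which lines up with dansmith's point above about slow gate IO.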
opendevreview | Dan Smith proposed openstack/devstack master: nova: Bump timeout-per-gb for BFV rebuild ops https://review.opendev.org/c/openstack/devstack/+/887110 | 19:17 |
opendevreview | Dan Smith proposed openstack/devstack-plugin-ceph master: Unskip rebuild_volume_backed_server test https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/887111 | 19:19 |
dansmith | yup | 19:19 |
dansmith | whoami-rajat: we'll see how this goes ^ | 19:19 |
dansmith | actually, I guess we need a nova patch to depend on the devstack one | 19:20 |
dansmith | here: https://review.opendev.org/c/openstack/nova/+/887112 | 19:21 |
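Cross-repo testing like this is typically wired up with a Zuul `Depends-On:` footer in the nova change's commit message, presumably pointing at the devstack change above (`Depends-On: https://review.opendev.org/c/openstack/devstack/+/887110`), so the nova job runs with the unmerged devstack fix applied.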
dansmith | whoami-rajat: good spot, I didn't even really have time to dig into it yesterday | 19:21 |
dansmith | hmm, I wonder if this rebuild is synchronous in the nova api? | 21:00 |
dansmith | shouldn't be, but it looks like tempest timed out waiting for rebuild to happen as a single op.. I'll have to look when it finishes | 21:00 |