Wednesday, 2024-01-31

01:40 *** jph7 is now known as jph
04:04 <opendevreview> Takashi Kajinami proposed openstack/hacking master: Drop tox target for python 3.5  https://review.opendev.org/c/openstack/hacking/+/907282
06:55 <opendevreview> Nobuhiro MIKI proposed openstack/devstack master: glance: Set the port used by uwsgi as a constant  https://review.opendev.org/c/openstack/devstack/+/902241
08:00 <opendevreview> Martin Kopec proposed openstack/tempest master: General doc updates  https://review.opendev.org/c/openstack/tempest/+/905790
08:37 <opendevreview> Merged openstack/hacking master: Drop tox target for python 3.5  https://review.opendev.org/c/openstack/hacking/+/907282
09:43 *** zigo_ is now known as zigo
16:53 <gmann> kopecmartin: was it meeting time today, or did I miss the alternate meeting day?
17:00 <kopecmartin> #startmeeting qa
17:00 <opendevmeet> Meeting started Wed Jan 31 17:00:33 2024 UTC and is due to finish in 60 minutes.  The chair is kopecmartin. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00 <opendevmeet> The meeting name has been set to 'qa'
17:00 <kopecmartin> should be right now, or my calendar is mistaken
17:01 <gmann> kopecmartin: I think 1 hr before? let me check
17:01 <kopecmartin> 17:00 utc
17:01 <gmann> kopecmartin: oh you are right, it is now. i had it wrong in my calendar
17:01 <gmann> sorry
17:02 <kopecmartin> np, i mess up the time zones like 2 out of 10 times too
17:03 <kopecmartin> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_next_Office_hours
17:03 <kopecmartin> let's go quickly through the agenda
17:03 <kopecmartin> #topic Announcement and Action Item (Optional)
17:03 <kopecmartin> no updates from my side
17:03 <kopecmartin> i have planning of the next PTG on my list, haven't got to that yet
17:04 <gmann> ++
17:04 <kopecmartin> #topic Caracal Priority Items progress
17:04 <kopecmartin> #link https://etherpad.opendev.org/p/qa-caracal-priority
17:04 <kopecmartin> i don't think that there are any updates there
17:05 <kopecmartin> everyone is busy with other things
17:05 <gmann> there is one related update for RBAC
17:05 <gmann> the keystone update to allow project scope tokens is merged, which I was testing for any impact on Tempest tests
17:06 <gmann> with the old defaults everything passes, but with the new defaults there are some failures for which I expect test adjustments are needed. I will work on that after mid Feb or so
17:06 <gmann> #link https://review.opendev.org/q/topic:%22keystone-srbac-test%22
17:06 <gmann> these are ^^ the testing changes
17:07 <gmann> once we modify the tempest tests and are green on the new defaults, we should enable the keystone new defaults by default in devstack and see how it works with all the other services' new defaults
17:07 <kopecmartin> ack, sounds like a plan
17:07 <gmann> that will complete our integrated testing of the new RBAC for at least 4-5 services
17:08 <gmann> currently all the new-defaults jobs run with keystone old defaults
17:08 <gmann> that is all on this from my side
17:09 <kopecmartin> some of the SRBAC issues we can see in the LPs under the "Make pre-provisioned credentials stable again" topic, right?
17:09 <kopecmartin> e.g.
17:09 <kopecmartin> #link https://bugs.launchpad.net/tempest/+bug/2020860
17:09 <gmann> yeah
17:09 <gmann> pre-provisioned is also one thing to do as a next step
17:10 <gmann> i did not have time to look into those yet
17:10 <kopecmartin> okey, yeah me neither tbh
17:11 <kopecmartin> the cycle is gonna end soon and we haven't figured out a plan for "stack.sh fails on Rocky 9 and Centos 9 because of global virtualenv"
17:11 <kopecmartin> at some point we wanted to go with this
17:11 <kopecmartin> #link https://review.opendev.org/c/openstack/keystone/+/899519
17:11 <kopecmartin> but it doesn't look like that's the plan anymore
17:13 <kopecmartin> anyway, i see it like this
17:13 <kopecmartin> no one is badly affected by this, otherwise someone would have picked it up; clearly it's not a priority .. i guess we're fine then
17:15 <kopecmartin> #topic Gate Status Checks
17:15 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B2+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
17:15 <kopecmartin> no reviews
17:15 <kopecmartin> #topic Sub Teams highlights
17:15 <kopecmartin> Changes with Review-Priority == +1
17:15 <kopecmartin> #link https://review.opendev.org/q/label:Review-Priority%253D%252B1+status:open+(project:openstack/tempest+OR+project:openstack/patrole+OR+project:openstack/devstack+OR+project:openstack/grenade)
17:15 <kopecmartin> no reviews
17:15 <kopecmartin> #topic Periodic jobs Status Checks
17:15 <kopecmartin> periodic stable full
17:15 <kopecmartin> #link https://zuul.openstack.org/builds?pipeline=periodic-stable&job_name=tempest-full-yoga&job_name=tempest-full-xena&job_name=tempest-full-zed&job_name=tempest-full-2023-1&job_name=tempest-full-2023-2
17:15 <kopecmartin> periodic stable slow
17:15 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-slow-2023-2&job_name=tempest-slow-2023-1&job_name=tempest-slow-zed&job_name=tempest-slow-yoga&job_name=tempest-slow-xena
17:15 <kopecmartin> periodic extra tests
17:16 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-2023-2-extra-tests&job_name=tempest-full-2023-1-extra-tests&job_name=tempest-full-zed-extra-tests&job_name=tempest-full-yoga-extra-tests&job_name=tempest-full-xena-extra-tests
17:16 <kopecmartin> periodic master
17:16 <kopecmartin> #link https://zuul.openstack.org/builds?project=openstack%2Ftempest&project=openstack%2Fdevstack&pipeline=periodic
17:16 <kopecmartin> all look good, except the tempest-all job
17:16 <kopecmartin> it's been timing out for a while now .. i wanted to take a look at that last week but .. ah
17:16 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-all&project=openstack/tempest
17:16 <kopecmartin> we're already at the maximum timeout limit, right?
17:17 <kopecmartin> oh, maybe not
17:17 <kopecmartin> #link https://opendev.org/openstack/tempest/src/branch/master/zuul.d/integrated-gate.yaml
17:17 <gmann> I think in tempest-all we run everything, including the scenario tests, in parallel, right
17:17 <kopecmartin> some jobs, e.g. tempest-ipv6-only, have the timeout set to 3 hours; tempest-all timed out at slightly over 2
17:18 <gmann> yeah https://opendev.org/openstack/tempest/src/branch/master/tox.ini#L78
17:18 <kopecmartin> easy fix then
17:19 <gmann> the slow test job is also 3 hr, right?
17:20 <gmann> and no stackviz report in that, did you see
17:20 <gmann> #link https://zuul.openstack.org/build/d90f94dbf12946eeaaab91c489d37832/logs
17:21 <kopecmartin> i think so, checking .. yeah, no report, the playbook got killed - 2024-01-31 06:00:33.447887 | RUN END RESULT_TIMED_OUT: [untrusted : opendev.org/openstack/tempest/playbooks/devstack-tempest.yaml@master]
17:21 <kopecmartin> after 2 hours exactly
17:21 <gmann> kopecmartin: anyways, as it runs slow tests too, I think making the timeout 3 hrs makes sense
17:21 <gmann> yeah, maybe that is why
17:22 <gmann> and because it was periodic we might have missed doing that when we increased it for the slow test job
17:22 <opendevreview> Martin Kopec proposed openstack/tempest master: Increase timeout for tempest-all job  https://review.opendev.org/c/openstack/tempest/+/907342
17:22 <gmann> perfect, thanks
17:23 <kopecmartin> moving on
17:23 <kopecmartin> #topic Distros check
17:23 <kopecmartin> cs-9
17:23 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=tempest-full-centos-9-stream&job_name=devstack-platform-centos-9-stream&skip=0
17:23 <kopecmartin> debian
17:23 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-debian-bullseye&job_name=devstack-platform-debian-bookworm&skip=0
17:23 <kopecmartin> rocky
17:23 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-rocky-blue-onyx
17:23 <kopecmartin> openEuler
17:23 <kopecmartin> #link https://zuul.openstack.org/builds?job_name=devstack-platform-openEuler-22.03-ovn-source&job_name=devstack-platform-openEuler-22.03-ovs&skip=0
17:23 <kopecmartin> jammy
17:23 <kopecmartin> #link https://zuul.opendev.org/t/openstack/builds?job_name=devstack-platform-ubuntu-jammy-ovn-source&job_name=devstack-platform-ubuntu-jammy-ovs&skip=0
17:23 <kopecmartin> these look good \o/
17:24 <kopecmartin> #topic Open Discussion
17:24 <kopecmartin> anything for the open discussion?
17:26 <kopecmartin> the doc patch should be ready now
17:26 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/905790
17:26 <gmann> +W, thanks
17:27 <kopecmartin> thanks
17:27 <kopecmartin> i'm a bit lost with this one
17:27 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/891123
17:27 <kopecmartin> I understand that nova has an API that can list the nodes for me
17:27 <kopecmartin> do we want to set that new config (src/dest host) in devstack all the time, for all the jobs?
17:28 <kopecmartin> i mean, i can't make any job set that config from tempest's side, it has to be done in devstack; the question is whether to do it for all the (multinode) jobs or not
17:28 <kopecmartin> if not, we could put the setting of that option under an if and trigger that if only from the jobs we want, that could work
17:28 <gmann> let me check the comment there
17:29 <kopecmartin> i'd like to see that test executed in the CI too, but setting the configs is a bit complex in the CI
17:30 <gmann> kopecmartin: i remember now, so in devstack we can set it easily, right?
17:30 <kopecmartin> i think so, shouldn't be too difficult
17:31 <kopecmartin> it's the only place where we know the names of the hosts
17:31 <kopecmartin> so it's either there or nowhere
17:31 <gmann> for example, we do it for the flavor #link https://github.com/openstack/devstack/blob/5c1736b78256f5da86a91c4489f43f8ba1bce224/lib/tempest#L298
17:31 <gmann> yeah
17:32 <gmann> we can call the nova GET service API, filter on compute services that are enabled, and get the hostname
17:33 <gmann> something like this #link https://github.com/openstack/tempest/blob/master/tempest/api/compute/base.py#L688
17:34 <gmann> kopecmartin: and we can set it only for the multinode jobs, based on a request var, so that we do not call the nova service APIs every time in devstack
17:34 <kopecmartin> yeah, that works
17:34 <gmann> TEMPEST_SET_SRC_DEST_HOST=False by default in devstack, and set it to true from the multinode job
17:34 <gmann> something like this
17:34 <kopecmartin> great
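The devstack-side gating discussed above could be sketched roughly as the following shell fragment. Everything here is an illustrative assumption, not the actual change under review 891123: the variable name follows gmann's suggestion, and the `pick_src_dest` helper and the way the result would land in tempest.conf are hypothetical.

```shell
#!/bin/bash
# Hypothetical sketch only: TEMPEST_SET_SRC_DEST_HOST, pick_src_dest, and
# how the hosts end up in tempest.conf are assumptions for illustration.

# Default off so single-node jobs never pay for the extra nova API calls;
# a multinode job would set TEMPEST_SET_SRC_DEST_HOST=True in its vars.
TEMPEST_SET_SRC_DEST_HOST=${TEMPEST_SET_SRC_DEST_HOST:-False}

# pick_src_dest: given newline-separated enabled compute hostnames,
# print the first two as "src dest".
pick_src_dest() {
    local src dest
    src=$(printf '%s\n' "$1" | sed -n 1p)
    dest=$(printf '%s\n' "$1" | sed -n 2p)
    echo "$src $dest"
}

if [[ "$TEMPEST_SET_SRC_DEST_HOST" == "True" ]]; then
    # In devstack the host list would come from something like:
    #   openstack compute service list --service nova-compute -f value -c Host
    hosts=$'node-a\nnode-b'
    read -r src dest <<< "$(pick_src_dest "$hosts")"
    echo "would write src=$src dest=$dest into tempest.conf (e.g. via iniset)"
fi
```

The point of the default-off flag is exactly what gmann describes: only jobs that opt in ever hit the nova service API during stack.sh.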
17:35 <kopecmartin> cool, what about this one
17:35 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/898732
17:35 <gmann> because we should have those tests run somewhere in our CI, otherwise it is easy to introduce errors in them without knowing
17:35 <kopecmartin> agree, yes
17:36 <kopecmartin> to me it seems like a good case for reusing the min_compute_nodes option
17:37 <kopecmartin> in general, i prefer solutions that aren't hardcoded .. although at this point i would agree with just len(nodes)/2 just to get it over with :D
17:37 <gmann> kopecmartin: but we are not using min_compute_nodes to boot servers, that is only used to decide whether we want to run the test or not
17:38 <gmann> the problem is here, where it can boot 100 servers if we have an env of 100 compute nodes https://review.opendev.org/c/openstack/tempest/+/898732/15/tempest/scenario/test_instances_with_cinder_volumes.py#90
17:39 <kopecmartin> the loop is kept under control by that variable on line 108
17:39 <kopecmartin> even if there are 100 hosts, if min_compute_nodes is 1 then it will create just one server
17:40 <gmann> so if min_compute_nodes is 2, which is the case for multinode jobs, the number of hosts selected will be 2?
17:41 <kopecmartin> yes, per hosts[:min_compute_nodes]
17:41 <kopecmartin> or we can create a new opt, like max_compute_nodes
17:41 <kopecmartin> and let it iterate only until we reach the limit, not over all the hosts found
17:43 <gmann> so basically users need to increase min_compute_nodes to the number of nodes they want to run this test on. for example, if they have 100 nodes and want to test 20, they can set min_compute_nodes to 20 and this test will create servers/volumes on those 20 nodes
17:43 <gmann> right?
17:44 <kopecmartin> yes
17:45 <gmann> ok, i think that should work. let me review it after the meeting, I think i missed that point
17:45 <gmann> thanks for the clarification
17:45 <kopecmartin> i'm judging based on the for cycle there, e.g.:
17:45 <kopecmartin> #link https://paste.openstack.org/show/bfAyQIkKDdr8lWwm4pGJ/
17:46 <kopecmartin> although now i'm not sure where the logic is that creates the servers on all, a.k.a. different, nodes .. i don't see how openstack knows on which node to create the resource
17:48 <gmann> I think specifying the node is not needed, as it can boot the servers on any of the available nodes, which is ok
17:49 <kopecmartin> yeah, i guess ... i need to revisit the patch, i forgot some details .. it needs a recheck anyway
17:49 <gmann> anyways I will review it today and add a comment if any update is needed
17:49 <kopecmartin> good, at least we agree on the approach, i'll check the one thing and comment there
17:49 <kopecmartin> perfect, thank you
17:49 <gmann> ++
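The capping behaviour agreed on above (Python's `hosts[:min_compute_nodes]` slice in the test) can be sketched in shell; the function and host names here are purely illustrative, not code from the review.

```shell
#!/bin/bash
# cap_hosts: take at most $1 hosts from the remaining arguments, mirroring
# the hosts[:min_compute_nodes] slice discussed above. Illustrative only.
cap_hosts() {
    local limit=$1
    shift
    printf '%s\n' "$@" | head -n "$limit"
}

# Even if discovery returns many compute hosts, only min_compute_nodes
# of them are used, so the test never boots a server per host in a big env:
cap_hosts 2 node-1 node-2 node-3 node-4   # prints node-1 and node-2
```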
17:50 <kopecmartin> then we have there this beauty :D (the last one, I promise)
17:50 <kopecmartin> #link https://review.opendev.org/c/openstack/tempest/+/879923
17:50 <kopecmartin> i think that the name of the --no-prefix opt is the problem
17:50 <kopecmartin> it can conflict with --prefix (not technically, but based on the name)
17:50 <gmann> I think this needs more discussion on the design; is it urgent, or can it wait till the PTG?
17:51 <kopecmartin> well, that's 2 months .. i hope i'll be doing different things by then
17:51 <gmann> if it's urgent, then how about we have a call on it to finalize, as doing it the gerrit way is becoming a little difficult
17:51 <kopecmartin> sure, that would work
17:51 <gmann> sure, maybe we can have the call after mid Feb (I need to finish a few things before that)?
17:52 <kopecmartin> ok, sounds good
17:52 <kopecmartin> in the meantime i'll try to polish the code a bit more
17:52 <gmann> cool, I will send an invite for that. and does the same time as our meeting work fine? Wed 17 UTC?
17:52 <kopecmartin> yes
17:52 <gmann> Feb 21, 17 UTC
17:53 <kopecmartin> ack
17:53 <gmann> kopecmartin: i will say wait for our discussion before updating anything, so that we can save your time in case we agree on a different approach
17:53 <opendevreview> Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver  https://review.opendev.org/c/openstack/devstack/+/907110
17:54 <kopecmartin> okey then
17:54 <gmann> cool
17:54 <kopecmartin> #topic Bug Triage
17:54 <kopecmartin> #link https://etherpad.openstack.org/p/qa-bug-triage-caracal
17:54 <kopecmartin> there's been a small increase in the bugs in tempest .. but i haven't checked the details yet
17:55 <gmann> kopecmartin: just one thing on the call, should I send the invite to your redhat email or gmail?
17:55 <kopecmartin> anyway, that's all from me
17:55 <kopecmartin> mkopec@redhat.com
17:56 <gmann> sent
17:56 <kopecmartin> thank you
17:56 <kopecmartin> although the time is a bit weird, 2am on Thursday for me
17:56 <gmann> updated the time in UTC, i selected the wrong one initially :)
17:56 <kopecmartin> .. yeah :D
17:56 <gmann> :)
17:57 <kopecmartin> yep, much better now
17:57 <kopecmartin> i'm not that young anymore, i'm usually asleep at 2am :D
17:58 <kopecmartin> anyway, that's a wrap
17:58 <kopecmartin> thank you for all the feedback
17:58 <gmann> hehe, I can still do it when I need to finish things in the quiet time when my son sleeps :P
17:58 <kopecmartin> yeah, the quietness is great at night
17:58 <gmann> otherwise he wants to code with me and doesn't let me work :)
17:59 <gmann> nothing else from me, thanks kopecmartin.
17:59 <kopecmartin> awesome, he should really start with programming as soon as possible
17:59 <gmann> :)
17:59 <kopecmartin> perfect mind exercise
17:59 <kopecmartin> ..
17:59 <kopecmartin> #endmeeting
17:59 <opendevmeet> Meeting ended Wed Jan 31 17:59:57 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
17:59 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/qa/2024/qa.2024-01-31-17.00.html
17:59 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/qa/2024/qa.2024-01-31-17.00.txt
17:59 <opendevmeet> Log:            https://meetings.opendev.org/meetings/qa/2024/qa.2024-01-31-17.00.log.html
18:25 <abhishekk> Does anybody know whether post-run will be effective on a grenade job for executing some script after the setup completes?
18:25 <abhishekk> If yes, can we restrict it to just the old setup and not run it after the new setup?
18:26 <clarkb> abhishekk: are you asking about zuul post-run?
18:26 <abhishekk> yes
18:27 <clarkb> zuul's idea for post-run is that you use it to perform actions after your testing is completed, like collecting logs, copying artifacts, etc. Basically not stuff that is being tested, but stuff necessary to record and report
18:28 <abhishekk> I want to make some entries in the old db and, after the upgrade, verify they are there in the new db
18:28 <clarkb> you should do that as part of the actual run
18:28 <clarkb> as that is part of your testing
18:28 <abhishekk> the thing is I am changing the db in between
18:29 <abhishekk> pre-upgrade I am using sqlite and after the upgrade I am using mysql
18:29 <abhishekk> so I need to verify data from the sqlite db is transferred to mysql
18:29 <clarkb> and I'm guessing you don't want to override the parent job's run: playbook?
18:30 <abhishekk> https://review.opendev.org/c/openstack/glance/+/901649
18:30 <clarkb> my suggestion would still be to add your checks to the run playbook, because validating database content is useful generally
18:30 <abhishekk> I want to write something for a TODO in the commit
18:31 <clarkb> right, I think that will involve changes within grenade's test runs, not the post-run collection. I believe there are hooks in grenade to do validation like that
18:32 <abhishekk> could you point to one if possible
18:33 <clarkb> well, the grenade readme says this: allow projects to create resources that should survive upgrade. verify resources are still working during shutdown. verify resources are still working after upgrade
18:33 <abhishekk> are you talking about this?
18:33 <abhishekk> https://github.com/openstack/grenade/tree/master/projects/40_glance
18:34 <clarkb> no, https://opendev.org/openstack/grenade/src/branch/master/README.rst is where the text comes from
18:34 <clarkb> looks like you should create a resources.sh script under 40_glance to drive this
18:34 <clarkb> nova does so
18:34 <clarkb> there is a create and a verify hook
18:35 <clarkb> so you would create whatever you need in order to side effect the database. Then verify would probably use the apis to read it back again and confirm the database still contains that info
18:35 <abhishekk> ack, thanks
18:36 <abhishekk> will have a look
18:37 <clarkb> in general zuul expects you to run the setup necessary to perform testing in pre-run (so git repo setup happens here as well as network configuration etc), then run does your actual testing, then post-run does reporting and collecting and shouldn't be used for testing
18:42 <clarkb> abhishekk: https://opendev.org/openstack/grenade/src/branch/master/PLUGINS.rst this seems to be the documentation
18:42 <clarkb> and nova has examples
18:44 <abhishekk> ack, looking at it, got some idea from resources.sh; need to understand more about when the destroy phase will execute (in the old or new setup)
18:44 <clarkb> that PLUGINS.rst doc has the flow documented
18:44 <clarkb> looks like destroy is at the very end
18:44 <abhishekk> ++
18:45 <abhishekk> makes sense
18:46 <abhishekk> Thanks again, looks easy now
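clarkb's pointers above (a resources.sh under projects/40_glance driven by grenade's phase hooks) might be sketched like this. The phase names follow what grenade's PLUGINS.rst and the nova example describe, but the function bodies and the exact arguments grenade passes are assumptions to check against the real documentation, not the eventual glance script.

```shell
#!/bin/bash
# Hypothetical sketch of a grenade projects/40_glance/resources.sh.
# Phase names (create/verify/destroy) follow grenade's documented flow;
# the bodies and the verify side argument are illustrative assumptions.

create() {
    # Runs against the old setup: seed data that must survive the upgrade
    # (e.g. entries in the old sqlite db, per the discussion above).
    echo "create: seeding data in the old (sqlite) database"
}

verify() {
    # Called both before shutdown and after the upgrade, so the same check
    # proves the data was carried over into the new (mysql) database.
    echo "verify ($1): data is still readable via the API"
}

destroy() {
    # Runs at the very end, against the upgraded setup.
    echo "destroy: cleaning up test resources"
}

# Grenade drives the script with the phase as its first argument, e.g.
#   resources.sh create / resources.sh verify pre-upgrade / resources.sh destroy
case $1 in
    create)  create ;;
    verify)  verify "$2" ;;
    destroy) destroy ;;
esac
```

Because the same `verify` runs on both sides of the upgrade, a passing post-upgrade verify is exactly the sqlite-to-mysql migration check abhishekk wants.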
19:45 <opendevreview> Merged openstack/tempest master: General doc updates  https://review.opendev.org/c/openstack/tempest/+/905790
20:15 <gmann> kopecmartin: left some comments on this; the test is not running on any jobs - https://review.opendev.org/c/openstack/tempest/+/898732
20:26 <opendevreview> Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver  https://review.opendev.org/c/openstack/devstack/+/907110
20:45 <opendevreview> Ashley Rodriguez proposed openstack/devstack-plugin-ceph master: Add ingress deamon  https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/900348
21:37 <opendevreview> Abhishek Kekane proposed openstack/devstack master: Make `centralized_db` driver as default cache driver  https://review.opendev.org/c/openstack/devstack/+/907110

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!