14:00:30 <masayukig> #startmeeting qa
14:00:31 <openstack> Meeting started Tue Jan 26 14:00:30 2021 UTC and is due to finish in 60 minutes.  The chair is masayukig. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 <openstack> The meeting name has been set to 'qa'
14:01:03 <gmann> elod: maybe we need to check with the train fix only, and also check the other EM branches if the fix is already there
14:01:30 <masayukig> Hi, who all is here today?
14:01:31 <masayukig> sorry for interrupting, but it's time anyway
14:01:46 <kopecmartin> masayukig: hi
14:01:48 <gmann> yeah
14:01:50 <gmann> o/
14:01:56 <paras333> o/
14:02:04 <masayukig> ok, let's start
14:02:18 <masayukig> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
14:02:23 <masayukig> Here's the agenda
14:02:44 <masayukig> #topic Announcement and Action Item (Optional)
14:03:03 <masayukig> I have no announcements, and I don't see any items on the agenda.
14:03:07 <masayukig> So, let's skip
14:03:19 <masayukig> #topic Wallaby Priority Items progress
14:03:29 <masayukig> #link https://etherpad.opendev.org/p/qa-wallaby-priority
14:03:49 <masayukig> Any updates on the priority items?
14:04:07 <gmann> I have some progress
14:04:13 <masayukig> cool
14:04:52 <paras333> I need some feedback on the way I am doing run_validation. Is this the best way? #link https://review.opendev.org/c/openstack/tempest/+/763925 If you guys can take a look, that would be great
14:04:55 <gmann> we have all the patches up for moving the tempest-horizon tests
14:04:59 <gmann> #link https://review.opendev.org/q/topic:%22merge-horizon-test%22+(status:open%20OR%20status:merged)
14:05:19 <gmann> the tempest one is merged, so the dashboard tests run as part of the tempest-full* jobs
14:06:13 <masayukig> paras333: thanks. ack, I hope I'll have time to look at it this week
14:06:23 <paras333> masayukig: yeah no rush
14:06:25 <gmann> on RBAC testing, we have all the patches approved now
14:07:14 <gmann> #link https://review.opendev.org/q/topic:%22admin-auth-system-scope%22+(status:open%20OR%20status:merged)
14:07:34 <gmann> next we need to start moving tests to the new policies for compute and keystone
14:07:39 <paras333> gmann: masayukig: for creating the gate job, where would be the place to start? I have never created gate jobs before, so I might need your help for the guest image work
14:08:01 <masayukig> gmann: great. I'll check the tempest-horizon ones.
14:08:14 <paras333> I am thinking of using a Windows image as the guest image and running sanity tests on it
14:08:29 <gmann> paras333: you can create it in https://github.com/openstack/tempest/blob/master/zuul.d/tempest-specific.yaml
14:08:49 <gmann> as they will be tempest specific
14:09:01 <paras333> gmann: ok I will take a look
14:09:03 <gmann> paras333: or are you planning to run those in other projects' gates too?
14:09:13 <paras333> gmann: no just on tempest for now
14:09:23 <gmann> ok then tempest-specific.yaml is fine
14:09:27 <paras333> if we see value we can add this to others as well
14:09:48 <gmann> sure, at that time we can move it to the integrated file
14:09:55 <paras333> gmann: ack
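As a rough illustration of what gmann is pointing at: a Tempest-specific job in zuul.d/tempest-specific.yaml is just another Zuul job definition. The sketch below is hypothetical — the job name, image URL, and test regex are placeholders, not anything agreed in the meeting.

```yaml
# Hypothetical guest-image sanity job for zuul.d/tempest-specific.yaml.
# The name, image URL, and regex are illustrative placeholders.
- job:
    name: tempest-guest-image-sanity
    parent: devstack-tempest
    description: |
      Boot an alternate guest image and run a small sanity subset.
    vars:
      devstack_localrc:
        # Register only the alternate image in Glance (placeholder URL).
        DOWNLOAD_DEFAULT_IMAGES: false
        IMAGE_URLS: https://example.com/images/guest-sanity.qcow2
      # Limit the run to a basic boot/ssh scenario as the sanity check.
      tempest_test_regex: tempest.scenario.test_server_basic_ops
```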
14:10:45 <amotoki> gmann: I didn't see errors in stable/ussuri as we specify USE_PYTHON3=True. stable/ussuri should fail if USE_PYTHON3=False, though I am not sure we need to cover such a case. It looks like we're in an office hour, so I won't comment more now.
14:11:14 <gmann> amotoki: we will have that topic next, so your timing is perfect :)
14:11:46 <masayukig> ok, let's move on to the next topic if we have nothing else on this topic.
14:12:12 <masayukig> #topic OpenStack Events Updates and Planning
14:12:16 <masayukig> let's skip this topic
14:12:26 <masayukig> #topic Gate Status Checks
14:12:32 <masayukig> this is the topic
14:12:45 <gmann> yeah
14:13:04 <gmann> amotoki: i do not think we need to support USE_PYTHON3=False for ussuri onwards
14:13:50 <gmann> and we have removed many py2 hacks in ussuri onwards, so supporting that case seems like reverting the plan of going python3-only
14:13:51 <amotoki> gmann: I just proposed a fix for branches where python3_enabled() is conditional.
14:14:15 <gmann> maybe we should remove that in ussuri.
14:14:16 <masayukig> ++
14:14:20 <gmann> yoctozepto: frickler ^^?
14:14:26 <amotoki> gmann: USE_PYTHON3 was dropped in victoria...
14:14:41 <amotoki> i am okay with either.
14:15:21 <gmann> yeah, as we were transitioning to python3-only in ussuri, backporting the 'USE_PYTHON3 drop' to ussuri makes sense now
14:16:09 <elod> i think the backport is not necessary
14:16:14 <elod> simply ignore that
14:16:15 <gmann> I agree with supporting the fix in train. Other EM branches are also fine if the fix is there
14:16:24 <elod> i mean the ussuri patch
14:16:30 <gmann> elod: yeah, that also works, but sometimes it comes up again
14:16:54 <elod> yes, for train and older the patch is OK
14:17:18 <gmann> amotoki: elod thanks for that, I will check those today
14:17:21 <elod> gmann: py2 is not even used in ussuri, so it's not necessary, i think
14:17:24 <amotoki> so should I drop my fix for ussuri?
14:17:44 <elod> gmann: thanks too, and thanks amotoki for the patch, too!
14:17:52 <gmann> amotoki: yeah, we can drop it for ussuri, and if anyone needs py2 on ussuri then we say 'not supported'
14:18:01 <elod> ++
14:18:02 <gmann> I mean drop the fix
14:18:21 <amotoki> okay, if so, we need to update the commit messages of the train fix and other backports (i.e. to drop the cherry-picked line)
14:18:37 <elod> amotoki: I can do that if you want
14:18:38 <gmann> ah yeah.
14:18:41 <amotoki> is it worth rerunning CIs for all fixes in train and older?
14:19:14 <elod> good question :) maybe not? :)
14:19:28 <amotoki> I think it is tricky if the corresponding cherry-picked commit is abandoned...
14:19:41 <gmann> ok, I am fine with that, and anyone can find out by checking the ussuri patch status
14:20:25 <gmann> amotoki: anything is ok for me; fixing the commit message is perfect, but it's up to you
14:21:43 <amotoki> gmann: okay, so I will keep the train fix as-is. and will abandon the ussuri fix.
14:22:15 <gmann> ok
14:22:24 <elod> one minor note: currently the train patch requires the stein patch to merge first (grenade)
14:22:38 <elod> the others are not dependent on each other
14:22:42 <gmann> elod: yeah, we need to merge them in reverse order
14:22:52 <elod> ++
14:23:20 <masayukig> thanks
14:23:51 <amotoki> the grenade job is non-voting. perhaps that allows us to land the fixes in any order.
14:24:19 <amotoki> anyway it is up to the qa team
14:24:37 <elod> amotoki: except @ train :) strange, but that is how it is now :)
14:24:39 <gmann> let's see the gate results. I think we made it voting in stein, or haven't made it n-v yet
14:24:51 <gmann> yeah and in train it is voting
14:25:22 <gmann> tempest-full-train-py3 in periodic fails with the same get-pip error - https://zuul.openstack.org/build/dac028e6585b410c8bc108390b614f5a
14:25:30 <gmann> this is python3 job
14:25:40 <elod> I mean, strangely, in ussuri grenade is non-voting
14:25:55 <gmann> elod: :) we forgot to undo that, maybe
14:26:03 <amotoki> ah. I did not notice elod added Depends-On to the train patch.
14:26:14 <gmann> amotoki: elod: tempest-full-train-py3 in periodic fails with the same get-pip error - https://zuul.openstack.org/build/dac028e6585b410c8bc108390b614f5a ?
14:26:33 <elod> gmann: yes, the same error
14:26:46 <amotoki> gmann: yes the same error
14:27:07 <gmann> but it is the py3 path
14:27:38 <amotoki> in train, we always install pip for py27
14:27:46 <amotoki> and the error happens here.
14:27:55 <elod> the latest get-pip.py does not work with py35 either
14:28:02 <gmann> ah right
14:28:15 <gmann> py35 is only until stable/rocky afaik
14:28:39 <gmann> ok. let's get those fixes in and we will have the stable/train gate back
14:28:40 <elod> tempest-full-train-py3 fails in rocky and queens
14:29:04 <gmann> elod: yeah, they use py35, and from stein onwards it is py36 due to the bionic node migration
14:29:14 <elod> exactly
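For context on the fix being discussed: the latest get-pip.py dropped support for older interpreters, while pip publishes frozen per-version copies under https://bootstrap.pypa.io/pip/. A minimal sketch of the pinning approach — the helper name and exact logic are illustrative, not the actual devstack patch:

```bash
#!/bin/bash
# Illustrative only: pick a get-pip.py that still supports the
# branch's interpreter instead of always fetching the latest script.
bootstrap_pip() {
    local interp=$1   # e.g. python2.7, python3.5, python3.6
    local ver
    ver=$("$interp" -c 'import sys; print("%d.%d" % sys.version_info[:2])')
    local url="https://bootstrap.pypa.io/get-pip.py"
    case "$ver" in
        # pip keeps frozen copies of get-pip.py for EOL interpreters
        2.7|3.5|3.6) url="https://bootstrap.pypa.io/pip/${ver}/get-pip.py" ;;
    esac
    curl -sSfL "$url" -o /tmp/get-pip.py
    sudo "$interp" /tmp/get-pip.py
}
```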
14:29:57 <masayukig> ok, anything else to discuss for this topic
14:29:59 <masayukig> ?
14:30:25 <gmann> nothing from me, maybe we can move to the next
14:30:54 <masayukig> ok
14:31:06 <masayukig> #topic Periodic jobs Status Checks
14:31:22 <masayukig> This is a similar topic, though
14:31:23 <gmann> we discussed it for train, which is failing now
14:31:27 <masayukig> yeah
14:31:35 <gmann> for ussuri and victoria it is green
14:32:14 <masayukig> tempest-all is failing in periodic on master
14:32:24 <gmann> yeah that is still broken
14:32:28 <gmann> same issue
14:32:49 <paras333> gmann: sorry if I missed it, but is this job fixed: https://zuul.opendev.org/t/openstack/build/fdb70e77db1e43a8b793d4058bb8b8b8 ?
14:33:01 <paras333> low-constraints one
14:33:21 <gmann> paras333: no, we are still discussing in the TC and on the ML whether to drop the l-c jobs or fix them
14:33:30 <paras333> gmann: ack thanks
14:33:39 <masayukig> gmann: ok, we need to fix the issue, anyway
14:33:42 <gmann> paras333: we can make it n-v until then and unblock the gate
14:33:57 <gmann> masayukig: yeah
14:34:04 <masayukig> let's move on to the next topic if nothing
14:34:05 <paras333> gmann: yeah, that makes sense, I will add the patch for hacking then
14:34:05 <gmann> someone needs to debug it
14:34:12 <gmann> paras333: cool.
14:34:29 <masayukig> yeah
14:34:53 <masayukig> paras333: ++
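For reference, making a job non-voting in a project's Zuul config is a small change; paras333's hacking patch would look something along these lines (the exact file and job lists in openstack/hacking may differ):

```yaml
# Sketch: mark the lower-constraints job non-voting in check.
# A non-voting job must also be dropped from gate, since gate
# jobs are always voting.
- project:
    check:
      jobs:
        - openstack-tox-lower-constraints:
            voting: false
    gate:
      jobs:
        # openstack-tox-lower-constraints removed from this list;
        # other gate jobs stay as they are.
        - openstack-tox-pep8
```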
14:35:09 <masayukig> ok, let's move on to the next topic
14:35:10 <masayukig> #topic Sub Teams highlights (Sub Teams means individual projects under QA program)
14:35:47 <masayukig> #link https://review.openstack.org/#/q/project:openstack/tempest+status:open
14:35:52 <masayukig> #link https://review.openstack.org/#/q/project:openstack/patrole+status:open
14:35:52 <masayukig> #link https://review.openstack.org/#/q/project:openstack/devstack+status:open
14:35:52 <masayukig> #link https://review.openstack.org/#/q/project:openstack/grenade+status:open
14:35:52 <masayukig> #link https://review.opendev.org/#/q/project:openstack/hacking+status:open
14:35:59 <gmann> we still have a slow review rate for tempest
14:36:26 <kopecmartin> yeah, there are a lot of reviews missing a second core vote :/
14:36:50 <gmann> yeah
14:37:05 <gmann> one way is to change the policy to single core approval
14:37:44 <gmann> also it is not only the 2nd core vote that is missing; many of them are not yet reviewed at all
14:38:24 <masayukig> yeah..
14:39:24 <gmann> I feel it is time to go with single core approval, and if we feel we need another core to have a look, we can always do that
14:39:26 <openstackgerrit> Artom Lifshitz proposed openstack/whitebox-tempest-plugin master: WIP: Different approach to TripleO job  https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/762866
14:39:39 <masayukig> yeah, but I think we need to discuss changing the policy to single core approval, at least
14:39:54 <masayukig> on IRC and/or ML
14:40:01 <gmann> yeah, here we are doing :)
14:40:16 <masayukig> or a patch for the policy document
14:40:18 <masayukig> yeah :)
14:40:20 <gmann> I think we can discuss in IRC
14:40:31 <gmann> as it is up to the tempest team
14:40:43 <gmann> if we decide, then I can push a patch
14:40:49 <gmann> if all are ok with that
14:40:59 <paras333> gmann: masayukig: If I understand correctly, we do need at least one other reviewer as well, right, even if they are not a core reviewer?
14:41:11 <paras333> to merge the patch?
14:41:26 <gmann> paras333: yeah, at least one core review can merge it
14:41:34 <gmann> paras333: currently it needs two cores
14:41:41 <masayukig> yes
14:42:21 <paras333> correct
14:42:37 <masayukig> we need two +2s to approve a patch, basically
14:43:16 <masayukig> There are some exceptions, such as very urgent or very easy patches, etc., though
14:43:17 <paras333> yeah, I know that. I am just thinking: do we need one more reviewer to merge it, even if they can just +1?
14:43:24 <amotoki> single core + domain expert (including non-core) is an option if you don't have enough core review bandwidth. I am not sure it works for your case
14:43:39 <paras333> so one +1 and one +2 basically
14:43:58 <gmann> paras333: yeah, with a +1 we will still need at least one +2
14:44:32 <paras333> correct yeah I am totally onboard with this
14:44:39 <gmann> amotoki: yeah, that is the issue; the tempest team has been facing a review bandwidth problem (two cores to merge) for a year or so
14:44:40 <masayukig> amotoki: yeah, that's a good idea.
14:45:07 <paras333> as amotoki suggested, this could be a great idea
14:45:17 <gmann> amotoki: masayukig: we can always ask for a domain expert review from the tempest team or other teams anyway
14:45:21 <paras333> to have one domain expert reviewing as well
14:45:31 <gmann> like sometimes I ask the neutron team to review neutron test changes
14:45:41 <masayukig> I haven't had the bandwidth recently.. :(
14:45:43 <masayukig> gmann: yeah
14:46:21 <gmann> mostly we get stuck with kopecmartin and myself waiting for the other +2 on our patches
14:47:13 <masayukig> sorry about that.. don't blame me :p
14:47:41 <gmann> :) no, it's the complete team I think, not just you. you are already doing more than your current bandwidth
14:48:07 <gmann> so I'll propose the patch, and then we can review it to see if anyone opposes that
14:48:23 <gmann> and if we merge that then i can push it on ML as notification
14:48:27 <gmann> is it fine?
14:48:40 <kopecmartin> sure
14:48:42 <amotoki> another idea is to apply an exception that a patch from a core reviewer can be approved by a single +2.
14:48:47 <masayukig> gmann: thanks :) yeah, that's good for me
14:49:55 <gmann> amotoki: I see, but I'm not sure if that sounds good and fair for non-cores? does any other project have such a policy?
14:50:39 <gmann> that can solve things to some extent for core patches, but non-core patches will still face the issue
14:50:52 <amotoki> gmann: that's just an idea. horizon stable has such a policy: a backport from a stable core can be approved by a single +2, but it is just about stable backports.
14:51:08 <gmann> ohk
14:51:33 <masayukig> interesting
14:51:38 <gmann> we still have a few exceptions for gate fixes and trivial changes
14:51:53 <masayukig> yeah #link https://docs.openstack.org/tempest/latest/REVIEWING.html#when-to-approve
14:52:45 <dansmith> gmann: we kinda have that policy for stable in nova
14:52:51 <amotoki> in the case of a backport, a stable core is expected to be familiar with the stable policy, so such a rule can be reasonable, but that is not usually true for master changes.
14:52:57 <dansmith> right ^
14:53:10 <dansmith> it's a little different than a review on master though because the patch isn't net-new
14:53:34 <dansmith> but I think that it probably makes sense to do what gmann is suggesting, especially for smaller things that are unlikely to generate problems and can be validated by a SME
14:53:34 <gmann> yeah, for backports it might be easier, as the code changes were already reviewed by two cores on master
14:53:48 <gmann> yeah
14:53:57 <masayukig> #topic Open Discussion
14:54:04 <masayukig> we're in this topic
14:54:13 <dansmith> can I bring up devstack things here?
14:54:33 <masayukig> dansmith: yes
14:54:44 <dansmith> so I have been working on this: https://review.opendev.org/c/openstack/devstack/+/771505
14:55:01 <dansmith> it makes devstack setup 25% faster in my local env
14:55:24 <dansmith> it's really not very complicated, and it's easy to disable so that it becomes exactly like the current code in terms of behavior
14:55:51 <dansmith> there is a lot left to optimize to get it even faster, I think; I've only really parallelized nova and some of the project setup
14:56:20 <masayukig> 25% faster sounds great!
14:56:21 <dansmith> there's a lot of serialized work we can squeeze out of the devstack process, which (a) could help the gate and (b) makes a quick devstack run locally much more palatable
14:56:59 <dansmith> I would love to keep working on it, but I don't want this one patch to grow too large, and I don't want to put in the effort if people here aren't likely to accept it
14:57:20 <gmann> dansmith: Thanks, I will be checking it sometime this week. so your change enables it by default, but with a way to disable it too?
14:57:25 <dansmith> so, I'm just hoping to get some reviews on it (I know, right after the discussion about review bandwidth :)
14:57:33 <amodi> gmann: thanks for the reply re: devstack based barbican job
14:57:38 <dansmith> https://review.opendev.org/c/openstack/devstack/+/771505/6/inc/async
14:57:56 <gmann> dansmith:  i see
14:57:57 <dansmith> it's off by default, but I'd definitely like to enable it on some jobs and then flip to default at some point
14:58:22 <gmann> dansmith: do you have some job with enable?
14:58:44 <dansmith> gmann: it ran enabled by default before I added the toggle in the very last job
14:59:15 <dansmith> gmann: I haven't seen it fail the devstack job yet, and the tempest jobs mostly have worked, but some failed for other unrelated reasons
14:59:37 <dansmith> so there should be several logs
14:59:44 <dansmith> but I can push a patch on top to change the devstack job config to async if you want just to make it obvious
15:00:01 <gmann> dansmith: or maybe a new devstack-platform job can be added in the same patch to see the enabled behavior?
15:00:32 <gmann> devstack-platform-async
15:00:40 <dansmith> sure, although you can't really compare the runtimes of two jobs to tell the improvement; you have to see it over many runs
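A minimal sketch of the pattern dansmith's review introduces, assuming the async_run/async_wait helper names from the inc/async file in that change — check the review itself for the exact API and toggle variable:

```bash
# Illustrative use of the proposed inc/async helpers from
# https://review.opendev.org/c/openstack/devstack/+/771505.
# Function names are taken from that review and may differ in
# what finally merges.
source $TOP_DIR/inc/async

# Start independent setup steps in the background; with the feature
# disabled, async_run simply executes the command inline, so the
# behavior matches today's serial devstack.
async_run glance-install install_glance
async_run cinder-install install_cinder

# ... do other serialized work here ...

# Block only when the results are actually needed.
async_wait glance-install
async_wait cinder-install
```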
15:00:44 <masayukig> sorry, we're running out of time for the office hour. So, I'm closing it.
15:01:00 <openstackgerrit> Paras Babbar proposed openstack/hacking master: Updating lower-constarints job as non voting  https://review.opendev.org/c/openstack/hacking/+/772556
15:01:01 <gmann> yeah, we can close the office hour and continue
15:01:02 <dansmith> I was hoping to show those numbers, but the same worker will yield vastly different runtimes on the same job from minute to minute :)
15:01:08 <masayukig> #endmeeting