15:01:14 <mnaser> #startmeeting tc
15:01:15 <openstack> Meeting started Thu Feb  4 15:01:14 2021 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:18 <openstack> The meeting name has been set to 'tc'
15:01:24 <mnaser> #topic rollcall
15:01:27 <mnaser> o/
15:01:29 <gmann> o/
15:01:34 <dansmith> o/
15:01:48 <jungleboyj> o/
15:01:55 <belmoreira> o/
15:02:47 <mnaser> #topic Follow up on past action items
15:03:00 <mnaser> #link http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-28-15.00.html
15:03:24 <mnaser> diablo_rojo complete retirement of karbor
15:03:42 * mnaser looks up review
15:03:53 <mnaser> #link https://review.opendev.org/c/openstack/governance/+/767056
15:04:01 <mnaser> #link https://review.opendev.org/c/openstack/project-config/+/767057
15:04:03 <mnaser> that seems to have landed
15:04:34 <mnaser> any other tc members want to vote on the governance repo change and we can land that change
15:04:36 <gmann> yeah, a few usages are the only thing left #link https://review.opendev.org/q/topic:%2522retire-karbor%2522+status:open
15:04:44 <gmann> especially system-config and openstack-map
15:05:21 <mnaser> osf/openstack-map i think ttx can help with that
15:05:49 <mnaser> and opendev/system-config maybe we can ask fungi or clarkb for help with that one
15:05:53 <gmann> recheck them to have zuul green
15:06:11 <gmann> ah we have a rebase issue https://review.opendev.org/c/osf/openstack-map/+/767236/2/deployment_tools.yaml
15:06:14 <mnaser> tc-members: please review https://review.opendev.org/c/openstack/governance/+/767056 so we can land this
15:06:16 <gmann> i can fix that quickly
15:06:28 <fungi> i'll take a look at the opendev patches for that now
15:06:31 <mnaser> we can remove that action item for the upcoming meetings, i think it's just an open review topic for now
15:06:37 <mnaser> thank you fungi !
15:06:41 <gmann> +1
15:07:21 <mnaser> next up was tc-members review and leave comments on the etherpad for SIG updates <= i think we can leave that for the discussion item later
15:07:37 <mnaser> mnaser continue to sync with tripleo team to reduce usage < also can be left for the discussion point later
15:07:58 <mnaser> qa team to land parallelizing changes for devstack < we can probably leave this also for the gate performance discussion point if y'all are okay with it?
15:08:22 <jungleboyj> ++
15:08:38 <mnaser> ok cool
15:08:40 <mnaser> #topic Audit SIG list and chairs (diablo_rojo)
15:09:10 <mnaser> tc-members: https://etherpad.opendev.org/p/2021-SIG-Updates -- please review this to help diablo_rojo and ricolin go over auditing those
15:09:27 <mnaser> as well as the attached reviews
15:09:39 <mnaser> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019994.html
15:10:38 <mnaser> ok, time to get into the heavier subjects :)
15:10:45 <mnaser> #topic Gate performance and heavy job configs (dansmith)
15:10:55 <dansmith> yo
15:11:23 <mnaser> on my side, I reached out to the TripleO team and they most definitely are working on improving/reducing the footprint -- it was brought up in their meeting a few days ago
15:11:31 <dansmith> so, I've been working (with gracious help from fungi and clarkb) on trying to figure out what metrics we can generate and use to try to quantify the relative weights of jobs in each project
15:11:37 <dansmith> and the results are pretty interesting
15:12:03 <dansmith> the two worst offenders, by far, are tripleo and neutron, and I too have had a good response from both teams when reaching out
15:12:16 <mnaser> #link http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-02-02-14.00.log.html#l-37
15:12:21 <dansmith> I've got a hacky tool to do some measurement and I'm planning an email to the list after this meeting to share some of the important numbers
15:12:43 <dansmith> I hope that some awareness will yield some initial good-faith trimming of jobs
15:12:49 <gmann> +1
15:13:06 <dansmith> I think the TC could also use this data and try to determine some target "weights" for jobs to try to keep things reasonable,
15:13:28 <dansmith> because again this week, lag times have been pretty bad.. not horrific, but definitely more than a day to get anything useful done
15:13:36 <fungi> in particular, i like that your tool can show the aggregate node utilization for a single build, which makes it more striking in side-by-side comparisons between different jobs what the relative costs for them are
15:13:41 <gmann> also this can be good thing to monitor in Tact SIG
15:14:06 <gmann> something like weekly or monthly usage on ML
15:14:15 <fungi> gmann: yep, discussions so far have been in the context of the tact sig, in our irc channel
15:14:25 <gmann> +1
15:14:31 <dansmith> yeah, I'd like to revisit this periodically and look at trends and identify things that make jobs suddenly worse
15:14:48 <dansmith> but we also really need to get people to work on fixing the spurious failures, of which we have a lot these days
15:14:57 <dansmith> as well as turning an eye to improving efficiency of what we're testing
15:15:10 <mnaser> on that note too, the tripleo team brought up a really good point which was -- having a target set for how much utilization they should have
15:15:26 <mnaser> "you're using too much" -- "how much should we be using" is a very fair question to ask
15:15:32 <fungi> yeah, i get the impression a lot of these jobs test mostly the exact same things, rather than only testing what other jobs aren't covering
15:15:36 <dansmith> yeah, and this script gives us a way to weigh a patch, which we can then say "should be X instead of currently Y"
15:15:58 <dansmith> fungi: yes, we really need to spend time making sure that's not too overlappy
15:16:18 <dansmith> I think tripleo is probably better than most at only testing a smaller subset, especially after they make their proposed changes
15:16:19 <gmann> and also combining jobs that test with the same config can be helpful. there are a lot of jobs separated out per set of tests, not per config
15:16:28 <dansmith> partially because one run is so heavy that they're not inclined to add more of those
15:16:32 <dansmith> but nova has a ton of overlap,
15:16:40 <dansmith> and I've already proposed some reduction there and more can be done
15:17:27 <dansmith> so anyway, that's my status for the moment
15:17:35 <mnaser> i do think it does still bring back a really good point of "how much usage should we be at?"
15:17:43 <mnaser> projects have historically never had any 'quotas'
15:17:44 <gmann> for the tripleo question: can we set a cap for each project's utilization? it should be based on the nature of each project's testing, not equal for every project
15:17:55 <gmann> yeah, that one
15:18:00 <dansmith> gmann: we can't, currently
15:18:16 <dansmith> there is fair queuing across projects,
15:18:36 <dansmith> but projects like tripleo have many trees, and it's definitely not a quota
15:18:52 <dansmith> I have to step away for something important right now, so we can move on
15:19:37 <gmann> yeah, but if we set the current usage (after optimization, once we agree current is fine) as a quota, then it does not grow in the future, or at least we can see if it does
15:20:00 <mnaser> it looks like we do have some actionable things out of this which is reaching out to the ML and bringing this up and it seems within the tact sig there is work and efforts
15:20:09 <mnaser> in order to try and watch what is happening
15:20:19 <dansmith> well, a target quota is good, but doesn't enforce anything, and enforcing quotas is not an option at the moment
15:20:28 <dansmith> yup yup
15:20:33 <dansmith> look for my email to the list soon
15:20:42 <mnaser> yes, i agree, just a target that we can all agree on :)
15:20:54 <jungleboyj> It would seem that having a benchmark so we can see when a project is suddenly consuming more is at least a good starting point.
15:21:06 <jungleboyj> Raise a flag before things get bad.
15:21:12 <gmann> yeah
15:22:27 <mnaser> #action dansmith reach out to ML regarding node usage
15:22:50 <mnaser> #action tact-sig review/trend project usage to raise flags before things "get bad"
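[For context on the tool discussed above, here is a minimal sketch of how per-change node cost could be estimated from the public Zuul builds API, in the spirit of the script dansmith describes; it is not his actual tool. The per-job node counts and the example change number are placeholders, since the builds endpoint reports duration and job name but not node counts, which in practice come from each job's nodeset definition in the project's Zuul configuration.]

```python
# Sketch: estimate the node-hours Zuul spent on one change, by summing
# (build duration * node count) over every build recorded for that change.
import requests

ZUUL_API = "https://zuul.opendev.org/api/tenant/openstack/builds"

# Hypothetical per-job node counts; real values come from each job's nodeset.
NODES_PER_JOB = {
    "tempest-full-py3": 1,
    "tripleo-ci-centos-8-standalone": 1,
    "neutron-ovn-tempest-ovs-release": 1,
}

def node_hours_for_change(change_number):
    """Approximate total node-hours consumed by all builds for a change."""
    resp = requests.get(ZUUL_API, params={"change": change_number, "limit": 500})
    resp.raise_for_status()
    total_seconds = 0.0
    for build in resp.json():
        duration = build.get("duration") or 0   # seconds; None for aborted builds
        nodes = NODES_PER_JOB.get(build["job_name"], 1)
        total_seconds += duration * nodes
    return total_seconds / 3600.0

if __name__ == "__main__":
    # Placeholder change number for illustration only.
    print("approx node-hours:", node_hours_for_change(767056))
```

[Summing such figures across a project's recent changes is the kind of relative "weight" comparison discussed in this topic.]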
15:22:55 <mnaser> #topic Dropping lower-constraints testing from all projects (gmann)
15:23:06 <jungleboyj> ++
15:23:44 <mnaser> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html
15:23:52 <mnaser> i know there were a few alternatives proposed which are being implemented in nova
15:24:14 <gmann> I am testing with direct deps only in nova and placement
15:24:18 <gmann> #link https://review.opendev.org/q/topic:%22l-c-direct-deps-only%22+(status:open%20OR%20status:merged)
15:24:35 <gmann> it is in progress as it is not yet green
15:24:46 <gmann> once I finish that i can post the results on the ML
15:25:10 <mnaser> should we keep this a topic to continue to discuss in the TC meeting?
15:25:13 <fungi> just be prepared to rip those out when stable branches are created. if you're not constraining your indirect dependencies then they will change over time and become incompatible
15:25:37 <gmann> for stable, I think we should remove it
15:26:07 <gmann> mnaser: we need to get consensus in the TC, but maybe after we finish the discussion on the ML
15:26:19 <mnaser> ok so i think it is still productive to keep it open
15:26:24 <gmann> maybe we can drop it for now and once we are ready to discuss in the TC then we can add it back
15:26:46 <mnaser> i'm okay with that too
15:26:46 <gmann> yeah either way is ok,
15:26:53 <rosmaita> so it sounds like the direction is to keep the job on master only and for direct dependencies only?
15:27:19 <gmann> rosmaita: I am testing whether that works fine or needs more work. will post the results on the ML
15:27:31 <rosmaita> ok, great
15:27:41 <gmann> let me add test for cinder too
15:27:49 <rosmaita> i have no objection to that idea
15:28:34 <gmann> yeah that surely can reduce the maintenance effort
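[As a rough sketch of the "direct dependencies only" approach being tested here -- an assumption about the mechanics, not gmann's actual patches -- the idea is to trim lower-constraints.txt so it only pins packages that also appear in requirements.txt, i.e. the project's direct dependencies. File names follow the usual OpenStack layout and simple name-based requirement lines are assumed.]

```python
# Sketch: keep only the lower-constraints pins whose package name also
# appears in requirements.txt (the direct dependencies).
from packaging.requirements import Requirement

def direct_only(requirements_path="requirements.txt",
                lower_constraints_path="lower-constraints.txt"):
    direct = set()
    with open(requirements_path) as f:
        for line in f:
            line = line.split("#")[0].strip()   # drop trailing license comments
            if line:
                direct.add(Requirement(line).name.lower())
    kept = []
    with open(lower_constraints_path) as f:
        for line in f:
            stripped = line.split("#")[0].strip()
            if stripped and Requirement(stripped).name.lower() in direct:
                kept.append(line.rstrip())
    return kept

if __name__ == "__main__":
    for pin in direct_only():
        print(pin)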
15:29:16 <mnaser> ok cool, gmann: your call on keeping it on the discussion list
15:29:24 <mnaser> sorry, agenda rather
15:29:32 <gmann> ok
15:30:16 <gmann> I can drop/keep it based on the ML discussion. like, if there is nothing to discuss then drop it
15:30:23 <gmann> before next Wed
15:30:29 <mnaser> ok that sounds like a good idea
15:30:33 * diablo_rojo sneaks in late
15:30:47 <mnaser> #action gmann continue to follow-up on using direct dependencies for l-c jobs
15:30:58 <mnaser> o/ diablo_rojo
15:31:00 <mnaser> #topic Mistral Maintenance (gmann)
15:31:07 <mnaser> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020137.html
15:31:14 * jungleboyj marks diablo_rojo tardy
15:31:20 <mnaser> didnt.. or does.. tripleo use mistral?
15:31:36 <gmann> humm do not know that.
15:31:43 <dansmith> not anymore I don't think
15:31:44 <dansmith> could be wrong
15:31:47 <fungi> i thought they did at one point
15:31:56 <fungi> but yeah, i vaguely recall they stopped a while back
15:32:15 <mnaser> i think they dropped mistral in favour of using ansible
15:32:17 <mnaser> i.. think
15:32:24 <mnaser> https://specs.openstack.org/openstack/tripleo-specs/specs/ussuri/mistral-to-ansible.html
15:33:02 <mnaser> which was implemented in ussuri-3
15:33:05 <mnaser> #link https://blueprints.launchpad.net/tripleo/+spec/tripleo-mistral-to-ansible
15:33:09 <mnaser> so that was a big consumer that dropped off
15:33:14 <mnaser> #link https://review.opendev.org/q/project:openstack/mistral
15:33:47 <mnaser> ~100 changes or so in the past year
15:34:07 <gmann> the other project using it is Tacker, where it is an optional integration afaik and Tacker works without mistral too. As discussed with one of the Tacker members, they will discuss it and see what they can do, either help in mistral or deprecate the support.
15:34:40 <mnaser> i think it will become more tricky when we'll have to figure out a ptl for the project
15:34:56 <mnaser> but maybe a distributed leadership model could work because it is a very low volume project right now
15:35:09 <gmann> yeah Renat mentioned he will not be able to run as PTL in the next election
15:35:42 <gmann> mnaser: but do we have any other maintainers there other than Renat?
15:36:01 <gmann> maybe there are.
15:36:02 <mnaser> i see some patches by `ali.abdelal@nokia.com`
15:36:06 <mnaser> https://review.opendev.org/c/openstack/mistral/+/755481
15:36:14 <mnaser> and i see Eyal too making reviews
15:36:18 <gmann> yeah
15:36:49 <mnaser> #link https://review.opendev.org/admin/groups/19588f2c593d37749e32bc5400365f654c683d19,members
15:36:50 <gmann> Distributed leadership will be helpful but let's see
15:37:30 <belmoreira> do we know the user adoption of Mistral?
15:37:35 <gmann> the other thing i was thinking is to add it to the newsletter to convey that it needs maintainers...
15:37:56 <gmann> belmoreira: good point.
15:37:57 <mnaser> belmoreira: that's a good question, maybe user survey data can help with this
15:38:03 <fungi> might be able to figure out some percentage of user survey respondents who claimed to be using it?
15:38:07 <mnaser> gmann: maybe we can reach out to mistral-core before that
15:38:08 <fungi> jinx
15:38:21 <gmann> one user I know of is via Tacker at NEC/NTT japan, where we are checking the option to go with a non-mistral way
15:38:22 <belmoreira> mnaser *I'm trying to find out*
15:38:43 <gmann> mnaser: +1,
15:38:54 * mnaser looks at aprice
15:39:11 <mnaser> oh wait i'm blind, there is a 'projects used in production deployments'
15:39:22 <mnaser> under 'deployments' https://www.openstack.org/analytics
15:39:46 <fungi> it's amazing how she answers your questions before you even know you have them
15:39:51 <mnaser> so this shows .. i think .. 8% production, 4% testing, 13% interested
15:40:09 <gmann> yeah
15:40:24 <belmoreira> this is 2019 data, it would be great to get the latest
15:40:35 <belmoreira> maybe aprice can help
15:40:43 <fungi> all it shows to me is that we need to drop a few projects so that graph won't be unreadably crowded ;)
15:40:51 <aprice> o/
15:41:01 <jungleboyj> fungi:  ++
15:41:09 <belmoreira> fungi :)
15:41:12 <aprice> in another meeting simultaneously - is there a tl;dr on the current user survey request?
15:41:25 <mnaser> aprice: 2020 data :)
15:42:02 <gmann> fungi: or divide into separate categories based on usage, like high vs low or so?
15:42:05 <aprice> ah yes, i didnt realize that wasn't on o.o/analytics yet. i can work with jimmy to get that live soon
15:42:11 <aprice> probably in the next 1-2 days
15:42:17 <mnaser> cool, awesome!
15:42:19 <aprice> but if there's a specific data point yall need sooner, lmk
15:42:30 <mnaser> aprice: % of usage of mistral in prod environments
15:42:50 <aprice> mnaser: ok - i can pull that in like an hour or so and drop it back in this channel
15:42:56 <mnaser> aprice: yay thank you!
15:42:59 <aprice> np!
15:43:09 <mnaser> so we'll have that info, anyone would like to volunteer to reach out to the current maintainers?
15:43:46 <gmann> for 'you need help' ?
15:44:09 <mnaser> yes, to try and see if any of them would like to run for ptl next release
15:44:15 <gmann> ok
15:44:18 <mnaser> so we can try to get ahead of any troubled waters :-)
15:44:18 <gmann> i can do that
15:44:24 <mnaser> thanks gmann
15:44:37 <mnaser> #action gmann reach out to mistral maintainers about PTL for next cycle
15:44:38 <ricolin> gmann, let me help you with that too
15:44:44 <mnaser> #undo
15:44:45 <openstack> Removing item from minutes: #action gmann reach out to mistral maintainers about PTL for next cycle
15:44:49 <mnaser> #action gmann/ricolin reach out to mistral maintainers about PTL for next cycle
15:44:50 <mnaser> :>
15:44:51 <gmann> sure
15:45:28 <mnaser> so with the combination of anyone stepping up + usage in the wild as suggested by belmoreira, we should be able to make a decision on the project's direction
15:45:55 <gmann> yeah
15:46:25 <mnaser> cool, thanks for bringing it up gmann
15:46:26 <mnaser> #topic Open Reviews
15:46:30 <mnaser> #link https://review.opendev.org/q/project:openstack/governance+is:open
15:47:18 <mnaser> i'm going to go over a few that need to be merged now
15:50:12 <mnaser> https://review.opendev.org/c/openstack/governance/+/770616
15:50:16 <mnaser> this really could use more votes
15:50:56 <mnaser> https://review.opendev.org/c/openstack/governance/+/773383 also can help the osa team land things
15:52:07 <gmann> done
15:52:21 <jungleboyj> Done.
15:52:31 <jungleboyj> I want to read the cool down one again.  Will do that.
15:54:34 <mnaser> cool, dansmith had some concerns too, so appreciate a review on https://review.opendev.org/c/openstack/governance/+/770616 when possible
15:54:58 <jungleboyj> Done.  I think the wording is positive enough while communicating our reasons.
15:55:01 <dansmith> yeah I don't think my concerns were addressed, but they weren't super important
15:55:11 <jungleboyj> Don't quit, but work on what you want to do.
15:56:03 <mnaser> dansmith: i'm happy to revise if you want if you feel strongly about the concerns :)
15:56:19 <dansmith> I know, I'll try to hit it again after this
15:56:32 <mnaser> cool, no problem
15:57:05 <mnaser> and also some of those applications for projects that have historically been very interop-y -- https://review.opendev.org/c/openstack/governance/+/773684 (cinder) and https://review.opendev.org/c/openstack/governance/+/773090 (neutron)
15:57:11 <mnaser> those cant land yet but earlier reviews is helpful
15:58:10 <mnaser> gmann: this might be at a standstill https://review.opendev.org/c/openstack/governance/+/771785 -- i think we'll have to ping the monasca team to properly deprecate it
15:58:35 <openstackgerrit> Merged openstack/governance-sigs master: Add Dirk Mueller as Packaging SIG co-chair  https://review.opendev.org/c/openstack/governance-sigs/+/772881
15:58:37 <gmann> yeah, I will check with them or if they need help
15:58:56 <mnaser> cool, awesome, we were able to clear out most of our queue
15:59:07 <gmann> \o/
15:59:32 <ricolin> :)
15:59:42 <mnaser> thanks for coming everyone!
15:59:43 <openstackgerrit> Merged openstack/governance master: Add rbd-iscsi-client to cinder project  https://review.opendev.org/c/openstack/governance/+/772597
15:59:52 <mnaser> very productive
15:59:54 <mnaser> #endmeeting