15:01:18 <mnaser> #startmeeting tc
15:01:18 <gmann> mnaser: ping
15:01:19 <openstack> Meeting started Thu Feb 18 15:01:18 2021 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:23 <openstack> The meeting name has been set to 'tc'
15:01:29 <jungleboyj> :-)
15:01:43 <gmann> :)
15:01:44 <ricolin> o/
15:01:49 <dansmith> o/
15:02:39 <mnaser> #topic rollcall
15:02:44 <gmann> o/
15:02:45 <mnaser> thanks for hosting last week btw, gmann
15:02:45 <diablo_rojo> o/
15:02:46 <ricolin> o/
15:02:47 <jungleboyj> o/
15:02:51 * mnaser has been dealing with all sorts of fun
15:03:55 <mnaser> #topic Follow up on past action items
15:04:01 <mnaser> gmann continue to follow-up on using direct dependencies for l-c jobs
15:04:26 <gmann> I posted the result of direct deps in l-c test #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020556.html
15:05:12 <gmann> and the alternative proposal to removing l-c testing
15:05:23 <gmann> which is what was initially proposed in ML
15:05:43 <gmann> let's wait for more response on that and then we can discuss/document that
15:05:55 <mnaser> ack, thanks for all the follow up on this
15:06:39 <gouthamr> o/
15:07:03 <mnaser> diablo_rojo to send the k8s steering meeting details on ML and add section for new meeting in etherpad
15:07:36 <diablo_rojo> Already added a new section to the etherpad
15:07:40 <diablo_rojo> but I need to send an email
15:07:59 <mnaser> ok, we have it as a discussion item, so i will leave it for us to talk about it when we get to it potentially ?
15:08:23 <diablo_rojo> I don't think it warrants a whole section now that its been mentioned and I will send the email lol
15:08:52 <mnaser> ok so we can drop that section then
15:09:04 <diablo_rojo> Want to link the etherpad?
15:09:09 <diablo_rojo> Yeah, mnaser I think so.
15:09:15 <mnaser> yeah that might be helpful
15:09:26 <gmann> #link https://etherpad.opendev.org/p/kubernetes-cross-community-topics
15:09:28 <gmann> L29
15:09:41 <diablo_rojo> gmann, to the rescue
15:09:41 <gmann> sorry L20
15:09:51 <gmann> :)
15:09:56 <diablo_rojo> as I fumble around with tab complete
15:10:10 <diablo_rojo> Yes line 20, feel free to add topics
15:10:15 <mnaser> can we keep a rolling action item to add topics there
15:10:32 <mnaser> i'd really like us to take advantage of this :>
15:11:03 <diablo_rojo> Yeah go for it :)
15:11:08 <belmoreira> o/
15:11:22 <jungleboyj> ++
15:11:27 <mnaser> #action diablo_rojo send out email to ML wrt k8s cross-community discussion
15:11:41 <mnaser> #action tc-members fill out https://etherpad.opendev.org/p/kubernetes-cross-community-topics with topics
15:11:46 <diablo_rojo> +2
15:12:05 <mnaser> cool, anything else about this?
15:12:12 <diablo_rojo> Nope I don't think so
15:12:13 <mnaser> #action mnaser drop k8s steering committee meeting from agenda
15:12:27 <mnaser> gmann to follow up with monasca team for https://review.opendev.org/c/openstack/governance/+/771785
15:12:50 <gmann> I pinged Monasca PTL chaconpiza on IRC as well as sent email to him but no response. I will wait until tomorrow afternoon, otherwise push the changes and add the monasca team as reviewers
15:13:32 <mnaser> hmm
15:13:37 <mnaser> wonder if we just dont have an active email too
15:13:41 <mnaser> https://review.opendev.org/q/owner:martin%2540chaconpiza.com
15:13:48 <ricolin> I also pinged him but no response as well
15:14:04 <mnaser> does monasca team have a scheduled meeting
15:14:09 * mnaser opens eavesdrop.openstack.org
15:14:19 <mnaser> http://eavesdrop.openstack.org/#Monasca_Team_Meeting
15:14:24 <mnaser> Weekly on Tuesday at 1300 UTC in #openstack-monasca (IRC webclient)
15:14:45 <mnaser> http://eavesdrop.openstack.org/meetings/monasca/2021/monasca.2021-02-16-13.00.log.txt
15:14:48 <ttx> there was a meeting this week
15:14:53 <mnaser> maybe we can add something to their team agenda - https://etherpad.opendev.org/p/monasca-team-meeting-agenda
15:15:06 <gmann> sure, that will help
15:15:28 <mnaser> cool, so lets try that route
15:15:35 <gmann> +1
15:15:36 <mnaser> ill keep the action item for the next week if that's cool
15:15:41 <gmann> sure
15:15:45 <mnaser> #action gmann to follow up with monasca team for https://review.opendev.org/c/openstack/governance/+/771785
15:15:56 <mnaser> #topic Audit SIG list and chairs (diablo_rojo)
15:16:35 <diablo_rojo> I let this fall down my todo list again but I will make progress by next week.
15:16:42 <diablo_rojo> So.. no updates atm
15:17:09 <mnaser> ok cool no problem :)
15:17:19 <mnaser> #topic Gate performance and heavy job configs (dansmith)
15:17:32 <dansmith> nothing really new since last week or so,
15:17:40 <dansmith> I haven't gone checking to see if things have been dropped,
15:17:44 <dansmith> gate perf is still pretty bad
15:17:57 <mnaser> dansmith: is there any easy measurable signal we can all look at to track if things are trending better or worse?
15:17:58 <dansmith> we have merged one big improvement to devstack time, and would like to advocate for another which will make a big deal
15:18:19 <dansmith> mnaser: that metric is super hard to generate, so it's more of a gut feeling sort of thing
15:18:29 <dansmith> I imagine gmann also has a gut feeling
15:18:41 <mnaser> i wonder if zuul has some 'avg wait time for change in gate'
15:18:41 <dansmith> I tried to get some attention in the cinder channel to the test failures last week but got no replies
15:18:53 <mnaser> wasn't there some twitter bot that would whine when our queues were long :)
15:18:55 <dansmith> so I will try them again, as it seems like in terms of gate resets they're the high offender
15:19:08 <dansmith> mnaser: the gate queue depth is easy to see,
15:19:26 <dansmith> and I watch my patch time-to-run values with dash.py,
15:19:26 <mnaser> so i think the resets are what hurt the most right
15:19:28 <gmann> is the tripleo optimization done?
15:19:37 <dansmith> gmann: I don't know I haven't checked
15:19:46 <dansmith> they were collecting votes, but not sure if it actually happened
15:19:48 <fungi> the average wait time is going to vary a lot depending on activity levels and time of day/week too, so it's hard to know what you're comparing against
15:20:00 <gmann> k
15:20:04 <mnaser> fungi: but i think a good signal that we can focus on is # of gate failures correct?
15:20:13 <mnaser> the more gate fails/resets, the worse it gets, esp with gate being high prio
15:20:16 <fungi> previous week? same week in the previous development cycle? same week in the prior year?
15:20:27 <dansmith> mnaser: gate resets hurt a lot, but load is a huge part of it right now I think
15:20:38 <gmann> i am not sure if that came up in the past also, can we have some user flag to stop the run ?
15:20:48 <mnaser> it may be a bit of a domino effect too
15:20:50 <fungi> gate reset count also varies depending on the depth of the queues themselves
15:20:55 <mnaser> fungi: yes, good point
15:20:56 <gmann> so that i can stop if i see some job failing and i do not want the rest to continue
15:21:07 <dansmith> right, the gate queue isn't that deep right now
15:21:07 <mnaser> gmann: you could abandon your patch
15:21:15 <dansmith> so a reset would suck, but it wouldn't be huge
15:21:18 <fungi> the deeper the queue, the more changes have a chance to fail more than once as changes ahead of them fail
15:21:20 <gmann> well that needs abandon and restore right
15:21:31 <dansmith> gmann: or push a new rev
15:22:02 <gmann> many cases i see failed in the gate pipeline so we have to do recheck anyways, so why wait for the others
15:22:13 <gmann> at least gate pipeline can be helpful
15:22:39 <gmann> remove the review if any of the jobs fail
15:22:41 <dansmith> I think the TC should focus on the people stuff we can resolve,
15:22:44 <mnaser> fungi: multinode jobs and jobs that pause and wait for another job (container stuff) -- do these enforce a cloud in the same nodepool-pool?
15:23:04 <dansmith> technical brainstorming and feature implementation is really not a TC thing, and we can discuss those things in -infra
15:23:20 <dansmith> so I'll try to check on the optimization efforts to lower the test load before next week's meeting here
15:23:21 <fungi> mnaser: yes, if builds depend on one another then zuul asks nodepool to fill them all from the same provider
15:23:27 <dansmith> and see if we need some reminders
15:23:39 <fungi> so that their network communication (if any) will be as local as possible
15:23:41 <mnaser> that's fair.  dansmith: is there a script that generates your 'usage per project' thing?
15:23:54 <mnaser> so we can track if we need to ping more people and if so, who
15:23:58 <dansmith> mnaser: yes, linked in my email
15:25:08 <mnaser> #link https://gist.github.com/kk7ds/5edbfacb2a341bb18df8f8f32d01b37c
15:25:16 <mnaser> ok, i'll try and see if i can run that to look over it here too
15:25:31 <mnaser> ok, lets keep the item for now, i don't know if there's a whole lot more we can immediately do
15:25:48 <mnaser> #topic Mistral Maintenance (gmann)
15:26:08 <gmann> we are good on this. Renat replied to try the DPL model in Xena.
15:26:19 <mnaser> #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020137.html
15:26:39 <mnaser> ok great, so i think we can drop this from the agenda and have DPL patches for next time around?
15:26:44 <gmann> yeah.
15:27:42 <mnaser> #action mnaser drop Mistral Maintenance tp[oc
15:27:43 <mnaser> #undo
15:27:43 <openstack> Removing item from minutes: #action mnaser drop Mistral Maintenance tp[oc
15:27:45 <mnaser> #action mnaser drop Mistral Maintenance topic
15:28:01 <mnaser> #topic Student Programs (diablo_rojo)
15:28:12 <diablo_rojo> Another reminder!
15:28:25 <diablo_rojo> Tomorrow the GSoC applications close.
15:28:40 <diablo_rojo> Still have some more time to apply for outreachy.
15:28:58 <mnaser> do projects need to register or what exactly is actionable here?
15:28:59 <diablo_rojo> I'm happy to help however I can if people are interested in applying for interns.
15:29:21 <diablo_rojo> For outreachy, mentors need to apply with a project for interns to work on.
15:29:34 <diablo_rojo> Similar for GSoC
15:29:54 <diablo_rojo> We've only been accepted to GSoC once though so I am less familiar with the process.
15:30:09 <mnaser> by we = openstack, or a specific openstack project, or osf, or?
15:30:28 <diablo_rojo> OpenStack
15:30:38 <diablo_rojo> for GSoC
15:31:03 <diablo_rojo> OpenStack has long been a participant in outreachy- projects varied under that umbrella
15:31:38 <mnaser> i'd hate for us to miss out on taking advantage of the opportunity :X
15:32:04 <mnaser> taking advantage sounds like a poor choice of wording but
15:32:12 <mnaser> i cant find anything better :)
15:32:22 <diablo_rojo> Still :) Making use of the opportunity
15:32:23 <jungleboyj> wah wah wah
15:32:46 <diablo_rojo> Let me know if you need help! Otherwise, we can move onto ttx's topic
15:33:02 <mnaser> i'd probably ask if openinfra can take advantage of this
15:33:04 <mnaser> but yeah, sure
15:33:22 <gmann> I am going to put in an application for QA today
15:33:57 <mnaser> ack
15:34:03 <diablo_rojo> gmann, for outreachy?
15:34:18 <diablo_rojo> mnaser, I have passed along both to all open infra folks.
15:34:25 <diablo_rojo> (zuul, kata, etc)
15:34:26 <gmann> need to check which one or both
15:34:36 <mnaser> diablo_rojo: awesome, sorry, i meant opendev, but yeah :p
15:34:37 <diablo_rojo> gmann, sounds good, let me know if I can help.
15:34:43 <gmann> sure
15:35:12 <mnaser> ok great
15:35:15 <mnaser> #topic Recommended path forward for OSarchiver (ttx)
15:35:25 <ttx> Hi! Was just looking for a bit of guidance, on behalf of the Large Scale SIG
15:35:35 <ttx> OVHCloud released https://github.com/ovh/osarchiver as open source (BSD-3)
15:35:45 <ttx> It's a tool to help with archiving OpenStack databases, but it is relatively openstack-agnostic
15:35:57 <ttx> They are interested in making the tool more visible to OpenStack users by placing it under OpenStack governance
15:36:04 <ttx> There are multiple ways to get there
15:36:13 <ttx> - We could just host it under the Large Scale SIG
15:36:20 <ttx> ...but it's not really large scale specific
15:36:31 <ttx> - We could leverage the https://opendev.org/openstack/osops common repository (owned by the "Operation Docs and Tooling" SIG)
15:36:42 <ttx> ...but that SIG and repository are not very active/visible, so it's unclear that would be a win
15:36:52 <ttx> - We could make it its own smallish project on the "Operations tooling" side of the OpenStack map
15:37:03 <ttx> ...but a full project team may be overkill as the tool is mostly feature-complete
15:37:13 <ttx> What would be the TC's recommendation for them to follow?
15:37:29 <ttx> Placing it under OpenStack governance involves relicensing it under Apache-2, so I feel like we should not ask them to move/relicense if that does not really increase the tool visibility.
15:38:05 <mnaser> the difference between this and the built-in openstack tools is that this archives things in another db, right?
15:38:34 <ttx> also works for any project, iiuc
15:39:15 <mnaser> i mean, a sig can have deliverables
15:39:19 <ttx> (which raises the other alternative solution: keep it where it is because it's actually not that useful)
15:39:20 <dansmith> either they relicense and get the visibility or don't right?
15:39:28 <dansmith> isn't apache license a requirement for openstack?
15:39:40 <mnaser> ansible-sig publishes the openstack collections
15:39:48 <fungi> apache license is a loose requirement for deliverables shipped as a part of openstack
15:39:53 <mnaser> they are not an official deliverable of openstack
15:39:56 <fungi> we have a number of exceptions documented
15:39:59 <ttx> dansmith: yes it is. They are happy to relicense. but if it's to end up in a dark corner of OSops, that might not be worth the pain
15:40:05 <dansmith> ack
15:40:09 <mnaser> and im pretty sure because of legacy reasons, openstack collections werent apache 2
15:40:21 <dansmith> I guess if it's not a deliverable per se, maybe the license requirement isn't a thing?
15:40:40 <fungi> #link https://governance.openstack.org/tc/reference/licensing.html Licensing requirements
15:40:49 <belmoreira> for me the main question is to have a visible "place" in the community where ops can share their tools
15:40:51 <dansmith> I'm also not sure how much the "being under the big tent umbrella awning" or whatever really increases visibility anymore, but..
15:40:54 <ttx> if it's a SIG thing, does not have to be apache-2. But relicensing is not really the issue
15:41:22 <ttx> I'm reluctant to encourage them to move in if it does not really increase visibility
15:41:55 <mnaser> something i will openly admit
15:42:00 <mnaser> people don't like gerrit
15:42:01 <ttx> belmoreira: would you say OSops is such a place? Can anyone name one tool that is there?
15:42:20 <mnaser> projects will _actually_ get less contributions if they are on gerrit.
15:42:25 <ricolin> ttx how's large scale SIG feel about it if we end up put it under large scale SIG?
15:42:27 <fungi> people don't like lots of tools. some people also don't like github, or gitlab, or bitbucket, or...
15:42:31 <ttx> The people at OVH are happy to place it under Gerrit fwiw
15:42:46 * dansmith hates github
15:42:50 <dansmith> there, I said it
15:42:54 <mnaser> same.
15:42:58 <mnaser> but we're a minority :)
15:43:02 <belmoreira> if we have a light weight project that can have releases, like other projects, it would have a lot of visibility
15:43:15 <ttx> dansmith: I still don't understand how to use it properly, and when I ask, I realize nobody else uses it properly
15:43:16 <ricolin> dansmith, you're not alone there:)
15:43:20 <mnaser> if they don't mind it being under gerrit, then i don't see why they can't have it as a repo for their project
15:43:53 <dansmith> ttx: that's mostly what I hate about it.. encourages bad behavior, like "here are 23 commits that finally end up with all the typos fixed, kthx"
15:44:04 <ttx> anyway, that's a tangent. I guess the real question is, do y'all think OSops is going to be back in fashion
15:44:23 <fungi> i'm not sold on the visibility argument, i'll admit. i think the reasons for becoming an official deliverable would need to be for something more than just increased visibility to make it worthwhile (we can't guarantee increased visibility/adoption)
15:44:36 <dansmith> fungi: that's my feeling, stated above
15:44:38 <ttx> if not, would you entertain OSarchiver to be a full project/openstack deliverable, or is it overkill
15:45:02 <ttx> if not, I guess that leaves the Large Scale SIG hosting solution
15:45:05 <mnaser> ttx: we were doing some stuff here.. https://opendev.org/vexxhost/openstack-tools -- i think the hardest thing about osops is we all do things differently and velocity can slow things down.  i would maybe suggest osarchiver to be a sig deliverable -- https://opendev.org/openstack/governance/src/branch/master/reference/sigs-repos.yaml
15:45:12 <ttx> or leave it where it is and link to it where we can
15:45:34 <mnaser> hosting under large scale sig would be my vote tbh
15:45:40 <ttx> mnaser: maybe all we need is some doc page that points to tools
15:45:45 <ttx> wherever they are hosted
15:45:50 <ttx> and promote THAT instead
15:45:55 <fungi> i don't think it's overkill to make a project team for something like that, but the team responsible for the software would need to make the judgement call there as to whether it's worthwhile *for them*
15:46:07 <mnaser> ttx: i think that would be a lot more successful instead of a centralized repo
15:46:36 <ttx> right, the overhead of common repository governance is just not worth it
15:47:15 <ttx> So the trick is to find a good place to document external tools
15:47:17 <mnaser> if they want to take advantage of gerrit + opendev + ci access, move it into the sig, if none of that interests them, it can stay at github, and we can point some osops 'tools' page to include them
15:47:24 <ttx> or SIG-maintained tools
15:48:05 <ttx> any suggestion on where we could document those? I guess we could have a page on the openstack website, if all else fails
15:48:24 <ttx> not sure our official docs would work a lot better
15:48:29 <mnaser> maybe the osops repo can have doc/ added to it
15:48:39 <ricolin> ttx I think I agree that providing docs and visibility at this stage is more important than where exactly we put it
15:48:39 <mnaser> and we can publish that there and point to it from the openstack website
15:48:52 <ttx> I think OSops is over-complex in a world where repos are cheap
15:49:26 <mnaser> i just prefer any form of code review over wiki
15:49:32 <gmann> we can add link in openstack map too as 'external tools which can be useful'
15:50:06 <ricolin> a deep dive article might be a good option too:)
15:50:09 <ttx> OK, so how about... for OSarchiver it can stay where it is or get hosted under Large Scale SIG. We list that tool (and others) on some page, ideally maintained by the Operations Docs/Tooling SIG
15:50:19 <ttx> Then link to that page from openstack.org
15:50:49 <mnaser> i like that
15:51:02 <jungleboyj> Makes sense.
15:51:54 <ttx> OK, I'll check with the Ops tooling/Docs SIG if maintaining such a page works for them. It's probably a better use of time than administering OSops
15:52:13 <mnaser> ttx: i think maybe we can keep this agenda item for next week to see where things are with this in terms of your final idea
15:52:15 <ttx> and it can still self-reference OSops tools
15:52:32 <ttx> Sure, I'll try to be around next week as well.
15:52:45 <ttx> Thanks everyone!
15:52:55 <mnaser> thanks for bringing this up
15:53:06 <mnaser> #topic open reviews
15:54:07 <mnaser> i think https://review.opendev.org/c/openstack/governance/+/770616 can merge
15:54:17 <mnaser> we need a follow up patch to mv it to the cycle goal
15:55:16 <mnaser> checking in on the other patches
15:55:22 <mnaser> #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:57:33 <mnaser> ok great, everything can land
15:57:37 <mnaser> so we just have the monasca stuff
15:57:41 <gmann> +1
15:57:41 <mnaser> and we'll be fully cleared out on that
15:58:13 <mnaser> anything else? :)
15:58:25 <gmann> nothing from me.
15:59:17 <jungleboyj> Nothing from me.
15:59:39 <mnaser> great
15:59:41 <mnaser> thanks all!!!
15:59:45 <mnaser> #endmeeting