15:01:18 #startmeeting tc
15:01:18 mnaser: ping
15:01:19 Meeting started Thu Feb 18 15:01:18 2021 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:23 The meeting name has been set to 'tc'
15:01:29 :-)
15:01:43 :)
15:01:44 o/
15:01:49 o/
15:02:39 #topic rollcall
15:02:44 o/
15:02:45 thanks for hosting last week btw, gmann
15:02:45 o/
15:02:46 o/
15:02:47 o/
15:02:51 * mnaser has been dealing with all sorts of fun
15:03:55 #topic Follow up on past action items
15:04:01 gmann continue to follow-up on using direct dependencies for l-c jobs
15:04:26 I posted the result of direct deps in l-c test #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020556.html
15:05:12 and the alternate proposal to the removal of l-c testing
15:05:23 which is what was initially proposed in the ML
15:05:43 let's wait for more responses on that and then we can discuss/document it
15:05:55 ack, thanks for all the follow up on this
15:06:39 o/
15:07:03 diablo_rojo to send the k8s steering meeting details on ML and add section for new meeting in etherpad
15:07:36 Already added a new section to the etherpad
15:07:40 but I need to send an email
15:07:59 ok, we have it as a discussion item, so i will leave it for us to talk about it when we get to it potentially?
15:08:23 I don't think it warrants a whole section now that it's been mentioned and I will send the email lol
15:08:52 ok so we can drop that section then
15:09:04 Want to link the etherpad?
15:09:09 Yeah, mnaser I think so.
15:09:15 yeah that might be helpful
15:09:26 #link https://etherpad.opendev.org/p/kubernetes-cross-community-topics
15:09:28 L29
15:09:41 gmann, to the rescue
15:09:41 sorry L20
15:09:51 :)
15:09:56 as I fumble around with tab complete
15:10:10 Yes line 20, feel free to add topics
15:10:15 can we keep a rolling action item to add topics there
15:10:32 i'd really like us to take advantage of this :>
15:11:03 Yeah go for it :)
15:11:08 o/
15:11:22 ++
15:11:27 #action diablo_rojo send out email to ML wrt k8s cross-community discussion
15:11:41 #action tc-members fill out https://etherpad.opendev.org/p/kubernetes-cross-community-topics with topics
15:11:46 +2
15:12:05 cool, anything else about this?
15:12:12 Nope I don't think so
15:12:13 #action mnaser drop k8s steering committee meeting from agenda
15:12:27 gmann to follow up with monasca team for https://review.opendev.org/c/openstack/governance/+/771785
15:12:50 I pinged Monasca PTL chaconpiza on IRC as well as sent an email to him but no response.
I will wait for tomorrow afternoon, otherwise push the changes and add the monasca team as reviewer
15:13:32 hmm
15:13:37 wonder if we just don't have an active email too
15:13:41 https://review.opendev.org/q/owner:martin%2540chaconpiza.com
15:13:48 I also pinged him but no response as well
15:14:04 does the monasca team have a scheduled team meeting
15:14:09 * mnaser opens eavesdrop.openstack.org
15:14:19 http://eavesdrop.openstack.org/#Monasca_Team_Meeting
15:14:24 Weekly on Tuesday at 1300 UTC in #openstack-monasca (IRC webclient)
15:14:45 http://eavesdrop.openstack.org/meetings/monasca/2021/monasca.2021-02-16-13.00.log.txt
15:14:48 there was a meeting this week
15:14:53 maybe we can add something to their team agenda - https://etherpad.opendev.org/p/monasca-team-meeting-agenda
15:15:06 sure, that will help
15:15:28 cool, so let's try that route
15:15:35 +1
15:15:36 i'll keep the action item for the next week if that's cool
15:15:41 sure
15:15:45 #action gmann to follow up with monasca team for https://review.opendev.org/c/openstack/governance/+/771785
15:15:56 #topic Audit SIG list and chairs (diablo_rojo)
15:16:35 I let this fall down my todo list again but I will make progress by next week.
15:16:42 So.. no updates atm
15:17:09 ok cool no problem :)
15:17:19 #topic Gate performance and heavy job configs (dansmith)
15:17:32 nothing really new since last week or so,
15:17:40 I haven't gone checking to see if things have been dropped,
15:17:44 gate perf is still pretty bad
15:17:57 dansmith: is there any easy measurable signal we can all look at to track if things are trending better or worse?
15:17:58 we have merged one big improvement to devstack time, and would like to advocate for another which will make a big difference
15:18:19 mnaser: that metric is super hard to generate, so it's more of a gut feeling sort of thing
15:18:29 I imagine gmann also has a gut feeling
15:18:41 i wonder if zuul has some 'avg wait time for change in gate'
15:18:41 I tried to get some attention in the cinder channel to the test failures last week but got no replies
15:18:53 wasn't there some twitter bot that would whine when our queues were long :)
15:18:55 so I will try them again, as it seems like in terms of gate resets they're the high offender
15:19:08 mnaser: the gate queue depth is easy to see,
15:19:26 and I watch my patch time-to-run values with dash.py,
15:19:26 so i think the resets are what hurt the most, right
15:19:28 is the tripleo optimization done?
15:19:37 gmann: I don't know, I haven't checked
15:19:46 they were collecting votes, but not sure if it actually happened
15:19:48 the average wait time is going to vary a lot depending on activity levels and time of day/week too, so it's hard to know what you're comparing against
15:20:00 k
15:20:04 fungi: but i think a good signal that we can focus on is # of gate failures, correct?
15:20:13 the more gate fails/resets, the worse it gets, esp with gate being high prio
15:20:16 previous week? same week in the previous development cycle? same week in the prior year?
15:20:27 mnaser: gate resets hurt a lot, but load is a huge part of it right now I think
15:20:38 i am not sure if that came up in the past also, can we have some user flag to stop the run?
15:20:48 it may be a bit of a domino effect too
15:20:50 gate reset count also varies depending on the depth of the queues themselves
15:20:55 fungi: yes, good point
15:20:56 so that i can stop it if i see some job failing and i do not want the rest of the others to continue
15:21:07 right, the gate queue isn't that deep right now
15:21:07 gmann: you could abandon your patch
15:21:15 so a reset would suck, but it wouldn't be huge
15:21:18 the deeper the queue, the more changes have a chance to fail more than once as changes ahead of them fail
15:21:20 well that needs abandon and restore, right
15:21:31 gmann: or push a new rev
15:22:02 in many cases i see it failed in the gate pipeline, so we have to do a recheck anyway, so why wait for the others
15:22:13 at least for the gate pipeline it can be helpful
15:22:39 remove the review if any of the jobs fail
15:22:41 I think the TC should focus on the people stuff we can resolve,
15:22:44 fungi: multinode jobs and jobs that pause and wait for another job (container stuff) -- do these enforce a cloud in the same nodepool-pool?
15:23:04 technical brainstorming and feature implementation is really not a TC thing, and we can discuss those things in -infra
15:23:20 so I'll try to check on the optimization efforts to lower the test load before next week's meeting here
15:23:21 mnaser: yes, if builds depend on one another then zuul asks nodepool to fill them all from the same provider
15:23:27 and see if we need some reminders
15:23:39 so that their network communication (if any) will be as local as possible
15:23:41 that's fair. dansmith: is there a script that generates your 'usage per project' thing?
15:23:54 so we can track if we need to ping more people and if so, who
15:23:58 mnaser: yes, linked in my email
15:25:08 #link https://gist.github.com/kk7ds/5edbfacb2a341bb18df8f8f32d01b37c
15:25:16 ok, i'll try and see if i can run that to look over it here too
15:25:31 ok, let's keep the item for now, i don't know if there's a whole lot more we can immediately do
15:25:48 #topic Mistral Maintenance (gmann)
15:26:08 we are good on this. Renat replied to try the DPL model in Xena.
15:26:19 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020137.html
15:26:39 ok great, so i think we can drop this from the agenda and have DPL patches for next time around?
15:26:44 yeah.
15:27:42 #action mnaser drop Mistral Maintenance tp[oc
15:27:43 #undo
15:27:43 Removing item from minutes: #action mnaser drop Mistral Maintenance tp[oc
15:27:45 #action mnaser drop Mistral Maintenance topic
15:28:01 #topic Student Programs (diablo_rojo)
15:28:12 Another reminder!
15:28:25 Tomorrow the GSoC applications close.
15:28:40 Still have some more time to apply for outreachy.
15:28:58 do projects need to register or what exactly is actionable here?
15:28:59 I'm happy to help however I can if people are interested in applying for interns.
15:29:21 For outreachy, mentors need to apply with a project for interns to work on.
15:29:34 Similar for GSoC
15:29:54 We've only been accepted to GSoC once though so I am less familiar with the process.
15:30:09 by we = openstack, or a specific openstack project, or osf, or?
15:30:28 OpenStack
15:30:38 for GSoC
15:31:03 OpenStack has long been a participant in outreachy - projects varied under that umbrella
15:31:38 i'd hate for us to miss out on taking advantage of the opportunity :X
15:32:04 taking advantage sounds like a poor taste of wording but
15:32:12 i can't find anything better :)
15:32:22 Still :) Making use of the opportunity
15:32:23 wah wah wah
15:32:46 Let me know if you need help! Otherwise, we can move on to ttx's topic
15:33:02 i'd probably ask if openinfra can take advantage of this
15:33:04 but yeah, sure
15:33:22 I am going to put in an application for QA today
15:33:57 ack
15:34:03 gmann, for outreachy?
15:34:18 mnaser, I have passed along both to all open infra folks.
15:34:25 (zuul, kata, etc)
15:34:26 need to check which one or both
15:34:36 diablo_rojo: awesome, sorry, i meant opendev, but yeah :p
15:34:37 gmann, sounds good, let me know if I can help.
15:34:43 sure
15:35:12 ok great
15:35:15 #topic Recommended path forward for OSarchiver (ttx)
15:35:25 Hi! Was just looking for a bit of guidance, on behalf of the Large Scale SIG
15:35:35 OVHCloud released https://github.com/ovh/osarchiver as open source (BSD-3)
15:35:45 It's a tool to help with archiving OpenStack databases, but it is relatively openstack-agnostic
15:35:57 They are interested in making the tool more visible to OpenStack users by placing it under OpenStack governance
15:36:04 There are multiple ways to get there
15:36:13 - We could just host it under the Large Scale SIG
15:36:20 ...but it's not really large scale specific
15:36:31 - We could leverage the https://opendev.org/openstack/osops common repository (owned by the "Operation Docs and Tooling" SIG)
15:36:42 ...but that SIG and repository are not very active/visible, so it's unclear that would be a win
15:36:52 - We could make it its own smallish project on the "Operations tooling" side of the OpenStack map
15:37:03 ...but a full project team may be overkill as the tool is mostly feature-complete
15:37:13 What would be the TC's recommendation for them to follow?
15:37:29 Placing it under OpenStack governance involves relicensing it under Apache-2, so I feel like we should not ask them to move/relicense if that does not really increase the tool's visibility.
15:38:05 the difference between this and the built-in openstack tools is that this archives things in another db, right?
15:38:34 also works for any project, iiuc
15:39:15 i mean, a sig can have deliverables
15:39:19 (which raises the other alternative solution: keep it where it is because it's actually not that useful)
15:39:20 either they relicense and get the visibility or don't, right?
15:39:28 isn't apache license a requirement for openstack?
15:39:40 ansible-sig publishes the openstack collections
15:39:48 apache license is a loose requirement for deliverables shipped as a part of openstack
15:39:53 they are not an official deliverable of openstack
15:39:56 we have a number of exceptions documented
15:39:59 dansmith: yes it is. They are happy to relicense. but if it's to end up in a dark corner of OSops, that might not be worth the pain
15:40:05 ack
15:40:09 and i'm pretty sure because of legacy reasons, openstack collections weren't apache 2
15:40:21 I guess if it's not a deliverable per se, maybe the license requirement isn't a thing?
15:40:40 #link https://governance.openstack.org/tc/reference/licensing.html Licensing requirements
15:40:49 for me the main question is to have a visible "place" in the community where ops can share their tools
15:40:51 I'm also not sure how much the "being under the big tent umbrella awning" or whatever really increases visibility anymore, but..
15:40:54 if it's a SIG thing, it does not have to be apache-2. But relicensing is not really the issue
15:41:22 I'm reluctant to encourage them to move in if it does not really increase visibility
15:41:55 something i will openly admit
15:42:00 people don't like gerrit
15:42:01 belmoreira: would you say OSops is such a place? Can anyone name one tool that is there?
15:42:20 projects will _actually_ get fewer contributions if they are on gerrit.
15:42:25 ttx how does the large scale SIG feel about it if we end up putting it under the large scale SIG?
15:42:27 people don't like lots of tools. some people also don't like github, or gitlab, or bitbucket, or...
15:42:31 The people at OVH are happy to place it under Gerrit fwiw
15:42:46 * dansmith hates github
15:42:50 there, I said it
15:42:54 same.
15:42:58 but we're a minority :)
15:43:02 if we have a lightweight project that can have releases, like other projects, it would have a lot of visibility
15:43:15 dansmith: I still don't understand how to use it properly, and when I ask, I realize nobody else uses it properly
15:43:16 dansmith, you're not alone there :)
15:43:20 if they don't mind it being under gerrit, then i don't see why they can't have it as a repo for their project
15:43:53 ttx: that's mostly what I hate about it.. encourages bad behavior, like "here are 23 commits that finally end up with all the typos fixed, kthx"
15:44:04 anyway, that's a tangent. I guess the real question is, do y'all think OSops is going to be back ni fashion
15:44:11 in*
15:44:23 i'm not sold on the visibility argument, i'll admit. i think the reasons for becoming an official deliverable would need to be for something more than just increased visibility to make it worthwhile (we can't guarantee increased visibility/adoption)
15:44:36 fungi: that's my feeling, stated above
15:44:38 if not, would you entertain OSarchiver being a full project/openstack deliverable, or is it overkill
15:45:02 if not, I guess that leaves the Large Scale SIG hosting solution
15:45:05 ttx: we were doing some stuff here.. https://opendev.org/vexxhost/openstack-tools -- i think the hardest thing about osops is we all do things differently and velocity can slow things down.
i would maybe suggest osarchiver to be a sig deliverable -- https://opendev.org/openstack/governance/src/branch/master/reference/sigs-repos.yaml
15:45:12 or leave it where it is and link to it where we can
15:45:34 hosting under large scale sig would be my vote tbh
15:45:40 mnaser: maybe all we need is some doc page that points to tools
15:45:45 wherever they are hosted
15:45:50 and promote THAT instead
15:45:55 i don't think it's overkill to make a project team for something like that, but the team responsible for the software would need to make the judgement call there as to whether it's worthwhile *for them*
15:46:07 ttx: i think that would be a lot more successful instead of a centralized repo
15:46:36 right, the overhead of common repository governance is just not worth it
15:47:15 So the trick is to find a good place to document external tools
15:47:17 if they want to take advantage of gerrit + opendev + ci access, move it into the sig, if none of that interests them, it can stay at github, and we can point some osops 'tools' page to include them
15:47:24 or SIG-maintained tools
15:48:05 any suggestion on where we could document those? I guess we could have a page on the openstack website, if all else fails
15:48:24 not sure our official docs would work a lot better
15:48:29 maybe the osops repo can have doc/ added to it
15:48:39 ttx I think I agree, providing docs and visibility at this stage will be more important than putting it all in the most reasonable place
15:48:39 and we can publish that there and point to it from the openstack website
15:48:52 I think OSops is over-complex in a world where repos are cheap
15:49:26 i just prefer any form of code review over wiki
15:49:32 we can add a link in the openstack map too as 'external tools which can be useful'
15:50:06 a deep dive article might be a good option too :)
15:50:09 OK, so how about... for OSarchiver it can stay where it is or get hosted under the Large Scale SIG. We list that tool (and others) on some page, ideally maintained by the Operations Docs/Tooling SIG
15:50:19 Then link to that page from openstack.org
15:50:49 i like that
15:51:02 Makes sense.
15:51:54 OK, I'll check with the Ops tooling/Docs SIG if maintaining such a page works for them. It's probably a better use of time than administering OSops
15:52:13 ttx: i think maybe we can keep this agenda item for next week to see where things are with this in terms of your final idea
15:52:15 and it can still self-reference OSops tools
15:52:32 Sure, I'll try to be around next week as well.
15:52:45 Thanks everyone!
15:52:55 thanks for bringing this up
15:53:06 #topic open reviews
15:54:07 i think https://review.opendev.org/c/openstack/governance/+/770616 can merge
15:54:17 we need a follow-up patch to move it to the cycle goal
15:55:16 checking in on the other patches
15:55:22 #link https://review.opendev.org/q/projects:openstack/governance+is:open
15:57:33 ok great, everything can land
15:57:37 so we just have the monasca stuff
15:57:41 +1
15:57:41 and we'll be fully cleared out on that
15:58:13 anything else? :)
15:58:25 nothing from me.
15:59:17 Nothing from me.
15:59:39 great
15:59:41 thanks all!!!
15:59:45 #endmeeting