15:00:48 #startmeeting tc
15:00:49 Meeting started Thu Mar 11 15:00:48 2021 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:52 The meeting name has been set to 'tc'
15:00:56 o/
15:00:56 #topic roll call
15:00:57 o/
15:00:59 o/
15:00:59 o/
15:01:03 o/
15:01:17 ✋
15:02:12 \o/
15:02:51 #topic Follow up on past action items
15:03:05 #link http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-03-04-15.03.html
15:03:12 we don't have anything listed, so we can skip that for today
15:03:18 #topic Audit SIG list and chairs (diablo_rojo)
15:03:26 cc ricolin on this one too
15:03:45 ping diablo_rojo_phon
15:04:53 we only have one patch in review right now https://review.opendev.org/c/openstack/governance-sigs/+/778304
15:05:27 right, i think gmann brings up a good point about archiving things
15:05:36 yes
15:06:29 I thought we had ways to retire a SIG
15:06:31 should we also add SIGs to the TC liaison list so that we periodically check their status/health?
15:06:46 gmann, +1
15:07:06 gmann: ++
15:07:08 we had them in there originally
15:07:08 ricolin: yeah, we have one for moving to the 'completed' state but not for 'unfinished' or so
15:07:50 the first cycle or two that we did liaisons, we had an optional section for sigs and board-appointed committees/working groups
15:07:51 ricolin: i mean if any SIG is retired directly from the 'forming' state
15:08:06 fungi: i see.
15:08:35 while redoing the liaison assignments for the Xena cycle we can also add SIGs to our automatic assignment script
15:08:39 at the time people felt just keeping up with all the project teams was more work than we were able to get done in the cycle, but we also had much loftier goals for health measurement back then
15:08:54 yeah
15:09:05 I guess this is currently all we have for the retirement process https://governance.openstack.org/sigs/reference/sig-guideline.html#retiring-a-sig
15:09:07 it's pretty hard to keep up with all the teams with all the things we have to deal with, that's what i found anyways
15:09:45 ricolin: yeah, maybe we can add 'forming' -> 'retired' there too
15:10:10 +1
15:10:26 maybe add a reason
15:10:26 mnaser: true, at least then we know who from the TC can follow up quickly if any SIG is not active
15:10:38 and say 'folded into XYZ'
15:10:45 mnaser: ++
15:10:45 we should also ask in the doc to retire or migrate the SIG repos too, I assume
15:11:18 mnaser: +1 for the reason, nice idea
15:11:25 a reason will definitely be something good to have
15:11:53 I will update the doc to reflect these suggestions
15:12:00 thanks.
15:12:17 Will update the container SIG patch too
15:12:28 diablo_rojo_phon, ^^^
15:12:43 cool
15:12:45 and should I add SIGs to the TC liaison list if that's all ok?
15:12:48 Merged openstack/election master: Close Xena Elections https://review.opendev.org/c/openstack/election/+/779845
15:12:55 i guess we could
15:13:07 ok,
15:13:47 I think we should
15:13:53 but on the other hand
15:13:57 popup tema
15:14:14 what about popup teams
15:14:16 popup teams have a TC liaison/volunteer already.
15:14:29 oh, then we're all good :)
15:14:42 #link https://governance.openstack.org/tc/reference/popup-teams.html
15:14:47 'TC Liaison'
15:15:29 the TC liaison for image encryption should be updated
15:16:14 or are we fine having a non-TC member as the TC liaison for a popup team?
15:16:31 i think it's ok for it to just be a liaison and not necessarily a tc member
15:16:41 but maybe that's another discussion topic
15:16:42 :p
15:17:02 yeah
15:17:10 :-) I mean, he is an honorary member. :-)
15:17:13 ricolin: wanna add that to next week's agenda?
15:17:15 I think there's no need for more discussion as I trust fungi on it :)
15:17:26 Poor guy will never be able to get away.
15:17:30 mnaser, I think we're good on this
15:17:44 :) we will not let him go away
15:17:51 :-)
15:17:52 heh, yeah i was a tc member when i originally served as the sponsor for that pop-up
15:17:57 yep!
15:18:04 i still attend their weekly meetings
15:18:14 +1
15:18:34 #topic Gate performance and heavy job configs (dansmith)
15:18:42 https://media1.giphy.com/media/KczBU4M2IEdClprXaq/giphy.gif?cid=ecf05e4772p0dj0zuysiay11z145hvnovyiqbthd0thwb6nx&rid=giphy.gif
15:18:45 i want to say we originally decided that tc liaisons for pop-up teams didn't need to be tc members, i just happened to be in that case
15:18:45 oof, sorry
15:18:47 i think this one has been a rotating topic without that much progress
15:18:56 it's been a busy week for all of us i think
15:19:13 yeah, so,
15:19:15 i saw we finally caught up with our node request backlog around 02:00 utc today
15:19:21 the gate has been crazy busy
15:19:25 yeah
15:19:29 I've seen a lot of cinder failures,
15:19:29 mnaser: It has at least gotten some visibility in Cinder and we are working on cleaning up failures that are slowing the checks.
15:19:41 and the tempest queue has been somewhat problematic
15:19:58 we're definitely doing a lot of work, which is great
15:20:04 yeah, yesterday we finally got many of them merged in tempest but there were issues there
15:20:19 given the last couple weeks have been atypical (for normal, not for this part of the cycle), it's hard to tell how good we are or aren't
15:20:29 but some things have taken millions of rechecks to get landed
15:20:29 and obviously it started happening during release time
15:20:31 today is not so bad, i guess because we're at/past the freeze deadline now?
15:20:48 fungi: the major rush was yesterday for sure
15:20:51 node backlog reached nominal levels around 13:00 utc
15:21:25 there's a little bump at the moment, but there were brief periods in the past two hours where we weren't even using all of our quota
15:21:40 mnaser: personally I think this is a good thing for us to keep an eye on.. it doesn't have to be every week, but I think keeping it on the radar has yielded good stuff, IMHO
15:21:52 yeah, i think let's keep it on the radar
15:21:53 i agree
15:21:58 agree
15:22:04 ++
15:22:24 also the additional quota from inap has really helped in the past few weeks
15:22:35 fungi: yeah, really seems like it
15:22:37 things would have been much worse without it
15:22:52 yesterday it was almost eight hours to get jobs running for a while,
15:22:57 but with a huuuge queue
15:23:09 so it felt like things were doing pretty well considering all the fail
15:23:58 there's been some push on the ml to solve some cinder-related failures by switching the iscsi signalling, is it?
15:24:14 something which was causing a lot of job failures anyway
15:24:23 Yes.
15:24:39 I am not sure where that landed after discussion yesterday though.
15:24:53 switching to or from iscsi,
15:25:02 or switching something about how we use it?
15:25:08 Switching how we use it.
15:25:11 lio vs tgt i think?
15:25:16 From tgt to lio
15:25:25 ah
15:25:43 anyway, would be good not to lose sight of it with the change volume dropping as the cycle goes through its post-freeze state change
15:26:11 this all sounds good, so we'll keep watching over things :)
15:26:22 i think we can move on to the next item
15:26:26 yup
15:26:31 #topic Consensus on lower constraints testing (gmann)
15:26:54 it seems there's no objection to the proposed plan on the ML #link http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020556.html
15:27:11 which is basically 1. Only keep direct deps in lower-constraints.txt 2. Remove the lower constraints testing from all stable branches.
15:27:47 and it will be easier to maintain
15:28:08 like for nova, 77 deps can be removed from l-c #link https://review.opendev.org/c/openstack/nova/+/772780
15:28:11 #2 includes removing it from new stable branches when they get created
15:28:20 +1
15:28:53 stable branches need stable jobs, and those won't be stable over time
15:28:59 as a next step, i feel we should document it somewhere: in the project guide, the PTI, or a resolution?
15:29:07 I agree, it makes most sense to have some stub for it and fix when needed
15:29:52 I feel the PTI is the better place?
15:30:05 gmann, do you think a goal is too strong/enforcing for this?
15:30:07 or a resolution and then update the PTI
15:30:39 ricolin: it is not that strong, i think, just removing the indirect ones, which would not cause much work
15:30:40 we haven't previously required the use of lower-constraints jobs, so it seems weird to have a policy requiring something about a non-required job
15:31:28 indeed
15:31:39 i think so far the pti only lists necessary policy, so this would be a shift to also including guidance i guess
15:32:03 I am with fungi on this
15:32:07 better not
15:32:25 Agreed.
15:32:29 * mnaser personally defers to the others on this
15:32:42 true, for projects' clarity we can at least add it somewhere, since projects asked the TC to have some guidelines on this
15:32:48 i agree with the guidance, just seems like maybe not something that needs to be enshrined in openstack's governance
15:33:21 agreed
15:33:43 does the qa team maintain content in the project teams guide?
15:33:57 i do not think so
15:34:06 i wonder if a section in there on testing recommendations (not policy) would fit
15:34:07 i think the pti is the place where we all look for testing guidelines
15:34:33 well, we certainly look there for policies that the tc has officially voted on
15:35:04 the pti does not mention l-c at all
15:35:12 yeah, that's what projects were looking for: the TC deciding on l-c testing
15:35:17 in fact, the only place is the pt guide
15:35:25 yoctozepto: yes, that was the confusion i think when this was brought up on the ML
15:35:26 just remember the pti is part of openstack's governing documents (it's in the governance repository along with things like tc resolutions)
15:35:37 yes
15:35:46 and it was hard to maintain, and the pti does not talk about it, so remove it?
15:35:53 remove the testing job?
15:36:28 * yoctozepto with his masakari ptl and kolla cores hats on admits to removing all l-c jobs
15:36:38 I feel having all testing policy in a single place will be clearer
15:36:47 not a single tear was shed
15:36:55 policy yes, but is this policy when it's about something not required?
15:37:02 and 'do not test l-c on stable, and test direct deps on master' as the policy for testing
15:37:13 feels too brute
15:37:26 at least make it like 'the only requirement is to test direct deps on master'
15:37:44 so we are then adding one now, aren't we?
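As an illustrative sketch of proposal item 1 above (keep only direct dependencies in lower-constraints.txt), here is what a trimmed file could look like for a hypothetical project; the package names and version floors are invented for illustration and are not taken from the nova change linked in the discussion:

    # lower-constraints.txt -- direct dependencies only, mirroring the
    # minimum versions declared in requirements.txt
    pbr==5.5.1
    oslo.config==8.4.0
    oslo.log==4.4.0
    keystoneauth1==4.3.0
    # transitive entries (e.g. stevedore, requests, iso8601) are dropped;
    # pip resolves those from the direct dependencies' own metadata when
    # the lower-constraints job installs the project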
15:38:06 i will say we are adding the one we were already testing without any clarity
15:38:51 if we end up removing the l-c testing then I would agree
15:38:57 would make sense to query projects; perhaps some do not test l-c at all
15:39:09 fwiw, masakari had broken jobs which ran as noop with l-c, so :-)
15:39:25 just saying :D
15:39:38 yeah, because there was no clarity on whether to test it or not
15:39:44 indeed
15:40:13 so do we want to test l-c? we know the shortcomings of the newly proposed approach
15:40:17 and after checking 'who needs these' and 'whether it is worth testing or not', we ended up with: yes, we can at least test direct deps in a consistent way
15:40:21 it makes sense obviously
15:41:19 "accidental version bump, you shall not pass!"
15:41:56 Hehe
15:42:00 I would vote for making this a policy then
15:42:19 at the PTG, many projects will be discussing these (nova will, for example), so I think we should be ready with TC guidelines by then.
15:42:34 so far, openstack has not mandated lower bounds testing, but many projects used lower-constraints jobs as an ad hoc standard. recent changes in pip made it apparent they could not be easily maintained on stable branches. some projects were cool with removing their l-c jobs entirely (they're not required after all), others wanted to keep the jobs but were looking for a compromise and so we've suggested
15:42:35 that compromise is to just take them out of stable branches. none of that is policy
15:43:10 yes, none *is* at the moment
15:43:11 so maybe this is something we can leave up to the projects to decide but list the different options?
15:43:35 it's all up to individual teams if they want to run l-c jobs at all, and they can also *try* to run them in stable branches if they like tilting at windmills, but it's inadvisable
15:43:41 mnaser: projects wanted the TC to decide
15:44:04 that was how the original discussion started, when neutron asked on the ML
15:44:19 nova wants the tc to tell them whether and how to run lower-constraints jobs?
15:44:20 after oslo started the thread on dropping those.
15:44:21 so if projects want the tc to decide, then it sounds like policy
15:45:08 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019660.html
15:45:37 from here the effort to have some common guidelines started
15:46:00 a tc policy of "you can do this if you want" isn't a policy, so if some projects want the tc to make a policy about lower-constraints jobs then it sounds like they're asking the tc to require these jobs when they were not previously required. that's a lot different from mere guidelines
15:46:28 ok, so a guideline sounds like a list of approaches to take
15:46:36 well, it can be "l-c testing can be done with direct deps only on master, and is not mandatory for stable"
15:47:08 is nova asking the tc to decide how all projects will do lower bounds testing, or is nova asking the tc to provide them with some suggestions? the first is policy, the second is not
15:47:47 it's not nova, it's from all the other projects too; neutron, for example, was looking for some common strategy on this.
15:47:55 where some projects were dropping it and some were not
15:48:17 and i anticipate at least some projects will object to being required to add lower bounds testing they don't feel they have the capacity to stay on top of
15:48:49 we have a job testing it and it runs on all projects (and stable too), so why not put what we expect in the pti.
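For proposal item 2 (no lower-constraints testing on stable branches), a hedged sketch of how a project's in-repo Zuul configuration might express it, using the commonly seen openstack-tox-lower-constraints job name; the exact layout varies per project and this is not any real project's config:

    # .zuul.yaml on the master branch (illustrative only)
    - project:
        check:
          jobs:
            - openstack-tox-lower-constraints
        gate:
          jobs:
            - openstack-tox-lower-constraints

Because in-repo Zuul configuration is read per branch, simply not carrying these entries into a stable branch's .zuul.yaml (or removing them there after branching) is enough to stop the job from running on stable.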
15:49:15 and you'll need to decide how to determine what kinds of deliverables are required to have/add lower bounds testing, vs how to identify deliverables where it doesn't make sense
15:49:27 that is true for a lot of other testing also; not all projects test everything defined in the pti
15:49:39 let's recap what we know
15:49:50 1) l-c testing was largely broken
15:49:55 2) we survived
15:49:58 so?
15:50:09 no need for policy if not required :D
15:50:26 i think we should revisit this next week
15:50:26 i'd like some time to chat about the next topic.
15:50:26 it's not like upper-constraints which is centrally maintained, lower bounds are different for every project and not always trivial to identify, i'm unconvinced that it makes sense to start forcing it on project teams who don't see value in it
15:50:35 or we keep discussing this
15:50:42 and move the rest of the topics to next week
15:50:44 but yeah
15:50:54 ok for next week, as the next topic is more important
15:50:58 perhaps it's good to bring this up at the ptg
15:51:14 yoctozepto: ++
15:51:49 #topic PTL assignment for Xena cycle leaderless projects (gmann)
15:51:54 #link https://etherpad.opendev.org/p/xena-leaderless
15:52:28 We have 4 projects left leaderless and 4 projects with late candidacies
15:52:38 better than last cycle i think
15:53:00 (let's keep retiring and it will get better and better, yes)
15:53:02 That is better.
15:53:07 It is
15:53:09 out of the first 4, Mistral might go with DPL as discussed previously
15:53:21 very well
15:53:25 I volunteer as tribute for Barbican.
15:53:42 you are too kind
15:53:44 :)
15:53:46 :-)
15:53:47 nice
15:53:48 :)
15:53:54 * redrobot was not paying attention to the PTL nomination deadline.
15:54:29 redrobot: in your defense, we didn't provide as many warnings that it was coming up as we have in past cycles
15:55:19 I can reach out to the Mistral team about the DPL model
15:55:19 gmann: should we then move mistral to dpl in the whiteboard?
15:55:20 we actually ended up with a lot fewer "leaderless" results than in past cycles
15:55:23 gmann: ack
15:55:40 yoctozepto: let's check with them on the required liaison list or so
15:55:43 fungi 😅
15:55:52 gmann: yeah, I figured from your subsequent message
15:56:06 basically we need to decide on Keystone and Zaqar
15:56:18 Wow. Keystone ...
15:56:19 Zaqar seems to not have been active in the last cycle
15:56:42 Yeah, my feelings too, jungleboyj
15:56:43 maybe we can also get the release team's input on whether they are doing a wallaby release or not
15:57:36 knikolla was suggesting dpl for keystone
15:57:59 zaqar is not deployable by kolla or charms
15:57:59 I think tripleo and osa do deploy it though
15:58:09 Ok. I assume there is still enough activity there to spread out the responsibility?
15:58:37 Someone go find Brant Knudson
15:58:39 I agree with gmann that Zaqar is likely option 5 - bye-bye for now
15:58:50 for keystone? i don't get the impression keystone is dead, at least, they're on top of vulnerability reports from my vmt perspective
15:59:03 yeah, I will ping the release team about Zaqar's release status
15:59:33 agree on keystone, it is an active project, just no PTL
15:59:33 fungi: Yeah. If knikolla is recommending dpl, that seems fine.
15:59:38 https://review.opendev.org/q/project:openstack/releases+zaqar
15:59:57 yoctozepto: thanks
16:00:24 nothing in wallaby whatsoever
16:01:46 yeah
16:02:51 another important data point is also to understand if the project is actually used
16:03:50 yeah, good point
16:04:10 maybe we can check the latest user survey data
16:04:29 Makes sense.
16:05:01 yeah, on that note I already wrote about the deployment tools
16:06:13 we are out of time anyway. we can keep discussing it on the etherpad or after the meeting
16:06:14 there was not enough steam to even add it to kolla and charms :/
16:06:15 Projects like Heat use Zaqar to implement signals; it will be great to send them some notification if we are going to remove Zaqar
16:06:33 ricolin: that's interesting
16:07:00 o/ sorry i'm late
16:07:07 yoctozepto, not a hard dependency, it's just provided as one of the signal backends
16:07:21 I mean from the heat side
16:07:27 knikolla, o/
16:07:48 ricolin: yeah, I've done a quick read
16:07:58 they should be happy to maintain less code :-)
16:08:10 hi knikolla
16:08:30 oh, we are past time indeed
16:08:49 Not too badly
16:09:11 if knikolla could say a word about the keystone governance model
16:09:24 we would have an (almost) complete set of information
16:09:32 \o/
16:10:05 None of the cores have reached out to me showing interest in taking over as PTL
16:10:21 And pretty much everyone has cycled through the role (or is ptl of some other project)
16:10:38 duh
16:11:23 Maybe ping them? They might be too shy to step up?
16:11:31 ok, how about the DPL model? is anyone interested in that, or is it something you have discussed in the keystone meeting or so?
16:12:04 speaking from experience, it takes a lot of convincing for a former ptl to come out of retirement
16:12:27 I was thinking more of the cores
16:12:33 I don’t think it’s a question of shyness
16:12:38 gmann: hberaud said mistral is not releasing either
16:12:53 spotz: he was saying basically all the keystone cores are also former keystone ptls
16:13:05 ++
16:13:08 yoctozepto: yeah, but as per Renat (former PTL), he is ok to help with that
16:13:10 Ahhh
16:13:15 gmann: ahh, ack!
16:13:40 I get the impression that everyone is being pulled in other directions and doesn't feel like they have the time to commit to being PTL.
16:13:46 * bnemec sympathizes
16:13:56 bnemec: ++
16:14:14 I could not agree more
16:14:36 I’ll ping the cores privately before Tuesday’s meeting, for a final attempt
16:14:46 Otherwise, I guess DPL will be it.
16:15:00 +1
16:15:04 thanks knikolla
16:15:08 ++
16:15:52 maybe we should end the meeting
16:15:53 all right, I think we have gathered all we could
16:15:56 mnaser: ?
16:16:01 my thoughts exactly
16:16:04 sorry, yes
16:16:05 #endmeeting
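As a follow-up to the Zaqar release-activity question above, one hedged way to check whether a deliverable was released in a given series is to look for its file in the openstack/releases repository (the gerrit query linked during the meeting shows the same thing from the review side); 'wallaby' and 'zaqar' are simply the series and project discussed here:

    # no output from the grep means no wallaby releases are recorded for zaqar
    git clone https://opendev.org/openstack/releases
    ls releases/deliverables/wallaby/ | grep -i zaqar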