16:00:10 #startmeeting tc
16:00:10 Meeting started Wed Nov 30 16:00:10 2022 UTC and is due to finish in 60 minutes. The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:10 The meeting name has been set to 'tc'
16:00:16 #topic Roll call
16:00:17 o/
16:00:18 o/
16:00:19 o/
16:00:22 o/
16:00:30 o/
16:01:05 Hello all.
16:01:47 o/
16:02:21 let's wait for a min if rosmaita spotz arne_wiebalck noonedeadpunk join, I cannot see any name in the absence section
16:02:31 meanwhile this is today's agenda #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions
16:02:35 o/
16:04:02 let's start
16:04:06 #topic Follow up on past action items
16:04:14 gmann to check with foundation about zoom pro account if any and can be shared with TC monthly video call
16:04:36 still did not get a response from the foundation staff, I will send a reminder
16:04:40 #action gmann to check with foundation about zoom pro account if any and can be shared with TC monthly video call
16:04:59 the other followup is about election things
16:05:21 I have pushed the TC charter change to reflect what we discussed in PTG
16:05:22 #link https://review.opendev.org/c/openstack/governance/+/865367
16:05:25 o/ here for a few minutes but as mentioned when this time slot was chosen I have a conflict
16:05:28 please review ^^
16:05:44 spotz: ack
16:05:58 as soon as we can merge it, it will be good to plan for the next election in advance
16:06:36 o/
16:06:42 I will keep an eye on gerrit for any comments on that
16:06:54 #topic Gate health check
16:07:21 the volume detach timeout is happening 100% in the lvm job (nova-lvm) and we had to disable those failing tests for now
16:07:30 other than the nova thing, no gate health concerns from me this time, but with the holiday my data is limited
16:07:31 it started when we migrated to Jammy
16:07:39 ok
16:07:48 volume detach seems to be a constant problem
16:08:22 we had one issue with trunk ports and live-migration in Neutron but it should be fixed already
16:08:22 yeah, even those failing tests prepare the server to be SSH-able and make it ready before the volume is attached/detached
16:08:31 slaweq: +1
16:09:22 any other issues/items to highlight for gate health?
16:10:20 #topic 2023.1 TC tracker checks
16:10:29 #link https://etherpad.opendev.org/p/tc-2023.1-tracker
16:10:45 one update from me is on election things, where I already mentioned the patch to review
16:11:03 and the other is about the i18n SIG discussion in the board meeting on Dec 6th.
16:11:13 rosmaita: ^^ can you give a brief update on this
16:11:32 sure
16:11:58 we will propose to the Foundation Board that they fund a hosted version of Weblate
16:12:13 which the i18n SIG has identified as the best platform to use
16:12:56 we also proposed that the Foundation Staff should help us find a volunteer to change the Zanata-to-gerrit plumbing to use weblate
16:13:24 this will free up the i18n team to do the content migration and ultimately focus on doing translations
16:15:21 yeah, it will be discussed in the Dec 6th board meeting and let's see how it goes
16:16:06 rosmaita: anything else on this? or any query from any other tc-members?
16:16:31 nothing from me
16:16:54 o/
16:17:07 sorry, was side-pinged last moment :(
16:17:18 np!
16:17:26 ok, let's move on
16:17:44 any other updates from anyone on the tracker items?
16:18:26 #topic FIPS testing on ubuntu paid subscription
16:18:30 #link https://review.opendev.org/c/openstack/project-config/+/861457
16:19:13 as we discussed in PTG also, with the currently available options to test FIPS, using the ubuntu paid subscription is one of the options where we can make FIPS jobs voting
16:19:26 centos stream is not stable enough to test FIPS in the gate
16:19:47 opendev is looking for official TC agreement on this.
16:20:13 note that tinwood, the openstack charms ptl, was able to negotiate a free subscription for our test jobs to use
16:20:22 I would like to start the voting on this but before that, if we need more discussion/questions, we have ade_lee here to explain the situation
16:20:43 so while people normally pay for access to ubuntu's fips-enabled packages, we're getting gratis access to them
16:21:04 I'll note I had a conversation with some of the opendev folks about this when it surfaced, and my impression is that a developer would be able to access the actual-fips-packages to do local testing of a failing job, just without support. As long as this is accurate (developers can troubleshoot broken fips jobs locally), I am on board.
16:21:50 I wonder if they would need to have this free subscription that covers 5 machines for that?
16:22:00 that's correct. the agreement was that the subscription is free for use for fips testing, but with no support
16:22:51 note that the canonical folks didn't say they were providing free access to developers, just to our ci jobs
16:23:07 can that agreement be made public?
16:23:12 and they acknowledge that we can't reasonably *secure* that access due to the way our jobs are run
16:23:15 How could we expect developers to be able to maintain FIPS support if they cannot test that outside of zuul?
16:23:37 yeah I was wondering about that, we will not be able to test it locally right? it is just in CI
16:23:55 do we expect developers to maintain osc support for rackspace's api without a paid account in rackspace?
16:24:09 I feel like most of the things you'd need to 'debug' from a fips job failure is not something you'd really necessarily have to reproduce locally. It'd be like an exception complaining that md5 isn't available or something
16:24:26 or is the free rackspace account we arranged for osc testing not acceptable practice either?
16:24:28 which is pretty straightforward, and once we're gating on these, it would be when you add a new feature that is using something like that
16:24:58 fungi: the same goes for all the special hardware support we have in lots of places
16:25:08 fungi: I have zero $10k nvidia GPUs locally
16:25:19 true, all backends
16:25:23 correct, though for the most part we don't currently test those upstream because they're not available to us
16:25:35 do we test gpus for real? huh...
16:25:36 I'm still waiting for my s/390 machine in the mail
16:25:43 I'll note that many of those special-hardware-support tests are non-blocking, too (at least in Ironic they don't get to vote)
16:25:52 dansmith: but isn't it that things which require e.g. special hardware for testing are in 3rd party jobs?
16:26:19 but are we making fips required to pass for all projects?
16:26:21 at least in neutron it is like that - if something requires special hardware, we don't gate with such a job
16:26:28 slaweq: yeah, but I think in most projects if a patch causes a particular driver to fail, a core team would expect that to be fixed before merge, no?
16:26:37 I guess it's up to projects to decide whether to add them or not after all?
16:26:57 that's why i brought up osc's testing. the client/sdk team has arranged gratis accounts with public cloud providers for use in upstream testing, something which an individual developer would not have access to without similar negotiations
16:26:58 but will it be hard to fix the FIPS things just from a CI failure, when we are not able to reproduce them locally?
16:27:02 dansmith: sure
16:27:03 the FIPS thing seems much easier to debug than hardware
16:27:08 yeah
16:27:14 gmann: right, I think it's much easier
16:27:24 "sorry MD5 not allowed, *sad trombone*" is pretty straightforward
16:28:31 we can always try to add it, and if there are too many issues with it and it is hard to debug without access to the FIPS env for developers, we can always remove those job(s) from the gate
16:28:38 * jungleboyj is happy to see someone else use sad trombone. :-)
16:28:53 right, well, that's what we've done up until now.. we've added them and then when centos breaks, we make them n-v
16:28:56 but probably it will be as dansmith says - it will be pretty easy to fix issues related to FIPS
16:29:01 true, we can re-iterate based on future situations
16:29:03 so if this becomes a problem, it's easy to do the same
16:29:08 ok, so this does not seem blocking to allow CI testing on it
16:29:48 AFAIK, none of the times we've had to do that have been due to fips problems, just CS problems
16:29:49 and if any other distro (free version) becomes stable/available then we can always move FIPS jobs to that
16:30:01 also that ^ :)
16:30:03 I'll note that in the review posted here earlier, debian was listed as an alternative
16:30:11 but a nonviable one because we don't use it for many other things (yet)
16:30:25 though it is in the pti for 2023.1
16:30:27 So I don't think it's reasonable to say "if a free version becomes available" -- it is available
16:30:37 JayF: becomes "viable"
16:30:51 also *stable* enough to test everywhere
16:31:04 the FIPS goal is to add jobs in every project as voting
16:31:05 part of the problem has been that CS9 breaks or behaves differently than *all* the other tests, and so it becomes a problem for people who are set up to test on ubuntu
16:31:12 I mean, viability is all a matter of whether we want to spend time on it. And debian is incredibly stable. I'm not suggesting we change it, I just don't want us to couch this as us having no choice; we do have a choice
16:31:25 we're just prioritizing using existing infra + a token over revamping the infra to use a more free solution
16:31:27 right, someone needs to work out the logistics for it and make sure the baseline testing on that platform without fips is also running and in good shape so that fips-specific issues can be differentiated from general platform-related issues
16:31:29 debian is similar in that it becomes some work if someone has to bootstrap the debian environment to see if it's a debian problem or a FIPS one
16:31:43 the question is what are the current best and stable/viable options to start it
16:31:58 debian being stable isn't the concern, it's debian being different than all the other jobs
16:32:09 we had a debian failure for a few weeks recently
16:32:21 which wasn't a debian problem (AFAIK) but just a "different than focal" problem
16:32:27 Which I'm fine with; I just don't want us to pretend like we don't have fully-foss options when we would if we prioritized debian CI support.
16:32:52 I'm prioritizing this purely as a "minimal change from our other jobs" perspective... right.
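[The "MD5 not allowed" failure discussed above is the typical symptom a FIPS job surfaces. A minimal sketch of that failure mode and the usual fix, assuming a FIPS-enabled OpenSSL build; the usedforsecurity keyword is standard Python hashlib behaviour from 3.9 onward, not anything OpenStack-specific.]

    import hashlib

    payload = b"example image data"

    try:
        # On a FIPS-enabled OpenSSL build, plain MD5 is typically rejected
        # with a ValueError because MD5 is not an approved algorithm.
        digest = hashlib.md5(payload).hexdigest()
    except ValueError:
        # Python 3.9+ lets the caller declare a non-security use (e.g. an
        # image checksum), which FIPS mode generally permits.
        digest = hashlib.md5(payload, usedforsecurity=False).hexdigest()

    print(digest)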
16:33:02 note that nobody has said ubuntu's packages for this aren't fully foss, they just charge money for access to them (that doesn't make them not free/libre open source)
16:33:13 to be fair, we have debian and ubuntu jobs running in osa and we haven't seen debian being much different or failing differently than ubuntu for quite a while
16:33:35 noonedeadpunk: we had an example in the devstack gate just a couple weeks ago
16:33:39 but yes, given that we're currently running ubuntu, we can't use debian only for fips
16:34:04 open source licenses don't preclude someone from charging money when distributing the software
16:34:15 that's fair; I should've been more precise with my choice of words
16:34:22 Well, the last time I used devstack was 3 years ago or so, so it's hard to judge
16:34:37 noonedeadpunk: https://review.opendev.org/c/openstack/devstack/+/864135
16:35:07 (turned out to be within our control, we just didn't know it, because it was different)
16:35:20 at least in the current situation, if debian becomes well tested on the projects' side also and has voting jobs, then nothing stops us from moving FIPS jobs to that
16:35:23 sounds like we're circling the drain of agreement, shall we vote?
16:35:33 getting charged for something to me sounds like the opposite of free
16:35:43 frickler: beer and speech :)
16:35:51 :-)
16:35:54 the key part is: do we want to wait for FIPS testing or can we start with the ubuntu paid (free for us) subscription for now?
16:36:23 any other point/discussion? otherwise let's vote
16:37:21 gmann: Can you post the question for the vote before starting the vote? :)
16:37:27 seems we are good to vote, just to make sure and to avoid invalid voting :) this is the wording of the vote: "Considering the currently available options, Is it ok to use the Ubuntu paid subscription for FIPS testing?"
16:37:32 noonedeadpunk: yeah ^^
16:37:38 fine with me
16:38:33 silence means no objections I guess?
16:38:35 i am ready
16:38:44 ++
16:38:46 yeah, let's start
16:38:47 #startvote Considering the currently available options, Is it ok to use the Ubuntu paid subscription for FIPS testing? Yes, No
16:38:47 Begin voting on: Considering the currently available options, Is it ok to use the Ubuntu paid subscription for FIPS testing? Valid vote options are Yes, No.
16:38:47 Vote using '#vote OPTION'. Only your last vote counts.
16:39:00 #vote Yes
16:39:00 #vote Yes
16:39:04 #vote Yes
16:39:04 #vote Yes
16:39:06 #vote Yes
16:39:11 #vote yes
16:39:21 #vote yes
16:40:25 #vote yes
16:40:28 I think knikolla[m] and spotz are left to vote, waiting for a min to see if they will.
16:40:43 #vote Yes
16:40:48 cool
16:40:52 #endvote
16:40:52 Voted on "Considering the currently available options, Is it ok to use the Ubuntu paid subscription for FIPS testing?" Results are
16:40:52 Yes (9): spotz_, knikolla[m], gmann, slaweq, arne_wiebalck, rosmaita, dansmith, JayF, noonedeadpunk
16:40:53 sorry, am in two meetings at the same time.
16:40:59 knikolla[m]: ack, np
16:41:20 ok, so we have the agreement now, I will post the link to the project-config patch
16:41:27 moving to the next topic
16:41:28 fungi: can you send it?
16:41:44 #topic Adjutant situation (not active)
16:41:52 Last change merged Oct 26, 2021 (more than 1 year ago)
16:41:59 Gate is broken
16:42:01 dansmith: send what
16:42:04 ?
16:42:16 fungi: merge the fips patch
16:42:21 oh, sure
16:42:40 thanks all
16:42:47 it seems the Adjutant situation has not improved since we abandoned the proposal of marking it 'inactive'
16:42:49 after the meeting adjourns i'll include a link to the minutes in the change approval
16:42:50 #link https://review.opendev.org/c/openstack/governance/+/849153
16:42:57 ade_lee: thanks for all the effort on this goal
16:43:01 fungi: thanks
16:43:32 and I pinged the PTL on IRC, I think a couple of times, but no response
16:43:41 even no response on gerrit
16:44:20 I feel we should mark it as 'Inactive' (restoring the above patch) and then we will see whether we can retire it if there is no maintainer, or not?
16:44:42 I would vote +1 to such a patch given the current situation
16:44:57 ++
16:45:01 sounds fine to me
16:45:17 out of curiosity, has anyone sent e-mail to the ptl? i suppose it's possible they're just unaware of the responsibilities they signed up for or what they should be paying attention to
16:45:30 Unfortunately we have stopped using Adjutant in our cloud in favor of a different solution so I can't justify the time to track its state as I used to a few years ago.
16:45:32 overall stats of Adjutant are like https://paste.opendev.org/show/b4OWBZ74G8t6QKKs4Fqq/
16:45:47 I'm +1 for marking it as "inactive"
16:46:07 fungi: I do not think they are unaware, they acked in the governance patch (marking inactive) and we abandoned the same
16:46:23 according to the paste, its jobs are not really broken :D
16:46:25 got it
16:46:32 but anyway, let me propose it as inactive and I will also send an email to the PTL to ack it
16:46:38 Last time I looked at adjutant it was broken mainly due to the django version
16:46:40 yeah, and I think that it was someone who was ptl for many cycles already, not someone pretty new
16:46:49 yeah
16:47:12 oh, thanks, for some reason i thought adjutant ended up with a volunteer new ptl eventually
16:47:14 #action gmann to mark Adjutant as 'inactive' and send notification to PTL
16:47:30 ok, moving on
16:47:34 #topic Recurring tasks check
16:47:39 Bare 'recheck' state
16:47:48 #link https://etherpad.opendev.org/p/recheck-weekly-summary
16:47:55 slaweq: over to you
16:47:56 all good there I think
16:48:09 the average number of rechecks before merge is a bit higher this last week
16:48:26 but as we had an issue in neutron and there were some other issues also, I think it's just related
16:48:32 and should be better next week
16:48:37 yeah
16:48:39 bare rechecks are good
16:48:48 thanks for monitoring
16:49:01 especially for projects where there are more rechecks - most of them aren't bare
16:49:07 that's all
16:49:14 that is nice
16:49:14 That is a good improvement.
16:49:31 #topic Open Reviews
16:49:31 sweet
16:49:34 #link https://review.opendev.org/q/projects:openstack/governance+is:open
16:49:40 the tc charter change we already talked about
16:50:00 sounds like 865367 is ready for merge?
16:50:05 fungi: for all other project updates, we are waiting for project-config patches to merge, can you please check
16:50:11 or do we need 2/3 of the tc for this one?
16:50:34 noonedeadpunk: yeah
16:50:49 ok, that is all from today's agenda, anything anyone would like to bring up?
16:50:52 gmann: i can take a look
16:50:57 fungi: thanks
16:51:23 FYI, next meeting is a video call on Dec 7.
16:51:32 so,
16:51:59 I will be out the rest of the year before our next meeting, just FYI so I shan't be around for lively discussion or to open a video call on our gmeet, if we still do that
16:52:24 hopefully rosmaita slaweq or spotz can do it, if we still need it pending the zoom stuff
16:52:29 dansmith: ack
16:52:32 sure
16:52:34 sure
16:52:40 i should be around
16:53:35 if nothing else, let's close the meeting. Thanks everyone for joining.
16:53:39 #endmeeting