18:00:49 <gouthamr> #startmeeting tc
18:00:49 <opendevmeet> Meeting started Tue Aug 13 18:00:49 2024 UTC and is due to finish in 60 minutes. The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:49 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:49 <opendevmeet> The meeting name has been set to 'tc'
18:01:09 <gouthamr> Welcome to the weekly meeting of the OpenStack Technical Committee. A reminder that this meeting is held under the OpenInfra Code of Conduct available at https://openinfra.dev/legal/code-of-conduct
18:01:14 <gouthamr> Today's meeting agenda can be found at https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
18:01:18 <gouthamr> #topic Roll Call
18:01:20 <noonedeadpunk> o/
18:01:22 <JayF> o/
18:01:39 <dansmith> o/
18:01:41 <gmann> o/
18:02:16 <spotz[m]> o/
18:02:17 <gtema> o/
18:02:52 <gouthamr> noted absence: f r i c k l e r , s l a w e k
18:03:39 <gouthamr> perfect; we have everyone accounted for.. i do appreciate non-TC members waving at us as well; please ignore the $topic if you'd like to say hi :D
18:04:19 <gouthamr> lets get started
18:04:29 <gouthamr> #topic Action Items from last week
18:06:54 <gouthamr> we had an AI regarding Rocky Linux mirrors; i mentioned it in the TC summary email as a call for volunteers
18:07:48 <gouthamr> there was an AI to encourage leadership nominations; we've been doing this in several places
18:08:16 <fungi> next week will be better for getting some opendev collaboratory sysadmin input, i think. we've been a bit scattered last week and this week with folks travelling
18:08:26 <fungi> (on the rocky mirroring discussion)
18:08:32 <gouthamr> +1 fungi
18:09:20 <gouthamr> we had another AI regarding deciding if "direct outreach" can be done in a way where the fairness of the election officials isn't diluted
18:09:45 <gouthamr> direct outreach would be required/helpful only where there are no PTL nominees for a project
18:10:21 <gouthamr> i think there's an acceptable solution here:
18:10:28 <gouthamr> #link https://review.opendev.org/c/openstack/election/+/925873 (Add email template for direct reminder about nomination deadline)
18:11:10 <gouthamr> i think slaweq rethought this; instead of only encouraging the existing PTL, this email reminder will go out to all project contributors
18:11:38 * gouthamr prefers this; and thinks it might help
18:11:49 <noonedeadpunk> I also didn't connect with Neil, but I've hardly been around this week :(
18:11:53 <gouthamr> could others take a look as well?
18:12:04 <gouthamr> noonedeadpunk: ack; ty
18:12:08 <spotz[m]> Neil was at Flock last week
18:12:12 <fungi> neil didn't seem to be around either, yeah
18:12:23 <gouthamr> gmann: thanks for the review
18:12:26 <noonedeadpunk> Yup
18:12:36 <fungi> though mnasiadka did chime in on the discussion at least
18:12:47 <spotz[m]> I think he's at DevConf this week
18:13:48 <gouthamr> alright; that's all the AIs I was tracking.. anyone else got any?
18:14:48 <gouthamr> we'll chat about elections in a little bit
18:14:55 <gouthamr> #topic A check on gate health
18:15:41 <dansmith> I saw a number of typical fails this week, all volume-related, but rechecks were moderately successful, which means things aren't too bad
18:16:02 <dansmith> We also merged some new image format tests and sample generation in devstack/tempest,
18:16:22 <gmann> ++
18:16:40 <gouthamr> ncioe
18:16:41 <dansmith> which immediately started failing for nova's ceph job (because it doesn't actually ever see the image) and for the rocky jobs (because centos/rocky's qemu and mkisofs are stripped down in terms of support)
18:16:43 <gouthamr> nice*
18:16:54 <dansmith> all of that is fixed (or in the gate to be fixed), but just FYI if any other fallout happens
18:17:07 <fungi> there was a related discussion in #openstack-infra earlier today about whether we should switch the tox-py312 jobs from on-demand compiled python on debian-bullseye nodes to packaged python on the now-available ubuntu-noble nodes, or whether that would be disruptive and should wait until after the release
18:17:59 <gmann> the py312 job is non-voting this cycle, so doing it now should not be an issue; it even gives us more time to fix things before it becomes voting next cycle
18:18:01 <fungi> and also some jobs are still getting switched from running on ubuntu-focal to ubuntu-jammy (ran into release-related failures last week necessitating it, for example)
18:18:24 <gouthamr> oh; non-devstack-based jobs i presume?
18:18:29 <JayF> well, it's non-voting by default; are we sure no projects -- like SDKs -- are using it to gate against?
18:18:40 <dansmith> gmann: agree, and better to be on noble anyway
18:18:47 <fungi> clarkb was talking about possibly also switching the openstack tenant's default zuul nodeset from ubuntu-focal to ubuntu-jammy soon, which might catch some more that haven't updated yet
18:20:31 <gouthamr> ++
18:20:37 <fungi> #link https://review.opendev.org/922992 Switch the openstack-tox-py312* jobs to distro packages
18:20:42 <fungi> in case anyone has feedback
18:21:42 <gouthamr> thanks for the link, and the updates
18:21:46 <fungi> yw
18:22:01 <gmann> fungi: focal->jammy - you mean to do this at the zuul base job, or specific to the openstack-tox job?
18:22:29 <gmann> we override the nodeset in openstack-tox and other jobs explicitly
18:22:29 <fungi> gmann: the default nodeset in the base job, yes
18:22:58 <fungi> so the impact is mainly going to be to jobs that don't inherit from openstack-tox
18:23:05 <fungi> nor from devstack
18:23:30 <gmann> yeah, should be ok I think, as we already override the py-version jobs with a nodeset where that python is available by default
18:24:07 <fungi> agreed, i don't think the impact will be widespread, just possibly surprising, especially for infrequently-run jobs
18:24:12 <gmann> or we can do that next cycle, where we anyway need to move our testing to noble
18:24:27 <gmann> either way is ok for me
18:24:39 <fungi> thanks, i'll make sure to let him know once he's home
18:24:49 <gmann> thanks
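To make the nodeset mechanics discussed above concrete: a Zuul job that declares its own nodeset is unaffected when the tenant-wide default changes; only jobs that inherit the default silently move from focal to jammy. A minimal sketch in Zuul's job syntax follows (the job name is hypothetical; `openstack-tox-py312` and the `ubuntu-noble` nodes are referenced above, but treat the exact spellings here as assumptions):

```yaml
# Hypothetical example job: pins its nodeset explicitly, so a change
# to the openstack tenant's default nodeset (focal -> jammy) has no
# effect on it. Jobs with no nodeset of their own (and none inherited
# from openstack-tox or devstack) would pick up the new default.
- job:
    name: example-tox-py312
    parent: openstack-tox-py312
    nodeset: ubuntu-noble
```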
18:24:56 <gouthamr> gmann: about that, were you planning on proposing a goal for this transition? or should we ask for volunteers interested in the effort?
18:25:06 <gmann> I will do that
18:25:38 <gmann> one more info - with the new oslo.policy 4.4.0 (enabling new RBAC by default), there are many projects failing on the requirements u-c change, which I am fixing one by one: https://review.opendev.org/q/topic:%22secure-rbac%22+status:open
18:26:08 <gmann> but if any project is missed and the u-c change merges, this is something to help fix those
18:26:28 <gmann> #link https://review.opendev.org/c/openstack/requirements/+/925464
18:26:35 <gmann> this is the requirements change
18:27:06 <gmann> this only impacts projects that have not enabled the new RBAC by default; cinder and tacker are two examples
18:28:39 <gmann> that is all on this.
18:28:47 <gouthamr> thanks gmann
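For projects caught out by the oslo.policy 4.4.0 bump: the new-RBAC behaviour is governed by oslo.config options, and a service that hasn't finished its secure-RBAC migration can pin the old behaviour while the fixes above land. A minimal sketch of the service configuration (the option names are real oslo.policy options; that 4.4.0 flipped both defaults is an assumption based on the "new RBAC by default" description above):

```ini
# Sketch: restore pre-4.4.0 policy behaviour in a service's config
# file while its secure-RBAC migration is still in progress.
[oslo_policy]
enforce_new_defaults = false   # keep honouring the legacy policy rules
enforce_scope = false          # don't reject tokens with mismatched scope
```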
18:29:32 <gouthamr> on my end, devstack-plugin-ceph is finally merging changes to drop the old, problematic package-based installation scripts
18:30:02 <gouthamr> #link https://review.opendev.org/q/topic:%22cleanup-ceph-installer%22 (Cleanup ceph installer)
18:30:36 <gouthamr> this stuff was propped up with borrowed beams and duct tape
18:31:00 <gouthamr> but, i'll be happy to help people doing devstack testing with ceph to move away from it
18:31:34 <gouthamr> (it = installing ceph via packages and using scripts to run ceph on the devstack host)
18:32:32 <gouthamr> any other updates/questions/concerns to share regarding the gate?
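For anyone moving a devstack-with-ceph setup off the package-based scripts being removed above, the plugin's cephadm-based path is the replacement. A minimal local.conf sketch (the `enable_plugin` line is standard devstack syntax; the `CEPHADM_DEPLOY` and `ENABLE_CEPH_*` variable names are assumptions to verify against the plugin's current README):

```ini
[[local|localrc]]
# Sketch: deploy ceph via the plugin's cephadm path instead of the
# removed package/script-based installer (variable names assumed).
enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
CEPHADM_DEPLOY=True
ENABLE_CEPH_GLANCE=True
ENABLE_CEPH_CINDER=True
ENABLE_CEPH_NOVA=True
```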
18:33:48 <gouthamr> #topic 2025.1 Elections
18:34:14 <gouthamr> the nomination period begins tomorrow (Aug 14th) at 23:45 UTC
18:34:40 <gouthamr> however, based on prior feedback, we've pre-created directories, and you can see PTL nominees already adding their nominations
18:35:25 <gouthamr> the election officials (slaweq and ianchoi) will be sending out an email kicking off the nominations
18:36:24 <gouthamr> we have four seats to fill here; if you're not considering running for re-election, i'd like to take a moment to thank you for your service on the OpenStack TC
18:37:12 <gouthamr> if you are considering running for re-election, i'd like to thank you for your continued commitment
18:38:29 <gouthamr> that's all i wanted to get in on $topic
18:38:44 <gouthamr> does anyone have anything to share/add?
18:40:10 <gouthamr> #topic 2024.2 TC Tracker
18:40:14 <gouthamr> #link https://etherpad.opendev.org/p/tc-2024.2-tracker (Technical Committee activity tracker)
18:41:26 <gouthamr> one update i can note here is from thuvh on the monasca change
18:41:40 <gouthamr> #link https://review.opendev.org/c/openstack/governance/+/923919 (Inactive state extensions: Monasca)
18:43:04 <gouthamr> anything else that needs to be brought up here, today?
18:44:15 <gtema> not from me
18:44:49 <spotz[m]> not me
18:45:21 <gouthamr> thank you; lets move on to open discussion
18:45:25 <gouthamr> #topic Open Discussion and Reviews
18:46:01 <gouthamr> think we've been on top of reviews! thank you gmann for all the DPL transition patches
18:46:21 <fungi> it may merit recalling the discussion last week about kolla's desire to integrate support for using hashicorp consul (a non-open-source software product). whether it posed any legal risks got discussed on the legal-discuss mailing list, but the outcome of that discussion has since been copied to openstack-discuss for further input (like whether the tc wants to place any limits on whether kolla should install non-open-source software in upstream tests, whether kolla can distribute pre-built container images containing non-open-source software, and so on)
18:46:23 <gmann> np!
18:47:23 <JayF> Honestly, I don't like it, but I think it's better to let that project team make that decision.
18:48:21 <noonedeadpunk> Well, we had quite some discussion about the ability to have gpl2-licensed code for Ansible collections
18:48:36 <spotz[m]> I don't care for it either, because if folks don't have a license they could be stuck
18:48:39 <noonedeadpunk> As there was a need to import Ansible libs
18:48:54 <fungi> #link https://lists.openstack.org/archives/list/openstack-discuss@lists.openstack.org/message/TDHNR76BXRF6MJGRKOECM7H3CT5ICPZS/ LICENSE question - Openstack Masakari
18:48:56 <noonedeadpunk> And the only proposed way back then was to manage it under a sig
18:49:15 <noonedeadpunk> Which really results in a slightly annoying situation right now
18:50:14 <noonedeadpunk> So if projects are allowed to do so, it would be interesting to re-raise the discussion about the Ansible collections sig as well
18:51:17 <fungi> i'll note on the topic of gpl2, the concern was specific to license incompatibility with apache, but that has very recently been a topic of debate as well, and there are some legal opinions that there may not be any actual risks to combining apache and gpl2 licensed software
18:51:46 <noonedeadpunk> But bsl is not compatible either, is it?
18:51:57 <fungi> but the kolla/masakari/consul discussion is something altogether different. gpl2 is an open source license; consul uses a non-open-source license
18:52:00 * noonedeadpunk not a lawyer though
18:53:06 <gouthamr> i think using community resources to test non-open-source-licensed software should be discouraged; can kolla live without this? and are scripts to avoid consul usable (and shipped/supported)?
18:53:25 <fungi> how you combine them has some effect on whether you need the licenses to be compatible
18:53:34 <gouthamr> #link https://governance.openstack.org/tc/resolutions/20170530-binary-artifacts.html (2017-05-30 Guidelines for Managing Releases of Binary Artifacts)
18:53:44 <noonedeadpunk> There's actually a proposal to masakari to use k8s for detecting compute state
18:53:57 <gouthamr> (^ fungi pointed this out during the original discussion; thanks)
18:54:07 <noonedeadpunk> It's at the blueprint stage right now, but it would eliminate the need for consul there
18:54:41 <fungi> yeah, the tc resolution on binary artifacts mainly just requires that the kolla team not claim openstack is supporting production use or claiming responsibility for the images they publish
18:55:06 <noonedeadpunk> Though I kind of don't like either method for masakari... I really don't understand why they can't use some tooz driver for detecting if a node is down
18:55:16 <gouthamr> #link https://blueprints.launchpad.net/masakari/+spec/host-monitors-by-kubernetes (host monitor by kubernetes)
18:55:40 <noonedeadpunk> Anyway...
18:57:16 <gmann> next item, about testing runtime
18:57:22 <gmann> Testing runtime for 2025.1 (SLURP release) is ready for review. Please check, or if needed, we can discuss it here also. I have noted the diff from the previous cycle runtime in the commit-msg.
18:57:23 <gmann> #link https://review.opendev.org/c/openstack/governance/+/926150
18:57:59 <gmann> as this is a SLURP release, along with ubuntu noble as our default testing distro version, we will keep testing jammy also for a smooth upgrade
18:58:09 <gmann> and keeping py3.9 as min
18:58:27 <gmann> I proposed the job templates also
18:58:37 <gmann> #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926152/1
18:58:45 <noonedeadpunk> I wonder if this hashi part can be in a separate repo from the rest of kolla?
18:58:50 <gmann> this is for the upcoming stable/2024.2
18:59:18 <gmann> #link https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/926153/1
18:59:20 <gmann> and this is for 2025.1 ^^
18:59:22 <noonedeadpunk> Just to be able to govern it somehow differently from the rest of kolla, as well as allow users to avoid using these bits
18:59:35 <gouthamr> intertwined communication; fungi, noonedeadpunk: maybe we can add this as a topic to next week's meeting?
18:59:39 <fungi> noonedeadpunk: yes, one of the questions i asked and never got answered from the kolla end is whether it could be made to work with hashicorp's container images and/or whether the kolla team intends to redistribute copies of consul in their own container images
18:59:51 <gtema> we do plan to ask projects to drop 3.8 though, right?
19:00:00 <fungi> gouthamr: yes, or just follow up on the mailing list in the meantime
19:00:11 <gouthamr> perfect
19:00:15 <gmann> gtema: yes, that is ongoing; I am going to push some in this cycle and at the start of next cycle
19:00:24 <gouthamr> gmann: thank you for working on the PTI/runtime updates!
19:00:31 <gouthamr> we're at the hour
19:00:32 <gmann> gtema: but we will be able to drop that before m-1 of 2025.1
19:00:40 <gtema> gmann - perfect, thanks
19:00:58 <gouthamr> fungi: thanks for bringing this up; i'll set a topic for next week; and we can gather some context as well
19:01:06 <gouthamr> thank you all for attending this meeting
19:01:19 <gouthamr> i'll see you here next week \o/
19:01:22 <gouthamr> #endmeeting
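As a footnote to the testing-runtime discussion above: projects consume the per-cycle runtime through the shared job templates gmann linked, typically by listing a template in their project stanza rather than individual tox jobs. A minimal sketch (the unversioned `openstack-python3-jobs` template name follows the openstack-zuul-jobs convention; treat the exact wiring for 2025.1 as an assumption until 926152/926153 merge):

```yaml
# Sketch of a project picking up the cycle's python testing runtime
# via the shared template, rather than listing unit-test jobs one by one.
- project:
    templates:
      - openstack-python3-jobs   # branch-aware; would map to the 2025.1 runtime once 926153 lands
```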