14:04:21 <karolinku[m]> #startmeeting RDO meeting - 2023-11-15
14:04:21 <opendevmeet> Meeting started Wed Nov 15 14:04:21 2023 UTC and is due to finish in 60 minutes.  The chair is karolinku[m]. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:04:21 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:04:21 <opendevmeet> The meeting name has been set to 'rdo_meeting___2023_11_15'
14:05:23 <jcapitao[m]> o/
14:05:41 <karolinku[m]> #chair amoralej_  jcapitao[m]
14:05:41 <opendevmeet> Current chairs: amoralej_ jcapitao[m] karolinku[m]
14:06:33 <apevec> []/
14:06:46 <karolinku[m]> #chair apevec
14:06:46 <opendevmeet> Current chairs: amoralej_ apevec jcapitao[m] karolinku[m]
14:07:16 <apevec> it is NOT video Meet today?
14:07:25 <amoralej_> nope, it was last week
14:07:35 <amoralej_> it's just the first one of each month
14:08:46 <rdogerrit> Merged openstack/ovn-bgp-agent-distgit xena-rdo: Dummy commit to force rebuild  https://review.rdoproject.org/r/c/openstack/ovn-bgp-agent-distgit/+/50798
14:09:20 <karolinku[m]> let's start with the first topic
14:09:28 <karolinku[m]> #topic Testing RDO on OKD using openstack-k8s-operators
14:09:44 <amoralej_> i added that
14:09:57 <amoralej_> #link https://review.rdoproject.org/etherpad/p/rdo-on-okd
14:10:22 <amoralej_> i've been making some progress on it but i've found the first real issues
14:10:54 <amoralej_> mainly related to external operators which are in the built-in redhat-catalog but not in the community-catalog which is shipped in okd
14:11:26 <amoralej_> that is requiring some changes from regular deployment scripts in install_yamls
14:12:01 <amoralej_> i've discussed it with the nmstate operator team, i.e. hopefully they will keep it updated in operatorhub.io
14:12:51 <amoralej_> i'm documenting it in the etherpad and i will prepare a script or some way to make it work together with install_yamls
14:14:18 <amoralej_> that has been my progress on it during the last days
14:15:18 <karolinku[m]> #info rdo-on-okd testing in progress, solving external operator issues
14:15:29 <apevec> you have a list of such non-public operators?
14:15:59 <amoralej_> they are public, just not in the catalog
14:16:04 <apevec> they are on "Red Hat Marketplace" I guess?
14:16:08 <apevec> err?
14:16:11 <amoralej_> so
14:16:15 <apevec> public how?
14:16:16 <jcapitao[m]> hopefully it'll be a short list
14:16:30 <rdogerrit> Merged openstack/ovn-bgp-agent-distgit yoga-rdo: Dummy commit to force rebuild  https://review.rdoproject.org/r/c/openstack/ovn-bgp-agent-distgit/+/50797
14:16:30 <rdogerrit> Merged openstack/ovn-bgp-agent-distgit zed-rdo: Dummy commit to force rebuild  https://review.rdoproject.org/r/c/openstack/ovn-bgp-agent-distgit/+/48415
14:16:30 <amoralej_> public in the code repos
14:16:55 <amoralej_> or in some cases even in operatorshub, but not in the community-operators catalog included in okd
14:16:59 <amoralej_> which is different
14:17:20 <jcapitao[m]> is it shipped as a container to be listed in catalog ?
14:17:21 <apevec> ok, please teach us diff :)
14:17:33 <amoralej_> so, current situation is
14:17:38 <apevec> isn't operatorhub a catalog?
14:17:56 <amoralej_> yes but not included by default in okd
14:18:17 <amoralej_> there is community-operators catalog which is different to operatorhub
14:18:33 <karolinku[m]> I assume that we can't use operatorhub?
14:18:33 <amoralej_> let me explain it with examples
14:18:48 <karolinku[m]> instead of  community-operators catalog?
14:18:49 <amoralej_> actually, for some cases what i'm doing is adding operatorhub catalog
14:18:56 <amoralej_> so having both
14:19:02 <amoralej_> i.e
14:19:38 <amoralej_> 1- cert-manager can be installed from community-operators catalog, similarly to redhat-operators although i had to do some minor fixes in the script
14:20:18 <amoralej_> 2- kubernetes-nmstate operator. For that there is a very old version in operatorhub and nothing in the community-operators catalog. So i'm installing it "manually" following the upstream instructions
14:20:44 <amoralej_> ^ that's the one i was discussing with the upstream team so that they maintain it updated in operatorhub
14:22:02 <amoralej_> 3- metallb operator. That one is updated in operatorhub, so in that case i add the catalog and install it from there, but it fails in openshift (unlike vanilla k8s). I need to apply some yamls manually afterwards, which are in the upstream git repo
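For reference, a minimal sketch of what "adding the operatorhub catalog" means in practice: registering operatorhub.io as an extra CatalogSource next to the catalogs OKD already ships. The catalog image, name and namespace below are assumptions based on the upstream operatorhub.io index, not details confirmed in the meeting.

    # Illustrative only: make operatorhub.io available as an additional catalog
    # alongside the community-operators catalog shipped with OKD.
    oc apply -f - <<EOF
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: operatorhubio-catalog
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: quay.io/operatorhubio/catalog:latest   # assumed index image
      displayName: OperatorHub.io Operators
      publisher: OperatorHub.io
    EOF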
14:22:25 <apevec> where is  community-operators catalog actually hosted, url?
14:22:35 <amoralej_> yes, let me show ....
14:23:22 <amoralej_> i think it's https://github.com/redhat-openshift-ecosystem/community-operators-prod
14:24:05 <apevec> wonder why that mess was needed vs just putting everything public into operatorhub...
14:24:24 <karolinku[m]> #link https://github.com/redhat-openshift-ecosystem/community-operators-prod
14:24:39 <rdogerrit> Merged openstack/ovn-bgp-agent-distgit wallaby-rdo: Dummy commit to force rebuild  https://review.rdoproject.org/r/c/openstack/ovn-bgp-agent-distgit/+/50796
14:24:59 <apevec> we can give honest feedback, right :)
14:25:10 <amoralej_> i'm not totally sure, tbh, but given my experience with metallb it may be related to k8s vs openshift differences
14:25:55 <amoralej_> as said before, the operator in the hub does not install cleanly in openshift but needs some adjustments related to scc, which is openshift specific
14:26:48 <amoralej_> apevec, yep, i understand that's part of it, and a first achievement is getting the nmstate operator updated in operatorhub :)
14:27:08 <apevec> hmm ok, that'd make sense, but then why are there openshift specifics...
14:27:25 <amoralej_> that's wider question i guess :)
14:28:32 <apevec> I guess I'm asking for a friend, sfio we'll soon need to publish our sf-operator, so we'll need to figure out where, and I was assuming operatorhub.io is THE place
14:28:45 <jcapitao[m]> and then I guess the nmstate team will have to maintain the community nmstate right ?
14:28:49 <amoralej_> https://github.com/metallb/metallb-operator/tree/main/config/openshift if you are interested
14:29:23 <amoralej_> apevec, i'd say so
14:29:45 <amoralej_> well, depends :)
14:30:00 <amoralej_> or both :)
14:30:20 <amoralej_> definitely i'd say operatorhub.io is the main public operators repo for k8s
14:31:02 <amoralej_> but adding it to community-operators in okd makes it easier for okd users
14:32:03 <karolinku[m]> amoralej, did adding both the community-operators and operatorhub catalogs raise any conflicts?
14:32:32 <amoralej_> it may
14:32:47 <amoralej_> but when you create a subscription to install an operator you have to specify the source catalog
14:33:17 <karolinku[m]> so you have to choose one in fact
14:33:20 <amoralej_> so the user is explicit about which operator is installed from which catalog
14:33:21 <amoralej_> yes
14:33:45 <karolinku[m]> let's make third cat... oh wait
14:33:47 <amoralej_> it may be confusing to have the same operator in two catalogs, but when installing you need to choose
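To illustrate the "you need to choose" point: an OLM Subscription names its source catalog explicitly, so even if the same operator exists in two catalogs, the one being installed is unambiguous. The operator, channel and namespace below are only illustrative.

    # Illustrative Subscription: spec.source selects the catalog the operator
    # is installed from (e.g. community-operators vs operatorhubio-catalog).
    oc apply -f - <<EOF
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: metallb-operator
      namespace: metallb-system
    spec:
      name: metallb-operator              # package name in the catalog
      channel: stable                     # assumed channel
      source: operatorhubio-catalog       # explicit catalog choice
      sourceNamespace: openshift-marketplace
    EOF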
14:35:47 <amoralej_> that said, don't consider me an operators expert yet, i've been playing with that for two days so i may be wrong ...
14:35:51 <amoralej_> :)
14:37:10 <amoralej_> but definitely there are operators in the redhat-operators catalog (openshift) which are not in community-operators (okd), and there are operators in operatorhub which aren't in either. And i could add operatorhub as a new catalog
14:37:58 <amoralej_> those are the facts i found
14:39:21 <jcapitao[m]> thank you for the feedback
14:40:26 <jcapitao[m]> I think we need to check if there will be an effort between Openshift and OKD to transfer the operators
14:40:43 <jcapitao[m]> or if the community will have to do the job itself
14:41:49 <amoralej_> my current approach is to provide feedback to operators community
14:42:17 <amoralej_> afaik there are ways to automate pushing operators to operatorhub, that should be up to them
14:42:46 <jcapitao[m]> yes, for a POC i think it's the right approach
14:42:49 <amoralej_> our task will be to deploy them in the best way
14:43:03 <amoralej_> and integrate it in install_yamls the best we can
14:43:19 <amoralej_> as those represent differences between openshift and okd that need to be managed
14:43:47 <karolinku[m]> i think it would be valuable feedback
14:43:48 <amoralej_> i.e. the scripts and yamls in install_yamls do not work because they are tied to the openshift catalog name, etc.
14:43:58 <amoralej_> even in the best case
14:45:02 <amoralej_> so, my approach is to encapsulate those differences the best we can and look how to integrate in existing tooling
14:45:11 <apevec> can it be overridden? install_yamls is basically a Makefile?
14:45:34 <amoralej_> yes, makefile with scripts and yaml files
14:45:47 <amoralej_> we'll need to parametrize it via a variable i guess
14:45:53 <amoralej_> okd=1 i.e
14:45:56 <apevec> yeah
14:46:42 <amoralej_> i.e. https://github.com/openstack-k8s-operators/install_yamls/blob/main/scripts/gen-olm-cert-manager.sh for cert-manager
14:47:04 <amoralej_> https://github.com/openstack-k8s-operators/install_yamls/blob/main/scripts/gen-olm-cert-manager.sh#L83-L85
14:47:18 <amoralej_> that fails in okd
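A rough sketch of how that difference could be encapsulated in the existing bash scripts, gated by a hypothetical OKD=1 variable; the variable and catalog names here are made up for illustration and are not part of install_yamls today.

    # Hypothetical sketch: pick the catalog per distribution so the same
    # gen-olm-* script works on both OpenShift and OKD.
    if [ "${OKD:-0}" = "1" ]; then
        CATALOG_SOURCE="community-operators"   # catalog shipped with OKD
    else
        CATALOG_SOURCE="redhat-operators"      # built-in OpenShift catalog
    fi
    # ...the generated Subscription would then reference ${CATALOG_SOURCE}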
14:49:46 <amoralej_> that was it wrt okd from my side
14:49:55 <apevec> yeah, for redhat-operators you need the openshift pull-secret
14:50:23 <apevec> thanks amoralej++ for the knowledge sharing!
14:51:16 <karolinku[m]> anything else in this topic?
14:51:20 <amoralej_> np, i feel i'm just scratching the surface, but it's good learning
14:52:52 <karolinku[m]> #topic openfloor
14:53:30 <karolinku[m]> jcapitao[m], do you want to say something about the oslosphinx removal?
14:54:40 <jcapitao[m]> I retired it in Fedora, but it's not a big deal to report I'd say
14:55:30 <karolinku[m]> ack
14:55:32 <amoralej_> for rdo, will we be maintaining it in the repo while it's used upstream?
14:57:14 <amoralej_> btw if you have a chance https://review.rdoproject.org/r/q/I24f5d95de9473547eb2f47df1e75b9609cb8ec23
14:57:30 <amoralej_> we need it after modifying source-branch for antelope
14:59:21 <karolinku[m]> we are running out of time, so who would like to chair next week?
14:59:36 <jcapitao[m]> ouch the epoch
14:59:43 <amoralej_> yeap :(
14:59:49 <amoralej_> i couldn't find a better way
14:59:55 <amoralej_> i can take it
15:00:14 <karolinku[m]> #action amoralej is chairing next week
15:01:03 <jcapitao[m]> I voted +2
15:02:30 <amoralej_> ack, thanks
15:04:23 <karolinku[m]> #endmeeting