09:00:08 <flwang1> #startmeeting magnum
09:00:09 <openstack> Meeting started Wed May 20 09:00:08 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:12 <openstack> The meeting name has been set to 'magnum'
09:00:18 <flwang1> #topic roll call
09:00:25 <flwang1> o/
09:00:26 <strigazi> o/
09:01:11 <flwang1> brtknr: ?
09:01:20 <brtknr> o/
09:01:50 <flwang1> #topic PTG
09:01:51 <jakeyip_> hi :)
09:02:01 <flwang1> jakeyip_: hi Jake, how are you
09:02:18 <flwang1> did you guys see our PTG time?
09:02:37 <jakeyip_> no actually. any links?
09:02:57 <flwang1> 3rd June   21UTC-23UTC     http://ptg.openstack.org/ptg.html
09:03:26 <jakeyip_> thanks
09:03:29 <flwang1> pls put the topics you'd like to discuss on PTG to this etherpad https://etherpad.opendev.org/p/magnum-victoria-ptg
09:04:01 <brtknr> o/ jakeyip_ long time no see
09:04:28 <jakeyip_> o/ brtknr good to see all of you again :)
09:04:50 <flwang1> jakeyip_: keen to upgrade to ussuri? :D
09:04:51 <strigazi> jakeyip_: o/
09:04:54 <jakeyip_> sorry just been lurking, was busy with other things
09:05:00 <jakeyip_> we are going to train only actually :P
09:05:24 <flwang1> btw, guys, catalyst cloud just upgraded to stable/train this week
09:05:29 <flwang1> and so far so good
09:05:44 <flwang1> of course, we cherry-picked some features from master :)
09:05:49 <brtknr> flwang1: i tried to sign up to CC again, it's really hard to get an account...
09:05:52 <strigazi> jakeyip_: you want this https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/cluster.py#L185
09:05:55 <jakeyip_> nice. we were trying to move to Octavia/OVN, was a nightmare
09:06:16 <flwang1> brtknr: it shouldn't be, can you send me an email and I can follow up?
09:06:58 <brtknr> flwang1: sure will do
09:07:22 <flwang1> brtknr: strigazi: any question for the ptg?
09:07:34 <brtknr> flwang1: nope all good
09:07:40 <strigazi> jakeyip_: this is the design https://specs.openstack.org/openstack/magnum-specs/specs/ussuri/labels-override.html  :)
09:07:44 <strigazi> flwang1: no Qs
09:07:58 <flwang1> thanks
09:08:12 <strigazi> flwang1: put the date and/or link in the etherpad?
09:08:13 <flwang1> #topic helm config
09:08:23 <flwang1> strigazi: sure, will do
09:08:33 <strigazi> flwang1: shall I start on helm-config?
09:08:43 <jakeyip_> strigazi: oo nice the labels will be super useful for us
09:08:50 <flwang1> strigazi: go ahead
09:08:57 <jakeyip_> strigazi: sorry please go ahead
09:09:25 <strigazi> I put the SPEC up after discussion within our team, dtomasgu will drive the implementation.
09:09:43 <strigazi> I added example user workflow too
09:10:03 <strigazi> The "burning" issue is what will the datatype be.
09:10:53 <flwang1> you mean in db?
09:11:08 <flwang1> i saw you suggested Text
09:11:12 <strigazi> At CERN we agreed on json/yaml but I'm not sure if what I added here is possible:
09:11:29 <strigazi> https://review.opendev.org/#/c/727756/3/specs/victoria/helm-config.rst@106
09:12:12 <strigazi> I added an alternative that we rejected downstream but I added there for completeness:
09:12:25 <strigazi> https://review.opendev.org/#/c/727756/3/specs/victoria/helm-config.rst@95
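(For context on the user workflow under discussion: a minimal sketch, assuming a file-based override; the --helm-values flag name and the values keys below are hypothetical and not taken from the spec.)

    # write a Helm-style values override (keys are illustrative)
    cat > my-values.yaml <<'EOF'
    ingress:
      enabled: true
    metrics:
      enabled: false
    EOF
    # pass it at cluster creation time; --helm-values is a hypothetical flag name
    openstack coe cluster create my-cluster \
      --cluster-template my-template \
      --helm-values my-values.yaml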
09:13:14 <openstackgerrit> Spyros Trigazis proposed openstack/magnum-specs master: Add helm-config  https://review.opendev.org/727756
09:13:22 <strigazi> thanks for the typo ^^
09:13:39 <flwang1> ok, i will review it again
09:14:12 <strigazi> Also, could we merge these two: https://storyboard.openstack.org/#!/story/2003902 https://storyboard.openstack.org/#!/story/2004215 ?
09:14:15 <strigazi> brtknr: ^^
09:14:42 <strigazi> dtomasgu: wants to do them together (I thought he would be here)
09:14:52 <brtknr> sure
09:15:02 <brtknr> they're similar but slightly different things
09:15:16 <jakeyip_> is it possible to copy a cluster template? or get a config from an existing one?
09:15:20 <brtknr> but if we can have one config file to rule them all, that sounds good to me
09:15:28 <strigazi> (I think they are not that similar, but I don't want to be the weird guy)
09:15:33 <dioguerra> Im here
09:15:36 <brtknr> one config file that has labels, helm_charts, helm_requirements, everything
09:15:50 <brtknr> am i correct in my understanding?
09:16:25 <strigazi> It will be a new config option that contains everything I think
09:16:29 <dioguerra> That was my original idea yes
09:16:41 <strigazi> mutually exclusive with commandline args
09:16:48 <brtknr> that sounds good to me :)
09:16:54 <brtknr> I will stop using terraform after that
09:16:57 <dioguerra> although splitting is not a bad option either because of the size. One file for the cluster, other files for the rest
09:17:16 <brtknr> splitting?
09:17:33 <dioguerra> helm and cluster-configuration?
09:18:17 <brtknr> i dont think size should be an issue
09:18:28 <brtknr> if we are parsing everything on client side?
09:18:36 <dioguerra> :+1:
09:19:04 <brtknr> anyway this discussion requires another spec tbh
09:19:16 * strigazi thinks it is, but he stopped caring about it
09:20:12 <brtknr> thinks size is an issue or thinks this is a discussion for another spec?
09:21:20 <strigazi> It is different work, one is to restructure/enhance the client, the other is to add new fields in the API and client
09:21:29 <strigazi> but, let's merge to speed things up
09:22:02 <flwang1> not sure if it's in the scope of this spec, so please correct me if this is a silly question. with this helm-config, is it possible that we disable an addon for an existing cluster?
09:22:18 <flwang1> or enable an addon for an existing cluster
09:22:29 <strigazi> flwang1: it makes it more doable
09:22:50 <strigazi> we can add one action and easily trigger a "helm upgrade"
09:23:02 <flwang1> yep, that's my understanding
09:23:09 <flwang1> i'd like to have it in the future
09:23:30 <strigazi> still out of scope of this (if we want a reasonably narrow scope)
09:23:49 <flwang1> fair enough
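(Out of scope for the spec as noted, but for illustration of the "one action" strigazi mentions: toggling an addon on a running cluster maps to a plain helm upgrade; the release name, chart path and value key below are hypothetical.)

    # flip an addon off on an existing release and let helm reconcile it
    helm upgrade my-release ./magnum-meta-chart \
      --reuse-values \
      --set someAddon.enabled=false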
09:23:52 <strigazi> flwang1: brtknr, Note:
09:24:29 <strigazi> at CERN we are using fluxCD which upgrades automatically helm deployments
09:24:55 <dioguerra> brtknr: i think enabling/disabling with the values.yaml will be possible (bypassing labels)
09:25:20 <strigazi> this is an answer to which question? ^^
09:26:08 <jakeyip_> sorry silly question, is this to provide a list of charts or 1 chart to a COE ?
09:26:23 <dioguerra> 11:22 flwang1, sorry wrong person
09:26:35 <strigazi> a meta-chart with subcharts #mindblown
09:27:06 <jakeyip_> ok
09:27:23 <strigazi> jakeyip_: example https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/requirements.yaml#L15
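(The gitlab chart linked above is a real-world instance of the pattern; a minimal sketch of a meta-chart's requirements.yaml, with placeholder names, versions and repositories:)

    # a meta-chart declares its addons as subchart dependencies,
    # each one switchable via a condition in values.yaml
    cat > requirements.yaml <<'EOF'
    dependencies:
      - name: ingress-controller
        version: 1.2.3
        repository: https://example.org/charts
        condition: ingress-controller.enabled
      - name: metrics-server
        version: 4.5.6
        repository: https://example.org/charts
        condition: metrics-server.enabled
    EOF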
09:27:25 <brtknr> it will be interesting to see how labels and helm_values override each other :)
09:27:52 <strigazi> brtknr: not really, pretty straightforward. mutually exclusive
09:28:33 <strigazi> anything else on helm-config?
09:28:43 <brtknr> eg. nginx_tag passed via label vs tag passed via values.yaml
09:28:49 <strigazi> agree to merge the work and the spec?
09:28:56 <brtknr> yes
09:29:04 <strigazi> brtknr if you do this: 401
09:29:43 <brtknr> oh so make labels mutually exclusive with helm_values option?
09:29:59 <strigazi> yes, to silly-proof
09:30:55 <dioguerra> brtknr: both default to the same and you do a XOR
09:31:24 <dioguerra> will see...
09:32:06 <brtknr> ok but some labels are not relevant to helm charts
09:32:13 <strigazi> I'm not writing XOR anywhere in the SPEC. I will add to the SPEC that if a meta-chart is used, it throws an error.
09:32:16 <brtknr> what about those?
09:32:41 <brtknr> or are we going to identify the labels which are relevant to charts and only exclude those?
09:32:44 <strigazi> I will add the silly-proof sections in the spec.
09:32:53 <strigazi> That's what I said.
09:32:59 <brtknr> ok
09:33:24 <brtknr> my bad
09:33:36 <strigazi> I will add a new section in the SPEC that will refactor the client
09:33:42 <strigazi> ok? flwang1 dtomasgu brtknr
09:33:47 <brtknr> ok
09:33:55 <flwang1> strigazi: good for me
09:34:54 <flwang1> #topic helm 3
09:35:10 <flwang1> strigazi: sorry, anything else for helm-config?
09:36:07 <strigazi> I'm good
09:37:16 <flwang1> strigazi: thanks
09:37:22 <dioguerra> ok
09:37:25 <flwang1> brtknr: anything else we need to discuss for helm 3?
09:38:36 <brtknr> well it's ready to merge based on your feedback
09:38:48 <brtknr> thats all
09:38:54 <flwang1> brtknr: cool
09:39:03 <flwang1> move on?
09:39:16 <flwang1> #topic filter support for cluster template listing
09:39:20 <brtknr> I have tested locally but feel free to test yourselves for your peace of mind
09:39:35 <flwang1> brtknr: i have done that before, will do it again
09:39:50 <brtknr> flwang1: thanks
09:39:50 <flwang1> i'd like to introduce a basic filter function for template listing
09:40:09 <flwang1> because i found it's really annoying to find public templates
09:40:42 <flwang1> brtknr: strigazi: don't you guys have the same pain?
09:41:27 <dioguerra> i do
09:41:41 <strigazi> flwang1: in my projects I usually add my username in the name
09:42:03 <strigazi> so convention by name
09:42:16 <strigazi> but it's good to have
09:42:27 <dioguerra> do you have something? i'm interested in how this works because we wanted to add a "recommended" template functionality
09:42:39 <brtknr> i usually work with fewer than 10 templates in any given project so haven't hit problems
09:42:53 <brtknr> it would be good to deprecate old templates
09:43:02 <flwang1> dioguerra: it's a common filter, you can find it in other openstack services
09:43:08 <brtknr> without needing to delete them
09:43:13 <jakeyip_> sort of, how do you want it to look?
09:43:53 <flwang1> openstack coe cluster template list --filter public=True,Hidden=False
09:43:56 <jakeyip_> this problem is similar to glance public images, I guess?
09:43:57 <flwang1> something like above
09:44:13 <flwang1> jakeyip_: not really, i just want to do filter here
09:46:26 <brtknr> openstack coe cluster template list --hidden --public would be nicer
09:46:32 <jakeyip_> glance is similar? `glance image-list --visibility private --hidden False`
09:47:09 <flwang1> brtknr: jakeyip_: i'm trying to propose a more general filter
09:47:12 <dioguerra> actually, i like the 1option if it can be expanded to other elements
09:47:20 <flwang1> i'm just taking public, hidden as a sample
09:47:37 <brtknr> what other things would need filtering?
09:47:58 <flwang1> brtknr: e.g. network_driver=calico
09:48:43 <flwang1> i haven't got a solid design, i just want to know you guys ideas if you are ok with the direction
09:49:21 <jakeyip_> public and hidden alone will be great for us. when we upgrade a CT we hide the old one and put a new one with the same name. eventually I need to clean them up, so I need to get the hidden and public ones. right now I'm just grepping on the two fields, which I show using https://review.opendev.org/#/c/702322/
09:50:50 <jakeyip_> flwang1:  have you tried displaying network_driver field, and grepping on that as a workaround?
09:51:25 <flwang1> jakeyip_: do the fields work?
09:51:47 <brtknr> we should implement a filtering schema for which there is precedent in other projects, not try to invent our own
09:52:06 <flwang1> brtknr: yep, that's my rough idea
09:52:19 <brtknr> with nova for example, you can do openstack server list --status ACTIVE
09:52:36 <brtknr> with glance, you can do openstack image list --public --shared --private
09:52:39 <flwang1> anyway, i will think about this again later
09:52:50 <strigazi> flwang1: SPEC ^^
09:52:54 <strigazi> :)
09:52:58 <flwang1> strigazi: sure
09:53:01 <flwang1> will do
09:53:14 <flwang1> anything else?
09:53:32 <strigazi> one thing
09:53:33 <jakeyip_> flwang1: yeah, I think so. another thing is that openstack cli output can be json so you can do fancy things with jq too
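(A sketch of the client-side workaround jakeyip_ describes, assuming the public/hidden columns are exposed in the list output, e.g. via the fields patch linked above; exact key names depend on the client version.)

    # list templates as JSON and filter locally with jq
    openstack coe cluster template list -f json | \
      jq -r '.[] | select(.public == true and .hidden == true) | .name'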
09:54:00 <strigazi> I will add builds of hyperkube back to our CI.
09:54:22 <flwang1> strigazi: why?
09:54:53 <strigazi> Upstream kubernetes thought it was very easy for us to use it and they dropped the build starting from 1.19
09:55:00 * strigazi finds the link
09:55:38 <strigazi> there you go https://github.com/kubernetes/kubernetes/pull/88676
09:56:33 <strigazi> it lasted a few releases for us, it was good
09:57:00 <flwang1> that's not a good idea :(
09:57:09 <flwang1> strigazi: ok, pls add it
09:57:12 <brtknr> argh what a pain
09:57:14 <flwang1> and seems we have to
09:57:59 <brtknr> can we build a single hyperkube image or one per k8s service?
09:58:04 <strigazi> poseidon typhoon does the same, they maintain a build
09:58:16 <strigazi> brtknr: hyperkube was always one
09:59:06 <flwang1> that's all for today?
09:59:32 <strigazi> i'm good
10:00:06 <flwang1> strigazi: i got several patches +2'd by brtknr, could you please take a look at them?
10:00:29 <flwang1> #endmeeting