09:00:08 #startmeeting magnum
09:00:09 Meeting started Wed May 20 09:00:08 2020 UTC and is due to finish in 60 minutes. The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:12 The meeting name has been set to 'magnum'
09:00:18 #topic roll call
09:00:25 o/
09:00:26 o/
09:01:11 brtknr: ?
09:01:20 o/
09:01:50 #topic PTG
09:01:51 hi :)
09:02:01 jakeyip_: hi Jake, how are you
09:02:18 did you guys see our PTG time?
09:02:37 no actually. any links?
09:02:57 3rd June, 21UTC-23UTC: http://ptg.openstack.org/ptg.html
09:03:26 thanks
09:03:29 pls put the topics you'd like to discuss at the PTG on this etherpad: https://etherpad.opendev.org/p/magnum-victoria-ptg
09:04:01 o/ jakeyip_ long time no see
09:04:28 o/ brtknr good to see all of you again :)
09:04:50 jakeyip_: keen to upgrade to ussuri? :D
09:04:51 jakeyip_: o/
09:04:54 sorry just been lurking, was busy with other things
09:05:00 we are only going to train actually :P
09:05:24 btw, guys, catalyst cloud just upgraded to stable/train this week
09:05:29 and so far so good
09:05:44 surely, we cherry-picked some features from master :)
09:05:49 flwang1: i tried to sign up to CC again, it's really hard to get an account...
09:05:52 jakeyip_: you want this https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/cluster.py#L185
09:05:55 nice. we were trying to move to Octavia/OVN, was a nightmare
09:06:16 brtknr: it shouldn't be, can you send me an email and I can follow up?
09:06:58 flwang1: sure will do
09:07:22 brtknr: strigazi: any question for the ptg?
09:07:34 flwang1: nope all good
09:07:40 jakeyip_: this is the design https://specs.openstack.org/openstack/magnum-specs/specs/ussuri/labels-override.html :)
09:07:44 flwang1: no Qs
09:07:58 thanks
09:08:12 flwang1: put the date and/or link in the etherpad?
09:08:13 #topic helm config
09:08:23 strigazi: sure, will do
09:08:33 flwang1: shall I start on helm-config?
09:08:43 strigazi: oo nice the labels will be super useful for us
09:08:50 strigazi: go ahead
09:08:57 strigazi: sorry please go ahead
09:09:25 I put the SPEC up after discussion within our team, dtomasgu will drive the implementation.
09:09:43 I added an example user workflow too
09:10:03 The "burning" issue is what the datatype will be.
09:10:53 you mean in the db?
09:11:08 i saw you suggested Text
09:11:12 At CERN we agreed on json/yaml but I'm not sure if what I added here is possible:
09:11:29 https://review.opendev.org/#/c/727756/3/specs/victoria/helm-config.rst@106
09:12:12 I added an alternative that we rejected downstream, but included it there for completeness:
09:12:25 https://review.opendev.org/#/c/727756/3/specs/victoria/helm-config.rst@95
09:13:14 Spyros Trigazis proposed openstack/magnum-specs master: Add helm-config https://review.opendev.org/727756
09:13:22 thanks for the typo ^^
09:13:39 ok, i will review it again
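The datatype question above is about how the proposed helm-config values would be stored and passed around. As a minimal illustrative sketch (not the spec's actual implementation: the helper name is made up, PyYAML is assumed, and the plain Text column is only one of the options discussed), a client could parse a user-supplied values.yaml and hand the API a JSON string:

# Illustrative sketch only, not the spec's implementation. Accept a
# user-supplied values.yaml on the client side and serialize it to a JSON
# string that could then be stored in a plain Text column.
import json

import yaml  # PyYAML, assumed available


def load_helm_values(path):
    """Read a Helm values file and return its contents as a JSON string."""
    with open(path) as f:
        values = yaml.safe_load(f)  # parse YAML into Python objects
    if not isinstance(values, dict):
        raise ValueError("helm values must be a YAML mapping")
    # JSON is valid YAML, so the stored string stays readable as YAML too.
    return json.dumps(values)


if __name__ == "__main__":
    print(load_helm_values("values.yaml"))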
09:14:12 Also, could we merge these two: https://storyboard.openstack.org/#!/story/2003902 https://storyboard.openstack.org/#!/story/2004215 ?
09:14:15 brtknr: ^^
09:14:42 dtomasgu: wants to do them together (I thought he would be here)
09:14:52 sure
09:15:02 they're similar but slightly different things
09:15:16 is it possible to copy a cluster template? or get a config from an existing one?
09:15:20 but if we can have one config file to rule them all, that sounds good to me
09:15:28 (I think they are not that similar, but I don't want to be the weird guy)
09:15:33 Im here
09:15:36 one config file that has labels, helm_charts, helm_requirements, everything
09:15:50 am i correct in my understanding?
09:16:25 It will be a new config option that contains everything I think
09:16:29 That was my original idea yes
09:16:41 mutually exclusive with command-line args
09:16:48 that sounds good to me :)
09:16:54 I will stop using terraform after that
09:16:57 although splitting is not a bad option either because of the size. One file for the cluster, other files for the rest
09:17:16 splitting?
09:17:33 helm and cluster-configuration?
09:18:17 i dont think size should be an issue
09:18:28 if we are parsing everything on the client side?
09:18:36 :+1:
09:19:04 anyway this discussion requires another spec tbh
09:19:16 * strigazi thinks it is, but he stopped caring about it
09:20:12 thinks size is an issue or thinks this is a discussion for another spec?
09:21:20 It is different work, one is to restructure/enhance the client, the other is to add new fields in the API and client
09:21:29 but, let's merge to speed things up
09:22:02 not sure if it's in the scope of this spec, so please correct me if this is a silly question. with this helm-config, is it possible that we disable an addon for an existing cluster?
09:22:18 or enable an addon for an existing cluster
09:22:29 flwang1: it makes it more doable
09:22:50 we can add one action and easily trigger a "helm upgrade"
09:23:02 yep, that's my understanding
09:23:09 i'd like to have it in the future
09:23:30 still out of scope of this (if we want a reasonably narrow scope)
09:23:49 fair enough
09:23:52 flwang1: brtknr, Note:
09:24:29 at CERN we are using fluxCD which upgrades helm deployments automatically
09:24:55 brtknr: i think enabling/disabling with the values.yaml will be possible (bypassing labels)
09:25:20 this is an answer to which question? ^^
09:26:08 sorry silly question, is this to provide a list of charts or 1 chart to a COE?
09:26:23 11:22 flwang1, sorry wrong person
09:26:35 a meta-chart with subcharts #mindblown
09:27:06 ok
09:27:23 jakeyip_: example https://gitlab.com/gitlab-org/charts/gitlab/-/blob/master/requirements.yaml#L15
09:27:25 it will be interesting to see how labels and helm_values override each other :)
09:27:52 brtknr: not really, pretty straightforward. mutually exclusive
09:28:33 anything else on helm-config?
09:28:43 eg. nginx_tag passed via label vs tag passed via values.yaml
09:28:49 agree to merge the work and the spec?
09:28:56 yes
09:29:04 brtknr if you do this: 401
09:29:43 oh so make labels mutually exclusive with the helm_values option?
09:29:59 yes, to silly-proof
09:30:55 brtknr: both default to the same and you do an XOR
09:31:24 will see...
09:32:06 ok but some labels are not relevant to helm charts
09:32:13 I'm not writing XOR anywhere in the SPEC. I will add to the SPEC that if a meta-chart is used, it throws an error.
09:32:16 what about those?
09:32:41 or are we going to identify the labels which are relevant to charts and only exclude those?
09:32:44 I will add the silly-proof sections in the spec.
09:32:53 That's what I said.
09:32:59 ok
09:33:24 my bad
09:33:36 I will add a new section in the SPEC that will refactor the client
09:33:42 ok? flwang1 dtomasgu brtknr
09:33:47 ok
09:33:55 strigazi: good for me
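The "silly-proofing" agreed above boils down to rejecting a request that sets both label overrides and a helm values payload. A rough sketch of what such a check could look like (the argument names are illustrative, not from the spec):

# Illustrative sketch only: labels and a helm values payload are treated as
# mutually exclusive, as discussed above, so reject a request carrying both.
def validate_cluster_config(labels=None, helm_values=None):
    """Return whichever of the two inputs is set, or raise if both are."""
    if labels and helm_values:
        raise ValueError(
            "labels and helm values are mutually exclusive; pass only one")
    return labels if labels else helm_values


# Example: this call would raise, since both options are provided.
# validate_cluster_config(labels={"nginx_tag": "1.18"},
#                         helm_values={"nginx": {"tag": "1.18"}})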
09:34:54 #topic helm 3
09:35:10 strigazi: sorry, anything else for helm-config?
09:36:07 I'm good
09:37:16 strigazi: thanks
09:37:22 ok
09:37:25 brtknr: anything else we need to discuss for helm 3?
09:38:36 well it's ready to merge based on your feedback
09:38:48 that's all
09:38:54 brtknr: cool
09:39:03 move on?
09:39:16 #topic filter support for cluster template listing
09:39:20 I have tested locally but feel free to test yourselves for your peace of mind
09:39:35 brtknr: i have done that before, will do it again
09:39:50 flwang1: thanks
09:39:50 i'd like to introduce a basic filter function for template listing
09:40:09 because i found it's really annoying to find public templates
09:40:42 brtknr: strigazi: don't you guys have the same pain?
09:41:27 i do
09:41:41 flwang1: in my projects I usually add my username in the name
09:42:03 so convention by name
09:42:16 but it's good to have
09:42:27 do you have something? i'm interested how this works because we wanted to add a "recommended" template functionality
09:42:39 i usually work with fewer than 10 templates in any given project so haven't hit problems
09:42:53 it would be good to deprecate old templates
09:43:02 dioguerra: it's a common filter, you can find it anywhere in other openstack services
09:43:08 without needing to delete them
09:43:13 sort of, how do you want it to look?
09:43:53 openstack coe cluster template list --filter public=True,Hidden=False
09:43:56 this problem is similar to glance public images, I guess?
09:43:57 something like above
09:44:13 jakeyip_: not really, i just want to do filtering here
09:46:26 openstack coe cluster template list --hidden --public would be nicer
09:46:32 glance is similar? `glance image-list --visibility private --hidden False`
09:47:09 brtknr: jakeyip_: i'm trying to propose a more general filter
09:47:12 actually, i like the 1st option if it can be expanded to other elements
09:47:20 i'm just taking public, hidden as a sample
09:47:37 what other things would need filtering?
09:47:58 brtknr: e.g. network_driver=calico
09:48:43 i haven't got a solid design, i just want to know you guys' ideas, if you are ok with the direction
09:49:21 public and hidden alone will be great for us. when we upgrade a CT we hide the old one and put a new one with the same name. eventually I need to clean them up, so I need to get the hidden and public ones. right now I'm just grepping on the two fields, which I show using https://review.opendev.org/#/c/702322/
09:50:50 flwang1: have you tried displaying the network_driver field, and grepping on that as a workaround?
09:51:25 jakeyip_: does the fields option work?
09:51:47 we should implement a filtering schema for which there is precedent in other projects, not try and invent our own
09:52:06 brtknr: yep, that's my rough idea
09:52:19 with nova for example, you can do openstack server list --status ACTIVE
09:52:36 with glance, you can do openstack image list --public --shared --private
09:52:39 anyway, i will think about this again later
09:52:50 flwang1: SPEC ^^
09:52:54 :)
09:52:58 strigazi: sure
09:53:01 will do
09:53:14 anything else?
09:53:32 one thing
09:53:33 flwang1: yeah think so. another thing is that the openstack cli output can be json so you can do fancy things with jq too
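For the `--filter public=True,Hidden=False` idea above, here is a rough sketch of how such an expression could be parsed and applied. It is illustrative only: the real feature would filter server-side in the Magnum API, and these function names are assumptions, not existing code.

# Illustrative only: a client-side filter in the spirit of the proposed
# "--filter public=True,Hidden=False" syntax.
def parse_filter(expr):
    """Turn 'public=True,Hidden=False' into a dict of field -> value."""
    filters = {}
    for pair in expr.split(","):
        key, _, value = pair.partition("=")
        filters[key.strip().lower()] = value.strip()
    return filters


def filter_templates(templates, expr):
    """Keep templates whose fields match every key=value pair in the expression."""
    filters = parse_filter(expr)
    return [t for t in templates
            if all(str(t.get(k, "")).lower() == v.lower()
                   for k, v in filters.items())]


templates = [
    {"name": "k8s-v1.15", "public": True, "hidden": True},
    {"name": "k8s-v1.18", "public": True, "hidden": False},
]
print(filter_templates(templates, "public=True,Hidden=False"))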
09:54:00 I will add builds of hyperkube to our CI again.
09:54:22 strigazi: why?
09:54:53 Upstream kubernetes thought it was very easy for us to use it and they dropped the build starting from 1.19
09:55:00 * strigazi finds the link
09:55:38 there you go https://github.com/kubernetes/kubernetes/pull/88676
09:56:33 it lasted a few releases for us, it was good
09:57:00 that's not a good idea :(
09:57:09 strigazi: ok, pls add it
09:57:12 argh what a pain
09:57:14 and it seems we have to
09:57:59 can we build a single hyperkube image or one per k8s service?
09:58:04 poseidon typhoon does the same, they maintain a build
09:58:16 brtknr: hyperkube was always one
09:59:06 that's all for today?
09:59:32 i'm good
10:00:06 strigazi: i got several patches +2'd by brtknr, could you please take a look at them?
10:00:29 #endmeeting