09:03:59 <flwang1> #startmeeting magnum
09:04:00 <openstack> Meeting started Wed Sep 16 09:03:59 2020 UTC and is due to finish in 60 minutes.  The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:04:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:04:03 <openstack> The meeting name has been set to 'magnum'
09:04:47 <flwang1> #topic roll call
09:04:54 <flwang1> i think only brtknr and me?
09:04:58 <flwang1> jakeyip: around?
09:05:08 <jakeyip> flwang1:  hi o/
09:05:14 <flwang1> jakeyip: hello
09:05:30 <brtknr> o/
09:05:36 <flwang1> as for your storageclass question, the easiest way is using the post_install_manifest config
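    (For reference, a minimal sketch of the post_install_manifest approach, assuming the option is named post_install_manifest_url under [kubernetes]; the file path and the StorageClass below are placeholders:)

        # /etc/magnum/magnum.conf
        [kubernetes]
        post_install_manifest_url = file:///etc/magnum/post-install.yaml

        # post-install.yaml: applied to every new cluster, e.g. a default StorageClass
        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: cinder-default
        provisioner: kubernetes.io/cinder
        parameters:
          availability: nova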
09:06:00 <flwang1> #topic hyperkube
09:06:15 <flwang1> brtknr: i'd like to discuss the hyperkube first
09:06:31 <brtknr> sure
09:06:54 <flwang1> i contacted the rancher team, who are maintaining their hyperkube image; i was told it's a long term solution for their RKE
09:07:12 <brtknr> thats good to hear
09:08:07 <flwang1> personally, i don't mind moving to binary; what i'm thinking is we need to make the decision as a team
09:08:26 <brtknr> I tested hyperkube with 1.19 and it works very well
09:08:30 <flwang1> and i prefer to do it in next cycle
09:08:44 <brtknr> with rancher container
09:08:52 <flwang1> brtknr: that's good to know
09:09:12 <flwang1> maybe we can build it in our pipeline along with heat-container-agent
09:09:15 <brtknr> I think a good short term solution is to introduce a new label, e.g. kube_source:
09:09:49 <brtknr> and we can override hyperkube source with whatever we use
09:09:58 <brtknr> or kube_prefix
09:10:09 <flwang1> i'm wondering if it's necessary
09:10:31 <openstackgerrit> Merged openstack/magnum master: [goal] Prepare pep8 testing for Ubuntu Focal  https://review.opendev.org/750591
09:10:40 <flwang1> because for prod usage, they always have their own container registry
09:10:50 <flwang1> and they will keep the hyperkube image there
09:10:59 <flwang1> instead of downloading it from the original source every time
09:11:16 <flwang1> the only case where we need a new label is probably devstack
09:11:16 <brtknr> who is they?
09:11:39 <flwang1> most of the companies/orgs using magnum
09:12:05 <flwang1> what's the case for stackHPC?
09:12:44 <flwang1> do you set the "CONTAINER_INFRA_PREFIX"?
09:13:01 <brtknr> we have some customers who use a container registry, others don't
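    (For reference, the private-registry setup discussed here is driven by the container_infra_prefix label; the registry URL and image/tag values below are placeholders:)

        openstack coe cluster template create k8s-private \
            --coe kubernetes \
            --image fedora-coreos-32 \
            --external-network public \
            --labels container_infra_prefix=registry.example.com/magnum/,kube_tag=v1.18.6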
09:13:48 <flwang1> don't they have concerns if an image changes upstream?
09:14:03 <flwang1> anyway, i think we need to get input from Spyros as well
09:14:25 <brtknr> sure
09:15:09 <jakeyip> are the 'current' versions of hyperkube compatible? e.g. 1.15 - 1.17
09:15:25 <flwang1> but at least, we have a solution
09:15:41 <flwang1> jakeyip: until v1.18.x
09:15:50 <flwang1> there is no hyperkube since v1.19.x
09:16:01 <brtknr> there is no official hyperkube
09:16:03 <jakeyip> sorry I meant, is rancher hyperkube 1.15 - 1.17 compatible with upstream k8s's?
09:16:06 <brtknr> only third party
09:16:39 <jakeyip> was thinking of changing the default registry at the next release? this change can be a backport
09:16:53 <jakeyip> to train or whatever to use rancher's hyperkube
09:18:41 <flwang1> jakeyip: maybe not, they're using a suffix for the image name
09:21:40 <brtknr> flwang1: rackspace also builds hyperkube: https://quay.io/repository/rackspace/hyperkube?tab=tags
09:23:04 <flwang1> brtknr: but i can't see a v1.19.x image
09:23:13 <flwang1> from your above link
09:25:31 <brtknr> well https://github.com/rancher/hyperkube/releases
09:25:48 <brtknr> we can use these releases anyways
09:26:49 <jakeyip> hmm what about taking the rancher one with suffix and putting it into docker.io/openstackmagnum/
09:27:03 <flwang1> brtknr: yes, we can. we just need to figure out how, for those who are not using CONTAINER_INFRA_PREFIX
09:28:20 <flwang1> jakeyip: we can do that. we just need to decide how we can keep supporting v1.19.x and keep backward compatibility
09:28:57 <jakeyip> hmm, will using CONTAINER_INFRA_PREFIX work? because of the -rancher suffix in tags?
09:29:06 <flwang1> the only way to support that is probably, as brtknr proposed, adding a new label to allow passing in the full image URL
09:29:16 <flwang1> including name and tag
09:29:55 <flwang1> jakeyip: if you download and retag, then upload to your own registry, it should work without any issue
09:30:37 <jakeyip> but what will the default be?
09:31:14 <flwang1> what do you mean by the default?
09:32:02 <jakeyip> the one in templates
09:32:35 <jakeyip> e.g. `${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube:\${KUBE_TAG}`
09:32:53 <flwang1> we need some workaround there
09:33:19 <flwang1> e.g. if the label hyperkube_source is passed in, it will replace the above image location
09:33:32 <flwang1> brtknr: is that your idea?
09:33:40 <brtknr> yep
09:33:54 <brtknr> or just kube_prefix
09:34:07 <brtknr> similar to kube_tag
09:34:08 <jakeyip> if not?
09:34:19 <flwang1> jakeyip: something like that
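    (A rough sketch of the two proposals against the template fragment quoted above; hyperkube_prefix and hyperkube_source are only candidate label names here, not existing ones:)

        # kube_prefix/hyperkube_prefix style: override only the registry prefix
        HYPERKUBE_IMAGE="${HYPERKUBE_PREFIX:-${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}}hyperkube:${KUBE_TAG}"

        # hyperkube_source style: override the full URL, including name and tag
        HYPERKUBE_IMAGE="${HYPERKUBE_SOURCE:-${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube:${KUBE_TAG}}"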
09:34:55 <flwang1> i will send an email to spyros to get his comments
09:34:59 <flwang1> let's move on?
09:35:30 <jakeyip> I was wondering if it is possible to change this to docker.io/openstackmagnum/? for <=1.18 we mirror k8s.gcr.io
09:35:57 <jakeyip> for >1.18 we mirror rancher and don't put the suffix. so it won't break for users by default?
09:39:00 <flwang1> which means we have to copy every patch version to docker.io/openstackmagnum?
09:40:27 <jakeyip> only those that fall between the min and max versions in the wiki?
09:40:55 <flwang1> we probably cannot
09:41:25 <flwang1> we only have 20 mins, let's move on
09:41:33 <flwang1> i will send an email and copy you guys
09:41:34 <jakeyip> sure
09:41:46 <flwang1> to spyros to discuss this in a mail thread
09:42:00 <brtknr> ok sounds good
09:42:02 <flwang1> #topic  Victoria release
09:42:18 <flwang1> brtknr: we need to start wrapping up this release
09:42:41 <flwang1> that is, tagging patches we want to include in this release
09:42:46 <flwang1> like we did before
09:45:39 <flwang1> the final release will be around mid-October
09:45:44 <flwang1> so we have about 1 month
09:47:19 <flwang1> that's all from my side
09:47:29 <flwang1> brtknr: jakeyip: anything else you want to discuss?
09:47:46 <jakeyip> ~
09:48:29 <flwang1> brtknr: ?
09:48:49 <flwang1> jakeyip: any feedback from your users about k8s?
09:48:54 <flwang1> i mean about magnum
09:48:57 <jakeyip> i have a question on storageclass / post_install_manifest - we have multiple AZs so I don't think it'll work for us?
09:49:22 <jakeyip> ideally it should be tied to a template. I think the helm thing CERN is doing will help?
09:49:23 <flwang1> why?
09:50:19 <brtknr> i was hoping one day we would have a post_install_manifest that is tied to a cluster template
09:50:27 <brtknr> should be quite easy to implement
09:50:48 <jakeyip> for Nectar, we cannot cross-attach nova and cinder AZs. so e.g. we have a magnum template for AZ A, which spins up instances in AZ A; the storageclass needs to point to AZ A also.
09:50:54 <flwang1> jakeyip: there is an az parameter in the storageclass
09:51:42 <flwang1> i see
09:52:02 <flwang1> because the two AZs are sharing the same control plane, is that it?
09:52:46 <flwang1> maybe you can put az as a prefix or suffix for the storage class
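    (For example, one StorageClass per AZ with the AZ in the name, using the in-tree cinder provisioner; the name and AZ value are placeholders:)

        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: cinder-az-a
        provisioner: kubernetes.io/cinder
        parameters:
          availability: az-a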
09:52:48 <jakeyip> the two AZs are in different institutions, so different network and everything
09:53:12 <flwang1> do they have a different magnum api/conductor?
09:53:18 <jakeyip> same
09:53:25 <flwang1> ok, right
09:53:53 <flwang1> jakeyip: if so, we probably need a label for that
09:54:40 <flwang1> jakeyip: you can propose a patch for that
09:54:51 <flwang1> brtknr will be happy to review it
09:54:56 <jakeyip> I was wondering if https://review.opendev.org/#/c/731790/ will help?
09:55:22 <flwang1> don't know, sorry
09:56:04 <flwang1> anything else?
09:56:17 <jakeyip> a bit of good news - we are finally on train
09:56:31 <flwang1> jakeyip: that's cool
09:56:41 <flwang1> we're planning to upgrade to Victoria
09:56:45 <flwang1> now we're on Train
09:56:52 <jakeyip> cool
09:57:09 <jakeyip> does anyone have plans of setting up their own registry due to the new dockerhub limits?
09:57:12 <brtknr> Nice!
09:57:22 <brtknr> jakeyip: yes we are exploring it
09:57:33 <flwang1> same for us
09:57:35 <brtknr> i have written a script to pull, retag and push images to a private registry
09:57:35 <jakeyip> I was thinking of harbor?
09:58:01 <brtknr> although insecure registry support is broken in magnum
09:58:08 <brtknr> so i have proposed this patch: https://review.opendev.org/#/c/749989/
09:58:11 <flwang1> jakeyip: do you mean this https://docs.docker.com/docker-hub/download-rate-limit/ ?
09:58:27 <jakeyip> flwang1: yes
09:58:28 <brtknr> flwang1: thats right
09:59:29 <brtknr> 100 pulls per IP address per 6 hours
09:59:31 <jakeyip> we basically copy all images magnum needs to spin up into https://hub.docker.com/u/nectarmagnum (for our supported CTs)
09:59:33 <brtknr> as anonymous user
10:00:18 <jakeyip> so brtknr I have a pull/push script too :P
10:00:24 <flwang1> Anonymous users: 100 pulls
10:00:42 <brtknr> jakeyip: is yours async ?
10:01:08 <jakeyip> no... :(
10:01:29 <brtknr> :)
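    (Such a script is presumably a loop along these lines; the image list and registries are placeholders, and this sketch is synchronous:)

        #!/bin/bash
        # Mirror images into a private registry: pull, retag, push.
        SRC=k8s.gcr.io
        DST=registry.example.com/magnum
        for image in hyperkube:v1.18.6 coredns:1.6.7; do
            docker pull "${SRC}/${image}"
            docker tag "${SRC}/${image}" "${DST}/${image}"
            docker push "${DST}/${image}"
        done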
10:01:55 <flwang1> jakeyip: can't nectar just upgrade to the "Team" plan?
10:02:20 <brtknr> is there an option for docker auth in magnum?
10:03:33 <jakeyip> hmm, does the 'Team' plan give you unlimited anonymous transfers? the chart is unclear
10:03:48 <flwang1> i will ask Docker company if we can get a nonprofit discount
10:03:59 <flwang1> let me close the meeting first
10:04:03 <flwang1> #endmeeting