09:03:59 #startmeeting magnum
09:04:00 Meeting started Wed Sep 16 09:03:59 2020 UTC and is due to finish in 60 minutes. The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:04:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:04:03 The meeting name has been set to 'magnum'
09:04:47 #topic roll call
09:04:54 i think only brtknr and me?
09:04:58 jakeyip: around?
09:05:08 flwang1: hi o/
09:05:14 jakeyip: hello
09:05:30 o/
09:05:36 as for your storageclass question, the easiest way is using the post_install_manifest config
09:06:00 #topic hyperkube
09:06:15 brtknr: i'd like to discuss hyperkube first
09:06:31 sure
09:06:54 i contacted the rancher team who maintain their hyperkube image; i was told it's a long-term solution for their RKE
09:07:12 that's good to hear
09:08:07 personally, i don't mind moving to binary; what i'm thinking is we need to make the decision as a team
09:08:26 I tested hyperkube with 1.19 and it works very well
09:08:30 and i prefer to do it in the next cycle
09:08:44 with the rancher container
09:08:52 brtknr: that's good to know
09:09:12 maybe we can build it in our pipeline with heat-container-agent
09:09:15 I think a good short term solution is to introduce a new label, e.g. kube_source:
09:09:49 and we can override the hyperkube source with whatever we use
09:09:58 or kube_prefix
09:10:09 i'm wondering if it's necessary
09:10:31 Merged openstack/magnum master: [goal] Prepare pep8 testing for Ubuntu Focal https://review.opendev.org/750591
09:10:40 because for prod usage, they always have their own container registry
09:10:50 and they will keep the hyperkube image there
09:10:59 instead of downloading it from the original source every time
09:11:16 the only case we need a new label is probably for devstack
09:11:16 who is "they"?
09:11:39 most of the companies/orgs using magnum
09:12:05 what's the case for StackHPC?
09:12:44 do you set the "CONTAINER_INFRA_PREFIX"?
09:13:01 we have some customers who use a container registry, others don't
09:13:48 don't they have concerns if any image changes?
09:14:03 anyway, i think we need to get input from Spyros as well
09:14:25 sure
09:15:09 are the 'current' versions of hyperkube compatible? e.g. 1.15 - 1.17
09:15:25 but at least, we have a solution
09:15:41 jakeyip: until v1.18.x
09:15:50 there is no hyperkube since v1.19.x
09:16:01 there is no official hyperkube
09:16:03 sorry, I meant: is rancher hyperkube 1.15 - 1.17 compatible with k8s's?
09:16:06 only third party
09:16:39 i was thinking of changing the default registry at the next release? this change can be a backport
09:16:53 to train or whatever, to use rancher's hyperkube
09:18:41 jakeyip: maybe not, they're using a suffix for the image name
09:21:40 flwang1: rackspace also builds hyperkube: https://quay.io/repository/rackspace/hyperkube?tab=tags
09:23:04 brtknr: but i can't see a v1.19.x image
09:23:13 from your above link
09:25:31 well https://github.com/rancher/hyperkube/releases
09:25:48 we can use these releases anyway
09:26:49 hmm, what about taking the rancher one with the suffix and putting it into docker.io/openstackmagnum/?
09:27:03 brtknr: yes, we can. we just need to figure out how, for those who are not using CONTAINER_INFRA_PREFIX
09:28:20 jakeyip: we can do that. we just need to decide how we keep supporting v1.19.x while keeping backward compatibility
09:28:57 hmm, will using CONTAINER_INFRA_PREFIX work? because of the -rancher suffix in tags?
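[The download-and-retag workaround discussed here can be sketched as a small mirror script. This is a sketch only: the registry name and version list are placeholders, and the "-rancher1" tag suffix follows the naming visible on Rancher's hyperkube releases page linked above.]

```bash
#!/usr/bin/env bash
# Sketch: mirror Rancher's hyperkube builds into a private registry, dropping
# the "-rancher1" tag suffix so the stock
# ${CONTAINER_INFRA_PREFIX}hyperkube:${KUBE_TAG} reference in the heat
# templates keeps working unchanged.
set -euo pipefail

REGISTRY="registry.example.com/magnum"   # placeholder private registry
VERSIONS=("v1.19.2" "v1.18.9")           # placeholder tags to mirror

for v in "${VERSIONS[@]}"; do
  docker pull "docker.io/rancher/hyperkube:${v}-rancher1"
  docker tag  "docker.io/rancher/hyperkube:${v}-rancher1" "${REGISTRY}/hyperkube:${v}"
  docker push "${REGISTRY}/hyperkube:${v}"
done
```

[A cluster template would then set the existing container_infra_prefix label, e.g. container_infra_prefix=registry.example.com/magnum/, so clusters pull the retagged images without any suffix handling.]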
09:29:06 the only way to support that is probably, as brtknr proposed, adding a new label to allow passing in the full image URL
09:29:16 including name and tag
09:29:55 jakeyip: if you download and retag, then upload to your own registry, it should work without any issue
09:30:37 but what will the default be?
09:31:14 what do you mean by the default?
09:32:02 the one in the templates
09:32:35 e.g. `${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube:\${KUBE_TAG}`
09:32:53 we need some workaround there
09:33:19 e.g. if a label hyperkube_source is passed in, it will replace the above image location
09:33:32 brtknr: is that your idea?
09:33:40 yep
09:33:54 or just kube_prefix
09:34:07 similar to kube_tag
09:34:08 if not?
09:34:19 jakeyip: something like that
09:34:55 i will send an email to spyros to get his comments
09:34:59 let's move on?
09:35:30 I was thinking, is it possible to change this to docker.io/openstackmagnum/? for <=1.18 we mirror k8s.gcr.io
09:35:57 for >1.18 we mirror rancher and don't put the suffix. so it won't break for users by default?
09:39:00 which means we have to copy everything patch versions to docker.io/openstackmagnum?
09:39:12 s/everything/every
09:40:27 only those that fall between the min and max versions in the wiki?
09:40:55 we probably cannot
09:41:25 we only have 20 mins, let's move on
09:41:33 i will send an email and copy you guys
09:41:34 sure
09:41:46 to spyros, to discuss this in a mail thread
09:42:00 ok sounds good
09:42:02 #topic Victoria release
09:42:18 brtknr: we need to start to wrap up this release
09:42:41 that said, tagging patches we want to include in this release
09:42:46 like we did before
09:45:39 the final release will be around mid October
09:45:44 so we have about 1 month
09:47:19 that's all from my side
09:47:29 brtknr: jakeyip: anything else you want to discuss?
09:47:46 ~
09:48:29 brtknr: ?
09:48:49 jakeyip: any feedback from your users about k8s?
09:48:54 i mean about magnum
09:48:57 i have a question on storageclass / post_install_manifest - we have multiple AZs so I don't think it'll work for us?
09:49:22 ideally it should be tied to a template. I think the helm thing CERN is doing will help?
09:49:23 why?
09:50:19 i was hoping one day we would have a post_install_manifest that is tied to a cluster template
09:50:27 should be quite easy to implement
09:50:48 for Nectar, we cannot cross-attach nova and cinder AZs. so e.g. we have a magnum template for AZ A, which spins up instances in AZ A. the storageclass needs to point to AZ A also.
09:50:54 jakeyip: there is an az parameter in the storageclass
09:51:42 i see
09:52:02 because the two az are sharing the same control plan, is it?
09:52:09 plane
09:52:46 maybe you can put the az as a prefix or suffix for the storage class
09:52:48 the two AZs are in different institutions, so different network and everything
09:53:12 do they have different magnum api/conductor?
09:53:18 same
09:53:25 ok, right
09:53:53 jakeyip: if so, we probably need a label for that
09:54:40 jakeyip: you can propose a patch for that
09:54:51 brtknr will be happy to review it
09:54:56 I was wondering if https://review.opendev.org/#/c/731790/ will help?
09:55:22 don't know, sorry
09:56:04 anything else?
09:56:17 a bit of good news - we are finally on train
09:56:31 jakeyip: that's cool
09:56:41 we're planning to upgrade to Victoria
09:56:45 now we're on Train
09:56:52 cool
09:57:08 I don't think
09:57:09 does anyone have plans of setting up their own registry due to the new dockerhub limits?
09:57:12 Nice!
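[The label fallback discussed above could look like the fragment below in the templates' shell environment. This is a sketch of the idea only: hyperkube_prefix is the label name under discussion, not merged at the time of the meeting, and the exact fallback chaining with container_infra_prefix is an assumption.]

```bash
# Sketch of the proposed label fallback: prefer a hyperkube-specific prefix,
# then the generic container_infra_prefix, then the (pre-1.19) upstream default.
HYPERKUBE_IMAGE="${HYPERKUBE_PREFIX:-${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}}hyperkube:${KUBE_TAG}"
echo "pulling ${HYPERKUBE_IMAGE}"
```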
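[For the per-AZ StorageClass question, a minimal sketch of what a manifest served via the post_install_manifest hook could contain. The config option and section name, file path, provisioner choice, and AZ name below are assumptions for illustration.]

```bash
# Sketch: publish a per-AZ StorageClass through magnum's post-install hook.
# magnum.conf would point at the served file, e.g. (assumed option name):
#   [kubernetes]
#   post_install_manifest_url = http://controller/manifests/storageclass-az-a.yaml
mkdir -p /var/www/manifests
cat > /var/www/manifests/storageclass-az-a.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-az-a
provisioner: cinder.csi.openstack.org   # cinder CSI driver (assumed in use)
parameters:
  availability: az-a                    # placeholder availability zone
EOF
```

[Since this option is deployment-wide rather than per cluster template, a multi-AZ cloud like Nectar's would still need the per-template tie-in discussed above.]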
09:57:22 jakeyip: yes, we are exploring it
09:57:33 so did we
09:57:35 i have written a script to pull, retag and push images to a private registry
09:57:35 I was thinking of harbor?
09:58:01 although insecure registry is broken in magnum
09:58:08 so i have proposed this patch: https://review.opendev.org/#/c/749989/
09:58:11 jakeyip: do you mean this https://docs.docker.com/docker-hub/download-rate-limit/ ?
09:58:27 flwang1: yes
09:58:28 flwang1: that's right
09:59:29 100 pulls per IP address per 6 hours
09:59:31 we basically cache all images magnum needs to spin up in https://hub.docker.com/u/nectarmagnum (for our supported CTs)
09:59:33 as an anonymous user
09:59:48 copy, not cache
10:00:18 so brtknr, I have a pull/push script too :P
10:00:24 Anonymous users: 100 pulls
10:00:42 jakeyip: is yours async?
10:01:08 no... :(
10:01:29 :)
10:01:55 jakeyip: can't nectar just upgrade to the "Team" plan?
10:02:20 is there an option for docker auth in magnum?
10:03:33 hmm, does the 'Team' plan give you unlimited anonymous transfers? the chart is unclear
10:03:48 i will ask Docker if we can get a nonprofit discount
10:03:59 let me close the meeting first
10:04:03 #endmeeting
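[For reference, the anonymous pull quota discussed above can be inspected against the rate-limit test repository described on the Docker page linked in the log; jq is assumed to be available.]

```bash
# Query Docker Hub's rate-limit headers as an anonymous user, per
# https://docs.docker.com/docker-hub/download-rate-limit/
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
        | jq -r .token)
curl -s --head -H "Authorization: Bearer ${TOKEN}" \
     "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
     | grep -i ratelimit
# Expected headers look like:
#   ratelimit-limit: 100;w=21600      (100 pulls per 21600 s = 6 h)
#   ratelimit-remaining: 98;w=21600
```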