flwang1 | brtknr: meeting? | 09:02 |
brtknr | flwang1: hello! | 09:02 |
brtknr | yes | 09:02 |
flwang1 | #startmeeting magnum | 09:03 |
openstack | Meeting started Wed Sep 16 09:03:59 2020 UTC and is due to finish in 60 minutes. The chair is flwang1. Information about MeetBot at http://wiki.debian.org/MeetBot. | 09:04 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 09:04 |
*** openstack changes topic to " (Meeting topic: magnum)" | 09:04 | |
openstack | The meeting name has been set to 'magnum' | 09:04 |
flwang1 | #topic roll call | 09:04 |
*** openstack changes topic to "roll call (Meeting topic: magnum)" | 09:04 | |
flwang1 | i think only brtknr and me? | 09:04 |
flwang1 | jakeyip: around? | 09:04 |
jakeyip | flwang1: hi o/ | 09:05 |
flwang1 | jakeyip: hello | 09:05 |
brtknr | o/ | 09:05 |
flwang1 | as for your storageclass question, the easiest way is using the post_install_manifest config | 09:05 |
flwang1 | #topic hyperkube | 09:06 |
*** openstack changes topic to "hyperkube (Meeting topic: magnum)" | 09:06 | |
flwang1 | brtknr: i'd like to discuss the hyperkube first | 09:06 |
brtknr | sure | 09:06 |
flwang1 | i contacted the rancher team who maintain their hyperkube image; i was told it's a long-term solution for their RKE | 09:06 |
brtknr | thats good to hear | 09:07 |
flwang1 | personally, i don't mind moving to binary, what i'm thinking is we need to make the decision as a team | 09:08 |
brtknr | I tested hyperkube with 1.19 and it works very well | 09:08 |
flwang1 | and i prefer to do it in next cycle | 09:08 |
brtknr | with rancher container | 09:08 |
flwang1 | brtknr: that's good to know | 09:08 |
flwang1 | maybe we can build in our pipeline with heat-container-agent | 09:09 |
brtknr | I think a good short term solution is to introduce a new label, e.g. kube_source: | 09:09 |
brtknr | and we can override hyperkube source with whatever we use | 09:09 |
brtknr | or kube_prefix | 09:09 |
flwang1 | i'm wondering if it's necessary | 09:10 |
openstackgerrit | Merged openstack/magnum master: [goal] Prepare pep8 testing for Ubuntu Focal https://review.opendev.org/750591 | 09:10 |
flwang1 | because for prod usage, they always have their own container registry | 09:10 |
flwang1 | and they will keep the hyperkube image there | 09:10 |
flwang1 | instead of download it from the original source everytime | 09:10 |
flwang1 | the only case we need a new label is probably for the devstack | 09:11 |
brtknr | who is they? | 09:11 |
flwang1 | most of the company/org using magnum | 09:11 |
flwang1 | what's the case for stackHPC? | 09:12 |
flwang1 | do you set the "CONTAINER_INFRA_PREFIX"? | 09:12 |
brtknr | we have some customers who use container registry, others dont | 09:13 |
flwang1 | don't they have concerns if an image changes? | 09:13 |
flwang1 | anyway, i think we need to get input from Spyros as well | 09:14 |
brtknr | sure | 09:14 |
jakeyip | are the 'current' versions of hyperkube compatible? e.g. 1.15 - 1.17 | 09:15 |
flwang1 | but at least, we have a solution | 09:15 |
flwang1 | jakeyip: until v1.18.x | 09:15 |
flwang1 | there is no hyperkube since v1.19.x | 09:15 |
brtknr | there is no official hyperkube | 09:16 |
jakeyip | sorry I meant, is rancher hyperkube 1.15 - 1.17 compatible with k8s's? | 09:16 |
brtknr | only third party | 09:16 |
jakeyip | was thinking change the default registry at the next release? this change can be a backport | 09:16 |
jakeyip | to train or whatever to use rancher's hyperkube | 09:16 |
flwang1 | jakeyip: maybe not, they're using a suffix for the image name | 09:18 |
brtknr | flwang1: rackspace also build hyperkube: https://quay.io/repository/rackspace/hyperkube?tab=tags | 09:21 |
flwang1 | brtknr: but i can't see a v1.19.x image | 09:23 |
flwang1 | from your above link | 09:23 |
brtknr | well https://github.com/rancher/hyperkube/releases | 09:25 |
brtknr | we can use these releases anyways | 09:25 |
jakeyip | hmm what about taking the rancher one with suffix and putting it into docker.io/openstackmagnum/ | 09:26 |
flwang1 | brtknr: yes, we can. we just need to figure out how, for those are not using CONTAINER_INFRA_PREFIX | 09:27 |
flwang1 | jakeyip: we can do that. we just need to make the decision how can we keep supporting v1.19.x and keep the backward compatibility | 09:28 |
jakeyip | hmm, will using CONTAINER_INFRA_PREFIX work? because of the -rancher suffix in tags? | 09:28 |
flwang1 | the only way to support that is probably like brtknr proposed, adding a new label to allow passing in the full image URL | 09:29 |
flwang1 | include name and tag | 09:29 |
flwang1 | jakeyip: if you download and retag, then upload to your own registry, then it should work without any issue | 09:29 |
jakeyip | but what will the default be? | 09:30 |
flwang1 | what do you mean the default? | 09:31 |
jakeyip | the one in templates | 09:32 |
jakeyip | e.g. `${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}hyperkube:\${KUBE_TAG}` | 09:32 |
flwang1 | we need some workaround there | 09:32 |
flwang1 | e.g. if label hyperkube_source passed in, it will replace above image location | 09:33 |
flwang1 | brtknr: is that your idea? | 09:33 |
brtknr | yep | 09:33 |
brtknr | or just kube_prefix | 09:33 |
brtknr | similar to kube_tag | 09:34 |
jakeyip | if not? | 09:34 |
flwang1 | jakeyip: something like that | 09:34 |
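The fallback behaviour brtknr and flwang1 sketch above can be illustrated in shell parameter-expansion terms. This is an illustration only, not Magnum's actual template code; the variable name `HYPERKUBE_PREFIX` (standing in for the proposed label) and the registry values are assumptions based on the discussion:

```shell
# Hypothetical fragment: fall back through the proposed new label, then the
# existing CONTAINER_INFRA_PREFIX, then the old upstream default.
CONTAINER_INFRA_PREFIX=""                 # empty for most devstack users
HYPERKUBE_PREFIX="docker.io/rancher/"     # would come from the new label, if set
KUBE_TAG="v1.19.2"

DEFAULT_PREFIX="${CONTAINER_INFRA_PREFIX:-k8s.gcr.io/}"
HYPERKUBE_IMAGE="${HYPERKUBE_PREFIX:-${DEFAULT_PREFIX}}hyperkube:${KUBE_TAG}"
echo "${HYPERKUBE_IMAGE}"
```

With the label unset, the expansion falls back to the existing `CONTAINER_INFRA_PREFIX`/`k8s.gcr.io` behaviour, so nothing changes for users not opting in.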
flwang1 | i will send an email to spyros to get his comments | 09:34 |
flwang1 | let's move on? | 09:34 |
jakeyip | I was thinking if it is possible to change this to docker.io/openstackmagnum/? for <=1.18 we mirror k8s.gcr.io | 09:35 |
jakeyip | for >1.18 we mirror rancher and don't put suffix. so it won't break for users by default? | 09:35 |
flwang1 | which means we have to copy every patch version to docker.io/openstackmagnum? | 09:39 |
jakeyip | only those that fall between the min and max versions in the wiki? | 09:40 |
flwang1 | we probably cannot | 09:40 |
flwang1 | we only have 20 mins, let's move on | 09:41 |
flwang1 | i will send an email and copy you guys | 09:41 |
jakeyip | sure | 09:41 |
flwang1 | to spyros to discuss this in a mail thread | 09:41 |
brtknr | ok sounds good | 09:42 |
flwang1 | #topic Victoria release | 09:42 |
*** openstack changes topic to "Victoria release (Meeting topic: magnum)" | 09:42 | |
flwang1 | brtknr: we need to start wrapping up this release | 09:42 |
flwang1 | that said, tagging patches we want to include in this release | 09:42 |
flwang1 | like we did before | 09:42 |
flwang1 | the final release will be around mid-October | 09:45 |
flwang1 | so we have about 1 month | 09:45 |
flwang1 | that's all from my side | 09:47 |
flwang1 | brtknr: jakeyip: anything else you want to discuss? | 09:47 |
jakeyip | ~ | 09:47 |
flwang1 | brtknr: ? | 09:48 |
flwang1 | jakeyip: any feedback from your users about k8s? | 09:48 |
flwang1 | i mean about magnum | 09:48 |
jakeyip | i have a question on storageclass / post_install_manifest - we have multiple az so I don't think it'll work for us? | 09:48 |
jakeyip | ideally it should be tied to a template. I think the helm thing CERN is doing will help? | 09:49 |
flwang1 | why? | 09:49 |
brtknr | i was hoping one day we would have a post_install_manifest that is tied to a cluster template | 09:50 |
brtknr | should be quite easy to implement | 09:50 |
jakeyip | for Nectar, we cannot cross attach nova and cinder az. so e.g. we have a magnum template for AZ A, which spins up instances in AZ A. the storageclass needs to point to AZ A also. | 09:50 |
flwang1 | jakeyip: there is az parameter in the storageclass | 09:50 |
flwang1 | i see | 09:51 |
flwang1 | because the two az are sharing the same control plane, is it? | 09:52 |
flwang1 | maybe you can put az as a prefix or suffix for the storage class | 09:52 |
jakeyip | the two AZs are in different institutions, so different network and everything | 09:52 |
flwang1 | are they having different magnum api/conductor? | 09:53 |
jakeyip | same | 09:53 |
flwang1 | ok, right | 09:53 |
flwang1 | jakeyip: if so, we probably need a label for that | 09:53 |
flwang1 | jakeyip: you can propose a patch for that | 09:54 |
flwang1 | brtknr will be happy to review it | 09:54 |
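For reference, the per-AZ StorageClass idea above might look like this when delivered through post_install_manifest. Everything here is illustrative: the class name, the cinder CSI provisioner, and the AZ value are assumptions, not Nectar's actual configuration. The heredoc just prints the manifest:

```shell
# Print an example per-AZ StorageClass manifest (names and values are made up).
MANIFEST=$(cat <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-az-a          # AZ as a suffix, as suggested above
provisioner: cinder.csi.openstack.org
parameters:
  availability: az-a           # pin volumes to the cluster's AZ
EOF
)
echo "$MANIFEST"
```

One such manifest per template would tie each cluster's default storage to the AZ its instances land in, which is the coupling jakeyip describes.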
jakeyip | I was wondering if https://review.opendev.org/#/c/731790/ will help? | 09:54 |
flwang1 | don't know, sorry | 09:55 |
flwang1 | anything else? | 09:56 |
jakeyip | a bit of good news - we are finally on train | 09:56 |
flwang1 | jakeyip: that's cool | 09:56 |
flwang1 | we're planning to upgrade to Victoria | 09:56 |
flwang1 | now we're on Train | 09:56 |
jakeyip | cool | 09:56 |
jakeyip | does anyone have plans of setting up their own registry due to the new dockerhub limits? | 09:57 |
brtknr | Nice! | 09:57 |
brtknr | jakeyip: yes we are exploring it | 09:57 |
flwang1 | so are we | 09:57 |
brtknr | i have written a script to pull, retag and push images to a private registry | 09:57 |
jakeyip | I was thinking of harbor? | 09:57 |
brtknr | although insecure registry is broken in magnum | 09:58 |
brtknr | so i have proposed this patch: https://review.opendev.org/#/c/749989/ | 09:58 |
flwang1 | jakeyip: do you mean this https://docs.docker.com/docker-hub/download-rate-limit/ ? | 09:58 |
jakeyip | flwang1: yes | 09:58 |
brtknr | flwang1: thats right | 09:58 |
brtknr | 100 pulls per IP address per 6 hours | 09:59 |
jakeyip | we basically cache all images magnum needs to spin up in https://hub.docker.com/u/nectarmagnum (for our supported CT) | 09:59 |
brtknr | as anonymous user | 09:59 |
jakeyip | copy, not cache | 09:59 |
jakeyip | so brtknr I have a pull/push script too :P | 10:00 |
flwang1 | Anonymous users: 100 pulls | 10:00 |
brtknr | jakeyip: is yours async ? | 10:00 |
jakeyip | no... :( | 10:01 |
brtknr | :) | 10:01 |
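A minimal, dry-run version of the pull/retag/push loop both scripts apparently implement might look like the following. The image list and mirror registry are placeholders, and it only prints the docker commands rather than running them:

```shell
# Placeholder inputs: a couple of source images and a private mirror registry.
IMAGES="k8s.gcr.io/pause:3.1 docker.io/coredns/coredns:1.6.6"
MIRROR="registry.example.com/magnum-mirror"

# Dry run: echo the commands instead of executing docker.
OUT=$(
for img in $IMAGES; do
  name="${img##*/}"            # keep only name:tag, drop registry/namespace
  echo "docker pull ${img}"
  echo "docker tag ${img} ${MIRROR}/${name}"
  echo "docker push ${MIRROR}/${name}"
done
)
echo "$OUT"
```

Dropping the `echo`s (or piping the output to `sh`) would perform the actual mirroring; an async variant like brtknr's would run the per-image bodies in background jobs.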
flwang1 | jakeyip: can't nectar just upgrade to "Team" plan? | 10:01 |
brtknr | is there an option for docker auth in magnum? | 10:02 |
jakeyip | hmm, does the 'Team' plan give you unlimited anonymous transfers? the chart is unclear | 10:03 |
flwang1 | i will ask Docker company if we can get a nonprofit discount | 10:03 |
flwang1 | let me close the meeting first | 10:03 |
flwang1 | #endmeeting | 10:04 |
*** openstack changes topic to "OpenStack Containers Team | Meeting: every Wednesday @ 9AM UTC | Agenda: https://etherpad.openstack.org/p/magnum-weekly-meeting" | 10:04 | |
openstack | Meeting ended Wed Sep 16 10:04:03 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 10:04 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-16-09.03.html | 10:04 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-16-09.03.txt | 10:04 |
openstack | Log: http://eavesdrop.openstack.org/meetings/magnum/2020/magnum.2020-09-16-09.03.log.html | 10:04 |
jakeyip | I don't know how this affects magnum. I can see many containers in the templates pointing to docker.io | 10:05 |
brtknr | so anonymous pulls are limited to 100 containers/6hrs/IP address | 10:06 |
flwang1 | it will slow my devstack testing :) | 10:08 |
brtknr | in total, we have 16 images that rely on docker hub | 10:08 |
flwang1 | i have to leave, thank you for joining | 10:08 |
jakeyip | brtknr: yeah. so a user playing with it might quickly exhaust the limits. and break magnum for next user using that IP. | 10:08 |
jakeyip | also, what's going to happen to gate etc | 10:08 |
flwang1 | jakeyip: true | 10:08 |
brtknr | https://brtknr.kgz.sh/7mnu/ | 10:09 |
brtknr | 9 images on master nodes: https://brtknr.kgz.sh/aomh/ | 10:09 |
flwang1 | o/ | 10:09 |
jakeyip | bye flwang1 | 10:10 |
brtknr | https://brtknr.kgz.sh/b9xk | 10:10 |
flwang1 | take care, my friends | 10:10 |
brtknr | 7 images on worker | 10:10 |
brtknr | okay bye then | 10:10 |
jakeyip | brtknr: 100 / 16 = 6 clusters in 6 hours? :P | 10:12 |
brtknr | well 100/7 = 14 worker nodes | 10:13 |
brtknr | 100/9 = 11 master nodes | 10:14 |
brtknr | per 6 hrs | 10:14 |
brtknr | if your cluster has the same public facing ip | 10:14 |
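brtknr's back-of-envelope numbers check out against the image counts from his pastes above:

```shell
PULL_LIMIT=100     # anonymous Docker Hub pulls per IP per 6 hours
MASTER_IMAGES=9    # images pulled by a master node (per the paste above)
WORKER_IMAGES=7    # images pulled by a worker node

echo $(( PULL_LIMIT / MASTER_IMAGES ))   # master nodes per IP per 6h
echo $(( PULL_LIMIT / WORKER_IMAGES ))   # worker nodes per IP per 6h
```

So behind a single NATed IP, roughly 11 masters or 14 workers can boot per 6-hour window before anonymous pulls start failing.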
jakeyip | yeah we have 1 public ip per cluster | 10:15 |
jakeyip | I guess the cheap/easy option is to get a plan at quay.io | 10:17 |
jakeyip | brtknr: I am looking at https://review.opendev.org/#/c/743945/ | 10:19 |
jakeyip | do you have a way to clean up any dead trustees? | 10:25 |
brtknr | jakeyip:openstack user delete `openstack user list | grep -v "$(openstack coe cluster list -c uuid -f value)" | grep <project-id> | cut -f4 -d" "` | 10:48 |
brtknr | something like that | 10:48 |
brtknr | as admin user | 10:49 |
jakeyip | ah yeah thanks. just noticed the name of user has cluster id in it | 10:50 |
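The filter in brtknr's one-liner relies on `grep -v -F` excluding any user whose name contains a live cluster UUID. A simulation with canned data (stand-ins for the `openstack` CLI output, since trustee names embed the cluster id):

```shell
# Stand-in for `openstack coe cluster list -c uuid -f value`:
live_clusters="aaaa-1111
bbbb-2222"
# Stand-in for `openstack user list` names (trustee names embed the cluster id):
users="magnum_trustee_aaaa-1111
magnum_trustee_cccc-3333"

# grep -F treats each line of the pattern as a separate fixed string, so any
# user containing a live cluster id is excluded; the rest are stale trustees.
stale=$(echo "$users" | grep -v -F "$live_clusters")
echo "$stale"
```

Only the trustee for the deleted cluster survives the filter, which is what the real one-liner then feeds to `openstack user delete`.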
openstackgerrit | Bharat Kunwar proposed openstack/magnum master: Support kube_prefix label to override hyperkube source https://review.opendev.org/752254 | 14:49 |
openstackgerrit | Bharat Kunwar proposed openstack/magnum master: Support kube_prefix label to override hyperkube source https://review.opendev.org/752254 | 14:53 |
SecOpsNinja | When a heat stack fails, is there any way to force a retry of the stack? or do i need to delete the k8s cluster and force the recreation with magnum? | 16:26 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!