*** itlinux has joined #openstack-containers | 00:23 | |
openstackgerrit | Feilong Wang proposed openstack/magnum master: [k8s] Using node instead of minion https://review.openstack.org/608799 | 01:07 |
*** hongbin has joined #openstack-containers | 01:08 | |
*** slagle has joined #openstack-containers | 01:20 | |
*** ricolin has joined #openstack-containers | 01:20 | |
*** dave-mccowan has quit IRC | 03:02 | |
*** ramishra has joined #openstack-containers | 03:08 | |
*** jaewook_oh has joined #openstack-containers | 03:46 | |
*** hongbin has quit IRC | 04:00 | |
*** udesale has joined #openstack-containers | 04:07 | |
openstackgerrit | Kien Nguyen proposed openstack/magnum master: Update auth_uri option to www_authenticate_uri https://review.openstack.org/560919 | 04:11 |
*** pcaruana has joined #openstack-containers | 04:38 | |
*** janki has joined #openstack-containers | 05:25 | |
*** ttsiouts has joined #openstack-containers | 06:34 | |
*** dabukalam has joined #openstack-containers | 07:00 | |
*** rcernin has quit IRC | 07:08 | |
*** serlex has joined #openstack-containers | 07:15 | |
*** jaewook_oh has quit IRC | 07:28 | |
*** jaewook_oh has joined #openstack-containers | 07:28 | |
*** jaewook_oh has quit IRC | 07:29 | |
*** ttsiouts has quit IRC | 07:30 | |
*** ttsiouts has joined #openstack-containers | 07:31 | |
*** jaewook_oh has joined #openstack-containers | 07:32 | |
*** mattgo has joined #openstack-containers | 07:32 | |
*** ttsiouts has quit IRC | 07:35 | |
*** ttsiouts has joined #openstack-containers | 07:36 | |
*** ttsiouts has quit IRC | 07:38 | |
*** ttsiouts has joined #openstack-containers | 07:38 | |
*** belmoreira has joined #openstack-containers | 07:40 | |
*** gsimondon has joined #openstack-containers | 07:42 | |
*** ttsiouts has quit IRC | 07:52 | |
*** ttsiouts has joined #openstack-containers | 08:09 | |
*** ricolin has quit IRC | 08:12 | |
*** janki is now known as janki|lunch | 08:16 | |
*** imdigitaljim has quit IRC | 08:25 | |
*** flwang1 has joined #openstack-containers | 08:40 | |
flwang1 | strigazi: pls ping me when you're available | 08:40 |
*** ricolin has joined #openstack-containers | 08:47 | |
*** belmoreira has quit IRC | 08:57 | |
*** belmorei_ has joined #openstack-containers | 08:57 | |
*** salmankhan has joined #openstack-containers | 09:11 | |
strigazi | flwang1: ping | 09:13 |
flwang1 | strigazi: pong | 09:13 |
flwang1 | strigazi: see my email? | 09:13 |
*** salmankhan has quit IRC | 09:15 | |
flwang1 | strigazi: 1. https://storyboard.openstack.org/#!/story/2003992 heat-container-agent version tag | 09:19 |
strigazi | let's start from the easy one, the container tags | 09:19 |
flwang1 | i'd like to upload a new image for heat-container-agent | 09:19 |
flwang1 | to fix the multi region problem | 09:19 |
flwang1 | the problem does exist, verified in our cloud | 09:20 |
strigazi | Wasn't there one already in docker.io? | 09:20 |
flwang1 | no, that one has a bug, which i fixed, but forgot to upload a newer version image | 09:20 |
strigazi | rocky-dev | 09:20 |
flwang1 | there is a bug in that one | 09:20 |
strigazi | which one? | 09:21 |
strigazi | https://review.openstack.org/#/c/584215/ | 09:21 |
strigazi | isn't it fixed here ^^ | 09:22 |
strigazi | flwang1: ^^ | 09:22 |
flwang1 | https://review.openstack.org/#/c/584215/4/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify | 09:22 |
flwang1 | it is | 09:22 |
flwang1 | but we need a new image | 09:22 |
flwang1 | and we need to bump the heat-container-agent version in magnum | 09:22 |
strigazi | and the fix is not included in rocky-dev? | 09:23 |
flwang1 | the image with rocky-dev was built before we fixed the bug | 09:23 |
flwang1 | yes | 09:23 |
strigazi | ok | 09:23 |
flwang1 | i just need your input for the tag | 09:23 |
strigazi | wait, I have a patch already to add it as a label | 09:24 |
flwang1 | still using rocky-dev, rocky or rocky-stable | 09:24 |
strigazi | or rocky-<six digits from commit id> | 09:25 |
strigazi | or rocky-<seven digits from commit id> | 09:25 |
strigazi | actually we can use both | 09:26 |
flwang1 | let's use 7 digits to be consistent with github | 09:27 |
strigazi | rocky-stable will be an extra tag that points to the stable image of the branch | 09:27 |
strigazi | this way, the images will have the same sha | 09:27 |
flwang1 | ok, sounds like a plan | 09:27 |
strigazi | and when we say stable we will also know where it came from | 09:27 |
flwang1 | cool, i like the idea | 09:27 |
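A minimal sketch of the tagging scheme agreed above, assuming the image is built from the in-tree heat-container-agent directory and pushed to the openstackmagnum namespace on docker.io (exact build context and registry account are assumptions):

```bash
# Tag the heat-container-agent image with both the commit-based tag and the
# floating rocky-stable tag so both names resolve to the same image sha.
COMMIT="$(git rev-parse --short=7 HEAD)"
IMAGE="docker.io/openstackmagnum/heat-container-agent"

docker build -t "${IMAGE}:rocky-${COMMIT}" \
    magnum/drivers/common/image/heat-container-agent/

# rocky-stable is just an extra name pointing at the same sha.
docker tag "${IMAGE}:rocky-${COMMIT}" "${IMAGE}:rocky-stable"

docker push "${IMAGE}:rocky-${COMMIT}"
docker push "${IMAGE}:rocky-stable"
```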
flwang1 | move to next one? | 09:28 |
strigazi | one sec | 09:28 |
strigazi | to address this issue for good | 09:28 |
strigazi | should we finalize: | 09:28 |
strigazi | https://review.openstack.org/#/c/561858/ https://review.openstack.org/#/c/585420/ | 09:28 |
strigazi | add the tag as a label | 09:29 |
strigazi | and build in the ci | 09:29 |
flwang1 | sure | 09:29 |
strigazi | I can also build an image right now to get the fix | 09:29 |
flwang1 | good to see you already have a patch to make it as a label https://review.openstack.org/#/c/561858/1/magnum/drivers/common/templates/kubernetes/fragments/start-container-agent.sh | 09:29 |
flwang1 | sure, go ahead | 09:30 |
strigazi | one more thing | 09:30 |
*** salmankhan has joined #openstack-containers | 09:31 | |
strigazi | the rocky branch uses rawhide atm | 09:31 |
flwang1 | strigazi: that's the one i'd like to address | 09:31 |
flwang1 | we need a simple fix and then backport | 09:31 |
flwang1 | i almost forgot that | 09:31 |
strigazi | I don't want to override something that works in other sites already | 09:32 |
strigazi | so | 09:32 |
strigazi | I build with rocky-stable | 09:32 |
strigazi | and we do a one-liner in the rocky branch to use that | 09:32 |
flwang1 | no change in master, you mean? | 09:32 |
flwang1 | directly change rocky branch | 09:33 |
flwang1 | ? | 09:33 |
strigazi | should we hard-code rocky-stable in master? | 09:33 |
strigazi | why not | 09:33 |
strigazi | it will be for a week there | 09:33 |
flwang1 | i mean we should hard code it in master and then get your patch in | 09:33 |
strigazi | 1. build with rocky stable 2. patch master 3. backport to rocky | 09:34 |
flwang1 | yes | 09:34 |
strigazi | done | 09:34 |
strigazi | or agreed | 09:34 |
strigazi | next item? | 09:34 |
flwang1 | +2 | 09:34 |
strigazi | coredns? | 09:34 |
flwang1 | version tag of coreDNS | 09:34 |
strigazi | push a patch quickly and back port? | 09:34 |
strigazi | and bump to latest stable release? | 09:35 |
flwang1 | which one are you talking about? still the heat-container-agent issue? | 09:35 |
strigazi | no, coredns | 09:35 |
flwang1 | oh, for coreDNS, we don't have to backport | 09:35 |
flwang1 | current version works fine so far | 09:36 |
flwang1 | but we do need a tag for that | 09:36 |
flwang1 | and we probably need a better naming convention for all labels to follow from now on | 09:36 |
strigazi | <something>_tag is not good? | 09:36 |
flwang1 | nope, i'm talking about all the labels | 09:37 |
flwang1 | sorry | 09:37 |
flwang1 | for the confusion | 09:37 |
flwang1 | coredns_tag is good enough | 09:37 |
flwang1 | i will propose a patch later | 09:37 |
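If the new label lands as discussed, setting it would look something like this; coredns_tag is the proposed label name (not yet in a release) and the tag value is only illustrative:

```bash
# Pin the coredns image via a cluster template label (proposed coredns_tag),
# alongside the existing container_infra_prefix label for private registries.
openstack coe cluster template create k8s-v1.11.2-prod \
    --coe kubernetes \
    --image fedora-atomic-27 \
    --external-network public \
    --labels coredns_tag=1.2.2,container_infra_prefix=docker.io/openstackmagnum/
```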
flwang1 | next one? | 09:37 |
strigazi | ok | 09:37 |
flwang1 | Multi discovery service | 09:38 |
flwang1 | do you think it's a bad/stupid idea: even if the user sets a private discovery service, magnum would still fall back to discovery.etcd.io as a backup one? | 09:38 |
strigazi | we could do that | 09:38 |
flwang1 | or soemthing like that | 09:39 |
strigazi | well, in China that won't work | 09:39 |
flwang1 | it's not urgent, but i think it's useful | 09:39 |
flwang1 | user can set the backup one i think | 09:39 |
flwang1 | for a better solution | 09:39 |
strigazi | if the backup is configurable too makes sense | 09:39 |
flwang1 | in other words, the discovery services should be a list, not a single one | 09:40 |
strigazi | ok | 09:40 |
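A rough sketch of how a configurable fallback list could look in the cluster fragments; this only illustrates the idea, and the DISCOVERY_URLS variable and the first endpoint are hypothetical, not existing Magnum parameters:

```bash
# Try each etcd discovery endpoint in order and use the first one that answers;
# discovery.etcd.io stays as the last-resort default.
DISCOVERY_URLS="https://discovery.example.com https://discovery.etcd.io"

for url in ${DISCOVERY_URLS}; do
    if token=$(curl -sf "${url}/new?size=${ETCD_CLUSTER_SIZE:-1}"); then
        DISCOVERY_URL="${token}"
        break
    fi
done

echo "using etcd discovery url: ${DISCOVERY_URL}"
```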
flwang1 | another similar requirement, the dns_nameserver | 09:40 |
flwang1 | it definitely should be a list, instead of a single server | 09:41 |
strigazi | will this work: https://github.com/openstack/magnum/blob/stable/rocky/magnum/drivers/k8s_fedora_atomic_v1/templates/kubecluster.yaml#L523 | 09:42 |
strigazi | ? | 09:42 |
flwang1 | i haven't tried, but we're defining the param as a string, so i'm not sure | 09:43 |
flwang1 | FWIW, it should be improved because neutron can support it | 09:43 |
flwang1 | I will dig and propose patch if it's necessary, agree? | 09:43 |
strigazi | one moment, | 09:44 |
strigazi | do you need may servers for the docker config? | 09:44 |
flwang1 | may servers? | 09:45 |
strigazi | many, sorry | 09:45 |
flwang1 | what's the context for 'docker config'? | 09:45 |
strigazi | dockerd and coredns are the components in the cluster that can be configured | 09:46 |
strigazi | with dns servers | 09:46 |
*** vabada has quit IRC | 09:46 | |
flwang1 | ah, i see what you mean. probably yes, we just want to have a backup dns server for prod use | 09:46 |
strigazi | agreed actually: https://developer.openstack.org/api-ref/network/v2/index.html#create-subnet | 09:47 |
strigazi | dns_nameservers (Optional)bodyarrayList of dns name servers associated with the subnet. Default is an empty list. | 09:47 |
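For reference, neutron already takes a list here; a quick CLI check (subnet name, network, and addresses are just placeholders) shows the repeatable option that a list-valued magnum parameter would map onto:

```bash
# Create a subnet with more than one DNS server; --dns-nameserver can be
# repeated and maps to the dns_nameservers array in the subnet API.
openstack subnet create cluster-subnet \
    --network private \
    --subnet-range 10.0.10.0/24 \
    --dns-nameserver 8.8.8.8 \
    --dns-nameserver 1.1.1.1
```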
flwang1 | so are we on the same page now? | 09:48 |
strigazi | yes | 09:48 |
flwang1 | cool, next one | 09:48 |
flwang1 | stack delete | 09:49 |
strigazi | the only thing to solve is how to pass it in the magnum API. ok stack delete | 09:49 |
*** dave-mccowan has joined #openstack-containers | 09:49 | |
strigazi | from your email I didn't get the problem with stack delete | 09:49 |
strigazi | admins can do stack delete | 09:49 |
flwang1 | yep, it's still the LB delete issue | 09:50 |
flwang1 | https://review.openstack.org/#/c/497144/ | 09:50 |
strigazi | what is the status of ^^ | 09:51 |
strigazi | ? | 09:51 |
flwang1 | my current plan is passing the cluster UUID to --cluster-name | 09:51 |
strigazi | is there disagreement? I forgot | 09:51 |
flwang1 | then with this patch https://github.com/kubernetes/cloud-provider-openstack/pull/223/files | 09:51 |
strigazi | sounds good | 09:51 |
flwang1 | then we should be good to figure out the correct LB | 09:52 |
flwang1 | Jim said he has a patch, but so far I haven't seen it | 09:52 |
flwang1 | so I will propose a patch set to use the way I mentioned above | 09:53 |
strigazi | let's propose one then | 09:53 |
strigazi | what is the change we need? | 09:53 |
strigazi | in magnum, i mean | 09:53 |
strigazi | in the config of the cloud-provider? | 09:53 |
flwang1 | we need pass in the uuid to kube-controller-manager with --cluster-name | 09:53 |
flwang1 | then use basically the code in https://review.openstack.org/#/c/497144/ to get the LB | 09:54 |
strigazi | https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/ this one? | 09:54 |
strigazi | --cluster-name string Default: "kubernetes" | 09:54 |
flwang1 | yes | 09:54 |
flwang1 | any concern? | 09:54 |
strigazi | The instance prefix for the cluster. what does this even mean? | 09:54 |
strigazi | where will it be used? | 09:55 |
flwang1 | i don't know, TBH, at this moment, need some test | 09:55 |
flwang1 | but Gardener is using the same way | 09:55 |
flwang1 | so i assume it's fine | 09:55 |
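The change on the magnum side would essentially be passing the cluster UUID into the controller manager arguments; a hedged sketch of the flag on the command line, where everything besides --cluster-name is only an example and not the exact argument set magnum's templates use:

```bash
# Sketch: kube-controller-manager started with the magnum cluster UUID as
# --cluster-name so the openstack provider can identify the cluster's load
# balancers on delete (the UUID below is a placeholder).
CLUSTER_UUID="5d12f6fd-a196-4bf0-ae4c-1f639a523a52"

kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.conf \
    --cloud-provider=external \
    --cluster-name="${CLUSTER_UUID}" \
    --leader-elect=true
```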
strigazi | question? | 09:56 |
strigazi | question. | 09:56 |
strigazi | we need to use the out-of-tree cloud-provider to benefit from this, correct? | 09:56 |
flwang1 | i need to check, probably not really necessary, we should be able to use kube-controller-manager as well | 09:57 |
*** vabada has joined #openstack-containers | 09:57 | |
flwang1 | i haven't tried yet, sorry, i can't provide much details | 09:57 |
strigazi | I think we need the out-of-tree one, since the patch you sent me is merged there, not in tree | 09:59 |
flwang1 | no matter which one we need, that's the way we can fix the issue | 10:00 |
strigazi | agreed | 10:00 |
flwang1 | and given we're moving to cloud-controller-manager, we should definitely dig into that | 10:01 |
flwang1 | ok, next one? | 10:01 |
strigazi | next one. | 10:01 |
flwang1 | cluster template update or sandbox failure | 10:01 |
flwang1 | which one you want to discuss first? | 10:01 |
strigazi | sandbox failure | 10:01 |
flwang1 | ok | 10:01 |
*** janki|lunch is now known as janki | 10:02 | |
flwang1 | recently, i have seen several times the 'sandbox failure' | 10:02 |
flwang1 | let me show you the log | 10:02 |
flwang1 | the problem is, the kubelet is in Ready status | 10:03 |
flwang1 | but some pods can't be created | 10:03 |
strigazi | you see this in the kubelet log? | 10:04 |
flwang1 | see http://paste.openstack.org/show/731752/ | 10:04 |
flwang1 | describe pod http://paste.openstack.org/show/731753/ | 10:06 |
strigazi | it seems related to calico maybe? the calico pods are running but none of the other ones are. | 10:06 |
flwang1 | strigazi: yep, i think it's related to calico | 10:07 |
flwang1 | but it's not consistently repeatable | 10:07 |
flwang1 | just wanna ask to see if you have any idea | 10:08 |
strigazi | I haven't seen it in my devstack | 10:08 |
flwang1 | do you think it's related to the order we start the service? | 10:08 |
strigazi | let me spin a cluster | 10:08 |
strigazi | the systemd service of kubelet? | 10:09 |
flwang1 | i mean the dependency relationship of those components in heat template | 10:09 |
flwang1 | https://github.com/openstack/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml#L679 | 10:10 |
strigazi | I don't think so, there is no dependency when you send requests in the k8s api | 10:10 |
flwang1 | right | 10:11 |
strigazi | for example | 10:11 |
strigazi | with kubeadm, if you start the master and you haven't configured the cni, the pods are there pending | 10:11 |
flwang1 | same in magnum | 10:12 |
flwang1 | before having a Ready node/kubelet, those pods should be in pending | 10:13 |
flwang1 | but the weird thing is the calico controller pod can be created | 10:13 |
*** jaewook_oh has quit IRC | 10:15 | |
flwang1 | and any new pod can't be created | 10:15 |
strigazi | is docker running in that node? | 10:19 |
flwang1 | yes | 10:19 |
flwang1 | v1.13.1 | 10:19 |
strigazi | is this in the slow testing env? | 10:28 |
flwang1 | no | 10:28 |
flwang1 | it's on our prod | 10:28 |
strigazi | in my devstack looks ok :( | 10:28 |
flwang1 | yep, as i said, it's not always repeatable | 10:29 |
flwang1 | i will dig anyway | 10:29 |
strigazi | did you restart kubelet? | 10:29 |
flwang1 | no, i didn't | 10:29 |
flwang1 | and i assume that won't help | 10:29 |
flwang1 | let's discuss the cluster template ? | 10:30 |
strigazi | ok | 10:30 |
strigazi | since we mentioned cni, can you also look into the flannel patch? | 10:30 |
strigazi | xD | 10:30 |
flwang1 | yep, sure | 10:30 |
strigazi | so, CT | 10:31 |
flwang1 | you know, i was so busy recently | 10:31 |
strigazi | i do | 10:31 |
flwang1 | flannel is always on my list | 10:31 |
flwang1 | i will revisit it in this week for sure | 10:31 |
strigazi | The problem: | 10:31 |
strigazi | thanks | 10:31 |
strigazi | Operators want to maintain public CTs to advertise to users | 10:32 |
flwang1 | yes | 10:32 |
strigazi | As time passes, operators want to advertise new features or solve configuration issues in the clusters based on the values in those CTs | 10:33 |
strigazi | correct? | 10:33 |
flwang1 | yes | 10:33 |
strigazi | Operators do not want to change the name and uuid of the advertised cluster templates | 10:34 |
flwang1 | yes | 10:34 |
strigazi | So that users have the same entry point to consume magnum | 10:34 |
strigazi | here is where it becomes tricky | 10:35 |
flwang1 | listening... | 10:35 |
strigazi | To maintain the integrity of the data model, the cluster templates are immutable if they are referenced. Elaborating on this | 10:36 |
strigazi | A running cluster references a cluster template, this cluster template should not change during the life of the cluster because | 10:37 |
flwang1 | you know, i stay on glance for quite a long time | 10:37 |
flwang1 | glance image is immutable | 10:37 |
flwang1 | and I understand that | 10:38 |
strigazi | ok | 10:38 |
flwang1 | but template is different | 10:38 |
flwang1 | for image, it's very simple, technically, the image data can't be changed | 10:38 |
strigazi | so, you propose to allow editing the cluster template | 10:38 |
*** ricolin has quit IRC | 10:38 | |
flwang1 | but for a template, there are many things that can be improved without impacting the cluster itself | 10:38 |
flwang1 | i can give you a lot of examples | 10:39 |
strigazi | you propose to allow editing the cluster template, correct? | 10:39 |
flwang1 | for admin, for some special attributes/labels | 10:39 |
strigazi | I know that CTs contain a lot more info | 10:39 |
strigazi | I know the issue very well | 10:39 |
flwang1 | for example, dns_nameserver | 10:39 |
strigazi | in two years with 100s of clusters and 10s of changes/improvements, I know it very well | 10:40 |
flwang1 | adding or replacing a dns server won't impact the cluster itself, no version change, no flavor change | 10:40 |
flwang1 | same for container_infra_prefix | 10:41 |
flwang1 | i'm not proposing editing everything in a CT | 10:41 |
flwang1 | but some of them can be improved | 10:41 |
strigazi | allow partially editing? | 10:41 |
strigazi | partially or not | 10:42 |
flwang1 | just like cluster update: you can't update everything in a cluster, only the node count | 10:42 |
flwang1 | partially | 10:42 |
flwang1 | like, dns_nameserver, container_infra_prefix, etc etc | 10:42 |
strigazi | First, I want to clarify that I and our team at CERN want to address the issue because it makes life horrible | 10:43 |
flwang1 | in other words, after changing those attributes/labels, the clusters created with that CT before and after are the same | 10:43 |
strigazi | what I don't like is that we lose track of the changes. | 10:44 |
*** ttsiouts has quit IRC | 10:44 | |
flwang1 | that's a good point, but we can do it at stage 2 | 10:45 |
strigazi | we see that already because we touch the db | 10:45 |
strigazi | it is the same thing | 10:45 |
flwang1 | what do you mean? | 10:45 |
strigazi | for example | 10:45 |
strigazi | k8s x.y.z has a bug | 10:46 |
strigazi | k8s x.y.z+n includes the patch | 10:46 |
strigazi | when we want to advertise the change we just change the value in the db | 10:47 |
strigazi | which is horrible | 10:47 |
strigazi | but in practice it is the same thing as doing it with the API. | 10:47 |
strigazi | with the api you have some validation, but with respect to logging and monitoring, the data of the service is the same | 10:48 |
flwang1 | i see | 10:48 |
flwang1 | so let's do it? | 10:48 |
flwang1 | what's your concern now? | 10:48 |
strigazi | let's version the cluster templates, would be the answer | 10:49 |
flwang1 | another attribute that needs to be updatable is 'name' | 10:49 |
strigazi | just allowing editing of the CTs is not that small a change | 10:50 |
flwang1 | for example, we'd like to keep a template name 'k8s-v1.11.2-prod' | 10:50 |
strigazi | the uuid will be different I understand | 10:50 |
flwang1 | when there is a new change, we can rename the existing one to 'k8s-v1.11.2-prod-20181009' and then create a new one with name 'k8s-v1.11.2-prod' | 10:51 |
flwang1 | yes | 10:51 |
strigazi | so for that model, there is a better and simpler solution | 10:51 |
flwang1 | we're using the same way for our public images | 10:51 |
strigazi | let me explain | 10:51 |
strigazi | 1. public CT {uuid: 1, name: foo, labels: {bar: val} } | 10:52 |
*** udesale has quit IRC | 10:52 | |
strigazi | you want: | 10:52 |
strigazi | public CT {uuid: 2, name: foo, labels: {bar: val5} } | 10:52 |
strigazi | correct? | 10:52 |
flwang1 | and we'd like to rename the CT with uuid:1 | 10:53 |
strigazi | why? | 10:53 |
flwang1 | to name foo-xxx | 10:53 |
flwang1 | because we don't want to confuse user | 10:53 |
strigazi | even better | 10:53 |
strigazi | got it | 10:53 |
flwang1 | user is using ansible/terraform to look for the template | 10:53 |
strigazi | oh, wait, what I don't understand is why you want the same id | 10:54 |
strigazi | it doesn't make sense | 10:54 |
flwang1 | i'm not asking the same id | 10:54 |
flwang1 | i don't want to touch the id | 10:54 |
flwang1 | and i don't think change id is a good idea for any case | 10:55 |
strigazi | so the proposal is (it is not only my idea, it was proposed a long time ago) | 10:55 |
strigazi | to have a deprecated/hide field. | 10:56 |
flwang1 | that's the way proposed in glance looooong time ago | 10:56 |
strigazi | like you have with images | 10:56 |
flwang1 | if hide is true, then user can't list/show it, right? | 10:57 |
strigazi | yes, but we don't want to only hide the listing | 10:58 |
strigazi | we don't want users to use the old CTs | 10:58 |
strigazi | so deprecated | 10:58 |
strigazi | if it is deprecated you don't list and you can't use it | 10:58 |
flwang1 | ok | 10:58 |
strigazi | from the ops point of view | 10:59 |
strigazi | he releases a new CT | 10:59 |
flwang1 | it's an OK solution for me | 10:59 |
strigazi | same name, deprecates the old one | 10:59 |
flwang1 | why isn't it implemented? | 10:59 |
strigazi | even not same name, as the operator wants | 10:59 |
strigazi | man power, priority | 10:59 |
flwang1 | i can do that | 11:00 |
flwang1 | it's not a big change i think | 11:00 |
strigazi | it's way smaller than what we discussed | 11:00 |
flwang1 | i can do it in this cycle | 11:00 |
flwang1 | not really, but i won't argue | 11:01 |
flwang1 | i'm happy with this solution | 11:01 |
*** ttsiouts has joined #openstack-containers | 11:01 | |
flwang1 | when a user wants to update an existing cluster, does it need to get the template info? | 11:03 |
flwang1 | if the template is deprecated at that moment, what will happen? | 11:03 |
strigazi | deprecated CTs won't be touched | 11:04 |
flwang1 | see my above question | 11:04 |
flwang1 | if there is a cluster created with a deprecated CT, and the user wants to update it to add more nodes, what could happen? | 11:05 |
strigazi | you create a new one with the same info | 11:05 |
strigazi | oh | 11:05 |
strigazi | wait, I misread it | 11:05 |
strigazi | it would work | 11:05 |
strigazi | scale will work | 11:06 |
flwang1 | so it is still accessible for that case, right? | 11:06 |
strigazi | yes | 11:06 |
flwang1 | that's the special case? | 11:06 |
strigazi | actually, only creation is the special case | 11:06 |
strigazi | and show | 11:06 |
flwang1 | ok, depends on the view | 11:07 |
flwang1 | my next question is | 11:07 |
strigazi | well user should be able to do show | 11:07 |
strigazi | to see what they are using | 11:07 |
flwang1 | if user is using ansible or terraform, does that mean they need to check the deprecated attribute to get the correct one? | 11:07 |
strigazi | what terraform takes as parameters? | 11:08 |
strigazi | correct syntax, what are the parameters used by terraform? same as the client? | 11:08 |
strigazi | it doesn't matter. | 11:09 |
*** ttsiouts has quit IRC | 11:09 | |
flwang1 | https://www.terraform.io/docs/providers/openstack/ | 11:10 |
strigazi | if the CT is deprecated and the client (terraform, osc, ansible) tries to create a cluster with a CT that is deprecated the api will return 401 | 11:10 |
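Sketched as a hypothetical operator/user flow once a deprecated field exists; the 'deprecated' attribute does not exist yet, and the commands only illustrate the behaviour agreed above, with the cluster create rejected by the API:

```bash
# Operator deprecates the old public template
# ('deprecated' is the proposed attribute, not an existing API field).
openstack coe cluster template update k8s-v1.11.2-prod replace deprecated=true

# Deprecated CTs would no longer show up for normal users:
openstack coe cluster template list

# Creating a new cluster from a deprecated CT would be rejected by the API...
openstack coe cluster create new-cluster \
    --cluster-template k8s-v1.11.2-prod --node-count 2   # expected: 4xx error

# ...while clusters that already reference it keep working, e.g. scaling:
openstack coe cluster update existing-cluster replace node_count=5
```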
flwang1 | ok | 11:11 |
flwang1 | i will discuss the proposal with our support/business people and our customer to get ideas from their view | 11:11 |
strigazi | is it unreasonable? unexpected? | 11:11 |
flwang1 | no, as i said, it's an OK solution for me | 11:11 |
flwang1 | software is always a trade-off game | 11:12 |
flwang1 | i just want to avoid some design issue when we can get comments from different parties | 11:12 |
strigazi | shouldn't this be done public though? | 11:13 |
strigazi | what are your concerns? | 11:13 |
flwang1 | like send an email to mailing list? | 11:13 |
*** ttsiouts has joined #openstack-containers | 11:13 | |
strigazi | ML or gerrit | 11:13 |
flwang1 | it's not consistent with the way we're dealing with images | 11:13 |
flwang1 | we do have some customers now | 11:14 |
strigazi | how are you dealing with images? | 11:14 |
flwang1 | so i'd like to know their preference | 11:14 |
flwang1 | we rename old images and create new one with same name | 11:14 |
flwang1 | again, i think this solution works | 11:15 |
flwang1 | i can start with a spec and send email to mailing list to get feedback | 11:15 |
strigazi | hmm, this is a convention. I don't think it is incompatible with this solution | 11:15 |
flwang1 | no, it shouldn't i think | 11:15 |
flwang1 | to be short | 11:16 |
strigazi | the name could change, it doesn't carry info. | 11:16 |
strigazi | ok | 11:16 |
flwang1 | i'm happy with current way | 11:16 |
flwang1 | i can propose spec | 11:16 |
flwang1 | yep, allowing name change is an easy baby step we can take | 11:17 |
strigazi | change names yes, values no | 11:17 |
strigazi | what about cluster template versioning? We (CERN) could do it | 11:18 |
flwang1 | ok, we can start with name change, and discuss the details of how to handle this | 11:19 |
flwang1 | deal? | 11:19 |
strigazi | you mean name change and deprecated field? or just name change? | 11:20 |
flwang1 | separately | 11:20 |
flwang1 | name change firstly | 11:20 |
flwang1 | with that, 80% requirements can be covered | 11:21 |
strigazi | deal | 11:21 |
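The rename-first workflow would then look roughly like this, mirroring what is already done for public images; it assumes the API starts accepting name changes for referenced public CTs as discussed, and the names and flag values are only examples:

```bash
# Keep the well-known name pointing at the newest template:
# 1. rename the current template out of the way,
# 2. create a fresh one under the original name.
openstack coe cluster template update k8s-v1.11.2-prod \
    replace name=k8s-v1.11.2-prod-20181009

openstack coe cluster template create k8s-v1.11.2-prod \
    --coe kubernetes \
    --image fedora-atomic-27 \
    --external-network public \
    --public \
    --labels kube_tag=v1.11.2
```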
flwang1 | then let's propose a spec to discuss the 'deprecated' idea, sounds like a plan? | 11:21 |
strigazi | yes | 11:21 |
flwang1 | cool | 11:21 |
flwang1 | strigazi: thank you for your time | 11:21 |
flwang1 | big progress today | 11:21 |
*** slagle has quit IRC | 11:22 | |
strigazi | you are welcome | 11:22 |
strigazi | we need to get Blizzard on board too though :) | 11:22 |
flwang1 | yep, we can add them as reviewer | 11:23 |
flwang1 | for code change | 11:23 |
strigazi | will you attend the meeting (tmr for you) | 11:23 |
flwang1 | yep | 11:23 |
flwang1 | could be late since i work late today | 11:24 |
flwang1 | 00:24 here | 11:24 |
*** ttsiouts has quit IRC | 12:01 | |
*** ttsiouts has joined #openstack-containers | 12:02 | |
*** zul has joined #openstack-containers | 12:05 | |
*** ttsiouts has quit IRC | 12:10 | |
*** ttsiouts has joined #openstack-containers | 12:22 | |
*** lpetrut has joined #openstack-containers | 12:58 | |
*** zul has quit IRC | 13:09 | |
*** udesale has joined #openstack-containers | 13:13 | |
*** ttsiouts has quit IRC | 13:13 | |
*** udesale has quit IRC | 13:21 | |
*** janki has quit IRC | 13:23 | |
*** ttsiouts has joined #openstack-containers | 13:37 | |
*** hongbin has joined #openstack-containers | 13:48 | |
*** gsimondo1 has joined #openstack-containers | 13:51 | |
*** ttsiouts has quit IRC | 13:52 | |
*** gsimondon has quit IRC | 13:54 | |
*** ttsiouts has joined #openstack-containers | 14:02 | |
*** itlinux has quit IRC | 14:07 | |
*** ttsiouts has quit IRC | 14:11 | |
*** ttsiouts has joined #openstack-containers | 14:17 | |
*** serlex has quit IRC | 14:19 | |
*** ramishra has quit IRC | 14:55 | |
*** munimeha1 has joined #openstack-containers | 14:55 | |
*** lpetrut has quit IRC | 14:56 | |
*** gsimondo1 has quit IRC | 15:07 | |
*** itlinux has joined #openstack-containers | 15:08 | |
*** isitirctime has joined #openstack-containers | 15:12 | |
isitirctime | Hope everyone is well. Is there a mechanism to rollback a magnum upgrade that failed? The stack is UPDATE_FAILED status and I can make any modifications now. | 15:13 |
isitirctime | I am thinking I need to modify the magnum_service table and set back to complete status and set the node count back. But I am a little worried about what modifications may be needed in the heat table | 15:18 |
*** ttsiouts has quit IRC | 15:33 | |
*** ttsiouts has joined #openstack-containers | 15:33 | |
isitirctime | sorry not magnum_service table cluster table | 15:36 |
*** ttsiouts has quit IRC | 15:38 | |
*** ianychoi has quit IRC | 15:39 | |
*** munimeha1 has quit IRC | 15:43 | |
*** janki has joined #openstack-containers | 15:52 | |
*** salmankhan has quit IRC | 15:58 | |
*** belmorei_ has quit IRC | 16:03 | |
*** isitirctime has quit IRC | 16:04 | |
*** belmoreira has joined #openstack-containers | 16:10 | |
*** chhagarw has joined #openstack-containers | 16:17 | |
*** mattgo has quit IRC | 16:27 | |
*** janki has quit IRC | 17:14 | |
*** janki has joined #openstack-containers | 17:22 | |
*** gsimondon has joined #openstack-containers | 17:51 | |
*** gsimondo1 has joined #openstack-containers | 18:14 | |
*** gsimondon has quit IRC | 18:16 | |
*** kaiokmo has quit IRC | 18:26 | |
*** chhagarw has quit IRC | 18:28 | |
*** chhagarw has joined #openstack-containers | 18:38 | |
*** janki has quit IRC | 18:38 | |
*** salmankhan has joined #openstack-containers | 18:44 | |
*** gsimondon has joined #openstack-containers | 18:47 | |
*** flwang1 has quit IRC | 18:47 | |
*** gsimondo1 has quit IRC | 18:48 | |
*** kaiokmo has joined #openstack-containers | 18:51 | |
*** chhagarw has quit IRC | 19:17 | |
*** gsimondo1 has joined #openstack-containers | 19:20 | |
*** gsimondon has quit IRC | 19:22 | |
*** spiette has quit IRC | 19:58 | |
*** pcaruana has quit IRC | 20:37 | |
*** ttsiouts has joined #openstack-containers | 20:51 | |
*** ttsiouts has quit IRC | 21:00 | |
strigazi | #startmeeting containers | 21:00 |
openstack | Meeting started Tue Oct 9 21:00:39 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot. | 21:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 21:00 |
*** openstack changes topic to " (Meeting topic: containers)" | 21:00 | |
openstack | The meeting name has been set to 'containers' | 21:00 |
strigazi | #topic Roll Call | 21:00 |
*** ttsiouts has joined #openstack-containers | 21:00 | |
*** openstack changes topic to "Roll Call (Meeting topic: containers)" | 21:00 | |
strigazi | o/ | 21:00 |
ttsiouts | o/ | 21:01 |
cbrumm | o/ | 21:01 |
cbrumm | jim won't be able to make it today | 21:03 |
strigazi | Hello ttsiouts cbrumm, we will be cozy | 21:03 |
strigazi | flwang: will join some time later I think | 21:04 |
strigazi | #topic announcements | 21:04 |
*** openstack changes topic to "announcements (Meeting topic: containers)" | 21:04 | |
flwang | o/ | 21:04 |
flwang | sorry, was in a standup | 21:05 |
strigazi | last week we added some patches from master to rocky with flwang, we'll do a point release this week I hope. I just want to add the flannel cni patch in this release. | 21:05 |
flwang | strigazi: i will review it today | 21:05 |
strigazi | the release will be 7.1.0 | 21:05 |
strigazi | flwang: thanks | 21:06 |
strigazi | #topic stories/tasks | 21:06 |
*** openstack changes topic to "stories/tasks (Meeting topic: containers)" | 21:06 | |
strigazi | We worked with ttsiouts a bit on the spec for nodegroups and he updated it, so reviews are welcome. Shoot questions to ttsiouts if you want :) | 21:07 |
flwang | great | 21:07 |
strigazi | since Jim is not here we can discuss it all together in gerrit, to be in sync. | 21:08 |
ttsiouts | sounds good! | 21:08 |
strigazi | This morning (morning for me), we discussed allowing cluster template rename with flwang. We don't have a story for it, but flwang will create one right? flwang we can push a patch as docs or a spec to describe the change. I can do the doc if you want | 21:10 |
cbrumm | is this so that renaming templates doesn't have to require database actions? | 21:11 |
cbrumm | a manual database action | 21:11 |
strigazi | yes | 21:11 |
strigazi | the uuid and the values will remain immutable | 21:12 |
strigazi | the next will be adding a "deprecated" field in the CT object | 21:12 |
cbrumm | so we can cleanly call one "latest" or "deprecated" | 21:12 |
flwang | i do have a story, wait a sec | 21:12 |
flwang | cbrumm: or any tag you want | 21:12 |
cbrumm | nice | 21:13 |
cbrumm | thank you | 21:13 |
flwang | https://storyboard.openstack.org/#!/story/2003960 | 21:13 |
flwang | i will update above story to reflect the requirements strigazi mentioned above | 21:13 |
strigazi | yes, thank you | 21:14 |
strigazi | to clarify, only the template name will change as a first step | 21:14 |
flwang | i just created 2 tasks | 21:15 |
flwang | one for name change, another one for 'deprecated' attribute | 21:15 |
strigazi | the second independent change is to add the deprecated field which will hide the template from listing and even if users know the uuid they won't be able to create new clusters. | 21:15 |
strigazi | flwang: exactly | 21:15 |
strigazi | cbrumm: ttsiouts the discussion starts here if you want to have a look http://eavesdrop.openstack.org/irclogs/%23openstack-containers/%23openstack-containers.2018-10-09.log.html#t2018-10-09T10:32:04 | 21:17 |
strigazi | Final item for me, as discussed with flwang this morning, I'll publish a new heat-container-agent image tagged rocky-stable which includes the multi-region fix | 21:18 |
ttsiouts | strigazi:thanks | 21:18 |
strigazi | and then a patch to include the agent tag in the labels. | 21:19 |
flwang | strigazi: https://review.openstack.org/#/c/585061/1/magnum/drivers/common/image/heat-container-agent/Makefile | 21:19 |
strigazi | that is all from me, any comments questions? | 21:19 |
flwang | the above makefile could be useful for you | 21:20 |
strigazi | I'll have a look, other projects loci and kolla build with ansible | 21:20 |
flwang | no problem | 21:20 |
strigazi | https://review.openstack.org/#/c/585420/16/playbooks/vars.yaml | 21:21 |
strigazi | we can add account there etc | 21:21 |
*** cbrumm has quit IRC | 21:22 | |
flwang | cool, no matter which way we build, i'd like to see us have a more stable process | 21:23 |
strigazi | any question, comment? | 21:23 |
strigazi | flwang: also to solve, the "strigazi gets hit by a bus problem" | 21:23 |
flwang | what's your busy problem? missed the last one? | 21:24 |
flwang | bus | 21:24 |
strigazi | the process of building/publishing the images is done only by me so far and it is not documented/automated | 21:25 |
flwang | cool, let me know if you need any help on that | 21:25 |
strigazi | I'll ping you for reviews on https://review.openstack.org/#/c/585420 | 21:26 |
flwang | @all, Catalyst Cloud just deployed Magnum on our production as a public managed k8s service, so welcome to come for questions, cheers | 21:26 |
flwang | strigazi: sure, no problem | 21:26 |
strigazi | \o/ | 21:27 |
flwang | strigazi: thank you for all you guys' help | 21:27 |
*** cbrumm has joined #openstack-containers | 21:27 | |
strigazi | anytime, really anytime | 21:27 |
flwang | it wouldn't happen without your help and support | 21:28 |
flwang | such a great team | 21:28 |
strigazi | :) | 21:28 |
flwang | in the next couple weeks, i will focus on polishing our magnum and will do upstream first | 21:30 |
strigazi | do we need a magnum-ui release? | 21:30 |
strigazi | your fixes are in? | 21:31 |
cbrumm | we have a few edits to the ui, mostly removal of user choices | 21:31 |
cbrumm | I'm not sure if they're intended for upstream though | 21:31 |
strigazi | cbrumm: which option for example? | 21:32 |
*** salmankhan has quit IRC | 21:32 | |
flwang | strigazi: it would be good to have a new release for ui | 21:32 |
cbrumm | I would need to check with Jim. But our UI only asks for template and minion count | 21:32 |
strigazi | we could make this configurable i guess | 21:33 |
strigazi | flwang: you need to backport to stable | 21:33 |
strigazi | flwang: you can test this image: docker.io/openstackmagnum/heat-container-agent:rocky-stable | 21:34 |
flwang | strigazi: yep, sure | 21:34 |
flwang | strigazi: ok, will do | 21:34 |
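A quick way to sanity-check the new image locally, and, once the pending label patch lands, how a test cluster could be pointed at it; heat_container_agent_tag is the label proposed in that patch, not something released yet, and the template name is a placeholder:

```bash
# Pull and inspect the freshly pushed agent image.
docker pull docker.io/openstackmagnum/heat-container-agent:rocky-stable
docker inspect --format '{{.Id}}' \
    docker.io/openstackmagnum/heat-container-agent:rocky-stable

# With the label patch applied, a test cluster could select it explicitly:
openstack coe cluster create agent-test \
    --cluster-template k8s-v1.11.2-prod \
    --labels heat_container_agent_tag=rocky-stable \
    --node-count 1
```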
strigazi | Anything else for the meeting? | 21:36 |
flwang | i'm good | 21:36 |
cbrumm | good here | 21:37 |
strigazi | ttsiouts: if you need anything come to my office :) | 21:37 |
strigazi | see you next week have a nice day cbrumm flwang | 21:37 |
ttsiouts | strigazi: will do | 21:38 |
ttsiouts | :) | 21:38 |
strigazi | #endmeeting | 21:38 |
*** openstack changes topic to "OpenStack Containers Team" | 21:38 | |
openstack | Meeting ended Tue Oct 9 21:38:16 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:38 |
cbrumm | bye all | 21:38 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-10-09-21.00.html | 21:38 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-10-09-21.00.txt | 21:38 |
openstack | Log: http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-10-09-21.00.log.html | 21:38 |
flwang | thank you | 21:39 |
*** ttsiouts has quit IRC | 21:39 | |
*** ttsiouts has joined #openstack-containers | 21:39 | |
*** ttsiouts has quit IRC | 21:44 | |
*** salmankhan has joined #openstack-containers | 21:51 | |
*** salmankhan has quit IRC | 21:56 | |
*** imdigitaljim has joined #openstack-containers | 22:02 | |
imdigitaljim | hey all | 22:03 |
imdigitaljim | sorry i missed meeting | 22:03 |
imdigitaljim | if you're present | 22:03 |
imdigitaljim | was in another meeting | 22:03 |
flwang | i'm still around | 22:06 |
flwang | imdigitaljim: i do have a question for you | 22:06 |
flwang | when you use calico and k8s v1.11.1+, did you ever see a 'sandbox create' problem? | 22:06 |
imdigitaljim | not at all | 22:07 |
imdigitaljim | i think i do have some additional arguments that aren't in magnum yet | 22:07 |
flwang | mind sharing that? | 22:07 |
flwang | especially the kubelet arguments | 22:07 |
imdigitaljim | yeah | 22:08 |
imdigitaljim | 1 sec | 22:08 |
imdigitaljim | and thats where it is | 22:08 |
flwang | thanks a lot | 22:08 |
*** gsimondo1 has quit IRC | 22:08 | |
imdigitaljim | np | 22:08 |
imdigitaljim | and the more our driver moves forward | 22:08 |
imdigitaljim | it might be unwieldy to push it on top of the current driver | 22:08 |
imdigitaljim | we were thinking it might just be another driver v2 perhaps | 22:08 |
flwang | that's not a bad idea | 22:08 |
imdigitaljim | i could at least stage it there | 22:09 |
flwang | is it using standalone cloud provider controller? | 22:09 |
imdigitaljim | and if you all want to use it | 22:09 |
imdigitaljim | or not | 22:09 |
imdigitaljim | it is using ccm yes | 22:09 |
flwang | cool | 22:09 |
imdigitaljim | we have *some* pending upstream stuff | 22:09 |
imdigitaljim | but other than that | 22:09 |
imdigitaljim | it even cleans up octavia, cinder, and neutron lbaas resources | 22:10 |
imdigitaljim | on cluster delete | 22:10 |
flwang | that's cool | 22:10 |
imdigitaljim | http://paste.openstack.org/show/731795/ | 22:10 |
imdigitaljim | our cluster boot time is starting to drop too | 22:10 |
imdigitaljim | it's around 6 minutes now but i've got some changes that might drop it to ~4 | 22:10 |
flwang | yep, the boot time is one of the area i'd like to improve | 22:11 |
imdigitaljim | i haven't moved all eligible fields to the kubelet config yet but those are operational at this time | 22:11 |
flwang | "--pod-infra-container-image=${CONTAINER_INFRA_PREFIX:-gcr.io/google_containers/}pause:3.0" this one is very interesting for me | 22:11 |
imdigitaljim | yeah i mean if we switched to my driver, you'd immediately get the benefit :) | 22:12 |
imdigitaljim | https://www.ianlewis.org/en/almighty-pause-container | 22:12 |
flwang | i found it with google before with my issue of sandbox | 22:13 |
flwang | and i think that's probably related | 22:13 |
imdigitaljim | enforceNodeAllocatable: | 22:13 |
imdigitaljim | - pods | 22:13 |
flwang | sandbox is actually using the pause image, right? | 22:13 |
imdigitaljim | i think this is also | 22:13 |
imdigitaljim | what your problem is | 22:14 |
imdigitaljim | do you have this arg? | 22:14 |
imdigitaljim | and cgroupsperqos | 22:14 |
flwang | i don't have this arg, since we're using the upstream version | 22:15 |
imdigitaljim | try it out | 22:15 |
flwang | and no cgroupsPerQOS: true | 22:15 |
imdigitaljim | and see if it fixes it | 22:15 |
flwang | yep, that's what i'm going to try | 22:15 |
imdigitaljim | im guessing on your error | 22:15 |
flwang | wait a sec | 22:15 |
imdigitaljim | but i recall fixing a related issue with these args | 22:15 |
imdigitaljim | as flags i think its --enforce-node-allocatable=pods --cgroups-per-qos=true | 22:16 |
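For completeness, the two settings mentioned above in both forms, flags and the kubelet config file; the values are the ones imdigitaljim describes, while the config-file path is an assumption:

```bash
# Flag form, appended to the existing kubelet/hyperkube invocation:
kubelet \
    --enforce-node-allocatable=pods \
    --cgroups-per-qos=true \
    --pod-infra-container-image=gcr.io/google_containers/pause:3.0

# Equivalent KubeletConfiguration fields, if the kubelet is driven by a config file
# (--pod-infra-container-image stays a flag):
cat <<'EOF' > /etc/kubernetes/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupsPerQOS: true
enforceNodeAllocatable:
- pods
EOF
```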
flwang | http://paste.openstack.org/show/731797/ | 22:18 |
imdigitaljim | whats the kubelet args on your minions | 22:19 |
imdigitaljim | ps -ef | grep kubelet | 22:19 |
imdigitaljim | output | 22:19 |
flwang | wait a sec | 22:20 |
flwang | http://paste.openstack.org/show/731799/ | 22:21 |
imdigitaljim | missing | 22:22 |
imdigitaljim | --enforce-node-allocatable=pods | 22:22 |
imdigitaljim | you have --enforce-node-allocatable= | 22:22 |
flwang | let me try | 22:22 |
imdigitaljim | --cgroups-per-qos=false | 22:22 |
imdigitaljim | you also have false | 22:22 |
imdigitaljim | :P | 22:22 |
flwang | the problem is not always repeatable | 22:22 |
imdigitaljim | yeah | 22:23 |
flwang | that's why it's weird | 22:23 |
imdigitaljim | i had the same problem | 22:23 |
flwang | really? | 22:23 |
imdigitaljim | and with these 2 things (for us) its 100% gone | 22:23 |
imdigitaljim | havent seen it for like 6mo | 22:23 |
flwang | cooooooool | 22:23 |
flwang | let me try now | 22:23 |
imdigitaljim | ping me im gonna minimize | 22:25 |
flwang | sure, thanks a lot | 22:25 |
flwang | same | 22:26 |
flwang | :( | 22:27 |
flwang | i'm going to add --pod-infra-container-image=${CONTAINER_INFRA_PREFIX:-gcr.io/google_containers/}pause:3.0 | 22:27 |
imdigitaljim | good hoice | 22:28 |
imdigitaljim | choice* | 22:28 |
imdigitaljim | i think theres a 3.1 | 22:28 |
imdigitaljim | but | 22:28 |
imdigitaljim | insignificant difference | 22:28 |
flwang | http://paste.openstack.org/show/731800/ | 22:29 |
flwang | seems i can't just add --cgroups-per-qos=false in /etc/kubernetes/kubelet | 22:30 |
imdigitaljim | well its here now | 22:30 |
imdigitaljim | the --cgroups-per-qos=false --enforce-node-allocatable= at the beginning is from the atomic-system-container | 22:31 |
imdigitaljim | we've edited it out | 22:31 |
imdigitaljim | we want 100% control of the image flags | 22:31 |
imdigitaljim | although the way you have yours now it *should* be accepting the latest flag | 22:31 |
imdigitaljim | is your problem present after restarting the pods? | 22:31 |
flwang | --cgroups-per-qos=false --enforce-node-allocatable= --logtostderr=true --v=0 --address=0.0.0.0 --allow-privileged=true | 22:31 |
flwang | but with the above args, how will it behave? | 22:31 |
imdigitaljim | 1 | 22:32 |
imdigitaljim | . | 22:32 |
imdigitaljim | hyperkube kubelet --cgroups-per-qos=false --enforce-node-allocatable= --logtostderr=true --v=0 --address=0.0.0.0 --allow-privileged=true --pod-infra-container-image=gcr.io/google_containers/pause:3.0 --cgroups-per-qos=true --enforce-node-allocatable=pods | 22:32 |
imdigitaljim | near the end | 22:32 |
imdigitaljim | (theres more flags i just got it off) | 22:32 |
flwang | yep | 22:32 |
flwang | i mean | 22:32 |
flwang | with above args, how it works | 22:32 |
imdigitaljim | oh did it work for you? | 22:32 |
imdigitaljim | (or i dont understand) | 22:33 |
flwang | hyperkube kubelet --cgroups-per-qos=false --enforce-node-allocatable= --logtostderr=true --v=0 --address=0.0.0.0 --allow-privileged=true --pod-infra-container-image=gcr.io/google_containers/pause:3.0 --cgroups-per-qos=true --enforce-node-allocatable=pods | 22:33 |
flwang | no it doens't work | 22:33 |
imdigitaljim | ah ok still same issue | 22:33 |
imdigitaljim | hmm | 22:33 |
flwang | not sure, which value kubelet will take | 22:33 |
imdigitaljim | it takes the latest iirc | 22:33 |
imdigitaljim | but we dont have them at all | 22:34 |
imdigitaljim | just the 'correct' values | 22:34 |
imdigitaljim | what do other logs say | 22:34 |
imdigitaljim | like kubelet logs | 22:34 |
flwang | wait a sec | 22:35 |
flwang | http://paste.openstack.org/show/731801/ | 22:36 |
imdigitaljim | umm | 22:38 |
imdigitaljim | this is weird | 22:38 |
flwang | yep | 22:38 |
flwang | it's not always happened | 22:39 |
imdigitaljim | but would you be willing to delete and restart your calico | 22:39 |
flwang | delete calico node? | 22:39 |
flwang | i mean calico node pod? | 22:39 |
imdigitaljim | yeah delete the daemonset | 22:39 |
imdigitaljim | and redeploy? | 22:39 |
flwang | another weird thing is, the calico-kube-controller can start on that worker node | 22:40 |
imdigitaljim | hmm | 22:40 |
flwang | i think it's related to calico, but i have no clue | 22:40 |
imdigitaljim | on a side note | 22:40 |
imdigitaljim | we put the controller on the master | 22:41 |
flwang | how? by label? | 22:41 |
*** rcernin has joined #openstack-containers | 22:42 | |
imdigitaljim | yeah | 22:44 |
flwang | ok, interesting | 22:44 |
imdigitaljim | - key: dedicated | 22:45 |
imdigitaljim | value: master | 22:45 |
imdigitaljim | effect: NoSchedule | 22:45 |
imdigitaljim | - key: CriticalAddonsOnly | 22:45 |
imdigitaljim | value: "True" | 22:45 |
imdigitaljim | effect: NoSchedule | 22:45 |
imdigitaljim | tolerations: | 22:45 |
imdigitaljim | - key: dedicated | 22:45 |
imdigitaljim | value: master | 22:45 |
imdigitaljim | effect: NoSchedule | 22:45 |
imdigitaljim | - key: CriticalAddonsOnly | 22:45 |
imdigitaljim | value: "True" | 22:45 |
imdigitaljim | effect: NoSchedule | 22:45 |
imdigitaljim | nodeSelector: | 22:45 |
imdigitaljim | node-role.kubernetes.io/master: "" | 22:45 |
imdigitaljim | specifically | 22:45 |
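Reassembled from the paste above, the relevant piece of the calico-kube-controllers spec looks roughly like this; the deployment name, namespace, and patch command are assumptions about their driver, while the tolerations and nodeSelector values are the ones pasted:

```bash
# Pin calico-kube-controllers to the master by tolerating the master taints
# and selecting the master node role.
kubectl -n kube-system patch deployment calico-kube-controllers --patch "$(cat <<'EOF'
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: dedicated
        value: master
        effect: NoSchedule
      - key: CriticalAddonsOnly
        value: "True"
        effect: NoSchedule
EOF
)"
```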
flwang | how many replicas? just 1? | 22:46 |
*** hongbin has quit IRC | 22:58 | |
*** itlinux has quit IRC | 23:14 |