*** itlinux has joined #openstack-containers | 00:11 | |
*** hongbin has quit IRC | 00:11 | |
*** itlinux has quit IRC | 00:11 | |
openstackgerrit | Lingxian Kong proposed openstack/magnum master: Delete Octavia loadbalancers for fedora atomic k8s driver https://review.openstack.org/497144 | 00:18 |
mnaser | hey lxkong | 00:55 |
mnaser | sorry, busy @ kubecon | 00:55 |
lxkong | mnaser: oh i didn't realize you're at kubecon, must be having a lot of fun | 00:56 |
mnaser | lxkong: thanks for that super useful feedback, unfortunately, the heat-container-agent logs dont give us the info we need | 00:56 |
mnaser | how were you able to discover this? is there anywhere we can add more logging in our ci to catch this? | 00:56 |
lxkong | mnaser: i tested in my devstack | 00:56 |
mnaser | hmm | 00:56 |
mnaser | ok i see | 00:56 |
mnaser | should we run heat-container-agent with extra privs then? | 00:57 |
lxkong | yeah, probably | 00:57 |
lxkong | at least some volume mapping | 00:57 |
mnaser | lxkong: well it probably needs to be able to pull atomic images too | 00:58 |
lxkong | also because it's running inside atomic, not sure if we need to tweak some atomic configuration | 00:58 |
openstackgerrit | Lingxian Kong proposed openstack/magnum master: [DO NOT MERGE] Test https://review.openstack.org/623104 | 01:09 |
*** dave-mccowan has joined #openstack-containers | 01:17 | |
*** dave-mccowan has quit IRC | 02:14 | |
*** itlinux has joined #openstack-containers | 02:18 | |
*** hongbin has joined #openstack-containers | 02:45 | |
*** ykarel has joined #openstack-containers | 03:26 | |
*** PagliaccisCloud has joined #openstack-containers | 03:43 | |
zufar | Hi all, I am creating a swarm cluster but it is stuck in CREATE_IN_PROGRESS status. when i check the heat-engine log, the last entry is http://paste.opensuse.org/46623027 | 03:58 |
zufar | anyone know this problem? | 03:58 |
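A minimal troubleshooting sketch for a cluster stuck in CREATE_IN_PROGRESS; the cluster/stack names and nesting depth below are placeholders, not values from this log:

    # find the Heat stack behind the Magnum cluster and the reason it reports
    openstack coe cluster show <cluster-name> -c stack_id -c status_reason
    # then look for the resource(s) the stack is still waiting on
    openstack stack resource list <stack-id> -n 2 | grep -i in_progress

The stuck resource is usually a wait condition, which points at cloud-init or the agent on the corresponding node.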
*** PagliaccisCloud has quit IRC | 04:03 | |
*** itlinux has quit IRC | 04:11 | |
*** hongbin has quit IRC | 04:20 | |
*** itlinux has joined #openstack-containers | 04:43 | |
*** udesale has joined #openstack-containers | 04:49 | |
*** ykarel has quit IRC | 05:03 | |
*** itlinux has quit IRC | 05:11 | |
*** ykarel has joined #openstack-containers | 05:20 | |
*** rcernin has quit IRC | 07:09 | |
*** zufar has quit IRC | 07:12 | |
*** pcaruana has joined #openstack-containers | 07:12 | |
strigazi | lxkong: mnaser for https://review.openstack.org/#/c/623724 , things must be done like in: https://review.openstack.org/#/c/561858/1/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-minion.sh I'm working on that for upgrades. | 08:06 |
*** ykarel is now known as ykarel|lunch | 08:34 | |
lxkong | thanks strigazi, i have more comments on that patch, i'm also testing something related. We're very keen to decrease cluster creation time | 08:37 |
strigazi | lxkong: do you pull from docker.io? | 08:38 |
strigazi | lxkong: this is a very big overhead | 08:38 |
lxkong | yeah, using internal docker registry is part of the plan | 08:38 |
lxkong | we also need to create masters and workers in parallel | 08:38 |
lxkong | which also matters | 08:39 |
strigazi | lxkong: I say it is first on the list, pull from local registry. Parallel pull is next. | 08:39 |
lxkong | yeah, agree | 08:39 |
lxkong | and this https://storyboard.openstack.org/#!/story/2004564 | 08:39 |
strigazi | lxkong: also all the implementation for pulling from a local registry has been done for a while. | 08:40 |
lxkong | using a local docker registry is going to be slow to happen internally for us, for some reason ;-( | 08:41 |
lxkong | i mean the infra deployment to get a local docker registry up and running | 08:41 |
lxkong | strigazi: fyi, https://review.openstack.org/#/c/497144/ is ready for review | 08:43 |
strigazi | lxkong: i'm doing this now. | 08:43 |
lxkong | strigazi: i remember last time you said there is no cluster uuid in the lb description for some reason | 08:44 |
strigazi | lxkong: I'm using my patch for CPO as a dep | 08:44 |
lxkong | make sure you are using the latest version of CCM/magnum | 08:45 |
lxkong | my devstack environment works well | 08:45 |
strigazi | I'm always using master... | 08:45 |
lxkong | i'm using a controller-manger patched by that PR | 08:45 |
lxkong | docker.io/lingxiankong/kubernetes-controller-manager:v1.11.5-alpha | 08:46 |
strigazi | I'm not using and won't use patched kubernetes. I'm using CPO v0.2.0 which has the patch for the LB. | 08:47 |
lxkong | i just changed magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh file | 08:47 |
lxkong | strigazi: so you could see a new lb is created for the service but without cluster uuid in the description? | 08:48 |
lxkong | in my CPO test before, that also worked well... | 08:49 |
strigazi | see my comments here: https://review.openstack.org/#/c/620761/ | 08:49 |
lxkong | ah, ok | 08:49 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack https://review.openstack.org/577477 | 08:50 |
lxkong | you can test this patch again, maybe that's because of the hook mechanism | 08:50 |
lxkong | e.g. in a devstack env, you need to 'pip install -e .' in the magnum repo folder | 08:50 |
lxkong | but with the new approach that won't apply | 08:51 |
strigazi | lxkong: I'm testing your patch, I'll leave a comment if it works or not | 08:51 |
lxkong | i will submit a separate patch for a release note and/or some docs | 08:52 |
lxkong | strigazi: cool | 08:52 |
*** ttsiouts has joined #openstack-containers | 09:18 | |
*** belmoreira has quit IRC | 09:22 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack https://review.openstack.org/577477 | 09:22 |
*** suanand has joined #openstack-containers | 09:23 | |
*** belmoreira has joined #openstack-containers | 09:26 | |
*** ttsiouts has quit IRC | 09:28 | |
*** ttsiouts has joined #openstack-containers | 09:29 | |
*** sayalilunkad has quit IRC | 09:31 | |
*** ttsiouts has quit IRC | 09:33 | |
*** ykarel|lunch is now known as ykarel | 09:33 | |
*** ttsiouts has joined #openstack-containers | 09:40 | |
strigazi | lxkong: left a comment | 09:41 |
strigazi | lxkong: I left a comment in 497144 | 09:42 |
lxkong | strigazi: hi, saw your comment. currently, without cpo or patched k8s, this patch won't do anything for pre-delete, so personally i think it's fine to merge now (for us, we could backport immediately because we have a patched controller-manager) | 09:51 |
lxkong | strigazi: but if you're confident about the CPO patch, it's fine to wait | 09:52 |
lxkong | strigazi: have you fully tested CPO? service, pv, etc ? | 09:53 |
lxkong | s/CPO/CPO in magnum | 09:53 |
strigazi | ok, I don't like taking a patch that is not doing anything but we can take this. | 09:55 |
strigazi | why would CPO in magnum work in a different way? | 09:56 |
*** jonaspaulo has joined #openstack-containers | 09:56 | |
jonaspaulo | hi all | 09:56 |
*** PagliaccisCloud has joined #openstack-containers | 09:56 | |
lxkong | technically it shouldn't. i used to install CPO as a static pod; an atomic system container is another thing, so i'm not sure | 09:56 |
strigazi | lxkong: I'm using a DS for the CPO, what atomic system container has to do with anything? | 09:58 |
lxkong | i remember the atomic system container has special configuration for e.g. volume mapping. using a static pod, i can define volumes to map local folders into the container. i'm not familiar with the atomic system container, it's just another thing to me | 10:03 |
lxkong | strigazi: personally i'm not a fan of fedora... | 10:03 |
strigazi | I think atomic is irrelevant to the discussion of the CPO | 10:04 |
strigazi | lxkong: you can fork and use ubuntu | 10:04 |
lxkong | i hope i could :-) | 10:05 |
strigazi | why not? | 10:05 |
strigazi | maybe kubespray works better for you | 10:05 |
jonaspaulo | any1 getting this error on magnum on rocky: | 10:06 |
jonaspaulo | runc[2432]: Source [heat] Unavailable. runc[2432]: /var/lib/os-collect-config/local-data not found. Skipping runc[2432]: publicURL endpoint for orchestration service in null region not found | 10:06 |
lxkong | 'maybe kubespray works better for you', i also hope i could | 10:08 |
strigazi | jonaspaulo release? | 10:09 |
strigazi | jonaspaulo release of magnum and heat? | 10:09 |
strigazi | lxkong: or gardener | 10:10 |
jonaspaulo | i am using kolla-ansible to deploy everything installed through pip with OS release rocky | 10:10 |
brtknr | jonaspaulo is using rocky kolla ansible | 10:10 |
jonaspaulo | yep | 10:10 |
jonaspaulo | i have the patches already, like https://bugs.launchpad.net/ubuntu/+source/magnum/+bug/1793813 | 10:11 |
openstack | Launchpad bug 1793813 in magnum (Ubuntu) "magnum-api not working with www_authenticate_uri" [Undecided,Confirmed] | 10:11 |
jonaspaulo | this one is also present https://review.openstack.org/#/c/620006/ | 10:12 |
jonaspaulo | the issue remains that inside the master vm | 10:12 |
jonaspaulo | the region_name is null despite the patches | 10:13 |
strigazi | can you do atomic images list in the master? | 10:13 |
jonaspaulo | here /etc/os-collect-config.conf , here /var/lib/os-collect-config/heat_local.json or here /run/os-collect-config/heat_local.json for example | 10:13 |
jonaspaulo | ok give me some minutes to redeploy | 10:13 |
strigazi | jonaspaulo: should be rocky-stable | 10:15 |
strigazi | jonaspaulo: should be docker.io/openstackmagnum/heat-container-agent:rocky-stable | 10:15 |
strigazi | jonaspaulo: https://github.com/openstack/magnum/blob/stable/rocky/magnum/drivers/common/templates/kubernetes/fragments/start-container-agent.sh | 10:16 |
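For reference, a hedged way to verify the agent on the master node; unit and file names assume the usual Fedora Atomic layout:

    sudo atomic images list | grep heat-container-agent      # tag should match the Magnum release, e.g. rocky-stable
    sudo journalctl -u heat-container-agent --no-pager | tail -n 50
    cat /etc/os-collect-config.conf                           # the [heat] section should carry region_name and the auth url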
*** ppetit has joined #openstack-containers | 10:20 | |
*** salmankhan has joined #openstack-containers | 10:27 | |
strigazi | lxkong: does cinder work with CPO v0.2.0? | 10:27 |
*** salmankhan has quit IRC | 10:28 | |
jonaspaulo | > docker.io/openstackmagnum/heat-container-agent rocky-stable 2723793fc200 2018-12-13 10:23 183.33 MB ostree | 10:32 |
*** sayalilunkad has joined #openstack-containers | 10:35 | |
*** jonaspaulo_ has joined #openstack-containers | 10:35 | |
jonaspaulo_ | sorry disconnected | 10:35 |
jonaspaulo_ | strigazi: it is rocky stable | 10:36 |
jonaspaulo_ | and the start container script is also 1:1 with the link you provided | 10:36 |
*** jonaspaulo has quit IRC | 10:37 | |
*** ppetit has quit IRC | 10:41 | |
*** salmankhan has joined #openstack-containers | 10:45 | |
*** ttsiouts has quit IRC | 10:51 | |
*** ttsiouts has joined #openstack-containers | 10:52 | |
*** ttsiouts has quit IRC | 10:57 | |
*** salmankhan has quit IRC | 10:57 | |
*** suanand has quit IRC | 11:06 | |
*** salmankhan has joined #openstack-containers | 11:06 | |
*** salmankhan has quit IRC | 11:11 | |
mkuf | Hi there, I'm trying to deploy a k8s cluster with magnum on fedora-atomic in queens. Cloudinit and wc-notify on the master-node exit successfully and k8s services are up but no further nodes/minions get deployed. Anyone an idea what might be the issue? | 11:11 |
*** salmankhan has joined #openstack-containers | 11:11 | |
ykarel | mkuf, have u checked /var/log/cloud-init-output.log on master node | 11:16 |
ykarel | there u can find some hints | 11:16 |
jonaspaulo_ | hi mkuf, i am also having errors on rocky, but mine says publicURL endpoint for orchestration service in null region not found | 11:18 |
jonaspaulo_ | on the journalctl entries | 11:18 |
jonaspaulo_ | which is due to region_name being null | 11:18 |
jonaspaulo_ | but haven't figured out why yet | 11:18 |
mkuf | ykarel: yes, i already checked that. no errors get reported and most importantly the wc-notify gets executed and receives a 200 OK from heat. so from my understanding this should be the point when the minions get deployed. | 11:20 |
ykarel | mkuf, ack, yes, only after that are the minions deployed | 11:23 |
mkuf | ykarel: at least, thats the point when the minions of a swarm cluster get deployed (which works fine, opposed to k8s) | 11:23 |
mkuf | strange. :/ | 11:23 |
ykarel | there must be some issue with wc_notify, can u paste somewhere the script content | 11:24 |
ykarel | i remember there were some issues | 11:24 |
jonaspaulo_ | mkuf are you deploying with kolla-ansible? | 11:24 |
mkuf | ykarel: sure, i'll spin up a new cluster, give me a sec | 11:25 |
mkuf | jonaspaulo_: i'm using openstack-ansible for deployment | 11:25 |
jonaspaulo_ | kk | 11:26 |
jonaspaulo_ | so it's probably not an issue with magnum | 11:26 |
jonaspaulo_ | but i don't understand why, since all other parameters are ok, like the public uri etc | 11:26 |
jonaspaulo_ | and the region_name is null | 11:26 |
*** tobias-urdin is now known as tobias-urdin_afk | 11:41 | |
*** tobias-urdin_afk is now known as tobias-urdin | 11:42 | |
*** tobias-urdin is now known as tobias-urdin_afk | 11:43 | |
mkuf | ykarel: here's the service-state of wc-notify and a cat of the script that gets executed http://paste.openstack.org/show/737204/ also, a slightly redacted (removed domain-name) cloud-init-output.log, if you want to have a look http://paste.openstack.org/show/737205/ | 11:55 |
ykarel | at least the script looks wrong | 12:06 |
ykarel | ok = ok | 12:06 |
ykarel | the right side should be a call to healthz | 12:06 |
ykarel | anyway this should not block minion creation | 12:07 |
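For context, the intended check is roughly the following; this is a sketch of what the fragment normally renders to, not the exact template:

    # wait for the Kubernetes API before signalling Heat
    until curl -sf "http://127.0.0.1:8080/healthz"; do
        echo "Waiting for Kubernetes API..."
        sleep 5
    done
    $WC_NOTIFY --data-binary '{"status": "SUCCESS"}'

A literal ok = ok comparison succeeds immediately, so the wait condition would return before the API is actually healthy.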
brtknr | strigazi: if you try to create a magnum cluster with subnet ip range 192.168.0.0/16, it fails to create the cluster... is this a known issue documented anywhere? looks like it interacts with the default value of calico_ipv4pool: 192.168.0.0/16 | 12:13 |
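A hedged workaround sketch: override the calico_ipv4pool label so the pod CIDR does not overlap the cluster subnet (template name, image and CIDR below are examples only):

    openstack coe cluster template create k8s-calico-template \
        --image fedora-atomic-latest \
        --external-network public \
        --coe kubernetes \
        --network-driver calico \
        --labels calico_ipv4pool=10.244.0.0/16   # must not overlap the nodes' subnet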
*** shrasool has joined #openstack-containers | 12:14 | |
mkuf | ykarel: heat-engine.log shows a success message for the master creation but no further logs appear afterwards http://paste.openstack.org/show/737208/ | 12:15 |
*** PagliaccisCloud has quit IRC | 12:19 | |
*** udesale has quit IRC | 12:25 | |
*** udesale has joined #openstack-containers | 12:26 | |
ykarel | mkuf, hmm strange | 12:27 |
ykarel | i think strigazi or flwang can help with it, i have not deployed it in a long time | 12:27 |
*** mkuf_ has joined #openstack-containers | 12:45 | |
*** mkuf has quit IRC | 12:49 | |
*** tobias-urdin_afk is now known as tobias-urdin | 12:53 | |
*** robertomls has joined #openstack-containers | 13:11 | |
brtknr | mkuf_: what does your /var/log/cloud-init.log and /var/log/cloud-init-output.log files contain? | 13:13 |
brtknr | strigazi: although my k8s deployment is using flannel, not calico, so i don't see why the different subnet would affect it | 13:24 |
*** zul has quit IRC | 13:32 | |
*** zul has joined #openstack-containers | 13:41 | |
*** irclogbot_0 has quit IRC | 14:00 | |
*** irclogbot_0 has joined #openstack-containers | 14:08 | |
*** mkuf has joined #openstack-containers | 14:10 | |
*** mkuf_ has quit IRC | 14:13 | |
*** irclogbot_0 has quit IRC | 14:14 | |
strigazi | brtknr: I think the cluster subnet should be different from the overlay subnet | 14:21 |
brtknr | strigazi: my overlay subnet is 10.100.0.0/16, the default | 14:22 |
strigazi | brtknr: can should be your cluster template? | 14:22 |
brtknr | strigazi: sorry? | 14:23 |
strigazi | brtknr: can you show me your cluster template? | 14:23 |
sayalilunkad | strigazi: hi! have you seen this error before? http://paste.openstack.org/show/737213/ | 14:23 |
*** irclogbot_0 has joined #openstack-containers | 14:24 | |
brtknr | strigazi: http://paste.openstack.org/show/737220/ | 14:24 |
brtknr | strigazi: this one works fine | 14:24 |
sayalilunkad | strigazi: happens when creating a template in rocky. Seems to happen after the www_authenticate_uri patches. | 14:24 |
brtknr | http://paste.openstack.org/show/737221/ | 14:25 |
brtknr | the second one doesnt work | 14:25 |
brtknr | the only difference is the subnet | 14:25 |
strigazi | brtknr: sorry , I was talking with my office mate and I mixed what I was telling him with what I was typing | 14:25 |
strigazi | sayalilunkad: seems irrelevant to that patch, try to set this: | 14:26 |
strigazi | [drivers] | 14:26 |
strigazi | send_cluster_metrics = False | 14:26 |
sayalilunkad | ok checking | 14:27 |
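i.e. a sketch of the setting in magnum.conf on the conductor host; the config path and service name are assumptions that vary by deployment:

    # /etc/magnum/magnum.conf
    # [drivers]
    # send_cluster_metrics = False
    sudo systemctl restart magnum-conductor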
strigazi | brtknr: The subnets are different in the 2nd one, right? | 14:28 |
*** ttsiouts has joined #openstack-containers | 14:28 | |
brtknr | yeah | 14:31 |
strigazi | can you show me? I can try to repro | 14:31 |
brtknr | gateway | 14:31 |
brtknr | other 192.168.1.0/24 | 14:32 |
brtknr | gateway 10.0.0.0/24 | 14:32 |
brtknr | gateway is the network, other is the subnet that fails, gateway is the one that works | 14:32 |
sayalilunkad | strigazi: that is false by default.. | 14:33 |
strigazi | sayalilunkad: but it tries to use the k8s_monitor | 14:33 |
mkuf | brtknr: as far as I can see, both report success, you can have a look here cloud-init-output.log http://paste.openstack.org/show/737205/ and cloud-init.log https://pastebin.com/cyWM9f5P | 14:35 |
brtknr | strigazi: mkuf: context switch... hmm what does your heat stack say? | 14:39 |
brtknr | openstack stack resource list k8s-stack-name -n 4 | 14:40 |
*** hongbin has joined #openstack-containers | 14:42 | |
*** ykarel is now known as ykarel|away | 15:05 | |
*** itlinux has joined #openstack-containers | 15:07 | |
sayalilunkad | strigazi: is there any other config which needs to be disabled? | 15:10 |
*** ykarel|away has quit IRC | 15:15 | |
brtknr | strigazi: ignore my subnet issue. i cant pin down why it has suddenly started working. | 15:17 |
*** zufar has joined #openstack-containers | 15:44 | |
*** ttsiouts has quit IRC | 15:56 | |
*** ttsiouts has joined #openstack-containers | 15:57 | |
*** ykarel|away has joined #openstack-containers | 15:57 | |
zufar | Hi all, i am trying to run `openstack coe service list` but get an error. | 16:08 |
zufar | CRITICAL keystonemiddleware.auth_token [-] Unable to validate token: Identity server rejected authorization necessary to fetch token data: ServiceError: Identity server rejected authorization necessary to fetch token data | 16:08 |
zufar | I can access other services except magnum. | 16:08 |
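A hedged place to look, given the www_authenticate_uri changes discussed above; paths and values are assumptions for a typical deployment:

    # magnum-api's keystonemiddleware settings; both options should point at Keystone v3
    grep -E 'www_authenticate_uri|auth_url' /etc/magnum/magnum.conf
    # expected shape, per bug 1793813:
    # [keystone_authtoken]
    # www_authenticate_uri = http://<keystone-host>:5000/v3
    # auth_url = http://<keystone-host>:5000/v3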
*** PagliaccisCloud has joined #openstack-containers | 16:13 | |
*** ttsiouts has quit IRC | 16:16 | |
*** udesale has quit IRC | 16:41 | |
brtknr | strigazi: is there a way to only give a floating ip to the master atm? | 16:41 |
strigazi | brtknr: no :( | 16:42 |
brtknr | zufar: `openstack coe cluster list` doesnt seem to work for me either | 16:43 |
brtknr | i get `the JSON object must be str, not 'bytes'` | 16:43 |
brtknr | *openstack coe service list | 16:43 |
brtknr | strigazi: is that something worth having or does the master lb fulfil the same function? even when there is 1 node | 16:44 |
brtknr | ? | 16:44 |
*** shrasool has quit IRC | 16:45 | |
brtknr | also strigazi: do you mind putting a link to scheduled meetings on the channel topic, similar to a lot of other team topics? | 16:46 |
strigazi | brtknr: I don't have the power to do it, I can ask infra. Would a link to docs.openstack.org/magnum/latest/index.html help? | 16:50 |
brtknr | I was thinking of this page: https://wiki.openstack.org/wiki/Meetings/Containers | 16:50 |
strigazi | brtknr: master lb does the same, but in clouds without octavia or neutron lb, or with just one master, fips only on the master(s) make sense. | 16:51 |
strigazi | brtknr: yes point to https://wiki.openstack.org/wiki/Meetings/Containers from https://docs.openstack.org/magnum/latest/index.html | 16:51 |
brtknr | strigazi: ah gotcha! yes, better than not having it :) | 16:52 |
*** robertomls has quit IRC | 16:59 | |
*** jonaspaulo_ has quit IRC | 17:05 | |
*** ramishra has quit IRC | 17:06 | |
mnaser | strigazi: have you looked much into cluster api? | 17:11 |
strigazi | mnaser: I was just looking | 17:12 |
*** TodayAndTomorrow has quit IRC | 17:12 | |
strigazi | mnaser: some things are better than heat, managing nodes with rolling updates | 17:13 |
mnaser | strigazi: i think also it allows us to avoid a lot of things that we dont have/want to deal with | 17:13 |
mnaser | scaling of nodes, updates, etc.. im thinking maybe this can be implemented as a magnum driver | 17:13 |
strigazi | mnaser: for others, like SoftwareDeployments, the creation of networks and LBs, and the ordering of actions, heat is better | 17:14 |
mnaser | strigazi: yeah, i think it uses kubeadm to deploy things? | 17:15 |
strigazi | mnaser: for the driver, yes, I was just discussing this with colleagues in my office :) | 17:15 |
mnaser | aha, cool | 17:15 |
strigazi | mnaser: we could use kubeadm with the clusterapi, we don't have to | 17:15 |
mnaser | strigazi: thinking out loud, i wonder why we dont use kubeadm to deploy things | 17:16 |
strigazi | mnaser: the only part I didn't like in the cluster api is: heavy dependency on user_data | 17:16 |
mnaser | yeah we're trying to move away from that.. that never ends well :p | 17:17 |
strigazi | mnaser: we don't use kubeadm already because we started first without it and stuck to what we know and have experience with | 17:17 |
mnaser | strigazi: gotcha, i think hacking on seeing what things can look like with kubeadm might be interesting | 17:17 |
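For the record, a minimal sketch of what a kubeadm-based bootstrap could look like; versions, CIDR and addresses are assumptions, not anything Magnum does today:

    # on the master
    sudo kubeadm init --kubernetes-version v1.13.0 --pod-network-cidr 10.100.0.0/16
    # on each worker, using the token and CA hash printed by init
    sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>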
strigazi | brtknr: mnaser: I have to commute home, I'll be back online later. mnaser: kubeadm 1.13.0 is very appealing | 17:19 |
strigazi | see you in a bit | 17:19 |
mnaser | strigazi: cool, take care :) have a good rest of your day | 17:19 |
brtknr | strigazi: cool, talk soon | 17:22 |
*** zul has quit IRC | 17:35 | |
*** ianychoi has quit IRC | 17:42 | |
*** salmankhan has quit IRC | 18:11 | |
openstackgerrit | Mark Goddard proposed openstack/magnum master: Query cluster's Heat stack ID in functional tests https://review.openstack.org/485130 | 18:12 |
*** ykarel|away has quit IRC | 18:30 | |
openstackgerrit | Feilong Wang proposed openstack/magnum master: Support Keystone AuthN and AuthZ for k8s https://review.openstack.org/561783 | 18:30 |
*** jmlowe has quit IRC | 18:47 | |
*** jmlowe has joined #openstack-containers | 19:12 | |
*** pcaruana has quit IRC | 19:47 | |
*** PagliaccisCloud has quit IRC | 19:52 | |
*** shrasool has joined #openstack-containers | 19:54 | |
*** munimeha1 has joined #openstack-containers | 20:28 | |
flwang | mkuf: what's your current issue? | 20:38 |
flwang | mnaser: i had a discussion with strigazi in Berlin, and we generally agreed to have a driver with kubeadm as v2 or something like that | 20:42 |
flwang | mnaser: and re cluster-api, i think that's good to take a look, but i also agree it's not really necessary to combine the usage of cluster-api and kubeadm | 20:45 |
mnaser | flwang: yeah I think kubeadm as a first step for v2 driver is good | 20:46 |
mnaser | It’ll reduce the workload of maintaining all the provisioning toolset | 20:47 |
flwang | mnaser: true, indeed | 20:47 |
flwang | mnaser: btw, thank you so much for those good patches to fix the function test | 20:48 |
flwang | really appreciate it | 20:48 |
mnaser | flwang: I only made small fixes, most of the work was already done :) I’m gonna add conformance tests to it soon | 20:48 |
flwang | another thing we can do in the future is enabling the sonobuoy as e2e test | 20:48 |
flwang | haha, same thought | 20:48 |
mnaser | Took the words out of me haha | 20:48 |
flwang | we're printing it in same time, man | 20:49 |
mnaser | I’m a | 20:49 |
flwang | (09:48:38) mnaser: (09:48:38) flwang: | 20:49 |
mnaser | Actually dedicating some time today on that | 20:49 |
flwang | you're cracking my mind | 20:49 |
flwang | that's fantastic | 20:50 |
flwang | now i'm working on the auto healing stuff | 20:50 |
flwang | with NPD/Draino/Autoscaler | 20:50 |
mnaser | Yep. The jobs might be a bit longer but we’ll be able to know it’s a proper cluster | 20:50 |
mnaser | About end to end stuff | 20:51 |
flwang | mnaser: exactly, we can make it a separate job | 20:51 |
flwang | and start with non-voting | 20:51 |
mnaser | Yup. Also I was thinking we need to move the system container image build pipeline to upstream | 20:52 |
flwang | mnaser: we do have patch for that, wait a sec | 20:52 |
flwang | https://review.openstack.org/#/c/585420/ | 20:52 |
mnaser | That way we always build those system containers easily (or if we move towards kubeadm we eliminate it entirely) | 20:53 |
flwang | i think that one is ready to go, i'm just waiting for strigazi to remove the WIP | 20:53 |
*** lpetrut has joined #openstack-containers | 20:53 | |
flwang | with kubeadm, i think we probably need to base it on a new OS, like Ubuntu? | 20:53 |
mnaser | Fedora atomic apparently can deploy with kubeadm | 20:54 |
flwang | but it's going to be end of life ? | 20:54 |
mnaser | Honestly.. I wouldn’t mind deploying on top of Ubuntu .. I feel people gonna find it easier to maintain honestly | 20:54 |
flwang | so personally, i think it's probably a good time to rethink the OS part | 20:54 |
flwang | i think i see your point | 20:55 |
mnaser | I agree. I really think moving to something like Ubuntu makes a lot of sense. Even tools that run on top of clusters a lot test on Ubuntu | 20:55 |
mnaser | Atomic is kinda the thing on its own | 20:55 |
mnaser | I feel the more we align ourselves with the ecosystem the easier it’ll be for us | 20:56 |
flwang | mnaser: same here | 20:57 |
mnaser | Now if only we just shared time zones :p | 20:57 |
flwang | what's your tz now? | 20:57 |
mnaser | I’m pacific right now but home is eastern | 20:58 |
*** ianychoi has joined #openstack-containers | 20:59 | |
flwang | cool | 20:59 |
flwang | as you may know, i'm based in NZ | 20:59 |
flwang | so now it's 10AM | 20:59 |
flwang | and the summer Xmas is coming | 21:00 |
*** jmlowe has quit IRC | 21:01 | |
mnaser | flwang: yeah, so catalyst is in nz, cern is in eu, and we're in na | 21:03 |
mnaser | :P | 21:03 |
flwang | na or ca? | 21:08 |
flwang | i remembered your guys are in Montreal | 21:08 |
mnaser | flwang: canada :) | 21:10 |
mnaser | flwang: btw i think also k8s is great on ubuntu because we also can start relying on easily getting multiarch stuff going | 21:10 |
flwang | mnaser: yep, so let's start the idea with kubeadm on ubuntu | 21:11 |
mnaser | flwang: i think i'll work on a heat stack using softwaredeploymentgroup/etc outside magnum just to see it work as a heat stack | 21:12 |
mnaser | and then look into integrating it | 21:12 |
flwang | mnaser: i think we can still use heat for orchestrating the general infra, and then use kubeadm to bootstrap | 21:13 |
flwang | to bootstrap k8s | 21:13 |
mnaser | flwang: yup, its just easier for me to develop against our cloud with a stack instead of having the extra stuff magnum has :P | 21:14 |
flwang | mnaser: make sense | 21:15 |
lxkong | mnaser, flwang, sorry for chiming in, but mnaser, do you mind me changing something in this patch https://review.openstack.org/#/c/623724/? | 21:18 |
lxkong | mnaser: i have a working version after some testing | 21:19 |
mnaser | lxkong: no please go for it :D | 21:19 |
mnaser | thats exciting | 21:19 |
lxkong | the cluster creation time decreased from 24min to 17min in my devstack env | 21:19 |
lxkong | maybe it's not the final version, but we could discuss based on that | 21:20 |
*** jmlowe has joined #openstack-containers | 21:22 | |
*** jmlowe has quit IRC | 21:26 | |
*** tobias-urdin has quit IRC | 21:32 | |
*** jmlowe has joined #openstack-containers | 21:50 | |
*** lpetrut has quit IRC | 22:00 | |
*** salmankhan has joined #openstack-containers | 22:08 | |
*** shrasool has quit IRC | 22:18 | |
*** salmankhan has quit IRC | 22:21 | |
*** rcernin has joined #openstack-containers | 22:21 | |
*** PagliaccisCloud has joined #openstack-containers | 22:31 | |
*** itlinux has quit IRC | 22:36 | |
*** lbragstad has quit IRC | 22:37 | |
mnaser | lxkong: it makes sense, most clusters are like 10 minute provision so it will be half of it | 23:30 |