*** hongbin has quit IRC | 00:24 | |
*** jento has joined #openstack-containers | 00:35 | |
*** PagliaccisCloud has quit IRC | 00:38 | |
*** ramishra has quit IRC | 03:31 | |
*** ykarel|away has joined #openstack-containers | 03:34 | |
*** ykarel|away has quit IRC | 03:50 | |
*** ykarel has joined #openstack-containers | 03:52 | |
*** ykarel has quit IRC | 04:07 | |
*** ykarel has joined #openstack-containers | 04:13 | |
*** Bhujay has joined #openstack-containers | 04:52 | |
*** Bhujay has quit IRC | 04:53 | |
*** Bhujay has joined #openstack-containers | 04:53 | |
*** Bhujay has quit IRC | 04:54 | |
*** Bhujay has joined #openstack-containers | 04:55 | |
*** Bhujay has quit IRC | 04:56 | |
*** Bhujay has joined #openstack-containers | 04:56 | |
*** ykarel has quit IRC | 05:08 | |
*** lpetrut has joined #openstack-containers | 05:38 | |
*** ramishra has joined #openstack-containers | 05:39 | |
*** PagliaccisCloud has joined #openstack-containers | 05:48 | |
*** ykarel has joined #openstack-containers | 06:02 | |
*** PagliaccisCloud has quit IRC | 06:12 | |
*** ykarel has quit IRC | 06:13 | |
*** lpetrut has quit IRC | 06:20 | |
*** ykarel has joined #openstack-containers | 06:30 | |
*** ykarel has quit IRC | 06:34 | |
*** Bhujay has quit IRC | 06:41 | |
*** lpetrut has joined #openstack-containers | 07:00 | |
*** lpetrut has quit IRC | 07:04 | |
*** ramishra has quit IRC | 07:17 | |
openstackgerrit | melissaml proposed openstack/magnum-ui master: Change openstack-dev to openstack-discuss https://review.openstack.org/625383 | 07:31 |
*** pcaruana has joined #openstack-containers | 08:00 | |
*** Bhujay has joined #openstack-containers | 09:00 | |
*** ramishra has joined #openstack-containers | 09:26 | |
*** ivve has joined #openstack-containers | 10:14 | |
*** ivve has quit IRC | 10:43 | |
*** Bhujay has quit IRC | 11:01 | |
openstackgerrit | Lingxian Kong proposed openstack/magnum master: [k8s] Cluster creation speedup https://review.openstack.org/623724 | 12:00 |
*** PagliaccisCloud has joined #openstack-containers | 12:02 | |
*** Bhujay has joined #openstack-containers | 12:39 | |
*** ykarel has joined #openstack-containers | 12:41 | |
*** ykarel has quit IRC | 13:06 | |
*** PagliaccisCloud has quit IRC | 13:40 | |
*** ivve has joined #openstack-containers | 13:46 | |
*** PagliaccisCloud has joined #openstack-containers | 13:48 | |
*** Bhujay has quit IRC | 13:59 | |
*** Bhujay has joined #openstack-containers | 13:59 | |
*** Bhujay has quit IRC | 14:00 | |
*** Bhujay has joined #openstack-containers | 14:01 | |
*** Bhujay has quit IRC | 14:02 | |
*** Bhujay has joined #openstack-containers | 14:02 | |
*** Bhujay has quit IRC | 14:03 | |
*** Bhujay has joined #openstack-containers | 14:04 | |
*** Bhujay has quit IRC | 14:05 | |
*** Bhujay has joined #openstack-containers | 14:05 | |
*** ykarel has joined #openstack-containers | 15:00 | |
*** zufar has joined #openstack-containers | 15:12 | |
*** Bhujay has quit IRC | 15:13 | |
*** ignaziocassano1 has joined #openstack-containers | 15:17 | |
ignaziocassano1 | hello everybody | 15:17 |
ignaziocassano1 | Has anyone successfully used magnum on queens? | 15:17 |
zufar | Hi all, I am creating a kubernetes cluster now, but it gets stuck on kube_masters in the heat process. http://paste.opensuse.org/view//85589448 | 15:18 |
zufar | I tried to ssh to the k8s master node and run troubleshooting commands, but got nothing. http://paste.opensuse.org/view//12388190 | 15:19 |
zufar | anyone know what is happening? | 15:19 |
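A minimal way to locate the stuck resource from a controller node, assuming the standard openstack CLI is available; the cluster name below is a placeholder for zufar's actual cluster:

    # find the Heat stack that Magnum created for the cluster
    openstack coe cluster show my-k8s-cluster -c stack_id
    # list nested resources and their status to see where creation hangs
    openstack stack resource list --nested-depth 2 <stack_id>
    # show the reason reported for the suspect resource
    openstack stack resource show <stack_id> kube_masters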
ignaziocassano1 | OK, you are not alone | 15:20 |
ignaziocassano1 | are you using queens ? | 15:20 |
zufar | ignaziocassano1: I am. The magnum service is running, but cluster creation fails | 15:20 |
zufar | it always gets stuck on the `kube_masters` heat resource, for both k8s and swarm clusters. | 15:20 |
ignaziocassano1 | I successfully run a swarm mode cluster | 15:21 |
zufar | yes. I am using queens. | 15:21 |
ignaziocassano1 | kubernetes does not work | 15:21 |
ignaziocassano1 | It cannot create the docker volume due to insufficient space | 15:21 |
ignaziocassano1 | I create the template with the docker volume size but heat seems to fail to pass this value | 15:22 |
ignaziocassano1 | what appears in /var/log/cloud-init.log on your kubernetes master node? | 15:23 |
zufar | oh yes, by the way my openstack is not running cinder for volumes. Is it possible to run magnum without it? | 15:23 |
zufar | wait, I will check. | 15:23 |
ignaziocassano1 | Zufar, we talked via email yesterday | 15:24 |
ignaziocassano1 | I am ignaziocassano@gmail.com | 15:24 |
ignaziocassano1 | I am not sure if you can use magnum without cinder | 15:25 |
ignaziocassano1 | Theoretically yes | 15:25 |
ignaziocassano1 | But the only cluster I created in swarm mode uses cinder to attach a volume | 15:25 |
zufar | it curls http://127.0.0.1:8080/healthz and gets no response. | 15:26 |
ignaziocassano1 | OK | 15:26 |
ignaziocassano1 | this is the last message | 15:26 |
ignaziocassano1 | go back | 15:27 |
ignaziocassano1 | you should see a message like insufficient space | 15:27 |
ignaziocassano1 | the file should be cloud-init-output.log or a similar name..... I am not connected to my server farm now | 15:28 |
zufar | I tried grepping: ERROR: There is not enough free space in volume group atomicos to create data volume of size MIN_DATA_SIZE=2G. | 15:28 |
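A quick way to confirm the volume group exhaustion on the Fedora Atomic master node; these are generic LVM inspection commands rather than anything Magnum-specific, and the log path is the one discussed above:

    # show the atomicos volume group and how much free space it has left
    sudo vgs atomicos
    # show the existing logical volumes that are consuming the group
    sudo lvs
    # re-check the docker-storage-setup error in the cloud-init output
    sudo grep -i "not enough free space" /var/log/cloud-init-output.log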
ignaziocassano1 | OK | 15:28 |
ignaziocassano1 | this is the issue | 15:28 |
ignaziocassano1 | the same I am facing | 15:28 |
ignaziocassano1 | if we do not solve it, the cluster will never run | 15:29 |
ignaziocassano1 | I think this is a bug | 15:29 |
ignaziocassano1 | another person wrote about it | 15:29 |
ignaziocassano1 | see https://ask.openstack.org/en/question/116465/magnum-kubernetes-cluster-stucks-in-create-in-progress-state-exactly-on-kube-master/ | 15:30 |
ignaziocassano1 | There is no answer yet | 15:32 |
ignaziocassano1 | Do you think your problem is the same? | 15:32 |
ignaziocassano1 | OK | 15:33 |
ignaziocassano1 | on your master node, look for a file under /etc/sysconfig | 15:34 |
ignaziocassano1 | its name should be heat-params | 15:34 |
zufar | ignaziocassano1: yes, I think so. | 15:34 |
zufar | I have free space in the kubernetes instance | 15:34 |
ignaziocassano1 | in that file the docker volume size variable is always 0, even if you specify a value in the cluster template | 15:35 |
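A small sketch for verifying this on a node, using the heat-params path mentioned above; the grep pattern and the template name are illustrative, not confirmed variable names:

    # inspect the docker-related values that Heat injected into the master node
    sudo grep -i docker /etc/sysconfig/heat-params
    # compare with what the cluster template actually requested
    openstack coe cluster template show my-template -c docker_volume_size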
ignaziocassano1 | Yes | 15:35 |
zufar | when provisioning swarm, I get the same error, stuck on swarm_masters | 15:36 |
ignaziocassano1 | you have free space, but it needs space to allocate a logical volume in the volume group, and the volume group is full because another logical volume allocates all the space | 15:36 |
ignaziocassano1 | Try with swarm-mode instead of swarm | 15:36 |
ignaziocassano1 | it should work | 15:37 |
ignaziocassano1 | I think heat templates generated by magnum under queens are not correctly written | 15:37 |
zufar | ignaziocassano1: how do I do that (try with swarm-mode)? I am new to swarm | 15:39 |
ignaziocassano1 | We can write to the mailing list together..... I tried but got no help. Many people suggest using rocky | 15:39 |
ignaziocassano1 | I am new to swarm too | 15:40 |
ignaziocassano1 | How do you write your template? | 15:40 |
zufar | like the example from the documentation | 15:40 |
ignaziocassano1 | ok | 15:40 |
zufar | openstack coe cluster template create swarm-cluster-template --image fedora-atomic-latest --external-network external --dns-nameserver 8.8.8.8 --master-flavor amphora --flavor amphora --coe swarm | 15:40 |
ignaziocassano1 | instead of swarm coe type, write swarm-mode | 15:41 |
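Based on the template command zufar pasted above, the suggestion amounts to swapping the COE and, when Cinder is available, requesting an explicit docker volume; a sketch only, reusing zufar's image/flavor names, with the volume size chosen arbitrarily:

    openstack coe cluster template create swarm-mode-cluster-template \
      --image fedora-atomic-latest \
      --external-network external \
      --dns-nameserver 8.8.8.8 \
      --master-flavor amphora \
      --flavor amphora \
      --docker-volume-size 10 \
      --coe swarm-mode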
zufar | okay, I will try. | 15:41 |
ignaziocassano1 | sorry for my bad English | 15:41 |
zufar | no problem. | 15:42 |
zufar | anyone else have a problem like ours? | 15:42 |
ignaziocassano1 | If you write an email to the mailing list about kubernetes, I will confirm the same problem. Probably if someone sees two people with the same problem..... | 15:43 |
ignaziocassano1 | Yes, see the link I sent | 15:43 |
ignaziocassano1 | https://ask.openstack.org/en/question/116465/magnum-kubernetes-cluster-stucks-in-create-in-progress-state-exactly-on-kube-master/ | 15:43 |
ignaziocassano1 | ERROR: There is not enough free space in volume group atomicos to create data volume of size MIN_DATA_SIZE=2G. | 15:44 |
zufar | yes, maybe on this mailing list | 15:44 |
ignaziocassano1 | the same one we have | 15:44 |
zufar | have you ever tried to scale up the flavor? | 15:44 |
ignaziocassano1 | oh yes | 15:44 |
ignaziocassano1 | I tried | 15:44 |
ignaziocassano1 | but when the master instance starts, the entire disk is allocated with a logical volume | 15:45 |
ignaziocassano1 | In swarm-mode a strange thing happens: a new volume is created, the swarm logical volume is allocated on the new disk, and it works | 15:46 |
ignaziocassano1 | But it is a cinder volume :-( | 15:46 |
ignaziocassano1 | I think you need cinder in any case | 15:47 |
zufar | I will try swarm-mode first. I have not created a cinder playbook for multi-controller openstack. | 15:49 |
ignaziocassano1 | I need cinder because I use a lot of instances and I need storage persistence | 15:51 |
ignaziocassano1 | I need to backup volumes | 15:52 |
ignaziocassano1 | Zufar, sometimes when you create a magnum cluster, does heat give you deadlock problems? | 15:54 |
zufar | I haven't checked the heat logs. | 15:55 |
zufar | ignaziocassano1: have you checked the volume limits for your openstack user? | 15:55 |
zufar | I get the same thing when bootstrapping swarm with swarm-mode. | 15:55 |
zufar | ERROR: There is not enough free space in volume group atomicos to create data volume of size MIN_DATA_SIZE=2G. | 15:55 |
ignaziocassano1 | Yes, but I increased them | 15:55 |
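For reference, the per-project volume limits being discussed can be checked with the regular CLI; a sketch, with the project name as a placeholder:

    # per-project quota, including volume count and total gigabytes
    openstack quota show my-project
    # absolute limits and current usage as seen by the logged-in user
    openstack limits show --absolute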
zufar | I will try magnum with packstack; maybe I'll get a different result. | 15:56 |
ignaziocassano1 | OK, this happens because you do not have cinder. I have cinder, and when I create a swarm-mode cluster the master instance gets a volume attached... I think yours does not, right? | 15:57 |
zufar | yes I think. | 15:58 |
ignaziocassano1 | And this is very strange, because theoretically swarm, like kubernetes, does not need cinder | 15:58 |
ignaziocassano1 | I would like to try under the rocky release, but my openstack is in production and I cannot risk stopping my customers' jobs | 16:00 |
zufar | I am in my laboratory. Wait, I have installed openstack with packstack. | 16:00 |
ignaziocassano1 | OK | 16:00 |
ignaziocassano1 | If you can, try with rocky..... | 16:01 |
ignaziocassano1 | Could you? | 16:01 |
ignaziocassano1 | keep in touch, please. If you have news my email is ignaziocassano@gmail.com | 16:03 |
ignaziocassano1 | Another thing: in the documentation they wrote that you must contact swarm on port 2376, but it is 2375 | 16:06 |
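A rough sketch of the distinction: 2375 is the conventional plain-TCP Docker port and 2376 the TLS one, so which endpoint answers depends on whether the cluster was created with TLS disabled; the master IP, certificate directory, and certificate file names below are assumptions:

    # plain TCP endpoint (clusters created with TLS disabled)
    docker -H tcp://<master-floating-ip>:2375 info
    # TLS endpoint, using certificates fetched from Magnum
    openstack coe cluster config my-swarm-cluster --dir ~/swarm-certs
    docker -H tcp://<master-floating-ip>:2376 --tlsverify \
      --tlscacert ~/swarm-certs/ca.pem \
      --tlscert ~/swarm-certs/cert.pem \
      --tlskey ~/swarm-certs/key.pem info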
*** ramishra has quit IRC | 16:09 | |
zufar | ignaziocassano1: okay | 16:15 |
ignaziocassano1 | I added my comment in https://ask.openstack.org/en/question/116465/magnum-kubernetes-cluster-stucks-in-create-in-progress-state-exactly-on-kube-master/ | 16:21 |
ignaziocassano1 | If you want to add yours, it is welcome; it might help us get answers | 16:21 |
*** ignaziocassano1 has quit IRC | 16:27 | |
*** ignaziocassano1 has joined #openstack-containers | 16:28 | |
*** ykarel has quit IRC | 16:34 | |
*** PagliaccisCloud has quit IRC | 16:47 | |
zufar | Hi, I am creating a swarm cluster, but it's stuck on the swarm_primary_master heat resource with CREATE_IN_PROGRESS. I tried to log in to the swarm VM and look at the log. | 17:57 |
zufar | requests.exceptions.ConnectionError: HTTPConnectionPool(host='10.60.60.10', port=5000): Max retries exceeded with url: /v3/auth/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02f57321d0>: Failed to establish a new connection: [Errno 110] Connection timed out',)) | 17:57 |
zufar | The swarm instance tries to open a connection to 10.60.60.10, which is my management IP address on OpenStack. But it cannot (by design it cannot; I tried to curl 10.60.60.10 manually and got an error). When I curl 10.61.61.10, which is my floating IP and the external IP for the OpenStack cluster, it works. | 17:57 |
zufar | Anyone know how to change the cloud-init so it curls 10.61.61.10 instead? | 17:57 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Do not log the login command https://review.openstack.org/625351 | 18:02 |
*** PagliaccisCloud has joined #openstack-containers | 18:18 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Don not log the login command https://review.openstack.org/625351 | 18:20 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 18:42 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 19:01 |
*** ricolin has joined #openstack-containers | 19:09 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 19:16 |
*** ricolin has quit IRC | 19:23 | |
*** lbragstad has joined #openstack-containers | 19:33 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 19:38 |
*** hongbin has joined #openstack-containers | 19:55 | |
yankcrime | zufar: sounds like it's trying to notify heat (the orchestration service) that it's finished a particular step | 19:59 |
yankcrime | zufar: you probably want to configure heat's heat_waitcondition_server_url to be your internally facing loadbalancer or IP or whatever | 19:59 |
yankcrime | or configure your network so that openstack clients can hit that other IP | 20:00 |
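A sketch of yankcrime's suggestion, assuming an RDO-style layout where the option lives in /etc/heat/heat.conf and the services carry the openstack-heat-* names; 10.61.61.10 is the externally reachable address zufar mentioned, and the port/path are the usual heat-api-cfn wait-condition endpoint:

    # point wait conditions at an address the cluster VMs can actually reach
    sudo crudini --set /etc/heat/heat.conf DEFAULT \
      heat_waitcondition_server_url http://10.61.61.10:8000/v1/waitcondition
    # restart Heat so newly created stacks pick up the new URL
    sudo systemctl restart openstack-heat-api-cfn openstack-heat-engine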
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 20:07 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 20:21 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 20:42 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 20:48 |
openstackgerrit | Merged openstack/magnum stable/rocky: Cleaned up devstack logging https://review.openstack.org/617026 | 20:48 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 20:55 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 21:08 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 21:29 |
*** lbragstad has quit IRC | 21:58 | |
openstackgerrit | Merged openstack/magnum master: Changes in container builder https://review.openstack.org/625351 | 23:06 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: Fix use of magnum_repository in container-publish https://review.openstack.org/625408 | 23:22 |
openstackgerrit | Merged openstack/magnum stable/rocky: functional: use vexxhost-specific nodes with nested virt https://review.openstack.org/624729 | 23:27 |
openstackgerrit | Merged openstack/magnum stable/rocky: functional: add body for delete_namespaced_service in k8s https://review.openstack.org/625170 | 23:33 |
openstackgerrit | Merged openstack/magnum stable/rocky: functional: use default admission_control_list values https://review.openstack.org/625171 | 23:33 |