openstackgerrit | Jake Yip proposed openstack/magnum-ui master: fix text for master flavor / node flavor https://review.openstack.org/613821 | 01:16 |
*** jaewook_oh has joined #openstack-containers | 01:55 | |
*** jaewook_oh has quit IRC | 02:10 | |
*** hongbin has joined #openstack-containers | 02:24 | |
openstackgerrit | Feilong Wang proposed openstack/magnum master: Add server group for cluster worker nodes https://review.openstack.org/613825 | 02:40 |
openstackgerrit | Feilong Wang proposed openstack/magnum master: Add server group for cluster worker nodes https://review.openstack.org/613825 | 02:42 |
*** Nel1x has joined #openstack-containers | 02:45 | |
*** jaewook_oh has joined #openstack-containers | 02:59 | |
*** Nel1x has quit IRC | 03:09 | |
openstackgerrit | Merged openstack/magnum-ui master: fix text for master flavor / node flavor https://review.openstack.org/613821 | 03:09 |
*** Nel1x has joined #openstack-containers | 03:09 | |
openstackgerrit | Merged openstack/magnum stable/queens: Fix the heat-container-agent docker image https://review.openstack.org/613441 | 03:29 |
*** ramishra has joined #openstack-containers | 03:32 | |
*** udesale has joined #openstack-containers | 03:54 | |
*** janki has joined #openstack-containers | 04:34 | |
*** Nel1x has quit IRC | 04:47 | |
*** hongbin has quit IRC | 04:52 | |
*** Bhujay has joined #openstack-containers | 05:46 | |
*** ykarel has joined #openstack-containers | 06:03 | |
*** Bhujay has quit IRC | 06:15 | |
*** ramishra has quit IRC | 06:18 | |
*** ramishra has joined #openstack-containers | 06:32 | |
*** ykarel is now known as ykarel|lunch | 07:34 | |
*** pcaruana has joined #openstack-containers | 07:46 | |
*** lpetrut has joined #openstack-containers | 07:49 | |
*** pvradu has joined #openstack-containers | 08:04 | |
*** pvradu has quit IRC | 08:06 | |
*** pvradu has joined #openstack-containers | 08:06 | |
openstackgerrit | inspurericzhang proposed openstack/magnum-tempest-plugin master: [Trivial Fix] update home-page url https://review.openstack.org/613859 | 08:28 |
*** ykarel|lunch is now known as ykarel | 08:40 | |
*** mattgo has joined #openstack-containers | 08:53 | |
*** flwang1 has joined #openstack-containers | 09:29 | |
flwang1 | strigazi: around for a sync? | 09:29 |
strigazi | flwang1: yes, can you also have a quick look to https://review.openstack.org/#/c/612727/4 | 09:45 |
flwang1 | strigazi: sure, looking now | 09:47 |
flwang1 | strigazi: it looks good to me | 09:49
flwang1 | strigazi: do you want to address ricardo's comment? | 09:49 |
flwang1 | i think we have discussed that before | 09:49 |
strigazi | flwang1: I replied. The only version atm is magnum's version. When we have a magnum tag we can put the version and commit. | 09:50
flwang1 | comparing with a numeric version, personally, i prefer using 'release-dev/stable', mainly because a numeric version is hard to remember and maintain | 09:50
strigazi | flwang1: it is definitely for another patch. | 09:50
flwang1 | strigazi: ok | 09:51 |
flwang1 | strigazi: i have a question about this https://gitlab.cern.ch/cloud/atomic-system-containers/blob/cern-qa/.gitlab-ci.yml#L52 | 09:51 |
flwang1 | why do you need a special image here? | 09:52 |
strigazi | there is nothing special about this image | 09:52 |
strigazi | but you need some image | 09:52 |
strigazi | that has the docker client | 09:52 |
strigazi | only the client | 09:52 |
strigazi | flwang1: makes sense? | 09:53 |
flwang1 | so technically, i just need an image including docker client? | 09:53 |
strigazi | amm, you don't need any image. If you just copy the script section to a bash script you can run it anywhere | 09:54 |
strigazi | eg in your laptop | 09:55 |
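A minimal sketch of that script section pulled out into a standalone bash script, assuming only the docker CLI is installed; the registry path, component name, and Dockerfile layout below are placeholders, not the exact contents of the CERN job:

```shell
#!/usr/bin/env bash
# Build one system-container image and push it, tagged with the current commit id.
# Registry and component names are placeholders.
set -euo pipefail

REGISTRY="gitlab-registry.cern.ch/cloud/atomic-system-containers"
COMPONENT="kubernetes-apiserver"          # directory containing the Dockerfile
TAG="$(git rev-parse --short HEAD)"       # the CI job uses the commit id as the image tag

docker build -t "${REGISTRY}/${COMPONENT}:${TAG}" "${COMPONENT}/"
docker push "${REGISTRY}/${COMPONENT}:${TAG}"
```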
flwang1 | strigazi: ok, cool. what's this line for https://gitlab.cern.ch/cloud/atomic-system-containers/blob/cern-qa/.gitlab-ci.yml#L54 ? | 09:55 |
flwang1 | it downloads the kubelet binary, but i can't see where it's used | 09:55
strigazi | the apiserver system container needs to expose kubectl to the host. in the containers from gcr.io the apiserver container doesn't contain kubectl | 09:56 |
*** salmankhan has joined #openstack-containers | 09:57 | |
flwang1 | strigazi: ok, good to know | 09:58 |
flwang1 | strigazi: i can see from line 70-109, it's trying to push the image to gitlab-registry.cern.ch, so if i just want to build the image and reuse the gitlab registry, i don't need the job from line 70, right? | 10:01 |
strigazi | the second job is to tag the image with the version. The first job tags and pushes with the commit id as the image tag | 10:02
strigazi | so yes to your question. | 10:02 |
flwang1 | strigazi: cool | 10:03 |
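The re-tag step of the second job then amounts to roughly the following, assuming the commit-id image was already pushed by the first job; image name and version are placeholders:

```shell
# Pull the image pushed under the commit id, re-tag it with the release version, push again.
IMAGE="gitlab-registry.cern.ch/cloud/atomic-system-containers/kubernetes-apiserver"
COMMIT_TAG="$(git rev-parse --short HEAD)"
VERSION_TAG="v1.11.5-1"   # placeholder release version

docker pull "${IMAGE}:${COMMIT_TAG}"
docker tag  "${IMAGE}:${COMMIT_TAG}" "${IMAGE}:${VERSION_TAG}"
docker push "${IMAGE}:${VERSION_TAG}"
```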
strigazi | flwang1: locally I'm using smth like this https://paste.fedoraproject.org/paste/qRjftn4U6TIHFWnUV1vnkw/raw | 10:03 |
flwang1 | strigazi: haha, that's the magic script i wanna ask | 10:04 |
flwang1 | because each time i asked for new images, you could build them in minutes | 10:04
strigazi | flwang1: it took me some time to figure out that you needed this. | 10:04
flwang1 | so i believe there is some magic | 10:04
flwang1 | strigazi: sorry for the confusion, actually, i need both | 10:05
flwang1 | we do need a formal pipeline to build those images, and I also want to know the handy, informal way | 10:06 |
strigazi | flwang1: we just need to finish this https://review.openstack.org/#/c/585420/, to add the publish job | 10:07 |
flwang1 | strigazi: anything else we need? | 10:08 |
flwang1 | how can i know if it's working or not from the jobs of jenkins? | 10:08 |
strigazi | I'm pushing the enable_tiller label and we need some input for the nodegroups spec. | 10:09 |
strigazi | "how can i know if it's working or not from the jobs of jenkins?" Not sure what you mean | 10:09
flwang1 | for this one https://review.openstack.org/#/c/585420/ | 10:13 |
flwang1 | i think it will automatically build k8s images and push to openstackmagnum repo, no? | 10:14 |
strigazi | on push it will test if the containers are getting built | 10:14 |
strigazi | on merge it will push to docker.io | 10:14 |
flwang1 | but at this moment, how can i know if the patch is working? | 10:16 |
flwang1 | how can i check the jobs' log to make sure that? | 10:16 |
strigazi | they were here: http://logs.openstack.org/20/585420/16/check/magnum-container-build-base/8fd5cf8/ | 10:17
strigazi | if the build is done the playbook will be green | 10:18
strigazi | one sec | 10:18 |
flwang1 | not urgent | 10:25
flwang1 | so we still need to merge the tiller first? i was confused | 10:25 |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: WIP: build images in the ci https://review.openstack.org/585420 | 10:28 |
strigazi | flwang1: we can see now the result of the build ^^ | 10:29 |
flwang1 | strigazi: cool, if it passed, can we say this patch is ready to go? | 10:29 |
flwang1 | i mean ready for merging? | 10:29 |
strigazi | flwang1: I'll revert this https://review.openstack.org/#/c/585420/17/.zuul.yaml I put it there to not run things that we don't need. | 10:30 |
strigazi | flwang1: I just need one more playbook to publish | 10:30
flwang1 | ok, just add a comment and @ me if it's ready to go | 10:31 |
strigazi | tiller and nodegroups are separate tasks but important :) | 10:31 |
flwang1 | no problem | 10:31 |
flwang1 | but i probably don't have time until summit | 10:31 |
flwang1 | i will leave on Friday | 10:31 |
strigazi | OK | 10:31 |
flwang1 | do you still have a moment for a feature request? | 10:34 |
strigazi | sure | 10:34 |
flwang1 | do you remember the overlay2 + docker_volume_size issue? | 10:34 |
flwang1 | now users have to let docker share the root disk of the node | 10:35
strigazi | ok | 10:35 |
flwang1 | but if the root disk of the flavor is small, then the user may run out of disk soon | 10:36
flwang1 | because in Magnum, we're booting the VM from the image and don't create a new volume | 10:36
flwang1 | so the feature request is: could we create a new volume when booting a node and set the size based on a label passed by the user | 10:37
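From the user side the request would look something like the sketch below; the boot_volume_size label name is hypothetical here, not an agreed interface:

```shell
# Hypothetical sketch of the proposed opt-in feature: ask Magnum to boot each node
# from a newly created volume of a given size. The label name is an assumption.
openstack coe cluster create my-cluster \
  --cluster-template k8s-atomic \
  --node-count 3 \
  --labels boot_volume_size=60
```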
strigazi | flwang1: you mean a new lvm in the vm? | 10:39 |
flwang1 | in nova, when a user boots a server | 10:39
flwang1 | on the dashboard, you can select boot from image, volume, volume snapshot, etc | 10:40
flwang1 | when selecting boot from image, the user can choose whether to create a new volume or not | 10:40
flwang1 | if the user selects boot from image with a new volume, then nova will first create a new volume, copy the image to that volume, and set the volume size based on user input | 10:41
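At the Nova CLI level that flow is a block-device mapping, roughly like the sketch below; the flavor, image, and network values are placeholders:

```shell
# Sketch: boot a server from an image into a newly created 60GB volume,
# deleting the volume when the server is deleted.
nova boot my-k8s-minion \
  --flavor m1.small \
  --nic net-id=<network-uuid> \
  --block-device source=image,id=<fedora-atomic-image-uuid>,dest=volume,size=60,bootindex=0,shutdown=remove
```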
flwang1 | back to the problem | 10:42 |
flwang1 | if you already got a flavor with big root disk size, that's not a problem | 10:42 |
flwang1 | because nova can use that big size | 10:43 |
flwang1 | but for a public cloud, generally, the default root disk size in the flavor is 10GB | 10:43
flwang1 | like us, we have 10GB | 10:43
flwang1 | as the root disk size | 10:43 |
flwang1 | which is not enough for sharing between docker and the OS | 10:44 |
flwang1 | does that make sense to you? | 10:44
tobias-urdin | same here, 10GB :) strigazi there are about 10 patches pending release for rocky, could we release 7.0.2? seems like there is at least one fix still open https://review.openstack.org/#/q/project:openstack/magnum+branch:stable/rocky+is:open | 10:44
strigazi | https://review.openstack.org/#/c/612791/2 +2 | 10:46 |
strigazi | https://review.openstack.org/#/c/603603/ not needed | 10:46 |
strigazi | tobias-urdin: ^^ | 10:46 |
strigazi | flwang1 ok, so you need to have an external volume for docker storage, correct? | 10:47 |
flwang1 | strigazi: no | 10:47 |
flwang1 | no external volume because it won't work for overlay2 | 10:47 |
strigazi | flwang1: ok | 10:47 |
flwang1 | i just want to create a big root disk | 10:47 |
strigazi | flwang1: you want boot from volume? | 10:48 |
flwang1 | let me explain from another angle, what's the root disk size in cern's flavor? | 10:48
strigazi | flwang1: with overlay2 the root disk and the docker storage are taking the full 10GB | 10:48 |
flwang1 | strigazi: yep, but that's not enough i think | 10:49 |
flwang1 | 10GB is too small | 10:49 |
strigazi | flwang1: at CERN we give 2 cores 4 gb ram and 20gb as defaults for k8s | 10:49 |
flwang1 | and currently, there is no way to change it without changing flavor | 10:49 |
flwang1 | how can you create a node with 60GB disk? | 10:49 |
*** dave-mccowan has joined #openstack-containers | 10:51 | |
strigazi | if you need 60GB you get another flavor | 10:51 |
strigazi | openstack coe cluster create --flavor foo-with-60gb | 10:52 |
*** salmankhan has quit IRC | 10:52 | |
strigazi | flwang1: at least at CERN we manage this with flavors | 10:53 |
flwang1 | strigazi: no, not really | 10:53 |
flwang1 | all our flavors have the same root disk size | 10:53
flwang1 | tobias-urdin: ^ | 10:54 |
strigazi | flwang1 so what solution are you proposing? | 10:54
*** salmankhan has joined #openstack-containers | 10:57 | |
tobias-urdin | flwang1: we too have 10gb for all flavors since we are relying on volumes | 10:57 |
flwang1 | tobias-urdin: do you understand the problem we're talking about? | 10:58 |
flwang1 | strigazi: pls wait a sec, i'm finding some code | 10:58
flwang1 | to show you | 10:58 |
strigazi | flwang1: sure, I got the problem. I never thought that all flavors would have the same disk size. | 10:58 |
openstackgerrit | Merged openstack/magnum master: Add heat_container_agent_tag label https://review.openstack.org/612727 | 10:59 |
flwang1 | because you guys are a private cloud | 10:59
flwang1 | just like blizzard, they have 32GB root disk by default | 10:59 |
flwang1 | strigazi: https://github.com/openstack/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubeminion.yaml#L487 | 11:00 |
strigazi | it is a matter of choice I think. In aws I can ask for whatever disk size I want. | 11:00
strigazi | yes, this is the image. | 11:00
flwang1 | just need to remove the image and create a volume based on the image before booting the instance | 11:00
strigazi | ok, so boot from volume that I mentioned | 11:01 |
flwang1 | strigazi: ok, then yes | 11:01 |
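Done by hand with the CLI, what the template change boils down to is the two-step version below, i.e. create the volume from the image first and then boot from it; the image, size, and names are examples:

```shell
# Sketch: create a bootable volume from the glance image, then boot the node from that
# volume instead of passing the image to Nova directly.
openstack volume create --image fedora-atomic-27 --size 60 --bootable k8s-minion-root-0
openstack server create --flavor m1.small --volume k8s-minion-root-0 --network private k8s-minion-0
```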
tobias-urdin | i assume we are talking about the root disk being too small for something? we are relying on the extra "docker" volume for space | 11:01 |
flwang1 | tobias-urdin: and what's the docker_storage_driver you're using? | 11:01 |
* strigazi will be back in 3' | 11:01 | |
flwang1 | tobias-urdin: we have seen problems with overlay2/overlay + docker_volume_size > 0 | 11:02 |
tobias-urdin | we are using the devicemapper driver | 11:04 |
tobias-urdin | i ended up doing the same as blizzard, hiding cluster templates in horizon from users to simplify the usage | 11:04 |
tobias-urdin | still can use it though | 11:04 |
flwang1 | tobias-urdin: ok | 11:10 |
flwang1 | for devicemapper, it's ok | 11:11 |
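For reference, the combination tobias-urdin describes (devicemapper plus an extra cinder volume for docker storage) is set on the cluster template, roughly as below; names and sizes are examples:

```shell
# Sketch: with devicemapper the --docker-volume-size cinder volume backs /var/lib/docker,
# while with overlay2 the extra volume is not usable and docker shares the node's root disk.
openstack coe cluster template create k8s-devicemapper \
  --coe kubernetes \
  --image fedora-atomic-27 \
  --flavor m1.small --master-flavor m1.small \
  --external-network public \
  --docker-storage-driver devicemapper \
  --docker-volume-size 25
```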
strigazi | tobias-urdin: flwang1: anything else before I go for lunch? | 11:13 |
flwang1 | strigazi: if you don't object, i'd like to propose a patch | 11:14 |
strigazi | flwang1: for boot from volume? | 11:14 |
flwang1 | yes | 11:14 |
strigazi | flwang1: if it is an opt-in feature it is fine for me. The templates will become a bit complicated though. | 11:15
flwang1 | strigazi: ok, i can see your point, i will think about it | 11:16 |
flwang1 | it's not super urgent | 11:16 |
strigazi | flwang1: I just want to ask, are you sure the shared file system will cope with the load? | 11:16
flwang1 | our presentation is urgent ;) | 11:16 |
strigazi | flwang1: k8s will become slow | 11:16 |
*** spsurya has quit IRC | 11:16 | |
flwang1 | you mean ceph? | 11:16 |
strigazi | yes | 11:16 |
flwang1 | i don't know | 11:17 |
flwang1 | i need your experience and tobias-urdin's | 11:17 |
strigazi | tobias-urdin's case is a little different, it is not the full node in a volume | 11:17 |
strigazi | I'll start today with the presentation. I'll ping you | 11:18 |
strigazi | see you later. | 11:18 |
*** spsurya has joined #openstack-containers | 11:18 | |
flwang1 | coo | 11:18 |
flwang1 | cool | 11:18 |
flwang1 | ttyl | 11:18 |
*** ramishra has quit IRC | 11:24 | |
*** ramishra has joined #openstack-containers | 11:24 | |
*** udesale has quit IRC | 11:26 | |
*** jaewook_oh has quit IRC | 11:30 | |
*** janki has quit IRC | 11:32 | |
*** ramishra_ has joined #openstack-containers | 11:40 | |
*** ramishra has quit IRC | 11:43 | |
*** serlex has joined #openstack-containers | 11:48 | |
strigazi | flwang1: i'm back | 11:59 |
*** ppetit has joined #openstack-containers | 12:27 | |
*** jmlowe has quit IRC | 12:34 | |
*** udesale has joined #openstack-containers | 12:47 | |
*** zul has joined #openstack-containers | 12:48 | |
*** munimeha1 has joined #openstack-containers | 13:02 | |
*** jmlowe has joined #openstack-containers | 13:22 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Deploy tiller https://review.openstack.org/612336 | 13:45 |
*** hongbin has joined #openstack-containers | 14:09 | |
*** jmlowe has quit IRC | 14:16 | |
openstackgerrit | Merged openstack/magnum stable/rocky: Add prometheus-monitoring namespace https://review.openstack.org/612791 | 14:30 |
*** jmlowe has joined #openstack-containers | 14:37 | |
kaiokmo | hi everyone. I'm trying to create a k8s cluster using magnum (6.2.0, from stable/queens), but the creation does not seem to complete. | 14:41
kaiokmo | I can see, however, that the heat stack creation is completed. | 14:42 |
kaiokmo | the cluster status is stuck at "CREATE_IN_PROGRESS". nothing wrong in magnum-conductor or magnum-api logs. | 14:44 |
kaiokmo | any idea where I can start tracking down the issue? | 14:45
openstackgerrit | Merged openstack/magnum master: Trivial code cleanups https://review.openstack.org/601904 | 14:45 |
kaiokmo | btw, I'm using fedora-atomic 27 (2018_04_19) | 14:46 |
*** shrasool has joined #openstack-containers | 14:57 | |
*** pvradu has quit IRC | 15:06 | |
*** itlinux has quit IRC | 15:09 | |
*** jmlowe has quit IRC | 15:14 | |
*** mattgo has quit IRC | 15:19 | |
*** ykarel is now known as ykarel|away | 15:20 | |
*** jmlowe has joined #openstack-containers | 15:29 | |
*** lpetrut has quit IRC | 15:40 | |
*** ppetit has quit IRC | 15:43 | |
*** ykarel|away has quit IRC | 15:43 | |
*** jmlowe has quit IRC | 15:56 | |
*** jmlowe has joined #openstack-containers | 15:58 | |
*** itlinux has joined #openstack-containers | 16:01 | |
*** zul has quit IRC | 16:14 | |
*** shrasool has quit IRC | 16:15 | |
*** ykarel|away has joined #openstack-containers | 16:23 | |
*** shrasool has joined #openstack-containers | 16:24 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Deploy tiller https://review.openstack.org/612336 | 16:35 |
*** graysonh has quit IRC | 16:42 | |
*** shrasool has quit IRC | 16:46 | |
*** shrasool has joined #openstack-containers | 16:50 | |
*** udesale has quit IRC | 17:00 | |
*** jmlowe has quit IRC | 17:01 | |
*** zul has joined #openstack-containers | 17:01 | |
*** shrasool has quit IRC | 17:12 | |
*** mattgo has joined #openstack-containers | 17:24 | |
*** jmlowe has joined #openstack-containers | 17:37 | |
*** salmankhan has quit IRC | 17:40 | |
*** shrasool has joined #openstack-containers | 17:50 | |
*** mattgo has quit IRC | 17:54 | |
*** lpetrut has joined #openstack-containers | 17:54 | |
*** shrasool has quit IRC | 17:59 | |
*** ykarel|away has quit IRC | 18:17 | |
flwang1 | kaiokmo: still around? | 18:27 |
flwang1 | kaiokmo: check this https://github.com/openstack/magnum/blob/master/magnum/conf/drivers.py#L33 make sure it's False | 18:28 |
kaiokmo | flwang1: thanks. it's "default=True" for me. changing it to False now | 18:37 |
kaiokmo | flwang1: is this a condition for cluster creation to complete or something? | 18:38
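Assuming the option at that line of drivers.py is verify_ca in the [drivers] group (default True), the equivalent change via magnum.conf is roughly:

```shell
# Sketch, assuming drivers.py#L33 is the verify_ca option: stop cluster nodes from
# verifying the CA when they call back to the OpenStack APIs, then restart magnum.
crudini --set /etc/magnum/magnum.conf drivers verify_ca false
systemctl restart magnum-api magnum-conductor
```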
*** jmlowe has quit IRC | 18:46 | |
kaiokmo | flwang1: it worked. the cluster creation is now completed. thank you | 18:54
*** jmlowe has joined #openstack-containers | 18:55 | |
flwang1 | it's a known issue | 18:57 |
*** flwang1 has quit IRC | 18:57 | |
*** jmlowe has quit IRC | 19:08 | |
*** lpetrut has quit IRC | 19:18 | |
*** salmankhan has joined #openstack-containers | 19:37 | |
*** salmankhan has quit IRC | 19:41 | |
*** jmlowe has joined #openstack-containers | 19:46 | |
*** ramishra_ has quit IRC | 20:01 | |
*** shrasool has joined #openstack-containers | 21:17 | |
*** salmankhan has joined #openstack-containers | 21:30 | |
openstackgerrit | Erik Olof Gunnar Andersson proposed openstack/magnum master: Removed admin_ setings https://review.openstack.org/614034 | 21:38 |
openstackgerrit | Erik Olof Gunnar Andersson proposed openstack/magnum master: Removed admin_* from devstack config https://review.openstack.org/614034 | 21:39 |
*** itlinux has quit IRC | 21:46 | |
*** pcaruana has quit IRC | 21:57 | |
*** shrasool has quit IRC | 22:01 | |
*** serlex has quit IRC | 22:05 | |
*** shrasool has joined #openstack-containers | 22:17 | |
*** munimeha1 has quit IRC | 22:23 | |
*** shrasool has quit IRC | 23:03 | |
*** salmankhan has quit IRC | 23:25 | |
*** itlinux has joined #openstack-containers | 23:37 | |
*** hongbin has quit IRC | 23:43 |