| *** hongbin has quit IRC | 00:13 | |
| *** sapd1 has joined #openstack-containers | 01:01 | |
| *** ricolin has joined #openstack-containers | 01:30 | |
| *** hongbin has joined #openstack-containers | 02:07 | |
| *** ArchiFleKs has quit IRC | 02:07 | |
| *** ArchiFleKs has joined #openstack-containers | 02:22 | |
| *** ykarel|away has joined #openstack-containers | 03:40 | |
| *** ykarel|away is now known as ykarel | 03:48 | |
| *** ricolin has quit IRC | 04:05 | |
| *** ramishra has joined #openstack-containers | 04:09 | |
| *** udesale has joined #openstack-containers | 04:36 | |
| *** ykarel has quit IRC | 04:57 | |
| *** ricolin has joined #openstack-containers | 05:02 | |
| *** ykarel has joined #openstack-containers | 05:11 | |
| *** suanand has joined #openstack-containers | 05:23 | |
| *** hongbin has quit IRC | 05:44 | |
| *** ramishra_ has joined #openstack-containers | 06:00 | |
| *** ramishra has quit IRC | 06:01 | |
| *** _fragatina has quit IRC | 06:12 | |
| *** ramishra_ is now known as ramishra | 06:17 | |
| *** _fragatina has joined #openstack-containers | 06:22 | |
| *** _fragatina has quit IRC | 06:37 | |
| *** _fragatina has joined #openstack-containers | 06:48 | |
| *** janki has joined #openstack-containers | 07:11 | |
| *** janki has quit IRC | 07:13 | |
| *** janki has joined #openstack-containers | 07:13 | |
| *** ykarel is now known as ykarel|lunch | 07:42 | |
| *** _fragatina has quit IRC | 07:57 | |
| *** ramishra has quit IRC | 08:08 | |
| brtknr | everyone: what approaches are people taking for configuring minions with multiple flavors? | 08:17 |
| *** ramishra has joined #openstack-containers | 08:18 | |
| *** ykarel|lunch is now known as ykarel | 08:29 | |
| ricolin | strigazi, flwang guys what is the best way to delete a single node from a Magnum cluster? | 08:51 |
| *** ign0tus has joined #openstack-containers | 09:04 | |
| *** Horrorcat has joined #openstack-containers | 09:30 | |
| strigazi | ricolin: apart from scale down by node? | 09:32 |
| strigazi | ricolin: apart from scale down by one node? | 09:32 |
| *** FlorianFa has joined #openstack-containers | 09:34 | |
| *** ign0tus has quit IRC | 09:55 | |
| Horrorcat | hi folks. I’m having trouble with spawning a k8s cluster using Magnum. I’m using the following template <http://paste.debian.net/hidden/f1e9611a/>. Magnum and Heat are on the Rocky release, everything else is Pike. I observed the logs of both magnum and heat during cluster creation and was not able to spot any errors, except the occasional database reconnect. | 10:12 |
| Horrorcat | The cluster spawns (CREATE_COMPLETE), but it is unusable, because the minions have the node.cloudprovider.kubernetes.io/uninitialized taint. | 10:12 |
| Horrorcat | from my research, this means that magnum failed to do a step during the initialisation, is that correct? | 10:12 |
| Horrorcat | if it is, where do I need to look to figure out *why* it didn’t do that step? | 10:13 |
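The taint Horrorcat describes can be inspected directly. A minimal sketch, assuming `kubectl` with an admin kubeconfig; the cluster-facing commands are commented out, followed by a runnable sample of the output line they filter for:

```shell
# Cluster-facing commands (assume kubectl + a reachable cluster; commented out):
#   kubectl describe node <minion-name> | grep -A2 '^Taints:'
#   # manual removal as a stopgap; the trailing '-' deletes the taint:
#   kubectl taint nodes <minion-name> node.cloudprovider.kubernetes.io/uninitialized-
# Sample of the line `describe node` prints for an uninitialized minion,
# and the filter that spots it:
sample='Taints: node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule'
echo "$sample" | grep -o 'node\.cloudprovider\.kubernetes\.io/uninitialized'
```

Manually deleting the taint only hides the symptom; as discussed below, it is normally removed by the cloud-controller-manager once it initialises the node.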
| *** ign0tus has joined #openstack-containers | 10:13 | |
| Horrorcat | I also checked cloud-init logs on both master and minion, didn’t find anything suspicious there either. | 10:15 |
| *** ArchiFleKs has quit IRC | 10:19 | |
| *** sapd1 has quit IRC | 10:29 | |
| *** ArchiFleKs has joined #openstack-containers | 10:30 | |
| Horrorcat | I also get `No resources found.` from `kubectl get pods --all-namespaces` | 10:43 |
| *** mkuf has quit IRC | 10:51 | |
| *** udesale has quit IRC | 10:58 | |
| *** sapd1 has joined #openstack-containers | 11:24 | |
| *** janki has quit IRC | 11:48 | |
| *** janki has joined #openstack-containers | 11:48 | |
| *** mkuf has joined #openstack-containers | 11:51 | |
| *** janki has quit IRC | 11:55 | |
| *** janki has joined #openstack-containers | 11:55 | |
| *** sapd1 has quit IRC | 12:03 | |
| *** sapd1 has joined #openstack-containers | 12:23 | |
| *** _fragatina has joined #openstack-containers | 12:30 | |
| *** janki has quit IRC | 12:41 | |
| *** udesale has joined #openstack-containers | 12:45 | |
| *** suanand has quit IRC | 13:05 | |
| *** sapd1 has quit IRC | 13:08 | |
| *** dave-mccowan has joined #openstack-containers | 13:36 | |
| *** jmlowe has quit IRC | 13:54 | |
| imdigitaljim | horrorcat | 14:15 |
| imdigitaljim | that is a taint that gets removed by the cloud-controller-manager when it comes online | 14:16 |
| imdigitaljim | maybe check that this came up okay? | 14:16 |
| *** ykarel is now known as ykarel|away | 14:18 | |
| *** ykarel|away has quit IRC | 14:23 | |
| *** jmlowe has joined #openstack-containers | 14:25 | |
| *** ykarel|away has joined #openstack-containers | 14:40 | |
| *** mrodriguez has joined #openstack-containers | 14:43 | |
| *** ykarel|away is now known as ykarel | 14:45 | |
| Horrorcat | imdigitaljim, I’m not at work anymore, but thanks for your reply. So I was able to figure out that it works with CoreOS instead of Fedora 27 Atomic. How would I check if the cloud-controller-manager came up? | 14:46 |
| imdigitaljim | you should be able to see with kubectl get all --all-namespaces=true | 14:47 |
| imdigitaljim | but it likely would show up in -n kube-system | 14:47 |
| imdigitaljim | with the all it might show up as a deployment or DS | 14:48 |
| imdigitaljim | i dont use either fedora or coreos so i dont know specifics sorry =| | 14:48 |
| Horrorcat | okay, yeah, that turns up empty, as I mentioned | 14:51 |
| Horrorcat | ah, you said get all | 14:51 |
| Horrorcat | that has some output which I don’t have with me | 14:51 |
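imdigitaljim's suggestion can be sketched as shell. The resource names below are assumptions (they vary by deployment); the cluster commands are commented out, with a runnable demo of the filter applied to made-up sample output:

```shell
# Assumes kubectl and a reachable cluster (commented out):
#   kubectl get all --all-namespaces=true
#   kubectl -n kube-system get pods,deployments,daemonsets
# Filtering that output for the cloud-controller-manager might look like
# this; both pod names are invented for the demo:
printf '%s\n' \
  'kube-system   pod/kube-dns-7b6f9d4b5   Running' \
  'kube-system   pod/openstack-cloud-controller-manager-x2xq9   Running' |
  grep -c 'cloud-controller-manager'
```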
| strigazi | imdigitaljim can you provide a reproducer for this comment: "the ro file-system doesnt protect you if you're root, you can just mount -o remount,rw /anything"? people from fedora think it does protect you. | 15:11 |
| *** ign0tus has quit IRC | 15:12 | |
| strigazi | protects you from the exploit, not if you are root in general in the host. | 15:12 |
| strigazi | imdigitaljim: you would run mount -o remount,rw /(root) ? | 15:13 |
| strigazi | imdigitaljim: this doesn't work "mount -o remount,rw /" | 15:13 |
| imdigitaljim | you can do | 15:15 |
| imdigitaljim | mount -o remount,rw /etc | 15:15 |
| imdigitaljim | mount -o remount,rw /usr | 15:15 |
| imdigitaljim | etc | 15:15 |
| imdigitaljim | maybe not on / | 15:15 |
| imdigitaljim | but i in fact did it to overcome some on-host writing | 15:15 |
| imdigitaljim | i know fedora had like 2 rw paths | 15:16 |
| imdigitaljim | maybe /var and /etc/? | 15:16 |
| imdigitaljim | i dont remember | 15:16 |
| imdigitaljim | but you can remount most directories | 15:16 |
| imdigitaljim | im not sure in terms of this exploit fwiw, but if you do acquire root the ro wont save you =] | 15:18 |
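The remount behaviour being debated can be checked without root. A sketch: the remount commands themselves need root in the host's mount namespace and are left as comments, while the unprivileged part lists which mounts are currently read-only:

```shell
# Root-only, host mount namespace (the commands under discussion):
#   mount -o remount,rw /usr
#   mount -o remount,rw /etc
# Unprivileged: list mounts whose options start with 'ro', i.e. the
# read-only filesystems a root user could try to flip to read-write.
# Prints nothing if no read-only mounts exist; always exits 0.
awk '$4 == "ro" || $4 ~ /^ro,/ {print $2}' /proc/mounts
```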
| *** udesale has quit IRC | 15:22 | |
| *** itlinux has joined #openstack-containers | 15:30 | |
| strigazi | this cve doesn't give you root access. It allows you to touch the runc binary | 15:35 |
| strigazi | imdigitaljim: ^^ | 15:35 |
| *** hongbin has joined #openstack-containers | 15:38 | |
| imdigitaljim | if you can overwrite a binary you can get a root shell =) my comments were just in the event that runc for FA27/28 was able to be hit despite ro mount | 15:40 |
| strigazi | imdigitaljim Can it be hit from inside an unprivileged container? | 15:48 |
| imdigitaljim | i believe so | 15:49 |
| strigazi | imdigitaljim: with selinux disabled. With selinux enforcing you cannot | 15:49 |
| strigazi | imdigitaljim: reproducer? | 15:49 |
| strigazi | imdigitaljim: fyi, https://github.com/kubernetes/autoscaler/pull/1690 | 15:50 |
| strigazi | imdigitaljim: the fun has started from people who want to sell their solution | 15:50 |
| imdigitaljim | is https://gitlab.cern.ch/cloud-infrastructure/magnum/blob/master/magnum/drivers/k8s_fedora_atomic_v1/templates/kubemaster.yaml#L590 | 15:51 |
| imdigitaljim | that accurate? | 15:51 |
| imdigitaljim | do you run in permissive anyways? | 15:51 |
| imdigitaljim | sell whose solution? | 15:51 |
| imdigitaljim | for autoscaler? or exploit | 15:51 |
| strigazi | for the CVE: we run with selinux in permissive. Do you have a reproducer for fedora atomic, with selinux in permissive? | 15:53 |
| imdigitaljim | no i havent tried for it | 15:53 |
| imdigitaljim | i believe in a few days they will release theirs | 15:53 |
| imdigitaljim | on the 20th | 15:53 |
| imdigitaljim | iirc | 15:53 |
| imdigitaljim | you can try it then | 15:54 |
| imdigitaljim | maybe? | 15:54 |
| imdigitaljim | we dont use fedora at all now so ill probably not be able to test it out | 15:54 |
| strigazi | for the CVE: per redhat, with selinux enforcing you are safe. | 15:54 |
| imdigitaljim | but i can try to pass along the publicly available exploit | 15:54 |
| imdigitaljim | if i see it | 15:54 |
| imdigitaljim | do you run enforcing now? | 15:54 |
| imdigitaljim | your ^ master shows it's disabled | 15:54 |
| strigazi | 16:53 < strigazi> for the CVE: we run with selinux in permissive. | 15:54 |
| imdigitaljim | oh ok yeah so it's not enabled | 15:55 |
| imdigitaljim | well 'not enforcing' | 15:55 |
| strigazi | that is why i asked: Do you have a reproducer for fedora atomic, with selinux in permissive? | 15:55 |
| *** itlinux has quit IRC | 15:55 | |
| imdigitaljim | oh yeah, i dont | 15:56 |
| imdigitaljim | not much need to mess with atomic | 15:56 |
| *** itlinux has joined #openstack-containers | 15:56 | |
| imdigitaljim | just offering extra info | 15:56 |
| strigazi | I'm trying to understand if the exploit is possible. from a container, you can do: mount -o remount,rw /usr, you are in different namespace. | 15:58 |
| strigazi | I'm trying to understand if the exploit is possible. from a container, you can NOT do: mount -o remount,rw /usr, you are in different namespace. | 15:58 |
| imdigitaljim | and by the way thanks for the CAS pull link, we'll get some comments on there | 15:59 |
| strigazi | I'm trying to understand if the exploit is possible. from a container, you can NOT do: mount -o remount,rw /usr, you are in a different mount namespace. | 15:59 |
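strigazi's namespace point can be seen from any unprivileged shell: each process carries a mount namespace id, and a container's differs from the host's, which is why a remount issued inside the container would not touch the host's /usr. A minimal check:

```shell
# Print this process's mount namespace id. Inside a container this value
# differs from the host's, so any remount stays local to the container's
# own view of the filesystem:
readlink /proc/self/ns/mnt    # e.g. mnt:[4026531840]
```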
| imdigitaljim | to clarify: the mount -o comment can only happen after the exploit happens (if it can) | 15:59 |
| imdigitaljim | the container breakout has to happen first | 16:00 |
| imdigitaljim | idk the steps for that and they said they provided 7 days of notice before releasing the exploit | 16:00 |
| imdigitaljim | which expires in a few days | 16:00 |
| strigazi | ok, makes sense. I think the ro fs protects from the exploit. I need an expert to confirm | 16:01 |
| imdigitaljim | so maybe after that you can see if you're affected | 16:01 |
| imdigitaljim | yeah it might | 16:01 |
| imdigitaljim | i just wanted to throw some extra info to help the analysis | 16:01 |
| strigazi | you can run: gitlab-registry.cern.ch/strigazi/containers/cve-2019-5736-poc | 16:01 |
| imdigitaljim | do you get like ro fails? | 16:02 |
| strigazi | in ubuntu with an unpatched docker, I was able to touch runc | 16:02 |
| strigazi | in atomic, I couldn't | 16:02 |
| imdigitaljim | then it might be safe from this | 16:02 |
| strigazi | code here: https://github.com/q3k/cve-2019-5736-poc | 16:03 |
| imdigitaljim | im sure FA would backport/publish a runc update too | 16:03 |
| imdigitaljim | ? | 16:03 |
| strigazi | I think they have | 16:04 |
| strigazi | in fa29 | 16:05 |
| strigazi | dims: any comments? shall we push for upstreaming this? https://github.com/kubernetes/autoscaler/pull/1690 | 16:06 |
| *** itlinux has quit IRC | 16:07 | |
| *** ramishra has quit IRC | 16:07 | |
| *** hongbin has quit IRC | 16:09 | |
| *** ArchiFleKs has quit IRC | 16:13 | |
| *** ricolin_ has joined #openstack-containers | 16:14 | |
| dims | strigazi : looking | 16:15 |
| dims | strigazi : awesome! yes please | 16:15 |
| *** ricolin has quit IRC | 16:16 | |
| strigazi | dims: Thanks, we would like to have an implementation for magnum. We don't want to block other implementations of course. | 16:17 |
| strigazi | dims: it would be nice if we can have it in the upstream repo | 16:17 |
| *** ArchiFleKs has joined #openstack-containers | 16:17 | |
| dims | strigazi : right agree | 16:18 |
| *** itlinux has joined #openstack-containers | 16:19 | |
| *** hongbin has joined #openstack-containers | 16:24 | |
| *** itlinux has quit IRC | 16:36 | |
| *** itlinux has joined #openstack-containers | 16:38 | |
| *** trident has quit IRC | 16:46 | |
| *** _fragatina has quit IRC | 17:03 | |
| *** ianychoi has joined #openstack-containers | 17:06 | |
| *** _fragatina has joined #openstack-containers | 17:06 | |
| *** hongbin has quit IRC | 17:29 | |
| *** Florian has joined #openstack-containers | 17:34 | |
| *** itlinux has quit IRC | 17:35 | |
| *** ykarel is now known as ykarel|away | 17:54 | |
| *** jmlowe has quit IRC | 17:57 | |
| *** ykarel|away has quit IRC | 18:00 | |
| ricolin_ | dims, strigazi, just posted my WIP in https://github.com/kubernetes/autoscaler/pull/1691, I guess we can move the follow-up work onto 1690 | 18:02 |
| *** ricolin_ has quit IRC | 18:19 | |
| *** ricolin_ has joined #openstack-containers | 18:20 | |
| *** ricolin_ has quit IRC | 18:25 | |
| *** henriqueof has joined #openstack-containers | 18:44 | |
| *** dims has quit IRC | 18:47 | |
| *** henriqueof has quit IRC | 19:15 | |
| *** henriqueof has joined #openstack-containers | 19:15 | |
| *** henriqueof has quit IRC | 19:18 | |
| *** henriqueof has joined #openstack-containers | 19:19 | |
| *** dave-mccowan has quit IRC | 19:42 | |
| *** _fragatina has quit IRC | 20:02 | |
| *** jmlowe has joined #openstack-containers | 20:07 | |
| *** Florian has quit IRC | 20:09 | |
| *** flwang1 has left #openstack-containers | 20:12 | |
| *** dave-mccowan has joined #openstack-containers | 20:35 | |
| *** spiette has quit IRC | 20:38 | |
| *** spiette has joined #openstack-containers | 20:38 | |
| *** spiette has quit IRC | 20:39 | |
| *** henriqueof has quit IRC | 20:48 | |
| brtknr | that's peculiar... 2 PRs for similar things? | 20:51 |
| brtknr | I'm pretty excited about autoscaling | 20:51 |
| brtknr | Also node groups | 20:52 |
| *** _fragatina has joined #openstack-containers | 21:15 | |
| brtknr | strigazi: okay I can confirm that stable/queens branch is erroring when creating nodes... heat_container_agent is failing to notify heat because of missing region_name somehow... | 21:31 |
| -openstackstatus- NOTICE: Jobs are failing due to ssh host key mismatches caused by duplicate IPs in a test cloud region. We are disabling the region and will let you know when jobs can be rechecked. | 21:31 | |
| brtknr | 6.1.0 works fine | 21:38 |
| brtknr | this is because iv.get('deploy_region_name') resolves to null | 21:38 |
| *** dave-mccowan has quit IRC | 21:49 | |
| brtknr | or maybe for another reason | 21:57 |
| -openstackstatus- NOTICE: The test cloud region using duplicate IPs has been removed from nodepool. Jobs can be rechecked now. | 22:13 | |
| *** hongbin has joined #openstack-containers | 22:20 | |
| *** dave-mccowan has joined #openstack-containers | 22:22 | |
| *** hongbin has quit IRC | 23:35 | |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!