*** dmellado has quit IRC | 00:03 | |
*** dmellado has joined #openstack-kuryr | 00:04 | |
*** celebdor has joined #openstack-kuryr | 00:04 | |
*** celebdor has quit IRC | 00:10 | |
*** mrostecki has quit IRC | 00:11 | |
*** oanson has quit IRC | 01:13 | |
*** hongbin has joined #openstack-kuryr | 02:26 | |
*** janki has joined #openstack-kuryr | 03:49 | |
*** janki has quit IRC | 03:50 | |
*** janki has joined #openstack-kuryr | 03:50 | |
*** hongbin has quit IRC | 04:42 | |
*** yboaron has joined #openstack-kuryr | 05:29 | |
*** gcheresh_ has joined #openstack-kuryr | 05:32 | |
*** gcheresh_ has quit IRC | 05:37 | |
*** gcheresh_ has joined #openstack-kuryr | 06:07 | |
*** gkadam has quit IRC | 06:13 | |
*** ccamposr has joined #openstack-kuryr | 06:59 | |
*** maysams has joined #openstack-kuryr | 08:12 | |
*** gkadam has joined #openstack-kuryr | 08:14 | |
*** alisanhaji has joined #openstack-kuryr | 08:27 | |
*** pcaruana has joined #openstack-kuryr | 08:29 | |
*** yboaron_ has joined #openstack-kuryr | 08:32 | |
*** yboaron has quit IRC | 08:34 | |
dulek | ltomasbo: Any idea why Octavia keeps breaking my default route? | 09:06 |
dulek | "default via 192.168.0.1 dev o-hm0" | 09:06 |
ltomasbo | yes! | 09:06 |
ltomasbo | it is broken with ovn | 09:06 |
*** yboaron_ has quit IRC | 09:06 | |
dulek | ltomasbo: :) | 09:06 |
ltomasbo | I added this to the devstack/plugin | 09:06 |
*** yboaron_ has joined #openstack-kuryr | 09:07 | |
ltomasbo | if ! ps aux | grep -q [o]-hm0 && [ $OCTAVIA_NODE != 'api' ] ; then | 09:07 |
ltomasbo | sudo dhclient -v o-hm0 -cf $OCTAVIA_DHCLIENT_CONF | 09:07 |
ltomasbo | + sudo ip route replace default via 10.16.151.254 | 09:07 |
ltomasbo | switching it to your default route... | 09:07 |
ltomasbo | that worked for me | 09:07 |
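For reference, the workaround described above pulled together into one shell sketch: inside Octavia's devstack plugin it brings up the o-hm0 interface with dhclient and then restores the host's real default route afterwards. The gateway address is the one from this conversation and is environment-specific.

    # Sketch of the workaround discussed above. 10.16.151.254 is just the
    # example gateway from this log; use your environment's real default gateway.
    if ! ps aux | grep -q '[o]-hm0' && [ "$OCTAVIA_NODE" != 'api' ]; then
        # Bring up the Octavia health-manager interface via DHCP...
        sudo dhclient -v o-hm0 -cf "$OCTAVIA_DHCLIENT_CONF"
        # ...then put back the default route that the o-hm0 lease may have replaced.
        sudo ip route replace default via 10.16.151.254
    fi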
dulek | You've added that to Octavia's devstack/plugin, right? | 09:08 |
dulek | Just the last line. | 09:08 |
ltomasbo | yep | 09:08 |
ltomasbo | exactly | 09:08 |
dulek | ltomasbo: Damn, but that doesn't get my pods access to the internet. | 09:10 |
*** kmadac has joined #openstack-kuryr | 09:10 | |
ltomasbo | umm | 09:13 |
ltomasbo | maybe you need to use masquerade... | 09:13 |
ltomasbo | running on RDO-cloud? | 09:13 |
ltomasbo | dulek, ^^ | 09:13 |
ltomasbo | try this: | 09:13 |
ltomasbo | sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT | 09:13 |
ltomasbo | sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT | 09:13 |
ltomasbo | sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE | 09:13 |
ltomasbo | sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited | 09:13 |
ltomasbo | sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited | 09:13 |
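(For context: the first two rules allow forwarding to and from 172.24.4.0/24, which is presumably the devstack floating IP range in this setup, the MASQUERADE rule NATs that traffic as it leaves the host, and the last two commands remove the default REJECT rules that would otherwise block it.)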
dulek | ltomasbo: Nah, my local VM. | 09:14 |
*** celebdor has joined #openstack-kuryr | 09:15 | |
dulek | ltomasbo: Magic… xD | 09:16 |
dulek | ltomasbo: Can I expect that in the gates pods will have access to the outside? | 09:16 |
ltomasbo | dulek, they should | 09:18 |
*** FlorianFa has quit IRC | 11:01 | |
*** FlorianFa has joined #openstack-kuryr | 11:02 | |
*** kmadac1 has joined #openstack-kuryr | 11:02 | |
*** pcaruana has quit IRC | 11:04 | |
*** kmadac has quit IRC | 11:05 | |
*** kmadac1 has quit IRC | 11:08 | |
*** kmadac2 has joined #openstack-kuryr | 11:12 | |
*** pcaruana has joined #openstack-kuryr | 11:32 | |
dulek | celebdor: Can you help me decide how I should configure that coredns so it makes sense upstream? | 12:02 |
dulek | celebdor: Okay, nevermind, it's always that when I write on IRC I finally figure out a correct solution. :P | 12:04 |
*** kmadac2 has quit IRC | 12:11 | |
*** kmadac2 has joined #openstack-kuryr | 12:13 | |
*** danil_ has joined #openstack-kuryr | 12:22 | |
*** danil has quit IRC | 12:26 | |
*** mrostecki has joined #openstack-kuryr | 12:31 | |
dulek | yboaron_: Do I remember correctly that at the moment networking-ovn provider doesn't support UDP? | 12:41 |
yboaron_ | dulek, Hmmm, I'm not 100% sure, I think u r right | 12:42 |
dulek | Okay, thanks. | 12:43 |
*** mrostecki has quit IRC | 12:51 | |
*** mrostecki has joined #openstack-kuryr | 12:53 | |
alisanhaji | Hi people of the world, I have a question about the pod network in Kuryr: can you have multiple Neutron networks to put pods in, or can you only use the subnet specified by the pod_subnet parameter in kuryr.conf? Thanks | 13:46 |
*** pcaruana has quit IRC | 13:51 | |
*** pcaruana has joined #openstack-kuryr | 14:01 | |
*** FlorianFa has quit IRC | 14:09 | |
dulek | alisanhaji: You can specify a pod subnetpool if you expect to have a subnet per K8s namespace. | 14:10 |
dulek | ltomasbo: Ha, I've run out of security groups on my env. | 14:11 |
dulek | I assume the fact that I have 5 of them called "kube-system/coredns" indicates some resource leaking. | 14:11 |
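A quick way to spot that kind of leak with standard openstackclient commands (purely illustrative):

    # Count security groups that share a name; several identical
    # "kube-system/coredns" entries would point at leaked resources.
    openstack security group list -f value -c Name | sort | uniq -c | sort -rn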
*** gkadam_ has joined #openstack-kuryr | 14:16 | |
*** gkadam has quit IRC | 14:19 | |
*** alisanhaji has quit IRC | 14:26 | |
*** alisanhaji has joined #openstack-kuryr | 14:35 | |
ltomasbo | yep, could be | 14:48 |
dulek | ltomasbo, maysams: I have a Service definition that lists two protocols using the same port. I assume that is not supported? | 14:53 |
dulek | Error message: {"debuginfo": null, "faultcode": "Client", "faultstring": | 14:53 |
dulek | "Another Listener on this Load Balancer is already using protocol_port 53"} | 14:53 |
ltomasbo | you can open the same port on different protocols (tcp and udp) | 14:54 |
ltomasbo | but not twice on the same protocol, I suppose | 14:54 |
dulek | ltomasbo: That's how I have it 53/UDP,53/TCP. | 14:54 |
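For context, a minimal sketch of the kind of Service being discussed, exposing port 53 over both UDP and TCP; the names and selector here are hypothetical:

    # Hypothetical manifest, just to illustrate the 53/UDP,53/TCP case.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: dns-example        # hypothetical name
      namespace: kube-system
    spec:
      selector:
        k8s-app: kube-dns      # hypothetical selector
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
    EOF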
dulek | It's possible it doesn't work with OVN only, though. | 14:55 |
dulek | BTW - such a loadbalancer never gets deleted, because it constantly fails listener creation with that error… | 14:56 |
ltomasbo | maybe, not sure if ovn has support for udp | 14:56 |
ltomasbo | perhaps you can remove the listener then | 14:56 |
ltomasbo | the one already created | 14:56 |
dulek | ltomasbo: Deleting the loadbalancer helps. | 14:58 |
*** gkadam__ has joined #openstack-kuryr | 15:00 | |
dulek | yboaron_, ltomasbo: Just clarifying my findings: networking-ovn provider *does* support UDP, using same port for TCP and UDP was the issue. | 15:02 |
*** gkadam_ has quit IRC | 15:03 | |
ltomasbo | dulek, my last update on that was that ovn-provider was supporting both udp and tcp, but not on the same loadbalancer | 15:03 |
ltomasbo | you can have lbaas with tcp or udp, but not with both (unless they fixed that already) | 15:03 |
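A sketch of how that limitation would surface with the plain Octavia CLI (load balancer, listener and subnet names are made up); the second listener create is the step reported to fail on an ovn-provider load balancer that already has a listener of the other protocol:

    # Create an OVN-provider load balancer (names and subnet are hypothetical).
    openstack loadbalancer create --name lb-dns --provider ovn \
        --vip-subnet-id my-service-subnet
    # A first listener on port 53 is fine...
    openstack loadbalancer listener create --name dns-tcp \
        --protocol TCP --protocol-port 53 lb-dns
    # ...adding the other protocol to the same load balancer is where the
    # ovn provider was said to fall over at the time of this discussion.
    openstack loadbalancer listener create --name dns-udp \
        --protocol UDP --protocol-port 53 lb-dns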
dulek | ltomasbo: Ah, makes sense, wonderful. | 15:05 |
alisanhaji | dulek: thanks! | 15:06 |
maysams | dulek: sry, I was afk. But it seems you guys figured it out | 15:09 |
dulek | maysams: Yup. | 15:10 |
maysams | good! | 15:10 |
dulek | alisanhaji: What's your use case? We might be able to help more if we understand it. | 15:10 |
alisanhaji | well I want to have kuryr-kubernetes in magnum so that containers and pods are in the same networks. In this case I can configure a separate network for each Magnum K8s cluster, but I am also thinking about having Ironic and a separate network for each tenant | 15:13 |
ltomasbo | alisanhaji, by each tenant do you mean a kubernetes user, or an openstack project? | 15:16 |
ltomasbo | alisanhaji, I imagine you are installing kuryr in a given OpenStack tenant, right? | 15:16 |
alisanhaji | I was wondering if it was also possible to have Kuryr use the same Neutron that was used to spawn the VM that hosts kubernetes with Kuryr. I think as long as I have a separate network for the VMs that host magnum and another network that runs inside these VMs, it should be fine (with tons of encapsulation) | 15:17 |
alisanhaji | ltomasbo: as an openstack project | 15:17 |
alisanhaji | ltomasbo: magnum runs k8s clusters for | 15:18 |
ltomasbo | alisanhaji, ahh, sure, you can set the network for the containers in kuryr.conf (if no namespace/np driver is used) and then point it to the VM network you like | 15:18 |
alisanhaji | inside a project | 15:18 |
alisanhaji | ltomasbo: thanks! I think the trick is to make magnum create a new pod subnet or subnet pool for each new cluster and configure k8s kuryr to use that one | 15:19 |
ltomasbo | alisanhaji, ok, another option is to have a subnetpool | 15:21 |
ltomasbo | then, if you enable namespace isolation, or network policies | 15:21 |
ltomasbo | a new subnet and network will be created from that subnetpool for each kubernetes namespace | 15:21 |
ltomasbo | all connected to the same router of course | 15:21 |
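Roughly, the two setups being compared would look like this in kuryr.conf; the section and option names below are written from memory and should be treated as assumptions to verify against the kuryr-kubernetes documentation:

    # Hypothetical kuryr.conf snippet; option and section names are assumptions.
    cat >> /etc/kuryr/kuryr.conf <<'EOF'
    [neutron_defaults]
    # Single pod network for the whole cluster:
    pod_subnet = <pod-subnet-uuid>

    [namespace_subnet]
    # Subnet-per-namespace mode: each K8s namespace gets a subnet carved
    # out of this pool, all attached to the same router.
    pod_subnet_pool = <subnetpool-uuid>
    pod_router = <router-uuid>
    EOF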
alisanhaji | true, it would be even more convenient | 15:22 |
*** mrostecki has quit IRC | 15:23 | |
alisanhaji | ltomasbo and dulek thanks for the help, I will do some tests and see with Magnum team to try to put Kuryr among the network drivers | 15:25 |
dulek | alisanhaji: Hey, hey, the trick is having an instance of kuryr-kubernetes per K8s cluster! :) | 15:27 |
dulek | alisanhaji: That's how it's intended to work. | 15:27 |
dulek | alisanhaji: We allow running Kuryr in pods on the K8s cluster, just like you would run Calico. | 15:28 |
dulek | alisanhaji: Can you share a K8s cluster between OpenStack tenants in Magnum? | 15:29 |
alisanhaji | dulek: so each cluster that magnum spawns will come with its own configured kuryr-kubernetes | 15:29 |
dulek | alisanhaji: Yes, I don't think it would even work to have a single kuryr-controller serve multiple K8s clusters. | 15:30 |
dulek | If it does, it's by pure luck, we never assumed that. | 15:30 |
alisanhaji | dulek: I don't think so, each cluster is isolated for a specific project | 15:31 |
*** mrostecki has joined #openstack-kuryr | 15:31 | |
alisanhaji | I don't know if there is a "public" parameter | 15:31 |
dulek | alisanhaji: Okay, so basically the way to go is to have one kuryr-controller (and kuryr-daemons on nodes) for each K8s cluster Magnum deploys. | 15:34 |
dulek | And the easiest way to achieve that would be to just run the K8s yamls that you can generate using the script tools/generate_k8s_resource_definitions.sh. | 15:35 |
dulek | (It might not fit your needs, so maybe you need your own, or you want to contribute fixes to our script; we're totally welcoming here) | 15:36 |
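A rough usage sketch for that script; the argument list is an assumption (the script itself documents the exact parameters), but the general flow is generating the manifests and applying them to the cluster:

    # Assumed invocation: the first argument is the directory where the
    # controller/CNI manifests and ConfigMaps get written.
    ./tools/generate_k8s_resource_definitions.sh /tmp/kuryr_k8s_output
    # Deploy Kuryr into the cluster from the generated YAMLs.
    kubectl apply -f /tmp/kuryr_k8s_output/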
dulek | ltomasbo: Do you want me to update upstream container images? I have them built, but I didn't want to interrupt your debugging session. | 15:36 |
ltomasbo | dulek, sure! please do! | 15:37 |
ltomasbo | I'm just compiling kuryr myself on the testbed to not rely on that! | 15:37 |
ltomasbo | but updating the master images for both would be great! | 15:37 |
alisanhaji | dulek: ok thanks, I'll check this out, run some tests locally and see how I can automate it with Magnum | 15:38 |
dulek | alisanhaji: Sure, feel free to bug us with anything. | 15:38 |
dulek | alisanhaji: And don't do `git blame` on that script, it's me. ;) | 15:39 |
alisanhaji | dulek: haha they should have a command 'git peace' to know who to thank for a commit, to balance things out :D | 15:41 |
* dulek sets up a git alias. :D | 15:42 | |
alisanhaji | haha! | 15:43 |
*** pcaruana has quit IRC | 15:53 | |
*** janki has quit IRC | 15:55 | |
*** yboaron_ has quit IRC | 16:04 | |
*** pcaruana has joined #openstack-kuryr | 16:06 | |
*** gkadam__ has quit IRC | 16:42 | |
*** pcaruana has quit IRC | 16:55 | |
dulek | Ha, I've just successfully run the first upstream K8s NP test! :D | 17:24 |
dulek | Thing is - that one was testing connectivity without any policy, let's see what happens with the next one. :) | 17:25 |
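For anyone wanting to reproduce this, one common way to run just the NetworkPolicy subset of the upstream Kubernetes e2e suite is a built e2e.test binary with a ginkgo focus expression; the exact focus string is an assumption and varies between Kubernetes versions:

    # Assumes an e2e.test binary built from the kubernetes/kubernetes tree
    # and a kubeconfig pointing at the Kuryr-backed cluster.
    ./e2e.test --kubeconfig="$HOME/.kube/config" \
        --ginkgo.focus='NetworkPolicy'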
ltomasbo | dulek, great! \o/ | 17:25 |
maysams | dulek: uhuu.. That's great, dulek :) | 17:25 |
*** gcheresh_ has quit IRC | 17:32 | |
*** maysams has quit IRC | 17:39 | |
*** celebdor has quit IRC | 18:02 | |
dulek | http://paste.openstack.org/show/747373/ | 18:07 |
dulek | Okay, 3 out of 8 failed, it's not awful. | 18:07 |
dulek | Worst part is that those 8 tests took 36 minutes… | 18:08 |
*** ccamposr has quit IRC | 18:35 | |
*** openstackgerrit has joined #openstack-kuryr | 18:36 | |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Switch Octavia API calls to openstacksdk https://review.openstack.org/638258 | 18:36 |
openstackgerrit | Michał Dulko proposed openstack/kuryr-kubernetes master: Add option to tag Octavia resources created by us https://review.openstack.org/638483 | 18:36 |
*** kaiokmo has quit IRC | 19:07 | |
*** mrostecki has quit IRC | 19:13 | |
*** kaiokmo has joined #openstack-kuryr | 19:38 | |
*** mrostecki has joined #openstack-kuryr | 20:34 | |
*** alisanhaji has quit IRC | 20:37 | |
*** celebdor has joined #openstack-kuryr | 21:05 | |
*** irclogbot_1 has joined #openstack-kuryr | 21:09 | |
*** irclogbot_1 has quit IRC | 21:28 | |
*** celebdor has quit IRC | 21:50 | |
*** celebdor has joined #openstack-kuryr | 21:57 | |
*** celebdor has quit IRC | 22:02 | |
*** rh-jelabarre has quit IRC | 23:43 | |
*** celebdor has joined #openstack-kuryr | 23:46 |