*** ministry is now known as __ministry | 00:54 | |
*** mhen_ is now known as mhen | 01:45 | |
tafkamax | Ok I got my VLANs working in kolla-ansible now. But I have another issue: my provider networks are not getting DNS entries :S | 08:01 |
tafkamax | I have an interesting issue when applying IPs to a provider network. The subnet/VLAN is created with size 192.168.40.1/21 - when assigning an IP it does not get connectivity. But when I change the subnet prefix to /23 it works. | 08:24 |
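As a side note on the /21 vs /23 question above, the Python standard library can show exactly which address ranges the two prefixes cover. The addresses are the ones from the chat; everything else is illustration.

```python
# Quick stdlib sanity check of what each prefix from the chat actually covers.
import ipaddress

wide = ipaddress.ip_interface("192.168.40.1/21").network    # 192.168.40.0/21
narrow = ipaddress.ip_interface("192.168.40.1/23").network  # 192.168.40.0/23

print(wide, wide.num_addresses)      # 192.168.40.0/21 2048
print(narrow, narrow.num_addresses)  # 192.168.40.0/23 512

# An address such as 192.168.45.5 sits inside the /21 but outside the /23,
# so a host configured with the /23 would route to it via its gateway instead
# of ARPing for it directly on the segment:
probe = ipaddress.ip_address("192.168.45.5")
print(probe in wide)    # True
print(probe in narrow)  # False
```

If the physical network (or an upstream filter) only passes part of the /21, hosts in the wider subnet will try to reach peers directly at L2 and fail, which would match the observed behaviour.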
MikeCTZA | I did an openstack upgrade recently which had some issues (we had a network disconnect), but I ran kolla-ansible afterwards and all looked OK. However, we are having some issues with keystone and a few admin-related issues in the dashboard and cli. the fun times of troubleshooting ... | 12:15 |
*** tkajinam is now known as Guest2415 | 12:52 | |
*** tkajinam is now known as Guest2416 | 13:00 | |
*** ministry is now known as __ministry | 13:55 | |
tafkamax | Hi, what are the recommended automated ways to talk to openstack via the API? E.g. an ansible role for creating VMs, or Terraform? | 14:03 |
crab | I've used Python for creating / destroying servers | 15:00 |
crab | enumerating the hypervisors to see what's going on on them, if you mean stuff like that. | 15:00 |
crab | ^ tafkamax | 15:01 |
tafkamax | ok, well ansible is python :) | 15:06 |
tafkamax | I am just thinking of a VM lifecycle. Currently I am still using opennebula and we have created an ansible role for creating VMs via their ansible module. Of course lifecycle would be better via terraform, but we are not that far yet. | 15:08 |
crab | tafkamax: well yeah it's *written* in Python, but i meant more like this: https://docs.openstack.org/mitaka/user-guide/sdk.html | 15:26 |
tafkamax | Ok I will take a look | 15:27 |
crab | i know Mitaka's ancient now, but for some reason they have changed all the urls so you can't just put antelope in there and expect the docs to work. :( | 15:27 |
crab | i used that sdk to write a couple of scripts which we run from cron at an offset of about 5 minutes. the first one checks all the hypervisors to see what resources they are using, and messes with a project quota, | 15:29 |
crab | and the second one cleans up / starts new servers as appropriate. | 15:29 |
crab | that way we can use spare capacity on our cloud to extend an htc system with virtual worker nodes. | 15:30 |
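The Mitaka-era page linked above predates the current client library; the modern equivalent of what crab describes (creating servers, enumerating hypervisors from a cron script) is the openstacksdk library. A minimal sketch, assuming a `clouds.yaml` entry named "mycloud"; the image and flavor names are hypothetical placeholders:

```python
# Sketch using the openstacksdk library (`pip install openstacksdk`).
# "mycloud", "ubuntu-22.04" and "m1.small" are assumptions for illustration;
# substitute names from your own clouds.yaml and deployment.
try:
    import openstack
except ImportError:
    openstack = None  # SDK not installed; the functions below still show the API shape

def launch_server(conn, name):
    """Create a server and block until it reaches ACTIVE."""
    image = conn.compute.find_image("ubuntu-22.04")   # hypothetical image name
    flavor = conn.compute.find_flavor("m1.small")     # hypothetical flavor name
    server = conn.compute.create_server(
        name=name, image_id=image.id, flavor_id=flavor.id)
    return conn.compute.wait_for_server(server)

def list_hypervisors(conn):
    """Enumerate hypervisors, roughly the cron-driven capacity check crab describes."""
    return [(h.name, h.status) for h in conn.compute.hypervisors(details=True)]

# Usage (requires a reachable, configured cloud):
#   conn = openstack.connect(cloud="mycloud")  # reads clouds.yaml / OS_* env vars
#   print(list_hypervisors(conn))
#   launch_server(conn, "worker-01")
```

The same library underlies the Ansible `openstack.cloud` collection, so an Ansible role and a standalone script end up talking to the API through the same code.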
*** ministry is now known as __ministry | 20:06 | |
jsmdk | Hi | 20:17 |
jsmdk | Is it correct to debug openvswitch troubles by tcpdumping the qvo interface corresponding to the tap interface? I see no traffic there on openvswitch 2.17.8 | 20:20 |
jsmdk | so it looks to me like the tap traffic gets lost inside openvswitch | 20:20 |
DHE | tcpdump on an interface always shows real traffic moving through the interface | 21:23 |
DHE | even if a packet was discarded/filtered, if it arrived on an interface, tcpdump will show it | 21:23 |
DHE | but not in the other direction. if a packet was dropped before transmission, it was never really sent, ergo tcpdump doesn't see it | 21:24 |
jsmdk | ok, so if I do not see traffic on the qvo interface or in iptables -L -v -n, could it somehow be lost in openvswitch itself | 21:24 |
jsmdk | *become lost* | 21:24 |
DHE | openvswitch is capable of switch-level ACLs and rewrites. openstack abuses this extensively. normally it works, but it's possible something is wrong and packets are being dropped or misrouted | 21:25 |
jsmdk | ok. I have two, as far as I can see, identically configured compute nodes except for the openvswitch version | 21:26 |
jsmdk | i checked the ovs flows and the output of ovs-vsctl show; all the same | 21:26 |
DHE | is the traffic being transported by tunnel, like vxlan? | 21:27 |
jsmdk | no provider net vlan | 21:28 |
jsmdk | the tagging works on the network without openvswitch; when configured in openstack it does not work | 21:29 |
DHE | and the dedicated NIC for that provider net has been attached to the br-provider bridge? | 21:29 |
jsmdk | yup, it is called br-provider, the interface is attached, and the bridge_mappings is also correct | 21:29 |
jsmdk | it is configured with juju | 21:30 |
jsmdk | the nic has subinterfaces on the os level so it is not fully dedicated | 21:31 |
jsmdk | os = operating system | 21:31 |
DHE | you're using macvlan? | 21:31 |
jsmdk | I do not know. I tag with provider:segment on the neutron network | 21:32 |
DHE | but on the host you have a "nic" like eth0.123 where 123 is a vlan number? | 21:32 |
jsmdk | yes I have os and os.2002 and br-provider which is flat | 21:33 |
jsmdk | so the interface in the openvswitch is not a tagged one | 21:33 |
jsmdk | if that is what you are asking, thanks btw, I do not know the term macvlan.. | 21:34 |
DHE | it's the linux driver that makes these sub-interfaces | 21:34 |
jsmdk | ok | 21:35 |
DHE | if there's one for a vlan, it could break openstack since there's now 2 different drivers (macvlan and ovs) both trying to capture and process vlan tagged packets | 21:35 |
jsmdk | I see | 21:35 |
DHE | which is one reason I asked if the NIC you're using for vlans is dedicated to openvswitch/openstack | 21:35 |
DHE | though as I understand it, it's only really an issue when you want to use the same tag | 21:36 |
jsmdk | yeah, the working compute node does also macvlan | 21:36 |
jsmdk | does openvswitch rely on kernel modules, e.g. ip_conntrack? | 21:36 |
DHE | I'm not sure. I don't think so? because it has its own ACL capabilities including connection tracking | 21:38 |
jsmdk | ok, back in the day it had a kernel module i believe.. | 21:38 |
jsmdk | https://github.com/osrg/openvswitch/blob/master/INSTALL#L33C11-L35C39 | 21:38 |
jsmdk | prob. not relevant any longer | 21:38 |
jsmdk | Is there a command to inject some traffic into the switch for testing | 21:39 |
jsmdk | without a full guest instance | 21:39 |
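On the question of injecting test traffic without a full guest: OVS itself can simulate a packet through its flow tables with `ovs-appctl ofproto/trace`, printing each table lookup and the final action. A small Python wrapper as a sketch; the bridge name, VLAN and MAC below are assumptions for illustration, matching the values mentioned in the chat.

```python
# Sketch: trace a synthetic ARP broadcast through an OVS bridge's flow tables
# with `ovs-appctl ofproto/trace`, no guest instance required.
# Bridge "br-provider", VLAN 2002 and the MAC are assumptions for illustration.
import shutil
import subprocess

def trace_arp_broadcast(bridge="br-provider", vlan=2002,
                        src_mac="00:11:22:33:44:55"):
    """Return the ofproto/trace output for a synthetic ARP broadcast on
    `bridge`, or None when the OVS tools are not installed on this host."""
    if shutil.which("ovs-appctl") is None:
        return None
    flow = (f"in_port=LOCAL,dl_vlan={vlan},dl_src={src_mac},"
            "dl_dst=ff:ff:ff:ff:ff:ff,dl_type=0x0806")
    result = subprocess.run(["ovs-appctl", "ofproto/trace", bridge, flow],
                            capture_output=True, text=True)
    return result.stdout

out = trace_arp_broadcast()
print(out if out is not None else "ovs-appctl not found; nothing to trace")
```

The trace output shows which flow rules the packet matched and whether it was dropped or forwarded, which is exactly the "is it lost inside openvswitch" question being debugged here.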
DHE | is this running a neutron service, like dhcp or routers? | 21:39 |
DHE | if `ip netns ls` lists some stuff that looks openstack related, you may be able to get a shell inside those service apps/routers | 21:39 |
jsmdk | yeah it did a ping from the namespace of the network's dhcp netns | 21:40 |
jsmdk | I did * | 21:40 |
jsmdk | and it did not get an ARP response | 21:41 |
jsmdk | which is indicating to me a L2 issue | 21:41 |
jsmdk | which is not present when using only macvlan on the same tag | 21:43 |
DHE | "Same tag" could be the problem... can you delete the macvlan interface? are you okay to do that? | 21:43 |
jsmdk | I did not have them at the same time. I tagged it with macvlan to test the physical switching infrastructure which was fine | 21:45 |
jsmdk | I see they took out the kernel module from 2.17.8 https://mail.openvswitch.org/pipermail/ovs-dev/2022-July/395759.html | 21:46 |
jsmdk | the working compute node has 2.17.7 and the not working has 2.17.8 | 21:47 |
jsmdk | oh that was since 2.17.x | 21:47 |
jsmdk | 2.17.x supports kernels 3.16 to 5.8; I wonder if my kernel is too new | 21:49 |
DHE | I think this was a case of "linux kernel ships with a module, we use that, so stop shipping a kernel driver with ovs now" | 21:50 |
jsmdk | ok | 21:50 |
DHE | I'm out of specific ideas at this point... this is one of those things where I wish I had shell access to see everything and poke at it, but I don't think I can help much past this point | 21:51 |
DHE | but if you do want a NIC that's both locally usable and usable for openstack, I would suggest using it with openvswitch and not macvlan. you can make a new bridge device (I call mine br-main) with "internal" interfaces instead of macvlan ports, and then a patch port to the br-provider bridges instead of giving it a real NIC. | 21:52 |
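DHE's suggested layout above can be written down as an ovs-vsctl config fragment. A hedged sketch only: all names (br-main, eth0, mgmt0, VLAN 2002) and the address are assumptions for illustration, and the commands need root on the hypervisor.

```shell
# Sketch of the suggested layout: a "normal" bridge (br-main) holds the
# physical NIC and internal ports; br-provider gets a patch port instead
# of the NIC. All names here are assumptions for illustration.
ovs-vsctl add-br br-main
ovs-vsctl add-port br-main eth0                       # the real NIC lives here
ovs-vsctl add-port br-main mgmt0 tag=2002 \
    -- set Interface mgmt0 type=internal              # replaces the VLAN subinterface
ovs-vsctl add-port br-main patch-provider \
    -- set Interface patch-provider type=patch options:peer=patch-main
ovs-vsctl add-port br-provider patch-main \
    -- set Interface patch-main type=patch options:peer=patch-provider
# Host addressing moves from the macvlan/VLAN subinterface onto mgmt0
# (192.0.2.10/24 is a documentation-range placeholder):
ip addr add 192.0.2.10/24 dev mgmt0
ip link set mgmt0 up
```

An internal port with `tag=2002` gives the host an untagged view of that VLAN, so the 8021q/macvlan subinterface that conflicted with OVS is no longer needed.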
jsmdk | Yeah, ok thanks! | 21:53 |
DHE | the separate bridge is because openstack will abuse ACLs on its own bridges, but you want a normal switch-behaviour bridge for normal traffic | 21:53 |
jsmdk | my kernel might be too new, I will check when i can | 21:53 |
jsmdk | guess that only applies if you use the ovs kernel module.. | 21:56 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!