*** saneax has joined #openstack-chef | 04:56 | |
*** sanjayu_ has joined #openstack-chef | 04:58 | |
*** saneax has quit IRC | 05:01 | |
*** sanjayu__ has joined #openstack-chef | 07:25 | |
*** sanjayu_ has quit IRC | 07:28 | |
*** mago_ has joined #openstack-chef | 07:48 | |
*** sanjayu_ has joined #openstack-chef | 08:25 | |
*** sanjayu__ has quit IRC | 08:27 | |
*** mago_ has quit IRC | 09:50 | |
*** mago_ has joined #openstack-chef | 10:40 | |
*** openstack has joined #openstack-chef | 13:03 | |
*** ChanServ sets mode: +o openstack | 13:03 | |
Seb-Solon | Hey guys, are you familiar with the networking in openstack? I'm hitting a very weird bug and don't really know where to dig except jumping into the openvswitch docs | 13:45 |
scas | i run contrail / tungsten fabric for the most part. i only use ovs when developing for the cookbooks | 13:55 |
Seb-Solon | well, I went with the default of the allinone, maybe a bad idea ? | 13:58 |
scas | there's nothing wrong with allinone | 14:00 |
scas | well, probably has some bugs, but it works* | 14:01 |
scas | the ovs integration is kind of... fair weather working | 14:01 |
scas | if you found a problem in implementation, it would not be a big surprise to me | 14:02 |
Seb-Solon | I'm not that familiar with ovs, but it has been quite a pain so far every time I have a network issue in openstack | 14:03 |
Seb-Solon | I basically end up recreating bridges and so on, but here that doesn't seem to work, so I believe it may be a bug | 14:03 |
Seb-Solon | or some extra parameter to add to the config (mtu or stuff like that) | 14:04 |
scas | i haven't touched mtu settings. it's all as it comes out of the box | 14:04 |
Seb-Solon | yeah, but maybe I have to in my case | 14:05 |
Seb-Solon | I have a small pastebin of my issue: https://pastebin.com/RjT2jvxs (so that you can see why I'm thinking about an MTU issue) | 14:07 |
Seb-Solon | i guess ovs may be my entry point because it's where packets end up once inside "openstack" | 14:08 |
scas | are you running mixed mtu in your network? | 14:17 |
Seb-Solon | no I don't think so | 14:19 |
Seb-Solon | I thought about it just because of the kind of issue: payload size is what triggers it | 14:20 |
scas | hmmm... those messages suggest a wiresharkism to me | 14:23 |
Seb-Solon | i only use wireshark to open the capture made with tcpdump. Maybe wireshark is complaining about nothing. However, from my desktop to the openstack physical interface I can see some tcp retransmissions that don't get passed through | 14:27 |
*** linkmark has joined #openstack-chef | 14:28 | |
scas | that's what i'm thinking. the wireshark messages may be misleading | 14:28 |
Seb-Solon | so maybe something still didn't make it through, or an ack didn't go out | 14:28 |
scas | if you use tcpdump and not wireshark, does anything anomalous pop up? | 14:28 |
Seb-Solon | the pcap are generated with tcpdump | 14:28 |
scas | fair enough | 14:29 |
Seb-Solon | i only use wireshark to open them once i scp to my laptop | 14:29 |
scas | that's what i do. okay. | 14:29 |
Seb-Solon | the weird thing I see is in that segment flagged by wireshark | 14:29 |
scas | it's early for me :) | 14:29 |
Seb-Solon | the TCP segment data has almost all my payload, but one or two bytes are missing | 14:30 |
Seb-Solon | np, it's 10.30 PM here but I took an espresso :P | 14:30 |
Seb-Solon | *AM | 14:30 |
scas | i'm waiting on caffeine to activate and i'm dealing with a fussy fedora installation | 14:31 |
Seb-Solon | my bad, it's more than 2 bytes, but only a couple (5 lines +/- in wireshark's raw view) | 14:33 |
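The tcpdump-capture / wireshark-inspection workflow above can also be sanity-checked offline with a short script. A minimal sketch (assuming a classic little-endian, microsecond-resolution pcap file as written by tcpdump, not pcapng; the function name and MTU threshold are illustrative, not from the log):

```python
import struct

def frames_over_mtu(pcap_bytes, mtu=1500):
    """Count frames in a classic pcap capture and flag any whose
    on-wire length exceeds mtu + 14 bytes of Ethernet header."""
    magic, = struct.unpack_from("<I", pcap_bytes, 0)
    if magic != 0xA1B2C3D4:  # little-endian, microsecond-timestamp pcap
        raise ValueError("not a little-endian classic pcap")
    offset, total, oversized = 24, 0, []  # skip 24-byte global header
    while offset + 16 <= len(pcap_bytes):
        # per-record header: ts_sec, ts_usec, incl_len, orig_len
        _, _, incl_len, orig_len = struct.unpack_from("<IIII", pcap_bytes, offset)
        offset += 16 + incl_len
        total += 1
        if orig_len > mtu + 14:  # Ethernet header isn't counted in the MTU
            oversized.append((total, orig_len))
    return total, oversized
```

`orig_len` (the length on the wire) is compared rather than `incl_len` (the length actually captured), so truncated captures made with a small snaplen still report oversized frames correctly.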
Seb-Solon | haha, I used fedora until around when they decided to switch to dnf | 14:33 |
Seb-Solon | when I have the choice I tend to go for debian stable and turn on some unstable repos if needed | 14:34 |
scas | my issues are from addon hardware, like nvidia gpu and a pcie wifi card | 14:38 |
scas | i didn't see the one issue until i rebooted it to upgrade | 14:38 |
Seb-Solon | oh, at home I decided to take the closed-source driver from nvidia directly. At work I keep the default one as I don't really need my GPU | 14:38 |
scas | yeah. i'm using the proprietary driver, but apparently pcie apmd is a problem with this intel wifi chip | 14:39 |
scas | being in the middle of an OS upgrade, it's not like i can break out to fix the problem in grub :D | 14:40 |
Seb-Solon | hahaah | 14:40 |
scas | frickler: were you able to get some results from the keystone endpoint changes needed? | 15:13 |
Seb-Solon | I can see that the ovs version is taken from the cloud archive repo and points (for the moment) to 2.8.1. Openstack docs recommend using the LTS (2.5.5). I'm hesitating to downgrade as it will require downgrading a couple of other packages, I believe | 15:39 |
*** os-chef-bot has quit IRC | 15:46 | |
Seb-Solon | Btw I drilled my problem down to Content-Length: 1406 OK, Content-Length: 1407 NOK (in the http header) | 15:53 |
*** sanjayu_ has quit IRC | 15:54 | |
Seb-Solon | the frame length is 1472, which usually goes with a 1500 MTU. I have MTU 1500, but some of my bridges on the OS have a 1458 MTU. Maybe I'm onto something here | 16:04 |
Seb-Solon | by any chance, do you know if all your interfaces have the same MTU (bridges and so on)? | 16:05 |
scas | sounds like encapsulation overhead | 16:05 |
scas | encapsulated mtu has to be smaller than the physical mtu | 16:05 |
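The overhead scas describes can be sketched with the usual per-header byte counts. A hedged sketch (the 42-byte GRE and 50-byte VXLAN figures are the common textbook values for OVS tunnels carrying inner Ethernet frames; note that 1500 − 42 = 1458, matching the bridge MTU Seb-Solon observed later):

```python
PHYS_MTU = 1500  # MTU of the physical NIC carrying the tunnel

# Typical per-packet overhead for OVS/neutron overlay tunnels
OVERHEAD = {
    # outer IPv4 (20) + GRE with 4-byte key (8) + inner Ethernet (14)
    "gre": 20 + 8 + 14,        # 42 bytes -> inner MTU 1458
    # outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14)
    "vxlan": 20 + 8 + 8 + 14,  # 50 bytes -> inner MTU 1450
}

def inner_mtu(phys_mtu, tunnel):
    """Largest inner packet that fits after encapsulation."""
    return phys_mtu - OVERHEAD[tunnel]

def max_tcp_payload(mtu, ip_hdr=20, tcp_hdr=20):
    """Largest TCP segment that fits in one packet at this MTU
    (no IP options, no TCP options assumed)."""
    return mtu - ip_hdr - tcp_hdr

for tunnel in OVERHEAD:
    m = inner_mtu(PHYS_MTU, tunnel)
    print(f"{tunnel}: inner MTU {m}, max TCP payload {max_tcp_payload(m)}")
```

This is why a response just one byte over some threshold can start failing: once HTTP headers plus body push the segment past `max_tcp_payload` for the smallest MTU on the path, the packet either gets fragmented or, with DF set and broken path-MTU discovery, silently dropped.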
Seb-Solon | ip a | grep mtu gives me only 1500 on my other openstack setup (old version, old server, but "working"). Mhh | 16:08 |
Seb-Solon | https://www.openstack.org/assets/presentation-media/the-notorious-mtu.pdf => I think this is gonna help, it shows the encapsulation you're talking about + it mentions ovs silently dropping packets :) | 16:22 |
scas | yup | 16:23 |
*** mago_ has quit IRC | 17:16 | |
*** os-chef-bot has joined #openstack-chef | 18:28 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!