-opendevstatus- NOTICE: zuul isn't executing check jobs at the moment, investigation is ongoing, please be patient | 07:17 | |
gokhani | hi folks, I need your recommendations about networking. I have only one ip address block to connect to the internet, so I will use it for both the host and virtual machines. How can I configure my netplan file for a neutron provider network? When I map it to br-ext, I lose my connection. Can I run it with veth pairs or another way? my netplan file is https://paste.openstack.org/show/bHOqFt | 09:13 |
gokhani | https://paste.openstack.org/show/bHOqFtXDTbSv5hUpVngi/ | 09:14 |
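For orientation, a minimal netplan along the lines gokhani describes. Everything here (NIC name, addresses, VLAN id) is an assumption pieced together from later messages in the log, since the paste itself is not reproduced:

```sh
# sketch: a VLAN off a single NIC, enslaved to a br-ext bridge that holds
# the host's public IP and default route; names/addresses are assumptions
cat > /etc/netplan/01-br-ext.yaml <<'EOF'
network:
  version: 2
  ethernets:
    eno1: {}
  vlans:
    vlan1:
      id: 1
      link: eno1
  bridges:
    br-ext:
      interfaces: [vlan1]
      addresses: [10.160.143.10/24]
      gateway4: 10.160.143.1
EOF
netplan apply
```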
noonedeadpunk | mornings | 09:28 |
noonedeadpunk | gokhani: do you own a network equipment or renting one? | 09:28 |
noonedeadpunk | as network block could be splited into several smaller ones and served as different vlans | 09:29 |
gokhani | noonedeadpunk: mornings :) We have only a single uplink to the ISP, and a single public ip address reserved for us. And we are trying to manage network traffic through a supermicro switch, which is not capable of inter-vlan routing. | 09:33 |
gokhani | noonedeadpunk: do you think this architecture can work ? https://imgur.com/a/xN49lnL | 09:42 |
noonedeadpunk | tbh I'm not sure it will.... so if you want to have vms that connect directly to the public net, you need an interface that neutron can manage. So it can't be the same interface as you use to access compute nodes, for example | 09:50 |
noonedeadpunk | if you say that vms will have only a private network and reach the internet through net nodes and floating ips - then you will likely have the same issue on the net node | 09:50 |
anskiy | i think it should be possible with OVS. That is, I have a current non-openstack configuration like this: public IP on the compute from the same subnet as the VMs | 10:00 |
noonedeadpunk | for some reason I thought that ovs is kind of the same, except the interface is managed by ovs? | 10:02 |
anskiy | if I understand the configuration correctly, it actually manages br-int (not the bridge from openstack_user_config), which connects to it via patch | 10:04 |
gokhani | anskiy: I am using lxb | 10:05 |
gokhani | noonedeadpunk: maybe I need to break the bond, and use one interface for the host and another for neutron with the same ip address block | 10:09 |
anskiy | gokhani: sorry, I don't have experience with that. But I've just checked what I've been talking about: I have a VLAN on the compute node (which is the default interface for it, i.e. the one holding the default route) and an openstack network made with the same VLAN. Now, when I attach a VM to this network, it has the same connectivity as the compute node. | 10:15 |
gokhani | anskiy: I can explain my config like this: I have a bridge, br-ext, connected to vlan1. br-ext has the default route (default via 10.160.143.1 dev br-ext proto static). There is also an ip for the haproxy vip on br-ext. The problem is that when I map br-ext to neutron, neutron creates a brqxxx bridge on the infra nodes and the ips on br-ext are moved onto the brqxxx bridge, so I lose my connection to the hosts. | 10:28 |
gokhani | it seems that the interface used for the host connection and the interface neutron manages cannot be the same | 10:31 |
gokhani | I am also using infra nodes as network nodes. | 10:32 |
noonedeadpunk | anskiy: yes, with VLAN it's easy peasy :D | 10:59 |
jrosser | gokhani: maybe also look at how the openstack-ansible AIO creates a new interface (eth12) off the host interface to give to neutron | 11:15 |
jrosser | something like this https://github.com/openstack/openstack-ansible/blob/master/etc/network/interfaces.d/aio_interfaces.cfg#L53-L56 | 11:16 |
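From memory, the AIO stanza jrosser points at follows the pattern below: a veth pair created in the bridge's pre-up hooks, with one end enslaved to the bridge and the free end (eth12) left for neutron. This is a sketch adapted to gokhani's br-ext, not the verbatim file:

```sh
# Debian-style ifupdown snippet, written here as a sketch for br-ext;
# the AIO does the equivalent for its own bridge
cat >> /etc/network/interfaces.d/br-ext.cfg <<'EOF'
auto br-ext
iface br-ext inet manual
    # create a veth pair and plug one end into the bridge;
    # the free end (eth12) becomes neutron's provider interface
    pre-up ip link add br-ext-veth type veth peer name eth12 || true
    pre-up ip link set br-ext-veth up
    pre-up ip link set eth12 up
    post-up brctl addif br-ext br-ext-veth
EOF
```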
anskiy | gokhani: so it looks like you need to create a vlan interface on brqxxx with all the IP configuration (which seems dirty), or maybe tell neutron to use br-ext as a pre-existing bridge (if that's possible) | 11:17 |
noonedeadpunk | and create smth like mac-vlan? | 11:18 |
noonedeadpunk | and put it to fake bridge? | 11:18 |
anskiy | noonedeadpunk: it should be the same with untagged | 11:18 |
noonedeadpunk | yep, that can work actually | 11:18 |
anskiy | noonedeadpunk: mac-vlan to bridge two bridges? :) | 11:19 |
noonedeadpunk | ok, veth pair then) | 11:19 |
anskiy | so it's just a replica of what OVN/OVS does with patch port | 11:31 |
gokhani | jrosser: thanks, I think it can help, I am trying it | 11:31 |
masterpe[m] | When I run ceph-install.yml I somehow get the following error:... (full message at https://matrix.org/_matrix/media/r0/download/matrix.org/QArJNrpqZlLlZbJuVjZlISvm) | 11:35 |
noonedeadpunk | tbh I wouldn't trust putting the only interface you can reach the server through into ovs... | 11:35 |
masterpe[m] | Any idea what the problem is? | 11:35 |
noonedeadpunk | masterpe[m]: do you run it with some tags? | 11:35 |
noonedeadpunk | and is it xena? | 11:35 |
anskiy | noonedeadpunk: why? | 11:36 |
masterpe[m] | noonedeadpunk: No I don't use any tags, we are using OSA 21.2.6 | 11:36 |
masterpe[m] | This is Ussuri. | 11:37 |
noonedeadpunk | anskiy: it's like putting all eggs in the same basket. ovs depends on glibc heavily and was quite buggy because of that on bionic, for example. | 11:39 |
noonedeadpunk | service restart causes net downtime as well | 11:39 |
noonedeadpunk | and when you don't have another way to reach the server I'd call it a poor design choice... I might be opinionated about it though | 11:39 |
anskiy | noonedeadpunk: it's super stable on CentOS 7, we've been using this exact configuration for quite a long time. IPMI? :) | 11:41 |
noonedeadpunk | there's a switch that doesn't even support vlans, you think there's IPMI ? :p | 11:42 |
noonedeadpunk | And is CentOS 7 really supported? It's under EM I believe? | 11:42 |
anskiy | noonedeadpunk: the absence of inter-vlan routing just states that this particular switch is, indeed, a switch, not a router :) | 11:45 |
anskiy | noonedeadpunk: yeah, part of the reason I'm deploying openstack to focal now | 11:46 |
noonedeadpunk | mhm, like DES-105? | 11:46 |
noonedeadpunk | :D | 11:47 |
noonedeadpunk | btw you could also use Rocky if you want to - it's ready for master at least | 11:47 |
noonedeadpunk | masterpe[m]: well, it's interesting. then you should already have the patch that includes a potential fix for that | 11:48 |
masterpe[m] | do you know what patch? | 11:49 |
noonedeadpunk | masterpe[m]: so basically that should have worked https://review.opendev.org/c/openstack/openstack-ansible/+/737804/1/playbooks/ceph-install.yml | 11:49 |
noonedeadpunk | but it's still for running with a tag | 11:50 |
noonedeadpunk | masterpe[m]: unless you have `osa_gather_facts: false` somewhere | 11:51 |
*** dviroel|pto is now known as dviroel | 11:52 | |
jrosser | masterpe[m]: can you maybe paste the whole output of the playbook to the point it fails? | 11:53 |
anskiy | noonedeadpunk: that's a "dumb switch" :) There is a spot, just between this one and some router. Inability to create random L3 interfaces doesn't mean it lacks a management interface. | 11:53 |
noonedeadpunk | you don't need to "route" an l3 interface to understand tagged traffic and re-tag it accordingly | 11:55 |
anskiy | noonedeadpunk: nah, there is no requirement to stay on CentOS 7; besides, operating-system-specific configuration is such a small part of moving an existing system to openstack... | 11:55 |
noonedeadpunk | but well | 11:55 |
noonedeadpunk | yeah, switching between platforms is always interesting:) | 11:56 |
masterpe[m] | noonedeadpunk jrosser https://gist.github.com/mpiscaer/072b30c935ac3d3c1e2bb3eb6ee70dcd | 11:56 |
noonedeadpunk | masterpe[m]: and you limit to ceph-osd hosts? | 11:57 |
noonedeadpunk | or well | 11:57 |
masterpe[m] | I limit on storko1 | 11:58 |
noonedeadpunk | ok, so then you won't get facts gathered for the mon hosts, which seem to be required by ceph-ansible... | 11:59 |
anskiy | noonedeadpunk: ah, changing tags? It could be achieved by connecting two ports on the same switch :) | 11:59 |
noonedeadpunk | anskiy: if it's capable of creating port groups? | 12:00 |
jrosser | masterpe[m]: i think you have two choices, to expand the --limit to include the hosts that facts are needed from | 12:03 |
jrosser | though i can see why you might not want to do that | 12:03 |
noonedeadpunk | but tbh, even a really poor dlink I had to manage at university in 2006 was capable of handling vlans... In a weird way, where you couldn't specify a vlan id when creating one as it only assigned IDs auto-incrementally, but still | 12:03 |
jrosser | masterpe[m]: so the alternative is to use an ansible ad-hoc command something like "ansible <ceph-mon-group-name> -m setup" to gather the facts before you then run the ceph-install playbook | 12:04 |
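Concretely, something like this from the deploy host; the `ceph-mon` group name is an assumption, adjust it to your inventory:

```sh
# run from the OSA playbooks directory so the OSA inventory is used
cd /opt/openstack-ansible/playbooks

# gather (and cache) facts for the mon hosts that ceph-install.yml references
ansible ceph-mon -m setup

# then re-run the install with the original narrow limit
openstack-ansible ceph-install.yml --limit storko1
```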
anskiy | noonedeadpunk: nope, you just egress one VLAN removing the tag, and accept it with another: voila, it changed its ID :P | 12:06 |
masterpe[m] | jrosser: using the ansible with the -m setup worked. | 12:08 |
noonedeadpunk | anskiy: ok, yes, this is another way around | 12:09 |
noonedeadpunk | just never had to work with switches that are managed but don't support vlans :) | 12:10 |
noonedeadpunk | they're always either too stupid or smart enough to have that support | 12:10 |
jrosser | some very early vxlan stuff needed ports looping on the front of switches | 12:22 |
gokhani | jrosser: it didn't work. I created eth12 from br-ext and neutron created eth12.1, but it cannot connect to the internet. | 12:32 |
jrosser | gokhani: i would be trying simple debugging like pinging the gateway using these various interfaces as the source | 12:33 |
jrosser | and then the same from your neutron L3 agent namespace | 12:34 |
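As a sketch of that debugging, using the gateway gokhani quoted earlier; the router namespace name is a placeholder:

```sh
# from the host: ping the gateway, forcing each candidate source interface
ping -c 3 -I br-ext 10.160.143.1
ping -c 3 -I eth12  10.160.143.1

# then the same from inside the neutron L3 agent's router namespace
ip netns list | grep qrouter
ip netns exec qrouter-<router-uuid> ping -c 3 10.160.143.1
```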
anskiy | noonedeadpunk: I've only seen VLAN recolouring a couple of times, most of them were like: this other ISP sends VLAN with ID which doesn't comply with our guides, so we change it. | 12:36 |
gokhani | jrosser: firstly, from the vm I cannot ping the gateway, nor the tap interface or the bridges | 12:37 |
gokhani | jrosser: I created vm provider network | 12:37 |
jrosser | noonedeadpunk: uh-oh https://zuul.opendev.org/t/openstack/build/6af29c3f0571417cb810bf4307690cd0 | 12:41 |
jrosser | i think we have new setuptools trouble there | 12:41 |
*** dviroel is now known as dviroel|brb | 12:43 | |
gokhani | jrosser: after creating the veth pair, I think on the neutron side I need to set the network to flat, not vlan? If I am not right, please correct me :) | 13:22 |
jrosser | that entirely depends on what you have on the actual upstream network | 13:22 |
jrosser | my preference is to always do these things as vlan and then fix up whatever you need in the switches | 13:23 |
jrosser | flat is only ever one network and if you ever need to change it / add another it's big work rather than simple | 13:23 |
*** dviroel|brb is now known as dviroel | 13:27 | |
gokhani | jrosser: in this config, I depend on only one network for both hosts and vms. I also prefer vlan. I have br-ext tagged with vlan x. I created a veth pair from br-ext (ip link add br-ext-veth type veth peer name eth12 || true && ip link set br-ext-veth up && ip link set eth12 up && brctl addif br-ext br-ext-veth). I used eth12 for neutron. I mean, on the neutron side do I need to define the provider | 13:34 |
gokhani | network as vlan or flat? | 13:34 |
gokhani | jrosser: yes it is flat and it worked :) thank you very much :) | 13:38 |
gokhani | noonedeadpunk: anskiy thanks for your help, I solved my problem with vethpairs :) | 13:42 |
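For the record, the working setup broken out into steps, with the matching provider-network definition; the physnet name is an assumption and must match the linuxbridge agent's mapping for eth12:

```sh
# 1. veth pair: one end enslaved to br-ext, the free end (eth12) given to neutron
ip link add br-ext-veth type veth peer name eth12 || true
ip link set br-ext-veth up
ip link set eth12 up
brctl addif br-ext br-ext-veth

# 2. br-ext already sits on the VLAN, so eth12 carries untagged frames and
#    the provider network is flat; "physnet1" is an assumed physnet name
openstack network create --external \
    --provider-network-type flat \
    --provider-physical-network physnet1 \
    public
```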
noonedeadpunk | great! | 13:45 |
jrosser | gokhani: are your VM contents trusted? | 13:49 |
gokhani | jrosser: yes indeed, after application deployments to this environment it will be offline. | 13:52 |
NeilHanlon | some problem with tox or a dependency thereof causing trouble merging https://review.opendev.org/c/openstack/openstack-ansible-tests/+/835219 (see: https://github.com/pypa/setuptools/issues/3197) -- any thoughts noonedeadpunk? | 14:41 |
NeilHanlon | appears some other openstack projects (tripleO, others) have just done this: https://github.com/openstack/tripleo-ansible/commit/dab104315f5352800ec56f163f6ae12a4b8c9685 | 14:44 |
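The workaround circulating at the time was to opt out of setuptools 61's auto-discovery by declaring an explicitly empty module list; a sketch of the setup.cfg form (whether the tripleo commit used exactly this form is not confirmed here):

```sh
# append an explicitly-empty py_modules so setuptools 61 skips auto-discovery;
# appending only works if setup.cfg has no [options] section yet, otherwise
# add the key to the existing section by hand
cat >> setup.cfg <<'EOF'

[options]
py_modules =
EOF
```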
opendevreview | Neil Hanlon proposed openstack/openstack-ansible-tests master: Disable setuptools auto discovery https://review.opendev.org/c/openstack/openstack-ansible-tests/+/835468 | 14:47 |
opendevreview | Neil Hanlon proposed openstack/openstack-ansible-tests master: Update tests for Rocky Linux https://review.opendev.org/c/openstack/openstack-ansible-tests/+/835219 | 14:48 |
jrosser | NeilHanlon: yes something is broken with new setuptools | 14:53 |
jrosser | NeilHanlon: looks like your patch has helped there | 15:39 |
jrosser | though we have $lots of repos that this would need patching into | 15:40 |
jrosser | like ~50 or so | 15:40 |
jrosser | i see now that there is a setuptools 61.2.0 which claims to fix https://github.com/pypa/setuptools/issues/3197 | 15:40 |
NeilHanlon | yeah changing setup.py doesn't seem overly scalable | 15:41 |
noonedeadpunk | it's easier to update the requirements repo to bump the setuptools version I guess? | 15:53 |
noonedeadpunk | because they're in u-c | 15:53 |
jrosser | unfortunately not | 15:53 |
jrosser | this is the openstack-tox-docs job, which takes the setuptools bundled with virtualenv | 15:54 |
jrosser | and virtualenv released on the 25th contains setuptools 61.0.0 | 15:54 |
jrosser | i wonder if we can put something in the `deps` section of tox.ini | 15:56 |
jrosser | i'll poke at it in a VM | 15:57 |
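The shape of that idea, sketched against a hypothetical docs env; as jrosser finds shortly after, the u-c pin may veto the upgrade anyway:

```sh
# force a newer setuptools into the tox env via deps; the env name and the
# requirements file path are assumptions, and appending only works if the
# section doesn't already exist in tox.ini
cat >> tox.ini <<'EOF'

[testenv:docs]
deps =
    setuptools>=61.2.0
    -r{toxinidir}/doc/requirements.txt
EOF
```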
*** dviroel is now known as dviroel|lunch | 16:13 | |
jrosser | so this is ugly - the openstack-tox jobs use the bundled setuptools | 16:39 |
jrosser | then everything we do afterwards in the tox environment is subject to u-c | 16:40 |
jrosser | and the new pip resolver then won't let you move setuptools later than 61.0.0 | 16:41 |
jrosser | as that's contrary to u-c | 16:41 |
noonedeadpunk | damn it | 16:50 |
noonedeadpunk | so bumping u-c to 61.2.0 should still work?:) | 16:50 |
opendevreview | Merged openstack/openstack-ansible master: Add mysql directory for logging https://review.opendev.org/c/openstack/openstack-ansible/+/835091 | 16:54 |
*** dviroel|lunch is now known as dviroel | 16:55 | |
jrosser | noonedeadpunk: lol https://review.opendev.org/c/openstack/requirements/+/835329 | 17:11 |
jrosser | it somehow gets removed totally there | 17:12 |
jrosser | one option is to make the docs jobs n-v and just wait for a virtualenv release | 17:13 |
lowercase | should i open an issue about the ceilometer and gnocchi playbooks not having a user-defined pip package section? the xena version of the playbook is failing because the openstacksdk pip package isn't installed, and i cannot add it via the playbook | 17:32 |
jrosser | lowercase: if you think theres a bug please do - we should see something like that in our CI usually though? | 17:33 |
lowercase | TASK [os_gnocchi : Add service project] | 17:34 |
lowercase | fatal: [ceilometer2_gnocchi_container-e7bc0399 -> localhost]: FAILED! => {"attempts": 5, "changed": false, "msg": "openstacksdk is required for this module"} | 17:34 |
lowercase | according to https://docs.openstack.org/openstack-ansible-os_ceilometer/xena/ , openstacksdk is indeed not specified as a dependency | 17:35 |
jrosser | that delegates to localhost? | 17:35 |
jrosser | that looks incorrect | 17:35 |
lowercase | ohhh | 17:36 |
jrosser | uh-oh https://github.com/openstack/openstack-ansible-os_gnocchi/blob/stable/xena/tasks/service_setup.yml#L30 | 17:36 |
lowercase | yeah, my service_setup_host is a bastion/jumpbox server. i'll install openstacksdk there | 17:37 |
lowercase | i just upgraded the os on that server; makes sense that the pip package list changed with the python version upgrade | 17:38 |
jrosser | no wait, that's ok https://github.com/openstack/openstack-ansible-os_gnocchi/blob/stable/xena/tasks/main.yml#L104 | 17:38 |
jrosser | lowercase: no, it's not right to use your bastion like that | 17:39 |
lowercase | why is that? | 17:39 |
jrosser | you want something like this | 17:39 |
jrosser | openstack_service_setup_host: "{{ groups['utility_all'][0] }}" | 17:39 |
jrosser | openstack_service_setup_host_python_interpreter: "/openstack/venvs/utility-{{ openstack_release }}/bin/python" | 17:39 |
jrosser | because we go to a lot of trouble already to create the correct environment for all this on the utility containers | 17:40 |
jrosser | the service setup host needs the galera client setup, the openstack python modules and so on | 17:40 |
jrosser | much much simpler to use the existing utility container rather than replicate all that | 17:41 |
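In OSA terms that boils down to two overrides on the deploy host; a sketch using the standard OSA user_variables location and the exact variables jrosser quoted:

```sh
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
# point service setup at the first utility container and its venv python
openstack_service_setup_host: "{{ groups['utility_all'][0] }}"
openstack_service_setup_host_python_interpreter: "/openstack/venvs/utility-{{ openstack_release }}/bin/python"
EOF
```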
lowercase | i'm investigating, one moment | 17:42 |
jrosser | tbh if you have got as far as gnocchi/ceilometer already then this must have worked for the other services already | 17:43 |
lowercase | oh yes, we have this config working on all of our openstack environments this way. | 17:43 |
lowercase | and i have no utility containers in this environment, let me look at the others. | 17:43 |
lowercase | my other dev environment has utility containers but they are not functioning. one of my prod environments does not have utility either. | 17:45 |
lowercase | okay, so if that's the right way to do things, I can test bringing back the utility containers. | 17:46 |
jrosser | that would be most aligned with how things are designed | 17:48 |
jrosser | they have a MySQL admin client | 17:48 |
jrosser | an openrc file for the cloud admin user | 17:48 |
jrosser | and the openstack cli set up in a venv and symlinked to /usr/local/bin | 17:49 |
jrosser | that’s a fair amount of admin creds on them, which isn’t ideal, but it’s no worse than the config files on all the other hosts | 17:50 |
lowercase | that's not a high bar to accomplish on a bastion host. i pinged my coworker who was in charge of these environments before me and asked about the reason we do not have utility containers; we may choose to pursue this. | 18:01 |
jrosser | it would probably be possible to fiddle with the inventory to make the utility playbook run against the bastion | 18:07 |
jrosser | we support bare metal deployment so that might be hackable | 18:08 |
jrosser | it could be worth grepping around your user variables and host/group_vars on the deploy host, as it’s possible to define this service setup host on a per-service basis, or globally with the vars I mentioned before | 18:10 |
lowercase | My coworker stated the reason we don't use the utility containers is a requirement that our mysql and rabbitmq containers live on different metal nodes than the infra-designated metal nodes. Using the utility containers seemed to require all containers to be hosted on the infra nodes. | 18:33 |
*** dviroel is now known as dviroel|out | 20:30 |