moha7 | If it's not required to set br-vxlan as a bridge, then how should its interface be defined directly in the `openstack_user_config.yml`? The same as the *br-vlan* section here: https://paste.opendev.org/show/bb714u2x5OD377JJtp14/ ? | 06:15 |
---|---|---|
moha7 | In the AIO, you use /22 IP ranges (1022 usable IPs per subnet)! Why such big ranges for br-mgmt, br-vxlan, etc.? | 08:14 |
jrosser | moha7: why not?! :) | 08:25 |
jrosser | i have a lab deployment which started with /24 sized networks and it was not enough | 08:26 |
jrosser | also if you use a L2 networking approach which almost everyone does, expanding those ranges in the future is really very difficult | 08:27 |
jrosser | if you use some L3 it's easier | 08:28 |
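For reference, this is how the stock AIO expresses those /22 ranges in `openstack_user_config.yml` (the addresses below are the AIO defaults; adjust them to your own addressing plan):

```yaml
# cidr_networks in openstack_user_config.yml -- stock AIO /22 ranges
cidr_networks:
  container: 172.29.236.0/22   # br-mgmt: hosts and containers
  tunnel: 172.29.240.0/22      # br-vxlan: VXLAN tunnel endpoints
  storage: 172.29.244.0/22     # br-storage: storage traffic
```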
moha7 | Each controller needs 13 mgmt IPs for its containers and one for itself = 14; excluding the LB IP, and keeping 15 for computes, that would be 238 free IPs in a /24, which means: "238 IPs = 17 controllers" | 08:41 |
moha7 | jrosser: If the calculation is correct, you really had >17 controllers in your lab? | 08:42 |
moha7 | Maybe I'm missing something there! | 08:44 |
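Spelling out that arithmetic (assuming 254 usable host addresses in a /24, and 14 IPs per controller = 13 containers + 1 host):

```latex
254 - 1_{\text{LB}} - 15_{\text{computes}} = 238, \qquad 238 \div 14 = 17 \ \text{controllers}
```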
noonedeadpunk | mornings | 08:47 |
noonedeadpunk | moha7: you will likely want to separate net nodes as well. and well, 200 computes is not _that_ big a deployment :) | 08:48 |
noonedeadpunk | But indeed it depends on your needs and how much there's intent to scale | 08:48 |
noonedeadpunk | but IMO it's better to have spare than to struggle with extending the network | 08:49 |
moha7 | Do you recommend the separation of net nodes like this: https://docs.openstack.org/security-guide/_images/1aa-network-domains-diagram.png ? | 08:50 |
noonedeadpunk | yes, exactly. though it's applicable for lxb/ovs scenarios, ovn is a bit different, though gateway nodes are still a thing | 08:51 |
jrosser | moha7: no not 17 controllers | 08:51 |
jrosser | but every physical node and every container needs an ip off br-mgmt | 08:51 |
jrosser | and routers and mirrors and bastions and deploy host and monitoring and on it goes :) | 08:51 |
jrosser | and then you might want to divide the address space in some sane way to be able to write concise iptables/firewall rules using CIDR boundaries, for example | 08:52 |
jrosser | there is also no reason at all why the internal networking of your openstack deployment needs to be accessible from outside the deployment | 08:54 |
jrosser | though you kind of force that to happen by putting the external vip on the mgmt network which is sort of OK for a lab but really not for a production deployment | 08:55 |
moha7 | In the new lab, I'm going to take a more production-like approach, e.g. separating internal/external LB IPs via two different VLANs | 09:00 |
jrosser | right - i have a small /27 or something for external stuff | 09:01 |
moha7 | jamesdenton: Do you recommend network nodes separation from the controllers in a prod env that has OVN as its network backend? | 09:01 |
*** cloudnull2 is now known as cloudnull | 09:09 | |
moha7 | What do you recommend for the mgmt, external api, vlan (provider) and vxlan networks: 1G network interface or 10G? | 10:38 |
noonedeadpunk | jrosser: I've tested https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/871517/1/tasks/main.yml#76 and it does exactly what is needed | 10:47 |
*** dviroel|out is now known as dviroel | 11:18 | |
admin1 | moha7, "If it's not required to set br-vxlan as a bridge," -- a bridge makes a lot of sense and keeps the configs sane .. in prod you might have multiple interfaces, or a bond, or a single interface doing east-west .. and each company/deployment has its own .. so by just using it as a bridge, it frees you to use the same config for multiple | 11:37 |
admin1 | deployments | 11:37 |
admin1 | so what you want to plug into br-mgmt, or br-vlan or br-vxlan comes outside of openstack and upto the deployer | 11:38 |
admin1 | so when you want to plug in a new interface/bond to grow traffic, you just add to the bridge without having to touch openstack | 11:39 |
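To illustrate admin1's point, a minimal netplan sketch (the interface names, VLAN id, and address are hypothetical) where the host-side bridge hides the bond from OSA, so capacity can later be grown by touching only this file:

```yaml
# Sketch only: eno1/eno2, VLAN 10 and the address are placeholders.
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad          # LACP; add more members here to grow traffic
  vlans:
    bond0.10:
      id: 10
      link: bond0
  bridges:
    br-mgmt:
      interfaces: [bond0.10]   # OSA only ever sees br-mgmt
      addresses: [172.29.236.11/22]
```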
admin1 | in prod ( i have worked and deployed openstack in big public cloud providers ), usually the compute node is also set as a network node . that way, you guarantee that one node failure is not bringing down 33% of your network ( as it would when 3 controllers are also used as network nodes ) | 11:40 |
admin1 | for prod, the external vip is most of the time on a different network .. sometimes an isolated /30 between router and controllers, with firewalls only allowing openstack ports, and sometimes a WAF | 11:42 |
jrosser | doing distributed gateway or DVR is a pretty big decision with large implications; i don't think 'usually' there can mean "how everyone usually does it" | 11:42 |
jrosser | its how admin1 usually does it :) | 11:42 |
admin1 | i do not do dvr | 11:43 |
admin1 | i just also use the compute nodes as network nodes -- that is the only thing i do differently | 11:43 |
admin1 | btw, has anyone migrated from ovs -> ovn ? i would like to hear their success stories and tips | 11:46 |
moha7 | "it frees you to use the same config for multiple" | 12:01 |
moha7 | admin1: it's also possible by renaming; for example, we have `set-name` in the netplan configuration | 12:02 |
moha7 | But it's you who recommends not using a bridge, here: | 12:03 |
moha7 | In the deployment documentation: "Note that br-vxlan is not required to be a bridge at all, a physical interface or a bond VLAN subinterface can be used directly and will be more efficient." <-- https://docs.openstack.org/project-deploy-guide/openstack-ansible/zed/targethosts.html | 12:03 |
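Regarding the `set-name` rename moha7 mentions above, a minimal netplan sketch (the MAC address and target name are placeholders):

```yaml
# Sketch only: rename whichever NIC matches this MAC to "mgmt0"
network:
  version: 2
  ethernets:
    mgmt0:
      match:
        macaddress: "00:11:22:33:44:55"
      set-name: mgmt0
```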
noonedeadpunk | admin1: I've heard plenty of success stories at the summit but never did that on my own | 12:07 |
noonedeadpunk | it generally works like a charm, except things with MTU can go weird if there's no support for jumbo frames | 12:07 |
noonedeadpunk | so people said changing the MTU is the messiest part of the migration | 12:08 |
jrosser | moha7: what do *you* actually want to do? OSA is a toolkit and you can set up most things as you wish | 12:10 |
jrosser | you can use those bridges to have high uniformity across the hosts, or you can use the underlying interfaces/bonds directly | 12:10 |
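To make the second option concrete, a hedged sketch of a `provider_networks` entry in `openstack_user_config.yml` that uses a bond VLAN subinterface directly instead of br-vxlan; `bond0.30` is a hypothetical device name, and the rest mirrors the deploy guide's standard vxlan example:

```yaml
# Sketch: the deploy guide's vxlan network entry, with the bridge
# replaced by a plain bond VLAN subinterface (hypothetical bond0.30)
- network:
    container_bridge: "bond0.30"
    container_type: "veth"
    container_interface: "eth10"
    ip_from_q: "tunnel"
    type: "vxlan"
    range: "1:1000"
    net_name: "vxlan"
    group_binds:
      - neutron_linuxbridge_agent
```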
dokeeffe85 | Hi all, trying to install octavia via ansible and getting this: Invalid input for operation: physical_network 'lbaas' unknown for VLAN provider network. Do I need to create a new physical_network, or can I use the physnet1 that I already have? | 12:14 |
noonedeadpunk | So for octavia you must have a dedicated network that is present in neutron and is also available on the controllers inside the octavia_api containers | 12:15 |
noonedeadpunk | dokeeffe85: the most convenient option for that is having a VLAN network. You can use a vlan on your already existing physnet1 - that's not an issue | 12:16 |
dokeeffe85 | Perfect, thanks noonedeadpunk | 12:17 |
noonedeadpunk | but eventually you will need to define the network you want to use in the end. If you want to create this network in neutron manually, you can just set `octavia_service_net_setup: false` | 12:17 |
dokeeffe85 | Ok will do, thanks | 12:18 |
noonedeadpunk | these are the variables that you can set for the net if you want the playbook to manage networks after all https://opendev.org/openstack/openstack-ansible-os_octavia/src/branch/master/defaults/main.yml#L342-L357 | 12:18 |
jrosser | dokeeffe85: there is a worked example here (of one way, but not the only way) https://satishdotpatel.github.io/openstack-ansible-octavia/ | 12:18 |
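For context, a hedged sketch of the kind of overrides those defaults allow in `user_variables.yml` (the variable names are taken from the linked defaults file; the values are illustrative, not defaults):

```yaml
# user_variables.yml sketch -- names from os_octavia defaults, values illustrative
octavia_service_net_setup: true            # let the playbook create the network
octavia_provider_network_name: physnet1    # reuse the existing physnet
octavia_provider_network_type: vlan
octavia_management_net_subnet_cidr: 172.29.232.0/22
```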
dokeeffe85 | Thanks jrosser | 12:22 |
opendevreview | Merged openstack/ansible-role-zookeeper master: Ensure zookeeper is not stopped after role re-run https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/871517 | 14:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper stable/zed: Ensure zookeeper is not stopped after role re-run https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/871556 | 14:40 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:00 |
opendevmeet | Meeting started Tue Jan 24 15:00:18 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:00 |
noonedeadpunk | #topic rollcall | 15:00 |
noonedeadpunk | o/ | 15:00 |
mgariepy | hey | 15:03 |
noonedeadpunk | #topic office hours | 15:05 |
noonedeadpunk | well I don't have much tbh | 15:09 |
noonedeadpunk | I've held up Y due to discovering a quite critical CVE regarding the erlang version there | 15:10 |
noonedeadpunk | #link https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/871304 | 15:13 |
mgariepy | i don't have anything, i'm quite busy with other stuff atm so i won't have much time for osa beside quick review. | 15:13 |
NeilHanlon | hey, sorry I'm late. not much for me. preparing for CentOS Connect and FOSDEM next week (gasp) | 15:19 |
*** dviroel is now known as dviroel|lunch | 15:19 | |
noonedeadpunk | Will try to find you there :) | 15:21 |
noonedeadpunk | Well, yes, basically we have business as usual. As always - a bit of a lack of reviews, but bug fixes seem to land on time | 15:22 |
noonedeadpunk | Gates seem to be quite healthy as well now | 15:22 |
jrosser | apologies but andrewbonney and I aren't available for the meeting today | 15:26 |
noonedeadpunk | sure, no worries! | 15:29 |
noonedeadpunk | I think I will close the meeting early today then, due to lack of agenda basically | 15:30 |
mgariepy | +1 | 15:30 |
noonedeadpunk | One thing though - I won't be around next week for the meeting | 15:30 |
noonedeadpunk | And actually I won't be around next week at all :) | 15:30 |
noonedeadpunk | So is there anybody who wants to be a meeting chair? | 15:31 |
noonedeadpunk | As otherwise I will just cancel the meeting | 15:31 |
noonedeadpunk | Ok, I think I will cancel it then :) | 15:41 |
noonedeadpunk | #endmeeting | 15:41 |
opendevmeet | Meeting ended Tue Jan 24 15:41:28 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:41 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-01-24-15.00.html | 15:41 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-01-24-15.00.txt | 15:41 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-01-24-15.00.log.html | 15:41 |
*** dviroel|lunch is now known as dviroel | 16:30 | |
noonedeadpunk | Oh my... | 16:43 |
*** dviroel is now known as dviroel|doc_appt | 16:43 | |
jamesdenton | ? | 16:48 |
noonedeadpunk | https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031886.html | 16:56 |
NeilHanlon | oh my indeed | 17:01 |
*** dviroel|doc_appt is now known as dviroel | 19:13 | |
jrosser | i would think that https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031886.html is a really good test case to see if some of the systemd private stuff can mitigate bugs like that | 20:12 |
*** dviroel is now known as dviroel|out | 22:44 |