noonedeadpunk | mornings | 08:01 |
---|---|---|
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Disable floating IP usage in magnum_cluster_templates https://review.opendev.org/c/openstack/openstack-ansible/+/880047 | 08:27 |
jrosser | morning | 08:30 |
jrosser | this is looking reasonable https://review.opendev.org/c/openstack/openstack-ansible/+/871189/34 | 09:16 |
jrosser | damiandabrowski: if we add TLS + no TLS jobs now to the opentack-ansible repo then we would be able to test both situations as we merge the TLS backend things | 09:18 |
damiandabrowski | okok, do you have any suggestions how to handle it? should we just enable tls backend for one already existing job or create new one? | 11:01 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-os_blazar master: Add uWSGI support to blazar https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/880651 | 11:06 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-os_blazar master: Add TLS support to blazar backends https://review.opendev.org/c/openstack/openstack-ansible-os_blazar/+/880652 | 11:06 |
noonedeadpunk | pffffff | 11:17 |
noonedeadpunk | Seems gates are broken again | 11:17 |
noonedeadpunk | NeilHanlon: `Error: Failed to download metadata for repo 'appstream': repomd.xml parser error: Parse error at line: 1 (Extra content at the end of the document` for the https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=AppStream-$releasever$rltype | 11:19 |
jrosser | from our org wide slack where everything also has blown up `The Rocky Linux people believe they've fixed the problem and things should be recovering now` | 11:34 |
noonedeadpunk | ++ | 11:37 |
admin1 | i did an upgrade from 26.0.1 -> 26.1.0 and now horizon does not load .. https:// on horizon internal IP gives This site can’t provide a secure connection 172.29.239.156 sent an invalid response. | 13:36 |
admin1 | ERR_SSL_PROTOCOL_ERROR | 13:36 |
admin1 | https://cloud.domain.com returns 503 | 13:37 |
*** cloudnull6 is now known as cloudnull | 13:37 | |
admin1 | is there some internal tls or https:// or cert thing ? | 13:37 |
noonedeadpunk | there should not be any TLS from haproxy to horizon (yet) | 14:07 |
NeilHanlon | noonedeadpunk, jrosser: should be all fixed now. sorry :( | 14:11 |
NeilHanlon | never try to 'simply' replace your domain controllers | 14:11 |
noonedeadpunk | hehe | 14:13 |
noonedeadpunk | I was just gonna do that :D | 14:13 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:03 |
opendevmeet | Meeting started Tue Apr 18 15:03:52 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:03 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:03 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:03 |
noonedeadpunk | #topic rollcall | 15:03 |
NeilHanlon | o/ | 15:03 |
noonedeadpunk | o/ | 15:04 |
noonedeadpunk | hey everyone | 15:04 |
damiandabrowski | hi! | 15:04 |
jrosser | o/ hello | 15:04 |
noonedeadpunk | #topic office hours | 15:06 |
noonedeadpunk | I've just found out that we somehow missed trove for https://review.opendev.org/q/topic:osa/pki | 15:06 |
noonedeadpunk | I'm going to prepare a fix for that and was kinda wondering if we wanna backport it | 15:06 |
jrosser | what does it use it for? | 15:07 |
noonedeadpunk | rabbitmq? | 15:08 |
jrosser | oh you mean kind of like this one https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/791726 | 15:09 |
noonedeadpunk | yup, exactly | 15:09 |
noonedeadpunk | another thing that is broken at the moment is zun. | 15:09 |
noonedeadpunk | It's been a while since we've bumped the version of kata, and now kata is gone from the suse repos (obviously) | 15:10 |
damiandabrowski | i was gonna talk about it | 15:10 |
noonedeadpunk | I've attempted to install kata from github sources but did not spend much time on it, to be frank | 15:10 |
noonedeadpunk | also I'm not quite sure if it should still be integrated with docker, or if podman alone is good enough with modern zun | 15:11 |
noonedeadpunk | go on damiandabrowski :) | 15:11 |
damiandabrowski | firstly i thought that disabling katacontainers by default for debian/ubuntu is the best option (it's an optional component anyway and IMO it should be somehow "fixed" on the zun side because they mention an invalid repo in their docs: https://docs.openstack.org/zun/latest/install/compute-install.html#enable-kata-containers-optional) | 15:12 |
damiandabrowski | but disabling kata on master is not enough because obviously upgrade jobs do not pass CI | 15:12 |
damiandabrowski | so it can be solved by cherry-picking this change to stable branches which doesn't sound good :| | 15:13 |
damiandabrowski | https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/880683 | 15:13 |
noonedeadpunk | I think the issue is also that the main job times out | 15:14 |
noonedeadpunk | so it actually does not work either | 15:15 |
noonedeadpunk | I kinda have the same "result" with https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/880288?tab=change-view-tab-header-zuul-results-summary | 15:15 |
noonedeadpunk | But the thing is that the container is never ready | 15:15 |
noonedeadpunk | so regardless of kata - it needs a closer look | 15:15 |
noonedeadpunk | good thing is that octavia seems to be sorted out now | 15:16 |
damiandabrowski | ah, i thought the timeout issue was just a matter of recheck, but probably you're right :| | 15:16 |
jrosser | i will look through our notes here | 15:17 |
jrosser | we did some stuff with zun but didn't ever deploy it for real, but the kata thing is a mess | 15:17 |
noonedeadpunk | I think it was a while back as well... | 15:20 |
jrosser | yeah | 15:20 |
noonedeadpunk | I'd imagine that docker/podman can be a mess as well | 15:20 |
noonedeadpunk | as kata now suggests to just go with podman | 15:21 |
noonedeadpunk | and no idea if zun has support for that, as it hasn't really been developed lately | 15:21 |
noonedeadpunk | we're also super close to merging https://review.opendev.org/q/topic:osa/systemd_restart_on_unit_change+status:open | 15:22 |
noonedeadpunk | So Zun, Adjutant and Magnum | 15:23 |
noonedeadpunk | For Adjutant and Magnum we need to fix upgrade jobs by backporting stuff. | 15:23 |
noonedeadpunk | Once we land this I will go through the patches and backport them as we agreed at the PTG | 15:24 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_trove master: Add variables for rabbitmq ssl configuration https://review.opendev.org/c/openstack/openstack-ansible-os_trove/+/880760 | 15:25 |
noonedeadpunk | Another thing. There was a mailing list thread over the weekend calling for volunteers to maintain the OVS->OVN migration in neutron code. The migration is done for TripleO, but I decided to pick up this challenge and adapt/refactor it for OSA as well | 15:27 |
damiandabrowski | great! | 15:28 |
mgariepy | i will have an ovs deployment to migrate to ovn at some point. | 15:28 |
mgariepy | but i don't have any cycles right now | 15:28 |
opendevreview | Merged openstack/openstack-ansible-os_keystone master: Use chain cert file for apache https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/879914 | 15:29 |
noonedeadpunk | Yeah, me neither, but it's smth I'd love to have :) | 15:30 |
noonedeadpunk | jamesdenton: btw, I can recall you saying smth about LXB->OVN? Do you have any draft? | 15:30 |
noonedeadpunk | As maybe it's smth I could take a look at during this work as well | 15:30 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Update release name to Antelope https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/880761 | 15:31 |
noonedeadpunk | Also, RDO folks were looking into adding an OSA aio deployment to their CI to spot issues early. They were barely aware that we have this feature, so with the death of tripleo this path can get some attention. At the very least, awareness has been raised a bit | 15:33 |
noonedeadpunk | I guess that's why we've seen these great doc patches for aio :) | 15:37 |
jrosser | they were nice patches | 15:37 |
noonedeadpunk | Btw, I've also changed the naming of the project on this page https://docs.openstack.org/zed/deploy/index.html | 15:40 |
noonedeadpunk | So it's clearer what the project does compared to the others | 15:40 |
noonedeadpunk | Damn. Just spotted that "Guide" is used twice.... | 15:40 |
damiandabrowski | another thing I wanted to raise is blazar haproxy service | 15:44 |
noonedeadpunk | mhm | 15:45 |
damiandabrowski | https://review.opendev.org/c/openstack/openstack-ansible/+/880564 | 15:45 |
damiandabrowski | seems like blazar doesn't have '/healthcheck' implemented and it requires authentication for all API requests | 15:45 |
damiandabrowski | that makes it hard for haproxy to monitor the backends (haproxy always receives a 401 http code): | 15:45 |
damiandabrowski | {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}} | 15:45 |
damiandabrowski | Do you think it's ok to fix it by applying the change below? (at least it works on my aio) | 15:45 |
damiandabrowski | haproxy_backend_httpcheck_options: | 15:46 |
damiandabrowski | - 'expect rstatus (200|401)' | 15:46 |
noonedeadpunk | So it requires auth also for `/`? | 15:46 |
damiandabrowski | yeah | 15:46 |
noonedeadpunk | that sucks | 15:46 |
noonedeadpunk | yeah, it doesn't seem to have api-paste... | 15:47 |
damiandabrowski | we have something similar for murano(but without regex) | 15:47 |
damiandabrowski | https://opendev.org/openstack/openstack-ansible/src/commit/3f9c8300d8d09832607d2670cb3425a59bb26ac1/inventory/group_vars/haproxy/haproxy.yml#L392 | 15:47 |
noonedeadpunk | Btw, I kinda wonder if for murano we could just drop /v1 instead | 15:48 |
noonedeadpunk | damiandabrowski: we have smth similar for rgw btw https://opendev.org/openstack/openstack-ansible/src/commit/3f9c8300d8d09832607d2670cb3425a59bb26ac1/inventory/group_vars/haproxy/haproxy.yml#L168 | 15:48 |
noonedeadpunk | but yes, I think that fix would be fine | 15:49 |
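For reference, a minimal sketch of the override damiandabrowski pasted above, as it would sit inside the blazar entry in the haproxy group_vars; the surrounding keys and exact placement are assumptions, only the httpcheck option itself comes from the discussion:

```yaml
# Treat 401 as "backend alive": blazar has no /healthcheck and authenticates every
# request, so an unauthenticated probe from haproxy legitimately returns 401.
haproxy_backend_httpcheck_options:
  - 'expect rstatus (200|401)'
```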
jamesdenton | hi noonedeadpunk | 15:50 |
jamesdenton | https://www.jimmdenton.com/migrating-lxb-to-ovn/ | 15:50 |
noonedeadpunk | aha, great, thanks! | 15:51 |
noonedeadpunk | have you tried btw running vxlans with ovn? | 15:51 |
jamesdenton | I think it was written before we went to OVN in Zed, so the skel manipulation may not be required anymore | 15:52 |
jamesdenton | I don't think i have tried vxlan, as i recall this: Also, according to the OVN manpage, VXLAN networks are only supported for gateway nodes and not traffic between hypervisors: | 15:52 |
jamesdenton | https://www.ovn.org/support/dist-docs/ovn-controller.8.html | 15:52 |
jamesdenton | " Supported tunnel types for connecting hypervisors are | 15:53 |
jamesdenton | geneve and stt. Gateways may use geneve, vxlan, or stt." | 15:53 |
jamesdenton | /shrug | 15:53 |
noonedeadpunk | aha | 15:53 |
noonedeadpunk | ok, yes, I see. As I heard rumors that it's doable... | 15:54 |
jamesdenton | it might be, let me see if i get any sort of error trying. If all nodes are gateway nodes, then maybe? | 15:55 |
noonedeadpunk | huh, might be.. But then all communication between VMs will be possible only through public networks I assume? | 15:56 |
noonedeadpunk | or well, through gateways | 15:56 |
damiandabrowski | ah, there's one more thing. As jrosser pointed out, we should somehow test tls backend in CI. | 15:58 |
damiandabrowski | Do you have any ideas how we should do this? | 15:58 |
damiandabrowski | enable the tls backend on some of the already existing jobs, or create new ones? | 15:59 |
damiandabrowski | (I'd appreciate some help here as I'm not really experienced with zuul :|) | 15:59 |
noonedeadpunk | I think we need to add a new job for at least 1 distro (like jammy) that would differ from the default | 16:00 |
noonedeadpunk | but we should then discuss what job we want | 16:01 |
jamesdenton | not public, just that every node can be an egress point | 16:01 |
jamesdenton | probably better to get vxlan->geneve | 16:01 |
noonedeadpunk | yeah, I think it's better indeed... | 16:02 |
noonedeadpunk | damiandabrowski: meaning - no tls at all, or no tls between haproxy and api, or no tls for internal at all | 16:02 |
noonedeadpunk | maybe no tls for the internal endpoint and none between haproxy->uwsgi would make the most sense to me | 16:03 |
noonedeadpunk | I think this is good example https://opendev.org/openstack/openstack-ansible/commit/b59b392813c060139860afb74682ce664d895562 | 16:03 |
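For context, a new scenario job in the openstack-ansible zuul config generally looks roughly like the hypothetical sketch below; the job name, parent, and the idea that the scenario string in the name toggles the TLS-backend behaviour are assumptions here, not taken from the linked commit:

```yaml
# zuul.d/jobs.yaml (hypothetical): add a jammy-based variant whose scenario differs
# from the default, so TLS and non-TLS backends are both exercised in CI.
- job:
    name: openstack-ansible-deploy-aio_lxc_tls-ubuntu-jammy
    parent: openstack-ansible-deploy-aio
```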
noonedeadpunk | #endmeeting | 16:03 |
opendevmeet | Meeting ended Tue Apr 18 16:03:58 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:03 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-04-18-15.03.html | 16:03 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-04-18-15.03.txt | 16:03 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-04-18-15.03.log.html | 16:03 |
damiandabrowski | so if we're going to create one new job with no tls enabled at all, I assume we aim to enable tls backend by default? | 16:07 |
jrosser | did we decide at PTG what the defaults would be? | 16:08 |
damiandabrowski | I don't recall it(but maybe I forgot about something) | 16:09 |
noonedeadpunk | no I don't think we did | 16:10 |
noonedeadpunk | Or well, maybe during the previous PTG. But then the arguments were based on the excessive complexity we'd need to maintain | 16:11 |
noonedeadpunk | Now, when it's no longer obligatory to have it... | 16:11 |
noonedeadpunk | Maybe we want to keep TLS for haproxy<->uwsgi disabled by default | 16:12 |
noonedeadpunk | (i don't really know) | 16:12 |
damiandabrowski | maybe disabling it by default is ok for now but we still should enable it in CI? | 16:13 |
damiandabrowski | then we can still do as you say and keep tls disabled only for one job | 16:14 |
noonedeadpunk | (or enabled by 1 job) | 16:15 |
noonedeadpunk | I'd say that CI should follow mostly the default behaviour | 16:16 |
noonedeadpunk | as then by default we also don't have TLS for internal endpoints IIRC | 16:16 |
noonedeadpunk | It's only an AIO thing | 16:16 |
damiandabrowski | yeah, but on the other hand it's very unlikely to break something only for non-TLS, while it's quite easy to break something for TLS :D | 16:17 |
noonedeadpunk | To be frank I'd rather discuss that next week and vote | 16:17 |
damiandabrowski | +1 | 16:17 |
opendevreview | Merged openstack/openstack-ansible master: Add missing blazar haproxy service https://review.opendev.org/c/openstack/openstack-ansible/+/880564 | 16:52 |
damiandabrowski | has anyone seen strange behavior during facts gathering recently? Now I'm struggling with issues with masakari, but I had a similar issue with nova a few days ago | 17:57 |
damiandabrowski | https://paste.openstack.org/raw/bdYgzmf4Aem1iVoVRKsH/ | 17:57 |
damiandabrowski | i tried to comment out the pacemaker_corosync role but then I just got a similar error later: | 17:57 |
damiandabrowski | https://paste.openstack.org/raw/b2KGLxNhn174Pb6uItwf/ | 17:57 |
damiandabrowski | removing /etc/openstack_deploy/ansible_facts content doesn't help | 17:58 |
damiandabrowski | but manually running the setup module (ansible -m setup masakari_all) does help | 17:58 |
damiandabrowski | i tried to run os-placement-install.yml but I did s/placement/masakari/g beforehand and it worked fine | 17:59 |
damiandabrowski | i really can't explain it :| | 17:59 |
damiandabrowski | running setup-hosts.yml also fixes the issue so that's probably why we don't see it in CI | 18:04 |
noonedeadpunk | damiandabrowski: we had a patch merged recently for masakari | 18:09 |
noonedeadpunk | or maybe it's not merged yet - not sure | 18:09 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible-os_masakari/+/880360 | 18:10 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible/+/880459 | 18:10 |
noonedeadpunk | damiandabrowski: you should do some reviews to stop fighting with bugs that are already fixed :D | 18:10 |
damiandabrowski | you may be right... :D | 18:15 |
damiandabrowski | regarding https://review.opendev.org/c/openstack/openstack-ansible-os_masakari/+/880360 | 18:18 |
damiandabrowski | can you explain how/where tests/ansible-role-requirements.yml file is used? | 18:18 |
damiandabrowski | i thought that the /tests directory inside service roles is not used these days | 18:19 |
noonedeadpunk | it's not | 18:19 |
noonedeadpunk | but meta is | 18:20 |
noonedeadpunk | so whenever you include the role it will trigger a run of apt_package_pinning that will fail on missing facts | 18:20 |
noonedeadpunk | So 2 things - we need to ensure we gather facts and we don't need this role to run :) | 18:21 |
noonedeadpunk | I'm actually not 100% sure whether smth is completely wrong with facts gathering, or whether meta is being processed even before pre-tasks (it actually can be) | 18:21 |
noonedeadpunk | But regardless it won't hurt | 18:21 |
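As a rough illustration of the "ensure we gather facts" half of that (a generic sketch, not the merged 880459 change; the play and group names are assumptions):

```yaml
# Gather facts for masakari hosts up front, so role meta dependencies pulled in
# later via include_role don't fail on missing facts.
- name: Gather masakari facts
  hosts: masakari_all
  gather_facts: false
  tasks:
    - name: Gather minimal platform facts
      setup:
        gather_subset: "!all"
```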
damiandabrowski | okok | 18:43 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Implement separated haproxy service config https://review.opendev.org/c/openstack/openstack-ansible/+/871189 | 19:10 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Fix blazar haproxy service https://review.opendev.org/c/openstack/openstack-ansible/+/880775 | 19:10 |
psymin | I'm hoping to set up a basic/simple 3 machine instance of openstack ansible on rocky. Going with Zed and Rocky 9 at the moment, but I'm open to other versions. Bumbling through the process now. I expect to have to reinstall the OS a number of times before I get it right. | 19:25 |
psymin | I'm not sure what I'm missing in the deploy guide. My hope is to have, for now, the most simple setup possible with three baremetal servers. | 19:43 |
noonedeadpunk | psymin: hey. So, where are you getting stuck then?:) | 19:45 |
damiandabrowski | maybe we will be able to help if you can provide more details ;) | 19:45 |
noonedeadpunk | though, I think we will be away now - it's quite late already in EU :( | 19:48 |
psymin | should I manually create the lvm volumes individually on the target machines? | 19:51 |
psymin | Each of the three machines has four nics. I have them set up with four /24 networks. I have them named Ext, Stor, Virt, and Mgmt. Does this sound acceptable? | 19:55 |
noonedeadpunk | psymin: so lvm volumes for cinder storage? | 19:57 |
noonedeadpunk | or how do you wanna utilize lvm? | 19:57 |
noonedeadpunk | regarding networks - sure, that does work | 19:58 |
psymin | At the moment I don't care if I use lvm or not. I just want to succeed with an ansible deploy so I can feel more confident about the process :) Then I'll play with it for a while, and probably reinstall the OS, change some parameters, and deploy again. | 19:58 |
noonedeadpunk | psymin: question - have you tried out the AIO setup? That will set up everything on a single VM | 19:58 |
noonedeadpunk | As that's smth I'd suggest starting with then | 19:58 |
psymin | I have done the AIO in a single server and it functions. | 19:58 |
noonedeadpunk | Ok, gotcha. | 19:59 |
noonedeadpunk | You can also replace the dummy interfaces there with real ones, and expand the setup with new controllers | 19:59 |
noonedeadpunk | to get mnaio :D | 19:59 |
psymin | multi node all in one? | 20:00 |
noonedeadpunk | yup. We actually have some code/doc here https://opendev.org/openstack/openstack-ansible-ops/src/branch/master/multi-node-aio but I'm not sure how relevant it is to be frank | 20:01 |
noonedeadpunk | Haven't used it in quite a while | 20:01 |
noonedeadpunk | Though, I think with just a manual install you should be on the right track | 20:01 |
noonedeadpunk | So LVM is needed mostly for cinder, as a volume backend | 20:01 |
noonedeadpunk | and yes, it needs to be configured manually, at least the PV/VG part | 20:02 |
psymin | For the finished system, we'll definitely need block devices. | 20:02 |
noonedeadpunk | But cinder is not _really_ required | 20:02 |
noonedeadpunk | Well, nova does provide block devices as well | 20:02 |
noonedeadpunk | and it can use just qcow files on the compute node filesystem and even live migrate with that | 20:03 |
noonedeadpunk | It's not very handy, as it manages disk size with flavors, so you'll need to have way more flavors. On top of that you can have only 1 block drive (or well, 2-3 if you count swap and the config drive) | 20:04 |
psymin | from the mindset of wanting the least configuration to start with, and building from there / redeploying, what would I need to configure? | 20:04 |
noonedeadpunk | so cinder allows to attach/detach extra ones whenever needed | 20:04 |
psymin | Here is my openstack_user_config.yml, which I assume is missing essential info and probably has some incorrect info :) https://paste.centos.org/view/raw/353bd6d6 | 20:05 |
noonedeadpunk | aside from infra (like repo/galera/rabbit/utility/haproxy/keepalived) you will absolutely need keystone, nova, neutron, glance and placement | 20:05 |
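For orientation, the host-group side of a minimal three-node openstack_user_config.yml covering the services noonedeadpunk just listed typically looks something like the sketch below; the group names follow the example files shipped with OSA, the IPs are illustrative, and the branch's openstack_user_config.yml.example is the authoritative reference:

```yaml
shared-infra_hosts:      # galera, rabbitmq, memcached, utility
  infra1:
    ip: 172.29.236.11
repo-infra_hosts:
  infra1:
    ip: 172.29.236.11
haproxy_hosts:
  infra1:
    ip: 172.29.236.11
identity_hosts:          # keystone
  infra1:
    ip: 172.29.236.11
image_hosts:             # glance
  infra1:
    ip: 172.29.236.11
placement-infra_hosts:
  infra1:
    ip: 172.29.236.11
compute-infra_hosts:     # nova control plane
  infra1:
    ip: 172.29.236.11
network_hosts:           # neutron server/agents
  infra1:
    ip: 172.29.236.11
compute_hosts:
  compute1:
    ip: 172.29.236.12
  compute2:
    ip: 172.29.236.13
```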
noonedeadpunk | log_hosts is not valid anymore | 20:06 |
psymin | removed, thank you | 20:06 |
noonedeadpunk | Also I'd assume using the same name for a server that has the same IP | 20:07 |
psymin | will I need to install rabbit manually on the targets, or does the deploy host do that with ansible for me? | 20:07 |
noonedeadpunk | rabbit should be part of os-infra_hosts | 20:07 |
noonedeadpunk | So OSA does manage that, as well as the mariadb galera cluster | 20:08 |
noonedeadpunk | sorry, shared-infra_hosts not os-infra_hosts :) | 20:08 |
noonedeadpunk | Also, do you wanna play with ironic? | 20:08 |
psymin | good catch, nope, we won't be needing that, removing | 20:09 |
noonedeadpunk | And do you want a bare metal deployment, or to use LXC? | 20:09 |
psymin | I doubt we'll need bare metal deployment and will only have these three bare metal servers for openstack. | 20:10 |
noonedeadpunk | Let me rephrase myself a bit:) | 20:10 |
psymin | each has 8 ssds, 6 cores, and 64gb of ram, currently only one SSD is partitioned and used for the host OS, the rest are unpartitioned | 20:11 |
noonedeadpunk | So there are a couple of options available. 1 - deploy services, like api, scheduler, rabbit, etc to LXC containers. 2 - deploy all that directly to these machines | 20:11 |
noonedeadpunk | as in the 1st case you will likely need to define provider_networks as well | 20:12 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/stable/xena/etc/openstack_deploy/openstack_user_config.yml.aio#L39 | 20:12 |
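The linked example boils down to roughly the sketch below (cidr_networks/used_ips omitted, addresses and queue names illustrative; check the branch-specific example for the group_binds your chosen network driver needs):

```yaml
global_overrides:
  internal_lb_vip_address: 172.29.236.9
  external_lb_vip_address: 192.168.104.100
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
```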
psymin | would that be global_overrides: ? | 20:12 |
noonedeadpunk | yep | 20:12 |
psymin | we don't have a hardware load balancer, does that mean we shouldn't configure external_lb_vip_address ? | 20:14 |
noonedeadpunk | um, no? We're deploying haproxy with keepalived by default, which fails the IP over in case of any trouble | 20:15 |
noonedeadpunk | It does not require a standalone loadbalancer | 20:15 |
psymin | cool | 20:15 |
noonedeadpunk | mostly it's just fine to colocate these with the rest of the control plane | 20:15 |
noonedeadpunk | unless you're starting to serve object storage through it and want good throughput | 20:16 |
noonedeadpunk | but I mean - then you'll need a hardware LB anyway | 20:16 |
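A hedged sketch of the relevant user_variables.yml knobs for that: the variable names are the usual OSA haproxy/keepalived toggles, but the addresses and interfaces below are guesses for this particular lab:

```yaml
# Keepalived floats these VIPs between controllers; no hardware LB required.
haproxy_keepalived_external_vip_cidr: "192.168.104.100/32"
haproxy_keepalived_internal_vip_cidr: "172.29.236.9/32"
haproxy_keepalived_external_interface: eno1
haproxy_keepalived_internal_interface: br-mgmt
```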
noonedeadpunk | sry, I need to head out now, it's getting quite late... folks are mostly around during UTC business hours | 20:16 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Define blank _haproxy_service_configs_simplified https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/880781 | 20:17 |
psymin | I don't intend to need object storage | 20:18 |
psymin | thank you | 20:18 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Revert "Skip haproxy with setup-infrastructure for upgrades" https://review.opendev.org/c/openstack/openstack-ansible/+/880091 | 20:20 |
noonedeadpunk | damiandabrowski: maybe we should add meta/clear_facts to the end of https://review.opendev.org/c/openstack/openstack-ansible/+/871189/35/playbooks/common-playbooks/haproxy-service-config.yml instead? | 20:20 |
noonedeadpunk | since 880781 kind of suggests that we should? | 20:21 |
* noonedeadpunk signing off | 20:22 | |
damiandabrowski | so...i tried to solve it with meta: refresh_inventory but it didn't help | 20:23 |
damiandabrowski | i can try with clear_facts tomorrow if you think it's safe | 20:23 |
damiandabrowski | but i'm not sure what's wrong with 880781 :D | 20:24 |
noonedeadpunk | meh, I don't know... just a bit meh... I wonder if defining it in task vars would do the trick as well | 20:32 |
noonedeadpunk | but also - we don't import that part multiple times? | 20:33 |
noonedeadpunk | we do tasks_from always? | 20:33 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Add support for TLS backends https://review.opendev.org/c/openstack/openstack-ansible/+/879085 | 20:33 |
noonedeadpunk | so maybe it's "broken" due to import_role vs include_role? | 20:34 |
damiandabrowski | i'll work on this tomorrow, my brain is not working anymore... :D | 20:38 |
opendevreview | Merged openstack/openstack-ansible-os_masakari master: Drop apt_package_pinning from role requirements https://review.opendev.org/c/openstack/openstack-ansible-os_masakari/+/880360 | 20:39 |
opendevreview | Merged openstack/openstack-ansible master: Gather generic masakari facts https://review.opendev.org/c/openstack/openstack-ansible/+/880459 | 20:41 |
noonedeadpunk | sure thing, I'm not saying that it should be done now hehe, just throwing ideas aloud | 20:47 |
NeilHanlon | hey psymin, welcome :) | 21:01 |
psymin | howdy | 21:01 |
psymin | I fear that I might need more handholding than I'd like to admit. | 21:02 |
psymin | when you deploy with rocky 9, do you get the warnings "Failed to parse /opt/openstack-ansible/inventory/dynamic_inventory.py" ? | 21:07 |
psymin | perhaps I should try with different distros and compare the output | 21:08 |
jrosser | psymin: I think ansible tries to determine if it is an ini file, something like that | 21:08 |
jrosser | the warning is ok | 21:08 |
NeilHanlon | just getting my lab back up. what branch did you decide to deploy psymin? Zed or Antelope? | 21:09 |
psymin | NeilHanlon, Zed, since it seemed to me that it was "done" | 21:09 |
jrosser | psymin: you can look at the output for all the different distros in our CI jobs | 21:09 |
jrosser | NeilHanlon: psymin openstack-ansible is a “cycle trailing” project which means we get 3 months after the openstack projects make a release to finalise ours | 21:10 |
jrosser | so for OSA the most recent release is Zed, and we still work on Antelope | 21:11 |
NeilHanlon | ope, yeah. bad question on my part | 21:12 |
NeilHanlon | psymin: it might be best to try and start "fresh" on your server and try the deployment from the beginning. an AIO on a single server is a good starting point to understand how it all works together, before starting on a multi-node (distributed) setup | 21:14 |
psymin | I've done AIO on a single server a few times | 21:15 |
NeilHanlon | gotcha - how are your servers connected? | 21:15 |
NeilHanlon | you'll want to do a bit of planning on the networks you need, and how you'll configure the interfaces | 21:15 |
NeilHanlon | https://docs.openstack.org/openstack-ansible/zed/user/network-arch/example.html | 21:16 |
psymin | four nics, four /24 networks, mgmt, stor, virt, ext | 21:16 |
jrosser | don’t be afraid to use vlans if your switch supports that | 21:18 |
psymin | it does support vlans | 21:19 |
jrosser | things are vastly simpler particularly with external networks if you use trunk ports/vlans | 21:19 |
psymin | Where is the CI for OSA and distros online? | 21:20 |
jrosser | specifically network type “flat” in openstack looks appealing because it is conceptually simple, but the “vlan” type eventually needs less config and is more flexible | 21:20 |
jrosser | ^ for external networks | 21:20 |
NeilHanlon | i've burned many hours on that very thing heh | 21:20 |
psymin | Whichever is easiest for me to grasp temporarily to get a functional test environment so that I have renewed vigor to continue :) | 21:21 |
jrosser | well, like NeilHanlon said you can add extra compute nodes to an existing AIO relatively easily | 21:21 |
jrosser | might start from a slightly different host network setup but it will be very similar | 21:22 |
jrosser | take away the NAT and stuff that bootstrap-aio does and you're pretty much there | 21:22 |
jrosser | trouble is, OSA is really like a toolkit | 21:23 |
jrosser | so you can make really anything you like, and there’s not really a right answer for anything | 21:23 |
* NeilHanlon needs to contribute some example RHEL network configs | 21:24 | |
jrosser | so perhaps what I mean is that it’s important to understand/plan what you want, rather than expect the tool to do magic and decide for you | 21:24 |
jrosser | then express what you want in the config | 21:25 |
jrosser | this is most true for networking | 21:25 |
NeilHanlon | There are a _lot_ of knobs to tune/touch/play with, if you want to, but not all of them (most of them) are required for many deployments | 21:26 |
NeilHanlon | this is a really good "just starting out" guide - https://docs.openstack.org/openstack-ansible/zed/user/test/example.html | 21:26 |
NeilHanlon | can even remove heat from there, I think, to make it more simple | 21:27 |
psymin | is it acceptable to have storage1, compute1, and infra1 all bound to the same server? | 21:27 |
jrosser | NeilHanlon: I think we lack a “homelab” type doc, everything we have is oriented more at larger scale and lots of H/A | 21:27 |
psymin | we have three baremetal machines, all with ample cpu, disk, nics and ram to utilize | 21:28 |
NeilHanlon | psymin: a more "hyperconverged" setup is definitely possible | 21:29 |
NeilHanlon | as long as your networking and such is configured and defined in the user_config, OSA doesn't "care" where you put things | 21:29 |
jrosser | psymin: you can certainly do that, just be mindful of how much ram you need to reserve for the services vs vm - otherwise the OOM killer will cause havoc | 21:29 |
NeilHanlon | jrosser: one of the things i'm working on is a project that will deploy to different cloud providers (currently digital ocean and vultr), and then install a cluster on top of those nodes | 21:32 |
admin1 | when I do a curl http://horizon-internal-ip, it redirects me to https://horizon-internal-ip | 21:32 |
admin1 | but it's not listening on https:// | 21:32 |
NeilHanlon | cloudception | 21:32 |
admin1 | so where did that https redirect come from | 21:32 |
admin1 | curl 172.29.239.156:80 -I => Location: https://172.29.239.156/auth/login/?next=/ ; .. i see apache2 listening on both 80 and 443, but the one on 443 does not respond .. could be looking for a cert that haproxy does not have | 21:33 |
jrosser | admin1: doesn’t the haproxy config have an 80->443 redirect as part of it? | 21:36 |
jrosser | admin1: actually what do you mean horizon-internal-ip? | 21:37 |
admin1 | backend horizon-back has something like server r2c1_horizon_container-19ab5602 172.29.239.156:80 check port 80 inter 12000 rise 3 fall 3 .. but when I curl to that .156:80, it redirects to 443 | 21:37 |
jrosser | the ip of the backend? or haproxy? | 21:37 |
admin1 | and that 443 does not open | 21:37 |
admin1 | that is the ip of the backend | 21:37 |
jrosser | so the answer would be in the horizon container I think | 21:38 |
jrosser | likely the Apache config | 21:39 |
admin1 | i destroyed everything and recreated it, same result .. so most probably some config turns ON this setting | 21:39 |
psymin | NeilHanlon, It sounds like you're suggesting I reinstall rocky 9 on all these servers, then rewrite my configs, then try deploying again? | 21:43 |
jrosser | admin1: look in the horizon role at the openstack_dashboard.conf.j2 template - it’s fairly clear what’s going on | 21:43 |
* jrosser enough for today | 21:43 | |
NeilHanlon | psymin: that's "cleanest", at least for the node you started the deployment on. but you can also probably get away with deleting a few directories which contain the major outputs of the bootstrap-ansible.sh script | 21:48 |
admin1 | jrosser, thanks .. i will try .. but i don't get it, and also why it's breaking all of a sudden | 21:49 |
jrosser | admin1: well we really don’t change stuff much on stable branches | 21:49 |
psymin | NeilHanlon, I can probably have the reinstall and prep done tomorrow. Perhaps I can coerce you to help with the openstack_user_config.yml for deploying? | 21:50 |
NeilHanlon | of course, happy to give advice | 21:50 |
psymin | Awesome! I also have some questions about the network. | 21:50 |
admin1 | i think horizon_enable_ssl got enabled somehow .. or it was not activated properly in mine, so it did not appear before | 21:51 |
admin1 | i will set it to false and rerun the playbooks .. thanks jrosser | 21:52 |
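For anyone following along, the override admin1 is describing would live in /etc/openstack_deploy/user_variables.yml and look something like the sketch below; horizon_enable_ssl is the os_horizon role toggle, but whether it is the actual root cause of this upgrade behaviour is still unconfirmed above:

```yaml
# Serve the dashboard over plain HTTP on the backend; TLS stays terminated at haproxy.
horizon_enable_ssl: False
```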
psymin | I assume Container / Management / br-mgmt needs to be routable to the lan and not completely isolated. Storage / br-storage can be isolated. Overlay / br-vxlan .. does that need to be routable to the internet or can it be isolated? | 21:53 |
NeilHanlon | overlay should be isolated, as it's your guest's project traffic | 21:53 |
admin1 | everything can be isolated as well .. in my case br-mgmt, br-storage, br-vxlan are all unrouted networks on their own private vlans | 21:54 |
NeilHanlon | yep, that's also true. depends how you define 'isolation' | 21:54 |
NeilHanlon | as long as the relevant hosts can speak to one another | 21:54 |
psymin | Perhaps I should have the management interface on the same LAN that all of our desktops are on? | 21:55 |
psymin | rather than be its own? | 21:56 |
NeilHanlon | from a security perspective that's not as good. if it's in their own network you can firewall them off. plus there is a fair amount of traffic on that network | 21:57 |
jrosser | ^ don’t do that:) | 21:57 |
psymin | okay :) | 21:57 |
admin1 | you can have one ip that can be reached over the office lan on an interface of its own, and then just use it as a VIP/NAT | 21:57 |
psymin | so we'll just have to route between the lan and management/container ? | 21:57 |
admin1 | or have 1 haproxy or router or switch (with L3) map your internal api endpoint -> external ip | 21:57 |
psymin | I hope the management network isn't accessible from the external world, just Lan | 21:58 |
psymin | We'll wireguard in if we need access | 21:58 |
admin1 | on controllers, say you have eth0 -- this is where you ssh to log in to the server .. it has an ip from your network/lan .. on the same controller, you can have one or multiple network cards, or just this eth0 with different vlans; on top of those you have br-mgmt, br-vxlan, br-storage etc | 21:58 |
admin1 | then your cloud IP ( VIP ) will be something on eth0 which internally connects to br-mgmt | 21:59 |
psymin | we currently have four physical nics on each server, so we might as well use them IMO. | 22:00 |
psymin | it sounds like Overlay / br-vxlan doesn't need any routing and can be isolated, same with Storage / br-storage. Container / br-mgmt will need routing to the local LAN so we can access the horizon interface. Then we have another nic for external network access? | 22:01 |
admin1 | br-mgmt is not routed .. you use the same IP you SSH to on the server, for example, as a proxy endpoint which will give you access to the services running on br-mgmt | 22:02 |
admin1 | you need to ssh to the server right ? so think of 1 separate IP on the ssh range that will NAT ( in our case haproxy ) and give you access to all the services running on br-mgmt | 22:02 |
psymin | so if eno1 has IP 192.168.104.101 (ext) and eno3 has IP 192.168.103.101 (management) .. are you suggesting I set up an ssh tunnel to allow me access to the horizon web interface that is bound to management? | 22:03 |
psymin | eno1 is what I'm currently sshing to | 22:03 |
admin1 | no, the ssh IP itself 192.168.104.101 will have haproxy running, so port 80 of .101 will proxy and provide you the services running on the 192.168.103.x ( management ) range | 22:05 |
admin1 | have you ever used lxc or docker in a system ? | 22:05 |
admin1 | how do you expose it ? | 22:05 |
admin1 | you use something like nginx or haproxy to map network reachable ip to the internal ips | 22:05 |
psymin | I'm most familiar with qemu / kvm | 22:05 |
psymin | I have used docker | 22:05 |
admin1 | lets use docker | 22:06 |
admin1 | docker creates 172.x ip | 22:06 |
admin1 | your system may have 192.168.x.1 | 22:06 |
admin1 | so you run haproxy or nginx on 192.168.x.1 and map internal docker IP/port for it to be reachable from outside | 22:06 |
admin1 | think of the same in the openstack case | 22:06 |
psymin | When you say "outside" here you're meaning the local lan? | 22:07 |
admin1 | br-mgmt, br-storage, br-vxlan are like docker .. no one from outside sees them directly | 22:07 |
admin1 | yes | 22:07 |
admin1 | your ssh ip is what will be used to expose these services via haproxy to the outside lan | 22:07 |
psymin | it sounded like the IP I ssh to is supposed to be on the management network. Did I misread the documentation? | 22:08 |
admin1 | you have private br-mgmt, br-storage, br-vxlan range .. and ssh ip | 22:08 |
admin1 | you have 4 network cards ? | 22:09 |
admin1 | what are their speeds ? | 22:09 |
psymin | gigabit | 22:09 |
admin1 | 1gb each ? | 22:09 |
psymin | yes, technically I think some support more but our switch doesn't | 22:09 |
admin1 | how many controllers ? | 22:10 |
psymin | looks like they're all 10 gig nics but connected to 1 gig ports on the switch | 22:11 |
NeilHanlon | plenty of buffer space, then ! | 22:11 |
admin1 | how many controllers are you starting with ? | 22:11 |
NeilHanlon | admin1: that's the question, basically | 22:11 |
NeilHanlon | it's a three node "lab" sort of cluster, it sounds like | 22:12 |
psymin | I'm not sure what you mean by controller. There are three baremetal servers; I'd like each one to offer cpu, ram and disk. If they can all be "controllers" that'd be handy. | 22:14 |
admin1 | you can have 1 controller , and 2 computes | 22:14 |
admin1 | what is your storage system ? | 22:14 |
admin1 | where do you plan to save your images and volumes | 22:15 |
psymin | we have 8 sata ssds on each server | 22:15 |
psymin | I hope to use 7 ssds on each server to offer storage | 22:15 |
psymin | one is for os | 22:16 |
admin1 | do you plan to use ceph ? | 22:16 |
psymin | a fantasy was to use ceph, but that adds more complexity than I can handle at the moment. | 22:16 |
psymin | if you think ceph will simplify things, great | 22:16 |
psymin | for our needs ceph is overkill, but it would be good to know | 22:17 |
admin1 | you can have 3 servers, all on ceph .. create your ceph cluster of 3 .. so i would give 2x ssd for storage of the OS and 6x for ceph | 22:17 |
admin1 | it depends on how this cluster is going to be used | 22:17 |
admin1 | will it grow, will there be paying customers, what are growth prospects, if things go awesome, how do you see growth in 6 months, what kind of workload profile etc | 22:18 |
psymin | no customer data, mostly just our own deployments, email server, web server, probably nextcloud | 22:18 |
psymin | no paying customers, only our virtual machines | 22:18 |
psymin | migrating away from qemu / kvm | 22:18 |
psymin | however, our product does get deployed to openstack environments in our customer networks, so having one locally will be of great use to us | 22:19 |
admin1 | what is the server spec ? | 22:19 |
psymin | in summary we won't even come close to any bottlenecks on these servers | 22:19 |
NeilHanlon | psymin: a controller in this instance is basically "a host which runs the components of OpenStack which are required to run OpenStack". the biggest thing to be concerned with when having less than, say 2, controllers, is you don't have redundancy of those components. For example, if you have one controller and it goes down, your cluster no worky | 22:19 |
psymin | I agree, redundancy would be great. Multiple controllers would be great. | 22:20 |
psymin | if things "go awesome" there will be no growth and everything will run stable | 22:21 |
admin1 | you can have 1 server as controller, use 2x ssd disk for OS .. and then raid10 the other 6 .. this will be used for glance and cinder .. | 22:22 |
NeilHanlon | how much CPU/RAM do the nodes have? the services which run openstack use their own resources, so it could make sense to use one controller with two compute nodes for now, since then your compute nodes are only doing compute stuff (well, and storage) | 22:22 |
psymin | growth for our situation shouldn't put any additional load on these servers | 22:22 |
admin1 | the rest of the 2 servers can be used for your workload | 22:22 |
NeilHanlon | then if in a year you need another compute node, buy another controller too | 22:22 |
psymin | NeilHanlon, 64 gigs of ram, 6 1.9ghz cores each | 22:23 |
NeilHanlon | as long as you have backups of your database and the important stuff, it's not the end of the world to have one controller | 22:23 |
NeilHanlon | depends on your SLAs :D | 22:23 |
psymin | the hope and plan is to never need any more hardware than we have, since it is so excessive and growth of the company won't put any more load on these servers | 22:23 |
psymin | we'll be the only ones using the vms, no customer machines on them | 22:24 |
NeilHanlon | right, but if you host a mail server for example, your boss might be mad if email is down for a few days while you restore the controller | 22:24 |
psymin | yep, definitely that, so we should have two (or three) controllers if possible. | 22:25 |
psymin | Isn't it possible to have one provide compute and be a controller? | 22:25 |
admin1 | for internal ones, if no SLA is needed and no one will dance on your head .. then 1 controller is fine | 22:25 |
admin1 | 3 controllers will be a waste of resources as they are just replicating stuff | 22:25 |
psymin | wasting resources is absolutely fine | 22:26 |
psymin | but 2 makes sense | 22:26 |
admin1 | 2 is split brain | 22:26 |
admin1 | 1 or 3 | 22:26 |
psymin | okay, three sounds good | 22:26 |
NeilHanlon | in that case you'd just have your three hosts as all the targets in your openstack_user_config | 22:27 |
psymin | That sounds perfect! | 22:27 |
NeilHanlon | infra hosts, compute host, storage hosts.. etc | 22:27 |
admin1 | no but if you have 3 controllers, where is your compute ? | 22:27 |
admin1 | unless you buy more | 22:27 |
admin1 | hardware | 22:27 |
psymin | can compute not run on a controller? | 22:27 |
NeilHanlon | admin1: also on the 'controller' hardware | 22:27 |
NeilHanlon | hyperconverged | 22:27 |
admin1 | you should not | 22:28 |
psymin | admin1, would running AIO be better? | 22:28 |
admin1 | as compute will eat the cpu and processing of the api and your cluster will die | 22:28 |
admin1 | 3 node is fine .. | 22:28 |
admin1 | 1x = controller, 2x = compute | 22:29 |
admin1 | 1x = controller + storage ( also maybe network ) , rest 2 = compute for your workload | 22:29 |
NeilHanlon | but there's no redundancy for controller components then, admin1 | 22:30 |
psymin | sounds like you're saying that mixing compute and controller will somehow create a feedback loop that eats itself? | 22:30 |
psymin | if all it does is max one cpu 24/7 that is fine | 22:30 |
NeilHanlon | AIO is a controller with integrated compute/storage, so it can definitely work | 22:31 |
admin1 | how many cpus do you have ? | 22:31 |
psymin | admin1, six cores per server, so 18 total | 22:31 |
admin1 | psymin, your controller may die on its own even without also being a compute .. based on your workload | 22:31 |
NeilHanlon | _may_ require some tuning of the CPUs to dedicate cores for project compute | 22:32 |
admin1 | mysql, rabbitmq, network, apis - they are in constant chatter | 22:32 |
psymin | I'm okay with testing, deploying and realizing it is awful. That would be a great start. | 22:32 |
admin1 | then you start with 1 controller and 2 computes | 22:32 |
psymin | okay | 22:33 |
admin1 | so server 1 = raid1 for 2x ssd where you install the OS .. ( debian/ubuntu ) and 6x raid10 for your glance and cinder | 22:33 |
psymin | if that is okay, I'm going to skip the raid for now | 22:34 |
admin1 | it's more hassle if you skip the raid | 22:34 |
admin1 | because how else are you going to expose the cinder and glance | 22:34 |
psymin | raid for cinder and glance makes sense | 22:34 |
admin1 | raid10 on the 6x ssd and expose them via nfs is the cheapest ( in terms of simplicity and resources utilization) path | 22:35 |
psymin | lets back up a moment | 22:36 |
* NeilHanlon has to step away for dinner. biab | 22:37 | |
psymin | okay, for the final deployment I'll set up mdraid. But for a test deployment, like what I'm trying to do first, I think it can be safely skipped. | 22:40 |
psymin | does openstack want the block devices partitioned and formatted? | 22:41 |
admin1 | you want storage for images and volumes .. how do you plan to pass that to openstack ? | 22:41 |
psymin | whichever way it prefers | 22:42 |
admin1 | so the preferred way for your resources will be exposing it via nfs | 22:42 |
psymin | okay, so before deploying openstack w/ ansible, I should partition and format the disks, and set up nfs? | 22:43 |
admin1 | for controller, yes | 22:43 |
admin1 | for computes, you can also have all disks in one big raid10 with a single /boot, swap and / for everything else, for simplicity | 22:44 |
psymin | I'm not sure why that isn't in the openstack ansible guide. | 22:44 |
admin1 | because there are more than 4 dozen ways to do storage | 22:44 |
admin1 | it all depends on what you have and what you want | 22:44 |
admin1 | so it's not possible to put this in a guide saying this is what you must have or should do | 22:44 |
psymin | at this point I want whatever is easiest to get working with ansible. | 22:44 |
admin1 | which i already told you .. on the controller do 2x raid1 for the os (ubuntu), 6x raid10, mount the raid10 to say /srv, create glance and cinder folders and expose them via nfs to your br-storage range | 22:45 |
admin1 | for computes, for simplicity, put everything on raid10 so that you have the redundancy + speed, and create a boot, swap and the rest all / | 22:46 |
psymin | okay, before using ansible, set up raid 1 on two disks, raid 10 on six, install ubuntu to the raid1 disk, mount raid 10 (ext4?) to /srv and serve it up with nfs to the storage network. | 22:49 |
admin1 | i would use xfs, but ext4 is up to you | 22:49 |
psymin | in the ansible config set this for cinder to use nfs? https://docs.openstack.org/openstack-ansible/12.2.6/install-guide/configure-cinder-nfs.html | 22:50 |
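Roughly, yes; the shape of an NFS-backed cinder definition in openstack_user_config.yml looks like the sketch below. The IPs, share path and backend name are illustrative, and the linked guide is for an old release, so the current docs for your branch are the reference for the exact keys:

```yaml
storage_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        nfs_volume:
          volume_backend_name: NFS_VOLUME1
          volume_driver: cinder.volume.drivers.nfs.NfsDriver
          nfs_shares_config: /etc/cinder/nfs_shares
          shares:
            - ip: "172.29.244.11"     # controller's br-storage address
              share: "/srv/cinder"
```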
admin1 | do you have a vlan with IPs to use ? | 22:50 |
admin1 | what is the ip range you plan to use for the vms ? is it a new ip range, some old ip range, is it on a specific vlan , can you ask it to be tagged on a vlan ? | 22:51 |
psymin | I currently have four nics on four private /24 networks. Three of them are isolated, including management. | 22:51 |
admin1 | is management = ssh ? | 22:52 |
psymin | management is not currently ssh, because it is isolated :( | 22:52 |
admin1 | you need one non isolated network for the vms to be reachable from the office | 22:52 |
psymin | the nic and network that is not isolated I call "ext" for external access. | 22:53 |
psymin | do I need more than one nic to not be isolated? I can rename the ext network to management if that is wise. Or talk to my coworker to figure out how to get another network routable. | 22:53 |
admin1 | yes | 22:54 |
admin1 | do you have a ext network that is not isolated ? | 22:54 |
psymin | yes | 22:54 |
admin1 | is it via a specific network interface ? | 22:54 |
psymin | that is what I'm sshing with at the moment | 22:54 |
psymin | eno1 | 22:54 |
admin1 | you need 1 more | 22:54 |
admin1 | where the .1 or .254 is on a router without dhcp, and possibly in a tagged vlan | 22:55 |
admin1 | as openstack controls the dhcp and assigns the ip | 22:55 |
psymin | 192.168.104.1 is a router that isn't serving DHCP to that network, it is the default route for these machines and goes over eno1 | 22:56 |
psymin | the machines have 192.168.104.101, 192.168.104.102 and 192.168.104.103 on eno1 .. eno2, eno3 and eno4 follow a similar format and have IPs assigned and are isolated. | 22:57 |
psymin | you're saying I need one more interface that is routable? I assume it shouldn't be the interface w/ storage, or virtual, so that leaves management | 22:59 |
admin1 | you can have your ssh on eth0 .. so it's 192.168.104.101, 102 and 103 .. you can put your haproxy etc on 192.168.104.100 or .99 and point cloud.domain.com to that .. eth1 can have br-mgmt and br-storage, eth2 can be br-vxlan .. and then finally eth3 can be your routed network like 192.168.105.1 on a VLAN, and this eth3 port on the switch will not be | 23:06 |
admin1 | an access port but a trunk to this vlan | 23:06 |
admin1 | so you can add vlans, say 11-19 or 21-29 etc, to ports eth1, eth2 and eth3 .. this way, you can run multiple isolated networks from the same port | 23:07 |
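As a concrete but hypothetical illustration of that layout on Ubuntu with netplan (Rocky would use NetworkManager instead; the VLAN ID, interface names and addresses below are made up):

```yaml
# eno1 stays plain for ssh / the haproxy VIP; the OSA bridges ride VLANs on eno2.
network:
  version: 2
  ethernets:
    eno1: {}        # ssh / VIP network, addressed elsewhere
    eno2: {}
  vlans:
    eno2.11:
      id: 11
      link: eno2
  bridges:
    br-mgmt:
      interfaces: [eno2.11]
      addresses: [172.29.236.11/22]
```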
psymin | What would you call that network on eth0? | 23:07 |
admin1 | ssh/oob/office network | 23:08 |
admin1 | management in terms of openstack is the internal openstack api network | 23:08 |
admin1 | it's 1 am .. i have to go :D | 23:08 |
psymin | okay, eno1 (eth0) is already set up with routing and I ssh there :) | 23:08 |
admin1 | can continue tomorrow | 23:08 |
psymin | sleep well, thank you | 23:08 |
NeilHanlon | noonedeadpunk generic question when you're around. the openvswitch3.1 change in openstack_hosts; I just went to deploy a rocky AIO and it was missing the exclude on rdo-deps.repo. is it correct we need to bump ansible-role-requirements.yml in stable/zed to pull updated commit? | 23:58 |