opendevreview | Merged openstack/openstack-ansible-rabbitmq_server master: Remove "warn" parameter from command module https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/869663 | 01:24 |
---|---|---|
jamesdenton | Yoga (OVN) -> Zed (OVN) complete with minimal fuss, thanks all. neutron_ovn_ssl needs to be set to 'false' prior to the maintenance, as well as cleaning up env.d beforehand. The only real issue (besides the stale NFS mount causing grief) is related to Ironic tftp and inspector changes that may need to be accounted for | 04:15 |
noonedeadpunk | Need some votes on https://review.opendev.org/q/topic:bump_osa+status:open :) | 08:49 |
noonedeadpunk | Hm, I kind of don't understand how variables from role defaults will have any effect on playbook.... | 09:04 |
noonedeadpunk | (it's regarding haproxy thing) | 09:05 |
noonedeadpunk | I will totally need to deploy thing thing to understand as now I'm not | 09:09 |
noonedeadpunk | *this thing | 09:09 |
jrosser | noonedeadpunk: I did not even see yet where those role vars are supposed to be used | 09:28 |
jrosser | the ones starting _ | 09:28 |
noonedeadpunk | They're used as default for haproxy_services | 09:29 |
noonedeadpunk | and haproxy_services is for haproxy role | 09:29 |
noonedeadpunk | But I don't really understand how it works.... | 09:29 |
jrosser | omg | 09:29 |
noonedeadpunk | as last line is `haproxy_services: "{{ haproxy_nova_services | default(_haproxy_services) }}"` | 09:30 |
noonedeadpunk | (in node defaults) | 09:30 |
noonedeadpunk | *nova | 09:30 |
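(For readers of the log: a minimal sketch of the pattern being discussed here, with variable names paraphrased rather than quoted from the review; only the last line is taken from the chat above.)

```yaml
# openstack-ansible-os_nova defaults/main.yml (sketch; names assumed)
_nova_api_haproxy_service:
  haproxy_service_name: nova_api_os_compute
  haproxy_backend_nodes: "{{ groups['nova_api_os_compute'] | default([]) }}"
  haproxy_port: 8774
  haproxy_balance_type: http

_haproxy_services:
  - "{{ _nova_api_haproxy_service }}"

# the line quoted above: role defaults feed the haproxy role's service list
haproxy_services: "{{ haproxy_nova_services | default(_haproxy_services) }}"
```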
jrosser | surely these just go in the playbooks | 09:30 |
noonedeadpunk | I don't feel we're reducing complexity either | 09:30 |
noonedeadpunk | But likely I just don't understand smth | 09:31 |
jrosser | imho we should not touch the other roles with any of this | 09:33 |
noonedeadpunk | Well, if we're not including haproxy role from other roles - there's no reason to touch them for sure | 09:34 |
noonedeadpunk | so I do agree with you here | 09:35 |
jrosser | I think there might be some middle ground with much less changes | 09:51 |
jrosser | though fundamentally this really seems to be a question of whether we configure haproxy up front in one go, or incrementally as the deployment proceeds | 09:52 |
jrosser | and deciding which of those is preferable seems step #0 | 09:52 |
noonedeadpunk | I kind of like the idea of configuring backends when we run a specific service; historically there have been a lot of issues where re-configuring backends on a haproxy role run affects services that aren't planned to run until hours later | 09:56 |
jrosser | perhaps just filtering the haproxy_services list in each playbook is much simpler | 09:56 |
jrosser | kind of like we do inside roles already for filtered_services | 09:57 |
jrosser | then it would only deploy a subset of the list along with each service playbook | 09:57 |
noonedeadpunk | I was thinking even to just feed haproxy_*_service inside playbook... | 09:57 |
noonedeadpunk | or filter, yes | 09:57 |
jrosser | right yes either of those is possible | 09:58 |
noonedeadpunk | based on backend nodes or smth | 09:58 |
jrosser | filter might be better as then the playbook doesn’t have to know how many things a service has | 09:58 |
noonedeadpunk | yeah, we just kind of need to find something consistent to filter on | 09:59 |
noonedeadpunk | and like nova is not that easy probably? | 09:59 |
jrosser | perhaps that sort of approach leads to the same result Damian has worked on but with simplification | 09:59 |
noonedeadpunk | or well, based on backend nodes might be valid thing | 10:00 |
jrosser | nova and ironic are probably the most complex | 10:00 |
noonedeadpunk | we're running the play against a group, and if a host is in this group we can likely run the backend... But yes, for nova it won't be perfect as we will configure consoles when running api or metadata | 10:01 |
jrosser | perhaps this is worth a prototype | 10:02 |
jrosser | sounds relatively simple to try for something like glance or keystone just to see what the code looks like | 10:02 |
noonedeadpunk | yup | 10:03 |
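(A rough sketch of what the per-playbook filtering prototype could look like for glance, assuming haproxy_default_services keeps its current list-of-`service` structure and that the haproxy_server role accepts the filtered list via haproxy_service_configs; exact variable and role names would need to be checked against the roles.)

```yaml
# playbooks/os-glance-install.yml (prototype sketch only)
- name: Configure haproxy frontends/backends for glance only
  hosts: haproxy
  vars:
    # keep only the entries whose service name mentions glance
    glance_haproxy_services: >-
      {{ haproxy_default_services
         | selectattr('service.haproxy_service_name', 'search', 'glance')
         | list }}
  roles:
    - role: haproxy_server
      haproxy_service_configs: "{{ glance_haproxy_services }}"
```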
jrosser | noonedeadpunk> I was thinking even to just feed haproxy_*_service inside playbook... | 10:21 |
jrosser | ^ you know maybe this is exactly what we need - maybe no actual benefit from making it more complicated | 10:22 |
noonedeadpunk | the only concern I have about this is that historically haproxy_default_services was the only way to override any given backend. So I assume plenty of deployments fully overrode it a long time ago and rely on it | 10:37 |
noonedeadpunk | But I'm not sure how much we should worry about that. haproxy_*_service has been a thing for a couple of releases at least, and given a solid release note.... | 10:38 |
noonedeadpunk | it might be fine to say that well, it's time to revise your overrides | 10:39 |
*** dviroel|out is now known as dviroel | 10:41 | |
jrosser | it would also be nice to be able to run everything on port 443 | 10:47 |
noonedeadpunk | ah, yes, that is sweet thingy | 11:03 |
noonedeadpunk | but we need that only for frontends I believe | 11:03 |
noonedeadpunk | it doesn't matter what will be on backend side | 11:04 |
noonedeadpunk | Though it's not only haproxy ACLs | 11:04 |
noonedeadpunk | As also endpoints should be set accordingly | 11:04 |
noonedeadpunk | and the tricky thing is that right now glance_service_port, for example, is used both to determine the endpoint URI and where the backend will bind | 11:05 |
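(To illustrate the coupling described here, with simplified/hypothetical variable names rather than ones copied from the roles: today one port variable ends up in both the keystone endpoint and the backend bind, so serving every frontend on 443 would need the two to be split, roughly like this.)

```yaml
# current situation (simplified): one variable drives both sides
glance_service_port: 9292
glance_service_publicurl: "https://{{ external_lb_vip_address }}:{{ glance_service_port }}"  # endpoint URI
glance_api_bind: "{{ management_address }}:{{ glance_service_port }}"                        # backend bind

# hypothetical split that a 443-only frontend would require
glance_frontend_port: 443   # what goes into the endpoint and the haproxy frontend/ACL
glance_backend_port: 9292   # where glance-api actually listens behind haproxy
```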
gokhanis | hello folks, I can ssh to the nodes manually but ansible cannot connect to them. When I delete the control path I can run this command > SSH: EXEC sftp -b - -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/3e38b90850 '[x.x.x.x]' | 11:32 |
gokhanis | but with control path I can't connect | 11:32 |
gokhanis | I tried with ansible versions 2.13.7 and 2.10.17; neither worked. | 11:36 |
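(Side note for anyone hitting the same symptom, i.e. manual ssh works but Ansible's persistent connections fail: one way to take ControlMaster out of the picture while debugging is an ssh-args override such as the sketch below. In this case the root cause turned out to be MTU, as noted further down.)

```yaml
# e.g. in group_vars/all.yml or the inventory - debugging aid only,
# disables OpenSSH connection multiplexing for Ansible's ssh connections
ansible_ssh_common_args: "-o ControlMaster=no -o ControlPath=none"
```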
opendevreview | Merged openstack/openstack-ansible stable/yoga: Bump OpenStack-Ansible Yoga https://review.opendev.org/c/openstack/openstack-ansible/+/870810 | 11:58 |
admin1 | instead of having just 1 haproxy, are we moving to add a haproxy in each container? | 12:10 |
jamesdenton | that's the juju way | 12:24 |
gokhanis | it was an MTU problem, ignore my question | 12:25 |
admin1 | i still could not get s3.domain.com to work :( .. else my plan was to create sections like auth.cloud.domain.com, images.cloud.domain.com and map all endpoints to the right backend, and then block the other ports from the firewall so that everything is on https:// | 12:37 |
admin1 | and updating the public endpoint url to the correct one | 12:37 |
admin1 | i am thinking of doing this via a post-install ansible playbook that will add the sections on haproxy and update it | 12:38 |
dokeeffe85 | Hi all, long winded question so I dropped it here - https://paste.openstack.org/show/bdYw6r944dixmJRqZ3yq/ thanks for any response :) | 12:59 |
jrosser | admin1: share what you tried to get s3.domain.com to work | 13:08 |
jrosser | admin1: to answer your previous question, no, it is not a proposal to add haproxy in each container | 13:10 |
jrosser | admin1: but instead to make the playbook for each service set up haproxy for just that service, rather than doing all of them at the start with the haproxy playbook | 13:11 |
opendevreview | Merged openstack/openstack-ansible stable/zed: Bump OpenStack-Ansible Zed https://review.opendev.org/c/openstack/openstack-ansible/+/871152 | 13:16 |
noonedeadpunk | sweet ^ | 14:02 |
jamesdenton | sweet. | 14:03 |
jamesdenton | i will bump my zed to new zed today | 14:04 |
mgariepy | jamesdenton, you upgraded a day too early ;p | 14:04 |
jamesdenton | i know, right? lol | 14:04 |
mgariepy | how went the ovn ssl stuff? | 14:04 |
jamesdenton | Well, i skipped the SSL stuff to not break my deployment | 14:04 |
jamesdenton | in prior testing, non-ssl to ssl did not go well | 14:05 |
jamesdenton | and i haven't revisited it, yet | 14:05 |
mgariepy | ha ok | 14:05 |
jamesdenton | but i am happy overall with how seamlessly the process went once i revisited the deprecations | 14:06 |
jamesdenton | noonedeadpunk: a user on the mailing list hit a bug with the Y->Z upgrade that I also hit yesterday: https://paste.opendev.org/show/b7qIl2idoic46ewTOFDK/. Only I hit it with both the ansible-etcd and ansible-pacemaker-corosync repos. I set the version to the latest commit instead of master and that seemed to work | 14:24 |
noonedeadpunk | I've seen bug report regarding that | 14:25 |
noonedeadpunk | well, if the latest commit works, then the new release I've pushed should work as well | 14:25 |
jamesdenton | Well, latest commit doesn't touch ansible-role-requirements block for ansible-etcd or ansible-pacemaker-corosync, so not sure if that will make a difference or not | 14:26 |
jamesdenton | *latest release, sorry | 14:27 |
jamesdenton | this was my workaround: https://paste.opendev.org/show/bLLV9QkPIs5A8UgS1Lb7/ | 14:27 |
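(The paste isn't reproduced here, but the shape of the workaround is pinning the two GitHub-hosted roles in ansible-role-requirements.yml to a known-good commit instead of master; the names, URLs and SHAs below are placeholders/approximations, not the actual values from the paste.)

```yaml
# ansible-role-requirements.yml (workaround sketch)
- name: etcd
  scm: git
  src: https://github.com/noonedeadpunk/ansible-etcd
  version: <latest-commit-sha>   # pinned instead of master
- name: pacemaker_corosync
  scm: git
  src: https://github.com/noonedeadpunk/ansible-pacemaker-corosync
  version: <latest-commit-sha>
```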
noonedeadpunk | oh | 14:46 |
noonedeadpunk | Seems like my bump script went south | 14:46 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Fix bump of github repos https://review.opendev.org/c/openstack/openstack-ansible/+/871296 | 15:46 |
noonedeadpunk | ^ that should fix it | 15:46 |
*** dviroel is now known as dviroel|lunch | 15:49 | |
noonedeadpunk | wow, https://monty-says.blogspot.com/2022/12/i-want-to-wish-you-happy-new-year-with.html looks quite interesting.... Wonder how dramatically it will break.... | 16:17 |
noonedeadpunk | but I think for AA we will finally get our mariadb version updated, as 10.11 is supposed to be an LTS one | 16:18 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Bump rabbitmq to 3.11 and erlang to 25.2 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/871303 | 16:31 |
noonedeadpunk | damn it | 16:35 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/yoga: Bump erlang version to cover CVE-2022-37026 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/871304 | 16:42 |
*** dviroel|lunch is now known as dviroel | 16:46 | |
moha7 | Hello | 19:02 |
jamesdenton | zzzz | 19:09 |
joetenmus | Hey there | 21:02 |
joetenmus | I'm gonna deploy OSA over a bunch of nodes hosted by ESXi, and here are the GlusterFS errors that I encounter each time I run the 2nd step (setup-infrastructure.yml)! Would you please review this error: https://pastebin.com/7G2xfTdw | 21:03 |
joetenmus | Based on this blog post: https://satishdotpatel.github.io/openstack-ansible-glance-with-glusterfs, GlusterFS is in use for glance, but I'm using NFS for storing images! | 21:08 |
*** dviroel is now known as dviroel|out | 21:22 | |
jrosser | joetenmus: i don't think that blog post is relevant for you | 21:34 |
jrosser | joetenmus: there is a small glusterfs set up internally as part of the OSA deployment by default to make a shared filesystem across your 'repo' containers | 21:36 |
joetenmus | So, is GlusterFS required for the repo containers in any case? | 21:41 |
jrosser | joetenmus: that is the default, but the requirement is to have a shared filesystem of some sort | 21:45 |
jrosser | joetenmus: anyway the key thing that i see is you use the term "ESXi" and it is very important that you disable any network security around mac/ip addresses on the virtual machines | 21:46 |
joetenmus | Those dirty things are disabled | 21:47 |
jrosser | your symptom is of broken connectivity between the repo container on one VM and the repo containers on the others | 21:47 |
jrosser | from infra02_repo_container-733e8f33 it does /usr/sbin/gluster --mode=script peer probe infra01-repo-container-a051b328 and this fails | 21:48 |
jrosser | you should look at the networking and make sure that from inside the infra2 repo container you can ping the IP of the infra1 repo container with `-I eth1` to ensure that the ping runs across the mgmt network | 21:49 |
joetenmus | Yeah, I manually tried to probe it, but it failed; there's no ssh connection or ping issue. What kind of protocol does it use? | 21:50 |
jrosser | well gluster is a filesystem with its own protocol | 21:50 |
jrosser | you could check the log of the gluster daemon on the containers | 21:50 |
jrosser | have you first built an 'all-in-one' deployment? | 21:50 |
joetenmus | all-in-one? No, in which step should it be deployed? | 21:52 |
jrosser | before trying a multinode deployment it can be beneficial to start with something simpler | 21:53 |
joetenmus | Ah | 21:53 |
jrosser | https://docs.openstack.org/openstack-ansible/zed/user/aio/quickstart.html | 21:53 |
jrosser | joetenmus: unfortunately we made a release of Zed today which included an error | 21:55 |
jrosser | you will need to apply this patch https://review.opendev.org/c/openstack/openstack-ansible/+/871296/1/ansible-role-requirements.yml | 21:55 |
joetenmus | For the all-in-one? | 21:56 |
joetenmus | Ah, I missed the above line | 21:56 |
joetenmus | ok | 21:56 |
joetenmus | Is it ok to use the master branch? | 21:57 |
jrosser | master is the development branch for the next release | 21:57 |
joetenmus | Then I have no idea how to patch it! you mean editing the file manually? | 21:58 |
jrosser | clone the repo, checkout stable/zed branch | 21:58 |
jrosser | go here https://review.opendev.org/c/openstack/openstack-ansible/+/871296 | 21:59 |
jrosser | press the "three little dots" menu top right and choose download patch | 21:59 |
jrosser | press the "copy" button at the end of the cherry-pick line and paste that into the terminal where you cloned the repo | 22:00 |
jrosser | anyway, an all-in-one should deploy something that works for you without too much difficulty | 22:00 |
jrosser | it generates its own config completely and is entirely self contained in one VM with one interface and one IP | 22:01 |
jrosser | the downside is that some compromises are made, like the services are not highly available and the networking is quite specific to that single-VM use case | 22:02 |
jrosser | what you do get though is a reference that you can look at when trying to understand how a multinode deployment should look | 22:03 |
joetenmus | Thank you jrosser | 22:19 |