noonedeadpunk | ElDuderino: I think concurrency would depend mostly on the number of processes or threads that are running, and these vars control it https://opendev.org/openstack/openstack-ansible-os_cinder/src/commit/369f01589c6d2a26f03fe5e30f2ff210dd9fb826/defaults/main.yml#L231-L233. I'm not sure which cinder driver you used, but in the case of Ceph you also want an active/active setup, which needs zookeeper. | 07:57 |
noonedeadpunk | It's not only Ceph that supports active/active though; NFS doesn't, for example | 07:57 |
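For reference, the concurrency defaults linked above could be overridden in /etc/openstack_deploy/user_variables.yml. A minimal sketch, assuming the usual OSA worker/thread variable naming for the os_cinder role (the exact names should be verified against the linked defaults/main.yml before use):

```yaml
## /etc/openstack_deploy/user_variables.yml -- illustrative values only
# Assumed variable names; check os_cinder defaults/main.yml L231-L233
cinder_wsgi_processes: 8   # API worker processes
cinder_wsgi_threads: 2     # threads per worker process
```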
noonedeadpunk | You might also be interested in using rally, as its intention is to run exactly such SLA tests with concurrent execution of API calls | 07:58 |
noonedeadpunk | https://docs.openstack.org/rally/latest/quick_start/tutorial/step_1_setting_up_env_and_running_benchmark_from_samples.html | 08:00 |
noonedeadpunk | We also have a role and playbook that installs rally | 08:00 |
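As a sketch, a rally scenario exercising concurrent cinder API calls could look like the following (rally ships similar samples; the times/concurrency/SLA values here are purely illustrative):

```yaml
---
CinderVolumes.create_and_delete_volume:
  - args:
      size: 1                 # volume size in GB
    runner:
      type: constant
      times: 50               # total iterations to run
      concurrency: 10         # parallel API calls at any moment
    sla:
      failure_rate:
        max: 0                # fail the task if any iteration errors
```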
jrosser | morning | 08:01 |
noonedeadpunk | o/ | 08:01 |
noonedeadpunk | jrosser: I was thinking since yesterday - how widespread is the use case for the default you proposed `haproxy_frontend_redirect_extra_raw: "{{ haproxy_frontend_extra_raw }}"`? While I've used haproxy_frontend_raw a couple of times, I never needed to add this to the redirect as well | 08:05 |
noonedeadpunk | Or maybe I didn't know that I actually needed that :D | 08:06 |
jrosser | well I didn’t know either | 08:06 |
jrosser | as I figured from yesterday's CVE that the parser must be used in all frontends | 08:06 |
jrosser | but that’s kind of speculative | 08:07 |
noonedeadpunk | aha, okay | 08:07 |
jrosser | so I left an escape hatch there to override the redirect one to [] if needed | 08:07 |
noonedeadpunk | yeah, makes total sense then | 08:08 |
jrosser | we could choose to not default that to the other var though? | 08:08 |
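For clarity, the escape hatch described above would just be an override in user_variables.yml, keeping the raw config on the main frontend while dropping it from the redirect one:

```yaml
# Keep haproxy_frontend_extra_raw as-is, but render nothing extra
# into the redirect frontend
haproxy_frontend_redirect_extra_raw: []
```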
Elnaz | Hey | 09:43 |
Elnaz | Is it possible to set proxy in the config file somewhere? | 09:43 |
Elnaz | You are using curl in the ansible code to fetch constraints files. For example: https://releases.openstack.org/constraints/upper/fc7e2105e81c352602085bd2928a706d0ab8a80d | 09:45 |
Elnaz | redirected to an opendev url. | 09:45 |
Elnaz | I can replace all curl calls with `curl -x socks5h://0:8080`, but I'm wondering if there's a clean way implemented by OSA itself | 09:47 |
Elnaz | I have an issue with this kind of vars: `vim +32 /opt/openstack-ansible/playbooks/utility-install.yml` | 09:54 |
noonedeadpunk | Elnaz: have you checked our docs for environments with limited connectivity? | 10:10 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible/latest/user/limited-connectivity/index.html | 10:10 |
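A minimal sketch of the approach the linked doc describes, assuming the usual OSA proxy variables (verify names and values against the doc for your release; the proxy address is a placeholder):

```yaml
## /etc/openstack_deploy/user_variables.yml -- placeholder proxy values
# Proxy applied only while deployment tasks run on the deploy host
deployment_environment_variables:
  http_proxy: "http://proxy.example.com:3128"
  https_proxy: "http://proxy.example.com:3128"
  no_proxy: "localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }}"
```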
Elnaz | no i didn't know! thank you, i'm reading it now | 10:20 |
damiandabrowski | hmm, either i'm missing something or we may have a race condition for the LE http-01 challenge on multinode environments. | 13:11 |
damiandabrowski | so normally during the http-01 challenge, letsencrypt talks to the haproxy VIP, which (via the horizon acls) forwards the request to a letsencrypt-backend. | 13:11 |
damiandabrowski | letsencrypt backends are up only for a few seconds during this task: https://github.com/openstack/openstack-ansible-haproxy_server/blob/master/tasks/haproxy_ssl_letsencrypt.yml#L64 | 13:11 |
damiandabrowski | so everything works fine... if only 1 letsencrypt-backend is up at a time. | 13:11 |
damiandabrowski | But we run haproxy-install.yml with serial 50%, so if 2 haproxy nodes try to issue a certificate with certbot, the http-01 request may be forwarded to an incorrect node. | 13:12 |
damiandabrowski | What do you think? | 13:12 |
mgariepy | i didn't see that issue occur when i deployed let's encrypt in a couple of places a couple of years ago. | 13:29 |
mgariepy | hmm. last year actually :D | 13:31 |
damiandabrowski | okok, thanks for the input. Maybe all nodes share the same validation token so it doesn't matter where the request lands | 13:32 |
damiandabrowski | i'll check that | 13:32 |
mgariepy | i haven't tested that much either since it worked the first time on most of my deployments. | 13:36 |
mgariepy | the one that failed was another issue (previous LE certs were there for historic reasons) | 13:37 |
mgariepy | might also be the stick table that forwards to the same server | 13:42 |
damiandabrowski | ouh, the weird thing is that i don't see any incoming requests to certbot-front when issuing a new cert with certbot | 13:58 |
damiandabrowski | but instead, in /var/log/letsencrypt/letsencrypt.log i see a lot of requests TO letsencrypt.org servers | 13:58 |
jrosser | damiandabrowski: it is quite possible to have that race condition maybe - even though i have 3 infra nodes i always have 2 dedicated haproxy | 13:59 |
damiandabrowski | guess i need to read more about certbot | 13:59 |
jrosser | so that serial 50% would work correctly in my situation, but perhaps not with 3 haproxy nodes | 13:59 |
jrosser | damiandabrowski: also what is certbot-front? i don't have those | 14:01 |
damiandabrowski | ah wait, i probably messed up something. It's my temporary service handling requests on port 80 when horizon haproxy service is not defined | 14:03 |
damiandabrowski | but horizon is also not receiving any requests lol | 14:05 |
jrosser | damiandabrowski: this would be neat to use in haproxy to avoid a race with LE https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html#restricting-execution-with-throttle | 18:35 |
jrosser | we could put the tasks that need to be serialised in a block: with throttle: 1 | 18:37 |
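A rough sketch of that idea (the task content and domain argument are illustrative, not the role's actual tasks):

```yaml
# Serialise the LE challenge across haproxy nodes even when the play
# runs with serial: 50% -- throttle applies per-task within the block
- name: Issue Let's Encrypt certificate one host at a time
  throttle: 1
  block:
    - name: Run certbot for the http-01 challenge
      ansible.builtin.command: >-
        certbot certonly --standalone -n --agree-tos
        -d {{ external_lb_vip_address }}
```

With throttle: 1 on the block, each task inside it runs on at most one host at a time, so only one letsencrypt-backend is ever up during the challenge.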
ElDuderino | @noonedeadpunk thanks for the info, I'm finally back to my IRC session, and saw your note. I'll check the vars and see if we can massage them. As for the driver, we use netapp.common.NetAppDriver. Thank you for responding!! | 20:03 |
damiandabrowski | jrosser: thanks, that looks promising! | 22:23 |