*** cshen has joined #openstack-ansible | 00:04 | |
*** cshen has quit IRC | 00:08 | |
*** DanyC has joined #openstack-ansible | 00:35 | |
*** DanyC has quit IRC | 00:41 | |
*** cshen has joined #openstack-ansible | 00:44 | |
*** maharg101 has quit IRC | 00:46 | |
*** maharg101 has joined #openstack-ansible | 00:47 | |
*** cshen has quit IRC | 00:49 | |
*** miloa has quit IRC | 00:51 | |
*** macz_ has joined #openstack-ansible | 00:54 | |
*** macz_ has quit IRC | 00:59 | |
*** prometheanfire has joined #openstack-ansible | 01:02 | |
*** DanyC has joined #openstack-ansible | 01:08 | |
*** DanyC has quit IRC | 01:13 | |
*** rh-jelabarre has quit IRC | 02:21 | |
*** cshen has joined #openstack-ansible | 02:45 | |
*** cshen has quit IRC | 02:50 | |
*** gyee has quit IRC | 02:53 | |
*** cshen has joined #openstack-ansible | 03:01 | |
*** gkadam has joined #openstack-ansible | 03:03 | |
*** cshen has quit IRC | 03:05 | |
*** gkadam_ has joined #openstack-ansible | 03:16 | |
*** gkadam has quit IRC | 03:16 | |
*** gkadam_ has quit IRC | 03:17 | |
*** mloza has quit IRC | 04:45 | |
*** cshen has joined #openstack-ansible | 05:02 | |
*** udesale has joined #openstack-ansible | 05:06 | |
*** cshen has quit IRC | 05:06 | |
*** cshen has joined #openstack-ansible | 05:17 | |
*** cshen has quit IRC | 05:22 | |
*** evrardjp has quit IRC | 05:36 | |
*** evrardjp has joined #openstack-ansible | 05:36 | |
*** cshen has joined #openstack-ansible | 06:00 | |
*** cshen has quit IRC | 06:05 | |
*** cshen has joined #openstack-ansible | 07:00 | |
*** shyamb has joined #openstack-ansible | 07:14 | |
*** miloa has joined #openstack-ansible | 07:29 | |
*** shyamb has quit IRC | 07:36 | |
*** DanyC has joined #openstack-ansible | 08:00 | |
*** rpittau|afk is now known as rpittau | 08:23 | |
*** shyamb has joined #openstack-ansible | 08:30 | |
*** wpp has quit IRC | 08:31 | |
*** shyamb has quit IRC | 08:31 | |
*** shyamb has joined #openstack-ansible | 08:31 | |
*** DanyC has quit IRC | 08:33 | |
*** shyamb has quit IRC | 08:49 | |
*** DanyC has joined #openstack-ansible | 08:57 | |
*** wpp has joined #openstack-ansible | 09:00 | |
openstackgerrit | Chandan Kumar (raukadah) proposed openstack/ansible-role-python_venv_build master: Switch to CentOS-8 based tripleo Job https://review.opendev.org/715365 | 09:07 |
*** shyamb has joined #openstack-ansible | 09:09 | |
openstackgerrit | Chandan Kumar (raukadah) proposed openstack/ansible-config_template master: Switch to CentOS 8 based Tripleo job https://review.opendev.org/715367 | 09:11 |
openstackgerrit | Chandan Kumar (raukadah) proposed openstack/openstack-ansible-os_tempest master: Switch to CentOS-8 based TripleO job https://review.opendev.org/715368 | 09:13 |
*** itandops has joined #openstack-ansible | 09:15 | |
itandops | Hi all, I'm installing openstack 20.0.1 but I get this error http://paste.openstack.org/show/791222/. The setup-hosts playbook works well but this arises at the end of setup-infrastructure. Any suggestions to solve this, please? | 09:21 |
itandops | I get this error also in 20.0.2 | 09:21 |
*** tosky has joined #openstack-ansible | 09:29 | |
*** udesale_ has joined #openstack-ansible | 09:35 | |
*** udesale has quit IRC | 09:38 | |
openstackgerrit | Chandan Kumar (raukadah) proposed openstack/openstack-ansible-os_tempest master: Added tempest_tempestconf_profile_ specific vars https://review.opendev.org/714601 | 09:48 |
*** sshnaidm|afk is now known as sshnaidm|off | 09:50 | |
*** gshippey has joined #openstack-ansible | 10:16 | |
*** shyamb has quit IRC | 10:17 | |
*** shyamb has joined #openstack-ansible | 10:20 | |
*** kopecmartin has quit IRC | 10:22 | |
*** kopecmartin has joined #openstack-ansible | 10:27 | |
*** jbadiapa has joined #openstack-ansible | 10:41 | |
jamesfreeman959 | Hello all - I had a problem last night deploying openstack-ansible v19.0.11 - this is a completely fresh install working from this example: https://docs.openstack.org/openstack-ansible/stein/user/ceph/full-deploy.html | 10:47 |
jamesfreeman959 | I hit a failure running setup-openstack.yml | 10:47 |
jamesfreeman959 | TASK [os_ceilometer : Initialize Gnocchi database by creating ceilometer resources] ******************************************************************************************************************************************************************************************* | 10:48 |
jamesfreeman959 | [WARNING]: Module remote_tmp /var/lib/ceilometer/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually | 10:48 |
jamesfreeman959 | fatal: [infra1_ceilometer_central_container-9d1d0923]: FAILED! => {"changed": false, "cmd": ["/openstack/venvs/ceilometer-19.0.11/bin/ceilometer-upgrade"], "delta": "0:00:14.575090", "end": "2020-03-26 21:45:09.850635", "msg": "non-zero return code", "rc": 1, "start": "2020-03-26 21:44:55.275545", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} | 10:48 |
jamesfreeman959 | No immediate errors seen in either the gnocchi or the ceilometer containers on infra1 | 10:48 |
jamesfreeman959 | If anyone has any ideas on how to debug or resolve I would be grateful! | 10:48 |
*** shyamb has quit IRC | 10:52 | |
*** shyamb has joined #openstack-ansible | 10:54 | |
*** spatel has joined #openstack-ansible | 10:57 | |
*** spatel has quit IRC | 11:01 | |
*** arxcruz is now known as arxcruz|off | 11:19 | |
*** jamesden_ has joined #openstack-ansible | 11:22 | |
*** rpittau is now known as rpittau|bbl | 11:32 | |
*** shyamb has quit IRC | 11:37 | |
noonedeadpunk | jamesfreeman959: have you tried running /openstack/venvs/ceilometer-19.0.11/bin/ceilometer-upgrade manually? any output? | 11:39 |
noonedeadpunk | jrosser: now seems that pytest is failing for rocky.... https://zuul.opendev.org/t/openstack/build/f1f26735282d4c37a706894c6869c89e/log/job-output.txt#10011 | 11:41 |
noonedeadpunk | oh, I think we just get master u-c | 11:42 |
noonedeadpunk | I guess you've patched that for train? | 11:42 |
noonedeadpunk | found it https://review.opendev.org/#/c/703979/ | 11:44 |
*** macz_ has joined #openstack-ansible | 11:46 | |
jrosser | iirc we didn’t see problems on rocky at the time | 11:48 |
jrosser | but those should really be backported further I think | 11:48 |
*** macz_ has quit IRC | 11:51 | |
jamesfreeman959 | noonedeadpunk: Please forgive my inexperience - would I run that script inside the LXC container on infra1? | 11:53 |
noonedeadpunk | yes, inside ceilometer container | 11:54 |
jamesfreeman959 | Ok - will fire up the environment and test and get back to you. Thanks! | 11:54 |
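A rough sketch of that manual check, assuming the stock Stein LXC layout (the container name is the one from the failure message above, and --debug is just the usual oslo.log verbosity flag):

    # on the infra1 host, attach to the ceilometer container
    lxc-attach -n infra1_ceilometer_central_container-9d1d0923
    # run the upgrade by hand; --debug should surface why it exits non-zero
    /openstack/venvs/ceilometer-19.0.11/bin/ceilometer-upgrade --debug
    echo $?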
*** mathlin_ has quit IRC | 11:57 | |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_tempest stable/rocky: Use contraints for tempest plugins https://review.opendev.org/715407 | 12:04 |
jamesfreeman959 | noonedeadpunk: I just powered up the VM's again, and looked in /openstack/venvs/ - it only contains: "cinder-19.0.11 neutron-19.0.11" | 12:08 |
jamesfreeman959 | no ceilometer directory | 12:08 |
noonedeadpunk | hm. I think you;re not supposed to have neutron and cinder inside ceilometer-center lxc container | 12:09 |
noonedeadpunk | *ceilometer-central | 12:10 |
jamesfreeman959 | noonedeadpunk: ah - this is on the bare host. I powered the nodes down overnight. Powered them up this morning but no containers have come up. Do I need to run the playbook again for them to come up? | 12:11 |
noonedeadpunk | you may have them in case of metal build... | 12:11 |
jamesfreeman959 | last night there were LXC containers running | 12:11 |
noonedeadpunk | what does lxc-ls say? | 12:13 |
jamesfreeman959 | It returns no output | 12:13 |
noonedeadpunk | actually lxc containers should spawn up with node | 12:13 |
noonedeadpunk | do you run it with root privileges? | 12:14 |
jamesfreeman959 | omg - I'm so sorry - clearly not enough coffee this morning | 12:14 |
jamesfreeman959 | ok - now that I've got over that "moment" - running ceilometer-upgrade returns no output, and the exit code is 1 | 12:15 |
*** rh-jelabarre has joined #openstack-ansible | 12:17 | |
noonedeadpunk | ok. can you enter utility container, source openrc, and run openstack endpoint list --service metric ? | 12:18 |
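Spelled out, that check is roughly the following (the utility container name is whatever `lxc-ls` reports on the deploy host; /root/openrc is where OSA normally drops the admin credentials):

    lxc-attach -n <utility_container_name>
    source /root/openrc
    openstack endpoint list --service metric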
jamesfreeman959 | It hung for a while, and now has returned "Gateway Timeout (HTTP 504)" | 12:21 |
noonedeadpunk | hum | 12:22 |
noonedeadpunk | I'd say you might have issues with keystone... | 12:23 |
noonedeadpunk | at the moment, which might be a result of the reboot... Are mariadb and the other infra running ok? | 12:24 |
noonedeadpunk | like memcached, rabbitmq and stuff | 12:24 |
jamesfreeman959 | mariadb is down completely on all 3 nodes | 12:26 |
noonedeadpunk | I guess you'll need to repair galera then first | 12:28 |
*** DanyC has quit IRC | 12:34 | |
*** DanyC has joined #openstack-ansible | 12:34 | |
*** DanyC has quit IRC | 12:39 | |
jamesfreeman959 | noonedeadpunk: Running through https://docs.openstack.org/openstack-ansible/stein/admin/maintenance-tasks.html#galera-cluster-recovery - all nodes are down and I don't have a clear one to bootstrap the cluster from. Also no backups, as I'd only just got this partially built. What would you say is my most efficient strategy to recover? | 12:41 |
*** cshen has quit IRC | 12:43 | |
noonedeadpunk | I'd say you should just select one host and start a new cluster on it. After that the other 2 nodes should start without issues and join | 12:45 |
noonedeadpunk | like https://mariadb.com/kb/en/getting-started-with-mariadb-galera-cluster/#bootstrapping-a-new-cluster | 12:46 |
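A minimal sketch of that recovery, assuming a systemd MariaDB/Galera install where the galera_new_cluster helper is available (the MariaDB page linked above describes the procedure in more detail):

    # on every galera container: find the most advanced node (highest seqno)
    cat /var/lib/mysql/grastate.dat
    # on that one node only, bootstrap a fresh cluster
    galera_new_cluster
    # then start mariadb normally on the other nodes so they rejoin/SST
    systemctl start mariadb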
*** tlunkw has joined #openstack-ansible | 12:47 | |
*** shyamb has joined #openstack-ansible | 12:48 | |
*** tlunkw has quit IRC | 12:48 | |
*** tlunkw has joined #openstack-ansible | 12:49 | |
*** tlunkw has quit IRC | 12:51 | |
*** tlunkw has joined #openstack-ansible | 12:51 | |
*** partlycloudy has quit IRC | 12:53 | |
*** partlycloudy has joined #openstack-ansible | 12:53 | |
*** rholloway has joined #openstack-ansible | 12:55 | |
*** spatel has joined #openstack-ansible | 13:00 | |
*** openstackstatus has quit IRC | 13:01 | |
*** openstack has joined #openstack-ansible | 13:05 | |
*** ChanServ sets mode: +o openstack | 13:05 | |
jamesfreeman959 | noonedeadpunk: Ok I backtracked - got the Galera cluster fixed and running. Now "openstack endpoint list --service metric" looks sane in the utility container I think | 13:05 |
jamesfreeman959 | +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+ | 13:06 |
jamesfreeman959 | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | | 13:06 |
jamesfreeman959 | +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+ | 13:06 |
jamesfreeman959 | | 568b598751694a26a73d4b4786f4bb43 | RegionOne | gnocchi | metric | True | admin | http://172.29.236.9:8041 | | 13:06 |
jamesfreeman959 | | 7ec4b935131c4a1ba9b898a93ceece5d | RegionOne | gnocchi | metric | True | public | http://openstack.example.org:8041 | | 13:06 |
jamesfreeman959 | | 80e0cf1cbf0f4cbabf86db15b2037a04 | RegionOne | gnocchi | metric | True | internal | http://172.29.236.9:8041 | | 13:06 |
jamesfreeman959 | +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+ | 13:06 |
noonedeadpunk | jamesfreeman959: please use paste.openstack.org :p | 13:06 |
jamesfreeman959 | Sorry - and thanks | 13:06 |
noonedeadpunk | but I got the idea | 13:06 |
noonedeadpunk | So does ceilometer-upgrade still silently fail with no output? | 13:07 |
jamesfreeman959 | Yes, and exit code is 1 | 13:07 |
*** DanyC has joined #openstack-ansible | 13:07 | |
noonedeadpunk | does `openstack metric resource-type list` have any types listed? | 13:08 |
noonedeadpunk | (that's what ceilometer-upgrade should populate) | 13:09 |
jamesfreeman959 | ok - that doesn't look so good: /openstack/venvs/utility-19.0.11/lib/python2.7/site-packages/ujson.so: undefined symbol: Buffer_AppendShortHexUnchecked | 13:10 |
*** rpittau|bbl is now known as rpittau | 13:10 | |
noonedeadpunk | jamesfreeman959: BTW, can ceilometer container resolve openstack.example.org? | 13:10 |
jamesfreeman959 | Just checked - yes it can - it corresponds to the VIP I set up in the inventory | 13:11 |
noonedeadpunk | hm, does `gnocchi resource-type list` result in the same error? | 13:12 |
jamesfreeman959 | yes - same error | 13:13 |
*** cshen has joined #openstack-ansible | 13:14 | |
noonedeadpunk | Hm. What about TOKEN=$(openstack token issue -c id -f value); curl http://172.20.0.9:8041/v1/resource_type -H "X-Auth-Token: $TOKEN" | 13:15 |
noonedeadpunk | wait | 13:16 |
noonedeadpunk | TOKEN=$(openstack token issue -c id -f value); curl http://172.29.236.9:8041/v1/resource_type -H "X-Auth-Token: $TOKEN" | 13:16 |
*** tlunkw has quit IRC | 13:16 | |
*** itandops has quit IRC | 13:16 | |
jamesfreeman959 | from the utility container, it complains that curl cannot be found | 13:17 |
jamesfreeman959 | hold on - will try from the host | 13:17 |
noonedeadpunk | just install curl :p | 13:18 |
jamesfreeman959 | [{"attributes": {}, "state": "active", "name": "generic"}] | 13:18 |
noonedeadpunk | ok, so this feels like gnocchi itself is ok... | 13:19 |
*** macz_ has joined #openstack-ansible | 13:21 | |
*** macz_ has quit IRC | 13:21 | |
noonedeadpunk | I'm not sure if ceilometer-upgrade does use ujson for interaction with gnocchi or not... | 13:21 |
noonedeadpunk | but ujson is probably broken badly http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012285.html | 13:21 |
*** macz_ has joined #openstack-ansible | 13:21 | |
*** shyamb has quit IRC | 13:22 | |
noonedeadpunk | jamesfreeman959: so let's return to the first question - was the ceilometer installation intended? i.e. do you need it in your deployment, or are you just following the guide? | 13:22 |
noonedeadpunk | jamesfreeman959: yeah, ceilometer-upgrade uses the gnocchi client which is apparently broken because of ujson :( | 13:26 |
noonedeadpunk | which is bad news actually... | 13:28 |
jamesfreeman959 | noonedeadpunk: so right now we want to build a reference architecture for the business on openstack-ansible - the example I'm following seemed good because it has HA storage and infrastructure nodes. | 13:58 |
jamesfreeman959 | ultimately we will need some monitoring/stats but I'm agnostic at this stage on how we achieve it | 13:58 |
jamesfreeman959 | is this a bigger issue then, if it's bad news? | 13:59 |
*** itandops has joined #openstack-ansible | 14:00 | |
itandops | hi all, any suggestion about my issue ? | 14:01 |
itandops | http://paste.openstack.org/show/791222/ | 14:01 |
noonedeadpunk | jamesfreeman959: so ceilometer gives you info about resource usage by instances. So this type of monitoring is pretty useful in terms of billing, I'd say | 14:08 |
noonedeadpunk | so to fix this I think patching gnocchiclient is required | 14:08 |
noonedeadpunk | but gnocchi is not supported anymore... | 14:08 |
jamesfreeman959 | noonedeadpunk: that sounds like the kind of thing we'd need - we're a services company so those kind of metrics would be useful | 14:09 |
noonedeadpunk | ok, I see. | 14:09 |
jamesfreeman959 | is there a way forwards if gnocchi is not supported? | 14:09 |
jamesfreeman959 | for example if I move to train or ussuri, do they still use gnocchi? | 14:10 |
noonedeadpunk | Ceilometer supports several publishers - like prometheus or monasca. But I guess neither of them were tested with osa | 14:10 |
noonedeadpunk | yeah we still use gnocchi by default, and don't have roles for deploying other engines, as we don't really have many people using telemetry | 14:11 |
jamesfreeman959 | that's good to know | 14:12 |
noonedeadpunk | And actually gnocchi is working pretty well at the moment | 14:12 |
noonedeadpunk | except its client has problems... | 14:12 |
jamesfreeman959 | would this be a case of rolling back ujson? | 14:12 |
noonedeadpunk | I guess the discussion ended up with using rapidjson instead... | 14:16 |
noonedeadpunk | actually there might be a way of making ujson to work | 14:16 |
noonedeadpunk | I didn't dig into that much tbh as never faced real issues with that until now | 14:17 |
noonedeadpunk | there was no ujson release in the last 4 years, so I didn't really get what you mean by rolling it back. | 14:17 |
jamesfreeman959 | ah ok - I didn't research the release history. I know often when I've had library problems, I've fixed it by reverting to an earlier version. But as the ujson is 4 years old that won't work.... | 14:18 |
noonedeadpunk | yeah - so the thing is that it was working ok but now it fails with some compilers | 14:19 |
noonedeadpunk | out of that ML " | 14:19 |
noonedeadpunk | The original issue is that the released version of ujson is in | 14:19 |
noonedeadpunk | non-spec-conforming C which may break randomly based on used compiler | 14:19 |
noonedeadpunk | and linker. | 14:19 |
noonedeadpunk | But this part of the code which gives you the failure runs only when gnocchi is deployed. So if you select another dispatcher for ceilometer, its role should work for you | 14:21 |
jamesfreeman959 | this would be a manual patch on top of osa I guess? | 14:24 |
noonedeadpunk | you mean changing ceilometer dispatcher or what? | 14:24 |
jamesfreeman959 | yes - or I suppose to ask a broader question - how would you recommend I proceed? | 14:26 |
noonedeadpunk | I guess you'll just need to use ceilometer_ceilometer_conf_overrides to set event_dispatchers and meter_dispatchers, but I'd say refer to ceilometer's docs | 14:26 |
jamesfreeman959 | ok got it | 14:27 |
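If going that route, the override would presumably live in /etc/openstack_deploy/user_variables.yml along these lines (the <dispatcher> value is a placeholder - the valid names depend on the ceilometer release, hence the pointer to its docs):

    ceilometer_ceilometer_conf_overrides:
      DEFAULT:
        event_dispatchers: <dispatcher>
        meter_dispatchers: <dispatcher>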
noonedeadpunk | Actually I'd probably try out fixing ujson in the ceilometer and gnocchi venvs.... | 14:28 |
noonedeadpunk | Not sure how good this idea is though | 14:28 |
noonedeadpunk | you can actually ask telemetry folks what solution they would recommend | 14:30 |
noonedeadpunk | as I guess they're more familiar with what's going on with their project... | 14:30 |
jamesfreeman959 | a bit of searching seems to indicate particular issues around Ubuntu 18.04, which is my build env | 14:31 |
noonedeadpunk | hm, on rocky I had it working on ubuntu 18.04 though | 14:32 |
noonedeadpunk | and ujson==1.35 | 14:32 |
noonedeadpunk | oh, wait | 14:33 |
noonedeadpunk | it had a new release 3 days ago? | 14:33 |
noonedeadpunk | yeah, so they released the 2.0 version on march 7 | 14:34 |
noonedeadpunk | jamesfreeman959: what version do you have?:) | 14:34 |
jamesfreeman959 | in the ceilometer container venv, 1.35 | 14:35 |
jamesfreeman959 | I was just reading their issue tracker - it looks like they were pushing a fix that should be in 2.0 | 14:35 |
noonedeadpunk | they do have 2.0.3 now | 14:36 |
noonedeadpunk | so yeah - try installing it manually | 14:36 |
noonedeadpunk | its version is updated only on master. | 14:37 |
noonedeadpunk | both train and stein will install 1.35 by default | 14:37 |
jamesfreeman959 | ok - so on infra1 only, I attached to the ceilometer container, activated the venv, and upgraded ujson. ceilometer-upgrade now runs for longer - it still returns no output, but the exit code is now 0 | 14:40 |
jamesfreeman959 | ujson == 2.0.3 | 14:41 |
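For reference, the manual fix described here amounts to something like the following (paths match the 19.0.11 venv mentioned above; the same may be needed in the gnocchi venv, and a later role run can undo it, as noted below):

    lxc-attach -n <ceilometer_central_container>
    /openstack/venvs/ceilometer-19.0.11/bin/pip install 'ujson==2.0.3'
    /openstack/venvs/ceilometer-19.0.11/bin/ceilometer-upgrade; echo $?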
noonedeadpunk | so TOKEN=$(openstack token issue -c id -f value); curl http://172.29.236.9:8041/v1/resource_type -H "X-Auth-Token: $TOKEN" should give you a way richer result :p | 14:41 |
jamesfreeman959 | lots of lovely JSON :) | 14:45 |
noonedeadpunk | yeah, so I guess now the role should run just fine. Unless it downgrades ujson or re-creates the venv :( | 14:48 |
jamesfreeman959 | I'll test and report back - I presume I can just run "openstack-ansible setup-openstack.yml" again? | 14:48 |
noonedeadpunk | The thing is we can't just change the ujson version in the role, since we have to stick to the constraints provided by https://releases.openstack.org/constraints/upper/stein | 14:49 |
jamesfreeman959 | that's fine - I'm happy to maintain a local patch for myself - as long as I've got written down what I need to do, I'm happy | 14:49 |
jamesfreeman959 | once Ussuri is released I presume this might be in there? (you mentioned it was in master) | 14:50 |
noonedeadpunk | Yeah, it will be in U | 14:50 |
noonedeadpunk | So yes, you can either run setup-openstack.yml or just launch missing roles manually https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/setup-openstack.yml#L29-L41 | 14:51 |
noonedeadpunk | (if you have some that you need) | 14:51 |
jamesfreeman959 | ok awesome - thanks so much for all your amazing help - I've learned a lot! | 14:51 |
jamesfreeman959 | I'll make the change and test - fingers crossed! | 14:51 |
noonedeadpunk | yeah, sure, you're welcome:) | 14:52 |
noonedeadpunk | actually if you see something worth changing upstream - you can submit a patch | 14:53 |
*** dave-mccowan has joined #openstack-ansible | 14:53 | |
jamesfreeman959 | will do | 14:54 |
itandops | any feedback please ? http://paste.openstack.org/show/791222/ | 14:57 |
noonedeadpunk | itandops: is it rocky? | 14:58 |
noonedeadpunk | I guess not, since it's using the python_venv_build role | 14:59 |
itandops | noonedeadpunk: I get this error in both Stein and Train | 15:01 |
noonedeadpunk | do you have /root/.pip/pip.conf in container? | 15:01 |
chandankumar | noonedeadpunk, jrosser https://review.opendev.org/715365 and https://review.opendev.org/715368 | 15:04 |
itandops | I checked the utility and keystone containers - neither container has a pip.conf file | 15:04 |
*** Open10K8S has joined #openstack-ansible | 15:04 | |
jrosser | you could go in the utility container and try to curl the url that fails | 15:05 |
noonedeadpunk | jrosser: what do we still store on repo_server for pip? wheels? | 15:06 |
jrosser | i think so yes | 15:08 |
itandops | jrosser I don't understand - which link do you want me to curl ? | 15:09 |
noonedeadpunk | I guess http://172.29.236.11:8181 at least | 15:13 |
noonedeadpunk | is 172.29.236.11 your VIP? | 15:13 |
ioni | hey guys | 15:13 |
ioni | question | 15:13 |
ioni | how do i configure aio to install and configure ceph infra | 15:14 |
ioni | i've seen something related to bootstrap_host_scenarios_expanded | 15:14 |
ioni | but i don't know how to enable scenarios | 15:14 |
noonedeadpunk | ./scripts/gate-check-commit.sh aio_lxc_ceph | 15:15 |
ioni | i want to have an aio that has cinder+ceph | 15:15 |
noonedeadpunk | will deploy aio in lxc containers with ceph | 15:15 |
noonedeadpunk | it should do that | 15:16 |
ioni | after running bootstrap aio? | 15:16 |
noonedeadpunk | ioni: instead of everything. Just clone the repo and run this :) | 15:16 |
noonedeadpunk | (run as root) | 15:16 |
noonedeadpunk | and kinda ensure that you are able to login as root | 15:18 |
*** udesale_ has quit IRC | 15:20 | |
ioni | noonedeadpunk, ok, i mostly want nova, neutron, cinder and ceph | 15:20 |
noonedeadpunk | it will install all of it + horizon iirc | 15:21 |
ioni | noonedeadpunk, cool | 15:22 |
noonedeadpunk | oh, actually it's probably better to run `./scripts/gate-check-commit.sh aio_lxc_ceph deploy source ` to be more specific | 15:22 |
ioni | {% if 'octavia' in bootstrap_host_scenarios_expanded %} | 15:23 |
ioni | what about this? | 15:23 |
ioni | i think i got it | 15:23 |
ioni | https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html | 15:23 |
noonedeadpunk | if you want octavia in addition, just add _octavia to the scenario, ie aio_lxc_ceph_octavia | 15:23 |
ioni | export SCENARIO='aio_lxc_barbican_ceph' | 15:23 |
ioni | cool cool | 15:24 |
ioni | i got it! | 15:24 |
ioni | thanks | 15:24 |
noonedeadpunk | ioni: so that's what aio includes https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/vars/main.yml#L28 | 15:24 |
ioni | cool | 15:24 |
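Putting the pieces of this exchange together, the quickstart flow ioni ends up with looks roughly like this (run as root; the scenario string is the one chosen above, and the playbook list is the standard AIO sequence from the linked quickstart):

    git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    export SCENARIO='aio_lxc_barbican_ceph'
    ./scripts/bootstrap-ansible.sh
    ./scripts/bootstrap-aio.sh
    cd playbooks
    openstack-ansible setup-hosts.yml setup-infrastructure.yml setup-openstack.yml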
itandops | noonedeadpunk: yes 172.29.236.11 is my VIP. but curl http://172.29.236.11:8181 fails because no process is listening on 8181 | 15:27 |
itandops | noonedeadpunk: should haproxy be installed before the setup-openstack playbook? | 15:28 |
noonedeadpunk | itandops: so actually haproxy is supposed to listen on that port and forward requests to repo server | 15:28 |
noonedeadpunk | do you have hosts in repo_all group? | 15:29 |
*** velmeran has joined #openstack-ansible | 15:29 | |
velmeran | Is this a good place to ask about a newbie's deployment problem? | 15:32 |
noonedeadpunk | itandops: so before setup-openstack you should launch setup-hosts.yml and setup-infrastructure.yml | 15:32 |
noonedeadpunk | velmeran: yeah, go on :) | 15:33 |
noonedeadpunk | in case you deploy via OSA :p | 15:33 |
velmeran | I've gotten through setup-host and setup-infrastructure, no errors reported. but on the setup-openstack section, its failing on TASK [os_keystone : Create fernet keys for Keystone] because the keystone-manage executable is not in the directory /openstack/venvs/keystone-20.0.0/bin/ | 15:35 |
velmeran | I've found that folder, and there are other things in there, but no keystone-manage. I tried reinstalling, still all good through the first two playbooks, but it seems I'm just missing that file on the setup-openstack part. | 15:35 |
itandops | noonedeadpunk: The repo_all section in /etc/openstack_deploy/openstack_inventory.json contains: "repo_all": { "children": [ "pkg_repo"], "hosts": [] }, | 15:36 |
noonedeadpunk | itandops: do you have `repo-infra_hosts` defined in openstack_user_config.yml? If not try setting it to `repo-infra_hosts: *infrastructure_hosts` and re-run setup-infrastructure.yml | 15:38 |
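As a sketch, that addition to /etc/openstack_deploy/openstack_user_config.yml would look something like this (host names and IPs are illustrative; the anchor on infrastructure_hosts is what makes the *infrastructure_hosts reference work), followed by re-running setup-infrastructure.yml as suggested:

    infrastructure_hosts: &infrastructure_hosts
      infra1:
        ip: 172.29.236.101

    repo-infra_hosts: *infrastructure_hosts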
*** gyee has joined #openstack-ansible | 15:40 | |
noonedeadpunk | velmeran: what release are you trying to deploy? | 15:43 |
velmeran | I believe it's the Train release, I've been following this guide: https://docs.openstack.org/project-deploy-guide/openstack-ansible/train/run-playbooks.html | 15:44 |
noonedeadpunk | velmeran: what do you get when you run `git branch` inside /opt/openstack-ansible? | 15:48 |
noonedeadpunk | oh, ok, I see, it's 20.0.0 | 15:49 |
noonedeadpunk | so I'd recommend you use 20.0.2 - in the folder /opt/openstack-ansible/ run `git checkout 20.0.2` | 15:50 |
noonedeadpunk | after that run ./scripts/bootstrap-ansible.sh | 15:50 |
noonedeadpunk | and run setup-openstack.yml again | 15:50 |
velmeran | okay, doing those steps now | 15:51 |
noonedeadpunk | btw, what's the output of `ls -l /openstack/venvs/keystone-20.0.0/bin/ | grep keystone` ? | 15:52 |
velmeran | nothing in the containers folder with keystone in the name. | 15:54 |
noonedeadpunk | hm.... | 15:54 |
noonedeadpunk | it feels like things might fail a bit earlier than the step you pasted... | 15:55 |
velmeran | just some active, easy install, pip, python, and wheel files/executables | 15:55 |
jamesfreeman959 | noonedeadpunk: ok - getting closer. Re-running the setup-openstack playbook gets further (so ujson didn't get rolled back). infra1 now looks ok, but the cinder service is showing problems on infra2 and infra3 | 15:56 |
jamesfreeman959 | http://paste.openstack.org/show/791233/ | 15:56 |
noonedeadpunk | ok, just try out 20.0.2 release first anyway | 15:56 |
velmeran | okay, its running through the setup-openstack now | 15:56 |
noonedeadpunk | jamesfreeman959: it doesn't feel further, since the cinder role runs way before ceilometer... | 15:57 |
noonedeadpunk | jamesfreeman959: so what's the status of cinder-api service in the failed container? | 15:58 |
jamesfreeman959 | noonedeadpunk: This looks a bit fatal: "--- no python application found, check your startup logs for errors ---" | 15:59 |
jamesfreeman959 | (from systemctl status cinder-api) | 15:59 |
noonedeadpunk | so cinder api is launched via uwsgi. Its config is placed in /etc/uwsgi/cinder-api.ini | 16:01 |
velmeran | I might be hung on: TASK [python_venv_build : Install python packages into the venv] - it's been on that step a while, and the target hosts are not showing much activity compared to when running the rest of the steps. | 16:03 |
noonedeadpunk | jamesfreeman959: so does the wsgi-file location exist? | 16:04 |
noonedeadpunk | velmeran: at this step packages should be installed via apt/yum | 16:05 |
velmeran | humm, no yum running on any of my three hosts | 16:06 |
velmeran | no errors from the playbook yet, just sitting on this step | 16:06 |
noonedeadpunk | velmeran: not sure I can suggest something here.... you can try re-running that playbook | 16:07 |
velmeran | ok | 16:08 |
noonedeadpunk | velmeran: you can also try re-creating that container | 16:10 |
noonedeadpunk | velmeran: like destroy them with openstack-ansible playbooks/containers-lxc-destroy.yml --limit keystone_all | 16:11 |
noonedeadpunk | and create again with openstack-ansible playbooks/containers-deploy.yml --limit keystone_all,lxc_hosts | 16:11 |
noonedeadpunk | after that re-run the os-keystone-install.yml playbook | 16:12 |
*** melwitt is now known as jgwentworth | 16:12 | |
*** DanyC has quit IRC | 16:13 | |
velmeran | okay, might try that. interestingly, this time I got to the step where it checks for the fernet keys and says they exist, but then it goes to the create step and fails due to the missing file, and then just continues past that step instead of dropping out. | 16:13 |
*** DanyC has joined #openstack-ansible | 16:14 | |
*** DanyC has joined #openstack-ansible | 16:15 | |
noonedeadpunk | can you post the output to paste.openstack.org? | 16:15 |
velmeran | okay, then it fails out on os_keystone : Wait for web server to complete starting | 16:15 |
velmeran | going to remove and reinstall keystone and see if that helps. | 16:15 |
noonedeadpunk | yeah.... | 16:15 |
velmeran | reinstall failed, http://paste.openstack.org/show/791239/ - looks like it's failing to find a yum repo | 16:21 |
noonedeadpunk | velmeran: actually it's 404 for me as well... | 16:25 |
noonedeadpunk | does yum on the bare metal host work ok? | 16:26 |
noonedeadpunk | but it seems it's already in a container... | 16:26 |
velmeran | yea, I removed the container again and re-ran the install, and this time it connected. so I think the repos are having some outages / are overloaded. | 16:29 |
velmeran | trying the os-keystone-install playbook now | 16:30 |
*** cshen has quit IRC | 16:36 | |
*** cshen has joined #openstack-ansible | 16:37 | |
*** cshen has quit IRC | 16:42 | |
jamesfreeman959 | noonedeadpunk: Sorry for the delay - just checked - everything looks complete. Python binary is there, uwsgi, /etc/uwsgi/cinder-api.ini | 16:48 |
jamesfreeman959 | all looks complete | 16:48 |
noonedeadpunk | have you tried restarting service? | 16:48 |
jamesfreeman959 | literally just tried - the output of systemctl status cinder-api looks better now - I'm re-running the playbook | 16:50 |
noonedeadpunk | I guess this might be a result of mariadb failing after all the nodes were down | 16:50 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_tempest stable/rocky: Use contraints for tempest plugins https://review.opendev.org/715407 | 16:52 |
velmeran | @noonedeadpunk I found the internal ip for my deployment wasn't pingable except locally, which was causing issues when a python script was looking for http on it, so I fixed that. but now I'm back to install steps failing due to repos just not responding in time. | 16:53 |
jamesfreeman959 | noonedeadpunk: makes sense - playbook is running now | 16:53 |
jamesfreeman959 | noonedeadpunk: Possibly a dumb question, but is it expected that mariadb will be down on reboot? I shut all nodes down cleanly last night (no hard power off) and so I expected the database cluster to resume. Did I do something wrong? | 16:58 |
noonedeadpunk | velmeran: eventually epel should be dropped here https://opendev.org/openstack/openstack-ansible-openstack_hosts/src/branch/master/tasks/main.yml#L60 | 16:59 |
noonedeadpunk | so not sure why it makes problems | 16:59 |
*** idlemind_ has quit IRC | 17:00 | |
*** idlemind has joined #openstack-ansible | 17:00 | |
noonedeadpunk | jamesfreeman959: actually when galera loses all cluster participants it should be started with recovery iirc. So the last one shut down is the first raised up | 17:01 |
noonedeadpunk | But I actually try not to let galera go fully down - that's the point of a cluster actually hehe | 17:01 |
jamesfreeman959 | noonedeadpunk: That's good advice - I'm used to the old corosync, pacemaker, etc., where it sorts out who will be master on boot. Galera is a bit new to me | 17:03 |
noonedeadpunk | so actually galera also has a marker for which node was down last, but I'd say it almost never works | 17:05 |
noonedeadpunk | except, I guess, in cases where you manually stop mysqld one by one | 17:07 |
jamesfreeman959 | I'm arranging a lab environment in a datacenter so that can be up 24/7 - but right now this is all on a high powered workstation so I need to shut it down from time to time | 17:07 |
jamesfreeman959 | However if I know what to look for, and what needs resolving on a restart then all is fine | 17:08 |
noonedeadpunk | jamesfreeman959: I guess you could have just a single mariadb deployed | 17:08 |
noonedeadpunk | that would resolve these cluster issues on startup | 17:08 |
noonedeadpunk | since there would be no cluster actually) | 17:09 |
*** cshen has joined #openstack-ansible | 17:09 | |
jamesfreeman959 | noonedeadpunk: That's a good plan - my brief was to build a scale model of what we'd build in production, so I built the cluster | 17:09 |
jamesfreeman959 | I made work difficult for myself! :-D | 17:09 |
*** pcaruana has quit IRC | 17:10 | |
openstackgerrit | Merged openstack/ansible-role-systemd_mount master: Missing document start "---" https://review.opendev.org/715107 | 17:10 |
openstackgerrit | Merged openstack/ansible-role-python_venv_build master: Switch to CentOS-8 based tripleo Job https://review.opendev.org/715365 | 17:13 |
*** cshen has quit IRC | 17:13 | |
velmeran | humm, since I did that update to the 20.0.2 release, all attempts to run openstack_hosts : Add requirement packages (repositories gpg keys packages, toolkits...) on keystone fail due to "One of the configured repositories failed (Unknown)". Is there an easy way to find that command so I can see what repo it might be trying to use that is now | 17:16 |
velmeran | missing/broken? | 17:16 |
openstackgerrit | Merged openstack/openstack-ansible-os_tempest master: Switch to CentOS-8 based TripleO job https://review.opendev.org/715368 | 17:16 |
noonedeadpunk | velmeran: so it's here https://opendev.org/openstack/openstack-ansible-openstack_hosts/src/branch/master/tasks/openstack_hosts_configure_yum.yml#L68-L71 | 17:17 |
noonedeadpunk | and packages it tries to install is here https://opendev.org/openstack/openstack-ansible-openstack_hosts/src/branch/master/vars/redhat.yml#L88-L101 | 17:17 |
noonedeadpunk | so actually it's just 1 package - yum-plugin-priorities | 17:18 |
velmeran | okay. humm, is there a way to lxc-console to these and try to run that command? | 17:21 |
velmeran | its asking me for a login/pass | 17:21 |
noonedeadpunk | lxc-attach -n container-name | 17:22 |
velmeran | ah | 17:22 |
*** pcaruana has joined #openstack-ansible | 17:23 | |
*** miloa has quit IRC | 17:25 | |
*** rholloway has quit IRC | 17:25 | |
openstackgerrit | Merged openstack/openstack-ansible stable/rocky: Bump OSA stable/rocky https://review.opendev.org/714926 | 17:28 |
*** evrardjp has quit IRC | 17:36 | |
*** evrardjp has joined #openstack-ansible | 17:36 | |
mnaser | ebbex: welcome! :) | 17:44 |
*** itandops has quit IRC | 17:48 | |
velmeran | well, I think it's networking, which I thought I understood a bit. but it seems all my attempts to get things working at first might have goofed things up, as none of the containers seem to be able to ping or get dns outside. think I'm going to start fresh with the containers, see if that helps. | 17:54 |
*** theintern_ has joined #openstack-ansible | 18:06 | |
*** theintern_ has quit IRC | 18:09 | |
noonedeadpunk | jrosser: bump for rocky just landed... Do we want https://review.opendev.org/#/c/715407/ to be in EM as well? I guess it might ensure a working tempest, which was only working because pip_install_upper_constraints is defined in openstack-ansible-tests | 18:09 |
*** velmeran has quit IRC | 18:12 | |
*** jbadiapa has quit IRC | 18:15 | |
jrosser | noonedeadpunk: yeah let's include that, otherwise we know that branch will break | 18:15 |
noonedeadpunk | in CI it probably won't, but otherwise yeah... | 18:16 |
noonedeadpunk | ok, let's try to merge it quickly then... | 18:16 |
jrosser | i +2 it | 18:23 |
*** velmeran has joined #openstack-ansible | 18:28 | |
*** velmeran76 has joined #openstack-ansible | 18:34 | |
*** velmeran has quit IRC | 18:35 | |
*** velmeran76 has quit IRC | 18:36 | |
*** DanyC has quit IRC | 18:38 | |
*** velmeran has joined #openstack-ansible | 18:42 | |
noonedeadpunk | mnaser: guilhermesp can you kindly vote on https://review.opendev.org/#/c/715407/ ? | 18:43 |
*** spatel has quit IRC | 18:44 | |
noonedeadpunk | or maybe ebbex wants to join the party? :) | 18:44 |
noonedeadpunk | nice, thanks mnaser | 18:49 |
velmeran | Is there a good guide that goes over host network bridge setup for centos7? I think my stuff breaks as it makes containers that can't reach the internet or even the local gateway/dns | 18:52 |
noonedeadpunk | in case you're going to use simple linuxbridges there's actually nothing special to it | 18:52 |
noonedeadpunk | so actually containers connect to the internet via lxcbr0, which is created by lxc itself. | 18:53 |
velmeran | well, I think I over complicated things at first, so trying to step back to the beginning. | 18:54 |
noonedeadpunk | Also lxc should create src-nat rules in iptables | 18:54 |
*** rpittau is now known as rpittau|afk | 18:57 | |
noonedeadpunk | I guess basic diagram of networking is here https://docs.openstack.org/openstack-ansible/latest/reference/architecture/container-networking.html#network-diagrams | 18:57 |
noonedeadpunk | also https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html#network-interfaces might be useful | 18:58 |
noonedeadpunk | but configs there are for deb only | 18:58 |
velmeran | yea. I was trying to just give a separate vm nic to each bridge, with each nic being on a separate vlan. | 19:00 |
noonedeadpunk | I think you need a routed environment then | 19:04 |
noonedeadpunk | like https://docs.openstack.org/openstack-ansible/latest/user/l3pods/example.html ? | 19:05 |
noonedeadpunk | ah, wait | 19:06 |
noonedeadpunk | got you wrong | 19:06 |
noonedeadpunk | I think it's ok to have each nic in each bridge | 19:06 |
noonedeadpunk | but you need another one for lxcbr anyway | 19:06 |
noonedeadpunk | which will have access to internet | 19:07 |
velmeran | okay, it made lxcbr0, but it put it on 10.0.3.1, which wasn't anything I specified; not sure how the other containers were talking with it, if they even were. | 19:07 |
*** cshen has joined #openstack-ansible | 19:09 | |
velmeran | I'm just looking at the openstack_user_config.yml.test.example file, trying to figure out the ip spaces I need to change so things match my network. | 19:09 |
velmeran | seems like I have 3 cidr networks "container, tunnel, storage", and 4 bridges I would need "br-mgmt, br-vxlan, br-vlan, br-storage" | 19:10 |
velmeran | I had changed each cidr to a real vlan on my switches, and made the bridges on each host (I forgot br-vlan, so that is a problem...), but was having issues with the containers' connectivity. | 19:12 |
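For CentOS 7 hosts (the interface examples in the docs linked earlier are Debian-style), one of those bridges would be a pair of ifcfg files roughly like this - device names, VLAN placement and addressing are illustrative:

    # /etc/sysconfig/network-scripts/ifcfg-br-mgmt
    DEVICE=br-mgmt
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=172.29.236.21
    PREFIX=22

    # /etc/sysconfig/network-scripts/ifcfg-eth1   (the NIC carrying the mgmt VLAN)
    DEVICE=eth1
    ONBOOT=yes
    BOOTPROTO=none
    BRIDGE=br-mgmt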
*** cshen has quit IRC | 19:14 | |
*** cshen has joined #openstack-ansible | 19:25 | |
*** cshen has quit IRC | 19:30 | |
*** joshualyle has joined #openstack-ansible | 19:30 | |
*** joshualyle has quit IRC | 19:32 | |
CeeMac | jamesfreeman959: regarding the galera cluster, I've found it works to set all but one of the containers not to auto-boot, then manually stop mariadb on each of those containers, waiting 2 mins in between. When all mariadb instances except the one (on the remaining auto-boot container) are shut down, the one left should have the safe-to-boot flag and can then be shut down. I find it works best if you do a rolling graceful shutdown of all the | 19:43 |
CeeMac | nodes/containers that write to the DB, so that there is no traffic coming on to the cluster when you shut the last mariadb service down. Obviously you need to ensure that container starts up before any of the others to ensure the cluster is there. | 19:43 |
CeeMac | Still borks sometimes though *shrug* | 19:43 |
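In command form that rolling shutdown is roughly the following (the service name may be mysql rather than mariadb depending on the packaging; the safe_to_bootstrap flag lives in grastate.dat):

    # on each non-final galera container in turn, with a pause in between
    systemctl stop mariadb
    # on the last remaining node, confirm it is marked safe to bootstrap, then stop it
    grep safe_to_bootstrap /var/lib/mysql/grastate.dat
    systemctl stop mariadb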
*** djhankb7 has joined #openstack-ansible | 19:44 | |
*** djhankb has quit IRC | 19:45 | |
*** djhankb7 is now known as djhankb | 19:45 | |
*** NewJorg has quit IRC | 19:46 | |
*** NewJorg has joined #openstack-ansible | 19:46 | |
*** Soopaman has joined #openstack-ansible | 19:50 | |
*** DanyC has joined #openstack-ansible | 19:51 | |
*** DanyC has quit IRC | 19:56 | |
*** thuydang has joined #openstack-ansible | 19:56 | |
*** thuydang has quit IRC | 19:58 | |
*** thuydang has joined #openstack-ansible | 19:58 | |
*** itsjg has joined #openstack-ansible | 20:09 | |
noonedeadpunk | velmeran: so on ctl hosts lxc should create one more bridge named lxcbr0, where each container's eth0 interface will be attached | 20:13 |
noonedeadpunk | It does not take any interface from the controller - it just uses src-nat | 20:14 |
noonedeadpunk | this should be done automatically on container creation | 20:15 |
noonedeadpunk | so all containers should have at least 2 interfaces: eth0 and eth1 | 20:15 |
noonedeadpunk | eth0 will probably have 10.0.3.0/24 and containers can only talk over it on this specific node - this network is not shared between controllers and is used by containers to reach the world | 20:16 |
noonedeadpunk | and eth1 - the mgmt network through which containers communicate with each other between nodes | 20:17 |
noonedeadpunk | btw you probably won't need br-vlan. also br-vxlan is not strictly required - it can be a regular interface in case you place neutron-agents on bare metal (without containers) as suggested in the docs | 20:19 |
noonedeadpunk | but to simplify things for the beginning you may leave them :) | 20:20 |
velmeran | yea, I was trying to figure out how I was going to handle vlan and vxlan with my setup as I'm not passing all vlans into the host. I think I can just make some private networks in vmware for everything but br-mgmt, that one would be a nic back into my network where it could reach the internet etc. | 20:22 |
velmeran | so long as that lxcbr0 is using that as the way out, it should work... | 20:22 |
noonedeadpunk | actually vlan and vxlan are both used for tenants' private networks. and mostly only vxlan is used, as it's more convenient and has fewer limitations | 20:24 |
*** jamesden_ has quit IRC | 20:25 | |
noonedeadpunk | and to make use of vxlan, just an interface (or another vlan) can be used, since there will never be any interface in that bridge except the 1 from the host | 20:25 |
mnaser | hmm | 20:34 |
mnaser | does anyone have any idea why https://github.com/openstack/openstack-ansible-openstack_hosts/blob/master/vars/debian.yml#L71 is there? | 20:34 |
mnaser | it doesn't seem like having lvm2 installed on all hosts is something that's necessary | 20:34 |
mnaser | some of these seem like they should live in specific repos | 20:35 |
mnaser | like bridge-utils | 20:35 |
guilhermesp | yeah at least the official docs don't mention a compute node needing lvm2 | 20:35 |
mnaser | (context: lvm2 seems to be crashing debian nodes that guilhermesp is trying to deploy on) | 20:36 |
mnaser | not sure if others have seen something similar :\ | 20:36 |
mnaser | guilhermesp: can you try pushing a change that makes that list empty and see what breaks? ideally, we should fix the roles to install what they need.. | 20:36 |
mnaser | while you wait for reinstalling nodes :p | 20:36 |
spotz | mnaser: Only crashing Debian? | 20:37 |
mnaser | yes | 20:37 |
spotz | I wonder if they changed anything though Debian isn't known for changing studd vs ubuntu | 20:37 |
spotz | stuff.... | 20:37 |
*** gshippey has quit IRC | 20:40 | |
*** jamesden_ has joined #openstack-ansible | 20:46 | |
velmeran | So I will need a br-vlan on my hosts - is it just my infra node, or also the compute node? | 20:53 |
*** cshen has joined #openstack-ansible | 21:02 | |
*** cshen has quit IRC | 21:07 | |
jrosser | good evening everyone | 21:09 |
jrosser | velmeran: it pretty much depends on how you want your external and tenant networking to work | 21:10 |
jrosser | but as a good starting point for a simple life you might want to try to make the bridges uniform across all the nodes, even if they don't go anywhere (no vlan type networks on the compute nodes for example, but make the bridge anyway) | 21:11 |
jrosser | then the neutron config can be uniform everywhere | 21:11 |
jrosser | but you can set this up however you like really - the examples are just a suggestion | 21:12 |
openstackgerrit | Merged openstack/openstack-ansible-os_tempest stable/rocky: Use contraints for tempest plugins https://review.opendev.org/715407 | 21:13 |
jrosser | git diff origin/master origin/stable/train defaults/main.yml | 21:15 |
jrosser | -keystone_upper_constraints_url: "{{ requirements_git_url | default('https://releases.openstack.org/constraints/upper/' ~ requirements_git_install_branch | default('master')) }}" | 21:15 |
jrosser | +keystone_upper_constraints_url: "{{ requirements_git_url | default('https://opendev.org/openstack/requirements/raw/' ~ requirements_git_install_branch | default('master') ~ '/upper-constraints.txt') }}" | 21:15 |
jrosser | mnaser: noonedeadpunk ^ we have this difference in pretty much all our roles - i think something might go wrong when we cut the U branch from master | 21:16 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/rocky: Bump tempest role https://review.opendev.org/715554 | 21:17 |
*** cshen has joined #openstack-ansible | 21:18 | |
*** cshen has quit IRC | 21:23 | |
*** macz_ has quit IRC | 21:25 | |
openstackgerrit | Jonathan Rosser proposed openstack/ansible-config_template master: Switch to CentOS 8 based Tripleo job https://review.opendev.org/715367 | 21:33 |
*** jamesden_ has quit IRC | 21:43 | |
*** macz_ has joined #openstack-ansible | 22:09 | |
*** macz_ has quit IRC | 22:18 | |
*** rh-jelabarre has quit IRC | 22:25 | |
mnaser | jrosser: is it because we use branch 'stable/xxx' ? | 22:31 |
*** thuydang has quit IRC | 22:49 | |
jrosser | mnaser: yes it is - the url we use on master is 404 if you try to put a branch in there “stable/blah” | 22:55 |
jrosser | rather than a release name on its own | 22:56 |
*** thuydang has joined #openstack-ansible | 22:58 | |
velmeran | humm, my containers are able to get out to the internet, but they can't seem to see my internal or external lb_vip_addresses which are in my container cidr. | 23:03 |
velmeran | I'm getting stuck on [python_venv_build : Install python packages into the venv]; in the logs it's failing: Getting page http://10.0.50.111:8181/os-releases/20.0.2/centos-7.7-x86_64 | 23:06 |
velmeran | with 10.0.50.111 being my internal vip | 23:06 |
velmeran | hitting that page, haproxy is returning a 503 | 23:07 |
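A quick way to see which repo backend haproxy considers down is to query its admin socket (the socket path here is the one the OSA haproxy role normally configures; adjust it if yours differs):

    # on the haproxy/infra host: show proxy, server name and status for repo backends
    echo "show stat" | nc -U /var/run/haproxy.stat | cut -d, -f1,2,18 | grep repo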
*** DanyC has joined #openstack-ansible | 23:08 | |
*** DanyC has quit IRC | 23:13 | |
*** cshen has joined #openstack-ansible | 23:19 | |
openstackgerrit | Merged openstack/openstack-ansible-tests stable/train: Set requirements_git_url during functional tests https://review.opendev.org/714486 | 23:24 |
*** cshen has quit IRC | 23:25 | |
*** cshen has joined #openstack-ansible | 23:36 | |
*** NewJorg has quit IRC | 23:38 | |
*** NewJorg has joined #openstack-ansible | 23:39 | |
*** cshen has quit IRC | 23:40 | |
*** thuydang has quit IRC | 23:58 |