opendevreview | Merged openstack/openstack-ansible master: Run healthcheck-openstack from utility host https://review.opendev.org/c/openstack/openstack-ansible/+/883496 | 00:18 |
---|---|---|
opendevreview | Merged openstack/openstack-ansible-os_nova master: Install libvirt-deamon for RHEL systems https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/884415 | 01:25 |
NeilHanlon | jrosser: https://review.opendev.org/c/openstack/diskimage-builder/+/884452 | 02:57 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova stable/2023.1: Install libvirt-deamon for RHEL systems https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/884379 | 07:18 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova stable/zed: Install libvirt-deamon for RHEL systems https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/884380 | 07:18 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova stable/yoga: Install libvirt-deamon for RHEL systems https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/884381 | 07:18 |
derekokeeffe85 | Morning all, any pointers as to what's causing this: TASK [Get list of repo packages] ************************************************************************************************************************************ | 07:37 |
derekokeeffe85 | fatal: [infra1_utility_container-221a4415]: FAILED! => {"changed": false, "content": "", "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.37.110.100:8181/constraints/upper_constraints_cached.txt"} | 07:37 |
MrR | 8181 is swift isn't it? what release? The answer is right there though, connection was refused, is the container/bridge up? | 07:42 |
derekokeeffe85 | MrR yep the container is up and I can attach to it, I'm not sure what 8181 is tbh. I don't see anything in the logs on the container either. Usually it is something I have failed to configure or configured wrong in the networking, so you're probably right. What bridge would affect that? | 07:52 |
derekokeeffe85 | Actually, is there documentation showing which containers rely on which bridges? That always seems to be a problem for me | 07:53 |
MrR | I was referring to the port, but my bad, 8181 is the repo server port (it's early and I haven't even had my coffee yet) | 07:56 |
MrR | https://docs.openstack.org/openstack-ansible/latest/user/prod/example.html | 07:56 |
MrR | shows a production example of networking | 07:56 |
derekokeeffe85 | haha that's ok :) Here's the log I just got when it failed | 07:56 |
derekokeeffe85 | May 26 07:56:00 infra1-utility-container-221a4415 ansible-ansible.legacy.uri[3508]: Invoked with url=http://10.37.110.100:8181/constraints/upper_constraints_cached.txt return_content=True force=False http_agent=ansible-httpget use_proxy=True validate_certs=True force_basic_auth=False use_gssapi=False body_format=raw method=GET follow_redirects=safe status_code=[200] timeout=30 headers={} remote_src=False unredirected_headers=[] | 07:56 |
derekokeeffe85 | unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None dest=None body=None src=None creates=None removes=None unix_socket=None ca_path=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None | 07:56 |
noonedeadpunk | nah 8181 is a repo_container | 07:58 |
MrR | I'm not an openstack dev btw, i'm just a guy that got some help and has learnt along the way to fix a lot of problems haha | 07:58 |
noonedeadpunk | I think everyone here is like that | 07:59 |
derekokeeffe85 | I'm on a journey too :) | 07:59 |
noonedeadpunk | so, in repo container there should be nginx running | 07:59 |
MrR | what branch are you on and what stage is this failing at? i'm assuming bootstrap or setup hosts, as you can't really get any further without the repo working | 07:59 |
noonedeadpunk | I guess on utilit-isntall.yml | 08:00 |
noonedeadpunk | *utility-install.yml | 08:00 |
derekokeeffe85 | I bug noonedeadpunk and jrosser all the time. No, it's setup-infrastructure.yml; setup-hosts ran through yesterday after fixing the br-storage issue | 08:00 |
noonedeadpunk | utility-install.yml is one of the last pieces of setup-infrastructure https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/setup-infrastructure.yml#L23 | 08:01 |
derekokeeffe85 | 26.1.1 | 08:01 |
noonedeadpunk | I'd propose to manually re-run repo-install.yml | 08:01 |
noonedeadpunk | and see how it goes | 08:01 |
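For anyone following along, re-running an individual playbook like this is done from the deploy host; a minimal sketch, assuming the default OSA checkout location:

```sh
# a minimal sketch, assuming the default OSA checkout path on the deploy host
cd /opt/openstack-ansible/playbooks
openstack-ansible repo-install.yml
```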
derekokeeffe85 | Will try that | 08:02 |
noonedeadpunk | But Connection refused... I wonder if that could be coming from haproxy.... | 08:02 |
noonedeadpunk | so might be worth checking if repo_server backend is considered UP in haproxy as well | 08:02 |
MrR | yeah i just found it, i'll admit i haven't jumped into all the playbooks, only the ones that have broken on me haha | 08:03 |
noonedeadpunk | MrR: thanks for stepping in btw and trying to help out! | 08:03 |
MrR | hey i might have just burnt this all down if you guys hadn't helped, it's only right i try and do the same for others | 08:04 |
derekokeeffe85 | repo-install completed but during the run of the playbook I got this: TASK [systemd_mount : Set the state of the mount] ******************************************************************** | 08:04 |
derekokeeffe85 | fatal: [infra1_repo_container-23e3ab6f]: FAILED! => {"changed": false, "cmd": "systemctl reload-or-restart $(systemd-escape -p --suffix=\"mount\" \"/var/www/repo\")", "delta": "0:00:00.041776", "end": "2023-05-26 08:03:13.896337", "msg": "non-zero return code", "rc": 1, "start": "2023-05-26 08:03:13.854561", "stderr": "Job failed. See \"journalctl -xe\" for details.", "stderr_lines": ["Job failed. See \"journalctl -xe\" for details."], | 08:04 |
derekokeeffe85 | "stdout": "", "stdout_lines": []} | 08:04 |
derekokeeffe85 | and in the container log this: May 26 08:03:13 infra1-repo-container-23e3ab6f systemd[1]: Reload failed for Auto mount for /var/www/repo. | 08:04 |
noonedeadpunk | yeah, this one is "fine" as we have block/rescue there | 08:05 |
noonedeadpunk | So, if you just `curl http://10.37.110.100:8181/constraints/upper_constraints_cached.txt -v --head` ? | 08:08 |
noonedeadpunk | what will the result be? | 08:09 |
derekokeeffe85 | curl http://10.37.110.100:8181/constraints/upper_constraints_cached.txt -v --head | 08:09 |
derekokeeffe85 | * Trying 10.37.110.100:8181... | 08:09 |
derekokeeffe85 | * TCP_NODELAY set | 08:09 |
derekokeeffe85 | * connect to 10.37.110.100 port 8181 failed: Connection refused | 08:09 |
derekokeeffe85 | * Failed to connect to 10.37.110.100 port 8181: Connection refused | 08:09 |
derekokeeffe85 | * Closing connection 0 | 08:09 |
noonedeadpunk | osa-cores, let's quickly land https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/884379 :) | 08:09 |
derekokeeffe85 | curl: (7) Failed to connect to 10.37.110.100 port 8181: Connection refused | 08:09 |
noonedeadpunk | and what does haproxy say about the repo? | 08:10 |
noonedeadpunk | echo "show stat" | nc -U /run/haproxy.stat | grep repo | 08:10 |
noonedeadpunk | derekokeeffe85: also please use paste.openstack.org for providing outputs :) | 08:10 |
noonedeadpunk | IRC not really designed for that | 08:11 |
derekokeeffe85 | Sorry noonedeadpunk will do | 08:11 |
noonedeadpunk | no worries :) | 08:11 |
noonedeadpunk | it's just tricky to read in native irc client at very least | 08:12 |
derekokeeffe85 | No worries. Em sorry for sounding completely stupid but where do I run that command? Which container? | 08:13 |
MrR | on the host | 08:14 |
noonedeadpunk | with haproxy on it | 08:14 |
MrR | you may need sudo/su | 08:14 |
derekokeeffe85 | https://paste.openstack.org/show/bW3DdqnStlBjGaRU0xsT/ | 08:16 |
noonedeadpunk | there's also the hatop utility that should be present to help manage haproxy (just in case) | 08:17 |
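Both checks mentioned above look like this in practice; a sketch, assuming the stats socket path from the command earlier:

```sh
# a sketch, assuming the stats socket lives at /run/haproxy.stat as shown above
echo "show stat" | nc -U /run/haproxy.stat | grep repo   # the status field should read UP
hatop -s /run/haproxy.stat                               # interactive view of all frontends/backends
```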
noonedeadpunk | are you running that as root? | 08:17 |
derekokeeffe85 | Yep | 08:17 |
noonedeadpunk | is haproxy even alive? | 08:17 |
derekokeeffe85 | on infra1 (controller) | 08:17 |
derekokeeffe85 | yep https://paste.openstack.org/show/bqfvdEsjeY55YM4lbrYq/ | 08:18 |
noonedeadpunk | um | 08:19 |
noonedeadpunk | this is not healthy | 08:19 |
noonedeadpunk | these `can not bind` errors are likely the root cause | 08:20 |
derekokeeffe85 | should I blow away and re deploy? | 08:21 |
noonedeadpunk | what distro is that? | 08:21 |
derekokeeffe85 | ubuntu 22.04.2, Jammy | 08:22 |
noonedeadpunk | huh | 08:22 |
derekokeeffe85 | The OS I'm deploying on? | 08:22 |
noonedeadpunk | can you do smth like `journalctl -xn -u haproxy` ? | 08:22 |
noonedeadpunk | so it looks like haproxy cannot bind to ports for some reason | 08:25 |
opendevreview | Merged openstack/openstack-ansible-os_nova stable/2023.1: Install libvirt-deamon for RHEL systems https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/884379 | 08:25 |
derekokeeffe85 | I think I found it noonedeadpunk, I had a copy and paste error on the interface for haproxy_keepalived_internal_vip_cidr: thanks for talking me through it. And you too MrR | 08:27 |
noonedeadpunk | aha, yes, the IPs cannot be the same for the internal and external VIPs | 08:28 |
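The variables in question live in /etc/openstack_deploy/user_variables.yml; a hedged sketch follows (addresses and interface names are illustrative only, and in practice you would edit the existing entries rather than appending):

```sh
# hedged sketch only: addresses/interfaces are illustrative, not taken from this deployment
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
haproxy_keepalived_internal_vip_cidr: "10.37.110.100/32"   # internal VIP, matches the repo URL above
haproxy_keepalived_external_vip_cidr: "192.0.2.10/32"      # external VIP, must be a different address
haproxy_keepalived_internal_interface: br-mgmt             # illustrative interface names
haproxy_keepalived_external_interface: bond0
EOF
```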
derekokeeffe85 | I had the netplans saved for easier redeployment and it had an error, same with br-storage yesterday | 08:28 |
derekokeeffe85 | Thanks again | 08:28 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Bump SHAs for OpenStack-Ansible 27.0.0.rc1 https://review.opendev.org/c/openstack/openstack-ansible/+/884203 | 08:34 |
noonedeadpunk | I hope this is final now ^ | 08:34 |
derekokeeffe85 | That sorted that issue :) | 08:47 |
MrR | noonedeadpunk seems my logging issue has eased, i'm guessing it was down to having senlin/trove etc in a broken state, only (i say only!) 2.5gb of logs per node in the last 12-18 hours, much better than the 15-20gb a day at least | 08:51 |
noonedeadpunk | well, an operational cluster is indeed quite intense in terms of logs | 10:21 |
noonedeadpunk | probably we should implement a variable to control the log verbosity, to set it to warning for example | 10:21 |
damiandabrowski | all tls-backend patches passed CI and are ready for review | 10:28 |
damiandabrowski | https://review.opendev.org/q/topic:tls-backend+status:open | 10:28 |
opendevreview | Damian Dąbrowski proposed openstack/openstack-ansible master: Fix repo url in healthcheck-infrastructure.yml https://review.opendev.org/c/openstack/openstack-ansible/+/884445 | 12:24 |
jrosser | this needs another vote https://review.opendev.org/c/openstack/openstack-ansible/+/884203 | 15:06 |
admin1 | is my vote eligible :D ? | 15:07 |
admin1 | the changes look straightforward to me .. | 15:09 |
NeilHanlon | admin1: you're of course welcome to add your vote and review :) but each change needs at least two core reviewers' reviews, too :) we always encourage more people to review, though | 15:11 |
NeilHanlon | jrosser: looking now | 15:11 |
jrosser | NeilHanlon: this creates a SHA in openstack-ansible repo that we will then use as the branch point for antelope | 15:13 |
NeilHanlon | jrosser: roger. it looks good to me :) | 15:14 |
NeilHanlon | btw jrosser, not sure if you saw my patch for DIB to build `kernel-64k` rocky 9 images | 15:20 |
jrosser | ah yes i did - thanks for looking at that | 15:20 |
jrosser | i think we will be able to have a go next week with it | 15:20 |
NeilHanlon | awesome! glad to hear it | 15:21 |
MrR | quick question, for horizon is there a specific way to customize it (logo etc) via override files/directories or am i doing it directly in the horizon container? | 15:31 |
NeilHanlon | MrR: https://docs.openstack.org/horizon/latest/configuration/customizing.html | 15:38 |
MrR | I did find that but it assumes direct access to the files, obviously i can just connect to the container but then changes would be lost with upgrades, what i'm asking is if i can put them in a file or user_variables etc | 15:42 |
jrosser | MrR: the "reference" is pretty much always defaults/main.yml in the relevant ansible role | 15:43 |
jrosser | there you will find this https://github.com/openstack/openstack-ansible-os_horizon/blob/master/defaults/main.yml#L377-L388 | 15:43 |
jrosser | and you can also have an entirely custom theme if you want to | 15:44 |
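For reference, overrides like that go in user_variables.yml rather than inside the container; the sketch below is hypothetical, and the key names and structure must be checked against the os_horizon defaults/main.yml linked above before copying anything:

```sh
# hypothetical sketch only -- verify variable names/structure against the
# os_horizon defaults/main.yml linked above before using
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
horizon_custom_themes:
  - theme_name: mybrand                          # placeholder theme name
    theme_src_archive: /opt/mybrand-theme.tar.gz # placeholder archive path
horizon_default_theme: mybrand
EOF
```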
MrR | great thanks, a custom theme seems like more work than i'm willing to go to right now but i'll keep it in mind | 15:46 |
MrR | i pretty much only have ironic to contemplate now and some network hardening then i may be done | 15:47 |
MrR | with ironic, is a separate host that can't run compute necessary, or is there a way around that? For testing i'd guess a few vms would do but may not do well in production | 15:47 |
jrosser | ironic is bare metal deployment, so really for production you'd need a use case requiring that sort of facility | 15:49 |
jrosser | and sufficient suitable nodes to make it viable | 15:49 |
jrosser | for example we use ironic today in a test lab environment, which would otherwise have been manually configured bare metal servers | 15:50 |
MrR | the idea was to have it ready to automatically provision future hardware into the stack | 15:50 |
jrosser | but with ironic we can define and build/tear down the lab with terraform | 15:50 |
MrR | right now this is 3 servers, and ironic seemed like a way to add 10 more without a headache every time | 15:51 |
jrosser | you can do that too :) | 15:51 |
jrosser | though i would say that *everything* about ironic is configurable | 15:52 |
jrosser | and in a practical situation the intersection of your use case / the hardware you have / the many bugs in everything / etc leaves actually only a small subset of things that are workable | 15:53 |
adi_ | hi | 15:53 |
MrR | so is a dedicated ironic machine needed or could i get away with it being a vm in openstack? Obviously this would be down if the instance was down but once 3 more servers are added an ironic node would be viable | 15:54 |
adi_ | i am seeing this error in my horizon container, is this any kind of bug in xena | 15:54 |
adi_ | [Thu May 25 14:35:26.734806 2023] [mpm_event:notice] [pid 840:tid 140502761360448] AH00489: Apache/2.4.41 (Ubuntu) configured -- resuming normal operations [Thu May 25 14:35:26.734995 2023] [core:notice] [pid 840:tid 140502761360448] AH00094: Command line: '/usr/sbin/apache2' [Thu May 25 14:35:28.023081 2023] [mpm_event:notice] [pid 840:tid 140502761360448] AH00491: caught SIGTERM, shutting down [Thu May 25 14:35:28.095876 2023] [mpm_event:notice] | 15:54 |
adi_ | CLI works fine | 15:54 |
jrosser | MrR: the ironic service deploys bare metal machines by PXE-booting them for you and doing lifecycle management | 15:55 |
jrosser | it's not really something that you can replace with a VM (outside an artificial CI setup) | 15:55 |
jrosser | adi_: you will need to look through the logs more to see why that has happened | 15:56 |
jrosser | for example, is it the OOM killer? | 15:56 |
jrosser | SIGTERM has to have come from somewhere | 15:56 |
adi_ | it is coming from here | 15:57 |
adi_ | OpenSSL/1.1.1f mod_wsgi/4.6.8 Python/3.8 | 15:57 |
adi_ | H00292: Apache/2.4.41 (Ubuntu) OpenSSL/1.1.1f mod_wsgi/4.6.8 Python/3.8 configured | 15:57 |
jrosser | can you please use a paste service for debug logs | 15:58 |
MrR | the bare metal machines will come, it's just that in my current environment i have 3. The idea was for ironic to provision the next node that gets added; if that requires a dedicated machine it can wait, which means i'm almost done until i start adding more machines, the first of which will now be an ironic node to make future expansion much smoother | 15:58 |
jrosser | adi_: really i don't know what that means unfortunately | 15:58 |
jrosser | MrR: you can use ironic to deploy the next node | 15:59 |
jrosser | the ironic service runs on your existing controllers | 15:59 |
jrosser | MrR: i recently added a full example for LXC deployment of ironic to the docs https://docs.openstack.org/openstack-ansible-os_ironic/latest/configure-lxc-example.html | 16:00 |
MrR | not yet i can't, as all 3 machines are currently compute, or can i run ironic alongside compute on the same node? i was under the impression i can't | 16:00 |
jrosser | sorry i thought you meant 3 controllers | 16:00 |
jrosser | "run ironic" this is confusing :) | 16:01 |
jrosser | you run the ironic service on your controllers | 16:01 |
jrosser | it then PXEboots some other servers for you, as needed | 16:01 |
jrosser | "run ironic alongside compute on the same node" <- this cannot be | 16:01 |
MrR | yeah my bad, i converged for initial testing, as nodes are added things will become less converged | 16:01 |
jrosser | as by definition ironic will PXEboot the node and wipe the disks / deploy a new OS | 16:02 |
MrR | i'm also not distinguishing ironic and the ironic api, which probably isn't helping clarification | 16:03 |
jrosser | adi_: you have shown the message that apache made when it received SIGTERM. the OOM killer will typically put information into syslog for example | 16:03 |
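A quick sketch of that check, run inside the horizon container (or on its host, since OOM messages come from the kernel):

```sh
# a sketch: look for OOM-killer activity around the time of the SIGTERM
journalctl -k --since "2 hours ago" | grep -iE 'out of memory|oom'
grep -i 'killed process' /var/log/syslog | tail     # Ubuntu syslog location
```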
MrR | adi_ can you ping your haproxy external vip ip? check your haproxy config in user_variables if you can't | 16:07 |
jrosser | adi_: ^ this is a good point - your CLI access from the utility container will be through the internal vip, but horizon access will be the external vip | 16:11 |
adi_ | ok | 16:13 |
adi_ | The IP is pingable, that part i know. The issue is not that the page is not opening at all; it sometimes opens, sometimes it doesn't | 16:14 |
jrosser | the only way is to do systematic debugging in the horizon container | 16:14 |
jrosser | remember that there is a loadbalancer, so requests are round-robin between the horizon backends | 16:15 |
jrosser | so if one of N is broken you can easily see this "works sometimes / broken sometimes" behaviour | 16:15 |
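A sketch of how to confirm that, reusing the haproxy stats socket query from earlier; the backend address is an illustrative placeholder and the scheme/port depend on the deployment's TLS settings:

```sh
# a sketch: check the horizon backends in haproxy, then probe each one directly
echo "show stat" | nc -U /run/haproxy.stat | grep -i horizon
# the backend address below is an illustrative placeholder; use the horizon
# container addresses from your inventory, and http/https per your TLS settings
curl -kso /dev/null -w '%{http_code}\n' https://203.0.113.20/
```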
adi_ | i debugged yesterday. There were errors; i set horizon_enable_designate_ui: False in user_variables, and there are no errors after that | 16:15 |
MrR | do any other services periodically fail/timeout? | 16:16 |
adi_ | apache also comes up clean. Sometimes the horizon page opens, but the login page is in a very bad shape, all the icons are here and there | 16:16 |
MrR | you said you have enough memory, do you have enough cpu power/cores? Do you have enough free hdd space, What are you trying to run on what hardware? it still sounds like a performance/hardware issue to me, especially if its sporadic | 16:18 |
MrR | and I've seen a lot of performance issues, at one point i tried to run a full stack on a 10 year old machine with 32gb ram, it was extremely painful! | 16:19 |
jrosser | adi_: you did not tell us at all why you need the designate setting | 16:20 |
jrosser | please remember we are not familiar with your deployment, i have no idea why you would need to disable the designate UI | 16:21 |
adi_ | hi jrosser, i can remove that, because i completely removed designate initially | 16:37 |
jrosser | so is that related to the horizon troubles? | 16:38 |
adi_ | it was initially, as it was showing the apache2 status page, but when i removed designate it was gone. i just added this out of curiosity, i can remove it | 16:40 |
adi_ | actually i have a few queries about ansible. when i did a git checkout at 24.0.0, it did fail on the task "parallel repo" | 16:41 |
adi_ | when i did bootstrap ansible and gpg keys | 16:41 |
adi_ | Can that be a problem? Because then i did a minor version upgrade, just to see how it goes, and there bootstrap-ansible does not show any error; maybe it only bootstraps the changes from the old one | 16:42 |
jrosser | well, 24.0.0 would be the very first release of that branch, and probably has bugs | 16:42 |
adi_ | the minor upgrade was clean, no errors | 16:43 |
adi_ | only the horizon issue, the playbook is clean | 16:43 |
jrosser | it is possible that there were bugs fixed in the git clone process | 16:46 |
jrosser | but remember that the git clone retrieves the ansible roles, not the code for horizon | 16:47 |
adi_ | yeah i know | 16:47 |
adi_ | roles are important | 16:47 |
adi_ | but this horizon is a pain; in my other prod env everything also comes up clean | 16:48 |
adi_ | but when you move from one project to another, it shows a gateway timeout | 16:49 |
adi_ | every ip is reachable, memcache is fine | 16:49 |
MrR | is there a reason you're using 24.0.0 and not 26.1.1? I definitely had problems when deploying 24/25 that i haven't had in 26 | 16:51 |
adi_ | my openstack env is extensively used | 16:52 |
adi_ | until i've proved the POC, i cannot upgrade | 16:52 |
adi_ | so i was testing from xena to yoga first | 16:52 |
adi_ | starting from scratch is easy | 16:52 |
jrosser | MrR: please do file bugs if you find any | 16:53 |
adi_ | but the upgrade needs to come out clean, we cannot start from scratch every time | 16:53 |
jrosser | the stable branches should be good | 16:53 |
adi_ | ok | 16:53 |
jrosser | adi_: but it is a good question, do you really use 24.0.0 or the latest tag of xena? | 16:54 |
MrR | most of the bugs i've found are already patched for the next release, some others it was debatable whether i was the cause | 16:54 |
adi_ | in my test i am in 24.6.0 | 16:54 |
adi_ | i am planning to upgrade to 25.2.0 | 16:55 |
adi_ | yoga , once this horizon is fixed | 16:55 |
jrosser | bugfixes do get backported, so do let us know if we've missed something | 16:55 |
jrosser | adi_: do you try to reproduce this in an all-in-one? | 16:55 |
adi_ | i can try to, but i know if i go from scratch no issues will come | 16:56 |
opendevreview | Merged openstack/openstack-ansible master: Bump SHAs for OpenStack-Ansible 27.0.0.rc1 https://review.opendev.org/c/openstack/openstack-ansible/+/884203 | 17:29 |
NeilHanlon | 🥳 | 17:46 |
noonedeadpunk | sweet :) | 18:39 |
admin1 | adi_ i have one cluster that i moved from rocky -> 26.1.1 .. and in between changed from ceph ansible -> cephadm | 18:48 |
admin1 | i think i hit the redis not present in gnocchi issue again | 18:49 |
admin1 | i will have more data on monday | 18:49 |
noonedeadpunk | iirc there was a patch that allowed to install it? | 18:50 |
admin1 | yeah .. i recall this being addressed and fixed .. but that was like a 100 deployments ago .. a new one needed to do the same | 18:50 |
admin1 | gnocchi using redis for cache and then using ceph for metrics | 18:50 |
noonedeadpunk | we have also var like `gnocchi_storage_redis_url` | 18:50 |
noonedeadpunk | (and gnocchi_incoming_redis_url) | 18:51 |
admin1 | its the driver that goes missing | 18:51 |
noonedeadpunk | but yes, this setup makes most sense and performance to me as well | 18:51 |
admin1 | i recall going into the venv and manually doing pip install redis to move things ahead | 18:51 |
noonedeadpunk | though I wish gnocchi was supporting zookeeper as incoming driver... | 18:51 |
noonedeadpunk | so now packages being added with that https://opendev.org/openstack/openstack-ansible-os_gnocchi/src/branch/master/defaults/main.yml#L177-L181 | 18:52 |
noonedeadpunk | so if you set gnocchi_incoming_driver to redis - it should get in | 18:53 |
admin1 | gnocchi_incoming_driver: redis -- i have this set | 18:56 |
admin1 | hmm.. i also have gnocchi_conf_overrides: => incoming: => driver: redis redis_url: redis://172.29.236.111:6379 | 18:56 |
admin1 | is the override not required anymore ? | 18:57 |
admin1 | i see | 18:57 |
admin1 | all i need is gnocchi_storage_redis_url and gnocchi_incoming_driver | 18:57 |
admin1 | maybe it was due to the overrides .. | 18:57 |
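Put together, a hedged user_variables sketch combining the variables mentioned above (the redis address is an illustrative placeholder):

```sh
# hedged sketch using only the variables mentioned in this conversation;
# the redis address is an illustrative placeholder
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
gnocchi_incoming_driver: redis
gnocchi_incoming_redis_url: "redis://203.0.113.11:6379"
# storage stays on ceph; gnocchi_storage_redis_url would only be needed if
# redis were also used as the storage backend
EOF
```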
admin1 | why would this error come ? tag 26.1.1 22.04 jammy .. fatal: [ams1h2 -> ams1c1_repo_container-75a86909(172.29.239.37)]: FAILED! => {"attempts": 5, "changed": false, "msg": "No package matching '{'name': 'ubuntu-cloud-keyring', 'state': 'present'}' is available"} | 20:12 |
admin1 | it did not come up in the 1st run .. from the 2nd run, it started appearing | 20:12 |
admin1 | TASK [python_venv_build : Install distro packages for wheel build] ************************************************************************************************************************ | 20:12 |
admin1 | this is in the neutron playbook .. playbooks before this seem fine | 20:20 |
admin1 | issue seems to be only in the os-neutron playbook .. rest are moving along fine | 20:25 |
jrosser | try looking at the output of `apt policy` for that package | 20:59 |
jrosser | admin1: ^ | 21:00 |
jrosser | you can see here that package should be trivial to install https://packages.ubuntu.com/search?suite=all&searchon=names&keywords=ubuntu-cloud-keyring | 21:01 |
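A sketch of that check, run on the host where the repo container lives, using the container name from the error above:

```sh
# a sketch: inspect the package from inside the failing repo container
# (container name taken from the error message above)
lxc-attach -n ams1c1_repo_container-75a86909 -- apt policy ubuntu-cloud-keyring
lxc-attach -n ams1c1_repo_container-75a86909 -- apt-get update
```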
admin1 | the package is already there and the newest version .. | 21:14 |
opendevreview | Merged openstack/openstack-ansible master: Fix repo url in healthcheck-infrastructure.yml https://review.opendev.org/c/openstack/openstack-ansible/+/884445 | 21:25 |
jrosser | admin1: if you could paste that failed task with -vvvv it would be interesting | 21:57 |
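For reference, capturing that looks roughly like this from the deploy host (playbook name taken from the discussion above):

```sh
# a sketch: re-run the failing playbook with maximum verbosity and keep the output
cd /opt/openstack-ansible/playbooks
openstack-ansible os-neutron-install.yml -vvvv 2>&1 | tee /tmp/os-neutron-vvvv.log
```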