*** ysandeep|out is now known as ysandeep | 04:59 | |
*** ysandeep is now known as ysandeep|ruck | 04:59 | |
dok53 | morning, so I have cinder using Quobyte at the moment; it's creating the volumes, but when I attach one it's not failing, just getting stuck in the attaching phase. This is the only log I can see https://paste.openstack.org/show/bTvfhBX9o3bNXwZW1SfB/ any ideas what may be causing it, or where to look for a better log? | 08:39 |
noonedeadpunk | well, I assume there's nothing in the cinder-volume log? | 08:41 |
dok53 | only that on the container https://paste.openstack.org/show/byiJjGDetFnUMc7MS6kd/ (volume from just now) | 08:54 |
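For a volume stuck in the attaching state, these are the usual places to dig further (a minimal sketch; `<volume-id>` is a placeholder and the journal unit names assume a systemd-based OpenStack-Ansible deployment):

```bash
# Attachment record and status from the API side
openstack volume show <volume-id> -c status -c attachments

# On the cinder-volume host/container, follow the service journal
# and filter on the volume ID
journalctl -u cinder-volume -f | grep -i <volume-id>

# The attach flow also runs through nova-compute on the target
# hypervisor, so its journal is worth checking too
journalctl -u nova-compute --since "1 hour ago" | grep -i <volume-id>
```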
*** ysandeep|ruck is now known as ysandeep|ruck|lunch | 09:46 | |
dok53 | noonedeadpunk, it was a nova compute issue, the service was dead so the volume couldn't attach. Attached now :) | 09:54 |
noonedeadpunk | hmm... but if it died - you would see that in the log most likely... | 09:56 |
noonedeadpunk | weird... | 09:56 |
noonedeadpunk | as eventually, according to your first output, it has received the volume attachment request | 09:57 |
*** ysandeep|ruck|lunch is now known as ysandeep|ruck | 10:26 | |
dok53 | Yep, I didn't see anything in the logs to suggest that. I just spotted myself that the compute service was dead on the host | 10:45 |
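A quick way to spot (and recover from) a dead compute service like the one above, sketched with placeholder hosts:

```bash
# Any nova-compute reported as 'down' here will block volume
# attachments to instances on that host
openstack compute service list --service nova-compute

# On the affected hypervisor, check and restart the agent
systemctl status nova-compute
systemctl restart nova-compute
```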
*** dviroel|out is now known as dviroel | 11:23 | |
*** ysandeep|ruck is now known as ysandeep|ruck|brb | 12:00 | |
*** dviroel_ is now known as dviroel | 12:12 | |
admin1 | anyone using ceph + keystone auth (openstack)? I have got a strange issue: making a bucket private works fine, but making a bucket public and then trying to access it gives NoSuchBucket | 12:12 |
*** ysandeep|ruck|brb is now known as ysandeep|ruck | 12:12 | |
noonedeadpunk | admin1: I do recall an rgw issue for that, but it was quite a while ago and just a ceph upgrade helped | 12:44 |
noonedeadpunk | But it was around ceph 14 or something | 12:44 |
noonedeadpunk | but it was related to ACLs in general | 12:45 |
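For reference, the usual way to make a bucket public-read against radosgw goes through the object API ACLs; a rough sketch (the bucket and endpoint names are made up, and the exact Swift URL layout depends on the rgw/keystone endpoint configuration):

```bash
# Swift API: grant anonymous read (and listing) on the container
swift post --read-acl ".r:*,.rlistings" mybucket

# S3 API equivalent
s3cmd setacl --acl-public s3://mybucket

# Then try an unauthenticated GET against the rgw endpoint to see
# whether it returns NoSuchBucket or serves the object
curl -i https://rgw.example.com/swift/v1/mybucket/myobject
```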
dok53 | noonedeadpunk, jrosser and others, just wanted to say thanks for the help lately. I now have an openstack with networking and cinder volumes on a Quobyte backend | 14:43 |
jrosser | dok53: excellent :) | 14:44 |
dok53 | :) | 14:47 |
*** ysandeep|ruck is now known as ysandeep|ruck|dinner | 15:11 | |
*** dviroel is now known as dviroel|lunch | 15:14 | |
*** ysandeep|ruck|dinner is now known as ysandeep|ruck | 16:13 | |
*** dviroel|lunch is now known as dviroel | 16:18 | |
*** ysandeep|ruck is now known as ysandeep|out | 16:23 | |
opendevreview | Bjoern Teipel proposed openstack/openstack-ansible-os_octavia master: Adding octavia_provider_network_mtu-parameter parameter https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/864819 | 17:32 |
opendevreview | Bjoern Teipel proposed openstack/openstack-ansible-os_octavia master: Adding octavia_provider_network_mtu-parameter parameter https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/864819 | 17:34 |
jrosser | damiandabrowski: i had a quick look at the glance/cinder nfs jobs and this is what's breaking https://zuul.opendev.org/t/openstack/build/07862b60e8b64062a432004150f9efbe/log/logs/host/cinder-volume.service.journal-19-33-56.log.txt#6762 | 19:58 |
ElDuderino | Hi all, random question and I'm not sure where (or how) to ask it. I have 3 network nodes (bare-metal) that only run neutron containers. I also have three controllers that run all the other core services. keepalived seems to honor the priority and sets the state correctly on the controllers, but the neutron nodes all seem to have the same info in /var/lib/neutron/ha_confs/<routerID>/keepalived.conf | 21:15 |
ElDuderino | Currently, keepalived is alive and reports the proper states via systemctl on each of my controllers as expected. | 21:16 |
ElDuderino | the issue is that on my neutron hosts all three are showing 'active' L3 agents (high availability status) in the GUI, but when I check those keepalived configs they don't have the priority weights like their controller counterparts do. | 21:17 |
ElDuderino | so, what's happening is that the router shows all three active for the L3 agent, and there's no DHCP or routing happening into the VMs for those networks. Wondering, what creates those ha_confs, and are they all supposed to be weighted the same and all say backup in the configs? Still learning how keepalived is being invoked via that agent, and how it all works. | 21:19 |
ElDuderino | From what I can tell, 'ha_confs' is created by /etc/ansible/roles/os_neutron/ and is used by keepalived, which is created via /etc/ansible/roles/keepalived/*. Thx. | 21:25 |
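For reference on the L3 HA side: neutron renders every router's keepalived instance with state BACKUP and the same priority on all agents, and mastership is decided by VRRP election rather than weights, so identical ha_confs across the network nodes is expected; the odd part is all three agents reporting active. Some commands that help narrow that down (a sketch; `<router-id>` is a placeholder):

```bash
# Which L3 agents host the router, and their HA state
# (exactly one should be 'active', the rest 'standby')
openstack network agent list --router <router-id> --long

# On each network node: the rendered keepalived config and the
# state file the agent reports from
cat /var/lib/neutron/ha_confs/<router-id>/keepalived.conf
cat /var/lib/neutron/ha_confs/<router-id>/state

# The node actually holding the VIPs is the one whose qrouter
# namespace carries addresses on the qr-/ha- interfaces
ip netns exec qrouter-<router-id> ip addr show
```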
*** dviroel is now known as dviroel|afk | 22:45 |