*** kukacz has quit IRC | 00:14 | |
*** kukacz has joined #openstack-ansible | 00:20 | |
*** Underknowledge has quit IRC | 00:23 | |
*** Underknowledge has joined #openstack-ansible | 00:24 | |
*** tosky has quit IRC | 00:27 | |
*** ianychoi__ has quit IRC | 00:37 | |
*** ianychoi__ has joined #openstack-ansible | 00:37 | |
*** djhankb has quit IRC | 00:38 | |
*** djhankb has joined #openstack-ansible | 00:38 | |
*** maharg101 has joined #openstack-ansible | 01:44 | |
*** spatel has joined #openstack-ansible | 01:44 | |
*** maharg101 has quit IRC | 01:48 | |
*** dasp has quit IRC | 02:04 | |
*** cshen has quit IRC | 02:27 | |
*** dasp has joined #openstack-ansible | 02:32 | |
*** LowKey has joined #openstack-ansible | 02:59 | |
*** cshen has joined #openstack-ansible | 03:15 | |
*** cshen has quit IRC | 03:19 | |
*** cshen has joined #openstack-ansible | 03:22 | |
*** cshen has quit IRC | 03:27 | |
*** Underknowledge has quit IRC | 03:33 | |
*** Underknowledge2 has joined #openstack-ansible | 03:33 | |
*** Underknowledge2 is now known as Underknowledge | 03:34 | |
*** cshen has joined #openstack-ansible | 05:23 | |
*** cshen has quit IRC | 05:28 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** yasemind has joined #openstack-ansible | 05:38 | |
*** maharg101 has joined #openstack-ansible | 05:45 | |
*** maharg101 has quit IRC | 05:50 | |
*** yasemind has quit IRC | 06:12 | |
*** spatel has quit IRC | 06:36 | |
*** cshen has joined #openstack-ansible | 06:45 | |
*** cshen has quit IRC | 06:50 | |
CeeMac | morning | 07:27 |
CeeMac | mgariepy: was just catching up in the channel. If you come up with a 'stable' process for upgrading R > T direct I'd be very interested to hear about it :) | 07:28 |
*** maharg101 has joined #openstack-ansible | 07:39 | |
*** miloa has joined #openstack-ansible | 07:44 | |
noonedeadpunk | mornings | 07:49 |
*** miloa has quit IRC | 07:53 | |
*** rpittau|afk is now known as rpittau | 07:57 | |
*** vesper11 has joined #openstack-ansible | 07:58 | |
noonedeadpunk | hm, interesting... why do we run it considering https://opendev.org/openstack/openstack-ansible-openstack_hosts/src/branch/master/tasks/main.yml#L73 | 08:00 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Import wheels build only when necessary https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/774159 | 08:02 |
noonedeadpunk | because of import instead of include? | 08:02 |
noonedeadpunk | doh | 08:02 |
noonedeadpunk | I recall Jesse wrote about a workaround method for tags; I wrote down that log somewhere, need to look for it... | 08:04 |
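The import-vs-include distinction being discussed, as a hedged Ansible sketch (the task file name and condition below are hypothetical, not taken from the role in question):

```yaml
# import_tasks is static: the file is inlined at parse time, so --tags
# matches the imported tasks directly and the when: is re-evaluated on
# every single imported task.
- import_tasks: configure_hosts.yml
  when: not marker_file.stat.exists

# include_tasks is dynamic: the when: gates the include once, and when it
# is false the file's tasks are never loaded at all.
- include_tasks: configure_hosts.yml
  when: not marker_file.stat.exists
```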
*** gokhani has joined #openstack-ansible | 08:09 | |
*** cshen has joined #openstack-ansible | 08:12 | |
*** andrewbonney has joined #openstack-ansible | 08:18 | |
gokhani | Hi team, I have prepared 3 NFS shares for glance, cinder and nova. When I first deployed OpenStack Ussuri with OSA, the systemd mount didn't work correctly: it didn't mount my NFS share to /var/lib/nova/instances. After restarting the systemd mount service, it worked. Moreover, I rebooted one of my compute nodes. After that it didn't mount my | 08:23 |
gokhani | nova NFS share. When I try to mount it manually (mount nfsserver:/var/nfs/nova /var/lib/nova/instances), it connects to my cinder NFS share. It is weird. My export is here > http://paste.openstack.org/show/802366/ . I didn't find any solution. Maybe I am missing something or doing something wrong. Can you help me please? | 08:23 |
*** ierdem has joined #openstack-ansible | 08:24 | |
*** ierdem has left #openstack-ansible | 08:24 | |
*** ierdem21 has joined #openstack-ansible | 08:24 | |
*** ierdem21 has quit IRC | 08:24 | |
*** ierdem has joined #openstack-ansible | 08:24 | |
noonedeadpunk | jrosser: hm iirc you saw it with stackviz already? https://zuul.opendev.org/t/openstack/build/f12a5aea3e674328b8d3da0e43130c31/log/job-output.txt#18586 | 08:28 |
noonedeadpunk | "ERROR: Could not satisfy constraints for 'stackviz': installation from path or url cannot be constrained to a version" | 08:28 |
noonedeadpunk | I guess it's the same thing spatel saw yesterday with barbican-ui | 08:28 |
kleini | gokhani: we have massive problems with NFS shares everywhere in production. It breaks all the time if either client or server restarts, hiccups, or the network has issues. Consider using something else that is more resilient against such issues, like Ceph. | 08:29 |
*** vesper11 has quit IRC | 08:30 | |
noonedeadpunk | I totally agree that NFS in prod is a curse. | 08:30 |
* noonedeadpunk migrating NFS -> Ceph workloads right now | 08:30 | |
noonedeadpunk | gokhani: anyway, what does systemd status say regarding that mount? | 08:33 |
noonedeadpunk | and how nova_nfs_client is defined? | 08:34 |
gokhani | kleini, in fact, we are using NFS in our prod (OpenStack Pike version); it has been 3 years and it is working successfully. In prod we are using NetApp. | 08:37 |
noonedeadpunk | oh, so it's on Pike... | 08:37 |
gokhani | no, now this problem is on my development environment and it is ussuri. | 08:38 |
noonedeadpunk | ok, right, then the previous 2 questions: systemctl status of the mount, and how is nova_nfs_client set? | 08:39 |
noonedeadpunk | kleini: if you decide to migrate to ceph I might have some tricks for that :) | 08:40 |
gokhani | this is my nova nfs client settings > http://paste.openstack.org/show/802368/ | 08:40 |
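For reference, a hedged sketch of the variable shape being asked about (key names as in the os_nova role defaults; server and paths here are placeholders, not this deployment's real values):

```yaml
nova_nfs_client:
  - local_path: /var/lib/nova/instances
    remote_path: /var/nfs/nova
    server: "nfsserver"
    type: nfs4
    options: "_netdev,auto"
```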
*** tosky has joined #openstack-ansible | 08:41 | |
gokhani | I checked uid and gid, they are the same as on my NFS server | 08:42 |
gokhani | kleini, we will upgrade our Pike environment to Ussuri, and we are planning to use Ceph. It will be good :) | 08:43 |
noonedeadpunk | What does systemctl status var_lib_nova_instances.mount (or smth like that) say | 08:46 |
noonedeadpunk | might be `-` instead of `_` | 08:47 |
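The unit name systemd derives from a mount path can be checked without guessing; a minimal shell sketch (systemd-escape does this properly, the expansion below only covers the simple no-special-characters case):

```shell
path=/var/lib/nova/instances
# strip the leading '/', turn the remaining '/' into '-', append .mount
unit="$(printf '%s' "${path#/}" | tr '/' '-').mount"
echo "$unit"
# systemctl status "$unit" would then show the mount state
```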
noonedeadpunk | btw I'm wondering if it's the missing bit https://opendev.org/openstack/ansible-role-systemd_mount/commit/bcbd5344cf56338adea03ad3ef41466fd8615e70 | 08:48 |
kleini | noonedeadpunk: thanks for your offer. for OpenStack and Proxmox we are already using Ceph since Cuttlefish release. we only have some rare cases, where still NFS is used. those will be migrated this year | 08:48 |
noonedeadpunk | yeah, it has not been backported to Ussuri | 08:49 |
jrosser | noonedeadpunk: I didn't see that trouble with stackviz here https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/770281 | 08:49 |
noonedeadpunk | gokhani: you can try cherry-picking this patch https://review.opendev.org/c/openstack/ansible-role-systemd_mount/+/754978 | 08:50 |
noonedeadpunk | jrosser: yeah, because we don't build wheels there | 08:50 |
jrosser | ah ok | 08:50 |
noonedeadpunk | and that is when I'm testing with build wheels set to true | 08:50 |
noonedeadpunk | I guess it brings up extra complexity to the new pip resolver... | 08:50 |
jrosser | ok well that is coming up because somehow stackviz ends up as both a requirement and a constraint and that fails | 08:51 |
gokhani | noonedeadpunk, I am getting these errors in syslog > http://paste.openstack.org/show/802371/ | 08:51 |
noonedeadpunk | um. that sounds like networking thing... | 08:51 |
jrosser | noonedeadpunk: this is what was needed for installs from git tarball to fix similar errors | 08:52 |
jrosser | https://github.com/openstack/ansible-role-python_venv_build/commit/c9eb3b1c905333282e597598e73c5459a4f5c146 | 08:52 |
jrosser | *git repo | 08:52 |
jrosser | I expect some adjustment to that is needed to handle tarballs | 08:53 |
gokhani | noonedeadpunk, when I google kernel error 107 it says "107 ENOTCONN Transport endpoint is not connected". Maybe I have problems with my NICs | 08:55 |
noonedeadpunk | or maybe with mtu (but not sure) | 08:56 |
jrosser | would be interesting to know if we can use stackviz_tarball: "https://tarballs.opendev.org/openstack/stackviz/dist/stackviz-latest.tar.gz#egg=stackviz" | 08:58 |
jrosser | and split the string on egg= again | 08:58 |
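Splitting that string on egg= is plain shell (or Jinja) string surgery; a hedged sketch of the idea:

```shell
url="https://tarballs.opendev.org/openstack/stackviz/dist/stackviz-latest.tar.gz#egg=stackviz"
pkg="${url##*egg=}"      # everything after 'egg=' -> the package name
tarball="${url%%#egg=*}" # everything before '#egg=' -> the plain tarball URL
echo "$pkg"
echo "$tarball"
```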
gokhani | noonedeadpunk, MTU is 1500 on all NICs | 09:00 |
gokhani | noonedeadpunk, I also get lots of timeout errors or warnings in the rabbitmq cluster. I am working on performance tuning for rabbitmq. I tried these kernel settings > http://paste.openstack.org/show/802374/ . Do you have any ideas for rabbitmq performance tuning? | 09:14 |
gokhani | noonedeadpunk, some error logs on rabbitmq > http://paste.openstack.org/show/802375/ | 09:16 |
noonedeadpunk | no, not really, never tried to tune it... | 09:17 |
noonedeadpunk | it's just working for me nowadays | 09:18 |
gokhani | noonedeadpunk, thanks for your help. I also mounted my nova NFS share after removing my cinder NFS share from the NFS server. But now I am struggling with rabbitmq errors :( | 09:25 |
*** yasemind has joined #openstack-ansible | 09:40 | |
*** pto has joined #openstack-ansible | 09:45 | |
pto | I think there is a bug in the letsencrypt support. The renewal fails: http://paste.openstack.org/show/802378/ | 09:51 |
pto | It seems like it's trying to bind on port 80 where haproxy runs | 09:51 |
noonedeadpunk | eventually it should run with `"--http-01-address {{ ansible_host }} --http-01-port 8888"` | 10:10 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible-haproxy_server/ussuri/configure-haproxy.html#using-certificates-from-letsencrypt | 10:10 |
noonedeadpunk | pto: or you're running on Victoria? | 10:11 |
noonedeadpunk | because I changed default there indeed | 10:11 |
pto | I'm on ussuri | 10:11 |
noonedeadpunk | then you must have haproxy_ssl_letsencrypt_setup_extra_params set | 10:12 |
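On Ussuri that would be a user_variables override along these lines (the value is the one quoted earlier in the conversation; hedged, check the haproxy role docs for your exact release):

```yaml
haproxy_ssl_letsencrypt_setup_extra_params: "--http-01-address {{ ansible_host }} --http-01-port 8888"
```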
pto | http://paste.openstack.org/show/802379/ | 10:15 |
pto | Manual renewal gives: http://paste.openstack.org/show/802381/ | 10:16 |
noonedeadpunk | I'm wondering why `pre-hook command "/etc/letsencrypt/renewal-hooks/pre/haproxy-pre" returned error code 124` | 10:23 |
noonedeadpunk | as eventually this should start a temp server behind haproxy, and haproxy should have an acl to forward to it | 10:24 |
noonedeadpunk | ok, I'm wrong. actually the pre-hook I guess should just sleep long enough for haproxy to see the backend on port 8888 | 10:32 |
noonedeadpunk | pto do you have an acl for letsencrypt in haproxy config? | 10:34 |
jrosser | if this is ussuri i think that the whole of haproxy_default_services needs overriding to make the ACL work | 10:35 |
noonedeadpunk | the nasty thing is that if you use horizon, you might need to override whole haproxy_default_services | 10:35 |
noonedeadpunk | yeah | 10:35 |
jrosser | last section here https://github.com/openstack/openstack-ansible/blob/stable/ussuri/doc/source/user/security/ssl-certificates.rst | 10:36 |
jrosser | pto: ^ this stuff is all much nicer in victoria but it wasn't really backportable | 10:37 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Add haproxy_*_service variables https://review.opendev.org/c/openstack/openstack-ansible/+/774126 | 10:37 |
noonedeadpunk | not sure how appropriate it is to backport such a huge patch, but I think for those who already have overrides that might be ok? | 10:38 |
pto | noonedeadpunk: I am using the default config. So no acl's in haproxy | 10:39 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Add haproxy_*_service variables https://review.opendev.org/c/openstack/openstack-ansible/+/774126 | 10:40 |
jrosser | pto: the documentation link i gave is necessary | 10:40 |
jrosser | otherwise haproxy will not redirect the challenge to certbot | 10:40 |
pto | The plan is to use a static public SSL certificate in a short while, so the problem goes away :-) | 10:40 |
pto | I think the problem is that port 8888 is not accessible from the outside right now | 10:40 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible/ussuri/user/security/index.html#letsencrypt-certificates might be a bit more readable | 10:40 |
noonedeadpunk | yes, because you don't have acl | 10:41 |
noonedeadpunk | since letsencrypt always asks on port 80 for verification | 10:41 |
jrosser | pto: it doesn't work like that; port 8888 is only on the backend, and an ACL on the haproxy frontend port 80 sends the acme challenge to the backend/port 8888 | 10:41 |
jrosser | this is needed because you have to renew all the certs on all the haproxies, but the VIP is only ever present on one of them | 10:42 |
jrosser | *external VIP | 10:42 |
jrosser | so the loadbalancer function of haproxy is key to ensuring that any of the nodes can renew | 10:43 |
pto | jrosser: which is the VIP url on port 8888, which haproxy then proxies back to the letsencrypt server right? | 10:44 |
pto | jrosser: Otherwise I can't see how the request would ever reach the LE server | 10:44 |
jrosser | do we talk about the renewal request from certbot to LE, or the challenge from LE to certbot? | 10:45 |
pto | I guess the challenge is present in both requests | 10:50 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Import wheels build only when necessary https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/774159 | 10:51 |
jrosser | pto: certbot on all the haproxy nodes needs to be able to hit the LE API endpoint on ports 80 and 443 to request a renewal | 10:52 |
jrosser | so your haproxy nodes regardless of the VIP need some egress for https, be that a default route, NAT, firewall, proxy, whatever | 10:53 |
jrosser | once they hit the LE renewal API, LE then calls back to the IP looked up from DNS for the FQDN, always on port 80 | 10:53 |
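The resulting haproxy flow can be sketched like this (frontend/backend names are hypothetical; in OSA the real change goes through the haproxy_default_services override the linked docs describe):

```
frontend base-front
    bind *:80
    # send ACME HTTP-01 challenges to the local certbot standalone server
    acl letsencrypt-acl path_beg /.well-known/acme-challenge/
    use_backend letsencrypt-back if letsencrypt-acl

backend letsencrypt-back
    server certbot 127.0.0.1:8888
```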
pto | From the docs: https://docs.openstack.org/openstack-ansible/ussuri/user/security/index.html#letsencrypt-certificates I think I'm missing the last part, which introduces the ACL for .well-known/acme-challenge/ | 10:54 |
pto | So it's probably just me who missed that part | 10:54 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 10:54 |
noonedeadpunk | no, it's not only you. because we don't mention that in haproxy docs | 10:55 |
noonedeadpunk | and even I pointed to the wrong place :( | 10:55 |
noonedeadpunk | btw, I found what Jesse was writing about tags and it's in 774224 | 10:55 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 10:57 |
jrosser | docs for haproxy/LE were tough because it's a reusable role, and I think I put stuff about the ACL there too? | 10:58 |
pto | I think you have updated the docs since I deployed. It was very confusing then. It's much better now, but still not a trivial task to set up I think | 10:58 |
noonedeadpunk | yeah, you did | 10:58 |
noonedeadpunk | it just wasn't so "obvious" that you need to replace all services because of that | 10:59 |
jrosser | iirc it was only in a basic state for ussuri | 10:59 |
noonedeadpunk | and that's ok in the context of the role docs | 10:59 |
jrosser | nearly made my head explode making it work at all in the first place, so I'm not surprised it's causing difficulty :/ | 11:00 |
jrosser | I think adding a diagram to the docs would be hugely helpful | 11:01 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 11:02 |
noonedeadpunk | btw, https://opendev.org/openstack/ansible-role-pki is made | 11:02 |
noonedeadpunk | I guess it will be the place for haproxy code as well somehow? | 11:02 |
noonedeadpunk | *letsencrypt code | 11:02 |
*** gokhani has quit IRC | 11:06 | |
jrosser | noonedeadpunk: from the numbers I got yesterday the venv build role accounts for quite a large proportion of the total tasks we run | 11:09 |
jrosser | can we make the library symlinking tasks also be optional? | 11:09 |
jrosser | anything we can do there to reduce the number of tasks gets multiplied by ~20 | 11:10 |
*** gokhani has joined #openstack-ansible | 11:11 | |
pto | Thank you all for helping today. You are all awesome :-) | 11:19 |
noonedeadpunk | jrosser: yeah, fair note about symlinking | 11:26 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Import wheels build only when necessary https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/774159 | 11:29 |
ierdem | Hi everyone, neutron-linuxbridge-agent throws a "queue not found" error on my compute nodes. I checked the rabbitmq queues and realized that the queues in the neutron logs don't exist. Now I cannot do anything on my cluster. How can I create these queues, or do you know the proper solution to this problem? Neutron logs: | 11:31 |
ierdem | http://paste.openstack.org/show/802377/, Rabbitmq /neutron queue list: http://paste.openstack.org/show/802376/ | 11:31 |
ierdem | Similar logs exist on other services such as heat and nova | 11:32 |
noonedeadpunk | eventually neutron is the one that should create them | 11:34 |
noonedeadpunk | and can you telnet 172.30.25.206 5671 from neutron container/host? | 11:35 |
noonedeadpunk | as I'd say it's totally a rabbit issue | 11:36 |
ierdem | noonedeadpunk I can telnet from compute host to 172.30.25.206 5671 | 11:36 |
ierdem | Could this problem be caused by network issues? Could recreating or reinstalling the rabbitmq cluster be a solution? | 11:38 |
noonedeadpunk | I'd say a network issue was my first assumption too, but since you can telnet... | 11:39 |
noonedeadpunk | well, I guess it's worth at least trying to run openstack-ansible playbooks/rabbitmq_install.yml -e rabbitmq_upgrade=true | 11:39 |
noonedeadpunk | just in case | 11:39 |
noonedeadpunk | either it will confirm network issues or may heal rabbit cluster and re-create required stuff | 11:40 |
ierdem | If I run the rabbitmq_install.yml playbook, will older messages/queues be deleted? | 11:40 |
noonedeadpunk | yeah, they will, I'm afraid | 11:42 |
*** shyamb has joined #openstack-ansible | 11:43 | |
ierdem | I have ~20 instances and for now I can only ssh to them, not ping or curl. This problem first occurred when I tried to change their security groups, and the first error in neutron was "queue q-agent-notifier-securitygroup-update.compute06 not found". So as I understand it, neutron creates the necessary queues if it needs them | 11:45 |
noonedeadpunk | yep, exactly. | 11:46 |
ierdem | So even if I reinstall rabbitmq, the necessary queues for neutron may not be created by the playbook | 11:46 |
noonedeadpunk | queues are not created by the playbook | 11:46 |
noonedeadpunk | it's neutron's responsibility to create and somehow manage them | 11:47 |
noonedeadpunk | and I'm absolutely sure that from neutron side things are good | 11:47 |
noonedeadpunk | and it's rabbit that needs attention (or networking) | 11:47 |
ierdem | hmm, is there any list of the necessary neutron queues? Can we create them manually? I know it is not a proper solution but I'm facing this problem for the first time | 11:48 |
noonedeadpunk | no, you can't | 11:48 |
noonedeadpunk | I think major thing there is `due to timeout` | 11:49 |
noonedeadpunk | So I'd say it would create them if not for the timeout | 11:50 |
ierdem | So it seems the only way is to reinstall rabbitmq, as you said | 11:51 |
ierdem | noonedeadpunk I am trying it now, thank you | 11:54 |
*** gokhani has quit IRC | 11:57 | |
*** yasemind has quit IRC | 12:01 | |
ierdem | noonedeadpunk I want to ask something I realized minutes ago. When I check the Hypervisor List in horizon, all compute nodes are listed with their short names like compute07, except compute06, which is listed with its FQDN (compute06.openstack.local). What causes this, any idea? http://paste.openstack.org/show/802383/ | 12:03 |
noonedeadpunk | yeah, I guess it's an order of records in /etc/hosts for 127.0.1.1 or 127.0.0.1 | 12:04 |
noonedeadpunk | I can bet that output of python3 -c "import socket; print(socket.gethostname())" and python3 -c "import socket; print(socket.getfqdn())" would differ for this host | 12:05 |
*** gokhani has joined #openstack-ansible | 12:05 | |
*** shyamb has quit IRC | 12:09 | |
ierdem | noonedeadpunk you're right, I checked and its FQDN was different. In /etc/hosts, 127.0.1.1 was set to log01 because the compute06 server's original name was log01. I changed it to compute06, thank you | 12:15 |
*** zul has joined #openstack-ansible | 12:15 | |
openstackgerrit | Merged openstack/openstack-ansible-os_zun stable/ussuri: Update zun role to match current requirements https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/771547 | 12:20 |
*** pto has quit IRC | 12:42 | |
*** cshen_ has joined #openstack-ansible | 12:55 | |
*** cshen has quit IRC | 12:59 | |
*** strattao has joined #openstack-ansible | 13:05 | |
*** spatel has joined #openstack-ansible | 13:48 | |
*** cshen_ has quit IRC | 13:56 | |
*** chandankumar is now known as raukadah | 13:59 | |
*** LowKey has quit IRC | 14:07 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Adjust magnum CI image https://review.opendev.org/c/openstack/openstack-ansible/+/774243 | 14:14 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Adjust magnum CI image https://review.opendev.org/c/openstack/openstack-ansible/+/774243 | 14:15 |
*** spatel has quit IRC | 14:35 | |
*** gokhani has quit IRC | 14:43 | |
*** cshen has joined #openstack-ansible | 14:43 | |
ierdem | I cannot see my compute06 node when I run "openstack hypervisor list" but I can see it when I run "openstack compute service list". I have 3 instances on compute06 and their disks are on the NFS server. Now I want my cluster to see compute06 as a hypervisor again, but I could not make it. How can I do that? | 14:44 |
ierdem | by the way noonedeadpunk I ran rabbitmq playbook as you said and it is working now | 14:44 |
noonedeadpunk | good news! | 14:45 |
noonedeadpunk | regarding compute06, I'm wondering if it cannot register with the "new" name because it is already in openstack compute service list | 14:46 |
noonedeadpunk | I guess it should report some issues in journald on compute itself | 14:46 |
noonedeadpunk | and you should restart nova-compute service as well | 14:46 |
ierdem | I checked the compute06 journals but there is nothing suspicious, and yes, I can restart the nova-compute service successfully | 14:50 |
ierdem | is there any way to add this host as a hypervisor without losing any instances on it | 14:50 |
noonedeadpunk | well eventually it needs to be discovered in nova | 14:51 |
ierdem | I ran nova-manage discover command and it gave error, http://paste.openstack.org/show/802388/ | 14:52 |
noonedeadpunk | you should run it from nova-api container | 14:52 |
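That discovery step, run from inside the nova-api container, is the standard nova-manage invocation (--verbose just lists what gets mapped):

```
nova-manage cell_v2 discover_hosts --verbose
```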
ierdem | oh, okey | 14:53 |
ierdem | I ran it and it didn't change anything, http://paste.openstack.org/show/802389/ | 14:55 |
ierdem | http://paste.openstack.org/show/802390/ | 14:55 |
noonedeadpunk | is libvirtd running? | 14:55 |
noonedeadpunk | and what about compute service list? | 14:56 |
ierdem | there was an error in libvirtd on compute06, Feb 05 14:22:41 compute06 libvirtd[2953]: End of file while reading data: Input/output error | 14:57 |
ierdem | I restarted it and am waiting now | 14:57 |
ierdem | compute service list shows all nodes, including compute06, correctly | 14:58 |
ierdem | http://paste.openstack.org/show/802391/ | 14:58 |
ierdem | Oh after restarting libvirtd, I restarted nova-compute and it is working now! Thank you noonedeadpunk | 15:00 |
ierdem | I can see all hypervisors and states of all are up | 15:00 |
noonedeadpunk | ok, great) | 15:02 |
*** LowKey has joined #openstack-ansible | 15:02 | |
noonedeadpunk | I think libvirt didn't like the changed hostname | 15:02 |
noonedeadpunk | but I'm not sure the instances aren't stuck there | 15:03 |
noonedeadpunk | since in the DB it might be another host | 15:03 |
ierdem | I am restarting all instances now; after that I will check the instances which run on compute06 | 15:04 |
ierdem | Now there is another problem.. The hypervisor went into Down state again; I think it's caused by the var-lib-nova-instances.mount service. Its logs say it cannot umount the /var/lib/nova/instances path | 15:11 |
*** gokhani has joined #openstack-ansible | 15:12 | |
ierdem | http://paste.openstack.org/show/802393/ | 15:13 |
*** rpittau is now known as rpittau|afk | 15:17 | |
*** macz_ has joined #openstack-ansible | 16:15 | |
*** pcaruana has quit IRC | 16:18 | |
*** tosky has quit IRC | 16:29 | |
openstackgerrit | Merged openstack/openstack-ansible-os_keystone stable/train: Allow OIDCClaimDelimiter to be set in the apache config file https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/773966 | 18:19 |
*** spatel has joined #openstack-ansible | 18:20 | |
*** maharg101 has quit IRC | 18:23 | |
*** tosky has joined #openstack-ansible | 18:37 | |
*** spatel has quit IRC | 18:38 | |
*** spatel has joined #openstack-ansible | 18:40 | |
openstackgerrit | Merged openstack/openstack-ansible-os_keystone stable/victoria: Allow OIDCClaimDelimiter to be set in the apache config file https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/773964 | 18:50 |
ierdem | Hi again, I added a new compute node to my OSA environment. Now the new compute host's nova-compute service goes into down state; in the NFS mount point for instances there are many locks http://paste.openstack.org/show/802403/ | 19:08 |
ierdem | What causes this problem? My other compute nodes work fine. Did I miss sth? | 19:08 |
*** andrewbonney has quit IRC | 19:15 | |
ierdem | The process that creates the locks is http://paste.openstack.org/show/802406/ | 19:16 |
spatel | I don't run NFS in my cloud but I would say check the NFS logs etc. Are you running NFSv4? | 19:22 |
ierdem | I am using NFSv3 | 19:23 |
spatel | I would say use v4 it has better lock handling | 19:24 |
spatel | v3 has a long history of locking issues so I would highly recommend using v4 | 19:27 |
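In fstab terms the switch is just the filesystem type (server and path below are placeholders; via OSA the same is usually expressed through the systemd_mount role with type: nfs4):

```
nfsserver:/var/nfs/nova  /var/lib/nova/instances  nfs4  _netdev,auto  0  0
```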
*** gokhani has quit IRC | 19:41 | |
*** gokhani has joined #openstack-ansible | 19:43 | |
*** gokhani has quit IRC | 20:17 | |
*** maharg101 has joined #openstack-ansible | 20:19 | |
*** ierdem has quit IRC | 20:21 | |
*** maharg101 has quit IRC | 20:24 | |
*** LowKey has quit IRC | 20:37 | |
openstackgerrit | Merged openstack/openstack-ansible-os_nova master: Move nova pip package from a constraint to a requirement https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/770279 | 20:56 |
openstackgerrit | Merged openstack/openstack-ansible-os_cinder master: Move cinder pip package from a constraint to a requirement https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/770272 | 21:05 |
openstackgerrit | Merged openstack/openstack-ansible-os_keystone master: Move keystone pip package from a constraint to a requirement https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/770271 | 21:07 |
openstackgerrit | Merged openstack/openstack-ansible-os_placement master: Move placement pip package from a constraint to a requirement https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/770280 | 21:08 |
openstackgerrit | Merged openstack/openstack-ansible-os_glance master: Move glance pip package from a constraint to a requirement https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/770546 | 21:18 |
* noonedeadpunk wondering how heavily gates will be broken on Monday with the release of 20.04.2 and kernel 5.8 | 21:27 |
noonedeadpunk | oh, 5.8 is only for HWE kernel | 21:30 |
* noonedeadpunk not worried anymore. clean forgot that it's not CentOS | 21:30 | |
noonedeadpunk | that is the whole log of upgraded things for 20.04.1 -> 20.04.2 on my workstation... http://paste.openstack.org/show/802409/ | 21:32 |
*** Underknowledge3 has joined #openstack-ansible | 21:39 | |
*** Underknowledge has quit IRC | 21:41 | |
*** Underknowledge3 is now known as Underknowledge | 21:41 | |
spatel | i don't think i am going to upgrade it :) | 21:52 |
*** macz_ has quit IRC | 21:52 | |
spatel | noonedeadpunk Hey, I finally finished my blog on the Designate DNS implementation with OpenStack-Ansible - https://satishdotpatel.github.io/designate-integration-with-powerdns/ | 21:53 |
*** spatel has quit IRC | 22:02 | |
*** spatel has joined #openstack-ansible | 22:03 | |
*** spatel has quit IRC | 22:04 | |
*** dasp_ has joined #openstack-ansible | 22:18 | |
*** dasp has quit IRC | 22:20 | |
*** waxfire has quit IRC | 22:21 | |
*** spatel has joined #openstack-ansible | 22:21 | |
*** waxfire has joined #openstack-ansible | 22:21 | |
*** fnpanic has joined #openstack-ansible | 22:23 | |
*** spatel has quit IRC | 22:26 | |
*** cshen has quit IRC | 23:27 | |
openstackgerrit | Merged openstack/openstack-ansible-os_tempest master: Move tempest pip package from a constraint to a requirement https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/770281 | 23:28 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!