*** tosky has quit IRC | 00:57 | |
*** jawad_axd has joined #openstack-ansible | 01:01 | |
*** jawad_axd has quit IRC | 01:06 | |
*** yann-kaelig has quit IRC | 01:22 | |
*** spatel has joined #openstack-ansible | 02:06 | |
*** spatel has quit IRC | 02:09 | |
*** spatel has joined #openstack-ansible | 02:09 | |
*** spatel has quit IRC | 02:51 | |
*** raukadah is now known as chandankumar | 03:22 | |
*** macz_ has quit IRC | 04:05 | |
*** macz_ has joined #openstack-ansible | 05:00 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** jawad_axd has joined #openstack-ansible | 06:06 | |
*** jawad_axd has quit IRC | 06:07 | |
*** jawad_axd has joined #openstack-ansible | 06:07 | |
*** PrinzElvis has quit IRC | 07:46 | |
*** PrinzElvis has joined #openstack-ansible | 07:47 | |
*** stduolc has joined #openstack-ansible | 07:48 | |
*** sshnaidm_ has joined #openstack-ansible | 08:02 | |
*** sshnaidm has quit IRC | 08:05 | |
*** shyamb has joined #openstack-ansible | 09:08 | |
*** dirk has quit IRC | 09:20 | |
*** dirk1 has joined #openstack-ansible | 09:47 | |
*** macz_ has quit IRC | 09:58 | |
*** sshnaidm_ is now known as sshnaidm|rover | 10:50 | |
*** tosky has joined #openstack-ansible | 10:55 | |
*** spatel has joined #openstack-ansible | 11:06 | |
admin0 | a new osa install on 21.2.0 gives RuntimeError: rbd python libraries not found .... while the same worked in another setup | 11:07 |
admin0 | and i can't understand why the playbooks all go fine but nova compute fails | 11:07 |
*** spatel has quit IRC | 11:10 | |
kleini | rbd sounds like Ceph related | 11:12 |
kleini | I still have the problem with 21.2.0 that the Ceph MONs list still needs to be defined, even though configuration from a file works fine. | 11:13 |
kleini | maybe this is the same for you, as the ceph-client role does not run if ceph_mons is not defined | 11:16 |
admin0 | i defined the ceph mons | 11:18 |
admin0 | but there is no /etc/ceph created | 11:18 |
admin0 | but the nova configs have the ceph config | 11:18 |
admin0 | openstack_config: true -- this should make osa ssh to the mons, copy and download the configs and keys, right? | 11:23 |
admin0 | the strange thing is i used the same 21.2.0 on almost 4-5 new builds involving ceph .. it worked on all others | 11:23 |
admin0 | failed here | 11:23 |
admin0 | unless i missed something or did something wrong here ;) | 11:30 |
kleini | at least your approach sounds correct to me | 11:32 |
kleini | sorry, I don't have any idea what could have failed | 11:32 |
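For reference, a minimal user_variables sketch of the kind of setup being discussed above. The monitor addresses are illustrative only, and both variable names are taken from the conversation rather than from a verified defaults file, so check them against the ceph_client role for your release:

```yaml
# user_variables.yml -- hedged sketch, not admin0's actual config
ceph_mons:                      # monitor hosts the ceph-client tasks will ssh to
  - 172.29.244.10               # illustrative addresses only
  - 172.29.244.11
  - 172.29.244.12
openstack_config: true          # per the discussion: fetch ceph.conf and keyrings from the mons
```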
admin0 | also, 22.0.0.rc1 was released, but it fails on TASK [haproxy_server : Unarchive HATop] => fatal: [c1]: FAILED! => {"changed": false, "msg": "dest '/opt/cache/files/v0.8.0' must be an existing dir"} | 11:33 |
*** shyamb has quit IRC | 11:59 | |
*** rfolco has joined #openstack-ansible | 12:20 | |
*** stduolc has quit IRC | 12:30 | |
*** stduolc has joined #openstack-ansible | 12:31 | |
*** janno has quit IRC | 12:58 | |
*** janno has joined #openstack-ansible | 12:58 | |
*** spatel has joined #openstack-ansible | 14:05 | |
kleini | Is there a list or a document that describes which OSA variables can be used to get IP addresses for configuration files? E.g. I want to define IP addresses in pools.yaml for Designate, and I would like to get the br-mgmt IP of the host running the Designate container. | 14:05 |
kleini | Or the current container's br-mgmt IP, to insert into some configuration. | 14:07 |
spatel | kleini: is this what you're looking for https://docs.openstack.org/openstack-ansible/latest/reference/configuration/using-overrides.html | 14:17 |
kleini | I know all of that, but it still does not help me get the br-mgmt IP address of the host running some LXC container, so that inside that LXC container I can put the host's IP address into the configuration file | 14:23 |
kleini | I already put OSA into debug mode at the point where the configuration file is generated and had a look at the available variables, but that list is somewhat long... | 14:26 |
*** sshnaidm|rover has quit IRC | 14:49 | |
*** spatel has quit IRC | 14:52 | |
admin0 | kleini, you are looking for this ? /opt/openstack-ansible/scripts/inventory-manage.py -l | 14:57 |
kleini | yeah, kind of. the br-mgmt IP of the container I am currently templating a configuration file on, and its LXC host's br-mgmt IP | 15:02 |
*** spatel has joined #openstack-ansible | 15:05 | |
admin0 | the output of that command has the br-mgmt of the container | 15:05 |
admin0 | exactly what you wanted | 15:05 |
kleini | and how is the variable named? | 15:06 |
kleini | containing that IP address? | 15:06 |
admin0 | you should run that command once on the deployment host | 15:07 |
admin0 | and you will see the output and know how to parse it | 15:07 |
admin0 | or for ansible etc | 15:07 |
kleini | what is the variable name for my current container's br-mgmt IP address, independent of the container I am in | 15:09 |
jawad_axd | Hi folks! Question: when I have taken a snapshot of an instance_disk in ceph and then try to delete the instance, it deletes from openstack but the instance_disk does not get deleted from ceph. I can understand it has a snapshot, but shouldn't it purge the snapshot and delete the instance_disk afterwards? | 15:11 |
jrosser | kleini: the current container mgmt network address is container_address, see also these https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/all/all.yml#L37-L38 | 15:26 |
jrosser | kleini: if you wanted a list of the mgmt address for a particular container type you can do something like this https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/keystone_all.yml#L32 | 15:28 |
kleini | jrosser: thanks, that helps a lot | 15:30 |
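As a rough illustration of jrosser's pointers, something like the following could go in an override; only container_address and the groups/hostvars pattern come from the linked group_vars, the variable names on the left are placeholders:

```yaml
# user_variables.yml -- illustrative only
my_container_mgmt_ip: "{{ container_address }}"   # br-mgmt IP of the current container
keystone_mgmt_ips: "{{ groups['keystone_all'] | map('extract', hostvars, 'container_address') | list }}"
```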
ThiagoCMC | Morning! Are you guys running Victoria already? I'm looking forward to executing that `scripts/run_upgrade.sh` but I'm afraid that it'll mess up my deployment. Thing is, my deployment is 100% based on QEMU and LXD, I mean, all my Controllers are QEMU VMs and all my Network/Computes/OSDs are LXD, so, I'm thinking about making a `Libvirt snapshot` of at least all of my controllers before the | 16:41 |
ThiagoCMC | upgrade to Victoria. Sounds like a good idea, right? | 16:41 |
ThiagoCMC | If something goes wrong, I can revert my controllers back and probably reinstall the compute nodes (I don't want to make LXD snapshots and grab all the local qcows too) | 16:42 |
spatel | Victoria RC1 is out | 16:47 |
spatel | I am deploying RC1 right now on my production | 16:47 |
spatel | RC1 is pretty much close to stable (now it's just a matter of time) | 16:48 |
ThiagoCMC | Nice! But, fresh install or upgrade from Ussuri? | 16:48 |
spatel | fresh (because i convert centos to ubuntu) | 16:48 |
ThiagoCMC | That's a very smart move! =P | 16:48 |
spatel | why don't you quickly set up an AIO ussuri and run the upgrade to victoria | 16:48 |
spatel | if your openstack isn't running a production workload then you can go directly | 16:49 |
ThiagoCMC | It's production and I don't have a clone env to play with. Sounds like AIO is the way to go. | 16:50 |
ThiagoCMC | I was thinking about making a clone env of my OSA, in a Heat Template, so I can launch a stack that would be OpenStack within OpenStack, just to test new playbooks... | 16:51 |
ThiagoCMC | I'll start this with AIO! | 16:51 |
admin0 | ThiagoCMC, stuck with 22.0.0.rc1: it was released, but it fails on TASK [haproxy_server : Unarchive HATop] => fatal: [c1]: FAILED! => {"changed": false, "msg": "dest '/opt/cache/files/v0.8.0' must be an existing dir"} on victoria | 16:54 |
spatel | why are you deploying 22.0.0.rc1 ? | 16:54 |
spatel | you should use stable/ussuri for aio right? | 16:55 |
admin0 | 22.0.0.rc1 is to test it and report issues | 16:55 |
admin0 | i am on 21.2.0 for prod | 16:55 |
ThiagoCMC | Are there significant code differences in between the "stable/victoria" (or stable/something) branch and the "22.0.0.rc1" tagged one (or 21.2.0)? | 16:56 |
spatel | RC1 should be close to stable | 16:57 |
ThiagoCMC | The `stable/something` branches are bleeding edge, right? They will be tagged in a next release...? | 16:57 |
spatel | I am running victoria on my lab and didn't see any issue | 16:57 |
spatel | Yes | 16:58 |
ThiagoCMC | cool | 16:58 |
spatel | RC1 will soon turn into stable (a matter of days) | 16:58 |
ThiagoCMC | Okdok =) | 16:58 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_barbican master: [doc] Add barbican configuration page https://review.opendev.org/c/openstack/openstack-ansible-os_barbican/+/768513 | 17:15 |
*** jawad_axd has quit IRC | 17:21 | |
*** jawad_axd has joined #openstack-ansible | 17:22 | |
kleini | admin0, spatel, jrosser: my results in dynamically defining designate_pools_yaml for having PowerDNS running on the LXC hosts running the designate containers: http://paste.openstack.org/show/801313/ | 17:23 |
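Not the contents of that paste, but a rough sketch of the shape such an override could take, assuming the container's physical_host is in the inventory and exposes its management IP as container_address (worth verifying for metal hosts):

```yaml
# user_variables.yml -- hypothetical sketch, not kleini's actual paste
designate_pools_yaml:
  - name: default
    description: pool served by pDNS on the LXC hosts
    ns_records:
      - hostname: ns1.example.com.                              # placeholder
        priority: 1
    nameservers:
      - host: "{{ hostvars[physical_host]['container_address'] }}"  # LXC host br-mgmt IP
        port: 53
    targets:
      - type: pdns4
        masters:
          - host: "{{ container_address }}"                     # this designate container's br-mgmt IP
            port: 5354
        options:
          host: "{{ hostvars[physical_host]['container_address'] }}"
          port: 53
          api_endpoint: http://127.0.0.1:8081                   # placeholder
          api_token: changeme                                   # placeholder
```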
spatel | kleini: nice! you are running pDNS on OSA host machine? | 17:27 |
kleini | yes, on infra hosts. and from there the zones are transferred to our central DNS | 17:29 |
kleini | I had good support from my colleagues developing pDNS, otherwise pDNS deployment would have been as hard as designate deployment. | 17:30 |
spatel | We are running pDNS on dedicated VMware guest machine totally isolated from openstack. | 17:32 |
spatel | Because DNS is the first machine we need in a datacenter before deploying any other services. | 17:32 |
kleini | it is similar here, too, and therefore I deployed an additional instance that is just responsible for the zones from Designate | 17:34 |
spatel | kleini: are you going to manage multiple zones using designate? | 17:37 |
spatel | what did you set neutron_dns_domain: ? | 17:38 |
kleini | neutron_dns_domain: os.oxoe.int. | 17:42 |
kleini | but I am running here only two private clouds not even reachable from the internet, just in company network | 17:42 |
kleini | both clouds provide the resources for running unit and performance tests of pDNS, dovecot and Open-Xchange App Suite | 17:44 |
spatel | kleini: let's say you want to add a new zone foo.bar.com, can you do that? | 17:55 |
admin0 | kleini, thanks .. i am only on bind | 18:14 |
admin0 | but wanted to move to pdns | 18:14 |
admin0 | at least try pdns | 18:14 |
admin0 | so need to manually create a container and install pdns there first ? | 18:15 |
admin0 | give us the full howto :) | 18:15 |
*** carlosmss has joined #openstack-ansible | 18:19 | |
spatel | Do you guys use a tag to checkout RC1 or just master? (Example: git checkout tags/22.0.0.0rc1 ) | 18:19 |
carlosmss | Hi guys, can someone help? I have a problem with bringing an interface link UP in openstack-ansible-stein, because the namespace used in the function that brings the "link" up in ha-mode is None. I edited the debug message to show the NS: "Interface brqxxx not found in namespace None" in get_link_id | 18:22 |
admin0 | spatel, i use tag | 18:24 |
admin0 | always use a tag | 18:24 |
spatel | i mostly use stable/<release> branch and never used tags | 18:25 |
spatel | In this case i want to try RC1 so maybe a tag would be good | 18:25 |
*** macz_ has joined #openstack-ansible | 18:26 | |
admin0 | good idea spatel .. on 22.0.0.rc1, for me it fails on TASK [haproxy_server : Unarchive HATop] | 18:27 |
spatel | Did you create the directory /var/cache/files by hand? | 18:27 |
spatel | I think it's trying to untar inside /var/cache/files | 18:28 |
spatel | maybe a bug.. i can take a look.. (currently deploying so I should hit that bug) | 18:28 |
admin0 | i should ? | 18:31 |
admin0 | doesn't the bootstrap/playbooks take care of that :) | 18:31 |
admin0 | i mean i don't recall doing this by hand ever in any other installs | 18:31 |
spatel | admin0: it does, that is why I'm saying it may be some race condition or a bug | 18:31 |
spatel | Because hatop was broken earlier for python3 and we recently changed the pointer to use a newer binary | 18:32 |
masterpe | We have OSA 20.1.6 installed, and we see the following message a couple of times a day: "ERROR oslo_messaging.rpc.server neutron_lib.exceptions.ProcessExecutionError: Exit code: 255; Stdin: ; Stdout: ; Stderr: Cannot find device "vxlan-10"" I have applied two patches to try to fix it (https://review.opendev.org/c/openstack/neutron/+/766939 and https://review.opendev.org/c/openstack/neutron/+/754005/) and I don't see the | 18:32 |
masterpe | error any more. But why did it get fixed, and which patch fixed it? My feeling is that https://review.opendev.org/c/openstack/neutron/+/754005/ fixes my issue. | 18:32 |
spatel | I did deploy it in my lab and i didn't see that bug | 18:33 |
spatel | admin0: https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/tasks/haproxy_install.yml#L27 | 18:37 |
admin0 | and you are on the same tag ? | 18:38 |
spatel | In lab i am on master (i had no issue there) | 18:39 |
admin0 | so maybe fixed in master but not on the tag ? | 18:39 |
spatel | right now deploying 22.0.0.rc1 tag so will see if i hit that bug | 18:39 |
admin0 | ok | 18:40 |
admin0 | reset your hosts :) | 18:40 |
admin0 | if master already created that dir, then you will not hit that bug | 18:40 |
admin0 | i reset and start greenfield every time | 18:40 |
admin0 | and mine is also not AIO | 18:40 |
admin0 | i don't know if aio or multi builds have different playbooks | 18:41 |
spatel | i don't think aio and multi builds use different playbooks | 18:41 |
spatel | otherwise we would have a big problem :) | 18:41 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/victoria: Fix config_template trackbranch https://review.opendev.org/c/openstack/openstack-ansible/+/768611 | 18:49 |
admin0 | https://gist.githubusercontent.com/a1git/7938dc0f22011770739c1b0917fbbd41/raw/f70046dc6e9c30097d1b4f501c0cf5a2e2a24a65/gistfile1.txt -- this seems to always fail on this one particular host .. the URL seems to work fine | 18:50 |
admin0 | is there a way for me to manually check/continue it | 18:50 |
spatel | admin0: check your repo container | 18:52 |
spatel | vendor.urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='172.29.236.9', port=8181): Read timed | 18:52 |
spatel | looks like it's not able to talk to the repo (possibly the repo service is dead or haproxy is not able to reach it) | 18:53 |
admin0 | curl http://172.29.236.9:8181/os-releases/21.2.0/ubuntu-20.04-x86_64/ gives me a whole lot of stuff from that same host | 18:53 |
admin0 | also out of 3 hypervisors, 2 work fine .. just this one fails | 18:53 |
admin0 | curl http://172.29.236.9:8181/os-releases/21.2.0/ubuntu-20.04-x86_64/ from this hypervisor also seems to work normally | 18:53 |
spatel | all 3 should work (if not, then maybe the lsyncd service has an issue) | 18:54 |
spatel | I had lots of fun with the repo service so i would say look at that.. (remove 2 repos from haproxy, it will help to debug) | 18:55 |
admin0 | let me know if you run into the same bug as i did with hatop | 18:58 |
admin0 | was planning to test 22 with ovn | 18:59 |
spatel | Doing it.. now | 18:59 |
spatel | admin0: what is the variable to change Region name? | 18:59 |
spatel | i forgot | 18:59 |
spatel | is it region_name: foo in user_variables ? | 19:00 |
admin0 | service_region | 19:00 |
admin0 | service_region: 'foo' | 19:00 |
admin0 | well, without the quotes | 19:01 |
spatel | got it | 19:01 |
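For the record, the override discussed above would look like this in user_variables.yml (variable name as given by admin0; confirm it still matches your release's group_vars):

```yaml
# user_variables.yml
service_region: foo   # region name used when registering service endpoints
```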
*** sshnaidm has joined #openstack-ansible | 19:11 | |
*** macz_ has quit IRC | 19:11 | |
*** sshnaidm is now known as sshnaidm|rover | 19:11 | |
kleini | admin0: I deploy pDNS as part of "before OSA" with the following role usage http://paste.openstack.org/show/801315/ | 19:38 |
kleini | so I deploy it on shared-infra_hosts on metal and not in a container | 19:40 |
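Not the contents of that paste, but a hedged sketch of what a "before OSA" play on the infra hosts might look like; the role name and its variables below are placeholders for illustration, not a specific published role:

```yaml
# pre-osa-pdns.yml -- hypothetical example, run before OSA's setup playbooks
- hosts: shared-infra_hosts
  become: true
  roles:
    - role: powerdns              # placeholder role name
      vars:
        pdns_listen_port: 53      # placeholder variables
        pdns_allow_axfr_to:
          - 192.0.2.10            # central DNS that receives the zone transfers
```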
spatel | jrosser: are you around? | 20:16 |
admin0 | kleini, thanks | 20:44 |
admin0 | spatel, did you face the issue on that tag ? | 20:44 |
spatel | I am stuck in this place... | 20:44 |
spatel | admin0: http://paste.openstack.org/show/801317/ | 20:45 |
spatel | very strange issue... | 20:45 |
spatel | trying to understand what is wrong here.. | 20:45 |
admin0 | Could not resolve hostname ostack-phx-api-1-1 | 20:48 |
spatel | but i can | 20:48 |
spatel | trying to understand from which host its not able to resolve | 20:48 |
admin0 | are you using dns for anything and not ip .. like for nfs, glance etc | 20:48 |
spatel | no DNS | 20:49 |
spatel | no nfs etc | 20:49 |
spatel | its very simple OSA deployment which i did many time in LAB | 20:49 |
spatel | never faced this issue before | 20:49 |
spatel | admin0: does your OSA deployment machine have the /etc/hosts file populated with all host names? | 20:53 |
admin0 | checking | 21:02 |
spatel | admin0: i think i found issue | 21:02 |
admin0 | it does not | 21:03 |
spatel | its dns issue.. | 21:03 |
spatel | re-running playbook | 21:08 |
spatel | ubuntu runs a local DNS at 127.0.0.53 and that was my issue | 21:08 |
spatel | do you upgrade ubuntu from time to time? (my motd is saying - 129 updates can be installed immediately. ) | 21:09 |
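A one-off way to sidestep that on a standalone deployment host (not an OSA-provided playbook; the host name and address below are examples only) is to pin the target names in /etc/hosts so resolution does not depend on the stub resolver at 127.0.0.53:

```yaml
# hosts-workaround.yml -- hypothetical sketch, run against the deployment host itself
- hosts: localhost
  become: true
  tasks:
    - name: Make OSA target host names resolvable locally
      ansible.builtin.lineinfile:
        path: /etc/hosts
        line: "172.29.236.11 ostack-phx-api-1-1"   # example address/name only
```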
*** macz_ has joined #openstack-ansible | 21:12 | |
*** macz_ has quit IRC | 21:17 | |
*** yann-kaelig has joined #openstack-ansible | 21:29 | |
*** carlosmss has quit IRC | 21:31 | |
spatel | admin0: i hit this bug - fatal: [ostack-phx-haproxy-1]: FAILED! => {"changed": false, "msg": "dest '/opt/cache/files/v0.8.0' must be an existing dir"} | 21:43 |
spatel | let me find out why we didn't see this on the master branch | 21:44 |
admin0 | yes | 21:56 |
admin0 | same with me | 21:56 |
admin0 | so it is a bug :) | 21:56 |
admin0 | i thought all tagged branches are also checked before they're released | 21:59 |
admin0 | these bugs are not found by our ci ? | 21:59 |
*** stduolc has quit IRC | 22:00 | |
*** stduolc has joined #openstack-ansible | 22:00 | |
spatel | admin0: i found the bug so let me submit a patch | 22:01 |
spatel | this bug only impacts you if your deployment-host is a different machine (not part of infra) | 22:01 |
*** rfolco has quit IRC | 22:05 | |
openstackgerrit | Satish Patel proposed openstack/openstack-ansible-haproxy_server master: Fix HATop for haproxy https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/768615 | 22:07 |
spatel | admin0: that is the fix | 22:07 |
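Until that patch merges, a workaround in the spirit of spatel's earlier suggestion is simply to make sure the unarchive destination exists before running the haproxy playbook. This is a hedged sketch, not the actual role code or the submitted fix, and it assumes the usual haproxy_all group; depending on where the task delegates, the same directory may also be needed on the deployment host:

```yaml
# hatop-dir-workaround.yml -- hypothetical pre-task, not part of OSA
- hosts: haproxy_all
  become: true
  tasks:
    - name: Ensure the HATop unarchive destination exists
      ansible.builtin.file:
        path: /opt/cache/files/v0.8.0
        state: directory
        mode: "0755"
```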
*** macz_ has joined #openstack-ansible | 22:18 | |
*** macz_ has quit IRC | 22:23 | |
*** yann-kaelig has quit IRC | 22:33 | |
*** spatel has quit IRC | 22:43 | |
*** rfolco has joined #openstack-ansible | 23:53 |